A Reader Writes: Basic FMRI Questions


I recently received an email with some relatively basic questions about FMRI, and both the questions and answers might apply to some of you. Admittedly, I am not 100% sure about the weighting of the regressors for the ANOVA, but I think it's pretty close. Whatever, man; I'm union.

Some of the words have been changed to protect the identity of the author.


Dear Andrew,
 
Thanks for your message. My background is in medicine and I am trying to do FMRI research!
I will be grateful for your help: 
 
1. How do you interpret the results of the first- and higher-level FSL analyses? I am used to p-values and confidence intervals - are fMRI results read in a similar way?
 
2. Importantly - I have a series of subjects, and we are interested in looking at the effect of [a manipulation] on their response to [cat images] over one year. We have four time points, one before the operation and three after, roughly 4 months apart.
 
Our idea was to see how the response to [cat images] changes over time, with each subject serving as their own control. How do I analyse that? We have some missing time points as well - subjects did not come for all the time points!
 
Regards,
 
Simon Legree
 
 
Hi Simon,
Congratulations on your foray into FMRI research; I wish you the best of luck, and I hope you find it enjoyable and rewarding!
In response to your questions:
1. FMRI results also use p-values and confidence intervals, but these are calculated at every single voxel in the brain. For example, if you are looking at the average BOLD response to [cat images], a parameter is estimated at each voxel, along with a p-value and confidence interval for that voxel. What you'll notice in the FSL GUI is a cluster-thresholding option, which will only display clusters of spatially contiguous voxels that all pass the same p-threshold and exceed a specified cluster size.
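As a concrete sketch, FSL's command-line cluster tool reports which clusters survive a given voxelwise threshold. Here I am assuming a standard FEAT stats directory, where zstat1 is the z-statistic image for the first contrast:

cluster --in=zstat1 --thresh=2.3 > cluster_info.txt

This prints a table with one row per cluster - size, peak z-value, peak coordinates - a simplified version of what the GUI's cluster thresholding does behind the scenes (the GUI additionally computes a cluster-level p-value).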
One crucial difference between first- and higher-level analyses in FSL (and in any FMRI analysis, really) is the degrees of freedom. At the first level, the degrees of freedom is the number of time points minus the number of regressors; at the second (or higher) level, it is based on the number of inputs to that analysis - usually the number of subjects - minus the number of group-level regressors (e.g., N-1 for a one-sample test of N subjects). Unless you are doing a case study, you usually will not be dealing with the degrees of freedom at the individual level. (However, see the documentation on mixed-effects analyses, such as AFNI's 3dMEMA, which take individual variance and degrees of freedom into account.)
2. For an analysis in which each patient serves as their own control, you would probably want a paired t-test or a repeated-measures ANOVA. For the paired t-test, you would weight the regressors so that the two sides of the comparison sum to +1 and -1, respectively; in your case, +1*Before, -0.33*After1, -0.33*After2, -0.33*After3. However, if you hypothesize that the response changes linearly over time, you might instead weight the time points with a linear trend contrast; e.g., for a decreasing response over time, +3*Before, +1*After1, -1*After2, -3*After3 (any equally spaced set of weights summing to zero will do). There are a number of different ways you could do this. As for the subjects with missing time points, you will need to take that into account when weighting your regressors - see the sketch below. I also recommend a sanity check: run the analysis both with and without the subjects who have missing time points. If there is a large discrepancy between the two, it may suggest that something else is correlated with the missing time points.
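To make the missing-timepoint bookkeeping concrete, here is a sketch of the weighting logic (just the idea, not any particular package's syntax). If a subject skipped the After2 session, the remaining post-operation weights are renormalized so that each side of the contrast still sums to the same magnitude:

All four time points:  +1*Before  -0.33*After1  -0.33*After2  -0.33*After3
After2 missing:        +1*Before  -0.50*After1                -0.50*After3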


Hope this helps!
-Andy

Establishing Causality Between Prediction Errors and Learning

You've just submitted a big grant, and you anxiously await the verdict on your proposal, which is due any day now. Finally, you get an email with the results of your proposal. Sweat drips from your brow and onto your hands and onto your pantlegs and soaks through your clothing until you look like some alien creature excavated from a marsh. You read the first line - and then read it again. You can't believe what you just saw - you got the grant!

Never in a million years did you think this would happen. The proposal was crap, you thought; and everyone else you sent it to for review thought it was crap, too. You can just imagine their faces now as they are barely able to restrain their choked-back venom while they congratulate you on getting the big grant while they have to go another year without funding and force their graduate students to work part-time at Kilroy's for the summer and get hit on by sleazy patrons with slicked-back ponytails and names like Tony and Butch and save money by moving into that rundown, cockroaches-on-your-miniwheats-infested, two-bedroom apartment downtown with five roommates and sewage backup problems on the regular.

This scenario illustrates a key component of reinforcement learning known as prediction error: organisms tend to associate outcomes with particular actions - sometimes randomly, at first - and over time come to form a cause-effect relationship between actions and results. Computational modeling and neuroimaging have implicated dopamine (DA) as a critical neurotransmitter for forming these associations, as shown in a landmark study by Schultz and colleagues back in 1997. When you have no prediction about what is going to happen, but a reward - or punishment - appears out of the blue, DA tracks this occurrence with a burst of firing, usually originating from clusters of DA neurons in midbrain regions such as the ventral tegmental area (VTA). Over time, these outcomes become associated with particular stimuli or actions, and the DA firing shifts to the onset of the stimulus or action. Other types of predictions and violations you may be familiar with include certain forms of humor, items failing to drop from the vending machine, and the Houdini.

Figure 1 reproduced from Schultz et al (1997). Note that when a reward is predicted but no reward occurs, DA firing drops precipitously.
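For the quantitatively inclined: in the temporal-difference models that this work builds on, the prediction error at each moment is written δ = r + γV(s') - V(s) - the reward actually received, plus the discounted value of the new state, minus the value that was predicted. A fully predicted reward gives δ ≈ 0 and little change in DA firing; an unexpected reward gives δ > 0 and a burst; a predicted-but-omitted reward gives δ < 0, which is the dip in the figure above.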

In spite of a large body of empirical results, most reinforcement learning experiments have had difficulty establishing a causal link between DA firing and the learning process, often due to relatively poor temporal resolution. However, a recent study in Nature Neuroscience by Steinberg et al (2013) used a technique known as optogenetics to stimulate neurons with pulses of light during critical time windows of learning. One learning phenomenon in particular, known as blocking, presented an opportunity to use the superior temporal resolution of optogenetics to test the role of DA in reinforcement learning.

To illustrate the concept of blocking, imagine that you are a rat. Life isn't terribly interesting, but you get to run around inside a box, run on a wheel, and push a lever to get pellets. One day you hear a tone, and a pellet falls down a nearby chute; it turns out to be the juiciest, moistest, tastiest pellet you've ever had in your life, which began about seven weeks ago. The same thing happens again and again: same tone, same super-pellet. Then, at some point, a light begins to flash in your cage right after you hear the tone. The pellet is still delivered; all that has changed is that now you have a tone and a light instead of just the tone. At this point, you get all hot and excited whenever you hear the tone; the light, however, isn't really doing it for you, and you couldn't care less about it. Your learning about the light has been blocked: everything is in place to learn an association between the light and the uber-pellet, but since you've already been thoroughly trained on the tone-pellet association, the light adds no predictive power to the situation.

What Steinberg and colleagues did was optogenetically stimulate DA neurons whenever rats were presented with the blocked stimulus - the light, in the example above. This induced a prediction error that became associated with the blocked stimulus, and rats later exhibited learning behavior toward it similar to what they showed toward the original predictive cue - the tone, in the example above - lending direct support to the theory that DA serves as a prediction error signal, rather than a salience or surprise signal. Follow-up experiments showed that optogenetic stimulation of DA neurons could also interfere with extinction - the waning of a response when a stimulus is no longer paired with reward - by inserting an artificial prediction error where none should be. Taken together, these results are a solid contribution to reinforcement learning theory, and have prompted the FDA to recommend more dopamine as part of a healthy diet.

And now, what you've all been waiting for - a gunfight scene from Django Unchained.



How to Fake Data and Make Tons of Money, Part 2: createBlankNIFTI.m

After you've been working in science long enough, you may start to discover that the results that you get aren't necessarily the ones that you want. This discrepancy is an abomination, and clearly must be eliminated. One way to do this - at least with FMRI data - is to manually read in a dataset, and overwrite existing values with new ones.

While you can overwrite values in any dataset, I find it helpful to first create a blank dataset with the same dimensions and orientation as the other data you are working with - for example, by making a copy of an existing image and then setting every value in the copy to zero. If you are starting from ANALYZE files (i.e., .img/.hdr pairs), you will need to convert them to NIFTI before you can use the script; I use AFNI's 3dcopy followed by 3dAFNItoNIFTI to do this.

Once you have copied your file, you can zero out its values with the script createBlankNIFTI.m and then fill in new values with createNIFTI.m. I'm sure the two could be combined somehow in the future, but there isn't a terribly high demand for this capability yet, so I'll leave them as they are.
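If you would rather stay in the shell for the setup, the conversion and blanking steps can be sketched with AFNI commands (dataset names here are placeholders):

3dcopy anat.hdr anat_copy
3dAFNItoNIFTI -prefix anat_copy.nii anat_copy+orig

3dcalc -a anat_copy.nii -expr 'a*0' -prefix blank.nii

The 3dcalc line simply multiplies every voxel by zero, leaving a blank image on the same grid as the original.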



AFNI Command: 3dUnifize

Are you disgusted with the way your T1-weighted images look? Trying to reduce the intensity imbalance in your anatomical? When your friends tell you that your structural looks "fine", do you find yourself not really believing them? If so, then 3dUnifize can help. Simply supply a prefix for the output dataset and the name of your anatomical image, and thirty seconds later you have a relatively balanced T1 image.
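The call is about as simple as AFNI commands get (dataset names are placeholders):

3dUnifize -prefix anat_unifized -input anat+orig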

To be honest, probably very few people, if any, will use this program. However, it does draw attention to the ability to look up and check the intensity values of your images, which can be a useful diagnostic in some cases. Once you've loaded up the AFNI GUI, click on the Overlay tab and look at the values to the right of the ULay (Underlay) and OLay (Overlay) labels. These display the intensity of the image at the current voxel, whether in arbitrary units - for example, if you have a raw T1- or T2-weighted image loaded - or in statistical values, such as when you overlay a t-map.

Also, due to a recent purge in the lab, I am now officially moved in to a new office. More exciting details about my life, plus an obnoxiously loud computer fan, can be found in the following video.


Electrically Shocking Your Brain Enhances Cognition, Somehow

For those of us wishing to improve our cognitive abilities without having to deal with a concomitant host of unpleasant hobgoblins, including patience, perseverance, industry, toil, thrift, caloric restriction, abdominal exercises, and master cleansing, science has once again come to the rescue. In place of the boredom and the loneliness and the indignity of solitary study, we can instead improve our minds by zapping localized patches of cerebral cortex with small, magnetically induced electrical currents, which to me seems conceptually similar to sticking a fork in an electrical socket. This technique is known as transcranial magnetic stimulation, or TMS.

There is a TMS facility at my university, although I have only observed it done to others. From what I could tell, the real payoff occurred when the experimenter was able to position the TMS coils just right so that when the unit was turned on, the subject's finger moved a couple of centimeters, as though jerked by an invisible string. I didn't fully appreciate its significance at the time, although I later learned how it could be used to establish which regions were necessary to carry out specific processes, and how disruption of activity in one area of the brain could either up- or down-regulate functions in other areas.

However, TMS is far from being useful only for disrupting neural communication: several experiments have shown that it can enhance neural functioning and, by extension, possibly improve skill acquisition and memory retention. Although TMS is designed to depolarize neurons and induce firing, different stimulation frequencies can lead to markedly different results; and even at the same frequency, different effects can be induced by stimulating different regions or during different tasks. For example, 5 Hz repetitive TMS over the dorsolateral prefrontal cortex disrupts performance on a delayed match-to-sample task, while the same frequency applied over the midline parietal cortex improves it.

One theoretical framework for how TMS can improve performance is that of addition-by-subtraction, in which cognitive functioning is enhanced through the disruption of competing cognitive processes. For example - and to use extremely simplified cortical representations - let's say that a participant is shown a series of emotional and fearful men's and women's faces and is asked to categorize each face as male or female. TMS is then used to disrupt the emotional part of the brain (again, an extremely simplified example), which prevents extraneous, competing emotional information from interfering with the task. The participant thus becomes faster and more efficient at categorizing the faces by gender.

An excellent review of these techniques and other issues related to TMS can be found in the new issue of Neuroimage in an article by Luber & Lisanby, found here. Obviously, it is only a small series of steps before we install public TMS facilities that look like huge futuristic salon hairdryers which can be used for an array of cognitive enhancing techniques, such as mathematical reasoning, crossword puzzle solving, and remembering your girlfriend's birthday.

Creating NIFTI Images from Scratch: CreateNIFTI.m

There may come a time, for whatever reason, where you want to create your own NIFTI image with your own values at each voxel. After all, those processed t-maps and beta maps tend to become a nuisance once in a while, and it feels far better to simply create your own.

First, you need to create a text file with the voxel coordinates and the value at that coordinate. For my script, for example, there are four columns: The first column is the value, and the next three columns are the x-, y-, and z-coordinates for that value. (Note that these are the native coordinates of the image, and not MNI or Talairach coordinates; if you want to use normalized coordinates, open up the image in a viewer, navigate to those normalized coordinates, and write down the corresponding native coordinates.) A sample text file might look something like this:


4 50 40 30 %Insert value of 4 at coordinates 50, 40, 30
5 50 40 31
7 50 40 32
10 50 40 33
500 50 40 35


Once you have saved the text file, you can either use an existing image or create a blank template image with a program such as nifti_tool (its -make_im option, if I remember correctly, creates a new, empty dataset). Then use the following script with both the image and the text file as arguments (making sure to pass them as strings):

function createNIFTI(imageFile, textFile)
% Overwrite voxels of an existing NIFTI image with values from a text file.
% Each row of the text file is: value, then x, y, z (native voxel coordinates).
% Usage: createNIFTI('blank.nii', 'myValues.txt')

% Read the header and the image data with SPM
hdr = spm_vol(imageFile);
img = spm_read_vols(hdr);

% Read the four columns, skipping any %-style comments in the file
fid  = fopen(textFile);
cols = textscan(fid, '%f %f %f %f', 'CommentStyle', '%');
fclose(fid);
vals = [cols{:}];

% Insert each value at its (x, y, z) coordinate
for i = 1:size(vals, 1)
    img(vals(i,2), vals(i,3), vals(i,4)) = vals(i,1);
end

% Write the modified image back out once, after all values are in place
spm_write_vol(hdr, img);


This can then be modified to suit your evil purposes.

Tutorial video coming soon; in the meantime, I have some business to attend to, which involves going back home and having fun and laughing with my friends. Just hang tight.

An Empirical Look at the Peer-Review Process

"Everybody talks about the weather," Mark Twain once famously declared, "But nobody ever does anything about it." He then ate a knife to show the seriousness of his statement.

The same sentiment applies to reviewers in academia. Young researchers are frequently traumatized by their first exposure to the peer-review process, in which a paper is scrutinized and criticized by one's colleagues, ostensibly to judge whether the paper is acceptable for publication and to suggest improvements. However, as every one of us scientists intuitively understands, in reality it serves as nothing more than a vicious catharsis for the shattered dreams and failed career and (probably) frustrated sex life of the reviewer, unable to distinguish between his personal neuroses and legitimate criticism, who ruthlessly excoriates an otherwise brilliant exposition of scientific reasoning and experimental methodology. This trauma is then propagated through subsequent reviews in a never-ending cycle of misery and despond, with the abused becoming the abusers. Conrad summed up this mentality in his book The Secret Agent, in which he characterized a particularly violent group of terrorists plotting against a repressive government as not necessarily opposed to tyranny per se; they were merely unhappy that they were not the ones doing the tyrannizing.

The whole review process has been thoroughly criticized by many, but one of the first systematic exposés of peer-review was done back in the 1980s by two young researchers. To better understand their motivations, let's turn the clock back a few decades to the United States of the '80s: Top Gun, one of the supreme achievements of world cinema, had just been released in theaters to universal acclaim; cocaine flowed like crack through the streets of America; and the nation was beginning to feel its oats after successfully invading and destroying the hated island of Grenada. Life was so good, you could taste it in your spit.

However, even in the midst of all this success and prosperity, problems with the peer-review process still loomed large in the minds of the public. This is where our two young heroes, a couple of spitfire hoydens named Peters & Ceci, entered the picture, intent on producing concrete evidence of systematic bias among reviewers. Although subsequently tortured and murdered for their heresy by the peer-review mafia, their publication still circulated clandestinely among groups dedicated to changing how the review process works.

Before their ghastly deaths, Peters & Ceci had taken papers previously accepted for publication and changed a few trivial details on the manuscript - for example, changing the author affiliation from a prestigious university, such as Yale, to a less prestigious-sounding, totally fictitious institution, such as the Nelson Center for the Study of Bodacious Ta-Ta's. Everything else, including materials and methods, remained exactly the same. After resubmitting these manuscripts, surprisingly few were reaccepted; in fact, out of the nine journals sampled, only one accepted the same papers it had accepted earlier! Keep in mind that these were all papers that had been accepted at respectable journals; virtually nothing had been changed, except for trivial details that had no bearing on the quality of the reported results. Reviewer comments were focused on methodological details ("Analysis of variance was improperly used in a post-hoc fashion"), writing style ("The theoretical focus of the introduction could be tightened up considerably"), and even, incredibly, features of the individual authors ("Just where can I meet this Nelson fellow?").

On the face of it, these results may seem to be cause for suffering and despair, as we find ourselves at the mercy of only one more facet of an openly hostile universe. One may well ask whether anything can be done, or whether it would instead make more sense to curse God, and die.

I offer a few suggestions. First, make sure that you are at a prestigious university; and if you are not, begin to associate with people who are at prestigious universities. Offer to make them authors on your paper, even if they haven't done anything. Ingratiate yourself with well-known figures in your field, even going so far as to openly debase yourself by performing menial tasks for them, or by humiliating yourself by offering to eat one of their used tissues. By stripping away any vestige of self-respect, you will increase your chances of their agreeing to be a co-author on your paper, as long as they don't have to do any work and never have to see you again. This, in turn, will increase your chances at publication.

The word sycophant has had negative connotations, long associated with spineless, weak, craven poltroons who would not hesitate to pawn their souls for the most trivial of earthly gains. Our job is to make it a positive word, associated with success, advancement, and - above all - publication.


EPILOGUE

Ceci and Peters lay on the grimy floor of their cell, wasted, defeated, only a putrid dish of water sitting between them. Their eyes were hollow and sunken and their frames emaciated, their faces dirty and wizened, their hair wild and bedraggled. Ceci's eyes roved around the room and he gibbered in an incomprehensible argot, thin strands of drool running down his cheeks in viscous rivulets. They had not eaten in five days and they awaited the hour of their execution with neither apprehension nor resolve but exhaustion.

There was the sound of a key turning in the door and an enormous Swede entered the room, holding a maul in his right hand. He seemed to fill up the entire room with his frame and he had two little pig's eyes set into the sockets of his skull and a cheesy rictus pasted on his face like a doll's. When he moved he gave the impression of a boulder set into motion. He gestured with the maul for Peters to get up but Peters merely glared at him and would not move. The Swede seemed to expect this and he bent down and picked him up by the meatless bone of his arm and dragged him to his feet.

Peters spit in the face of the Swede but it seemed as though it was all part of his day for he did not blink or wince but merely smiled, his right hand raising up slowly and methodically and then snapping down like the lid of a box and the maul made a sound like hitting a pumpkin with a club. Peters' eyes bulged as he collapsed to the floor like his bones had turned to jelly and the blood began to stream out of his ears.

The Swede bent down to look at him and watched as the capillaries in his eyes began to break up. Ceci's eyes darted zigzag across the room, his mouth open and his lower jaw flapping up and down uncontrollably. The Swede looked over at him and smiled.

*   *   *

The next day their lifeless bodies were dumped in a shallow arroyo along with everyone else who had dared to challenge the might of the peer-review process. For the next two days the air was filled with the whine of flies and the wolves and buzzards wallowed in the carrion, and when the feast was over there was nothing left save for their bleached skulls and the bones of their briskets curving in upon themselves like grotesque half-closed traps.

FIN


Link to the paper can be found here.

Thanks once again to Keith Bartley, who is quickly becoming an Andy's Brain Blog VIP. VIP status includes a free year-long subscription to the blog, shampoo samples, Nutella aroma therapy, and a summer vacation to North Dakota, including a tour of the legendary sugarbeet processing plants, a pitchfork steak fondue, and tickets to the one-hour acting monologue Bully! The Teddy Roosevelt Story.

Overview of afni_proc.py

For those of you without access to a Unix-based platform, or who are just having a difficult time correctly installing your Python libraries, uber_subject.py may not be an option for generating template AFNI processing scripts. In that case, you can still use afni_proc.py, the command called by the uber_subject.py GUI. Building an afni_proc.py script from scratch is not all that difficult, but it can be time-consuming, and I recommend starting from one of the templates already provided on the AFNI website before creating your own.

The simplest use of afni_proc.py is the -ask_me option, which, although limited and no longer supported, can be useful for creating processing templates that you can then alter. -ask_me walks you through a series of basic questions, such as where your functional data and timing files are located; you can also specify options specific to 3dDeconvolve, such as the basis function for each regressor.
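Invoking it is a one-liner; everything else happens through the interactive prompts:

afni_proc.py -ask_me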

While this generates a basic template script, it is a relatively limited one. For more advanced purposes, I recommend copying one of the examples provided in the help of afni_proc.py, pasting it into a tcsh script, altering it as you need, and then executing it. For example, the following code will search for datasets in the sb23 directory, make a copy of the anatomical dataset and move it to the results directory, remove the first 3 volumes of each run, and align each EPI dataset to the last acquired volume (usually the volume acquired closest in time to the anatomical scan). An individual label is provided for each timing file (make sure the labels line up with the order in which the timing files are given), and each regressor is convolved with a boxcar function lasting 30 seconds. Three separate contrasts are carried out, with different weights for each estimated beta (see the lines under -regress_opts_3dD).



                afni_proc.py -subj_id sb23.blk                             \
                        -dsets sb23/epi_r??+orig.HEAD                      \
                        -copy_anat sb23/sb23_mpra+orig                     \
                        -tcat_remove_first_trs 3                           \
                        -volreg_align_to last                              \
                        -regress_stim_times sb23/stim_files/blk_times.*.1D \
                        -regress_stim_labels tneg tpos tneu eneg epos      \
                                             eneu fneg fpos fneu           \
                        -regress_basis 'BLOCK(30,1)'                       \
                        -regress_opts_3dD                                  \
                            -gltsym 'SYM: +eneg -fneg'                     \
                            -glt_label 1 eneg_vs_fneg                      \
                            -gltsym 'SYM: 0.5*fneg 0.5*fpos -1.0*fneu'     \
                            -glt_label 2 face_contrast                     \
                            -gltsym 'SYM: tpos epos fpos -tneg -eneg -fneg'\
                            -glt_label 3 pos_vs_neg                        \
                        -regress_est_blur_epits                            \
                        -regress_est_blur_errts
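
To actually run the script it generates (by default named proc.<subj_id>, so proc.sb23.blk here), AFNI's recommended invocation executes it with tcsh while logging the terminal output to a file:

tcsh -xef proc.sb23.blk |& tee output.proc.sb23.blk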



Aside from that, make sure to read the help output of afni_proc.py thoroughly. Most likely, one of the examples given will relate to what you want, and it is most efficient to copy that example and make the necessary changes. Also remember that afni_proc.py will only generate a script, which contains all of the individual lines needed to run each step - e.g., slice-timing correction through 3dTshift, regression through 3dDeconvolve, and all of the rest. If you need to make any further alterations, you can simply open up the script with your favorite editor and tweak it.

Overview of the -ask_me option in afni_proc.py. This is for little kids, and grandmas.



A more advanced video showing you how to copy other people's work and pass it off as your own.



Thanks to Harshawardhan Deshpande, who is called by his enemies, both out of fear and respect, "El Muchacho".

Combining Subjects with Different Scanning Parameters

As I was sorting through my fan mail yesterday, I came across a question from alert reader Joe Statin, asking whether it is OK to combine data from subjects with different scanning parameters - for example, if ten subjects were scanned with a TR of two seconds and the other ten with a TR of four seconds, with everything else being equal. There are no definite answers for this; in general, you need to ask yourself whether the difference in scanning parameters is likely to matter for your analysis. In the case of different TRs, a longer TR allows more T1 recovery between volumes and so yields a slightly higher signal-to-noise ratio (SNR), but other than that would not affect much else. (It would be much more difficult to carry out a meaningful finite impulse response (FIR) analysis looking at each acquisition individually, but in this case let's say we are dealing with simple contrasts across regressors.)

One way you can test this quantitatively is to directly compare the beta estimates from one group of subjects to those from the group scanned with the other parameters. If it doesn't appear to make much of a difference one way or the other, it is probably OK to combine the groups; if, however, there is a large difference, or the results appear to be highly sensitive to whether someone was scanned with a two-second or a four-second TR, then you may want to hold off on combining them.
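As a sketch of that check with AFNI (subject dataset names and the sub-brik selector below are placeholders - use whatever labels your own stats datasets contain), an unpaired t-test between the two groups' beta maps will show whether the TR difference is driving anything:

3dttest++ -prefix TR_comparison                      \
          -setA sub01_stats+tlrc'[cats#0_Coef]'      \
                sub02_stats+tlrc'[cats#0_Coef]'      \
          -setB sub11_stats+tlrc'[cats#0_Coef]'      \
                sub12_stats+tlrc'[cats#0_Coef]'

If the resulting group-difference map is essentially empty at a liberal threshold, combining the groups is probably defensible.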

Combine the groups, and you will regret it; do not combine the groups, and you will regret it. Combine the groups or do not combine the groups, you will regret it either way. This, gentlemen, is the sum and substance of all philosophy.

3dROIstats: Promises and Pitfalls

Just as a follow-up to the previous post: with AFNI's 3dROIstats, it usually makes sense to assign a different value to each ROI; that way, when data are extracted from those ROIs, each one is labeled individually. Since every one of you reads everything I write and watches everything I film, you probably noticed that I used a couple of different commands for combining ROIs, such as:

3dcalc -a roi1 -b roi2 -c roi3 -expr '(a+b+c)' -prefix outputfile

and, in the video,

3dcalc -a roi1 -b roi2 -c roi3 -expr 'step(a) + 2*step(b) + 4*step(c)' -prefix outputfile


The reason for assigning values that increase as powers of two (1, 2, 4, 8, 16, and so on) is to distinguish voxels that belong to a single ROI from voxels that lie in the overlap of two or more ROIs. For example, let's say that we have three ROIs: A, B, and C. Using weights of 1, 2, and 4 results in the following table of values:

1: A only
2: B only
3: overlap of A & B
4: C only
5: overlap of A & C
6: overlap of B & C
7: overlap of A & B & C

By contrast, if you used weights of 1, 2, and 3, the overlap of A and B would be indistinguishable from region C. (Of course, if you do not think that any of your ROIs will overlap, then using consecutive digits, such as 1, 2, 3, and so on, is also fine - see the sketch below.)
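For the non-overlapping case, that might look like the following (same placeholder ROI names as the commands above):

3dcalc -a roi1 -b roi2 -c roi3 -expr 'step(a) + 2*step(b) + 3*step(c)' -prefix combinedMask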

Once you have created your combined mask and are happy with the locations of your ROIs, 3dROIstats can be used to dump out the data extracted from each ROI for one or more sub-briks (e.g., beta maps) of a statistical dataset. The command is pretty straightforward, and this template should be pretty much all you need:

3dROIstats -mask combinedMask+tlrc 'stats+tlrc[1,2,5,...etc]'

For each sub-brik specified, 3dROIstats will output the average value within each ROI - one row per sub-brik, one column per ROI. So, for example, if you have three ROIs in your mask and two sub-briks, 3dROIstats will output a 2x3 table: three values for each sub-brik, one per ROI.

I hope that is as clear as a glaucous sky, which I think is a word that means unclear, or something. I just read it in a Cormac McCarthy novel and he uses a lot of words that I don't know. Whatever. I was trying to be ironic.