The Hunt for the Paracingulate Sulcus

Within the disgusting recesses of your brain all of you have a cingulate sulcus: a deep groove that runs front-to-back along the medial surface of each hemisphere, just above the corpus callosum. You are neither interesting nor special if you have a cingulate sulcus - you are average.

However, there is a subset of individuals who have another groove running above and parallel to their cingulate sulcus. This additional groove is called the paracingulate sulcus, and it confers great honor upon its possessor. Before, all of you had a mere cingulate sulcus - but behold, I teach you the oversulcus; and those who overcome themselves are the bridge to the oversulcus.

As shown in the following video, the paracingulate sulcus is often readily visible, although I recommend using the sagittal and coronal slices to zero in on it, looking about 15-20 millimeters anterior to the anterior commissure and about 4-8 millimeters to the left and right of the longitudinal fissure. In particular, within the coronal section look for double invaginations stacked like pancakes. These folds correspond to the cingulate sulcus (ventral) and the paracingulate sulcus (dorsal). Also be aware that there are four possible patterns you can see: no paracingulate sulcus at all; a paracingulate sulcus on the left hemisphere only; one on the right hemisphere only; or two paracingulate sulci, one on each hemisphere.

Regardless of whether a paracingulate sulcus makes you special or not, you may be wondering what the hullabaloo is all about; everyone's brain has some variability, you may say, and these differences wash out in a higher-level analysis. That may be true; but several experiments have also shown that the presence of a paracingulate sulcus can significantly alter your results, as well as make the location of your results more uncertain (cf. Amiez et al., 2013). To account for these sources of variability, you can classify your subjects according to whether they are paracingulate-positive or not, and extract your beta weights from different regions of the medial prefrontal cortex.
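
To make that last step concrete, here is a rough sketch of how I might pull out an average beta weight for one subject with 3dmaskave - the ROI, the stats dataset, and the sub-brick label are all hypothetical names, so substitute your own:

# Average the beta weights within a medial prefrontal ROI for one subject,
# appending the value to a running text file that can later be split into
# paracingulate-positive and paracingulate-negative groups
3dmaskave -quiet -mask mPFC_ROI+tlrc 'stats.subj01+tlrc[Condition#0_Coef]' >> mPFC_betas.txt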

I am doing an analysis like this right now, so feel free to follow me as I puzzle out how to apply this to my own dataset. I promise that the results will be, if not shockingly scandalous, at least spicy enough for your curiosity's appetite.


Removal of Physiological Noise from FMRI Data

A couple of weeks ago, as I wrapped up the series on resting-state analyses, one of my alert readers asked whether I would be willing to go through a demonstration of removing physiological artifacts - for example, breathing, heart rate, sweating, growth of nasal hair, etc. - in other words, all of those things necessary for life that nevertheless can interfere with FMRI analyses.

Unfortunately, I do not have any breathing or heart rate data available, so I cannot adequately demonstrate what this looks like. In fact, in this post I'm not even going to explain it very concretely; I'm simply going to describe what comes to mind when someone talks about removing physiological signals - to take you by the hand and guide you through my thought process. This may leave you unsatisfied, and you may have legitimate quarrels with my reasoning. However, it is also an opportunity to show how I think through things, which you may find helpful when tackling your own problems, or when evaluating how much you should trust my explanations on other matters.

Let's say that we have just collected an FMRI dataset along with respiratory and heart rate data. My advisor then calls me into his office and tells me to figure out how to remove these sources of variance from the neuroimaging data. Failure to do so will result in a vasectomy with a weed-whipper. After considering several ways to leave graduate school, and finally concluding that none of them would be feasible, the first thing that comes to mind is how to model these additional data.

Keep in mind that everything that you collect in an imaging experiment - well, almost everything - can be modeled. When we talk about creating a model in FMRI, often we mean including our regressors of interest, and inserting other sources of variance, such as head motion, into the model as "nuisance" regressors. This is not always an apt distinction, as it depends on what you are interested in and how you intend to model your data. Typically we convolve our regressors of interest with a gamma-shaped waveform, because we assume that whatever area of the brain is sensitive to that condition or stimulus will show a corresponding wave in the MRI signal. In the case of head motion, or other nuisance regressors, however, we don't convolve these regressors with anything; we simply enter them into the model as they are, one value per timepoint. My first thought would be to do something similar with any respiratory or heart rate data, possibly resampling it to be on the same timescale as the neuroimaging data. A few options available within AFNI come to mind, such as 1dUpsample and the -stim_file option in 3dDeconvolve, that would allow entering this physiological data into the model.
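
As a rough sketch of that first idea (and this is just how I would guess at setting it up, not an endorsement), a cardiac trace that has already been resampled to one value per TR could be entered as an un-convolved baseline regressor; the file names here are made up:

# Enter a per-TR cardiac trace as a nuisance regressor; -stim_base keeps it in
# the baseline model, so it is not convolved with any response shape
3dDeconvolve -input rest_epi+tlrc \
             -polort 2 \
             -num_stimts 1 \
             -stim_file 1 cardiac_perTR.1D \
             -stim_base 1 \
             -stim_label 1 cardiac \
             -bucket physio_nuisance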

However, upon further research I find that there is a function specifically built for removal of such artifacts, 3dretroicor, and that cardiac and respiratory data can be entered separately or together. I then look into programs such as afni_proc.py, which can automatically generate the appropriate code to place this command where it belongs; and, apart from reading up on the options within 3dretroicor and some of the original literature it is based on, my search is finished. I and my vas deferens can rest easily.
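
Just to make that concrete, a bare-bones call might look something like the following (the file names are hypothetical, and you should read the 3dretroicor help for options such as -order and -threshold before trusting it with real data):

# Remove cardiac and respiratory artifacts from an EPI time series
3dretroicor -prefix rest_ricor -card cardiac.1D -resp resp.1D rest_epi+orig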

This may all seem an indirect, roundabout way of doing things; and you may say that it would be more efficient and straightforward to do an online search for the terms of my problem, to look through the AFNI message boards perhaps, and then be done with it. That is doubtless true in many cases. However, I would still have questions about what exactly is being done to the data at each step; and, in the present case, if the nuisance data is not being modeled, how it is being removed or filtered out of the imaging data. Then there are further issues about how this compounds with other processing steps, and what other precautions must be taken; and the list could go on. In any case, the user needs to know what is being done, and why.

For example, would there ever be any case where one would want to resample physiological data to the same time grid as the imaging data, and remove it that way? Other physiological responses, such as galvanic skin response (GSR; broadly speaking, the amount of sweat secreted on the palms), can also be measured and inserted into a model, sometimes as parametric modulators if they are thought to capture information beyond what the regressors of interest provide; would this ever be appropriate in the case of breathing or heart rate measurements? Or does the slower periodicity of breathing and heart rate make only certain methods of modeling appropriate?

In the case of heart rate and breathing data, I can't say, because I haven't analyzed any; however, the question of what to do with it is an important one, and what is outlined above is a rough sketch of what I would think when presented with that problem. You can put away the weed-whipper now.

Viennese Waltz Dance Showcase

In my unending quest to become a quadruple threat - which, obviously, consists of being a neuroscience blogger, marathon runner, wedding pianist, and ballroom dancer - I recently did a Viennese Waltz showcase, which may look incredibly suave and sophisticated, or incredibly lame. I don't know. All I was focused on was not dropping my partner during any of the lifts, and wondering why there was a giant moose painted on the wall in the background. Maybe because it's called the Moose Lodge, silly. Did I say you could type anything on here? No, but if you're going to keep asking dumb questions, I might as well. Don't make me drop you.

Michelle, my stunningly beautiful partner, is the one who did all of the choreography, so if you want to know any of the moves, go bug her about it.



Music: So She Dances, by Josh Gropin'

Spring Break Decadence



Fellow neuroscientists, brainbloggers, and acolytes,

I will be gone for the rest of the week visiting Chicago to see the sights, run along the esplanade, dine sumptuously on porterhouse steaks and fine wines, attend La Clemenza di Tito, see Mitsuko Uchida in concert, view rare masterpieces on display at the Art Institute, spend long evenings at the Congress Plaza Hotel dancing until the suede of my shoes becomes slick with wax and playing chess with fellow enthusiasts, and in general wallow in my own decadence. As a result I will not be thinking about anything whatsoever, and I will not be updating for a while.

However, next week I plan on applying the same resting-state template to FSL, since that seems to be what many are clamoring for; and, like many quick-study leaders of the free world, I live only to serve the whims and vicissitudes of the people. The same basics will be covered, just with a different platform and a slightly different technique, as well as the issue of transferring preprocessed datasets from AFNI for connectivity analyses in FSL. All will be covered, in time.

But first, Chicago. Laugh, my friends, and grow fat!

Andy's Brain Blog Advice Column: How to Make Yourself an Irresistible Applicant for Graduate School

In our modern times, obtaining an advanced degree is imperative for getting a good job. Whereas in the past merely completing the eighth grade qualified you for self-sustaining and socially acceptable jobs, such as crime boss, today those same academic credentials will probably shoehorn you into a crime boss chauffeur position at best. Nobody wants to graduate high school just to find that the only options available to them are menial, boring, "dead-end" jobs, such as chess grandmaster or porn star.

Because of this, increasing numbers of people are starting to attend graduate school to receive even more advanced training. For those of you who are not in graduate school - and you guys can trust me - I would describe this training as eerily similar to the kind of training that Luke Skywalker did on planet Dagobah in the Star Wars movies, including becoming fluent in pseudo-philosophical BSing, developing telekinetic powers to remove your car from snowdrifts, and carrying your adviser on your back whilst running through jungles and doing backflips. In fact, your adviser will even look and talk like Yoda, although there are important physical differences between the two, as shown here:



In addition to that, you will also become an expert in a specialized field, such as existential motifs in Russian literature, or the neural correlates of the ever-elusive default poop network. Paradoxically, however, even as one learns more abstruse and recondite information, many graduate school veterans have reported losing knowledge in such simple and rudimentary areas as basic math, maintaining eye contact when talking to someone, and personal hygiene. Furthermore, for all of its emphasis on reading, graduate school can actually lead to the atrophy of normal reading skills, as one who reads nothing but scientific articles and technical manuals will, after a long period of immersion in his studies, find books dealing with actual human beings or fantastical creatures bizarre and ridiculous, even hateful.

"I don't have time to read for pleasure anymore!" one of my colleagues once exclaimed. Partly this was a boast to bring attention to his laudable reading habits, at the expense of anything else that could possibly vie for his attention; partly to sound a note of despair, as I really believed that he hadn't read anything for pleasure in years, defined as something unrelated to his work or something that was, practically speaking, useless, but somehow pleasurable and, possibly, edifying. I once tested the strength of his claim by having him read the back of a Honey Nut Cheerios cereal box in order to use a series of clues to solve a riddle; after wrestling with this headbreaker for several hours, he finally gave up, utterly exhausted. (Although, to be honest, some of those puzzles can be pretty tough.)

But I digress. The fact is, there are legions of talented, motivated, eager young persons all applying to the same graduate programs that you desire, and there is simply no practical way to kill them all. To make yourself stand out, therefore, requires a superhuman amount of dedication, responsibility, work ethic, intelligence, charm, good looks, ruthlessness, and knowledge of advanced interrogation techniques - all qualities that you, quite frankly, don't have. Clearly, other methods are required. I'm not going to come right out and say things like "bribery," "blackmail," and "intimidation," but that's pretty much the gist of it. Perhaps you can even call in a favor or two from the local crime lord that you chauffeur around downtown Chicago.

However, if this kind of skullduggery just isn't to your taste, there are other ways to manipulate the thoughts and feelings of the admission committee into realizing that you are, in fact, just the applicant that they are looking for. One underhanded way to worm yourself into their good graces is by working for several years or decades as a lab RA, which is an acronym that stands for "Indentured Servant." The way this works is that you literally beg a professor to work in their lab for free, for ten, twenty, even sixty hours a week. You need to make it clear that you absolutely, positively, swear-on-a-box-of-Honey-Nut-Cheerios need this position, and that you will kill for this professor, if necessary. Professors are used to getting these kinds of requests all the time, and in fact find it odd whenever somebody asks to work with them for something in return, such as money, recognition, or humane working conditions. By whatever means possible, do not fall into this "it's all about me" mindset! At this point remember that you are not even a graduate student yet, which, in the academic hierarchy, places you a couple of rungs below a Staph infection. If you are lucky, you will possibly get a letter of recommendation from the principal investigator, which you should expect to type yourself. Just remember to print your name correctly.

But, against all odds, let's assume that you have gained some experience, worked a few relevant jobs, carried out a few hits on your professor's enemies, and have finally been invited to a university for their graduate recruitment weekend. However, even after you have been invited to look around the campus and meet with the faculty, you will still need to have the street smarts to ace the interview.

Let us say, for example, that you have been invited to visit the Dwayne T. Fensterwhacker University of Fine Arts, Sciences, and Advanced Interrogation Techniques, and in particular that you are keen on working with distinguished professor Earl W. Gropeswanker. During the interview, you should be ready for curveball questions, such as the following:

DR. GROPESWANKER: Who is your favorite scientist?
YOU: That is a tough question, but a fair one, to which I reply, entirely of my own volition: Earl Gropeswanker.
DR. GROPESWANKER: Excellent answer. But surely, aren't there any other scientists whom you admire?
YOU: Well, let's see...Walter White, he was a scientist, wasn't he? He was pretty good. Same with Albert Einstein. The rest of them are scum.
DR. GROPESWANKER: You are hired on the spot.

Obviously you should be ready for tough questions on other topics, such as: How much do you respect, love, and admire the professor you are currently interviewing? Would you be willing to chauffeur this person around campus to meet with the heads of the other departmental families? How would you rate your capabilities as bodyguard, trafficker, and yegg? Once you have determined whether you can beneficially work with this person, you should be prepared to work with them for a long time, and to develop other talents and skills so numerous, that I suppose not all the books in the world could contain them. But that is another topic for another day.

One Weird Trick to Set P-Values in AFNI

For those new to AFNI (and even those who have been using AFNI for a while, like I have), there is a way to set a specific uncorrected voxel-wise p-value in the AFNI GUI. Simply place your cursor over the top or bottom area of the threshold slider (i.e., around the area marked "Z-t" or "p=...q=...") and press mouse button #3 (typically the right button on a standard PC mouse). This will bring up a window where you can enter a specific p-value, which automatically sets the slider to correspond to that value. No more guessing, no more tears.

UPDATE: It turns out that you can use any mouse button, not just mouse button #3, to get that same pop-up window.



Resting State Analysis, Part VII: The End

For the last part of the analysis, we are going to simply take all of the z-maps we have just created, and enter them into a second-level analysis across all subjects. To do this, first I suggest creating separate directories for the Autism and Control subjects, based on the phenotypic data provided by the KKI on the ABIDE website; once you have those, run uber_ttest.py to begin loading Z-maps from each subject into the corresponding category of Autism or Control.
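
If you would rather see what uber_ttest.py is building for you under the hood, a minimal sketch of the equivalent two-sample test with 3dttest++ might look something like this - the Autism and Control directory names, and the Z-map file names, are just the ones assumed from the earlier posts:

# Two-sample t-test comparing the z-maps of the Autism and Control groups
3dttest++ -prefix Autism_vs_Control \
          -setA Autism/Corr_subj*_Z+tlrc.HEAD \
          -setB Control/Corr_subj*_Z+tlrc.HEAD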

Most of this is easier to see than to describe, so I've made a final tutorial video to show how this is all done. Hope it helps!




P.S. I've collated all of the videos from the past week and a half into a playlist, which should make sorting through everything slightly easier:

Resting State Analysis, Parts V and VI: Creating Correlation Maps and Z-Maps

Note: Refer to Example #9 of afni_proc.py for AFNI's most recent version of resting-state analysis.


Now that we've laid down some of the theory behind resting-state analysis, and have seen that it is nothing more than a glorified functional connectivity analysis, which in turn is nothing more than a glorified bivariate correlation, which in turn is something that I just made up, the time has now come to create the correlation maps and z-maps which we have so craved. I believe I have already talked about correlation maps and their subsequent transmogrification into z-maps, but in the interest of redundancy* and also in the interest of showcasing a few ties that I picked up at Goodwill, I've created two more videos to show each of the steps in turn.

First, use 3dmaskave to extract the timecourse information from your ROI placed in the vmPFC:

3dmaskave -quiet -mask vmPFC+tlrc errts.{$subj}+tlrc > timeCourse.txt

This information is then used by 3dfim+ to generate a correlation map:

3dfim+ -input errts.{$subj}+tlrc -polort 0 -ideal_file timeCourse.txt -out Correlation -bucket vmPFC_Corr



Once those correlation maps are generated, use 3dcalc to convert them into z-maps:

3dcalc -a vmPFC_Corr+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj{$subj}_Z
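
(As an aside, that expression is just the Fisher r-to-z transformation, i.e., the inverse hyperbolic tangent; assuming your AFNI build includes 3dcalc's atanh function, the following should give the same result:)

3dcalc -a vmPFC_Corr+tlrc -expr 'atanh(a)' -prefix Corr_subj{$subj}_Z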





N.B. In each of the above examples, {$subj} is a placeholder for the subject ID you are currently processing; with a few tweaks, you should be able to put this all into a script that automates these processes for each subject.
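
For what it's worth, here is a minimal sketch of what such a script might look like in tcsh, assuming everything sits in a single directory and that you already have a subjList.txt file like the one created in Part III; the per-subject output names are made up for the example:

#!/bin/tcsh
# Loop the three commands above over every subject ID in subjList.txt
foreach subj (`cat subjList.txt`)
    # Extract the seed time course for this subject
    3dmaskave -quiet -mask vmPFC+tlrc errts.{$subj}+tlrc > timeCourse_{$subj}.txt
    # Correlate that time course with every other voxel in the brain
    3dfim+ -input errts.{$subj}+tlrc -polort 0 -ideal_file timeCourse_{$subj}.txt \
           -out Correlation -bucket vmPFC_Corr_{$subj}
    # Convert the correlation map to a z-map
    3dcalc -a vmPFC_Corr_{$subj}+tlrc -expr 'log((1+a)/(1-a))/2' -prefix Corr_subj{$subj}_Z
end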

N.N.B. (I think that's how you do it): The original script that I uploaded had a couple of bugs; one of the placeholders should have been changed to a generic $subj variable, and the -giant_move option has been added to the align_epi_anat.py part of the script, since the anatomical and functional images actually start out quite far away from each other. If you haven't used it yet, downloading the new script should take care of those issues. Also, another hidden change I made was to increase the motion limit from 0.2mm to 0.3mm; too many subjects were getting thrown out, and even though a more rigorous analysis would leave the motion threshold at the more conservative 0.2mm, I've raised it for now, for pedagogical purposes.

N.N.N.B. Find out what "N.B." means.


*Sponsored by the United States Department of Redundancy Department

Resting-State Analysis Part IV: Generating a Seed Region for Resting-State Analysis

Part of the resting-state pipeline includes warping each individual anatomical image into a standardized space, so that results can be compared across subjects. This also means that we can place a seed voxel or seed region into one location of the brain, and it will be roughly in the same spot for every subject in our analysis.

To do this, we will focus on one of the core components of the so-called "resting-state network," which is a reliable pattern of connectivity observed when subjects are at rest. Several studies have revealed patterns of correlated activity between the ventromedial prefrontal cortex (vmPFC) and the retrosplenial cortex, which is the network we will be focusing on for this tutorial series; our aim will be to compare the correlation between these nodes in a group of participants with autism against a control group.

First, however, we will need to create and place the seed region appropriately. We will place a seed voxel in the vmPFC at the XYZ coordinates 0, -50, -5 (corresponding to MNI coordinates of 0, +50, -5), and a correlation coefficient between the seed's time course and that of every other voxel in the brain will be estimated. A correlation map will be created for each subject, and these maps will then be collapsed within groups and statistically compared against each other.

The procedure for generating an ROI is exactly the same as what was done in a previous post about 3dUndump; we simply put the coordinates into a text file, and tell 3dUndump how large a sphere we want to create around those coordinates.


echo "0, -50, -5" > tmp.txt
3dUndump -prefix vmPFC -master errts.0050783+tlrc -srad 5 -xyz tmp.txt


This will then create a sphere with a 5mm radius around those coordinates, and information about that time series can then be extracted and correlated with other time series in every other voxel in the brain.



Resting State Analysis, Part III: Automating Your Analysis

Once you've set up a resting-state analysis script, either through uber_subject.py or following example #9 in the afni_proc.py documentation, I highly recommend that you set up some sort of higher-level script to automate running that script in each subject's directory. This is especially useful in the dataset we are using, since each individual analysis doesn't necessarily take that long, but we have a large number of subjects.

To begin, navigate to the downloaded KKI directory and use the following command to list each directory without trailing slashes:

ls -d */ | cut -f1 -d'/' > subjList.txt

Redirecting the output to a text file lets you edit the list later at your leisure; in the above example, the redirect places all of the directory names in a file called subjList.txt.

A loop can then be used to do the analysis for each subject. (You can use any shell you want, but in this example I will use the t-shell, tcsh.) Simply read the contents of the text file into a variable, then use a foreach loop to execute the analysis for each subject, e.g.:

# Read the subject IDs into a list, then run the analysis in each subject's directory
set subjects = (`cat subjList.txt`)
foreach subj ($subjects)
    cp RSproc.sh $subj/session_1
    cd $subj/session_1
    tcsh RSproc.sh $subj
    cd ../..
end

The RSproc.sh script, generated from the uber_subject.py interface used in the last tutorial, can be found here. Note that I use a motion cutoff threshold of 0.3mm, which is slightly different from the standard 0.2mm cutoff; feel free to alter this if you like.

This should take care of all of your analyses while you go do something else, such as reading a book or shopping for tupperware and nosehair trimmers.* Of course, you will want to examine the output of your commands for any errors, but this menial task can usually be delegated to one of your undergraduate RAs slated for immolation at the next resting-state data summoning.




*Or maybe that's just me.