Automating SPM Contrasts

Manually typing in contrasts in SPM is a grueling process that can have a wide array of unpleasant side effects, including diplopia, lumbago, carpal tunnel syndrome, psychosis, violent auditory and visual hallucinations, hives, and dry mouth. These symptoms are only compounded by the number of regressors in your model, and the number of subjects in your study.

Fortunately, there is a simple way to automate all of this - provided that each subject has the same number of runs, and that the regressors in each run are structured the same way. If they are, though, the following approach will work.

First, open up SPM and click on the TASKS button in the upper right corner of the Graphics window. The button is marked "TASKS" in capital letters, because they really, really want you to use this thing, and mitigate all of the damage and harm in your life caused by doing things manually. You then select the Stats menu, then Contrast Manager. The options from there are straightforward, similar to what you would do when opening up the Results section from the GUI and typing in contrasts manually.

When specifying the contrast vector, take note of how many runs there are per subject. This is because we want the average parameter estimate for each regressor we are considering, so the contrast weights should be divided by the number of runs; this puts the regressors on a more equal footing - one can imagine a scenario where one regressor occurs in every run, but another regressor only appears in a subset of runs. In addition, comparing the average parameter or contrast estimate across subjects is easier to interpret.
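As a quick sketch of the averaging idea, suppose a hypothetical model with two regressors of interest (A and B) per run, across three runs, with the run constants at the end of the design matrix - the layout and numbers here are invented for illustration, not pulled from any particular SPM design:

```matlab
% Hypothetical design: 3 runs, each with regressors [A B], run constants at the end.
% Dividing the weights by the number of runs makes the contrast the
% average A-minus-B effect across runs.
nRuns  = 3;
perRun = [1 -1];                         % A > B within a single run
conVec = repmat(perRun, 1, nRuns) / nRuns;
conVec = [conVec zeros(1, nRuns)];       % zero weights for the run constants
```

If a regressor is missing from some runs, you would instead divide by the number of runs in which it actually appears, which is exactly the bookkeeping the Contrast Manager's replication options handle for you.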

Once you have the settings to your satisfaction, save it out as a .mat file - for example, 'RunContrasts.mat'. This can then be loaded from the command line:

load('RunContrasts')

This will put a structure called "jobs" in your workspace, which contains all of the code needed to run a first-level contrast. The only part of it we need to change when looping over subjects is the spmmat field, which can be done with code like the following:

subjList = [207 208]; %And so on, including however many subjects you want

for subj = subjList

    %The path could be made a variable reflecting wherever you put your SPM.mat file
    jobs{1}.stats{1}.con.spmmat = {['/data/hammer/space4/MultiOutcome2/fmri/' num2str(subj) '/RESULTS/model_multiSess/SPM.mat']};
    spm_jobman('run', jobs);

end
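If your analyses live somewhere else, the path is easy to factor out into a variable; the directory layout below is just the one from the example above, so substitute your own:

```matlab
% Hypothetical base directory; adjust to wherever your analyses are stored.
baseDir  = '/data/hammer/space4/MultiOutcome2/fmri';
subjList = [207 208];

load('RunContrasts'); % puts the "jobs" structure in the workspace

for subj = subjList
    % Build the path to this subject's SPM.mat file
    spmMat = fullfile(baseDir, num2str(subj), 'RESULTS', 'model_multiSess', 'SPM.mat');
    jobs{1}.stats{1}.con.spmmat = {spmMat};
    spm_jobman('run', jobs);
end
```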

This is demonstrated in the following pair of videos: the first showing the general setup, and the second the execution from the command line.





Important Announcement from Andy's Brain Blog

Even though I assume that the readers of this blog are a small circle of loyal fanatics willing to keep checking in on this site even after I haven't posted for months, and although I have generally treated them with the same degree of interest I would give a Tupperware container filled with armpit hair, even they are entitled to a video update that features me sitting smugly with a cheesy rictus pasted on my face as I list off several of my undeserved accomplishments, as well as giving a thorough explanation for my long absence, and why I haven't posted any truly useful information in about a year. (Hint: It starts with a "d", and rhymes with "missertation.")

Well, the wait is over! Here it is, complete with a new logo and piano music looping softly in the background that kind of sounds like Coldplay!



For those of you who don't have the patience to sit through the video (although you might learn a thing or two about drawing ROIs with fslmaths, which I may or may not have covered a while back), here are the bullet points:


  • After several long months, I have finished my dissertation. It has been proofread, edited, converted into a PDF, and sent out to my committee where it will be promptly filed away and only skimmed through furiously on the day of my defense, where I will be grilled on tough issues such as why my Acknowledgements section includes names like Jake & Amir.
  • A few months ago I was offered, and I accepted, a postdoctoral position at Haskins Laboratories at Yale. (Although technically an independent, private research institution, it includes the name Yale in its web address, so whenever anybody asks where I will be working, I just say "Yale." This has the double effect of being deliberately misleading and making me seem far more intelligent than I am.) I recently traveled out there to meet the people I would be working with, took a tour of the lab, walked around New Haven, sang karaoke, and purchased a shotgun and a Rottweiler for personal safety reasons. Well, the Rottweiler more because I'll be pretty lonely once I get out there, and I need someone to talk to.
  • When I looked at the amount of money I would be paid for this new position, I couldn't believe it. Then when I looked at the amount of money I would be paying for rent, transportation, excess nosehair taxes (only in a state like Connecticut), shotgun ammunition, and dog food, I also couldn't believe it. Bottom line is, my finances will not change considerably once I move.
  • A new logo for the site has been designed by loyal fanatic reader Kyle Dunovan who made it out of the goodness of his heart, and possibly because he is banking on bigtime royalties once we set up an online shop with coffee mugs and t-shirts. In any case, I think it perfectly captures the vibe of the blog - stylish, cool, sleek, sophisticated, red, blue, green, and Greek.
  • Lastly, I promise - for real, this time, unlike all of those other times - to be posting some cool new techniques and tools you can use, such as slice analysis, leave-one-out analysis, and k-means clustering (as soon as I figure that last one out). Once I move to Connecticut the focus will probably shift to more big data techniques, with a renewed emphasis on online databases, similar to previous posts using the ABIDE dataset.
  • I hope to catch up on some major backlogging with emails, both on the blog and on the Youtube channel. However, I can't promise that I will get to all of them (and there are a LOT). One heartening development is that more readers are commenting on other questions and posts, and helping each other out. I hope that the community continues to grow like this, which will be further bonded through coffee mugs and t-shirts with the brain blog logo on it.

Spain



Fellow brainbloggers,

Like a neglectful parent, I have been away far too long - although I did leave twenty dollars on the kitchen table for you to order pizza - and while I could recite the usual litany of excuses, including job searching, teaching, writing papers, sorting the recycling, learning glassblowing, and recovering from gout, none of those warrants such an excessive absence; especially given the supportive, heartwarming messages I have received from some of you describing how the blog has helped save time, money, and, I like to think, relationships. Maybe. If you've learned nothing else, just remember that the most important part of a relationship is winning every argument.

However, as the end of my graduate school years comes closer, I have had to make difficult decisions resulting in the abandonment of the mistress of my youth, blogging, for the wife of mine age, dissertation writing. I never thought I would end up like all the others at this stage, finding myself using all of my free time to work on my dissertation. Partly it is because using the excuse of working on your dissertation is incredibly effective at getting out of any undesirable obligations. For example:


Injured Friend: Andy, I need to stanch the bleeding caused by a freak accident involving a pair of nosehair trimmers. Please help.
Me: You know I would, but I have to work on my dissertation.
Injured Friend: Of course; how selfish of me.

Girlfriend: Andy, we've been in a long-distance relationship for three years now, but during that time I have only seen you once on Skype - and that was just to tell me the joke about who they found in Jeffrey Dahmer's freezer*. I need more commitment from you.
Me: Sorry, Schmoopy, but I have to work on my dissertation.
Girlfriend: Whenever you say "I have to work on my dissertation," you sound so incredibly confident, productive, and sexy. I completely understand, and apologize for any inconvenience I may have caused.

Advisor: Andy, you need to spend less time screwing around, and more time working on your dissertation.
Me: Maybe, but I should probably work on my dissertation instead.
Advisor: Holy mackerel, you need to get on that.


The manifest absurdity of such statements doesn't seem to strike anybody as particularly odd - after all, you are slogging away for long hours on something productive and good, and not, say, out in the streets mugging anybody, or committing cyberterrorism - and this fills many graduate students with a satisfying sense of self-righteousness.

In any case, if you have written regarding any questions or issues, I will get to it eventually - there is a bit of a backlog at the moment, and I've had to do some triage. Although you should probably know that, again like a neglectful parent, I will be going to Spain for the holidays. That's right, I said Spain. And although I realize it may not always be the wisest thing to advertise when I am away from my apartment, please don't break into it while I'm gone. (Although, if you do, there are hotpockets in the freezer. Just saying.)


*Ben and Jerry.

How to Secure a Job in Academia

Ending graduate school and going on the job market is a terrifying prospect, especially for those nursing at the teat of a graduate student stipend. Sure, it's not an especially large amount of money, but it gets you by, pays for rent, pays for the food, and possibly pays for Netflix. The only reason you would leave it is for the more attractive teat of a professor's salary, which, if you hit the jackpot and receive tenure, you will get for the rest of your life. That is, unless you screw up bigtime by neglecting your teaching and research duties, have destructive affairs with your students, and in general completely abuse the purpose of tenure.

I am, of course, joking. There's no way you would ever lose tenure. That's why it's so popular: You can act however you want and nobody can do anything to stop you. Seriously. The U.S. military is currently experimenting with granting soldiers tenure, complete with sabbaticals every three years, and finding that they become invincible on the battlefield.

Obviously, then, securing a tenure-track job is important. If nothing else, you will need something to do for the next few decades of your life before you begin to decay and die. The rest of your friends have jobs, some of them on Wall Street. You're never quite sure what it is that they do, since most of the pictures you see of them, from what you can make out, involve cocaine-fueled orgies with celebrities. Still, they have jobs. They have purpose. The purpose of life, actually - and this is what everyone, deep down, believes in their core - is to have a secure job that pays well and that everyone else admires, even envies. The best jobs (and this is especially true in modern democracies) will dole out prizes regularly, and, ideally, you will get those prizes. 

This is the meaning of life. Anyone who tells you otherwise is wrong. I am right. The notion that there could be anything more to life is pernicious, even hateful, and you will remove it from your mind. I permit you to find the leisure time to read books, go to the opera, appreciate art, take up yoga, become politically involved, choose to become religious or to become an atheist, determine what your values are, form meaningful relationships. These activities will make you feel like a swell person, like an authentic human being, and you will derive much pleasure from comparing how well-rounded and how thoughtful you are to others. But one must never lose sight of what matters.

That is why I recommend using the website theprofessorisin.com to build your job application. The website is managed by Dr. Karen Kelsky, who has had literally oodles of experience reviewing job applications and has a nose for what works and what does not work. Everybody uses her site. Everybody. If you do not use her site, you will fail. Failure means not getting the job, which means you will not have purpose in your life.

You should be terrified at this prospect. You may think there are alternatives. There are no alternatives. The most successful tyranny does not suppress alternatives, it removes awareness of alternatives. This is me establishing a tyranny over you. You will obey. This is easy, since you have already been conditioned to feel this way by graduate school. You love jobs, prizes, and the acclaim of your peers; you are horrified by poverty, debt, shame. It is natural. Everyone feels it, no matter what they do. I have known scores of individuals desperately trying to lead bohemian existences, but in the end they all came back to the importance of a good job. Even those who most fervently preach the ideal of nonconformity, sincerity, and independence of mind, are those who, underneath their outrageous behavior and wild external adornments, lead the most traditional and safest of lives. For all of the exotic places they travel to, for all the louche connections they boast about, their internal lives are flat, their sexual lives withered. It is not the divine madness Socrates praised, nor is it the direct, immediate, nonintellectual perception of reality so highly prized by Lawrence. It is a stopgap, a rearguard action; merely something to fill up the vile lacuna in the middle of their existence.

But I digress. What I mean to say is that you should follow my orders for getting a job. Following my orders is not weakness. It is rational. You will want to get the job, so you will go to the website I just gave you. You will follow its instructions. You will both smile and cringe at the anecdotes which hit close to home for you. You will compare its examples with what you have written, and find out where you are wrong and she is right.

The reason for all of this is to be secure. There was a time where desiring this above all else was considered cowardly, pusillanimous, and shameful, but that was then. This is now. You may sneer at all of this, but you know that I am right. You may have faint stirrings of indignation that rebel against everything I have said, but you will still do what I say. Do this, and you will be happy. The notion that happiness consists in anything else is laughable. Happiness is promised by health, security, and a sound mind; not by Plato, Dickens, and Hemingway. Give men bread, and then ask of them virtue.



Andy's Brain Blog Needs Your Help!



To be more specific, I (Andy) need your help; but what's good for Andy's Brain Blog is good for America - and you're all patriots, right?

As I mentioned before, I am currently applying for jobs and putting together my research and teaching portfolios, playing up all the sexy studies currently in the works, and what I plan to do for the next few years; how I can attract students to the university, students to the department, secure money, funding, recognition, and all that good stuff necessary for the vitality of the university.

However, as all of you know, this right here is one of my dearest, most ambitious projects - to make statistics, neuroimaging, and computational modeling available to everyone in straightforward, simple terms. To use online repositories to get at data unavailable to the majority of smaller, liberal arts institutions, so that students from all parts of the globe, researchers anywhere, and quite literally anyone with a computer can get a piece of the action. To make the information in dense, unreadable technical manuals accessible and easy to understand through hands-on, no-nonsense tutorials. And - perhaps most importantly - I wear a suit when I do it.

I want to continue doing this. I want to continue building, expanding, and teaching, both here and in the classroom. I will not rest: I will drink life to the lees. Heck, maybe I'll even drink the lees too.

But to do that I need your help.

Through both the comments here, on the YouTube channel, and in private correspondence, I've talked with many researchers, students, and professors around the country and around the world. Most of you I've never seen, but I've had the privilege to help out professors and scholars all the way from Italy to China; I've freelanced for PET researchers at Michigan State and schizophrenia experimenters at Indiana, and designed experiments for primates in New York. The AFNI tutorials created here have been used as class material at the University of Pittsburgh, and my code for Hodgkin-Huxley simulations has been used for demonstrations at Claremont McKenna College in California. My recipe for homemade granola is used by several hungry researchers to keep them going throughout those long afternoons. The list could go on.

What I ask for is if you have ever used the materials here in an educational setting, be it for the researchers in your lab or the students in your classroom, please let me know by sending an email to ajahn [at] indiana [dot] edu. I am trying to compile a list of where it is used, to demonstrate its use and effectiveness.

Lastly - allow me to get real here for a moment - I've thought of all of you, each person that I've conversed with or replied to or Skyped with, as my children. And not as "children" in the sense of putting you all down on my tax returns as dependents; although, if the IRS isn't too strict about metaphors, I think I could get away with that. No, I've thought of you as real children: Shorter than I am, a little incoherent at times maybe, often trying to get my attention, and playing with Legos and Gak.

Regardless, it's been one of my great pleasures to help you all out. Now get away from that electrical socket.

Bayesian Inference, Step 2: Running Bayesian Inference with Your Data

If you've had the opportunity to install R Studio and JAGS, and to download the associated programs needed to run Bayesian inference, it is a small step to actually getting your parameter estimates. Two of the programs from the BEST folder - BESTExample.R and BEST1G.R - allow you to run independent-samples and one-sample t-tests, respectively. All that's required is replacing the input in the "y1" and "y2" variables with your data, separating each observation with a comma, and then running the program. Other options can be changed, such as which value you are comparing against in the one-sample t-test, but everything else can essentially remain the same.

I realize it's been a while between posts, but right now I'm currently in the process of applying for jobs; this should start to pick up again in mid-October once the deadlines pass, but in the meantime, wish me luck!


Bayesian Inference, Step 1: Installing JAGS On Your Machine

Common complaint: "Bayesian analysis is too hard! Also, I have kidney stones."
Solution: Make Bayesian analysis accessible and efficient through freeware that anyone can use!

These days, advances in technology, computers, and lithotripsy have made Bayesian analysis easy to implement on any personal computer. All it requires is a couple of programs and a library of scripts to run the actual process of Bayesian inference; all that needs to be supplied by you, the user, is the data you have collected. Conceptually, this is no more difficult than entering data into SAS or SPSS, and, I would argue, is easier in practice.

This can be done in R, statistical software that can interface with a variety of user-created packages. You can download one such program, JAGS, to do the MCMC sampling for building up distributions of parameter estimates, and then use those parameter estimates to brag to your friends about how you've "Gone Bayes."

All of the software and steps you need to install R, JAGS, and rjags (an R package that allows JAGS to talk to R) can be found on John Kruschke's website here. Once you have that, it's simply a matter of entering in your own data, and letting the program do the nitty-gritty for you.




Gone for a Few Days

Going to be out of the state for a few days; a few of you have written questions to me about technical issues or methods, and I'll try to get back to you as soon as I can!

In the meantime, as is custom whenever I go on vacation, here is another late masterwork by Brahms. Always a mysterious figure, one of the greatest enigmas surrounding Brahms's life was his relationship with Clara Schumann, piano virtuoso and widow of the famous composer Robert Schumann. Scholars have speculated for decades what happened between them - because, obviously, knowing whether they hooked up or not is essential for how we listen to and appreciate the music of Brahms, and such a line of inquiry is a valid funding opportunity for government grants.

Fortunately, however, I recently came across one of Brahms's lost journals in a second-hand bookstore; and the secrets disclosed within will finally give us an answer. Let's take a look:


November 4th, 1876: Cloudy today. Composed first symphony.

August 9th, 1879: Went clubbing with Liszt and Wagner. Hooked up with Clara Schumann. No doubt about it.

October 24th, 1884: Note to self: Find out why I'm writing my journal in English.

April 3rd, 1897: Died from cancer.





Bayesian Inference Web App That Explains (Nearly) Everything



For those still struggling to understand the concepts of Bayesian inference - in other words, all of you - there is a web app developed by Rasmus Baath which allows one to see the process unfolding in real time. As in an independent-samples t-test, we are trying to estimate the mean difference between the populations the samples were drawn from; however, the Bayesian approach offers much richer information: an entire distribution of parameter values, and, more importantly, which ones are more credible than others, given the data that has been collected.

The web app is simple to use: You input data values for your two groups, specify the number of samples and burn-in samples (although the defaults for these are fine), and then hit the Start button. The MCMC chain begins sampling the posterior distribution, building up a distribution of credible parameter values; the narrowest span containing the 95% most credible values is labeled the highest density interval, or HDI. The same procedure is applied to all of the other parameters informed by the data - normality, effect size, and individual group means and standard deviations - which provides a much richer set of information than null hypothesis significance testing (NHST).
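To give a sense of what the app is computing, here is one common way to find an HDI from a chain of MCMC samples - the narrowest interval containing 95% of them. This is a sketch of the general idea, not the app's actual code, and the normal samples are just a stand-in for a real chain:

```matlab
% Sketch: 95% HDI as the narrowest interval containing 95% of the samples.
samples  = sort(randn(10000, 1));   % stand-in for an MCMC chain of a parameter
credMass = 0.95;
nKeep    = floor(credMass * numel(samples));

% Width of every candidate interval [samples(i), samples(i+nKeep-1)]
widths   = samples(nKeep:end) - samples(1:end-nKeep+1);
[~, idx] = min(widths);             % index of the narrowest interval
hdi      = [samples(idx), samples(idx + nKeep - 1)];
```

For a roughly standard normal chain like this one, the HDI comes out near [-2, 2], matching the familiar central 95% of the distribution.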

Because of this I plan to start incorporating more Bayesian statistics into my posts, and also because I believe it will overtake, replace, and destroy NHST as the dominant statistical method in the next ten years, burning its crops, sowing salt in its fields, looting its stores, stampeding its women, and ravishing its cattle. All of this, coming from someone slated to teach the traditional NHST approach next semester; which, understandably, has me conflicted. On the one hand, I feel pressured to do whatever is most popular at the moment; but on the other, I am an opportunistic coward.

In any case, Bayesian inference is becoming a more tractable technique, thanks to programs that interface with the statistics package R, such as JAGS. Using this to estimate parameters for region of interest data, I think, will be a good first step for introducing Bayesian methods to neuroimagers.


Parametric Modulation with SPM: Why Order Matters

Those who are the youngest children in the family are keenly aware of the disadvantages of being last in the pecking order; the quality of toys, clothes, and nutrition is often best for the eldest children in the family, and gradually tapers off as you reach the last kids, who are usually left with what are called, in parent euphemisms, "hand-me-downs."

In reality, these are things that should have been discarded several years ago due to being broken, unsafe, containing asbestos, or clearly being overused, such as pairs of underwear that have several holes in them so large that you're unsure of where exactly your legs go.

In SPM land, a similar thing happens to parametric modulators, with later modulators getting the dregs of the variance not wanted by the first modulators. Think of variance as toys and cake, and parametric modulators as children; everybody wants the most variance, but only the first modulators get the lion's share of it; the last modulators get whatever wasn't given to the first modulators.

This happens because when GLMs are set up, SPM uses a command, spm_orth, to orthogonalize the modulators. The first modulator is essentially untouched, but all other modulators coming after it are orthogonalized with respect to the first; and this proceeds in a stepwise fashion, with the second modulator getting whatever variance was unassigned to the first, the third getting whatever was unassigned to the second, and so on.

(You can see this for yourself by doing a couple of quick simulations in Matlab. Type x1=rand(10,1), and x2=rand(10,1), and then combine the two into a matrix with xMat = [x1 x2]. Type spm_orth(xMat), and notice that the first column is unchanged, while the second is greatly altered. You can then flip it around by creating another matrix, xMatRev = [x2 x1], and using spm_orth again to see what effect this change has.)
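The quick simulation described above, written out in full (this assumes SPM is on your Matlab path, so that spm_orth is available):

```matlab
% Two random regressors; spm_orth leaves the first column essentially
% untouched and orthogonalizes the second with respect to it.
x1 = rand(10,1);
x2 = rand(10,1);

xMat     = [x1 x2];
xMatOrth = spm_orth(xMat);      % column 1 unchanged, column 2 greatly altered

% Flip the order, and the roles reverse:
xMatRev     = [x2 x1];
xMatRevOrth = spm_orth(xMatRev); % now x2 is untouched and x1 is altered
```

Comparing xMatOrth and xMatRevOrth column by column makes the asymmetry obvious: whichever modulator comes first keeps its variance.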

Be aware of this, as this can dramatically alter your results depending on how your modulators were ordered. (If they were completely orthogonal by design, then the spm_orth command should have virtually no impact; but this scenario is rare.) This came to my attention when looking at a contrast of a parametric modulator, which was one of three assigned to a main regressor. In one GLM, the modulator was last, and correspondingly explained almost no variance in the data; while in another GLM, the modulator was first, and loaded on much more of the brain. The following two figures show the difference, using exactly the same contrast and correction threshold, but simply reversing the order of the modulators:


Results window when parametric modulator was the last one in order.


Results window when parametric modulator was the first one in order.

This can be dealt with in a couple of ways:
  1. Comment out the spm_orth call in two scripts used in setting up the GLM: line 229 in spm_get_ons.m, and lines 285-287 in spm_fMRI_design.m. However, this might affect people who actually do want the orthogonalization on, and also this increases the probability that you will screw something up. Also note that doing this means that the modulators will not get de-meaned when entered into the GLM.
  2. Create your own modulators by de-meaning the regressor, inserting your value at the TR where the modulator occurred, then convolving this with an HRF (usually the one in SPM.xBF.bf). If needed, downsample the output to match the original TR.
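A rough sketch of option 2, assuming SPM's usual microtime resolution of 16 bins per TR and using the canonical HRF from spm_hrf (the onsets and modulator values below are invented for illustration; in practice you would pull the basis function from SPM.xBF.bf):

```matlab
% Hypothetical example: build a parametric modulator regressor by hand.
TR     = 2;                  % repetition time in seconds
nScans = 200;
dt     = TR / 16;            % microtime resolution (16 bins per TR)
onsets = [10 50 90 130];     % event onsets in seconds (invented)
pmVals = [2 5 1 4];          % modulator value at each event (invented)

pmVals = pmVals - mean(pmVals);       % de-mean the modulator

u = zeros(nScans * 16, 1);            % stick function at microtime resolution
u(round(onsets / dt) + 1) = pmVals;   % insert values at the event onsets

hrf = spm_hrf(dt);                    % canonical HRF sampled at dt
x   = conv(u, hrf);                   % convolve sticks with the HRF

pmReg = x(1:16:nScans*16);            % downsample back to one value per TR
```

The resulting pmReg can then be entered as a user-specified regressor, sidestepping spm_orth entirely.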

It is well to be aware of all this, not only for your own contrasts, but to see how this is done in other papers. I recall reading one a long time ago - I can't remember the name - where several different models were used, in particular different models where the modulators were entered in different orders. Not surprisingly, the results were vastly different, although why this was the case wasn't clearly explained. Just something to keep in mind, as both a reader and a reviewer.

I should also point out that this orthogonalization does not occur in AFNI; the order of modulators has no influence on how they are estimated. A couple of simulations should help illustrate this:

Design matrices from AFNI with simulated data (one regressor, two parametric modulators). Left: PM1 entered first, PM2 second. Right: Order reversed. The values are the same in both windows, just transposed to illustrate which one was first in the model.

More information on the topic can be found here, and detailed instructions on how to construct your own modulators can be found on Tor Wager's lab website.