Erotic Neuroimaging Journal Titles

Yes...YES


With all this talk about sexy results these days, I think that there should be a special line of erotic neuroimaging journals dedicated to publishing only the sexiest, sultriest results. Neuroscientist pornography, if you will.

Some ideas for titles:

-Huge OFC Activations

-Blobs on Brains

-Exploring Extremely Active Regions of Interest

-Humungo Garbanzo BOLD Responses

-Deep Brain Stimulations (subtitle: Only the Hottest, Deepest Stimulations)

-Journal of where they show you those IAPS snaps, and the first one is like a picture of a couple snuggling, and you're like, oh hells yeah, here we go; and then they show you some messed-up photo of a charred corpse or a severed hand or something. The hell is wrong with these people? That stuff is gross; it's GROSS.


Think of the market for this; think of how much wider an audience we could attract if we played up the sexy side of science more. Imagine the thrill, for example, of walking into your advisor's office as he hastily tries to hide a copy of Humungo Garbanzo inside his desk drawer. Life would be fuller and more interesting; the lab atmosphere would be suffused with sexiness and tinged with erotic anticipation; the research process would be transformed into a non-stop bacchanalia. Someone needs to step up and make this happen.

AFNI Part 1: Introduction



As promised, we now begin our series of AFNI tutorials. These walkthroughs will be more in-depth than the FSL series, as I am more familiar with AFNI and use it for a greater number of tasks; accordingly, more advanced tools and concepts will be covered.

Using AFNI requires a solid understanding of Unix: the user should know how to read and write conditional statements and for loops, as well as how to interpret scripts written by others. Furthermore, when confronted with a new or unfamiliar script or command, the user should be able to make an educated guess about what it does. AFNI also demands a sophisticated knowledge of fMRI preprocessing steps and statistical analysis, as AFNI gives the user far more opportunity to customize each analysis script.
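To give a flavor of the prerequisites, here is a minimal tcsh sketch (the dataset names are hypothetical) of the kind of construct you should be able to read at a glance:

#!/bin/tcsh
# check that each run's dataset exists before processing it
foreach run (run1 run2 run3)
    if ( ! -e ${run}+orig.HEAD ) then
        echo "Missing dataset for $run"
    endif
end

If that makes sense on a first read, you are ready; if not, work through the Unix tutorials first.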

A few other points about AFNI:

1) There is no release schedule. This means that there is no fixed date for the release of new versions or patches; rather, AFNI responds to user demands on an ad hoc basis. In a sense, all users are beta testers for life. The advantage is that requests are addressed quickly; I once made a feature request at an AFNI bootcamp, and the developers updated the software before I returned home the following week.

2) AFNI is almost entirely run from the command line. In order to make the process less painful, the developers have created "uber" scripts which allow the user to input experiment information through a graphical user interface and generate a preprocessing script. However, these should be treated as templates subject to further alteration (see the sketch after this list).

3) AFNI has a quirky, strange, and, at times, shocking sense of humor. By clicking on a hidden hotspot on the AFNI interface, one can choose a favorite Shakespeare sonnet; read through the Declaration of Independence; generate an inspirational quote; or receive kind and thoughtful parting words. Do not let this deter you. As you become more proficient with AFNI, and as you gain greater life experience and maturity, the style of the software will become more comprehensible, even enjoyable. It is said that one knows he is going insane when what used to be nonsensical gibberish starts to take on profound meaning. So too with AFNI.
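Regarding point 2, here is a minimal sketch of the "uber" workflow (the output naming is an assumption, as it depends on your AFNI version):

# launch the GUI script generator that ships with AFNI
uber_subject.py

# after you fill in the subject ID, anatomical, EPI runs, and stimulus
# timing, it writes out an afni_proc.py command and a proc script for the
# subject; treat these as templates and edit them by hand before running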

The next video will cover the to3d command and the conversion of raw volumetric data into AFNI's BRIK/HEAD format; study this alongside data conversion through mricron, as both produce a similar result and can be used to complement each other. As we progress, we will methodically work through the preprocessing stream and how to visualize the output with AFNI, with an emphasis on detecting artifacts and understanding what is being done at each step. Along the way different AFNI tools and scripts will be broken down and discussed.
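As a preview, here is a hedged sketch of a typical to3d call for functional data (the slice count, timing pattern, and filenames are all hypothetical):

# convert a series of 2D images into an AFNI BRIK/HEAD dataset:
# 30 slices, 160 time points, TR of 2000 ms, interleaved ascending slices
to3d -prefix run1_epi -time:zt 30 160 2000 alt+z 'IM*.dcm'

# the result is run1_epi+orig.BRIK and run1_epi+orig.HEAD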

At long last my children, we shall take that which is rightfully ours. We shall become as gods among fMRI researchers - wise as serpents, harmless as doves. Seek to understand AFNI with an open heart, and I will gather you unto my terrible and innumerable flesh and hasten your annihilation.


Computational Modeling: A Confession





In a desperate attempt to make myself look cool and connected, on my lab webpage I wrote that my research
...focuses on the application of fMRI and computational modeling in order to further understand prediction and evaluation mechanisms in the medial prefrontal cortex and associated cortical and subcortical areas...
Lies. By God, lies. I know as much about computational modeling as I do about how Band-Aids work or what is up an elephant's trunk. I had hoped that I would grow into the description I wrote for myself; but alas, as with my pathetic attempts to wake up every morning before ten o'clock, or my resolution to eat vegetables at least once a week, this also has proved too ambitious a goal; and slowly, steadily, I find myself engulfed in a blackened pit of despair.

Computational modeling - mystery of mysteries. In my academic youth I observed how cognitive neuroscientists outlined computational models of how certain parts of the brain work; I took notice that their work was received with plaudits and the feverish adoration of my fellow nerds; I then burned with jealousy upon seeing these modelers at conferences, mobs of slack-jawed science junkies surrounding their posters, trains of odalisques in their wake as they made their way back to their hotel chambers at the local Motel 6 and then proceeded to sink into the ocean of their own lust. For me, learning the secrets of this dark art meant unlocking the mysteries of the universe; I was convinced it would expand my consciousness a thousandfold.

I work with a computational modeler in my lab - he is the paragon of happiness. He goes about his work with zest and vigor, modeling anything and everything with confidence; not for a moment does self-doubt cast its shadow upon his soul. He is the envy of the entire psychology department; he has a spring in his step and a knowing wink in his eye; the very mention of his name is enough to make the ladies' heads turn. He has it all, because he knows the secrets, the joys, the unbounded ecstasies of computational modeling.

Desiring to have this knowledge for myself, I enrolled in a class about computational modeling. I hoped to gain some insight; some clarity. So far I have only found myself entangled in a confused mess. I hold onto the hope that through perseverance something will eventually stick.

However, the class has provided useful resources to get the beginner started. A working knowledge of the electrochemical properties of neurons is essential, as is experience simulating those properties in software such as Matlab. The Book of GENESIS is a good place to get started with sample code and to catch up on the modeling argot; likewise, the CCN wiki over at Colorado is a well-written introduction to the concepts of modeling and how they apply to different cognitive domains.

I hope that you get more out of them than I have so far; I will post more about my journey as the semester goes on.

Mapping Results onto SUMA (Part 2)

In a previous post I outlined how to overlay results generated by SPM or FSL onto a SUMA surface, and published a tutorial video on my Techsmith account. However, as I am consolidating all of my tutorials onto YouTube, this video has been uploaded there instead.

There are few differences between this tutorial and the previous one; however, it is worth reemphasizing that, as the results have been interpolated onto another surface, one should not perform statistical analyses on these surface maps - use them for visualization purposes only. The correct approach for surface-based analyses is to perform all of your preprocessing and statistics on the surface itself, a procedure which will later be discussed in greater detail.
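For reference, one common route for the volume-to-surface step (not necessarily the exact one shown in the video) is AFNI's 3dVol2Surf; a hedged sketch follows, with hypothetical filenames, assuming a spec file has already been created from your FreeSurfer surfaces and that the stats are in register with the surface volume:

# map a volume-based stat dataset onto the cortical surface, averaging
# along segments running between the white matter and pial surfaces
3dVol2Surf -spec subj_lh.spec \
           -surf_A smoothwm \
           -surf_B pial \
           -sv SurfVol_Alnd_Exp+orig \
           -grid_parent stats+orig \
           -map_func ave \
           -out_niml lh_stats.niml.dset

# load lh_stats.niml.dset as an overlay in SUMA - for visualization only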

A couple of other notes:

1) Use the '.' and ',' keys to toggle between views such as pial, white matter, and inflated surfaces. These keys were not discussed in the video.

2) I recommend using SPM to generate cluster-corrected images before overlaying these onto SUMA. That way, you won't have to mess with the threshold slider in order to guess which t-value cutoff to use.

More AFNI to come in the near future!


Lesion Studies: Recent Trends

A couple weeks ago I blogged hard about a problem presented by lesion studies of the anterior cingulate cortex (ACC), a broad swath of cortex shown to be involved in aspects of cognitive control such as conflict monitoring (Botvinick et al, 2001) and predicting error likelihood (Brown & Braver, 2005). Put simply: the annihilation of this region, whether through stroke, infarction, bullet wound, or other cerebral insult, does not result in a deficit of cognitive control as measured by reaction time (RT) on tasks such as the Stroop task - which requires naming the color of the ink a word is printed in while overriding the prepotent tendency to read the word itself - and paradigms which involve task-switching.

In particular, a lesion study by Fellows & Farah (2005) did not find a significant RT interaction of group (controls vs. lesion patients) by condition (low vs. high conflict in a Stroop task; i.e., whether the word and ink color matched or did not match), suggesting that the performance of the lesion patients was essentially the same as that of controls. This in turn prompted the question of whether the ACC is really necessary for cognitive control, since those without it seemed to do just fine, and were about a pound lighter to boot. (Rimshot)

However, a recent study by Sheth et al (2012) in Nature examined six lesion patients undergoing cingulotomy, a surgical procedure which removes a localized portion of the dorsal anterior cingulate cortex (dACC) in order to alleviate severe obsessive-compulsive symptoms, such as the desire to compulsively check the number of hits your blog gets every hour. Before the cingulotomy, the patients performed a multisource interference task designed to elicit the cognitive control mechanisms associated with dACC activation. The resulting cingulotomy overlapped with the peak dACC activation observed in response to high-conflict as contrasted with low-conflict trials (Figure 1).

Figure 1 reproduced from Sheth et al (2012). d) dACC activation in response to conflict. e) Arrow pointing to the lesion site.

Furthermore, the pattern of RTs before surgery followed a typical response pattern replicated over several studies using this task: RTs were faster for trials immediately following trials of a similar type - such as congruent trials following congruent trials, or incongruent trials following incongruent trials - and RTs were slower for trials which immediately followed trials of a different type, a pattern known as the Gratton effect.

The authors found that global error rates and RTs were similar before and after the surgery, dovetailing with the results reported by Fellows & Farah (2005); however, the modulation of RT based on previous trial congruency or incongruency was abolished. These results suggest that the ACC functions as a continuous updating mechanism modulating responses based on the weighted past and on trial-by-trial cognitive demands, which fits into the framework posited by Dosenbach (2007, 2008) that outlines the ACC as part of a rapid-updating cingulo-opercular network necessary for quick and flexible changes in performance based on task demands and performance history.

a) Pre-surgical RTs in response to trials of increasing conflict. b, c) Post-surgical RTs showing no difference between low-conflict trials preceded by either similar or different trial types (b), and no RT difference between high-conflict trials preceded by either similar or different trial types (c).


Above all, this experiment illustrates how lesion studies ought to be conducted. First, the authors identified a small population of subjects about to undergo a localized surgical procedure to lesion a specific area of the brain known to be involved in cognitive control; the same subjects were tested before the surgery using fMRI and during surgery using single-cell recordings; and interactions were tested which had been overlooked by previous lesion studies. It is an elegant and simple design; although I imagine that testing subjects while they had their skulls split open and electrodes jammed into their brains was disgusting. The things that these sickos will do for high-profile papers.

(This study may be profitably read in conjunction with a recent meta-analysis of lesion subjects (Gläscher et al, 2012; PNAS) dissociating cortical structures involved in cognitive control as opposed to decision-making and evaluation tasks. I recommend giving both of these studies a read.)

Optseq and Event-Related Designs

One question I frequently hear - besides "Why are you staring at me like that?" - is, "Should I optimize my experimental design?" This question is foolish. As well ask a man what he thinks of Nutella. Optimal design is essential to consider before creating an experiment, in order to strike a balance between the number of trials, the duration of the experiment, and the efficiency of estimation. Optseq is one tool that helps the experimenter balance all three.

Optseq, a tool developed by the good people over at MGH, helps the user order the presentation of experimental conditions or stimuli so as to optimize the estimation of beta weights for a given condition or a given contrast. Given a list of the experimental conditions, the length of the experiment, the TR, and the duration and frequency of each condition, optseq will generate schedules of stimulus presentation that maximize the power of the design, both by reducing variance due to noise and by increasing the efficiency of the beta-weight estimates. This is especially important in event-related designs, which are discussed in greater detail below.

Chapter 1: Slow Event-Related Designs

Slow event-related designs refer to experimental paradigms in which the hemodynamic response function (HRF) for a given condition does not overlap with another condition's HRF. For example, let us say that we have an experiment in which IAPS photos are presented to the subject at certain intervals. Let us also say that there are three classes of photos (or conditions): 1) disgusting pictures (DisgustingPic); 2) attractive pictures (AttractivePic); and 3) neutral pictures (NeutralPic). The pictures will be presented at a long enough time interval that the HRFs do not overlap, as shown in the following figure (Note: All of the following figures are taken from the AFNI website; all credit and honor is theirs. They may find these later and try to sue me, but that is a risk I am willing to take):



The interstimulus interval (ISI) in this example is 15 seconds, allowing enough time for the entire HRF to rise and decay, and therefore allowing the experimenter to estimate a beta weight for each condition individually without worrying about overlap between HRFs. (The typical duration of an HRF, from onset to decay back to baseline, is roughly 12 seconds, with a long period of undershoot and recovery thereafter.) However, slow event-related designs make for relatively long experiments, since each stimulus presentation must be followed by enough time for the HRF to decay back to baseline. Furthermore, the subject may become bored during such lengthy "null" periods, and may develop strategies (aka "sets") when the timing is too predictable. Therefore, quicker stimulus presentation is desirable, even if it leads to overlap between the conditions' HRFs.


Chapter 2: Rapid (or Fast) Event-Related Designs

Rapid event-related designs refer to experimental paradigms in which the ISI between conditions is short enough that the hemodynamic response function (HRF) for a given condition overlaps with another condition's HRF. Theoretically, given the assumption that overlapping HRFs sum linearly, this shouldn't be a problem, and the contribution of each condition's HRF can be recovered through a process known as deconvolution. However, this becomes an issue when the ISIs are fixed and the order of stimulus presentation is fixed as well:


In this example, each condition follows the other in a fixed pattern: A followed by B followed by C. The resulting signal arising from the combination of HRFs (shown in purple in the bottom graph) is impossible to disentangle; we have no way of knowing, at any given time point, how much of the signal is due to condition A, B, or C. In order to resolve this, we must randomize the order of the stimuli and also the duration of the ISIs (a process known as "jittering"):


Now we have enough variability so that we can deconvolve the contribution of each condition to the overall signal, and estimate a beta weight for each condition. This is the type of experimental design that optseq deals with.


Chapter 3: Optseq

In order to use optseq, first download the software from the MGH website (or just google "optseq" and click on the first hit that comes up). Put the shell script into a folder somewhere in your path (such as ~/abin, if you have already downloaded AFNI) so that it can be executed from anywhere within the shell. After you have done this, you may also want to type "rehash" at the command line so that the shell picks up the new command. Then type "optseq2" to test whether the program executes (if it does, you should see the help output, which is generated when no arguments are given).
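A minimal sketch of that setup, assuming you saved optseq2 to ~/abin and are using the t-shell:

# make the script executable and add ~/abin to the path
chmod +x ~/abin/optseq2
set path = ($path ~/abin)

# rescan the path, then run optseq2 with no arguments to see the help output
rehash
optseq2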

Optseq needs several pieces of information: 1) the number of time points in a block of your experiment; 2) the TR; 3) the post-stimulus window (or post-stimulus delay, PSD) over which the hemodynamic response will be estimated; 4) a permissible range for the minimum and maximum amounts of null time between the end of one stimulus and the start of the next; and 5) how many conditions there are in your experiment, how long each one lasts per trial, and how many trials of each there are per block. Optseq will then generate a series of schedules that randomize the order of the ISIs and the stimuli, and keep those with the greatest design efficiency.

Using the IAPS photos example above, let us also assume that there are: 1) 160 time points in the experiment; 2) a TR of 2 seconds; 3) a post-stimulus window of 20 seconds to capture the hemodynamic response; 4) jitters ranging from 2 to 8 seconds; and 5) the three conditions mentioned above, each lasting 2 seconds per trial, with 20 instances of DisgustingPic, 15 instances of AttractivePic, and 30 instances of NeutralPic. Furthermore, let us say that we wish to keep the top three schedules generated by optseq, each prefixed by the word IAPS, and that we want to randomly generate 1000 schedules.

Lastly, let us say that we are interested in a specific contrast - for example, calculating the difference in activation between the conditions disgustingPic and attractivePic.

In order to do this, we would type the following:

optseq2 --ntp 160 --tr 2 --psdwin 0 20 2 --ev disgustingPic 2 20 --ev attractivePic 2 15 --ev neutralPic 2 30 --evc 1 -1 0 --nkeep 3 --o IAPS --tnullmin 2 --tnullmax 8 --nsearch 1000

This will generate stimulus presentation schedules and keep only the top three; examples of these schedules, as well as an overview of how the efficiencies are calculated, can be seen in the video.
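If you want to poke around in the output yourself, the kept schedules are written as paradigm files named after the --o prefix (the exact file naming may vary by version):

# list and inspect the kept schedules
ls IAPS*
head IAPS-001.par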


Chapter 4: The Reckoning - Comparison of optseq to 1d_tool.py

Optseq is a great tool for generating templates for stimulus presentation. However, if only one schedule is generated and applied to all blocks and all subjects, there is the potential for ordering effects to be confounded with your design, however minimal that possibility may be. If you use optseq, I recommend generating a unique schedule for each block, and then counterbalancing or randomizing those blocks across subjects. Furthermore, you may also want to randomly sample the stimuli and ISIs, simulate several runs of your experiment, and then run them through AFNI's 1d_tool.py to see how efficient they are (a sketch follows below). With a working knowledge of optseq and 1d_tool.py, in addition to a steady diet of HotPockets slathered with Nutella, you shall soon become invincible.

You shall be...Perfect.
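And before you attain perfection, here is a hedged sketch of the vetting step mentioned above: build the design matrix with no data, then check the regressor correlations (the timing filenames are hypothetical, matching the IAPS example):

# build the design matrix only (no input data): 160 time points, TR = 2 s
3dDeconvolve -nodata 160 2.0 -polort 2 \
    -num_stimts 3 \
    -stim_times 1 disgust_times.1D 'BLOCK(2,1)' -stim_label 1 disgustingPic \
    -stim_times 2 attract_times.1D 'BLOCK(2,1)' -stim_label 2 attractivePic \
    -stim_times 3 neutral_times.1D 'BLOCK(2,1)' -stim_label 3 neutralPic \
    -x1D X.xmat.1D

# flag regressor pairs that are too highly correlated
1d_tool.py -infile X.xmat.1D -show_cormat_warnings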


Unix for Neuroimagers: For Loops and Shell Scripting

With this tutorial we begin to enter into AFNI territory, as the more advanced Unix commands and procedures are necessary for running an AFNI analysis. First, we cover the basics of for loops. For loops in fMRI analysis are used to loop over subjects and execute preprocessing and postprocessing scripts for each subject individually. (By the way, I've heard of preprocessing and I've heard of postprocessing; but where does plain, regular processing take place? Has anyone ever thought of that? Does it ever happen at all? I feel like I'm taking crazy pills here.)

Within the t-shell (and I use the t-shell here because it is the default shell used by AFNI), a for loop has the following syntax:


foreach [variableName] ( [var1] [var2]...[varN])
     
     [Executable steps go here]

end



So, for example, if I wanted to loop over a list of values and have them assigned to variable "x" and then return those values, I could do something like the following:

foreach x (1 2 3 4 5)
    
     echo $x

end

Which would return the following:

1
2
3
4
5

Notice in this example that the for loop executes the line of code "echo $x", and assigns a new value to x for each step of the for loop. You can imagine how useful this can be when extended to processing individual subjects for an analysis, e.g.:

foreach subj ( subj1 subj2 subj3...subjN )

    # run the preprocessing script generated beforehand by afni_proc.py
    tcsh proc.$subj

end


That is all you need to process a list of subjects automatically, and, after a little practice, for loops become easy and efficient to use. The same logic applies to FSL analyses as well - for example, looping a feat command over a list of design files, as sketched below. In the next batch of tutorials, we will begin covering the basics of AFNI, and eventually see how a typical AFNI script incorporates for loops in its analysis.
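For instance, a sketch in the same spirit (the design file names are hypothetical):

# loop a first-level FEAT analysis over several design files
foreach design (design_subj1.fsf design_subj2.fsf design_subj3.fsf)
    feat $design
end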

The second part of the tutorial covers shell scripting. A shell script is simply a text file containing a series of commands to be executed together, allowing the user more flexibility in running a complex series of commands that would be unwieldy to type out at the command line. Just think of it as dumping everything you would have typed explicitly into the command line into a tidy little text file that can be altered as needed and executed repeatedly.

Let us say that we wish to execute the code in the previous example, and update it as more subjects are acquired. All that needs to be added is a shebang, which specifies the shell syntax used in the script:

#!/bin/tcsh

foreach subj (subj1 subj2 subj3)

     tcsh proc.$subj

end


After this file has been saved, make sure that it has executable permissions. Let's assume that the name of our shell script is AFNI_procScript.sh; in order to give it executable permissions, simply type

chmod +x AFNI_procScript.sh

And it should now be runnable from the command line. In order to run it, type "./" before the name of the script, which tells the shell to execute the script from the current directory, e.g.:

./AFNI_procScript.sh

Alternatively, if you wish to override the shebang, you can type the name of the shell you wish to use before the name of the script. So, for example, if I wanted to use bash syntax, I could type:

bash AFNI_procScript.sh

However, since the for loop is written in t-shell syntax, this would return an error.


Due to popular demand, an optseq tutorial will be uploaded tomorrow. Comparisons between the output of optseq and AFNI's 1d_tool.py are encouraged, as there is no "right" sequence of stimulus presentations you should use; only reasonable ones.


Mendeley

One of the most useful resources I have yet found for organizing and annotating papers is the reference manager Mendeley. Mendeley's incredible powers are self-evident; only mere months ago I was surrounded by towering stacks of papers, feeling guilty that I had massacred countless trees in order to produce reams of articles that I would never get around to reading anyway. Then I found Mendeley, burned all those papers with extreme prejudice (what else can you do with a stack of papers you don't want to read?), and haven't looked back. Now, instead of wasting money on pens, paper, and highlighters, every paper I have ever read is at my fingertips as long as I have a computer and an Internet connection. Now, instead of hazarding the inability to read the chicken-scratches I wrote down months ago on a sheet of paper, I have everything typed up nice and legible. Now, instead of feeling like just another dweeb whose desk space is swamped with more than he can handle, providing visible proof of his incompetency and slovenliness, I can hide that dweebiness down, deep down, where nobody can ever find it.

Mendeley has a host of features, many of which are documented on their website; I will only mention a couple here that I find particularly useful. First, in addition to inserting and formatting references into word processors, Mendeley also features an intuitive interface and allows the user to organize papers quickly and easily.

Mendeley Interface. Folders are on the left; list of returned search items in the middle; reference information on the right.


However, in my experience the most useful feature is Mendeley's built-in PDF reader, which opens up articles in a separate tab for easy reference. The paper can then be marked up with annotations, notes, and highlights, which is useful for writing down your deepest thoughts, feelings, and musings on complex yet beautiful scientific topics.



Papers can also be shared with groups of coworkers and friends through Mendeley's online sharing system, which works quite smoothly. Share those papers, even with people who don't want them or haven't even asked for them, and feel a sense of satisfaction that you are doing something good for someone else.

Let me share a personal habit. In my most private moments, when the air is so still you can hear the sound of the lid of a jar of Nutella being unscrewed, where the only smell is the faint odor radiating from my socks, and when I am absolutely sure nobody is watching, I select multiple references using the shift key and then insert them into a Word document. Try it.

A Survey of Methods Reporting

Figure 1: Why you are (probably) a slacker
Taken from Carp, 2012


A recent article in NeuroImage suggests that we cognitive neuroscientists aren't reporting our methods in adequate detail when we submit articles for publication. Is it because we're all lazy, shiftless academic scum? Absolutely.

However, that is only part of the problem, according to the article's author, Joshua Carp. Because neuroimaging processing pipelines are so complex and multifaceted, the number of unique ways to process a given set of data rapidly approaches infinity (or at least several thousand). Of course, some ways of analyzing data are more reasonable than others; for example, you probably wouldn't want to smooth your data before doing registration, because that would be straight-up bonkers. However, it can be done, and, according to the article, a substantial fraction of studies published from 2007 to 2011 did not provide adequate information for replication, leaving open the question of whether a failure to replicate a given result represents a failure of the replication study, or whether the initial result was a false positive. Furthermore, this flexibility in processing leads to a greater likelihood of false positives, and makes it less likely that another lab will be able to reproduce a given experiment (see, for example, Ioannidis's 2005 paper on the subject).

To mitigate this problem, Carp suggests that neuroimagers adhere to the guidelines provided by Russ Poldrack in his 2008 paper: reporting the number of subjects, the number of runs (a surprising number of studies did not report this, according to Carp), how ROI analyses were performed, how results are presented, and so forth. I agree that these are all important things to report, although I also happen to agree with Carp that some of the more technical details, such as signal scaling, are set as defaults in programs such as SPM and often aren't given another thought; whereas something like smoothing, which has a higher likelihood of being changed based on the question being addressed, is more likely to be reported. Thus Carp's suggestion that more journals provide a supplementary material section for all of the nitty-gritty details is, I believe, a good one.

In any case, another timely reminder (for me at least) about methods reporting and why it matters.

Unix for Neuroimagers: Shells and Variables

First, a few updates:

1) We just finished our first week of the semester here, and although things haven't been too busy, it may be a couple of weeks before I get back on a steady updating schedule. I'll do what I can to keep dropping that fatty knowledge on the regular, and educating your pale, soy-latte-white, Famous Dave's BBQ-stained faces on how to stay trill on that data and stack that cheddah to the ceiling like it's your job. And if you got one of those blogs dedicated to how you and your virgin-ass Rockband-playing frat brothers with names like Brady and Troy and Jason eating those cucumber salad sandwiches or whatever and you drop a link to this site, I'll know it. You show me that love, and I show it right the hell back.

2) While you're here, how about you donate a piece of that stack to the American Cancer Society. I mean, damn; I'm out there seven days a week on those roads, sweating and suffering, but you - you're at work procrastinating again, wringing your snow-bunny white hands over whether you should drop out of graduate school or just toughen it out and graduate in eight years, and while you're at it possibly take a swipe at that new Italian breezey who just entered the neuroscience program. Donate first, worry about those problems later.

3) We got another performance for you all this November, including Schumann's Adagio and Allegro for cello and piano, Respighi's Adagio con Variazioni, and the Debussy cello sonata. Time and location TBA. Also, more music videos will be uploaded soon, but while you're waiting, you can listen to the latest Mozart Fantasie in D Minor, which has proved one of my most popular videos to date; last I checked, it had 57 views, which I think qualifies for viral status. We goin' worldwide, baby! World-WIDE!!

4) AFNI tutorials are next on the docket, after wrapping up the intro Unix tutorials for neuroimagers, and possibly doing a couple more FSL tutorials on featquery, FSL's ROI analysis tool. Beyond that, there isn't much else I have to say about it; now that you've mastered the basics, you should be able to get the program to jump through whatever hoops you set up for it and to do whatever else you need. There are more complex and sophisticated tools in FSL, to be sure, but that isn't my focus; I will, on the other hand, be going into quite a lot of details with AFNI, including how to run functional connectivity and MVPA analyses. It will take time, but we will get there; as with the FSL tutorials, I'll start from the bottom up.


Anyway, the latest Unix tutorial covers the basics on shells and variables. Shells are just ways of interfacing with the Unix OS; different shells, such as the t-shell (tcsh) and bash shell, do the same thing, but have different syntax and different nomenclature for how they execute commands. So, for example, an if/else statement in the t-shell looks different from a similar statement in the bash shell.
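For example, here is the same file check written in both shells (a minimal sketch):

# t-shell version
if ( -e myfile.txt ) then
    echo "Found it"
endif

# bash version of the same check
if [ -e myfile.txt ]; then
    echo "Found it"
fi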

Overall, there's no need to worry too much about which shell you use, although AFNI's default is tcsh, so you may want to get yourself used to that before doing too much with AFNI. I myself use tcsh virtually all of the time, except for a few instances where bash is the only tool that works for the job (running processes on IU's supercomputer, Quarry, comes to mind). There are lots of tcsh haters out there for reasons that are beyond me, but for everything that I do, it works just fine.

As for variables, this is one of the first things you get taught in any intro computer science class, and those of you who have used other software packages, such as R or Matlab, already know what a variable is. In a nutshell, a variable is a thing that has a value. The value can be a string, or a letter, or a number, or pretty much anything. So, for example, when I type in the command
set x=10
in the t-shell, the variable is x, and the value is now 10. If I wish to extract the value from x at any time, I prepend a dollar sign ('$') to it, in order to tell Unix that what follows is a variable. You can also use the 'echo' command to dump the value of the variable to the standard output (i.e., your terminal). So, typing
echo $x
returns the following:
10
which is the value that I assigned to x.

From there, you can build up more complicated scripts and, by having the variable as a placeholder in various locations in your script, only have to change the value assigned to it in order to change the value in each of those locations. It makes your programming more flexible and easier to read and understand, and is critical to know if you wish to make sense of the example scripts generated by AFNI's "uber" scripts.
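For example, a small sketch (the subject and directory names are hypothetical) of one variable doing placeholder duty in several places:

#!/bin/tcsh
# change the value of subj once, and every line below follows suit
set subj = subj1
set datadir = ~/study/data

echo "Processing $subj"
ls $datadir/$subj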

With all of the tutorials so far, you have essentially all of the fundamentals you need to operate FSL. Really, you only need to understand how to open up a terminal and make sure your path is pointing to the FSL binaries, but after that, all you need to do is understand the interface, and you can get by with pointing and clicking. However, a more sophisticated understanding is needed for AFNI, which will be covered soon. Very soon. Patience, my pretties.