Reminiscences of SFN 2013

In the two weeks since I returned from San Diego, I still find myself struck with lucid recollections during my hypnopompic states, the memories of my travels twinned with hallucinations no less beautiful and no less real, were I to try to distinguish the two.

When I close my eyes I can still feel the pitch-perfect weather against my skin; I can hear the purl of fountainwater underneath the lush gardens and exquisite statuary of Balboa Park; I can see those silvery, sunwarmed beaches, speckled with families and surfers and oceangoers of all stripes and ages, the terns overhead lazily riding the wind thermals and the surf below gently lapping at the shores. The man is not to be envied who does not find his spirit refreshed and invigorated by the scintillating waves of the ocean, his eros not aroused by the sight of brown-skinned beauties emerging from the sea with beads of saltwater clinging to their skin and their delicate pink toes sinking into the argentate sand; no, he is not to be envied who does not find some spark of religious awe kindled by the sight of the sun bleeding slowly into the horizon and replaced by the pale disc of the moon, pasted in that inky firmament like some ghostly wafer, overlooking the dark abyss of water out there past men's knowing, where stars are drowning and whales ferry their vast souls through the black and seamless sea.

It is well to be surrounded by such sights and sounds, as, set against the backdrop of a conference devoted to science, the conscious mind is all the more appreciative of the particulars and the practicalities of what he believes, secretly or openly, to be the healthiest, the most rigorous, the most downright of human endeavors - that of scientific inquiry. Aristotle once claimed that the twin peaks of human pleasure consist of one, sexual intercourse, and two, thinking; and once one has felt the slow-burning satisfaction of scientific experimentation, of hypotheses proposed and tested, of results surprising one in the most unexpected of ways, what right man would believe otherwise? And after a day of lively discussion and heated debate, after filling one's cup with as many poster sessions and workshops and talks as one can handle and drinking it to the lees, then one encounters the night; and, the mind still reeling from the heady fumes of science, the senses attuned to all the nuances that weren't there before, walking past the garish lighting of the restaurants and pubs of the Gaslamp district, brazen hussies with their sultry strolls and minimalist vesture calling out to each other in the darkness, steroid-inflated bouncers guarding the doors of nightclubs exuding faint rumbles of bass punctuated by shrieks and laughter - and it is here that one becomes aware of certain beauties and lusts and terrors and menace that until now were only thinly hidden. The juxtaposition of such different modes of experience makes each of them in turn that much more powerful, more savory, more piquant.

During my days at the conference inside the convention center, therefore, I expected all of the hobgoblins and ecstasies of the nights before to melt away like snow in sunshine; but even here there is an element of the surreal. Within the bowels of the convention center, several football fields long, were rows upon rows of posters, almost beyond reckoning; here is one person surrounded by intrigued colleagues, gesturing expressively with his hands, his face beaming; there is another over there who could not be more different, all alone, head down and sullenly gazing at the floor, one hand holding the opposite forearm, a perfect picture of dejection. Wind your way through the exhibition section where companies are hawking their wares, photos showing how results look before and after the application of their device, mechanical contraptions demonstrating how the latest stereotaxic equipment drills into any location without any error, no fuss, no muss. You would expect the vendors to be much more animated, to act like some sort of scientific carnival barker; but unfortunately, they sit around, this one checking her phone, that one with a saturnine expression pasted on his face and a toothpick affixed to the side of his mouth.

Upstairs through the pavilion, and enter the ballroom, where chairs are stacked in rows as neatly and ceremoniously as gravestones in a cemetery. Far away at the front of the ballroom a speaker is at the podium, a minuscule dot at this distance, but whose person is projected on several large screens hanging from the ceiling, the amount of exposure beyond the most egomaniacal totalitarian's wet dream. After a couple of hours of talks and results and diagrams of models, out the door again into the hallway, past several smaller conference rooms packed with listeners. Along my way I reach down to pick up a discarded pamphlet off the floor from the American Association for the Advancement of Science; inside, it laments that more than half of the United States population still believes in psychic phenomena such as ESP and séances. "Fifty-seven percent of American adults believe in phenomena unsupported by any evidence whatsoever," it says. "It would be better to get that number closer to zero." It then lists several resources and initiatives to educate the population to think scientifically. The younger the age at which they can stage an intervention, it seems, the better.

While I can appreciate the sentiment, part of me thinks that this feeling is misguided. I have several close colleagues who would be horrified to wake up to a world denuded of superstitions and myths; so satisfying is the sense of superiority they feel in mocking those who still hold groundless beliefs, and the repercussions so minimal, that to take that away from them would be to take away their chief joy. Conversations would dry up, bereft of the usual potent feelings of solidarity and indignation, and the only ties that used to bind them to other like-minded individuals would dissolve. No, clearly a world free of people believing crazy shit would be a catastrophe. I think that a more reasonable goal would be to get the number of persons down to about five or ten percent; that way, the Association can still claim no small measure of success in their crusade, and there will still be plenty of eccentrics left over to insult, belittle, and marginalize.

After making the rounds at all of the talks, I go back to the poster session, where new posters have been pinned up on the boards, some of them still reeking of hot ink fresh off the printer. Wander around, and you begin to notice how some individuals tend to dominate the conversation surrounding a poster and point out all the experimental flaws with a minimum of decency. To counteract this, I usually leave in one or two glaringly obvious errors in any poster I present or any paper I submit; that way, such critics can easily comment on them and feel as though they have done something useful, usually leaving all of the other material alone.

That being said, however, still be aware that there is much research out there which is smoke and mirrors; having been in the game for quite some time, I can provide a short list of words and phrases that should immediately set off alarm bells in your head: neuroscience; significant; brain; rat; human; monkey; hypothesis; anterior cingulate; activation; voxels; cake; "game-changer"; default poop network. Beware the siren song of these words that charm the ear and bewitch the mind; they are beautiful but treacherous ondines who, given the chance, will wrap their briny arms around you, dragging you down to your death in the bottomless sea.

Master's Recital: Debussy, Bach, and Shostakovich

For those of you in Bloomington, I will be accompanying a cellist's master's recital here at the Jacobs School of Music. The program features a sonata by Debussy so unbelievably colorful and vivid, your synesthesia will go haywire; some of the clearest, most soul-refreshing chamber music by Bach; and everyone's favorite, Shostakovich's colossal Cello Concerto No. 1, which, in classical music terms, is known as a bodice-ripper.

To bring you this, we've put in countless hours of arduous, painstaking practice, endured invective, censorship, and misunderstanding from the most trenchant of critics, and gone through long nights of rehearsal punctuated by hours of bickering and quarreling, torn sheet music and thrown metronomes - but always followed by tearful reconciliation. Has it been worth it? Heck yes. But then again, I'll let you all be the judge of that.


When: 7:00pm, Friday, November 8th
Where: Recital Hall, ground floor of Merrill Hall (1201 E 10th St)
Who: Andy Jahn, Piano; Sonja Kraus, Cello


===== Program =====

Debussy: Sonata for Cello and Piano
I. Prologue: Sostenuto e molto risoluto
II. Sérénade
III. Finale: Animé: Léger et nerveux

Bach: Gamba Sonata No. 1 in G Major for Cello and Piano
I. Adagio non troppo
II. Allegro, ma non tanto
III. Andante quasi Lento
IV. Allegro moderato

Shostakovich: Cello Concerto No. 1 in E-Flat Major, Op. 107
I. Allegretto
II. Moderato
III. Cadenza
IV. Allegro con moto



Computational Models of Reinforcement Learning: An Introduction

The process of learning what is good for us, and what is bad for us, is incredibly complex; but the rudiments have been outlined, and we can gain some insight by starting with the basic building blocks of what is known as reinforcement learning. During reinforcement learning, we come to associate certain actions with specific outcomes - push one button and get a piece of cake; push another button, and receive a blast of voltage to your nipples. Through experience we begin to flesh out a mental picture of which decisions are likely to lead to which events; and, through merely observing someone else, we can learn what to do, or what not to do, even in the absence of reinforcers or punishers.

Before we get there, however, let's approach our subject from an even more basic form of learning - classical conditioning. In this case, no actions are needed; one merely observes a stimulus, such as a tone of a certain frequency, and learns that it predicts a specific outcome, such as the arrival of food. In this case, the tone is the conditioned stimulus, the food is the unconditioned stimulus, and salivating in response to the tone, after enough pairings between the tone and the food, becomes the conditioned response.

Let me give an example from my dating history. You may find this particular story I am about to relate to be way, way too much information; but if you've been reading for this long, I assume that we're on close enough terms that divulging such graphic details of my personal life will, far from driving us apart, bring us closer together by allowing us to bond over our shared humanity.

So. Onions. I am - or I used to be - indifferent to them. All I could say about them was that they had a smooth, eely texture when fried in oil; that they released a pungent aroma when sliced, diced, and crushed; and that their flavor was particularly sharp. Other than that, I had nothing else to say about them. Onions were onions.

But one day - never mind when - I began to see a girl who absolutely loved onions. Onions were inseparable from any dish she made; and so close was the association between her mood and the amount of onions she put into her cooking - casseroles, curries, tartlets, you name it - that, were I to witness her eating an entire onion in the raw, I would assume her to be in the seventh heaven.

My little onion, I used to call her, as a sign of my undying affection; and whenever we made love, we would first scatter onion shavings upon the bed, or the grass, or the movie theater seat, as a ritual to consecrate the beautiful, sacred act that was about to be made manifest. And when she would part the pillowy gates of her mouth and cleave her lips to mine, that pregnant moment filled with an anticipation so poignant you could hardly bear it, I would inhale deeply, feeling the overpowering, acrid smell of onions run over every single one of my nose hairs and drive my olfactory bulbs insensate with desire.

"Darling, do you love me?" she would ask, breathing heavily, the odorous waves of onion wafting across the thin slit of air between us and mooring within my nostrils.

"Yes, my little onion," I would reply. "Yes; yes; a thousand times yes!"

Such was our love, then; and you can hardly imagine my shock and desolation when, several years into this relationship of onions and unadulterated bliss, some knave, jealous of our happiness no doubt, took it upon himself to poison one of her onions, and kill her! My bereavement was only slightly assuaged by the fact that she had, only a few days before, taken out an extremely lucrative life insurance policy, naming me as the sole beneficiary.

After three painful, soul-searching days of mourning, however, I eventually gained the strength to renew my courtships with several other desirable young ladies. Yet, while throughout this period I continued to seduce innumerable women and live a Byronesque lifestyle of aristocratic excess, I couldn't help feeling some conspicuous lack, some defect in any affair, any tryst I willingly thrust myself into. At first I blamed the girls themselves: this one with long, lithe arms, but perhaps a shade too willowy; this one with a bold, intriguing personality, but perhaps a bit too pert for my taste; and yet another, sloe-eyed, with beautiful brown irises, but which, upon closer inspection, revealed the slightest of discrepancies in the size of one pupil compared to the other. Not having taken myself for a very discriminating fellow before my relationship with Mary, that light of my life, that fire of my loins - in other words, that onion chick I was talking about earlier, in case you couldn't tell - I found myself at a total loss.

While ruminating over my sudden change in amorous tastes, one day I found myself absentmindedly skimming the menu at a local bistro; and then - mirabile dictu! - I saw the item French onion soup inconspicuously nestled under the Appetizers section. Feeling my pulse quicken, I followed my instincts and ordered the soup, aware that I had hit upon the answer to my problem. Soon after, a disembodied hand placed the soup in front of me; and, slowly, meaningfully, I gazed down into the thick brown liquid. I braced myself, inhaled deeply, and somewhere in my brain a key unlocked the overflowing warehouse of my desire. Memories came flooding back; memories of Mary; memories of onion; and, most of all, memories of that pungent, acidic smell crushed out from the shavings underneath our bodies.

Having solved the puzzle, I now embark on a new chapter of my life; and nowhere do I go now without my peeler, and without my paring knife!


This story wonderfully illustrates some of the key components of classical conditioning. First, an unconditioned stimulus - Mary - elicited an unconditioned response from me - feelings of arousal. Because of Mary's repeated pairings with onions, the onions became a conditioned stimulus that signified an upcoming session of especially gratifying hanky-panky, and eventually by themselves elicited the conditioned response of arousal.

In psychological terms, this process is called the "critic" component of learning: a stimulus signifies some kind of upcoming reward, and over time a person learns this association, eventually shifting their usual feelings of pleasure and excitement from the reward itself to the stimulus signifying the reward. The critic evaluates how reliable the association is, and, depending on the individual, associations can be learned relatively slowly, or relatively quickly.
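The critic's core computation is compact enough to sketch in a few lines. Here is a minimal Rescorla-Wagner-style update in Python; the learning rate and number of trials are illustrative choices of mine, not values from any particular study:

```python
# Minimal "critic" sketch: learning the association between a conditioned
# stimulus and a reward via a Rescorla-Wagner-style update.
# (alpha and the trial count are illustrative, not canonical values.)
alpha = 0.1      # learning rate: how quickly the association is acquired
reward = 1.0     # magnitude of the unconditioned stimulus (e.g., food)
w = 0.0          # associative strength of the conditioned stimulus (e.g., tone)

for trial in range(50):
    prediction_error = reward - w    # surprise: what happened minus what was expected
    w += alpha * prediction_error    # shift credit toward the predictive stimulus

print(round(w, 3))   # 0.995 -- the tone now predicts the reward almost perfectly
```

Raise alpha and the association is acquired in just a few pairings; lower it and learning is slow and steady - the individual differences mentioned above.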

Let's focus on a landmark Science paper by Schultz, Dayan, & Montague (1997). This paper mathematically modeled different phases of reinforcement learning, and outlined several equations that can simulate how much an organism will respond to the conditioned stimulus and to the reward itself. The following Matlab code implements equations 3 and 4 from the paper, using 200 trials and 100 timesteps within each trial. The weights are updated on each trial, and the prediction error, represented by delta, grows increasingly larger and migrates toward the time of the presentation of the conditioned stimulus. Note in the following figure from the Schultz et al. paper that when an organism has been conditioned to expect a reward at a certain time, the omission of that reward leads to a large negative deflection in the prediction error signal.

Similar surface maps can be generated using the following code; I suggest adjusting the learning rate and discount factor parameters to see how they affect the prediction error signal, and also administering the reward at different times. Building up this intuition will be critical in understanding more advanced models of reinforcement learning, in which outcomes are contingent upon particular actions. And don't forget to keep eating those onions!



%Parameters
numTrials = 200;
numSteps = 100;
discFactor = 0.995; %Discounting factor (gamma)
learnRate = 0.3; %Learning rate

weights = zeros(numSteps, numTrials+1); %Weights for each timestep; extra column holds the final update
V = zeros(numSteps, numTrials); %Predicted sum of all future rewards
delta = zeros(numSteps, numTrials); %Prediction error
x = [zeros(1,19) ones(1,81)]; %Conditioned stimulus, presented from timestep 20 onward
r = zeros(numSteps, numTrials); %Reward delivered at timesteps 50-55...
r(50:55,1:190) = 1; %...but omitted on the last 10 trials to show the negative deflection

for idx = 1:numTrials
    for t = 1:numSteps-1
        V(t,idx) = x(t).*weights(t,idx);
        V(t+1,idx) = x(t+1).*weights(t+1,idx);

        %Temporal-difference prediction error (equation 4)
        delta(t+1,idx) = r(t+1,idx) + discFactor.*V(t+1,idx) - V(t,idx);

        %Carry the updated weight into the next trial (equation 3)
        weights(t,idx+1) = weights(t,idx) + learnRate.*x(t).*delta(t+1,idx);
    end
end

surf(delta)
xlabel('Trial'); ylabel('Timestep'); zlabel('Prediction error')
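The code above implements only the critic. As a small taste of the action-contingent case mentioned earlier - where outcomes depend on which button you push - here is a minimal two-armed bandit sketch in Python; the reward probabilities and parameters are invented purely for illustration:

```python
import random

# Two-armed bandit with an action-value update (illustrative sketch).
random.seed(1)                 # fixed seed so the run is reproducible
alpha, epsilon = 0.1, 0.1      # learning rate and exploration rate (made-up values)
reward_prob = [0.8, 0.2]       # button 0: usually cake; button 1: usually voltage
Q = [0.0, 0.0]                 # learned value of each button

for trial in range(2000):
    if random.random() < epsilon:                  # occasionally explore at random
        action = random.randrange(2)
    else:                                          # otherwise pick the better button
        action = max(range(2), key=lambda a: Q[a])
    r = 1.0 if random.random() < reward_prob[action] else 0.0
    Q[action] += alpha * (r - Q[action])           # same prediction-error update as the critic

print(Q)   # the cake button's learned value climbs well above the shock button's
```

The prediction-error term is identical in spirit to delta in the Matlab code; the only new ingredient is that the organism now chooses which outcome to sample.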

Indianapolis Monumental Marathon: Participant Tracking

Fellow brainbloggers, cognitive neuroscientists, and endurance sport enthusiasts,

Tomorrow I will be competing in the Indianapolis Monumental Marathon. Conditions at start time are low forties and partly cloudy, with a high chance of lactic acid, extreme muscle fatigue, and reduced sperm count. My goals: Break the two-hours-and-thirty-minutes barrier; place in the top ten; and bring the heat on all these pasty-faced, weekend-warrior poltroons and send y'all back to Ireland.

If you would like to get updates on my race, simply sign up here and enter my name (First name: Andrew; Last name: Jahn). In a way, it's just like watching a horror film, or witnessing a particularly nasty breakup - all of the thrill, none of the danger. The things I do for you guys.




Andy's Brain Blog Advice Column: Should You Go to Graduate School?

Around this time of year legions of students will submit their graduate school applications; and, if I close my eyes, I can almost hear the click of the mouse buttons, the plastic staccato of keyboards filling in address information and reflective essays, the soft, almost inaudible squelching of eyeball saccades in their sockets as they gather information about potential laboratories to toil in and advisors to meet. So vivid is the imagination of these sounds, so powerful are the memories of my experience, that part of me can't help but feel a rill of nostalgia flutter down my spine, and possibly, somewhere, deep down, even a twinge of envy. I remember, as a young man, the heady experience of the application process: The shivers of expectation; the slow-burning, months-long buildup of excitement; the thrill of embarking upon an adventure of continuing to do work that you loved, but with new people to meet, new places to discover, and new worlds to conquer. For those about to undertake this journey, I say - Good fortune to you.

However, even in these times of expectation and excitement, I cannot refrain from advising caution; for I once knew a man in a similar situation, who, at the height of his powers, tried his hand at graduate studies; but, rather than augmenting his already considerable gifts, the experience led to the most horrific of decays. So great a man was he, that to think of him is to think of an empire falling. This may smack of hyperbole; but the great promise of his early years, followed by the precipitous decline upon his entry to graduate school, does suggest the tragic dimensions of which I speak.

In his youth he was a hot-blooded hedonist, snatching at all pleasures as he could, carelessly, almost impulsively, like a shipwrecked sailor grasping at driftwood. During these years his life was one of wild debauch, filled with wagers and duels, wine-soaked bacchanalias and abducted women. Endowed with Herculean stamina and the unchained libido of a thousand-and-three Don Juans, every muscle, every sinew, every fiber of his being, was directed at vaulting his pleasure to its highest pinnacle and beyond. A dark aura of raw sexuality exuded from his being; the wellsprings were perennial which fueled his twisted desires. He wouldn't have known an excess if he saw one - his lusts were of such depravity they would have eclipsed even de Sade's darkest fantasies.

The nonstop orgies of his early years eventually petered out, however, and one morning he awoke to find himself in extreme want. Abandoned by his mistress, his fortune squandered, he eventually decided that applying to graduate school would be the best option; after all, styling himself a freethinker and an intellectual, the pursuits of business and politics seemed inadequate, even vulgar. A life of the mind, he concluded, was the only one for him, and thus did he eschew the red and the black in favor of the white labcoat of the researcher.

In any other trade this man would have been happy, motivated, and fulfilled, perfectly at home among the elegant rakes of any other era; but ambition denied withered him; his incessant studies dried up the springs of his energy; and melancholy marked him for her own. Instead of a life of health, vigor, and adventure, now he whittled away his days in a dreary, windowless room performing the most perfunctory and mind-numbing of tasks. Instead of using his masculine touch to awaken hundreds of young maidens into womanhood, now he could only practice a crippled eros that repeatedly failed to take wing. Poverty, alcoholism, and overwork became the staples of his life; his last years were clouded by religious mania; and, misunderstood and forgotten, he spent his final days in utter squalor, dying much as he foresaw - like a poisoned rat in a hole.

Limerick Intermezzo

There was a young man from Stamboul,
Who soliloquized thus to his tool:
"You took all my wealth
And you ruined my health,
And now you won't pee, you old fool."

My friend's story, though extreme, represents the experience of no small number of graduate students. It is not uncommon for the typical graduate to spend the prime of life in an environment he detests, doing work he abominates, with the energy that should go into the flower instead remaining in the leaves and stem. Frustration, disappointment, and monotony become his bywords. The great expectations he begins with, the intoxicating freedom of his new schedule, are all too quickly transformed into feelings of ennui and despair; the hot blood that once coursed through his veins gradually congeals into cold slime. He criticizes his program, his field, his advisors, all the while oblivious to the fact that he is a willing coauthor of his own misery. He manages to project a certain nonchalance, he gets along agreeably enough with his friends, but his most private moments - if not spent in a haze of wine or the arms of some debauched wench - are torture.

And yet - I have known a few individuals who persevere even under the most sordid of circumstances, who, even in the face of the most formidable of challenges, manage to live bravely, even joyfully. They are impervious to the most depressing of environments and the most hateful of colleagues. For these resilient few, their passion lifts them above the waves that would drown the merely indifferent; the iron in their souls allows them to withstand blows that would crush the weaker-willed. (I do not count myself among their number, but then again, I have never had the desire; I have been more than able to make up for any defects of personality or intelligence through flattery, intimidation, bribery, and blackmail.)

Let him who is considering graduate school, therefore, take stock of his weaknesses, and of his strengths; let him calculate the risks; let him understand that persisting in anything that leaves him feeling enervated and worthless is not the sign of some tragic hero, but the mark of a fool - it is the first step on the path to spiritual suicide.

If, by chance, he does have many years of happiness, let him rejoice in them all; yet let him remember the days of darkness, for they shall be many.

Mathematically Describing Neuronal Connections in the Brain

Students in my classes have started to catch on to the fact that I tend to dress up for lectures I am particularly excited about. For a topic I'm indifferent toward or lukewarm about, I wear my standard dress shirt, slacks, and dress shoes. To prepare for a subject I like a little bit more, that's when I throw on a sports jacket, and possibly a nice belt. But get me all hot and bothered, and that's when I break out...the ties.

Not surprisingly, then, I wear a three-piece suit when talking about neurons. These sexy little suckers act as the basic cells of communication throughout your brain and throughout your nervous system, relaying electrical transmissions all the way down your axons to the synaptic gap, those terminal buttons precariously poised on the precipice of a protoplasmic kiss, until finally, excruciatingly, those tiny vesicles of chemical bliss burst from their vile durance, recrudescent, crushing out the last throb of the last ecstasy man or monster has ever known.

...Let me catch my breath...Where was I? Oh yes - neurons. Besides their role in transmitting electrical and chemical signals throughout the brain, they also exist in astonishingly high numbers, with somewhere on the order of tens of billions of neurons packed into a single brain. On top of this, each one can share hundreds or thousands of connections with other neurons, leading to a staggering number of potential synaptic connections. The mind boggles.

To provide the full mathematical treatment of understanding neurons, we are joined again by Keith "The Rookie" Bartley, whose interest in synaptic connections was recently piqued by an introductory cognitive science course. Along the way Keith touches on mathematics, the Turing Test, Friends, oatmeal, the uncanny valley, and where genitals are represented in the brain, providing a theoretical basis for why foot massages can lead to greater chances for successful coitus.

==============

While TAing an Introduction to Cognitive Science course, I heard one of our instructors briefly discuss Ned Block's "Aunt Bertha Machine" and how passing the Turing test by brute-force lookup alone would require more bits of memory than there are atoms in the universe. Below is a section of one of his slides:

Volume: (15*10^9 light-years)^3 = (15*10^9*10^16 meters)^3
Density: 1 bit per (10^-35 meters)^3
Total storage capacity: 10^184 bits < 10^200 bits < 2^670 bits
Critical Turing Test length: 670 bits < 670 characters < 140 words < 1 minute
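For those who want to check the slide's arithmetic, it reproduces in a few lines of Python (note that the slide rounds a light-year up to 10^16 meters, which I follow here):

```python
import math

# Reproducing the slide's storage estimate for the observable universe.
meters_per_lightyear = 1e16          # the slide's rounded figure (actual: ~9.46e15 m)
radius = 15e9 * meters_per_lightyear            # 15 billion light-years, in meters
bits = (radius / 1e-35) ** 3                    # 1 bit per (10^-35 m)^3 cube of space

print(round(math.log10(bits)))       # 184 -- i.e., roughly 10^184 bits of capacity
print(670 * math.log10(2) > 200)     # True: 2^670 comfortably exceeds 10^200 bits
```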
 

Difficult as it is for some people to conceptualize and subsequently deal with such induced feelings of simultaneous intelligence, stupidity, and insignificance pervading recapitulations of their own life's meaning, I would like to warn those people that the following information about the capacity of the human brain is likely to do much worse. READ RESPONSIBLY.

For the human brain, the possible number of combinations and permutations of neural connections has been purported to vastly exceed the number of elementary particles in the Universe. Consider for a moment that the brain has about 85,000,000,000 neurons - we'll round that up to the commonly estimated 100,000,000,000 for hypothetical simplicity - each with a capacity for up to 10,000 synaptic connections.

1! = 1
10! = 3,628,800
100! = 9.33 x 10^157
1000! = 4.02 x 10^2,567
***This is where Google's calculator starts to report infinity***
For bigger factorials, we'll have to use Stirling's approximation.

So using Stirling's approximation....

10,000,000,000! ~ 2.33 x 10^95,657,055,186
100,000,000,000! ~ 3.75 x 10^1,056,570,551,815


But wait, each of the 100 billion neurons can have 10,000 synaptic connections, so…

10^11 * 10^4 = 10^15 total connections

So rather...

1,000,000,000,000,000! ~ REALLY BIG NUMBER
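For readers who want to reproduce these numbers themselves, Stirling's approximation fits in a couple of lines of Python:

```python
import math

def log10_factorial(n):
    """Stirling's approximation: ln(n!) ~= n*ln(n) - n + 0.5*ln(2*pi*n)."""
    return (n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)) / math.log(10)

# 10,000,000,000! ~ 2.33 x 10^95,657,055,186, matching the figure quoted above
print(int(log10_factorial(10_000_000_000)))              # 95657055186
print(round(10 ** (log10_factorial(10_000_000_000) % 1), 2))   # ~2.33 (the mantissa)
```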


But Keith! Come on - doesn't this look like a gross misrepresentation of the limits of human cognition?


Our theories about the brain are much more modular in scope, but at the same time, distributed enough to adapt. These two points together are a much better descriptor of the networked brain. It's reasonable to see, in our post-Scopes trial times, that human brains developed around how they are used in day-to-day life, and, perhaps more importantly, around the phenomenal ability to constantly adapt, even in the wake of extreme trauma. The notion of a single "grandmother neuron" misrepresents how resilient the brain's representations are. Neurons for your grandmother have to be widely distributed, so that when some of your brain cells die off, as they often do, you don't forget the old woman that squeezes your cheeks when she sees you, lest you punch her in the face for assault. Jennifer Aniston would no longer instill memories of how much time you wasted watching reruns of Friends on TBS every afternoon for 4 years, and Halle Berry would no longer remind you of how Hollywood reduced one of the greatest antiheroines in the history of DC Comics to a mannequin in spandex with a speech impediment.

At the same time, however, that three pounds of oatmeal between your ears still retains a relative degree of modularity in regional function, which is one reason why your brain is compartmentalized in various folds (gyri) and crevices (sulci). When you have an itch from what is in fact a really small bug bite, the urge to scratch feels spread across a wide patch of skin, because the signals in the brain are themselves both distributed and modular. A diagram often used to demonstrate this modularity is the cortical homunculus.

Straight out of the backwoods of the uncanny valley, this diagram demonstrates the relative intensity and location that each section along your somatosensory cortex corresponds with on your body. As for the proximity of feet relative to your genitals, well, that might just explain a lot about some guys, now wouldn't it?

Introduction to Computational Modeling: Hodgkin-Huxley Model

Computational modeling can be a tough nut to crack. I'm not just talking pistachio-shell dense; I'm talking walnut-shell dense. I'm talking a nut so tough that not even a nutcracker who's cracked nearly every damn nut on the planet could crack this mother, even if this nutcracker is so badass that he wears a leather jacket, and that leather jacket owns a leather jacket, and that leather jacket smokes meth.

That being said, the best approach to eat this whale is with small bites. That way, you can digest the blubber over a period of several weeks before you reach the waxy, delicious ambergris and eventually the meaty whale guts of computational modeling and feel your consciousness expand a thousandfold. And the best way to begin is with a single neuron.


The Hodgkin-Huxley Model, and the Hunt for the Giant Squid

Way back in the 1950s - all the way back in the twentieth century - a team of notorious outlaws named Hodgkin and Huxley became obsessed and tormented by fevered dreams and hallucinations of the Giant Squid Neuron. (The neurons of a giant squid are, compared to every other creature on the planet, giant. That is why it is called the giant squid. Pay attention.)

After a series of appeals to Holy Roman Emperor Charles V and Pope Stephen II, Hodgkin and Huxley finally secured a commission to hunt the elusive giant squid and sailed to the middle of the Pacific Ocean in a skiff made out of the bones and fingernails and flayed skins of their enemies. Finally spotting the vast abhorrence of the giant squid, Hodgkin and Huxley gave chase over the fiercest seas and most violent winds of the Pacific, and after a tense, exhausting three-day hunt, finally cornered the giant squid in the darkest nether regions of the Mariana Trench. The giant squid sued for mercy, citing precedents and torts of bygone eras, quoting Blackstone and Coke, Anaximander and Thales. But Huxley, his eyes shining with the cold light of purest hate, smashed his fist through the forehead of the dread beast which erupted in a bloody Vesuvius of brains and bits of bone both sphenoidal and ethmoidal intermixed and Hodgkin screamed and vomited simultaneously. And there stood Huxley triumphant, withdrawing his hand oversized with coagulate gore and clutching the prized Giant Squid Neuron. Hodgkin looked at him.

"Huxley, m'boy, that was cold-blooded!" he ejaculated.
"Yea, oy'm one mean cat, ain't I, guv?" said Huxley.
"'Dis here Pope Stephen II wanted this bloke alive, you twit!"
"Oy, not m'fault, guv," said Huxley, his grim smile twisting into a wicked sneer. "Things got outta hand."


Scene II

Drunk with victory, Hodgkin and Huxley took the Giant Squid Neuron back to their magical laboratory in the Ice Cream Forest and started sticking a bunch of wires and electrodes in it. To their surprise, there was a difference in voltage between the inside of the neuron and the bath surrounding it, suggesting that there were different quantities of electrical charge on both sides of the cell membrane. In fact, at a resting state the neuron appeared to stabilize around -70mV, suggesting that there was more of a negative electrical charge inside the membrane than outside.

Keep in mind that when our friends Hodgkin and Huxley began their quest, nobody knew exactly how the membrane of a neuron worked. Scientists had observed action potentials and understood that electrical forces were involved somehow, but until the experiments of the 1940s and '50s the exact mechanisms were still unknown. However, through a series of carefully controlled studies, the experimenters were able to measure how both current and voltage interacted in their model neuron. It turned out that three ions - sodium (Na+), potassium (K+), and chloride (Cl-) - appeared to play the most important roles in depolarizing the cell membrane and generating an action potential. The different concentrations of these ions, along with the negative charge inside the membrane, led to different pressures exerted on each of them.

For example, K+ was found to be much more concentrated inside the neuron than outside, so the concentration gradient exerted pressure for the K+ ions to exit the cell; at the same time, however, the net negative charge inside the membrane exerted a countering electrostatic pressure, as positively charged potassium ions were drawn back toward the inside of the cell. Similar characteristics were observed for the sodium and chloride ions as well, as shown in the following figure:

Ned the Neuron, filled with Neuron Goo. Note that the gradient and electrostatic pressures, expressed in millivolts (mV), have arbitrary signs; the point is to show that for an ion like chloride, the pressures cancel out, while for an ion like potassium, there is slightly more pressure to exit the cell than to enter it. Also, if you noticed that these values aren't 100% accurate, then congratu-frickin-lations, you're smarter than I am, but there is no way in HECK that I am redoing this in Microsoft Paint.
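To put rough numbers on these pressures, the equilibrium (Nernst) potential for each ion can be computed from its concentrations on either side of the membrane. Here is a quick sketch in Python (rather than Matlab, purely for illustration); the squid-axon concentrations below are textbook approximations, not Hodgkin and Huxley's exact measurements:

```python
import math

def nernst(conc_out, conc_in, z=1, temp_c=6.3):
    """Equilibrium (Nernst) potential in mV for an ion with valence z,
    given outside and inside concentrations (any consistent units)."""
    R, F = 8.314, 96485.0          # gas constant, Faraday constant
    T = temp_c + 273.15            # the squid experiments ran cold, ~6.3 C
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Approximate squid-axon concentrations in mM (textbook values)
E_K  = nernst(20, 400)            # K+ concentrated inside -> pressure to exit
E_Na = nernst(440, 50)            # Na+ concentrated outside -> pressure to enter
E_Cl = nernst(560, 52, z=-1)      # Cl- pressures roughly cancel near rest
```

E_K lands around -70 mV and E_Na well above zero, which is the quantitative version of why potassium "wants" out and sodium "wants" in.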


In addition to these passive forces, Hodgkin and Huxley also observed an active, energy-consuming mechanism that helps maintain the resting potential - a pump which exchanges sodium for potassium ions, kicking out roughly three sodium ions for every two potassium ions it brings in. Even with this pump, though, there is still a whopping 120mV of driving force for sodium ions to enter. What prevents them from rushing in there and trashing the place?

Hodgkin and Huxley hypothesized that certain channels in the neuron membrane were selectively permeable, meaning that only specific ions could pass through them. Furthermore, channels could be either open or closed; for example, there may be sodium channels dotting the membrane, but at a resting potential they are usually closed. In addition, Hodgkin and Huxley thought that within these channels were gates that regulated whether the channel was open or closed, and that these gates could be in either permissive or non-permissive states. The probability of a gate being in either state was dependent on the voltage difference between the inside and the outside of the membrane.

Although this all may seem conceptually straightforward, keep in mind that Hodgkin and Huxley were among the first to combine all of these properties into one unified model - something which could account for the conductances, voltage, and current, as well as how all of this affected the gates within each ion channel - and they were basically doing it from scratch. Also keep in mind that these crazy mofos didn't have stuff like Matlab or R to help them out; they did this the old-fashioned way, by changing one thing at a time and measuring that shit by hand. Insane. (Also think about how, in the good old days, people like the Carthaginians and Romans and Greeks would march across entire continents for months, years sometimes, just to slaughter each other. Continents! These days, my idea of a taxing cardiovascular workout is operating a stapler.) To quantify the relationship between voltage and conductance for potassium, for example, they simply applied a range of different currents, observed how the conductance changed over time, and attempted to fit a mathematical function to it - which happens to fit quite nicely when you include n-gates and a fourth-power polynomial.
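As a concrete taste of what they fit, here are their rate equations for the potassium n-gate (Equations 12 and 13 of the 1952 paper, in their shifted voltage convention where rest = 0 mV), sketched in Python rather than Matlab for illustration. The steady-state value n = alpha/(alpha+beta) then gives the potassium conductance as gbar_K times n to the fourth:

```python
import math

def alpha_n(V):
    """H&H Equation 12: opening rate of the n-gate (V in mV, rest = 0)."""
    return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)

def beta_n(V):
    """H&H Equation 13: closing rate of the n-gate."""
    return 0.125 * math.exp(-V / 80)

def n_inf(V):
    """Steady-state fraction of permissive n-gates at voltage V."""
    a, b = alpha_n(V), beta_n(V)
    return a / (a + b)

gbar_K = 36.0                       # max K+ conductance, from their Table 3
g_rest  = gbar_K * n_inf(0) ** 4    # K+ conductance at rest
g_depol = gbar_K * n_inf(50) ** 4   # K+ conductance when depolarized by 50 mV
```

Depolarizing the membrane pushes the steady-state n toward 1, and the fourth power makes the conductance rise sharply - that fourth power is exactly what made their fitted curves match the data.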



After a series of painstaking experiments and measurements, Hodgkin and Huxley calculated values for the conductances and equilibrium voltages for different ions. Quite a feat, when you couple that with the fact that they hunted down and killed their very own Giant Squid and then ripped a neuron out of its brain. Incredible. That is the very definition of alpha male behavior, and it's something I want all of my readers to emulate.
Table 3 from Hodgkin & Huxley (1952) showing empirical values for voltages and conductances, as well as the capacitance of the membrane.

The same procedure was used for the n, m, and h gates, which were also found to be functions of the membrane voltage. Once these were calculated, the conductances and membrane voltage could be found for any resting potential and any amount of injected current.

H & H's formulas for the n, m, and h gates as a function of voltage.

So where does that leave us? Since Hodgkin and Huxley have already done most of the heavy lifting for us, all we need to do is take the constants and equations they derived and put them into a script that we can then run through Matlab. At some point, just to get some additional exercise, we may also operate a stapler.

But stay focused here. Most of the formulas and constants can simply be transcribed from their papers into a Matlab script, but we also need to think about the final output that we want, and how we are going to plot it. Note that the original Hodgkin and Huxley paper ties together the capacitance and conductances of the membrane with a differential equation for the voltage:

C*dV/dt = I - gbar_K*n^4*(V-E_K) - gbar_Na*m^3*h*(V-E_Na) - g_L*(V-E_L)
We can use a method like the Euler first-order approximation to plot the voltages, in which each value is computed from the previous one plus its derivative multiplied by a time step; in the sample code below the time step is extremely small (0.01ms), giving a better approximation to the true shape of the voltage timecourse. (See the "calculate the derivatives" section of the code below.)
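If the Euler update is unfamiliar, here is the whole idea in a few lines of Python (a hypothetical toy example, not part of the neuron model): each new value is the old value plus the derivative times a small step.

```python
import math

def euler(f, x0, dt, steps):
    """First-order Euler integration: x[i+1] = x[i] + dt * f(x[i])."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

# dx/dt = -x has the exact solution x(t) = exp(-t); integrate out to t = 1
xs = euler(lambda x: -x, 1.0, 0.001, 1000)
error = abs(xs[-1] - math.exp(-1))   # shrinks as dt shrinks
```

The voltage update in the Matlab script (V(i+1) = V(i) + deltaT*I_ion/C) is this same rule, with the membrane equation playing the role of f.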

The following code runs a simulation of the Hodgkin-Huxley model over 100 milliseconds with an injected current of 50 (in the paper's units, microamperes per square centimeter), although you are encouraged to try your own values and see what happens. The sample plots below show the results of a typical simulation: the voltage depolarizes after receiving a large enough current and briefly becomes positive before returning to its resting potential. The conductance traces show that the sodium channels open and close quickly, while the potassium channels take relatively longer both to open and to close. The point of the script is to show how equations from papers can be transcribed into code and then run to simulate what neural activity should look like under certain conditions; the same approach can then be extended to more complex topics such as memory, cognition, and learning.

The actual neuron, of course, is nowhere to be seen; and thank God for that, else we would run out of Giant Squids before you could say Jack Robinson.


Resources
Book of GENESIS, Chapter 4
Original Hodgkin & Huxley paper


%===simulation time===
simulationTime = 100; %in milliseconds
deltaT=.01;
t=0:deltaT:simulationTime;


%===specify the external current I===
changeTimes = [0]; %in milliseconds
currentLevels = [50]; %Change this to see effect of different currents on voltage (Suggested values: 3, 20, 50, 1000)

%Set externally applied current across time
%Here, first 500 timesteps are at current of 50, next 1500 timesteps at
%current of zero (resets resting potential of neuron), and the rest of
%timesteps are at constant current
I(1:500) = currentLevels; I(501:2000) = 0; I(2001:numel(t)) = currentLevels;
%Comment out the above line and uncomment the line below for constant current, and observe effects on voltage timecourse
%I(1:numel(t)) = currentLevels;


%===constant parameters===%
%All of these can be found in Table 3
gbar_K=36; gbar_Na=120; g_L=.3;
E_K = -12; E_Na=115; E_L=10.6;
C=1;


%===set the initial states===%
V=0; %Baseline voltage
alpha_n = .01 * ( (10-V) / (exp((10-V)/10)-1) ); %Equation 12
beta_n = .125*exp(-V/80); %Equation 13
alpha_m = .1*( (25-V) / (exp((25-V)/10)-1) ); %Equation 20
beta_m = 4*exp(-V/18); %Equation 21
alpha_h = .07*exp(-V/20); %Equation 23
beta_h = 1/(exp((30-V)/10)+1); %Equation 24

n(1) = alpha_n/(alpha_n+beta_n); %Equation 9
m(1) = alpha_m/(alpha_m+beta_m); %Equation 18
h(1) = alpha_h/(alpha_h+beta_h); %Equation 18


for i=1:numel(t)-1 %Compute coefficients, currents, and derivatives at each time step
   
    %---calculate the coefficients---%
    %Equations here are same as above, just calculating at each time step
    alpha_n(i) = .01 * ( (10-V(i)) / (exp((10-V(i))/10)-1) );
    beta_n(i) = .125*exp(-V(i)/80);
    alpha_m(i) = .1*( (25-V(i)) / (exp((25-V(i))/10)-1) );
    beta_m(i) = 4*exp(-V(i)/18);
    alpha_h(i) = .07*exp(-V(i)/20);
    beta_h(i) = 1/(exp((30-V(i))/10)+1);
   
   
    %---calculate the currents---%
    I_Na = (m(i)^3) * gbar_Na * h(i) * (V(i)-E_Na); %Equations 3 and 14
    I_K = (n(i)^4) * gbar_K * (V(i)-E_K); %Equations 4 and 6
    I_L = g_L *(V(i)-E_L); %Equation 5
    I_ion = I(i) - I_K - I_Na - I_L;
   
   
    %---calculate the derivatives using Euler first order approximation---%
    V(i+1) = V(i) + deltaT*I_ion/C;
    n(i+1) = n(i) + deltaT*(alpha_n(i) *(1-n(i)) - beta_n(i) * n(i)); %Equation 7
    m(i+1) = m(i) + deltaT*(alpha_m(i) *(1-m(i)) - beta_m(i) * m(i)); %Equation 15
    h(i+1) = h(i) + deltaT*(alpha_h(i) *(1-h(i)) - beta_h(i) * h(i)); %Equation 16

end


V = V-70; %Set resting potential to -70mV

%===plot Voltage===%
plot(t,V,'LineWidth',3)
hold on
legend({'voltage'})
ylabel('Voltage (mV)')
xlabel('time (ms)')
title('Voltage over Time in Simulated Neuron')


%===plot Conductance===%
figure
p1 = plot(t,gbar_K*n.^4,'LineWidth',2);
hold on
p2 = plot(t,gbar_Na*(m.^3).*h,'r','LineWidth',2);
legend([p1, p2], 'Conductance for Potassium', 'Conductance for Sodium')
ylabel('Conductance')
xlabel('time (ms)')
title('Conductance for Potassium and Sodium Ions in Simulated Neuron')








What's in an SPM.mat File?

A while back I attempted to write up a document that summarized everything someone would need to know about using SPM. It was called, I believe, the SPM User's Ultimate Guide: For Alpha Males, By Alpha Males. The title was perhaps a little ambitious, since I stopped updating it after only a few dozen pages. In particular I remember a section where I attempted to tease apart everything contained within the SPM.mat files output after first- and second-level analyses. To me this was the most important section, since complex model setups could be executed fairly easily by someone with a good understanding of Matlab code, but I never completed it.

Fortunately, however, there is someone out there who already vivisected the SPM.mat file and publicly displayed its gruesome remains in the online piazza. Researcher Nikki Sullivan has written an excellent short summary of what each field means, broken down into neat, easily digestible categories. You can find it on her website here, and I have also copied and pasted the information below. It makes an excellent shorthand reference, especially if you've forgotten, for example, where contrast weights are stored in the structure, and don't want to go through the tedium of typing SPM, then SPM.xY, then SPM.xY.VY, and so on.

But if you've forgotten how to rock that body? Girl, ain't no remedy for that.
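If you prefer to poke at the structure programmatically, here is a hedged sketch in Python using scipy's loadmat (a toy SPM-like struct is built on the fly so the snippet is self-contained; the field names follow the listing below, but the values are made up). With a real SPM.mat you would just load the file and navigate the same way:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Build a toy SPM-like struct purely for demonstration; the field names
# mirror the listing below, but these values are invented.
savemat('toy_SPM.mat', {'SPM': {
    'xY':   {'RT': 2.0},                      # TR in seconds
    'xCon': {'name': 'faces > houses',
             'c': np.array([1.0, -1.0])}}})   # contrast weights

# struct_as_record=False + squeeze_me=True gives dot-style field access
spm = loadmat('toy_SPM.mat', struct_as_record=False, squeeze_me=True)['SPM']

tr      = spm.xY.RT       # repetition time
weights = spm.xCon.c      # contrast weights for the first contrast
```

In Matlab itself, of course, load('SPM.mat') and tab-completion on SPM. will get you the same place.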


========================

details on experiment:

 

SPM.xY.RT - TR length (RT = "repetition time")
SPM.xY.P - matrix of file names
SPM.xY.VY - # of runs x 1 struct array of mapped image volumes (.img file info)
SPM.modality - the data you're using (PET, FMRI, EEG)
SPM.stats.[modality].UFp - critical F-threshold for selecting voxels over which the non-sphericity is estimated (if required) [default: 0.001]
SPM.stats.maxres - maximum number of residual images for smoothness estimation
SPM.stats.maxmem - maximum amount of data processed at a time (in bytes)
SPM.SPMid - version of SPM used
SPM.swd - directory for SPM.mat and img files. default is pwd

basis function:

 

SPM.xBF.name - name of basis function
SPM.xBF.length - length in seconds of basis
SPM.xBF.order - order of basis set
SPM.xBF.T - number of subdivisions of TR
SPM.xBF.T0 - first time bin (see slice timing)
SPM.xBF.UNITS - options: 'scans'|'secs' for onsets
SPM.xBF.Volterra - order of convolution
SPM.xBF.dt - length of time bin in seconds
SPM.xBF.bf - basis set matrix

Session Structure:

 

user-specified covariates/regressors (e.g. motion)
SPM.Sess([session]).C.C - [n x c double] regressor (c = number of covariates, n = number of scans)
SPM.Sess([session]).C.name - names of covariates
conditions & modulators specified - i.e. input structure array
SPM.Sess([session]).U(condition).dt - time bin length {seconds}
SPM.Sess([session]).U(condition).name - names of conditions
SPM.Sess([session]).U(condition).ons - onsets for condition's trials
SPM.Sess([session]).U(condition).dur - durations for condition's trials
SPM.Sess([session]).U(condition).u - (t x j) inputs or stimulus function matrix
SPM.Sess([session]).U(condition).pst - (1 x k) peri-stimulus times (seconds)
parameters/modulators specified
SPM.Sess([session]).U(condition).P - parameter structure/matrix
SPM.Sess([session]).U(condition).P.name - names of modulators/parameters
SPM.Sess([session]).U(condition).P.h - polynomial order of modulating parameter (order of polynomial expansion, where 0 = none)
SPM.Sess([session]).U(condition).P.P - vector of modulating values
SPM.Sess([session]).U(condition).P.P.i - sub-indices of U(i).u for plotting
scan indices for sessions
SPM.Sess([session]).row
effect indices for sessions
SPM.Sess([session]).col
F contrast information for input-specific effects
SPM.Sess([session]).Fc
SPM.Sess([session]).Fc.i - F contrast columns for input-specific effects
SPM.Sess([session]).Fc.name - F contrast names for input-specific effects
SPM.nscan([session]) - number of scans per session (or if e.g. a t-test, total number of con*.img files)

global variate/normalization details

 

SPM.xGX.iGXcalc - either "none" or "scaling". For fMRI this is usually "none" (no global normalization). If global normalization is "scaling", see spm_fmri_spm_ui for parameters that will then appear under SPM.xGX.

design matrix information:

 

SPM.xX.X - Design matrix (raw, not temporally smoothed)
SPM.xX.name - cellstr of parameter names corresponding to columns of design matrix
SPM.xX.I - nScan x 4 matrix of factor level indicators. First column is the replication number; other columns are the levels of each experimental factor.
SPM.xX.iH - vector of H partition (indicator variables) indices
SPM.xX.iC - vector of C partition (covariates) indices
SPM.xX.iB - vector of B partition (block effects) indices
SPM.xX.iG - vector of G partition (nuisance variables) indices
SPM.xX.K - cell. low frequency confound: high-pass cutoff (secs)
SPM.xX.K.HParam - low frequency cutoff value
SPM.xX.K.X0 - cosines (high-pass filter)
SPM.xX.W - Optional whitening/weighting matrix used to give weighted least squares estimates (WLS).
  • if not specified, spm_spm will set this to whiten the data and render the OLS estimates maximum likelihood, i.e. W*W' = inv(xVi.V).
SPM.xX.xKXs - space structure for K*W*X, the 'filtered and whitened' design matrix
SPM.xX.xKXs.X - Mtx - matrix of trials and betas (columns) in each trial
SPM.xX.xKXs.tol - tolerance
SPM.xX.xKXs.ds - vectors of singular values
SPM.xX.xKXs.u - u as in X = u*diag(ds)*v'
SPM.xX.xKXs.v - v as in X = u*diag(ds)*v'
SPM.xX.xKXs.rk - rank
SPM.xX.xKXs.oP - orthogonal projector on X
SPM.xX.xKXs.oPp - orthogonal projector on X'
SPM.xX.xKXs.ups - space in which this one is embedded
SPM.xX.xKXs.sus - subspace
SPM.xX.pKX - pseudoinverse of K*W*X, computed by spm_sp
SPM.xX.Bcov - xX.pKX*xX.V*xX.pKX' - variance-covariance matrix of parameter estimates (when multiplied by the voxel-specific hyperparameter ResMS, where ResMS = ResSS/xX.trRV)
SPM.xX.trRV - trace of R*V
SPM.xX.trRVRV - trace of RVRV
SPM.xX.erdf - effective residual degrees of freedom (trRV^2/trRVRV)
SPM.xX.nKX - design matrix (xX.xKXs.X) scaled for display (see spm_DesMtx('sca',... for details)
SPM.xX.sF - cellstr of factor names (columns in SPM.xX.I, i think)
SPM.xX.D - struct, design definition
SPM.xX.xVi - correlation constraints (see non-sphericity below)
SPM.xC - struct array of covariate info

header info

 

SPM.P - a matrix of filenames
SPM.V - a vector of structures containing image volume information.
SPM.V.fname - the filename of the image.
SPM.V.dim - the x, y and z dimensions of the volume
SPM.V.dt - A 1x2 array. First element is datatype (see spm_type). The second is 1 or 0 depending on the endian-ness.
SPM.V.mat - a 4x4 affine transformation matrix mapping from voxel coordinates to real world coordinates.
SPM.V.pinfo - plane info for each plane of the volume.
SPM.V.pinfo(1,:) - scale for each plane
SPM.V.pinfo(2,:) - offset for each plane. The true voxel intensities of the jth image are given by: val*V.pinfo(1,j) + V.pinfo(2,j)
SPM.V.pinfo(3,:) - offset into image (in bytes). If the size of pinfo is 3x1, then the volume is assumed to be contiguous and each plane has the same scalefactor and offset.

structure describing intrinsic temporal non-sphericity

 

SPM.xVi.I - typically the same as SPM.xX.I
SPM.xVi.h - hyperparameters
SPM.xVi.V - xVi.h(1)*xVi.Vi{1} + ...
SPM.xVi.Cy - spatially whitened (used by ReML to estimate h)
SPM.xVi.CY - <(Y - )*(Y - )'>(used by spm_spm_Bayes)
SPM.xVi.Vi - array of non-sphericity components

  • defaults to {speye(size(xX.X,1))} - i.i.d.
  • specifying a cell array of constraints (Qi)
  • these constraints invoke spm_reml to estimate hyperparameters, assuming V is constant over voxels, which provides a highly precise estimate of xX.V
SPM.xVi.form - form of non-sphericity (either 'none' or 'AR(1)')
SPM.xX.V - Optional non-sphericity matrix. Cov(e) = sigma^2*V.
  • If not specified, spm_spm will compute this using a 1st pass to identify significant voxels over which to estimate V. A 2nd pass is then used to re-estimate the parameters with WLS and save the ML estimates (unless xX.W is already specified)
 

filtering information

 

SPM.K - filter matrix or filtered structure:
  • SPM.K(s) - struct array containing partition-specific specifications
  • SPM.K(s).RT - observation interval in seconds
  • SPM.K(s).row - row of Y constituting block/partitions
  • SPM.K(s).HParam - cut-off period in seconds
  • SPM.K(s).X0 - low frequencies to be removed (DCT)
  • SPM.Y - filtered data matrix
 

masking information

 

SPM.xM - Structure containing masking information, or a simple column vector of thresholds corresponding to the images in VY.
SPM.xM.T - [n x 1 double] - Masking index
SPM.xM.TH - nVar x nScan matrix of analysis thresholds, one per image
SPM.xM.I - Implicit masking (0 --> none; 1 --> implicit zero/NaN mask)
SPM.xM.VM - struct array of mapped explicit mask image volumes
SPM.xM.xs - [1x1 struct] cellstr description

design information (self-explanatory names, for once) 

 

SPM.xsDes.Basis_functions - type of basis function
SPM.xsDes.Number_of_sessions
SPM.xsDes.Trials_per_session
SPM.xsDes.Interscan_interval
SPM.xsDes.High_pass_Filter
SPM.xsDes.Global_calculation
SPM.xsDes.Grand_mean_scaling
SPM.xsDes.Global_normalisation

details on scannerdata (e.g. smoothness) 

 

SPM.xVol - structure containing details of volume analyzed
SPM.xVol.M - 4x4 voxel --> mm transformation matrix
SPM.xVol.iM - 4x4 mm --> voxel transformation matrix
SPM.xVol.DIM - image dimensions - column vector (in voxels)
SPM.xVol.XYZ - 3 x S vector of in-mask voxel coordinates
SPM.xVol.S - Lebesgue measure or volume (in voxels)
SPM.xVol.R - vector of resel counts (in resels)
SPM.xVol.FWHM - Smoothness of components - FWHM, (in voxels)

info on beta files:

 

SPM.Vbeta - struct array of beta image handles
SPM.Vbeta.fname - beta img file names
SPM.Vbeta.descrip - names for each beta file

info on variance of the error

 

SPM.VResMS - file struct of ResMS image handle
SPM.VResMS.fname - variance of error file name

info on mask

 

SPM.VM - file struct of Mask image handle
SPM.VM.fname - name of mask img file

contrast details (added after running contrasts)

 

SPM.xCon - Contrast definitions structure array
  • (see also spm_FcUtil.m for structure, rules &handling)
SPM.xCon.name - Contrast name
SPM.xCon.STAT - Statistic indicator character ('T', 'F' or 'P')
SPM.xCon.c - Contrast weights (column vector contrasts)
SPM.xCon.X0 - Reduced design matrix data (spans design space under Ho)
  • Stored as coordinates in the orthogonal basis of xX.X from spm_sp
  • (Matrix in SPM99b)
  • Extract using X0 = spm_FcUtil('X0',...
SPM.xCon.iX0 - Indicates how contrast was specified:
  • If by columns for reduced design matrix then iX0 contains the column indices.
  • Otherwise, it's a string containing the spm_FcUtil 'Set' action: Usually one of {'c','c+','X0'} defines the indices of the columns that will not be tested. Can be empty.
SPM.xCon.X1o - Remaining design space data (X1o is orthogonal to X0)
  • Stored as coordinates in the orthogonal basis of xX.X from spm_sp (Matrix in SPM99b). Extract using X1o = spm_FcUtil('X1o',...
SPM.xCon.eidf - Effective interest degrees of freedom (numerator df)
  • Or effect-size threshold for Posterior probability
SPM.xCon.Vcon - Name of contrast (for 'T's) or ESS (for 'F's) image
SPM.xCon.Vspm - Name of SPM image

Multi-Variate Pattern Analysis (MVPA): An Introduction

Alert reader and fellow brain enthusiast Keith Bartley loves MVPA, and he's not ashamed to admit it. In fact, he loves MVPA so much that he was willing to share what he knows with all of us, as well as a couple of MVPA libraries he has ported from Matlab to Octave.

Why don't I write a post on MVPA, you ask? Maybe because I don't know doodley-squat about it, and maybe because I'm man enough to admit what I don't know. In my salad days when I was green in judgment I used to believe that I needed to know everything; nowadays, I get people to do that for me. This leaves me more time for gentlemanly activities, such as hunting, riding, shooting, fishing, and wenching.

So without further delay, here it is: an introduction to MVPA for those who are curious about it and maybe even want to do it themselves, but aren't exactly sure where to start. Be sure to check out the end of the post for links to Keith's work, as well as the Octave distributions he has been working on.


==================================

Well, it had to happen sometime. Time to call out the posers. Those who would bend neuroscience to their evil will. Where to start? How about the New York Times. Oh, New York Times. You've managed to play both sides of this coin equally well. Creating a neuroimaging fallout with bad reports, then throwing out neuroscience studies altogether ("The brain is not the mind" <- What? Stop. Wait, who are you again? Just stop). What's to be said for those few that rise above the absolutes of "newspaper neuroscience", and simply report findings, not headline grabbing inferences? Should we be opposed to gained knowledge, or seek to better understand the details? Boring you say? Pfft. Then you have yet to know the neuroscience I know.

As an example, a recent NYT article suggested that activation in both the primary auditory and visual cortices in response to an iPhone ring alone demonstrated that people were seeing the iPhone in their mind's eye.

...
This is a completely RIDICULOUS notion to suggest from univariate GLM BOLD response alone. But can we ever really know what a person sees in their mind's eye or hears in their mind's ear? Remember, good science happens in the setup, not the interpretation.

You may have heard of an increasingly implemented form of machine classification in brain-imaging dubbed Multi-Variate Pattern Analysis (MVPA). If you prefer the term "mind reading", sure go ahead, but remember to bring your scientific floaties, lest you drown from the lack of understanding as it escapes the proverbial lungs of your deoxygenated mind. Before you know it, you'll be believing things like this:


Verbose laughter ensues. Close, but no Vicodin. How does MVPA of BOLD responses really work?


Source: Kaplan & Meyer, 2011

By first training a program to recognize a pattern, in much the same way you might visually recognize a pattern, we can quantitatively measure the similarity or dissimilarity of patterns in response to differing conditions. But what does this have to do with knowing what a person might see in their mind? I recently compiled some soundless videos for a demonstration of just this. Watch them fullscreen for the greatest effect.
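"Quantitatively measure the similarity or dissimilarity of patterns" usually cashes out as something simple, such as the correlation between two voxel patterns. A minimal sketch in Python (toy numbers, purely illustrative):

```python
import numpy as np

# Toy activation patterns: mean response of six voxels to three conditions
pattern_coins = np.array([1.2, 0.3, -0.5, 0.8, -0.1, 0.4])
pattern_crash = np.array([1.1, 0.2, -0.4, 0.9, -0.2, 0.5])
pattern_kahn  = np.array([-0.9, 0.7, 1.3, -0.6, 0.2, -1.0])

def dissimilarity(a, b):
    """Correlation distance: 1 - Pearson r (0 means identical patterns)."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

# Similar stimuli should evoke similar patterns...
d_close = dissimilarity(pattern_coins, pattern_crash)
# ...and different stimuli, different patterns
d_far = dissimilarity(pattern_coins, pattern_kahn)
```

Comparing these distances across conditions is the basic currency of pattern analysis: the classifier's job is just to exploit them systematically.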



It's likely that the sound of coins clanking, a car crashing, or Captain Kirk screaming "KAHHHHN!!!" registered in your mind, and thus created distinct neural representations in your primary auditory cortex. If we train a machine classifier to recognize your brain's patterned neural response to each sound, we can then ask the classifier to guess which among all these stimuli your brain is distinctly representing in the absence of sound. If the classifier succeeds statistically more often than a null distribution of label permutations would predict, THEN we can MAYBE begin to suggest a top-down neural mechanism of alternate sensory modality stimulation.

BUT, I promised you details, and details you shall have!

As a fan of Princeton's MVPA Matlab toolbox, opening it up for people without access to a Matlab license seemed like the next most logical step, prompting me to convert the toolbox, as well as its implementations of SPM and AFNI-for-MATLAB, to one unitary Octave distribution. Below is a link to my website and Github page. There you will find setup instructions as well as tutorial data and a script that Dr. Marc Coutanche (see his rigorously awesome research in MVPA here) and I implemented in a recent workshop. The script contains step-by-step instructions for completing a research-caliber implementation of pattern analysis. Learn it, live it, repeat it, and finally bask in your ability to understand something that would make your grandma's head explode (see the Flynn Effect for a possible explanation there). As with anything I post, if you have questions, comments, or criticisms, feel free to message me through any medium of communication ...unless you are a psychic. I'd like to keep my mind's ear to myself.