FSL Tutorial 2: FEAT (Part 1)



A new tutorial about FEAT is now up; depending on how long it takes to get through all of the different tabs in the interface, this may be a three-part series. In any case, this will serve as a basic overview of the preprocessing steps of FEAT, most of which can be left at their default settings.

The next couple of tutorials will cover the setup of models and timing files within FSL, which can be a little tricky. For those of you who have stuck with it from the beginning (and I have heard that there are a few of you out there: Hello), there will be some more useful features coming up, aside from reviewing the basics.

Eventually we will get around to batch scripting FEAT analyses, which can save you several hours of mindless pointing and clicking, and leave you with plenty of time to watch Starcraft 2 replays, or Breaking Bad, or whatever it is kids watch these days.



FSL Tutorial 0: Conversion from DICOM to NIFTI



After publishing my last tutorial, I realized that I should probably take a step back and detail how to convert raw scanner images into something that can be used by FSL. This latest walkthrough will show you how to download MRIcron and use one of its tools, dcm2nii, to convert raw scanner image formats such as DICOM or PAR/REC to NIfTI, which can be used by almost all fMRI analysis software packages.
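
If you would rather script the conversion than click through the GUI, dcm2nii can also be called from the command line. The sketch below is a minimal example in the same spirit as the wrappers later on this blog; the folder names are hypothetical placeholders, and the exact flags can vary between dcm2nii versions, so check dcm2nii's help output on your system first.

#!/usr/bin/env python
#Minimal sketch: convert a folder of DICOMs to gzipped NIfTI with dcm2nii.
#Folder names are hypothetical; flags may differ slightly between versions.
import os

dicomDir = 'raw_dicoms'        #folder containing the scanner's DICOM files
outputDir = 'converted_nifti'  #where the converted .nii.gz files will be written

#-g y gzips the output; -o sets the output directory
os.system('dcm2nii -g y -o ' + outputDir + ' ' + dicomDir)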

Note: I've been using Camtasia Studio to make these screencasts, and so far I have been very happy with it. The video editor is simple and intuitive, and making videos isn't too difficult. It runs for $99, which is a little pricey for a screencast tool, but if you're going to be making them on a regular basis and doing tutorials, I would recommend buying it.

Repost: Why Blogs Fail

Pictured: One of the reasons why blogs fail


Blogger Neuroskeptic recently wrote a post about why blogs fail, observing what divides successful from unsuccessful blogs, and why the less successful ones ultimately stop generating new content or disappear altogether. Reading this made me reflect on why I started this in the first place; initially it was to write down, in blog form, what I saw and heard at an AFNI workshop earlier this spring. Since then, I have tried to give it a more coherent theme by touching upon some fMRI methodology topics that don't get as much attention as they deserve, and by creating a few walkthroughs to help new students in the field get off the ground, as there isn't much out there in the way of interactive videos showing you how to analyze stuff.

Analyzing stuff, in my experience, can be one of the most intimidating and paralyzing experiences of a graduate student's career; not because of laziness or incompetence, but because it is difficult to know where to start. Some of the obstacles that hinder the analysis of stuff (e.g., common mistakes involving experiment timing, artifacts that can propagate through an entire data set, not having a method for solving and debugging scripting errors) do not need to be repeated by each generation, and my main objective is to point out where these things can happen, and what to do about them. Of course, this blog contains other topics related to my hobbies, but overall, the theme of how to analyze stuff predominates.

In light of all of this, here is my pact with my readers: This blog will be updated at least once a week (barring any extreme circumstances, such as family emergencies, religious observances, or hospitalization from Nutella overdose); new materials, such as walkthroughs, programs, and instructional videos, will be generated on a regular basis; and I will respond to any (serious) questions posted in the comments section. After all, the reason I take any time out of my busy, Nutella-filled day to write any of this content is because I find it interesting and useful, and hope that somebody else will, too.

Chopin: Nocturne in E-flat Major



This is probably one of the best-known and best-loved of all Chopin's nocturnes - and rightly so. Although it has been overplayed to death, and although several performances suffer from excruciatingly slow tempi, no one can deny its elegance and charm.

The architecture is straightforward enough: an A section which repeats, followed by a B section, then A again, then B, and a coda, in a form technically known as rounded binary. Chopin's genius lies in keeping the same harmonic foundation in each repeated section while progressively ornamenting the melody, finding ways to keep it fresh and exciting every time it returns. Finally, elements of both themes from the A and B sections are combined into a wonderful coda (listen for them!), becoming increasingly agitated until reaching an impassioned dominant, prepared by a C-flat anticipation held for an almost ridiculous amount of time.

Almost - Chopin, as well as Beethoven, is a master at reaching the fine line separating drama and melodrama, pathos and sentimentality, without going over the edge. The famous cadenza - a four-note motif marked senza tempo during which time stands still - repeats faster and faster, more and more insistent, until finally relenting, and finding its way back to a tender, emotional reconciliation with the tonic.

Enjoy!

FSL Tutorial: Part 1 (of many)



I recently started testing out FSL to see if it has any advantages over other fMRI analysis packages, and decided to document everything on Youtube as I go along. The concepts are the same as any other package (AFNI, SPM, etc), but the terminology is slightly different, and driving it from the command line is not as intuitive as you would think. Plus, they use a ton of acronyms for everything, which, to be honest, kind of pisses me off; I don't like it when they try to be cute and funny like that. The quotes and sonnets generated by AFNI after exiting the program, however, are sophisticated and endearing. One of my favorites: "Each goodbye makes the next hello closer!"

In any case, here is the first, introductory tutorial I made about FSL. I realized from searching around on Youtube that hardly any fMRI analysis tutorial videos exist, and that this is a market that sorely needs to be filled. A series of walkthroughs and online lessons using actual data, in my opinion, would be far more useful at illustrating the fundamentals of fMRI data analysis than only having manuals (although those are extremely important as well, and I would recommend that anyone getting started in the field read them so that they can needlessly suffer as I did).

I will attempt to upload more on a regular basis, and start to get some coherent lesson plan going which allows the beginner to get off the ground and understand what the hell is going on. True story: It took me at least three years to fully comprehend what a beta weight was. Three years. I'm not going to blame it all on Smirnoff Ice, but it certainly didn't help.

Note: I suggest hitting fullscreen mode and viewing at a higher resolution (360p or 480p) in order to better see the text in the terminal window.

Also, the example data for these tutorials can be found here.




Youtube Music Channel

One of my hobbies is playing piano, and recently I bought the equipment to record sound from my electric piano. I own a Yamaha CLP-320 series, which has been an excellent piano and has held up remarkably well for the past three years.

The first recording uploaded from this piano is a nocturne by Chopin in B-flat minor,  opus 9, #1. I have had it in my repertoire for quite a while now, and it still remains my favorite nocturne out of all of them. I plan to upload more recordings with some regularity (e.g., every week or two), so be sure to check out the channel often to see what's new. Subscribing (or even just "liking") is a great way to help me out.

One feature of my videos is annotations which highlight certain interesting parts of the piece, such as important key modulations, any historical background that might be significant, and so on. I also think it looks cool when Chopin spits facts about his music. My goal is for it to be edifying rather than distracting; time will tell whether the public likes it or not.

Anyway, here's a link to the video. Enjoy!

[Edit 07.08.2012]: I removed the annotations, because I eventually found them distracting. Any notes will be placed in the "Description" box, as soon as I convince the recording companies that this is indeed my own original work, and not a copy.


Region of Interest Analysis


Before we get down to regions of interest, a few words about the recent heat wave: It's taken a toll. The past few days I've walked out the door and straight into a slow broil, that great yellowish orb pasted in the cloudless sky like a sticker, beating down waves of heat that sap all the energy out of your muscles. You try to get started with a couple miles down the country roads, crenelated spires of heat radiating from the blacktop, a thin rime of salt and dust coating every inch of your skin, and realize the only sound in this inferno is the soles of your shoes slapping the cracked asphalt. Out here, even the dogs have given up barking. Any grass unprotected by the shade of nearby trees has withered and died, entire lawns turned into fields of dry, yellow, lifeless straw. Flensed remains of dogs and cattle and unlucky travelers lie in the street, bones bleached by the sun, eyeless sockets gazing skyward like the expired votaries of some angry sun god.

In short, it's been pretty brutal.

Regions of Interest

Region of Interest (ROI) analysis in neuroimaging refers to selecting a cluster of voxels or brain region a priori (or, also very commonly, a posteriori) when investigating a region for effects. This can be done either by creating a small search space (typically a sphere with a radius of N voxels), or by using anatomical atlases available through programs like SPM or downloadable from the web. ROI analysis has the advantage of mitigating the fiendish multiple comparisons problem: a search space of potentially hundreds of thousands of voxels is reduced to a smaller, more tractable area, which in turn relaxes the overly stringent thresholds required for whole-brain correction. At first glance this makes sense, given that you may not be interested in a whole brain analysis (i.e., searching for activation in every single voxel in the entire volume); however, it can also be abused to carry out confirmatory analyses after you have already determined where a cluster of activation is.

Simple example: You carry out a whole brain analysis, and find a cluster of fifty voxels extending over the superior frontal sulcus. This is not a large enough cluster extent to pass cluster correction at the whole brain level, but you go ahead anyway and perform an additional ROI analysis focused on the center of the cluster. There are no real safeguards against this practice, as it is impossible to know what the researcher had in mind when they conducted the test. For instance, what if an investigator happened to simply make a mistake and see the results of a whole brain analysis before doing an ROI analysis? Should he pretend that he didn't see them? These are questions which may be addressed in a future post about a Bayesian approach to fMRI, but for now, be aware that there exists significant potential for misuse of this technique.

Additional Considerations


Non-Independence
Colloquially known as "double-dipping," non-independence has become an increasingly important issue over the years as ROI analyses have become more common (see Kriegeskorte et al., 2009, 2010). In order to avoid biasing an ROI toward certain regressors, it is essential that the ROI and the contrast of interest share no common regressors. Consider a hypothetical experiment with three regressors: A, B, and C. The contrast A-B is used to define an ROI, and the experimenter then decides to test the contrast A-C within this ROI. Because the ROI is already biased toward voxels that respond more to regressor A, testing A-C within it is biased as well. This is not unique to fMRI data, but applies to any statistical comparison where bias is a potential issue.

Correction for Multiple ROI Analyses
Ideally, each ROI analysis should be treated as an independent test, and should be corrected for multiple comparisons. That is, assuming that an investigator is agnostic about where the activation is to be found, the alpha threshold for determining significance should be divided by the number of tests conducted. While this is probably rarely done, it is good practice to have a priori hypotheses about where activation is to be found in order to avoid these issues of multiple comparisons.
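
As a rough illustration of this kind of Bonferroni-style adjustment (the number of ROIs below is just a hypothetical example):

#Minimal sketch of a Bonferroni-style correction across independent ROI tests
alpha = 0.05     #nominal alpha threshold
numTests = 4     #hypothetical number of a priori ROIs to be tested
correctedAlpha = alpha / numTests
print(correctedAlpha)  #each ROI test is now evaluated against 0.0125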

ROI Analysis in AFNI

Within AFNI, there exists a useful program called 3dUndump, which requires x, y, and z coordinates (in millimeters), the radius of the sphere, and the master dataset onto which the sphere will be written. A typical command looks like:

3dUndump -prefix (OutputDataset) -master (MasterDataset) -srad (RadiusOfSphereInMM) -xyz (TextFileOfXYZCoordinates)


One thing to keep in mind is the orientation of the master dataset. For example, the standard template that AFNI warps to has a positive-to-negative gradient going from posterior to anterior; in other words, values in the Y-direction will be negative when moving forward of the anterior commissure. Thus, it is important to note the space and orientation of the coordinates on which you are basing your ROI, and make sure they match the orientation of the dataset you are applying the ROI to. In short, look at your data after you have generated the ROI to make sure that it looks reasonable.
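
If you want a quick command-line check of a dataset's orientation before building the ROI, AFNI's 3dinfo can report it. A minimal sketch, with a hypothetical dataset name:

#!/usr/bin/env python
#Minimal sketch: print the orientation code (e.g., RAI or LPI) of a dataset.
#The dataset name is a hypothetical placeholder.
import os

master = 'anat_final.+tlrc'
os.system('3dinfo -orient ' + master)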


The following is a short Python wrapper I made for 3dUndump. Those already familiar with using 3dUndump may not find much use in it, but for me, having an interactive prompt is useful:



#!/usr/bin/env python

import os


#Read in raw user input, assign to variables
print("MakeSpheres.py")
print("Created by Andrew Jahn, Indiana University 03.14.2012")
prefix = raw_input("Please enter the output filename of the sphere: ")
master = raw_input("Please enter name of master dataset (e.g., anat_final.+tlrc): ")
rad = raw_input("Please enter radius of sphere (in mm): ")
xCoord = raw_input("Please enter x coordinate of sphere (MNI): ")
yCoord = raw_input("Please enter y coordinate of sphere (MNI): ")
zCoord = raw_input("Please enter z coordinate of sphere (MNI): ")


#Make string of coordinates (e.g., 0 36 12)
xyzString = xCoord + " " + yCoord + " " + zCoord
printXYZString = 'echo ' + xyzString + ' > sphere_' + rad + 'mm_' + xCoord + '_' + yCoord + '_' + zCoord + '.txt'
os.system(printXYZString) #writes the coordinate string to the filename given above


#3dUndump will read the coordinates from this text file
xyzFile = 'sphere_' + rad + 'mm_' + xCoord + '_' + yCoord + '_' + zCoord + '.txt'


def makeSpheres(prefix, master, rad, xyz):
    #Build the 3dUndump command and pass it to the shell
    cmdString = '3dUndump -prefix ' + prefix + ' -master ' + master + ' -srad ' + rad + ' -xyz ' + xyz
    os.system(cmdString)
    return


makeSpheres(prefix=prefix, master=master, rad=rad, xyz=xyzFile)




Which will generate something like this (Based on a 5mm sphere centered on coordinates 0, 30, 20):


Beta values, time course information, etc., can then be extracted from within this restricted region.
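
One way to do this extraction from the command line is AFNI's 3dmaskave, which averages values within a mask. Below is a minimal sketch in the same style as the wrapper above; the dataset names and the sub-brick index are hypothetical placeholders, so substitute your own statistics file and the sub-brick that holds the beta weight you care about.

#!/usr/bin/env python
#Minimal sketch: average the beta values inside the sphere ROI with 3dmaskave.
#Dataset names and the sub-brick index [2] are hypothetical placeholders.
import os

roi = 'roiSphere+tlrc'             #ROI generated by 3dUndump / MakeSpheres.py
stats = 'stats.subject1+tlrc[2]'   #sub-brick holding the beta weight of interest

#-quiet prints only the mean value within the mask
os.system("3dmaskave -quiet -mask " + roi + " '" + stats + "'")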

ROI Analysis in SPM (Functional ROIs)

This next example will focus on how to do ROI analysis in SPM through MarsBar, a toolbox available here if you don't already have it installed. In addition to walking through ROI analysis in SPM, this will also serve as a guide to creating functional ROIs. Functional ROIs are based on results from other contrasts or interactions, which ideally should be independent of the test to be investigated within that ROI; otherwise, you run the risk of double-dipping (see the "Non-Independence" section above).

After installation, you should see Marsbar as an option in the SPM toolbox dropdown menu:

1. Extract ROIs

After installing Marsbar, select it from the toolbox dropdown menu.  After Marsbar boots up, click on the menu “ROI Definition”.  Select “Get SPM clusters”.


This will prompt the user to supply an SPM.mat file containing the contrast of interest.  Select the SPM.mat file that you want, and click “Done”.

Select the contrast of interest just as you would when visualizing any other contrast in SPM.  Select the appropriate contrast and the threshold criteria that you want.




When your SPM appears on the template brain, navigate to the cluster you wish to extract, and click “current cluster” from the menu, underneath “p-value”.  Highlight the cluster you want in the SPM Results table.  The highlighted cluster should turn red after you select it.



Navigate back to the SPM results menu, and click on “Write ROIs”. This option will only be available if you have highlighted a cluster. Click on “Write one cluster”. You can enter your description and label for the ROI; leaving these as the defaults is fine. Select an appropriate name for your ROI, and save it to an appropriate folder.




2. Export ROI

Next, you will need to convert your ROI from a .mat file to an image file (e.g., .nii or .img format).  Select “ROI Definition” from the Marsbar menu and then “Export”.



You will then be presented with an option for where to save the ROI. 



Select the ROI you want to export, and click Done.  Select the directory where you wish to output the image.  You should now see the ROI you exported, with a .nii extension.

3. ROI Analysis

You can now use your saved image for an ROI analysis. Boot up the Results menu as you normally would to look at a contrast, but when prompted with “ROI Analysis?”, select “Yes”. You can select your ROI from the “Saved File” option; alternatively, you can mask activity based on atlas parcellations by selecting the “Atlas” option.



Restricting the search space means that fewer multiple comparisons need to be corrected for, which in turn increases the statistical power of your contrast.




Summer Music Festival

For me, one of the major draws of Indiana University - besides its neuroimaging program - was the presence of the Jacobs School of Music, one of the best conservatories in the world. I had studied piano as a kid and have continued to keep it up as a hobby, so for me, a music school within spitting distance from my apartment was a major attraction.

Last summer the Jacobs School began a summer music festival, where nearly every night for the entire summer term there are musical performances. Solo piano recitals, string quartets and chamber music, and symphonic works are performed continuously, with soloists and music ensembles traveling from all over the world to play here. Tickets are six dollars for students, and piano recitals are free. For a piano addict like myself, with Carnegie Hall-caliber performances going on all day, every day - this is bliss.

One of the themes this summer is a complete cycle of the sixteen Beethoven string quartets. I had heard a few of the earlier works, but had put off listening to the later quartets because I didn't feel quite ready for them yet - many of them are tremendously difficult to listen to, approaching colossal lengths and levels of complexity that are almost impenetrable. They are immensely rewarding, but that is not the same as saying they are a pleasurable experience.

However, I decided to go through with it anyway, and on Sunday I witnessed a performance of one of the last works ever composed by Ludwig van, the string quartet in B-flat major, opus 130. When it first premiered, Beethoven had composed a gigantic, dissonant double fugue as the final movement, stretching the entire performance to almost an hour. It was met with resistance from the public and critics alike, and Beethoven's publisher convinced him to substitute a lighter, more accessible ending. Beethoven relented (begrudgingly), and published the fugue separately, entitling it "Große Fuge", which means Great Fugue, or Grand Fugue. Because of its tremendous duration and emotional scope, the work has since been frequently performed on its own as an example of Beethoven's late style and contrapuntal daring.

Which is why I was surprised that, before the performance, the second violinist addressed the audience and informed us that contrary to what was on the program, they were going to play the work with the original fugue ending. Holy crap, I thought: It is on.

The next hour was intense, with the titanic opening, middle, and ending movements separated by smaller, intense interludes, an architecture that characterized much of Beethoven's late output. The Fugue, as expected, was gigantic, and the musicians were sawing away at their instruments, squeezing the lemon, so to speak, and somehow sustaining the whole thing without letting it degrade into complete chaos. I was extremely impressed, and recommend that anyone who has the chance should see these guys live, or at least listen to one of their CDs.


During the performance, one interesting musical reference I noticed was a theme consisting of a leap followed by a trill - a similar theme that opens the finale of Beethoven's Hammerklavier Sonata, also in B-flat major, and also ending with a fugue on acid. The jump in that work is much larger and more dramatic - almost a tenth, I think - but the similarity was striking. In the following video, listen to the trill theme starting around 7:45, and then compare with the opening of the Hammerklavier fugue. And then you might as well watch the whole thing, because Valentina Lisitsa's spidery, ethereal hands are beautiful to watch.



In all, it was a heroic performance, and I am looking forward to completing the cycle over the next couple of weeks.

The next couple of days will see a return to the cog neuro side of things, since it has been a while since I've written about that. I have a new program written for building regions of interest in AFNI, as well as some FSL and UNIX tutorials which should be online soon. Stay tuned!

Grandma's Marathon Postmortem


[Note: I don't have any personal race photos up yet; they should arrive in the next few days]


Finally back in Bloomington, after my standard sixteen-hour Megabus trip from Minnesota to Indiana, with a brief layover in Chicago to see The Bean. I had a successful run at Grandma's, almost six minutes faster than my previous marathon, which far exceeded my expectations going into it.

Both for any interested readers, and for my own personal use to see what I did leading up to and during race day, I've broken down the experience into segments:

1. Base Phase

I rebooted my training over winter break coming back from a brief running furlough, after feeling sapped from the Chicago Marathon. I began running around Christmas, starting with about 30 miles a week and building up to about 50 miles a week, but rarely going above that for some months. Now that I look back on it, this really isn't much of a base phase at all, at least according to the books I've read and runners I've talked to; but that was the way it worked out with graduate school and my other obligations.

Here are the miles per week, for eleven weeks, leading up to Grandma's:

Week 1: 38
Week 2: 57
Week 3: 43
Week 4: 55
Week 5: 45
Week 6: 48
Week 7: 70
Week 8: 31
Week 9: 80
Week 10: 29
Race Week: 28 (not including the marathon)



2. Workouts

The only workouts I do are 1) Long runs, typically 20-25 miles; and 2) Tempo runs, usually 12-16 miles at just above (i.e., slower than) threshold pace, which for me is around 5:40-5:50 a mile. That's it. I don't do any speedwork or interval training, which is a huge weakness for me; if I ever go under my lactate threshold too early, I crash hard. It has probably been about two years since I did a real interval workout, and I should probably get back to doing those sometime. The only thing is, I have never gotten injured or felt fatigued for days after doing a tempo run, whereas I would get stale and injured rather easily doing interval and speed workouts. In all likelihood I was doing them at a higher intensity than I should have been.

3. TV Shows

The only TV show that I really watch right now is Breaking Bad, and the new season doesn't begin until July 15th. So, I don't have much to say about that right now.

4. Race Day

In the week leading up to the race, I was checking the weather reports compulsively. The weather for events like this can be fickle down to the hour, and the evening before the race I discussed race strategy with my dad. He said that in conditions like these, I would be wise to go out conservative and run the first couple of miles in the 6:00 range. It seemed like a sound plan, considering that I had done a small tune-up run the day before in what I assumed would be similar conditions, and the humidity got to me pretty quickly. In short, I expected to run about the same time I did at the Twin Cities marathon in 2010.

The morning of the race, the weather outside was cooler and drier than I expected, with a refreshing breeze coming in from Lake Superior. I boarded the bus and began the long drive to the start line, which really makes you realize how far you have to run to get back. I sat next to some guy who looked intense and pissed off and who I knew wouldn't talk to me and started pounding water.

At the starting line everything was warm and cheerful and sun-soaked and participants began lining up according to times posted next to the corrals. I talked to some old-timer who had done the race every single year, who told me about the good old days and how there used to only be ten port-a-johns, and how everyone would go to that ditch over yonder where the grassy slopes would turn slick and putrid. I said that sounded disgusting, and quickly walked over to my corral to escape.

Miles 1-6

At the start of the race, I decided to just go out on feel and take it from there. The first mile was quickly over, and at the mile marker my watch read: 5:42. I thought that I might be getting sucked out into a faster pace than I anticipated, but since I was feeling solid, I thought I would keep the same effort, if not back off a little bit, and then see what would happen. To my surprise, the mile splits got progressively faster: 5:39, 5:36, 5:37, 5:34, 5:33. I hit the 10k in about 34:54 and started to have flashbacks to Chicago last fall, where I went out at a similar pace and blew up just over halfway through the race. However, the weather seemed to be holding up, and actually getting better, instead of getting hotter and more humid. At this point my body was still feeling fine so I settled in and chose to maintain the same effort for the next few miles.

Miles 7-13

Things proceeded as expected, with a 5:34, 5:38, and 5:35 bringing me to mile 9. I saw my parents again at this point, both looking surprised that I had come through this checkpoint several minutes faster than planned. Although I knew it was fast, I didn't really care; at this point the weather had become decidedly drier than it was at the start of the race, the crowd was amping up the energy, and I was feeling bulletproof. It was also right about this time that I started feeling a twinge in my upper left quadricep.

Shit.

I had been having problems with this area for the past year, but for the last month things had settled down and it hadn't prevented my completing any workouts for the past few weeks. Still, it was a huge question mark going into the race, and with over sixteen miles to go, it could put me in the hurt locker and possibly force me to pull out of the race. I knew from experience that slowing down wouldn't help it so I continued at the same pace, hoping that it might go away.

5:28, 5:40, 5:37, 5:44. Although I was able to hold the same pace, the quadricep was getting noticeably worse, and began to affect my stride. I ignored it as best I could, trying instead to focus on the fact that I had come through the half in 1:13:40, which was close to a personal best and almost identical to what I had done at Chicago. I kept going, prepared to drop out at the next checkpoint if my leg continued to get worse.

Miles 14-20

The next couple of miles were quicker, with a 5:28 and 5:38 thanks to a gradual downhill during that section of the race. As I ran by the fifteen-mile marker, I began to notice that my quadricep was improving. The next pair of miles were 5:42, 5:42. At this point there was no mistaking it: The pain had completely disappeared, something that had never happened to me before. Sometimes I am able to run through mild injuries, but for one to recover during the course of a race was new to me.

In any case, I didn't dwell on it. The next few miles flew by in 5:48, 5:38, and 5:55, and I knew that past the twenty-mile mark things were going to get ugly.

Miles 21-26.2

As we began our descent into the outskirts of Duluth, the crowds became larger and more frequent, clusters of bystanders serried together with all sorts of noisemakers and signs that sometimes had my name on them, although they were meant for someone else. At every aid station I doused myself with water and rubbed cool sponges over my  face and shoulders, trying not to overheat. 5:57, 5:56. However, even though I was slowing way down, mentally I was still with it and able to focus. We had entered the residential neighborhoods, and the isolated bands of people had blended together into one long, continuous stream of spectators lining both sides of the road. I made every attempt to avoid the cracks in the street, sensing that if I made an uneven step, I might fall flat on my face.

5:55. Good, not slowing down too much. Still a few miles out, but from here I can see the promontory where the finish line is. At least, that's where I think it is. If that's not it, I'm screwed. Why are these morons trying to high-five me? Get the hell out of my way, junior. Just up and over this bridge. Do they really have to make them this steep? The clown who designed these bridges should be dragged into the street and shot. 6:03. I was hoping I wouldn't have any of those. I can still reel in a few of these people in front of me. This guy here is hurting bad; I would feel sorry for him, but at the same time it's such a rush to pass people when they are in a world of pain; almost like a little Christmas present each time it happens. 6:03 again. Oof. We've made another turn; things are getting weird. Where's the finish? Where am I? What's my favorite food? I can't remember. This downhill is steep as a mother, and I don't know if those muscles in my thighs serving as shock pads can take much more of this. Legs are burning, arms are going numb, lips are cracked and salty. One more mile to go here; just don't throw up and don't lose control of your bowels in front of the cameras and you're golden. 6:11. Whatever. There's the finish line; finally. Wave to the crowd; that's right, cheer me on, dammit! Almost there...and...done.

YES.


Artist's rendering of what I looked like at the finish

29th overall, with a time of 2:30:23. I staggered around the finish area, elated and exhausted, starting to feel the numbness creep into my muscles. For the next few days I had soreness throughout my entire body, but no hematuria, which is a W in my book.

More results can be found here, as well as a summary page here with my splits and a video showcasing my raw masculinity as I cross the finish line.

Next marathon will be in Milwaukee in October, and I'll be sure to give some running updates as the summer progresses.

Grandma's! I am coming for you!

Hey all, I will be gone until next Tuesday for a brief trip back to Minnesota to see some relatives and race Grandma's Marathon. Conditions are supposed to be in the upper 50s at race time, partly cloudy, with a 9 mph tailwind and a slight net downhill, which should be ideal for running a personal best.



According to my training logs, VO2 max, past running history, and a Ouija board, I should be somewhere in the neighborhood of running a 2:32:00 (roughly 5:48/mile). You can track my progress here on the Grandma's Marathon website (first name: Andrew; last name: Jahn).

I'll be posting a post-race analysis sometime early next week. Wish me luck!