Mapping Results onto SUMA (Part 2)

In a previous post I outlined how to overlay results generated by SPM or FSL onto a SUMA surface, and published a tutorial video on my TechSmith account. However, as I am consolidating all of my tutorials onto YouTube, this video has been uploaded to YouTube instead.

There are a few differences between this tutorial and the previous one; however, it is worth reemphasizing that, because the results have been interpolated onto another surface, one should not perform statistical analyses on these surface maps - use them for visualization purposes only. The correct approach for surface-based analyses is to perform all of your preprocessing and statistics on the surface itself, a procedure which will be discussed in greater detail later.

A couple of other notes:

1) Use the '.' and ',' keys to toggle between views such as pial, white matter, and inflated surfaces. These keys were not discussed in the video.

2) I recommend using SPM to generate cluster-corrected images before overlaying these onto SUMA. That way, you won't have to mess with the threshold slider in order to guess which t-value cutoff to use.

More AFNI to come in the near future!


SUMA Demo

I've posted a demo of AFNI's surface mapper program, SUMA, over here on my screencast account. Specifically, I talk about how to map volumetric results generated in any fMRI software package (e.g., AFNI, FSL, SPM, BrainVoyager) onto a template surface provided by SUMA.

In this demo, I take second-level results generated in SPM and map them onto a template MNI surface, the N27 brain. All that is needed is a results dataset, a template anatomical brain to use as an underlay (here I use the MNI_caez_N27 brain provided in the AFNI binaries directory, ~/abin), and a folder called suma_mni that contains the .spec files for mapping onto the N27 brain. The suma_mni folder is available for download from Ziad Saad's website here. Just download it to the same directory, and you are good to go.

SPM 2nd-level results mapped onto template surface using AFNI / SUMA

I've outlined the steps in a Word document, Volumetric_SUMA.docx, which is available at my website. Please send me feedback if any of the steps are unclear.

Although this is an incredibly easy way to make great-looking figures, I would not recommend doing any ROI statistics on results mapped onto a surface using these steps. The mapping is essentially a rough interpolation of which voxel corresponds to which node; if you want to do surface ROI analyses, do all of your preprocessing and statistics on the surface (I may write up a demo of how to do this soon).
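To make concrete why these maps are visualization-only, here is a minimal sketch of the kind of nearest-voxel lookup involved (Python/NumPy; the volume, affine, and node coordinates are toy values I made up for illustration, not SUMA's actual algorithm). Each surface node simply inherits the value of whichever voxel its coordinate falls in, so adjacent nodes can share a voxel, and no new statistical information is created on the surface:

```python
import numpy as np

def map_volume_to_nodes(vol, affine, node_xyz):
    """Assign each surface node the value of the voxel its coordinate falls in.

    vol      : 3D array of statistics (e.g., t-values)
    affine   : 4x4 voxel-to-mm transform
    node_xyz : (N, 3) node coordinates in mm
    """
    inv = np.linalg.inv(affine)
    # mm coordinates -> nearest voxel indices
    homog = np.c_[node_xyz, np.ones(len(node_xyz))]
    ijk = np.rint(homog @ inv.T)[:, :3].astype(int)
    # clip nodes that fall outside the volume
    ijk = np.clip(ijk, 0, np.array(vol.shape) - 1)
    return vol[ijk[:, 0], ijk[:, 1], ijk[:, 2]]

# Toy example: a 4x4x4 "statistics" volume with 2 mm isotropic voxels
vol = np.arange(64, dtype=float).reshape(4, 4, 4)
affine = np.diag([2.0, 2.0, 2.0, 1.0])
nodes = np.array([[0.4, 0.0, 0.0],    # these two neighboring nodes both
                  [0.6, 0.0, 0.0],    # round to voxel (0, 0, 0)
                  [2.2, 0.0, 0.0]])   # this one rounds to voxel (1, 0, 0)

vals = map_volume_to_nodes(vol, affine, nodes)
print(vals)  # first two nodes inherit the same voxel's value
```

Note how the first two nodes land in the same voxel and receive identical values - node-level values are redundant copies of voxel-level data, not independent samples, which is why surface ROI statistics on such maps are off-limits.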

Ye Good Olde Days

I've uploaded my powerpoint presentation about what I learned at the AFNI bootcamp; for the slides titled "AFNI Demo", "SUMA Demo", and so on, you will have to use your imagination.

The point of the presentation is that staying close to your data - analyzing it, looking at it, and making decisions about what to do with it - is what we are trained to do as cognitive neuroscientists (really, as scientists in any discipline). The reason I find AFNI superior is that it allows the user to do this in a relatively easy way. The only roadblocks are getting acquainted with Unix and shell programming, and taking the time to get a feel for what looks normal and what looks potentially troublesome.



Back in the good old days (ca. 2007-2008) we would simply make our scripts from scratch, looking through fMRI textbooks and making judgments about what processing step should go where, and then looking up the relevant commands and options to make that step work. Something would inevitably break, and if you were like me you would spend days or weeks trying to fix it. To make matters worse, if you asked for help from an outside source (such as the message boards), nobody had any idea what you were doing.

The recent scripts with the "uber" prefix - such as "uber_subject.py", "uber_ttest.py", and so on - have mitigated this problem considerably, generating streamlined scripts that are more or less uniform across users, and therefore easier to compare and troubleshoot. Of course, you still need to go into the generated script and make some modifications here and there, but everything is pretty much in place. The script will still suggest that you check each intermediate step, but that advice becomes easier to ignore once a higher-level interface takes care of all the minor details for you. Like everything else, there are tradeoffs.

AFNI Bootcamp: Day 4

Finally, I’ve made it to the last day of bootcamp. Got an AFNI pen to prove that I was here, and pictures of the workshop should be available shortly, since I’m assuming everyone must be curious to know what it looked like (I would be).

Gang opened with a presentation on the AFNI tools available for the two primary types of connectivity: functional connectivity and effective connectivity. Functional connectivity is a slightly misleading term, in my opinion, since you are simply looking at correlations between regions based on a seed-voxel timeseries. The correlation is merely a goodness of fit between the timeseries of one voxel and another, and from there we state whether these regions are somehow "talking" to each other, although that is a nebulous notion. A safer name would be something like timeseries correlation analysis, since that is more descriptive of what is actually going on. As for effective connectivity and related approaches such as structural equation modeling, I have not had as much experience in those areas, and will not touch on them right now. When I get to them sometime down the line, I will discuss them in more detail, along with whether and when they appear to be useful.
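To make that point concrete: a seed-based functional connectivity map boils down to a Pearson correlation between the seed's timeseries and every other voxel's timeseries. Here is a minimal sketch with synthetic data (Python/NumPy; the timeseries are fabricated for illustration - in AFNI, tools such as 3dTcorr1D perform this computation over a whole dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 timepoints, one seed timeseries and 3 "voxels"
n_t = 100
seed = rng.standard_normal(n_t)                 # seed-voxel timeseries
voxels = np.stack([
    seed + 0.1 * rng.standard_normal(n_t),      # strongly coupled to the seed
    -seed + 0.1 * rng.standard_normal(n_t),     # anti-correlated with the seed
    rng.standard_normal(n_t),                   # unrelated to the seed
])

def seed_correlation(seed, voxels):
    """Pearson r between a seed timeseries and each voxel's timeseries."""
    s = (seed - seed.mean()) / seed.std()
    v = (voxels - voxels.mean(axis=1, keepdims=True)) \
        / voxels.std(axis=1, keepdims=True)
    return (v @ s) / len(seed)

r = seed_correlation(seed, voxels)
print(np.round(r, 2))  # close to [1, -1, 0]
```

The "connectivity" values are just each voxel's goodness of fit to the seed: near 1 for the coupled voxel, near -1 for the anti-correlated one, and near 0 for the unrelated one - nothing more exotic than that.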

The remainder of the day was a SUMA demo from Ziad, showcasing how easy it is to visualize surface activity using their surface mapping program. SUMA is, in my experience, much faster and easier to manipulate than FreeSurfer, and, notwithstanding a few technical hurdles, is simple to use when interfacing with volumetric data. Also demonstrated was AFNI's InstaCorr tool, which allows for near-instantaneous visualization of functional connectivity throughout the entire brain. One simply sets a voxel as a seed region and can see how it correlates with every other voxel in the brain. The most interesting (and fun) feature of this tool is the ability to hold down the control and shift keys and drag the mouse cursor around, watching the functional connectivity maps update almost as fast as the monitor can refresh. This can be done on the surface maps as well. Although I still have the same reservations about resting-state data as mentioned previously, this appears to be an excellent method for visualizing these experiments.

Beyond that, I took the opportunity to get in some additional face time with each of the AFNI members, and had a conversation with Daniel about how to examine registration between anatomical and EPI datasets. Adding the -AddEdge option to the alignment part of the script (such as align_epi_anat.py) creates an additional folder named "AddEdge" containing the anatomical and EPI datasets both before and after registration. Contours of the gyri and sulci are also shown, as well as any overlap between the two after registration. Apparently the functional data I showed him wasn't particularly well defined (we were acquiring at 3.5x3.5x3.75mm), but the registration was still OK. One method for improving it may be to use scans acquired pre-steady-state, since those have better spatial contrast than the scans acquired during the experiment.

Before I left, I asked Bob about using grey matter masks for smoothing. The rationale for smoothing within a grey matter mask is to avoid smoothing into air and other tissue we don't care about (e.g., CSF, ventricles, bone), and as a result improve SNR relative to traditional smoothing, which takes place over the entire brain. However, Bob brought up the point that smoothing within GM on an individual-subject basis can introduce biases into the group analysis, since not every subject receives the same smoothing at the same voxel location. When we smooth after normalizing to a standardized space, for example, all of the brains fit within the magic Talairach box, and everything within the bounding box receives the same smoothing kernel. But since each subject's grey matter boundaries are idiosyncratic, we may be smoothing different areas for each subject; in fact, it is guaranteed to happen. To alleviate this, one could either create a group grey matter mask and use that for smoothing, or take both the white and grey matter segmentation maps from FreeSurfer and, combining them, smooth across a whole-brain mask that leaves out non-brain areas, such as ventricles. I will have to think more about this and try a couple of approaches before deciding what is feasible, and whether it makes that big of a difference.
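The mechanics of within-mask smoothing help explain why the choice of mask matters: the kernel is renormalized by how much of it falls inside the mask, so the mask determines exactly which neighbors get averaged at each point. Below is a simplified 1D illustration in Python/NumPy using a box kernel (AFNI's 3dBlurInMask does the real, Gaussian version in 3D; the data and mask here are made up for illustration):

```python
import numpy as np

def smooth_in_mask(data, mask, radius=1):
    """Box-kernel smoothing restricted to a mask (1D illustration).

    Each in-mask point is replaced by the mean of its in-mask neighbors
    within `radius`; out-of-mask points contribute nothing and stay zero.
    """
    out = np.zeros_like(data, dtype=float)
    for i in np.flatnonzero(mask):
        lo, hi = max(0, i - radius), min(len(data), i + radius + 1)
        nbhd = mask[lo:hi].astype(bool)
        # renormalize: average only over the neighbors inside the mask
        out[i] = data[lo:hi][nbhd].mean()
    return out

data = np.array([10., 10., 10., 100., 0., 0.])   # a bright "CSF" value at index 3
mask = np.array([1, 1, 1, 0, 1, 1])              # grey matter mask excludes it

print(smooth_in_mask(data, mask))  # the 100 never bleeds into masked neighbors
```

Here the bright out-of-mask value (100) never leaks into its masked neighbors, whereas unrestricted smoothing would have pulled the adjacent point up to (10+10+100)/3 = 40. Flip a few mask entries and the averages change, which is exactly the subject-to-subject inconsistency Bob was describing when each subject gets their own GM mask.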

That's about it from the bootcamp. It has been an intense four days, but I have enjoyed it immensely, and I plan to continue using AFNI in the future, at least for double-checking the work that goes on in my lab. I'll be experimenting more in the near future and posting figures of my results, as well as screencasts, when I find the time to pick those up again. For now, it's on to CNS in Chicago.