Today was the first day of AFNI bootcamp, and it served as an introduction to the software as well as the philosophy behind it. On the technical side there wasn’t a whole lot that was new, as the day was targeted toward AFNI veterans and newcomers alike. However, the development team hinted at some new tools that they would be presenting later during the workshop.
First, I should introduce the men behind the AFNI software. They are, in no particular order:
-Bob Cox: Founder of AFNI back at the Medical College of Wisconsin in 1993/1994. He is the hive mind of the AFNI crew, and leads the development of new features and tools.
-Rick Reynolds: Specialist in developing “uber” scripts that generate the nuts-and-bolts Unix scripts through graphical user interfaces (GUIs). Until a few years ago, people still made their scripts from scratch, cobbling together different commands in ways that seemed reasonable. With the new uber scripts, users can point and click on their data and onset files and select different options for the preprocessing stream (see the first sketch after this list). I’ll be covering these more later.
-Ziad Saad: Developer of the Surface Mapper (SUMA), which talks to AFNI and projects 3D volumetric blobs onto a 2D surface. This allows a more detailed look at activation patterns along the banks of cortical gyri and within the sulci, and it produces much sexier-looking pictures (a minimal example follows this list). I will also discuss this more later.
-Gang Chen: Statistics specialist and creator of the 3dMEMA and 3dLME statistical programs. An excellent resource for statistics-related problems after you’ve screwed up or just can’t figure out how you should model your data.
-Daniel Glen: Registration expert and developer of AFNI’s alignment program, align_epi_anat.py (sketched below).
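To make the uber-script idea concrete, here is a rough sketch of how it plays out at the terminal. The GUI call is real; the afni_proc.py command below is only an illustration of the kind of script the GUI writes for you, and every dataset and timing file name in it is hypothetical:

    # Launch the GUI and point and click your way through the options:
    uber_subject.py

    # Under the hood, the GUI writes out an afni_proc.py command along
    # these lines (all file names here are made up for illustration):
    afni_proc.py -subj_id subj01                                      \
        -dsets epi_run1+orig epi_run2+orig                            \
        -copy_anat anat+orig                                          \
        -blocks tshift align tlrc volreg blur mask scale regress      \
        -regress_stim_times onsets_condA.txt onsets_condB.txt         \
        -regress_basis 'GAM'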
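Likewise, the AFNI-SUMA handshake boils down to two commands. This is a bare sketch, and it assumes you already have a surface spec file on hand (the file names are hypothetical):

    # Start AFNI listening for a connection from SUMA:
    afni -niml &

    # Launch SUMA with a spec file and its matching anatomical volume,
    # then press 't' in the SUMA window to start talking to AFNI:
    suma -spec subj01_both.spec -sv anat+orig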
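And Daniel’s alignment program is a one-liner in its simplest form; a sketch with hypothetical dataset names (by default it aligns the anatomical to the EPI):

    # Align the anatomical to the EPI, using the first EPI volume as the base:
    align_epi_anat.py -anat anat+orig -epi epi_run1+orig -epi_base 0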
As I mentioned, the lectures themselves were primarily an introduction to how fMRI data analysis works at the single-subject level. The philosophy driving the development of AFNI is that users should stay close to their data and be able to check it easily. AFNI makes this incredibly easy, especially with the development of higher-level processing scripts; the responsibility of the user is to understand both a) what is going on and b) what is being processed at each step. The program uber_subject.py (to be discussed in detail later) generates a script called @ss_review_driver, which allows the user to easily check censored TRs, eyeball the registration, and review the design matrix. This takes only a couple of minutes per subject and, in my opinion, is more efficient and more intuitive than clicking through SPM’s options (although SPM’s approach to viewing the design matrix, where one can point and click on each beta for each regressor, is still far better than any other interface I have used).
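Running the review script is as simple as it sounds; a quick sketch, assuming the results directory that the processing script creates (the directory name here is hypothetical):

    # From inside the subject's results directory:
    cd subj01.results
    tcsh @ss_review_driver
    # Steps through the censored TRs, the alignment, and the X matrix,
    # pausing at each so you can actually look at your data.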
A couple of observations during the lectures:
-There is a new program called 3dREMLfit (REML: Restricted Maximum Likelihood) that takes into account both the estimate of the beta for each regressor and its variance. This information is then carried to the second level for the group analysis, where betas from subjects with high variance are weighted less than those from subjects with a much tighter variance around each estimate. The concept is akin to Bayesian “shrinkage”, in which a parameter estimate is pulled toward where the majority of the data lie, attenuating the effect of outliers. The second-level program, a tool called 3dMEMA (Mixed-Effects Meta-Analysis), uses the results from 3dREMLfit.
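A sketch of how the two programs fit together (all dataset names and sub-brick indices below are hypothetical; check yours with 3dinfo): 3dREMLfit writes out both the betas and their t-statistics, and 3dMEMA takes a beta/t-stat pair per subject:

    # First level: REML fit of the design matrix, saving betas and t-stats:
    3dREMLfit -matrix X.xmat.1D -input all_runs+tlrc -tout \
              -Rbuck stats_subj01 -Rvar stats_subj01_var

    # Second level: give 3dMEMA each subject's beta and t-stat sub-bricks
    # (the sub-brick indices here are made up for illustration):
    3dMEMA -prefix group_condA \
           -set condA \
               subj01 stats_subj01+tlrc'[1]' stats_subj01+tlrc'[2]' \
               subj02 stats_subj02+tlrc'[1]' stats_subj02+tlrc'[2]'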
-Daniel Glen discussed some new features that will be implemented in AFNI’s registration methods, such as new atlases for monkey and rat populations. In addition, you can create your own atlas and use it to determine where an activation occurred. Still in the works: enabling AFNI’s built-in atlas searcher, whereami, to link up with web-based atlases and to display relevant literature and theories associated with the selected region or voxel. This is similar to Caret’s method of displaying what a selected brain region is hypothesized to do.
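For reference, whereami can already be queried straight from the command line; a minimal sketch (the coordinate is arbitrary):

    # Report atlas labels at a coordinate (RAI/DICOM order by default):
    whereami 25 -41 52

    # Or restrict the report to a single atlas:
    whereami 25 -41 52 -atlas TT_Daemon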
That’s about it for today. Tomorrow will cover the interactive features of AFNI, including checking anatomical-EPI registration and overlaying idealized timecourses (i.e., your model) on top of the raw data. Hot!