Parameter Extraction in AFNI: 3dmaskave and 3dmaskdump

Previously we showed how to extract parameters using Marsbar in SPM and featquery in FSL; the concept is identical in AFNI. Once you have created a mask (e.g., using 3dUndump or 3dcalc), you can extract parameter estimates from that ROI using either 3dmaskave or 3dmaskdump.
3dmaskave is quicker and more efficient, and is probably what you will need most of the time. Simply supply a mask and the dataset you wish to extract from, and it will generate a single number: the average parameter estimate across all the voxels within that ROI. For example, let's say that I want to extract beta weights from an ROI centered on the left nucleus accumbens, and I have already created a 5mm sphere around that structure, stored in a dataset called LeftNaccMask+tlrc. Furthermore, let's say that the beta weights I want to extract are in a beta map contained in the second sub-brik of my statistical output dataset. (Remember that in AFNI, sub-briks are numbered starting at 0, so the "second" sub-brik is sub-brik #1.) To do this, use a command like the following:

3dmaskave -mask LeftNaccMask+tlrc stats.202+tlrc'[1]'

This will generate a single number, which is the average beta value across all the voxels in your ROI.

The second approach is to use 3dmaskdump, which provides more information than 3dmaskave. This command will generate a text file containing the beta value at each individual voxel within the ROI. A couple of useful options are -noijk, which suppresses the output of the i, j, k voxel indices, and -xyz, which outputs the x, y, z coordinates of each voxel in the orientation of the master dataset (usually RAI). For example, to output a list of beta values into a text file called LeftNaccDumpMask.txt:

3dmaskdump -o LeftNaccDumpMask.txt -noijk -xyz -mask LeftNaccMask+tlrc stats.202+tlrc'[1]'

This will produce a text file containing four columns: the first three columns are the x-, y-, and z-coordinates, and the fourth column is the beta value at that triplet of coordinates. You can take the average of the fourth column by exporting the text file to a spreadsheet program like Excel, or by using a command such as awk from the command line, e.g.

awk '{sum += $4} END {print "Average = ", sum/NR}' LeftNaccDumpMask.txt


Keep in mind that this is only for a single subject; when you perform a second-level analysis, usually what you will want to do is loop this over all of the subjects in your experiment, and perform a statistical test (e.g., t-test) on the resulting beta values.
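As a sketch of what that loop might look like (the subject IDs and file names here are placeholders; substitute your own), the -quiet flag restricts 3dmaskave's output to just the average value, which makes it easy to collect one number per subject into a single text file:

```shell
#!/bin/bash
# Hypothetical subject IDs; replace with the subjects in your experiment
for subj in 101 102 103; do
    # -quiet prints only the ROI average for this subject's beta map (sub-brik 1)
    3dmaskave -quiet -mask LeftNaccMask+tlrc stats.${subj}+tlrc'[1]' >> NaccBetas.txt
done
```

The resulting file NaccBetas.txt contains one average beta value per subject, ready to be carried into a t-test in your statistics package of choice.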






Concluding Unscientific Postscript

I recently came across this recording of Schubert's Wanderer Fantasie, and I can't help but share it here; this guy's execution is damn near flawless, and, given both the time of the recording and some of the inevitable mistakes that come up, I have good reason to believe it was done in a single take. It's no secret that I do not listen to that much modern music, but it isn't that modern music is bad, necessarily; it's just that classical music is so good. Check out the melodic line around 16:30 to hear what I'm talking about.



What is Percent Signal Change? And Why are People Afraid of It?

A few years ago when I was writing up my first publication, I was gently reprimanded by a postdoc for creating a figure showing my results in percent signal change. After boxing my ears and cuffing me across the pate, he exhorted me to never do that again; his tone was much the same as parents telling their wayward daughters, recently dishonored, to never again darken their door.

Years later, I am beginning to understand the reason for his outburst: much of the time, what we speak of as percent signal change really isn't. All of the major neuroimaging analysis packages scale the data in order to compare signal across sessions and subjects, but expressing the result in terms of percent signal change can be at best misleading, at worst fatal.

Why, then, was I compelled to change my figure to parameter estimates? Because what we usually report are the beta weights themselves, which are not synonymous with percent signal change. When we estimate a beta weight, we are estimating the amount of scaling needed to best match a canonical BOLD response to the raw data; a better approximation of true percent signal change would be the fitted response, not the beta weight itself.



Even then, percent signal change is not always appropriate: recall the term "global scaling." This means comparing signal at each voxel against a baseline average of signal taken from the entire brain; this does not take into consideration intrinsic signal differences between, say, white and grey matter, or other tissue classes that one may encounter in the wilderness of those few cubic centimeters within your skull.
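By contrast, scaling each voxel's time series by that voxel's own mean sidesteps the tissue-class problem, since every voxel is normalized against itself rather than against a whole-brain average; this is the approach AFNI's preprocessing takes. A minimal sketch, assuming a run stored in a dataset called run1+tlrc (the dataset and output names here are placeholders):

```shell
# Compute each voxel's mean across time
3dTstat -prefix run1_mean run1+tlrc
# Scale each voxel's time series to a mean of 100, capping extreme values at 200
3dcalc -a run1+tlrc -b run1_mean+tlrc \
       -expr 'min(200, a/b*100)*step(a)*step(b)' \
       -prefix run1_scaled
```

With the data scaled this way, a beta estimated from run1_scaled can be read as percent signal change per unit of the regressor, rather than in arbitrary scanner units.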

You can calculate more accurate percent signal change; see Gläscher (2009), or the MarsBar documentation.

Not everybody should analyze FMRI data; but if they cannot contain, it is better for one to be like me, and report parameter estimates, than to report spurious percent signal change, and burn.