SPM: Setting the Origin and Normalization (Feat. Chad)

Of all the preprocessing steps for FMRI data, normalization is the most susceptible to errors, failure, mistakes, madness, and demonic possession. This step involves applying warps (just another term for transformations) to your anatomical and functional datasets in order to match them to a standardized space; in other words, all of your images will be squarely placed within a bounding box that has the same dimensions for each image, and each image will be oriented similarly.

To visualize this, imagine that you have twenty individual shoes - possibly, those single shoes you find discarded along the highways of America - each corresponding to an individual anatomical image. You also have a shoe box, corresponding to the standardized space, or template. Now, some of the shoes are big, some are small, and some have bizarre contours which prevent their fitting comfortably in the box.

However, due to a perverted Procrustean desire, you want all of those shoes to fit inside the box exactly; each shoe should have the toe and heel just touching the front and back of the box, and the sides of the shoes should barely graze the cardboard. If a particular shoe does not fit these requirements, you make it fit; excess length is hacked off*, while smaller footwear is stretched to the boundaries; extra rubber on the soles is either filed down or padded, until the shoe fits inside the box perfectly; and the resulting shoes, while bearing little similarity to their original shape, will all be roughly the same size.

This, in a nutshell, is what happens during normalization. However, it can easily fail and lead to wonky-looking normalized brains, usually with abnormal skewing along a particular dimension. This can often be explained by a faulty starting location, which can then lead to getting trapped in what is called a local minimum.

To visualize this concept, imagine a boulder rolling down valleys. The lowest point that the boulder can fall into represents the best solution; the boulder - named Chad - is happiest when he is at the lowest point he can find. However, there are several dips and dells and dales and swales that Chad can roll into, and if he doesn't search around far enough, he may imagine himself to be in the lowest place in the valley - even if that is not necessarily the case. In the picture below, let's say that Chad starts between points A and B; if he looks at the two options, he chooses B, since it is lower, and Chad is therefore happier. However, Chad, in his shortsightedness, has failed to look beyond those two options and descry option C, which in truth is the lowest point of all the valleys.



This represents a faulty starting position; and although Chad could extend the range of his search, the range of his gaze, and behold all of the options underneath the pandemonium of the dying sun, this would take far longer. Think of this as corresponding to the search space; expanding this space requires more computing time, which is undesirable.

To mitigate this problem, we can give Chad a hand by placing him in a location where he is more likely to find the optimal solution. For example, let us place Chad closer to C - conceivably, even within C itself - and he will find it much easier to roll his rotund, rocky little body into the soft, warm, womb-like crater of option C, and thus obtain a boulder's beggar's bliss.

(For the mathematically inclined, the contours of the valley represent the cost function; the boulder's position represents the current estimate of how well the source image matches the template image; and each letter (A, B, and C) represents a possible minimum in the cost function.)
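To make that a bit more concrete, one common cost function - roughly what SPM's classic least-squares normalization minimizes, give or take some regularization terms - is the sum of squared differences between the template and the warped source image:

$$J(p) = \sum_{x} \left[ T(x) - S(M_p\,x) \right]^2$$

where T is the template, S is the source image, and M_p is the transformation defined by the current parameter estimates p. Chad's position in the valley corresponds to p, his altitude corresponds to J(p), and normalization is simply the search for the p with the lowest altitude.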


As with Chad, so with your anatomical images. It is well for the neuroimager to know that the origin (i.e., coordinates 0,0,0) of both Talairach and MNI space is roughly located at the anterior commissure of the brain; therefore, it behooves you to set the origins of your anatomical images to the anterior commissure as well. The following tutorial will show you how to do this in SPM, where this technique is most important:




Once we have successfully warped our anatomical image to a template space, the reason for coregistration becomes apparent: Since our T2-weighted functional images were in roughly the same space as the anatomical image, we can apply the same warps used on the anatomical image to the functional images. This is where the "Other Images" option comes into play in the SPM interface.



As always, check your registration. Then, check it again. Then, ask someone else to check it. (This is a great way to meet girls.) In particular, check to make sure that the internal structures (such as the ventricles) are properly aligned between the template image and your warped images; matching the internal variability of the template image is much trickier, and therefore much more susceptible to failure - even if the outer boundaries of the brain look as though they match up.


*Actually, it's more accurate to say that it is compressed. However, once I started with the Procrustean thing, I just had to roll with it.
 

Coregistration Demonstrations

Coregistration - the alignment of two separate modalities, such as T1-weighted and T2-weighted images - is an important precursor to normalization. This is because 1) it aligns the anatomical and functional images into the same space and orientation, and 2) any warps applied to the anatomical image can then be accurately applied to the functional images as well. You can create a homemade demonstration of this yourself, using nothing more than a deck of playing cards, a lemon, and a belt.



However, before doing either coregistration or normalization, it is often useful to manually set the origin of the anatomical image (or whichever image you will be warping to a standardized space) so that it is in as close an alignment with the template image as possible. Since the origins of both the MNI and Talairach standardized spaces are located approximately at the anterior commissure, the origin of the anatomical image should be placed there as well; this provides a better starting point for the normalization process and increases the likelihood of success. The following tutorial shows you how to do this, as well as what the anterior commissure looks like.



Once this is done, you are ready to proceed with the coregistration step. Usually the average EPI image - output from the realignment step - will be used as the source image (the image that is moved around), while the anatomical image will be used as the reference image (the image that remains stationary). The estimated transformation is then applied to the rest of the functional images to bring everything into harmonious alignment.


FMRI Motion Correction: AFNI's 3dvolreg

I. Introduction

The fortress of FMRI is constantly besieged by enemies. Noisy data lead to difficulties in sifting the gold of signal from the flotsam of noise; ridiculous assumptions are made about blood flow patterns and how they relate to underlying neural activity; and signal is corrupted by motions of the head, whether due to agitation, the sudden and violent ejection of wind, or the attempt to free oneself from such a hideous, noisy, and unnatural environment.

This last besetting weakness is the root of much pain and suffering for neuroimagers. Consider that images are acquired on the order of seconds and strung together as a series of snapshots over a period of minutes. Consider also that we deal with puny, squirmy, weak-willed humans, unable to remain still as death for any duration. Finally, consider that head motion may occur at any time during the acquisition of our images - as though we were using a slow shutter speed to take a picture of a moving target.

Coregistration - the spatial alignment of images - attempts to correct these problems. (Note that the term coregistration encompasses both registration across modalities, such as T2-weighted images to a T1-weighted anatomical, and registration within a single modality. The latter is often referred to as motion correction.) For example, given a time series of T2-weighted images, coregistration will attempt to align all of those images to a reference image. This reference image can be any one of the individual functional images in the time series, although using the functional image acquired closest in time to the anatomical image can lead to better initial alignment. Once a reference image has been chosen, spatial deviations are calculated between the reference image and every other functional image in the time series, and each image is then shifted by the inverse of its calculated deviation to bring it back into alignment with the reference.
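For example, if the anatomical image was acquired immediately after the functional run, the last volume of the run is a natural choice of reference. A quick way to find its index (using r01.tshift+orig, the same hypothetical dataset that appears in the 3dvolreg example below) is:

3dinfo -nv r01.tshift+orig

This prints the number of volumes in the dataset; subtract one to get the zero-based sub-brick index of the last volume (a 165-volume run, for instance, ends at sub-brick [164]).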


II. Rigid-body transformations

In what ways can images deviate from each other? Often we assume that images taken from the same subject can be realigned using rigid-body transformations. This means that the size and shape of the registered images are the same, and that the images differ only in translations along the x-, y-, and z-axes and in three rotation angles (roll, pitch, and yaw). Each of these can be shown by a simple example. First, locate your head and prepare to move it. Ready?
  1. Fix your vacant stare upon an attractive person in front of you. This can be someone in either a classroom or a workplace setting. While you stare, keep your body still and only move your head to the left and right. This is moving along the x-axis.
  2. While the rest of your body remains immobile, again move your head - this time, directly forward and directly backward. This is moving along the y-axis.
  3. Keep staring. Now extend your neck directly upward, and compress it as you come downward. This is moving along the z-axis.
  4. Are you feeling that telluric connection with her yet? Perhaps these next few moves will get her to notice you. Nod your head vigorously back and forth in a "Yes" motion. This is called the pitch rotation, and will entice her to approach you.
  5. Now, send mixed signals by shaking your head "No". This is called the yaw rotation, and will both confuse her and heighten the sexual tension.
  6. Finally, do something completely different and roll your head to the side as though touching your ears to your shoulders. This is called the roll rotation, and will make her think you either have a rare movement disorder or are batshit insane. Now you are irresistible.
The correct execution of these moves can be found in the following video.
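For those who prefer matrices to flirting, those six movements can be written as a single rigid-body transformation: a rotation built from the pitch, roll, and yaw angles, followed by a translation along x, y, and z. As a sketch (the exact order and sign conventions of the rotations vary between software packages, so don't take this as AFNI's precise internal convention):

$$\mathbf{x}' = R_z(\mathrm{yaw})\,R_y(\mathrm{roll})\,R_x(\mathrm{pitch})\,\mathbf{x} + (\Delta x, \Delta y, \Delta z)$$

Three angles plus three translations make up the six parameters that 3dvolreg estimates for every volume in the time series.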



III. 3dvolreg

3dvolreg, the AFNI command to perform motion correction, will estimate spatial deviations between the reference functional image and other functional images using each of the above movement parameters. The deviation for each image is calculated and output into a movement file which can then be used to censor (i.e., remove from the model) timepoints that contain too much motion.

A typical 3dvolreg command requires the following arguments:

  • -base (sub-brick): Use this sub-brick of the functional dataset as the reference volume.
  • -zpad (n): Pad each volume with n zero-valued voxels around the edges prior to motion correction, then remove them afterward.
  • (Interpolation method): Can be -linear, -cubic, -heptic, or -Fourier; in general, higher-order interpolations are slower but produce better results.
  • -prefix (label): Label for the output dataset.
  • -1Dfile (label): Name of the text file containing the motion estimates for each volume.
  • -1Dmatrix_save (label): Name of the text file containing the matrix transformation from each volume to the reference volume. Can be used later with 3dAllineate to warp each functional volume to a standard space (an example is given below the command).
  • (input): The functional dataset to be motion-corrected.

Assume that we have already slice-time corrected a dataset, named r01.tshift+orig. Example command for motion correction:
3dvolreg -verbose -zpad 1 -base r01.tshift+orig'[164]' -heptic -prefix r01_MC -1Dfile r01_motion.1D -1Dmatrix_save mat.r01.1D r01.tshift+orig
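As an aside on the -1Dmatrix_save output: the saved matrices can later be handed to 3dAllineate, which applies one matrix per volume. The following sketch simply reproduces the motion correction; in practice, you would first concatenate these matrices with your anatomical-to-template transform (e.g., with cat_matvec) so that each volume is resampled only once:

3dAllineate -input r01.tshift+orig -1Dmatrix_apply mat.r01.1D -final wsinc5 -prefix r01_MC_allineate

The result should be essentially the same as r01_MC above, give or take small differences due to the interpolation method.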

After you have run motion correction, view the results in the AFNI GUI. (It is helpful to open two windows, one with the motion-corrected data and one with the uncorrected data.) If you select the same voxel in each window, you will notice that the values are different. Because the motion-corrected data have been slightly shifted away from the locations that were originally sampled, your chosen spatial interpolation method estimates the intensity at each new voxel location by sampling nearby voxels. Lower-order interpolation methods usually take a weighted average over the intensities of the immediately neighboring voxels, while higher-order interpolations use information from a wider range of nearby voxels. Assuming you have a relatively new machine running AFNI, 3dvolreg is wicked fast, so heptic or Fourier interpolation is recommended.
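If you are curious how much the choice of interpolation actually matters, one quick way to see for yourself (a rough sketch, reusing the hypothetical dataset names from above) is to run the motion correction twice and subtract the two results:

3dvolreg -linear -base r01.tshift+orig'[164]' -prefix r01_MC_lin r01.tshift+orig
3dvolreg -heptic -base r01.tshift+orig'[164]' -prefix r01_MC_hep r01.tshift+orig
3dcalc -a r01_MC_lin+orig -b r01_MC_hep+orig -expr 'a-b' -prefix interp_diff

Loading interp_diff in the AFNI viewer shows where the two schemes disagree - typically near the edges of the brain and other high-contrast boundaries.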

Lastly, AFNI's 1dplot can graph the movement parameters dumped into the .1D files. A special option passed to 1dplot, -volreg, will label each column of the .1D file with the appropriate movement parameter.

Example command:
1dplot -volreg -sepscl r01_motion.1D
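And, as mentioned above, the same motion file can be used to flag high-motion timepoints for censoring. One way to do this (a sketch using AFNI's 1d_tool.py, with a censoring threshold of 0.3 - a common choice, but by no means a sacred one) is:

1d_tool.py -infile r01_motion.1D -set_nruns 1 -show_censor_count -censor_prev_TR -censor_motion 0.3 motion_r01

This writes out, among other things, a censor file (motion_r01_censor.1D) that can later be handed to 3dDeconvolve's -censor option to remove the offending timepoints from the model.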

IV. Potential Issues

Most realignment programs, including 3dvolreg, use an iterative process: small translations and rotations along the x-, y-, and z-axes are made until a minimum in the cost function is found. However, there is always the danger that this is a local minimum, not a global minimum. In other words, 3dvolreg may think it has done a good job in overlaying one image on top of the other, but a larger movement may have led to an even better fit. As always, look at your data both before and after registration to assess the goodness of fit.

Also note that motions that occur on the scale of less than a TR (e.g., less than 2-3 seconds) cannot be corrected by 3dvolreg, as it assumes that any rigid-body motion occurs across volumes. There are more sophisticated techniques which try to address this, with varying levels of success. For now, accept that your motion correction will never be perfect.