Fellow brainbloggers,
I will be providing more details on each of the ICA steps, but for those who need to do it right now, there is a playlist up on YouTube, a survival kit of sorts, which will guide you through an example analysis from start to finish and point out dangers along the way. As I was putting this together over the weekend, I encountered a few particularly pernicious pitfalls, precariously poised on the precipice of total annihilation:
1. When running dual_regression, make sure that your input list points to functional data that has not only been preprocessed, but also normalized to standard space. FSL often automatically creates a dataset called filtered_func_data.nii.gz in both the main .feat directory and the normalized directory; choose the normalized one. (See the dual_regression sketch after this list.)
2. You can do multi-session MELODIC through the GUI, but only if each subject has the same number of TRs. If there is a discrepancy, MELODIC will exit once it encounters a subject with a different number of volumes. In that case, you need to analyze each subject individually (or batch together subjects who have the same number of TRs), and then run melodic from the command line, using a command such as:
melodic -i ICA_List.txt -a concat -o ICA_Output --nobet -d 30 --mmthresh 0.5 --tr 2.5
where ICA_List.txt contains the path to the preprocessed, normalized data for each subject, one per row. (A quick way to generate this file is shown below, after this list.)
3. I previously mentioned that you should not insult our future robot overlords, and instead leave the dimensionality estimation to FSL. However, automatic estimation can return a large number of components, and it is often a good idea to set the dimensionality manually to around 30, give or take a few components. Usually around 30 components hits the sweet spot between underfitting (merging distinct networks into a single component) and overfitting (splintering a single network across many components).
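To make the first pitfall concrete, here is a minimal dual_regression sketch. The subject directories (sub01.feat, sub02.feat) and the design files are hypothetical placeholders; the group IC maps come from the ICA_Output directory created by the melodic command above, and each input is that subject's normalized filtered_func_data:

# Hypothetical paths; ICA_Output/melodic_IC is the group IC map from the melodic command above
dual_regression ICA_Output/melodic_IC 1 design.mat design.con 500 dr_output \
    sub01.feat/reg_standard/filtered_func_data.nii.gz \
    sub02.feat/reg_standard/filtered_func_data.nii.gz

The 1 turns on variance normalization of the subject timecourses, and 500 is the number of randomise permutations; run dual_regression with no arguments to see the full usage for your version of FSL.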
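And for the second pitfall, one quick way to build ICA_List.txt, assuming a hypothetical layout where each subject's normalized data lives under subXX.feat/reg_standard:

# Adjust the wildcard to match your own directory naming
ls -1 sub*.feat/reg_standard/filtered_func_data.nii.gz > ICA_List.txt

Eyeball the resulting file afterwards to make sure every row points at the normalized data, not the native-space version.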
Those are my observations from the front. As always, take these tutorials with a large calculus of salt.