LISA improves statistical analysis for fMRI
Gabriele Lohmann
One of the principal goals in functional magnetic resonance imaging (fMRI) is the detection of activation areas in the human brain. The effects that we typically observe in fMRI data are too small to be detectable by the naked eye. Therefore, sophisticated software is needed to make sense of the data. The first computer algorithms for neuroimaging data were developed more than 20 years ago. Many of these early algorithms are still routinely used today.
In our latest publication, we propose a new method for statistical inference. The purpose of statistical inference is to determine whether an effect is "statistically significant" or whether it is just random noise. It is thus a central part of data analysis in neuroimaging.
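To make this concrete, here is a minimal, hypothetical sketch of voxelwise inference on simulated data. It is not the method proposed in the paper; the subject count, voxel count, and effect size are illustrative assumptions.

```python
# Toy illustration of statistical inference in fMRI (not the method from the
# paper): test at each voxel whether the mean effect across subjects is
# distinguishable from random noise. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000
data = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))  # pure noise baseline
data[:, :50] += 0.8                            # simulated true effect in 50 voxels

t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)
significant = p_vals < 0.05 / n_voxels         # Bonferroni correction for 1000 tests
print(f"{significant.sum()} of {n_voxels} voxels declared significant")
```

Voxels whose p-value survives the correction are declared "significant"; everything else is treated as noise.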
We asked Dr. Gabriele Lohmann about her study:
1. Why were you interested in this topic?
My motivation is twofold:
a) The most widely used statistical inference procedures were invented more than 10 years ago and are not well suited to handling state-of-the-art high-resolution neuroimaging data. MRI technology has improved considerably in recent years, with ultra-high-field scanners (>= 7 Tesla) offering greatly improved spatial resolution that standard algorithms were not designed to handle. In particular, standard algorithms lead to a loss in spatial precision, so that one of the main advantages of ultra-high-field scanning is lost due to inadequate software (a sketch of this effect follows after this list).
b) A recent publication by Eklund et al. (PNAS, 2016) showed that some of the most widely used methods of statistical inference produce unreliable results. This finding gained worldwide attention and made headline news in international media outlets. Even though the media outcry at the time was somewhat exaggerated, it nonetheless highlighted that current statistical inference procedures have serious deficiencies. For the two reasons listed above, researchers in our field are now desperate for better approaches to statistical inference in fMRI. This was more than enough motivation for me to become interested in this topic.
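Regarding point a), here is a minimal sketch of why spatial precision suffers, assuming the conventional pipeline in which data are smoothed with a Gaussian kernel before inference; the kernel width used below is an arbitrary illustrative value, not one taken from the paper.

```python
# Sketch of how standard Gaussian spatial smoothing blurs a small,
# high-resolution activation (sigma is an illustrative value).
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((64, 64))
image[30:34, 30:34] = 1.0                     # sharply localized "activation"

smoothed = gaussian_filter(image, sigma=3.0)  # common preprocessing step
print("peak before:", image.max())            # 1.0
print("peak after :", round(smoothed.max(), 3))
# The peak shrinks and the signal spreads into neighboring voxels,
# i.e. the fine spatial detail gained at ultra-high field is lost.
```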
2. What should the average person take away from your study?
Sophisticated mathematical methods are needed in order to make sense of neuroimaging data. The colored "blobs" that are often depicted in articles on neuroimaging are the result of complicated software applications.
3. What is the added value of your study/paper for society?
In our first tests, we found that our method is much more sensitive than previous methods and was able to detect more of the brain activity evoked by our experiments. We therefore hope that our method will help to provide a more complete understanding of brain function. In the future, the insights gained from this basic research may benefit patients with neurological diseases.
4. Are there any major caveats? What questions still need to be addressed?
The brain activity that we are able to detect depends on the number of test subjects that we can recruit and also on the overall measurement time. In recent years, databases containing large amounts of neuroimaging data have become publicly available. These "big data" resources allow us to predict that there is still a lot of brain activity that we fail to detect when we have data from only a few individual subjects.
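To make the dependence on the number of subjects concrete, here is a minimal simulation sketch; the effect size, significance level, and subject counts are assumptions chosen for illustration, not values from the study.

```python
# Sketch: detection power of a simple one-sample t-test as the number of
# subjects grows (effect size 0.3 and alpha 0.05 are illustrative assumptions).
import numpy as np
from scipy import stats

def detection_power(n_subjects, effect=0.3, alpha=0.05, n_sims=2000, seed=1):
    """Fraction of simulated experiments in which the effect is detected."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        sample = rng.normal(effect, 1.0, size=n_subjects)
        if stats.ttest_1samp(sample, popmean=0.0).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (10, 30, 100, 300):
    print(f"n = {n:3d} subjects: power ~ {detection_power(n):.2f}")
```

With only a handful of subjects, most true effects go undetected; large shared databases shift the curve toward reliable detection.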