How autism researchers are tackling brain imaging’s replication problem
When Maxwell Elliott’s latest research paper circulated on Twitter last June, he wasn’t sure how to feel.
Elliott, a PhD student in clinical psychology in Ahmad Hariri’s laboratory at Duke University in Durham, North Carolina, is studying functional magnetic resonance imaging (fMRI) and how it can be used to better understand neurological conditions such as dementia and autism.
He was happy that this niche corner of the field, as he describes it, was getting some attention, but the reason for the buzz disappointed him: A news outlet had picked up the story under an exaggerated headline: “Duke University researchers say every brain activity study you’ve ever read is wrong.”
“They just seemed to get it totally wrong,” says Elliott.
His study, published in Psychological Science, didn’t invalidate 30 years of fMRI research, as the headline suggested. It did, however, question the reliability of experiments that use fMRI to find differences in how individual people’s brains respond to a stimulus. During certain tasks, such as emotion processing, individual differences in brain activity patterns do not hold up when people are scanned multiple times, months apart, Elliott and his colleagues had shown.
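To get a feel for what “does not hold up” means, consider a minimal simulation – an illustration only, not Elliott’s actual analysis, and with invented effect sizes: if each person’s stable “trait” signal is swamped by session-to-session noise, two scans of the same people barely correlate.

```python
import numpy as np

# Minimal simulation of the reliability problem (illustrative numbers only,
# not Elliott's actual analysis): each subject has a stable "trait" level of
# task-evoked activity, but every scan adds large session-to-session noise.
rng = np.random.default_rng(0)
n_subjects = 100
trait = rng.normal(0.0, 1.0, n_subjects)        # stable individual differences
noise_sd = 2.0                                  # session noise swamps the trait signal

session1 = trait + rng.normal(0.0, noise_sd, n_subjects)
session2 = trait + rng.normal(0.0, noise_sd, n_subjects)

# Test-retest reliability, approximated here by the correlation between sessions.
r = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")  # about 0.2, despite a real, stable trait
```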
Elliott worried about how researchers who use such patterns to differentiate between people with and without certain neuropsychiatric conditions, including autism, would react. The sensationalist coverage only heightened that concern.
Many in the brain imaging field stepped in to defend Elliott and his colleagues and condemned the news outlet’s mischaracterization, but others pushed back against the study and what they saw as an overgeneralization of fMRI’s limits. On both sides, the idea that fMRI has a reliability problem hit a nerve – one that reverberated far and wide across the Twittersphere.
For all the chatter over the past year, concerns about brain imaging research are nothing new. Critics have previously charged that fMRI is so error-prone that it can detect signals in a dead salmon, and that even breathing can skew the results of brain scans beyond usefulness.
Still, many brain imaging researchers are optimistic about what their tools can do. They do not dismiss the problems other scientists have raised, but they tend to view them as growing pains in a still relatively young field. Some problems, particularly those arising from data collection and analysis, are likely to be solved; others may be endemic to the methods themselves. But none is as dire as many tweets and headlines proclaim.
“It doesn’t hurt to remind people that no matter how far along the field is, we should keep these limitations in mind,” says Kevin Pelphrey, professor of neurology at the Brain Institute at the University of Virginia in Charlottesville.
Cleaning the pipeline:
Some problems with imaging studies begin at data collection. The Autism Brain Imaging Data Exchange (ABIDE) dataset, for example, started with a relatively small number of scans – from 539 autistic people and 573 non-autistic people – collected at 17 different sites, and early analyses of the dataset revealed considerable variability depending on where the scans took place. A 2015 study showed that this variability can even lead to spurious results.
Since then, the researchers behind ABIDE have added another 1,000 brain scans, collected across 19 sites in total. The larger sample size should ease some of the replicability issues with the scans, the team says. They also plan to meet as a group to discuss how to standardize data collection.
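Why pooling multi-site scans can mislead is easy to demonstrate with a toy example – a sketch with made-up numbers, not an analysis of ABIDE itself: if scanners at different sites measure with different offsets and the diagnostic groups are unevenly spread across sites, a naive comparison “finds” a group difference where none exists.

```python
import numpy as np
from scipy import stats

# Toy demonstration of a site confound (invented numbers, not ABIDE data):
# two scanners measure the same quantity with different offsets, and the
# diagnostic groups are unevenly distributed across the two sites.
rng = np.random.default_rng(1)

def scan(n, site_offset):
    # No true group effect anywhere -- only a per-site measurement offset.
    return rng.normal(0.0, 1.0, n) + site_offset

autistic = np.concatenate([scan(80, 1.0), scan(20, 0.0)])  # mostly scanned at site A
control  = np.concatenate([scan(20, 1.0), scan(80, 0.0)])  # mostly scanned at site B

t, p = stats.ttest_ind(autistic, control)
print(f"t = {t:.2f}, p = {p:.4g}")  # a 'significant' difference driven entirely by site
```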
Other problems arise when imaging data reaches the analysis stage. In May 2020, just weeks before Elliott and his colleagues published their study, a team from Tel Aviv University in Israel published an article in Nature showing how different pipelines for analyzing fMRI data can lead to vastly different results.
In that work, 70 research teams analyzed the same raw fMRI dataset, each testing the same nine hypotheses. But because no two teams used the exact same workflow to analyze the scans, no two reported identical results.
Workflow issues also plague structural MRI scans, which show the brain’s anatomy rather than its activity. Although this form of MRI is more reliable from scan to scan than fMRI, an unpublished study cited in the paper by Elliott and his colleagues showed that 42 groups attempting to trace white-matter tracts through the same brain rarely produced the same results. (A 2017 study found that groups successfully identified 90 percent of the underlying tracts, but not without producing a large number of false positives.)
Inconsistencies in workflow are one of the main reasons brain imaging results are difficult to reproduce, says Pelphrey. Every decision made while analyzing a set of images – where to set a threshold for brain activity, for example – rests on assumptions that can ultimately have meaningful effects, he says.
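A minimal sketch of that point – the data are simulated, though both cutoffs below are commonly used in fMRI analyses: the same statistical map yields noticeably different numbers of “active” voxels depending on where the threshold is set.

```python
import numpy as np

# Sketch of one analysis decision: where to threshold a statistical map.
# Same simulated data, two defensible cutoffs, two different "results".
rng = np.random.default_rng(2)
z_map = rng.normal(0.0, 1.0, size=(64, 64, 32))  # stand-in for a whole-brain z-map
z_map[20:28, 20:28, 10:14] += 1.5                # one weak but genuinely active region

for z_cutoff in (2.3, 3.1):  # both cutoffs are in common use
    n_active = int((z_map > z_cutoff).sum())
    print(f"z > {z_cutoff}: {n_active} suprathreshold voxels")
```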
Similarly, the different methods researchers use to trace white matter can change whether two closely spaced tracts are interpreted as crossing like an “X” or merely “kissing” and then bending away from each other, says Ruth Carper, associate research professor of psychology at San Diego State University in California. Decisions about how to define the boundary between gray and white matter in the cortex can also skew results.
To address these kinds of issues, the Organization for Human Brain Mapping (OHBM) issued recommendations for analyzing MRI and fMRI data in 2016, and the following year it added guidelines for electroencephalography and magnetoencephalography, which are also used to collect data from living brains. The organization is working on an update to the recommendations.
The idea behind the guidelines is to help researchers make better decisions throughout their analysis pipeline, says John Darrell Van Horn, professor of psychology and data science at the University of Virginia and co-chair of the OHBM’s best-practices committee.
The guidelines also encourage “everyone to be transparent about [the analyses] they performed, so it’s no secret,” says Van Horn.
More transparency not only improves replicability, but also makes it easier for researchers to work out whether one analysis pipeline is better than another, says Tonya White, associate professor of child and adolescent psychiatry at Erasmus University Rotterdam in the Netherlands, who also worked on the OHBM guidelines.
“If you develop a new algorithm that you say is better but you don’t compare it to the old method, how do you really know it is better?” she says.
Holding on:
For many scientists, better data collection and analysis is the only way forward. Despite the challenges involved – and reports of researchers turning away from brain imaging – the techniques have no substitute for certain lines of research.
“You can’t ask about language in a rat,” says Carper, nor can researchers track changes in the brain over time using post-mortem tissue.
Many imaging findings have held up over time: Certain brain areas activate in all people in response to the same stimuli; people show individual connectivity patterns; and scans can predict key behavioral traits, such as executive function and social skills, says Pelphrey. Where results conflict, as in fMRI studies of reward processing in autism, some researchers have begun using an algorithm that helps identify brain areas that are consistently activated in response to a task.
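The article does not name that algorithm, but the underlying idea of consistency mapping can be sketched in a few lines – a toy version with simulated maps, not the actual method:

```python
import numpy as np

# Toy version of consistency mapping (not the actual algorithm referenced
# above): combine thresholded activation maps from several studies and keep
# only the voxels that are active in most of them.
rng = np.random.default_rng(3)
n_studies = 12
maps = rng.random((n_studies, 64, 64, 32)) < 0.05  # each study's binary map (5% noise hits)
maps[:, 30:36, 30:36, 15:18] |= rng.random((n_studies, 6, 6, 3)) < 0.8  # shared hotspot

overlap = maps.sum(axis=0)                       # number of studies activating each voxel
consistent = overlap >= int(0.75 * n_studies)    # active in at least 75% of studies
print(f"{int(consistent.sum())} consistently activated voxels")
```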
“There are a lot of things that [have been] replicated very well from lab to lab and have constrained theories of how the mind works, based on knowing how the brain works,” says Pelphrey. “I don’t think all of that has gone away just because we know we have to be very careful about the predictions we make from scan to scan within an individual.”
As scientists get better at collecting and analyzing imaging data, they can also refine their interpretations based on a clearer understanding of the technology they are using, Carper says. Most fMRI tasks, for example, were designed to identify brain areas that respond to a particular stimulus across people in general, not to probe individual differences. And certain kinds of naturalistic stimuli can yield more reliable results, Elliott and his colleagues reported in a review in Trends in Cognitive Sciences this month.
The new review outlines other strategies for getting around the fMRI reliability issues the team flagged over the past year, including running longer scans to collect more data, developing better ways to model the noise within a scan, and taking advantage of newer technologies that better isolate markers of neural activity.
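The logic behind longer scans is plain averaging: with more timepoints, within-scan noise shrinks, and each person’s estimate converges on their stable level. A sketch under assumed noise levels – all numbers here are illustrative:

```python
import numpy as np

# Why longer scans help, under simple assumptions: averaging more timepoints
# shrinks within-scan noise, so each subject's estimate converges on their
# stable level and test-retest reliability rises.
rng = np.random.default_rng(4)
n_subjects, noise_sd = 200, 8.0
trait = rng.normal(0.0, 1.0, n_subjects)         # stable individual differences

def session_estimate(n_timepoints):
    noise = rng.normal(0.0, noise_sd, (n_subjects, n_timepoints))
    return trait + noise.mean(axis=1)            # per-subject average over the scan

for n_tp in (50, 200, 800):                      # progressively longer scans
    r = np.corrcoef(session_estimate(n_tp), session_estimate(n_tp))[0, 1]
    print(f"{n_tp:4d} timepoints: test-retest r = {r:.2f}")
```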
Elliott took to Twitter to promote the new work, he says, but he isn’t interested in stirring up more controversy on social media. He is simply curious to see what people make of the paper itself.
“I want to reflect on [the ideas] in a better, longer form than a tweet.”
Cite this article: https://doi.org/10.53053/JGHN8805