Reimagining autism screening: A dialog with Roald Øien | Spectrum
For the past five years, Roald Øien has stress-tested the Modified Checklist for Autism in Toddlers (M-CHAT), a go-to screen for autism introduced in 2001. His results are far from encouraging: the screen misses more than 70 percent of children who are later diagnosed with autism, while most of the children it flags turn out not to be autistic. It can also confuse autism with other developmental conditions, such as intellectual disability.
The M-CHAT-R, a 2014 revision that trimmed the number of questions from 23 to 20 and adjusted the cutoff score, does not do much better. To assess its effectiveness, Øien and his colleagues retrospectively applied the new cutoff algorithm to M-CHAT data from 54,463 18-month-old children. The team described its results in Autism Research in November.
The move decreased false positives – non-autistic children mistakenly flagged as autistic – by 2.4 percent, but it also increased false negatives – autistic children the screen overlooked – by 3.6 percent, a trade-off Øien is not willing to accept.
Spectrum spoke with Øien, professor of special education and developmental psychology at Arctic University of Norway and associate professor at the Yale Child Study Center, about the work and the conversations he hopes it will inspire around screening, developmental monitoring and care for autistic children.
Spectrum: What drives your work on the M-CHAT?
Roald Øien: I am motivated to lower the age of diagnosis, which seems to be pretty stable. Even if we do massive screening, as recommended in the United States, it doesn't seem to affect the age of diagnosis at the country level. And there are studies showing that universal screening is really expensive.
We know that the parents of many autistic children have concerns by around 15 months of age. Yet, as many studies show, the average age at diagnosis is 3 to 4 years. That's a long time for parents to worry. Lowering the age of diagnosis could benefit both parents' coping and children's outcomes, given that early detection is associated with better outcomes. I have a daughter with autism. She is 15. So I understand that early identification, and shortening that period of concern, is of great importance to parents.
S: What prompted your new analysis?
RØ: I want to start a discussion about screening tools and how we try to improve them by moving thresholds or cutoffs without actually changing the measurement itself. I don't really know if that's the way. Nobody had looked at this particular algorithmic change, and we wanted to see whether it really helps identify children with autism.
S: What exactly was the algorithm change?
RØ: In the original M-CHAT, a child screened positive if their parents answered "no" to 2 or more of the 6 "critical items" that best predict autism, or to 3 or more of the 23 questions overall. Our studies show that most children with autism fall below both of those thresholds. In the M-CHAT-R, the algorithm was changed so that everyone who scores above 2 points receives a follow-up. Three items that appear to be poor predictors of autism were also removed.
We applied the new algorithm to the old M-CHAT. We cannot say for sure that the algorithm is the only thing driving the difference between M-CHAT and M-CHAT-R results. But it is worth hypothesizing that much of the change in identification rates stems from the new algorithm rather than from the questions themselves.
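The two decision rules Øien describes can be sketched in code. This is a minimal illustration only: the critical-item set and item numbers below are placeholders, not the published instrument, and real M-CHAT administration involves follow-up interviews not modeled here.

```python
# Sketch of the two M-CHAT scoring rules described above.
# The critical-item set is a placeholder for illustration.

def mchat_original(failed_items, critical_items):
    """Original M-CHAT: positive if the child fails 2+ of the
    6 critical items, or 3+ of the 23 items overall."""
    critical_fails = sum(1 for i in failed_items if i in critical_items)
    return critical_fails >= 2 or len(failed_items) >= 3

def mchat_r(failed_items):
    """M-CHAT-R: follow-up if the total score is above 2,
    i.e. 3 or more failed items."""
    return len(failed_items) > 2

CRITICAL = {2, 7, 9, 13, 14, 15}  # hypothetical critical items

# A child failing only two items, both critical, is flagged by the
# original rule but falls below the M-CHAT-R cutoff:
print(mchat_original({2, 7}, CRITICAL))  # True
print(mchat_r({2, 7}))                   # False
```

The example shows how moving the cutoff can convert a positive screen into a miss, which is the false-negative trade-off Øien's analysis quantifies.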
S: How should researchers weigh false positives and false negatives? Is one “worse” than the other?
RØ: Everything in life involves a trade-off, right? So it is only a matter of deciding whether this trade-off is acceptable. If you live in a world where the M-CHAT is supposed to be an autism-specific tool and you are not interested in other developmental conditions, it may be worth missing many children with possible other diagnoses. On the other hand, we see that when we lose the false positives, we also lose some of the children with autism. And I don't know if that's a trade-off I personally want to say I'm comfortable with.
This paper does not revolutionize screening. It is part of the discussion about how to find solutions that are more affordable, more precise and capture more of the broader autism phenotype than the M-CHAT can on its own.
S: What suggestions do you have for improving screening?
RØ: We need to rethink universal screening. In Europe, we carry out developmental monitoring instead. Screening shouldn't happen only at 18 and 24 months. We should practice developmental monitoring at both 18 and 24 months of age, and maybe at 5, 6 and 7 years, because we know that a large portion of this group is identified later, at school age.
I'm not sure we can improve the M-CHAT as it is now. I don't think that's the goal either. It is still useful for spotting the prototypical signs of autism in a specific subset of children. But we should use it with caution and be aware of all of its limitations. And we need clinicians who know that many children are missed. It's not a yes or no on autism.
And there could be many other ways to conduct developmental screening. There are other screening tools, such as the Ages and Stages Questionnaires and the Parents' Evaluation of Developmental Status. There are many instruments out there, and I don't think any single one will ever capture everything.
S: What are the next steps for you?
RØ: Well, I think the M-CHAT is a closed chapter for me now. We have shown the limitations, and not just of the M-CHAT; they apply to all screening tools. It doesn't necessarily matter whether you change the scores, the algorithms, the cutoffs or the wording and explanations. It's more about having discussions that move this forward. That's what I really want.
Cite this article: https://doi.org/10.53053/SZOT5712