Deep-learning model could accurately predict autism diagnosis | Spectrum
Consistent prediction: A new model that uses medical records to screen for autism performs consistently across all U.S. states, suggesting it could help overcome structural barriers such as geographical differences in medical record keeping.
Courtesy Ishanu Chattopadhyay
A new deep-learning model appears to outperform a widely used screening test at identifying young children with autism. The algorithm, described October 6 in Science Advances, generates predictions based on patterns of conditions often associated with autism.
“We have known for a long time that children with autism have much higher rates of many diseases, including immunological and gastrointestinal conditions,” says lead investigator Ishanu Chattopadhyay, assistant professor of medicine at the University of Chicago in Illinois. “In this study, we tried to use these underused aspects of the medical history to assess individual risk.”
Doctors typically screen children between the ages of 18 and 24 months for autism using parent questionnaires, such as the Modified Checklist for Autism in Toddlers (M-CHAT), whose accuracy can be compromised by cultural and language barriers. Most of the children the M-CHAT flags for further assessment, about 85 percent, do not have autism. These false positives lengthen wait times for specialist evaluations and delay diagnosis and intervention for children who do have the condition.
“The wait from a positive M-CHAT screening result to a specialized autism assessment can take a year,” says Chattopadhyay.
Because the new model is more accurate than the M-CHAT, it could shorten the wait for a diagnosis, Chattopadhyay says.
It is unclear how well the model would perform in a clinical setting, but the diagnostic delay for many people with autism is so significant that “anything that helps, even if it helps a little, could be of value,” says Thomas Frazier, professor of psychology at John Carroll University in University Heights, Ohio.
The researchers trained their model to identify diagnostic codes grouped into 17 categories of conditions associated with autism, including immunological disorders and infectious diseases. The algorithm combed the electronic health records of more than 4 million children aged 6 and under, including 15,164 with autism, drawn from a U.S. insurance-claims database.
The algorithm compared patterns of co-occurring conditions between autistic and non-autistic children to create an Autism Comorbidity Risk Score (ACoR), an estimate of how likely a child with a specific history of comorbidities is to later be diagnosed with autism. A score above a certain threshold indicates that a child should be referred for diagnostic testing and possible intervention.
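The score-and-threshold logic described above can be sketched in a few lines of Python. The category names, weights, threshold value, and function names below are illustrative assumptions for the sake of the sketch; the published model learns its comorbidity patterns from millions of records and is far more sophisticated.

```python
# Hypothetical sketch of a comorbidity-based risk score with a referral
# threshold. The real ACoR uses 17 learned comorbidity categories; the
# categories, weights, and cutoff here are invented for illustration.

EXAMPLE_WEIGHTS = {
    "immunological": 0.9,     # hypothetical weight
    "infectious": 0.8,        # hypothetical weight
    "gastrointestinal": 0.5,  # hypothetical weight
}

REFERRAL_THRESHOLD = 1.0      # hypothetical cutoff


def acor_like_score(history):
    """Sum the (hypothetical) weights of each comorbidity category
    present in a child's diagnostic history (a set of category names)."""
    return sum(EXAMPLE_WEIGHTS.get(category, 0.0) for category in history)


def should_refer(history):
    """Flag a child for diagnostic evaluation when the score exceeds
    the referral threshold."""
    return acor_like_score(history) > REFERRAL_THRESHOLD


print(should_refer({"immunological", "infectious"}))  # True  (0.9 + 0.8 > 1.0)
print(should_refer({"gastrointestinal"}))             # False (0.5 < 1.0)
```

The design point the sketch captures is that the model turns a child's whole comorbidity history into a single continuous score, so the referral decision reduces to one threshold comparison that could slot into a screening workflow.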
The ACoR accurately identified approximately 82 percent of autistic children just over 2 years of age; accuracy improved to 90 percent by age 4. Children flagged by the ACoR were at least 14 percent more likely to have autism than those flagged by the M-CHAT in a 2019 study conducted at Children’s Hospital of Philadelphia.
The team achieved similar results when they validated the model on the records of 377 autistic and 37,635 non-autistic children aged 6 and under who were seen at the University of Chicago Medical Center between 2006 and 2018.
In both datasets, the model flagged children, on average, more than two years before they received a formal diagnosis. The researchers caution that delays in accessing specialist evaluations may account for part of that difference, although that gap is most likely no longer than a year.
The model’s accuracy did not differ by participants’ race or ethnicity. It also performed consistently across U.S. states, suggesting it may help overcome structural barriers such as geographical differences in medical record keeping. And it distinguished autism from other psychiatric conditions with more than 90 percent accuracy in children between the ages of 2 and 2½ years.
Of the 17 categories of comorbidities, infections and immunological disorders were the most predictive of autism, the study shows.
“I have definitely not seen anyone approach this problem from this angle,” Frazier says. In his view, the tool would best be used as a complement to current screening approaches. “If you had an algorithm that was incredibly cheap to implement and spat out a probability score that could be integrated with the M-CHAT results and fit into the primary-care workflow, that could be useful.”
Clinical implementation could mean “having a panel of comorbidities in place to be screened for at each visit,” says Dwight German, professor of psychiatry at the University of Texas Southwestern in Dallas.
The researchers’ most important next step is to conduct a prospective clinical study to “compare our tool with existing tools and see if we are reducing false positives and the [diagnostic] delay,” says Chattopadhyay.
Clinical trials are key to validating the approach and answering outstanding questions about the algorithm, including how effective the model is at distinguishing autism from the developmental conditions with which it is often confused. And because the model does not reach high accuracy until after age 2, it is unclear how much it would add over direct observation at that point. “Especially in severe cases, behavioral changes would be noticeable to a family doctor at this age,” German says. Frazier adds that he would like to see how well the model identifies autistic children with low support needs.
Chattopadhyay and his colleagues also plan to test the tool for screening for a range of other conditions, he says. “This is a new class of algorithms for analyzing patient data that uses comorbidities and medical history, and it appears to provide clinically relevant predictive power for autism and even other diseases we are investigating.”
Cite this article: https://doi.org/10.53053/NALU6283