Validating autism subtypes: An important but often missed step in analysis | Spectrum
Hilde Geurts
Professor of Clinical Neuropsychology, University of Amsterdam
Joost Agelink van Rentergem
Postdoctoral researcher, University of Amsterdam
The practice of categorizing autistic people into subtypes based on similarities in their characteristics and abilities is controversial. Subtypes can have negative connotations and evoke images of stereotyping and marginalization.
For decades, the autism spectrum was, by definition, a collection of subtypes, including Asperger’s syndrome and pervasive developmental disorder not otherwise specified. But there was no clear clinical distinction between the subtypes, and they did not fully capture the variation among people on the spectrum. The fifth and most recent edition of the Diagnostic and Statistical Manual of Mental Disorders, which clinicians refer to for diagnosis, therefore retired these subtypes in 2013.
However, there are often good reasons for subtyping. Identifying subtypes of people who share certain genetic variants can be useful, because those variants may be linked to particular medical problems. Subtyping analysis can also demonstrate that a proposed subtype does not exist. Or it can help researchers figure out who will benefit most from a particular type of support, without any focus on etiology or ontology.
For these reasons, we shouldn’t categorically stop performing subtyping analyses. But research should focus on discovering meaningful subtypes of autism. To assess whether scientists agree on the number and nature of subtypes, we conducted a systematic review of the autism subtyping literature. We limited our search to articles published since 2001 that used a statistical or machine-learning method to discover subtypes of autistic people. These subtyping methods are data-driven: The researchers did not look for a specific number of subtypes and did not determine in advance what the subtypes would look like; they let the data speak for themselves.
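To illustrate what “data-driven” means here, the sketch below runs a minimal k-means clustering on made-up one-dimensional scores; nothing in it comes from the review itself. The data and the choice of two clusters are illustrative assumptions only. In real analyses the number of subtypes is itself selected from the data, for example by comparing model fit across candidate numbers of clusters.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute centroids, repeating until assignments stabilize."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial centroids
    for _ in range(iters):
        # Assignment step: each point joins the nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters

# Hypothetical trait scores drawn from two well-separated groups
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 5.1]
centroids, clusters = kmeans(data, k=2)
```

The algorithm recovers the two groups without being told in advance what they look like, which is the sense in which such methods “let the data speak.”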
We identified 156 articles that met our criteria. Of these, 82 percent found that two to four subtypes described their data well. But the subtypes were based on widely varying sets of measurements, including levels of inflammation markers, ratings of autism traits, sensory-sensitivity questionnaires, tests of language proficiency, hormone levels and patterns of facial features. This diversity made it difficult to find consensus or draw firm conclusions. Because the variables and samples were so heterogeneous across studies, it is impossible to tell whether researchers were viewing the same subdivision from different angles or discovering a different subdivision each time.
We also found that few studies took further steps to validate their subtype results and support their claims. We therefore concluded that a lack of systematic validation has led to a proliferation of autism subtypes of questionable utility. We encourage researchers to systematically validate their results and back them up with additional evidence, especially if the sample is small or the result is otherwise inconclusive.
In our review, published in the July issue of Clinical Psychology Review, we outline seven ways to provide compelling evidence in a subtyping analysis. We also provide a Subtyping Validation Checklist that researchers can use. These validation strategies require more analyses, more measurements or more participants, but they can make subtyping results far more interpretable and valuable.
The prototypical form of validation is independent replication, in which the entire process of recruiting, measuring and analyzing is repeated with a second group of participants. But only 9 percent of the articles we reviewed included an independent replication sample. Studies examining the stability of subtypes over time were even rarer, despite many calls in the literature for more research in this direction.
Most of the articles – 88 percent – used a strategy called external validation, which compares subtypes on additional variables that were not used in the original analysis. For example, external validation can consist of showing that one subtype has a higher quality of life or fewer co-occurring diagnoses than the others. But few articles explicitly stated how the additional variables validated their results, or what would have invalidated them. For example, if one subtype is older or includes more women than the other subtypes, it is difficult to interpret what such results mean for the validity of the subtypes unless the researchers provide a hypothesis or rationale for focusing on those demographic variables.
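As a hypothetical sketch of external validation (not a method taken from the review), one could test whether an additional variable, such as a quality-of-life score that played no role in forming the subtypes, actually differs between them. The scores and group sizes below are invented for illustration, and the permutation test is a generic technique, not one the reviewed articles are said to have used.

```python
import random

def permutation_test(values_a, values_b, n_perm=5000, seed=1):
    """Two-sided permutation test on the difference in group means.
    Returns the p-value: the fraction of label shuffles that produce
    a mean difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(values_a) / len(values_a)
                   - sum(values_b) / len(values_b))
    pooled = values_a + values_b
    n_a = len(values_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_perm

# Invented quality-of-life scores for two data-driven subtypes
subtype_1 = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2]
subtype_2 = [4.2, 4.5, 4.0, 4.3, 4.6, 4.1]
p = permutation_test(subtype_1, subtype_2)
```

A small p-value supports the claim that the subtypes differ on the external variable; crucially, what result would count as validation or invalidation should be stated before the analysis is run.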
To improve this situation, we have two additional recommendations. First, we ask researchers to describe which validation strategies they used in their studies and to explain why they chose those approaches. Ideally, they would preregister their validation strategies online and specify what would count as validation or invalidation of their results. The checklist we created can support this process.
Second, and more important, we ask researchers to explicitly state the goal of their subtyping analysis. There are many reasons to study subtypes. Much of the disagreement around these studies seems to arise when subtype labels are reified – that is, when the labels come to be viewed as immutable properties that determine a person’s fate or worth. We would argue that this is seldom the goal of subtyping research, and it never should be. Instead, the ultimate goal of this type of work should be to improve prognosis and care.
Hilde Geurts is professor of clinical neuropsychology at the University of Amsterdam in the Netherlands and a senior researcher at the Leo Kannerhuis autism clinic. Joost Agelink van Rentergem is a postdoctoral researcher at the University of Amsterdam.
Cite this article: https://doi.org/10.53053/GNBN2110