The autism industry, among others, has rendered the term “evidence-based” meaningless.
I think we’re just going to have to let the term “evidence-based” go. There seems to be an inverse relationship between the extent to which a practice is described as evidence-based, and the quality of evidence supporting its use.
This is particularly true of behaviorism and ABA.
A rigorous new meta-analysis utterly debunks the claim that applied behavior analysis (ABA) therapy is the only intervention for children with autism that’s “evidence-based.” In fact, it raises serious questions about whether ABA merits that description at all. And if it turns out that, contrary to widespread assumptions, behavior modification techniques aren’t supported by solid data even when used with autistic kids, why would we persist in manipulating anyone with positive reinforcement?
You might assume that those who use the phrase “evidence-based practice” (EBP) are offering a testable claim, asserting that the practices in question are supported by good data. In reality, the phrase is more of an all-purpose honorific, wielded to silence dissent, intimidate critics, and imply that anyone who criticizes what they’re doing is rejecting science itself. It’s reminiscent of the way a religious leader might declare that what we’ve been told to do is “God’s will”: End of discussion.
Moreover – and it took me a while to catch on to this – behaviorists often use “EBP” just as a shorthand for the practices they like, in contrast to the (progressive or humanistic) approaches they revile. It doesn’t matter if the evidence is actually weak or ambiguous, or even if it points in the other direction. They’ll always come up with some reason to dismiss those inconvenient findings because their method is “evidence-based” by definition. (On social media and elsewhere, you can get a glimpse of how modern behaviorism resembles a religious cult – closer to Scientology than to science – with adherents circling the wagons, trading ad hominem attacks on their critics, and testing out defensive strategies to employ when, for example, people with autism speak out about how ABA has harmed them. Or when scholarship shows just how weak the empirical case for ABA really is.)
Which brings us back to that new research review. The work of eleven authors – including, interestingly, an ABA therapist – representing the University of Texas, Boston College, Vanderbilt, and Mount Holyoke, it was published in January 2020 in Psychological Bulletin (PB), a prestigious social science journal that specializes in lengthy integrative research reviews. The article is not a polemic. It does not consider, and appears not even to be informed by, any of the broader objections to ABA that are raised by autistic people or that I’ve raised here. It confines itself to describing peer-reviewed research. The authors cast a wide net, looking for every English-language study in the last half-century that compared an intervention group with a control group in treating children up to age 8 who had been diagnosed with Autism Spectrum Disorder. This yielded 1,615 separate results from 150 reports representing 6,240 participants.
The most striking finding in this research review is how few high-quality assessments of “the primary approach used in clinical practice” – that is, ABA – have ever been conducted. In fact, the great majority of ABA studies were so poorly designed that they didn’t merit inclusion in this review. Rather than comparing the results of different treatments across groups of children, behaviorist journals commonly publish single-subject studies, in which one child is assessed before and after treatment. (This method was invented by behaviorists back when their behavior-shaping efforts were limited to lab rats.) You don’t have to be a trained data analyst to see the serious limitation of this method: its results don’t generalize. For the authors of the PB review, these limitations were so glaring that it didn’t even make sense for them to bother with the results of single-subject studies. Yet those dubious results are the primary basis for behaviorists’ claims that ABA is “evidence-based.”
It’s now often just marketing jargon. Practices that are accepted as evidence-based generally don’t have to try to sell themselves as evidence-based.
I’d be curious how many things labeled “evidence-based” are for profit.
Source: Noah Sasson on Twitter
I am likewise curious.
- Behaviorism: Measuring the Surface, Badly – Ryan Boren
- The Problem with Behaviorism – Ryan Boren
- Autism, Trauma, and Stress – Ryan Boren
- Post-truth, Open Society, and the Business of Behaviorism – Ryan Boren
- Persuasion and Operant Conditioning: The Influence of B. F. Skinner in Big Tech and Ed-tech – Ryan Boren
- Drop the B from PBS – Ryan Boren
- Tech Ethics and the New Behaviorism – Ryan Boren