ADHD rating scales and screening measures lead to a high number of inappropriate ADHD diagnoses, according to a study in the Journal of Attention Disorders. Specifically, the existing measures produce a large number of false positives: people who don't actually meet the criteria for ADHD but are diagnosed with it anyway.
The study was conducted by Allyson G. Harrison and Melanie J. Edwards at Queen's University, Canada. Their results focused on ADHD diagnoses given to young adults (like college-aged kids).
"Clinicians who use self-report screening tests or who administer semi-structured interviews need to be aware that a positive screening outcome, especially in a clinical setting, has an extremely high false positive rate and a low positive predictive value," they write.
To analyze how well the existing measures for ADHD diagnosis perform, Harrison and Edwards conducted a systematic review that included all studies providing data on the accuracy of the tests. They found only 20 studies that included enough information to assess diagnostic accuracy, covering seven different ADHD scales.
The results varied widely, but the essential finding was that the measures were about as good as a coin flip at telling who had ADHD versus who was "healthy":
"A positive score in any of these studies typically had, at best, chance ability to correctly identify those with true ADHD compared with normal adults," the researchers write.
Moreover, results were even worse when trying to tell who had ADHD versus who had other mental health issues or stress:
"Most [of the measures] had less than a 10% chance of accurate diagnosis given a positive test score."
How to Interpret Diagnostic Accuracy
To understand the specific results, you need to understand what accuracy means for a medical test. The most commonly reported statistics are sensitivity and specificity: sensitivity is how likely the test is to correctly flag someone who has the disease; specificity is how likely the test is to correctly clear someone who doesn't. Generally, as a test becomes more sensitive (missing fewer true cases of the disease), it becomes less specific (overdiagnosing people who don't have the disease); conversely, as the test becomes more specific (avoiding overdiagnosis), it becomes less sensitive (and starts missing true cases).
But these are not the most useful statistics for clinicians. Sensitivity and specificity don't take into account the fact that very few people in the population actually meet the criteria for a given diagnosis. They need to be converted into positive predictive value (PPV) and negative predictive value (NPV).
Think about it this way: If only 5% of people have a certain disease, then even if a test has high sensitivity and specificity, there are more chances for it to overdiagnose (95 chances out of 100) than there are for it to underdiagnose (5 chances out of 100). Thus, even a pretty accurate test is likely to hugely overdiagnose the condition. PPV and NPV take this into account and tell us how accurate the test would be in real life, where few people actually meet the criteria for ADHD.
The researchers give this example: Imagine that 5% of the population actually has ADHD, and a test has seemingly high accuracy (say, 90% sensitivity and 72% specificity). Out of a sample of 1,000 people, 50 (5%) will have ADHD. The test will correctly identify 90%: 45 out of 50 (five people will go undiagnosed). However, it will also diagnose 266 more people who do not have ADHD. Thus, out of every 311 diagnoses given by the test, 266 (86%) are wrong. Put another way, 86 out of every 100 diagnoses are false positives.
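The arithmetic above can be sketched as a short calculation. The snippet below is an illustrative sketch (the function name and structure are mine, not from the study): it converts sensitivity and specificity into predictive values for a hypothetical sample, using the article's example numbers of 5% prevalence, 90% sensitivity, and 72% specificity.

```python
def predictive_values(prevalence, sensitivity, specificity, n=1000):
    """Convert sensitivity/specificity into PPV/NPV for a given base rate.

    Counts are expected values in a hypothetical sample of n people.
    """
    sick = prevalence * n        # people who truly have the condition (50)
    well = n - sick              # people who don't (950)
    tp = sensitivity * sick      # true positives: correctly flagged
    fn = sick - tp               # false negatives: missed cases
    tn = specificity * well      # true negatives: correctly cleared
    fp = well - tn               # false positives: wrongly flagged
    ppv = tp / (tp + fp)         # chance a positive result is correct
    npv = tn / (tn + fn)         # chance a negative result is correct
    return tp, fp, ppv, npv

# The article's example: 5% prevalence, 90% sensitivity, 72% specificity.
tp, fp, ppv, npv = predictive_values(0.05, 0.90, 0.72)
# tp = 45 true positives and fp = 266 false positives, so only about
# 14.5% of positive results are correct: 86% are false positives.
```

Note that the same sensitivity and specificity would yield a much better PPV if the base rate were higher, which is exactly why screening results must be interpreted against the prevalence of the disorder.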
However, studies have shown that clinicians don't understand this point. They see a test with validated high accuracy (such as the aforementioned 90% sensitivity and 72% specificity) and assume that the test is simply correct almost all of the time.
"Previous studies demonstrate that clinicians frequently ignore or misunderstand the predictive validity of a positive score on a screening test," the researchers write. "Indeed, clinicians consistently and significantly overestimate the probability of disease/disorder both before and after obtaining test results, which may contribute to overdiagnosis of disorders. In the case of ADHD, clinicians may incorrectly believe that self-report measures or interviews have a higher level of diagnostic accuracy than is supported by the research, and may not understand that base rate of the disorder influences the interpretation of obtained scores."
A Deeper Dive into the Results
PPV results in the 20 studies ranged from 6% to a startlingly high 94%. Most floated around the single digits and teens. That 94% (found for the ASRS test) wasn't replicated, either: "In all subsequent validation studies using clinical comparison groups, a high score on the ASRS had, at best, only a 22% chance of accurately identifying those with true ADHD," Harrison and Edwards write.
And most scores for the other tests were even worse.
"The results of this review show that, in clinical situations, ADHD screening measures typically have less than chance ability to accurately differentiate those with true ADHD from those with other disorders that also produce symptoms that mimic ADHD. In other words, clinicians who rely mainly or exclusively on these screening measures to diagnose ADHD in adults will overidentify far more people who do not have ADHD than accurately diagnose this condition," the researchers write.
Why the wide range of scores, though? Well, the 20 original studies had many issues that led to inconsistent results and even overestimation of their accuracy, according to Harrison and Edwards. Most did not explain how they determined whether someone actually met criteria for diagnosis; for those that did, it was clear they used other, similar self-report measures to make this determination, rather than the gold standard of a full clinical work-up. Thus, even if a study found that the new measure was relatively accurate, it was only accurate compared to other low-accuracy measures.
The studies also used arbitrary cut-off points to determine whether someone "had" ADHD or not based on the test. In many cases, these cut-offs were not the recommended cut-offs in the test manuals. And the studies were not consistent with each other; different studies used different cut-off points to make this determination. And some used different cut-off points for different analyses within the same study! Some studies used only certain sub-scales, while others failed to analyze the results according to the test design (for instance, failing to calculate adjusted scores).
Harrison and Edwards write that, especially for young adults (like college-age kids), clinicians rarely rule out other issues before giving a diagnosis of ADHD. Instead, they diagnose based on "self-report" using these ADHD rating scales or screeners. Essentially, these tests ask the person if they have had the symptoms of ADHD; if the person says "yes" enough times, they receive the diagnosis.
"The majority of these submitted reports conferred a diagnosis of ADHD based primarily or exclusively on current self-reported symptoms, with most failing to obtain collateral reports, confirm childhood onset, establish functional impairment, or rule out other potential causes for the reported symptoms," the researchers write.
They add that clinicians must not rely on these measures to diagnose ADHD. The tests are for screening only: designed to overdiagnose, with the assumption that clinicians will then do a more thorough clinical interview and weed out the people who don't actually have ADHD.
According to the researchers, many other things can lead to the same "symptoms" that make up an ADHD diagnosis: normal changes in cognition in childhood and adolescence; the effects of other health issues, including both physical health problems and emotional problems such as anxiety and depression; and the cognitive effects of substance abuse are all routinely misdiagnosed as "ADHD."
Moreover, the researchers note that plenty of studies have found that college-age kids have an easy time feigning the "symptoms" of ADHD to obtain stimulants for recreational use, receive testing accommodations, or otherwise gain an advantage in a system that requires the diagnosis. And they add that kids are also becoming convinced that they have ADHD (and inadvertently showing symptoms) because of social media contagion.
"Social media platforms may act as a vehicle of transmission for the social contagion of self-diagnosed mental health conditions, particularly in stressed or vulnerable young women," the researchers write.
This is corroborated by other researchers, who write that there is an epidemic of teen girls becoming convinced that they have Tourette's, dissociative identity disorder, and other mental health conditions after watching TikTok creators who glamorize and sexualize the conditions.
Controversy Over the ADHD Diagnosis and Stimulant Treatment
The real nail in the coffin, though, is that even these dismal results presuppose that the diagnoses laid out in psychiatry's "Bible," the DSM, are trustworthy, objective measures. But according to experts, that's just not the case.
The DSM-5, in particular, was criticized for arbitrarily expanding the diagnostic criteria for ADHD (including removing the requirement that the symptoms actually impact functioning) so that an untold number of children who did not meet the diagnosis by DSM-IV standards suddenly "had" the disorder.
Allen Frances, chair of the DSM-IV task force, cited "conclusive proof ADHD is overdiagnosed": The youngest kids in any given classroom are twice as likely to be diagnosed with ADHD and receive stimulant drugs. This has been found in classrooms across the world, from the US to Finland to Taiwan.
In another article, Frances writes, "We are currently spending more than $10 billion dollars a year for ADHD drugs, a fifty-fold increase in just 20 years. Much of this is wasted medicating children who have been mislabeled. Studies in many countries show that the youngest kid in a class is twice as likely as the oldest to get an ADHD diagnosis. We have turned normal immaturity into a mental disorder. It would be much smarter to spend most of this money on smaller class sizes and more gym periods."
In that same article, Frances notes that Keith Conners (the "father" of ADHD, the namesake of the Conners scale for ADHD diagnosis, and perhaps the key figure in the development of the ADHD diagnosis and the use of stimulant treatment) called the overdiagnosis of ADHD "an epidemic of tragic proportions."
As for the efficacy of stimulant treatment for ADHD, researchers have found that those who receive treatment end up doing worse than those who don't get stimulants, even if they have the same level of ADHD symptoms.
This was confirmed by the MTA study, which is often cited as evidence that the drugs work. Although the short-term outcomes appeared positive, by the 22-month mark, any benefit of stimulant drugs had disappeared. Researchers concluded that "the MTA medication algorithm was associated with deterioration rather than a further benefit." In the end, researchers wrote, "extended use of medication was associated with suppression of adult height but not with reduction of symptom severity."
Other studies have found that stimulants don't improve academic performance; instead, they increase the likelihood of kids dropping out of school. Ritalin was found to lead to an 18-fold increase in depression, which decreased back to baseline when kids stopped taking the drug. And up to 62.5% of kids may experience hallucinations and other psychotic experiences after taking stimulant drugs.
****
Harrison, A. G., & Edwards, M. J. (2023). The ability of self-report methods to accurately diagnose attention deficit hyperactivity disorder: A systematic review. Journal of Attention Disorders, 27(12), 1343–1359. https://doi.org/10.1177/10870547231177470
every person and their dog now believes they have this corrupt, but highly profitable DSM fiction known as ‘adhd’
The therapy industry is completely out of control and is causing way more harm than it can EVER do good.
So right. Psychology practices offering ADHD assessments are popping up like mushrooms, and existing ones are recruiting psychologists who can offer this service. I guess when there is money to be made, you can deceive yourself that ADHD actually exists and that it is treatable with groundbreaking approaches such as time management skills.
And those diagnosed may sell their stimulant drugs and get caught or get in legal trouble. And they are tuned up by stimulants to want other stimulant hits. Just a nightmare.
This one is a great piece by Peter Simons; I encourage more in that vein! Yippee!
"Thus, even a pretty accurate test is likely to hugely overdiagnose the condition." If the disease is rare enough, or very rare. 95% of pregnancies turn out OK, so if I were negligent and told ALL pregnant women they were going to be OK, I would be right 95% of the time and merely "wrong" around 3-4% of the time. And CATASTROPHICALLY wrong the rest…
Not that ADHD is anything like that; I am not expressing a medical opinion, but a skeptical one.
"Imagine that 5% of the population actually has ADHD…" As it stands now, just because Abbott and Costello say someone has ADHD does not make Abbott and Costello diagnosticians. There is no "objective", no GOLD STANDARD, way to say ANYONE has ADHD, or that ADHD exists at all…
From Wikipedia:
Gold standard (test)
“In medicine and statistics, the gold standard or criterion standard[1] is the diagnostic test or benchmark that is the best available under reasonable conditions.[2] It is the test against which new tests are compared to gauge their validity[*], and it is used to evaluate the efficacy of treatments.[1] The gold standard test is usually chosen to be the most accurate test available without restrictions.”
Now, that sounds heavy. But if "mental disorders" are biological, their gold standard, by definition, should be biological, not based on inter-rater reliability…
Otherwise: how do you know it is biological if there is no biological test for it?
From Wikipedia:
Inter-rater reliability
“In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.”
Independent observers do not happen in psychiatry! They are trained to agree upon so-called "symptoms" in the belief that these are biologically caused, without a biological test to prove or measure it! How is that even logical? How is it even reasonable? To agree among themselves?
If Abbott and Costello are TAUGHT to agree on something that does not exist in reality, that is, a biological "disorder" without a biological test for it, as in psychiatry, they will still agree 80 to 90% of the time, at best, and that will not mean ANYTHING at all. As the rest of PS's review, I think, shows or suggests.
"…base rate of the disorder…" does not exist in psychiatry. There is no objective way to calculate the base rate. Period. There is no gold standard and/or no biological test for a claimed biological disorder.
That is: there is no BIOLOGICAL way, not merely an inter-rater-reliability way, to know how many people have ADHD. Period. Just the opinions of people taught to agree with each other regardless of the biological reality of the patient. Since the biological reality of the patient is NEVER, EVER measured in psychiatry. Period.
"Most did not explain how they determined whether someone actually met criteria for diagnosis…" Ditto: no gold standard. Even inter-rater reliability seems lacking, precisely because it is not objective but intersubjective, or worse, subjective. Just an opinion, and one biased against reality, as this PS review shows.
"And some used different cut-off points for different analyses within the same study!" Hahaha, nothing like inconsistency to make one's beliefs solid… in the field of dogma. Any claim contrary to my dogma just makes my belief stronger…
"The tests are for screening only: designed to overdiagnose, with the assumption that clinicians will then do a more thorough clinical interview and weed out the people who don't actually have ADHD." How does one do that with overdiagnosing tools? Wouldn't that only lead to fewer overdiagnoses in the best-case scenario?
"We are currently spending more than $10 billion dollars a year for ADHD drugs…" Yeah, disregarding what it will cost when they TRY to stop taking them. Just ask Paul Erdős…
"We [psychiatrists] have turned normal immaturity into a mental disorder." I did not play any role in that crime against minors or young adults. Not even as a member of society at large: I always fought against cockamamie diagnoses and overdiagnosis.
"…It would be much smarter to spend most of this money on smaller class sizes and more gym periods." Nope, it would make more sense to spend it on abolishing psychiatry. Maybe the effective altruism crowd would care to chip in on that?
"Other studies have found that stimulants don't improve academic performance; instead, they increase the likelihood of kids dropping out of school. Ritalin was found to lead to an 18-fold increase in depression, which decreased back to baseline when kids stopped taking the drug. And up to 62.5% of kids may experience hallucinations and other psychotic experiences after taking stimulant drugs." How much does that cost the sufferers and society? I bet WAY more than the money made by practitioners, pharma, and regulators/legislators getting kickbacks or campaign donations… That's called HARMS, not COSTS…
* Validity is the boogeyman of psychiatry, precisely because they claim "mental disorders" are biologically caused, or biologically based, but there is no biological way to claim so. Several prominent psychiatrists have claimed that psychiatry and its diagnoses, as far as I understand, have no VALIDITY. In lay terms: "mental disorders" have no basis in reality, just mere belief, and agreement in beliefs. Period.
From Wikipedia:
"Validity is the main extent to which a concept, conclusion, or MEASUREMENT is well-founded and likely corresponds accurately to the real world." Uppercase mine. Measurement is what corresponds to a biological phenomenon, because it is a physical one. Inter-rater agreement is not a measurement when there is no gold standard to correlate the agreement with a biological/physical MEASUREMENT.
Peter's report reads rather Kafkaesque. But, then, how else could it read when reporting on the fallout from 6 decades of mindless behaviorism masquerading as science? Well… here's to another 6 decades of doubling down on the asinine "Minimal Brain Damage" lie that grew into today's ADHD Frankenstein.
The bigger problem, however, is that the leadership class (in all Western societies) no longer has to concern itself with substantive facts, the history of facts, facticity, or the epistemological or "scientific" foundations of "facts"; a nice ancillary benefit for an institution (ergo psychiatry) to which facts, et al., are, effectively, kryptonite.
Theodor Roszak said as much, and way better, when he posited technocracy as "that society in which those who govern justify themselves by appeal to technical experts who, in turn, justify themselves by appeal to scientific forms of knowledge. And beyond the authority of science there is no appeal." Thus, the ADHD damage is only getting started, folks! More experts with unlimited expert solutions from the very same assembly line of indoctrinated and rewarded experts who cooked up this scientific turd 6 decades ago, and who have kept its deceptive enterprise exponentially growing ever since.
Regarding the elephant in the corner of Peter's report: What are we to think about all the ignored or suppressed "actual" causes surrounding this substantial population of "overdiagnosed" ADHD kids? If a kid has been misdiagnosed/overdiagnosed (admittedly, a murky if not impossible distinction to begin with, for ADHD is more a linguistic phantasm than "a thing"), the causes of their inattention challenges not only go unattended, they are negatively compounded to the extent that taking pills is seen as solving those unattended issues. Simply put, as California Surgeon General Nadine Burke Harris said, "There is no pill for sexual abuse"! Nor is there a pill for dozens of other reasons anyone's attention aptitudes would be compromised, no matter how much utilitarian pseudo-science or bad-faith virtue signaling is thrown in with the pills.
FWIW: I regard the moniker "ADHD overdiagnosis" to be more accurately a "maldiagnosis" (from mal-: malformed, evil, bad, etc.). An overdiagnosis must have an actual scientific etiology and correlating statistical confirmation for an overdiagnosis to have occurred. Simply put, confirmed scientism cannot overdiagnose, only further maldiagnose…
Correction: I mislabeled the ADHD precursor as Minimal Brain Damage, when in fact it was named Minimal Brain "Dysfunction" (MBD). Although, whenever I critically consider MBD as a scientific premise following a scientific process, using the Münchhausen, Fries's, or Agrippa's trilemma, etc., I tend to feel as though I'm self-inducing a kind of brain damage as a result…
If you’re against psychiatry then you should be all for self diagnosis. It puts the power back into the hands of the people! Etc.
I remember the good old days where kids were under diagnosed and teachers just said I was in lala land and made the whole class laugh at me but sure
Wtf even is this magazine with its kneejerk reaction to everything
This article assumes that an entity called ADHD exists in the first place, in other words, that it's an identifiable brain disorder, yet no reliable objective evidence for it has been found.
The concept of ADHD should, therefore, be abandoned and people with symptoms of inattention, impulsivity, restlessness, etc., if causing significant distress, should be offered drug-free treatment.