Medicine Is on the Brink of “AI-Induced Deskilling”

Research is piling up to show that once doctors use artificial intelligence, they begin to lose their vital skills—and trainees may never learn them.

“AI-induced deskilling.” That’s what researchers are calling the recent findings showing that doctors become worse at their jobs after using AI.

For instance, a recent study found that doctors who used an AI colonoscopy program became less proficient at detecting adenomas—a key precursor to colon cancer—when they had to return to using their own brains. Those researchers concluded that “continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.” They noted that previous research shows that doctors who use AI tools do not continue to use their own skills to double-check the outcomes; instead, they over-rely on the tools without realizing it, leading to misinterpretation.

Similarly, a recent MIT study found that students who used AI to generate an essay were convinced that they did the majority of the work—despite not being able to remember a single quote from their essay immediately afterward. This was in sharp contrast to those who used their own faculties to write the essay, who had no problem remembering the words they’d written.

Now, a new study shows that concerns about deskilling in medicine span a wide range of tasks at which doctors must be proficient: weaker physical examination skills, poorer clinical judgment, worse communication, and more medical errors. According to the authors, trainees may also lose the opportunity to develop these skills in the first place, a phenomenon they call "upskilling inhibition."

The current study was conducted by Chiara Natali, Luca Marconi, and Federico Cabitza at the University of Milano-Bicocca, Italy, and Leslye Denisse Dias Duran at Ruhr University Bochum, Germany. It was published in Artificial Intelligence Review.

“Critical voices have cautioned how the repeated delegation of decision-making to machines may erode human expertise, to the point that certain skills may even be irreversibly lost in a ‘second singularity’, unless proactive measures are taken,” the researchers write.

They add, “In healthcare, this manifests as a growing dependence on AI-driven recommendations, progressively narrowing clinicians’ opportunities to refine their judgment through real-world experience. Without deliberate interventions, the medical profession risks entering a phase where AI’s authority is unquestioned, and human oversight becomes increasingly performative rather than substantive.”

Peter Simons
Peter Simons was an academic researcher in psychology. Now, as a science writer, he tries to provide the layperson with a view into the sometimes inscrutable world of psychiatric research. As an editor for blogs and personal stories at Mad in America, he prizes the accounts of those with lived experience of the psychiatric system and shares alternatives to the biomedical model.
