Stanford Algorithm Fails to Deliver Appropriate Healthcare

Stanford used an algorithm to distribute the COVID-19 vaccine and it prioritized senior faculty over front-line workers.

Stanford University used an algorithm to decide who would get the first 5000 doses of the coronavirus vaccine—and it did not go well, prioritizing senior faculty instead of front-line, low-paid healthcare workers.

Artificial intelligence and machine-learning algorithms are increasingly being used across the healthcare system, including in psychiatry. Indeed, apps that conduct extensive surveillance of users’ text messages and GPS locations are being given to people with mental health diagnoses, despite no published evidence that the algorithms they generate provide accurate information or improve outcomes.

Disabled people and people with mental health diagnoses bear the brunt of privacy invasions, as a recent study found. According to the authors of that study, in some cases, police and border security agents were using information taken from users’ mental health apps.

The current algorithmic failure was revealed on December 17, when disgruntled Stanford medical professionals sent a letter of protest to their higher-ups at the Stanford School of Medicine, Stanford Health Care, and Stanford Children’s Health.

The letter was signed by the Chief Resident Council in support of the residents and fellows in the Stanford hospital system, many of whom are on the front lines treating COVID-positive patients. The letter alleges that only 7 of the roughly 1300 residents and fellows received the vaccine. According to the letter, the algorithm instead prioritized senior faculty, most of whom are not currently engaged in clinical work and who have worked from home since March.
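
Neither the letter nor Stanford’s response spells out how the algorithm actually worked, but the reported outcome is the kind of result a rule-based, points-style prioritization scheme can produce. The Python sketch below is purely hypothetical: the fields, weights, and scoring rules are assumptions made for illustration, showing how a score that rewards age and the COVID-19 exposure rate of an employee’s assigned department can leave rotating residents, who may have no fixed department on record, at the bottom of the list.

```python
# Hypothetical illustration only: a simple points-based vaccine priority score.
# The fields, weights, and rules below are assumptions, not Stanford's actual algorithm.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Employee:
    name: str
    age: int
    department: Optional[str]  # rotating residents may have no fixed department on record


# Assumed per-department COVID-19 exposure rates, used here as a proxy for risk.
DEPARTMENT_PREVALENCE = {"ICU": 0.20, "Emergency": 0.15, "Dermatology": 0.01}


def priority_score(e: Employee) -> float:
    """Score = age points + department-exposure points (hypothetical weights)."""
    age_points = e.age / 100  # older employees score higher
    # No department on record means zero exposure points,
    # even for someone treating COVID-positive patients every day.
    exposure_points = DEPARTMENT_PREVALENCE.get(e.department, 0.0) * 5
    return age_points + exposure_points


staff = [
    Employee("senior faculty member, working from home", age=68, department="Dermatology"),
    Employee("resident rotating through COVID wards", age=29, department=None),
]

for person in sorted(staff, key=priority_score, reverse=True):
    print(f"{priority_score(person):.2f}  {person.name}")
```

In this toy example, an older faculty member assigned to a low-exposure department outscores a young resident who rotates through COVID wards, simply because the resident’s actual exposure is invisible to the scoring rule.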

The letter also alleges that Stanford leadership was aware of the error by December 15, but stuck with the algorithm’s decision anyway. The letter states:

“We believe that to achieve the aim of justice, there is a human responsibility of oversight over such algorithms to ensure that the results are equitable.”

According to an article in Forbes, Stanford Health Care’s leadership apologized on December 18 and claimed that an error in the algorithm resulted in the failure.

According to the letter, the use of an algorithm to disenfranchise residents is an extension of existing power dynamics, which leave residents with little voice in the system under which they work.

“Ultimately, we understand that the lack of inclusion in the institutional vaccination plan is a result of resident disempowerment at Stanford Medicine. Our disempowerment is not isolated to vaccine allocation, which is why many of the items above reflect our need to have a greater stake in the decisions that impact us and our patients within the institution.”

Algorithms and artificial intelligence are increasingly being rolled out to guide healthcare decision-making, especially in psychiatry, despite ethical concerns. They have consistently been shown to replicate the biases of the people who develop and use them.

For instance, a study in the New England Journal of Medicine found that the use of algorithmic decision-making could worsen the treatment people of color receive in clinical situations, suggesting that racial biases are “baked into” the system of decision-making.

A study in Science supported this claim when researchers found that an algorithm was using cost as a proxy for health. The software withheld referrals to further care from Black patients because people of color typically receive less costly treatment, a pattern driven by factors such as poverty, insurance barriers, and racial bias in the healthcare system. Because less money was spent on their care, the algorithm concluded that Black patients were healthier.
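
To make the proxy problem concrete, here is a minimal sketch of the mechanism the researchers describe. The data, names, and threshold are invented for illustration and are not taken from the Science study: a risk score that simply tracks historical spending will rank a sicker but under-served patient below a healthier, well-resourced one, and any referral cutoff applied to that score inherits the bias.

```python
# Minimal sketch of proxy-label bias: predicted cost stands in for health need.
# All names and numbers are invented for illustration, not taken from the Science study.

patients = [
    # (description, active chronic conditions, historical annual spending in dollars)
    ("patient A, white and well-insured", 3, 12_000),
    ("patient B, Black and under-served", 5, 6_000),  # sicker, but less was spent on their care
]

REFERRAL_THRESHOLD = 10_000  # refer to extra care management above this score


def risk_score(chronic_conditions: int, historical_spending: float) -> float:
    # The "risk" score tracks spending, not illness: cost as a proxy for health.
    return float(historical_spending)


for description, conditions, spending in patients:
    score = risk_score(conditions, spending)
    referred = score >= REFERRAL_THRESHOLD
    print(f"{description}: {conditions} conditions, score={score:.0f}, referred={referred}")

# The healthier patient is referred and the sicker patient is not, because the
# healthcare system historically spent less money on the sicker patient's care.
```

One commonly discussed remedy in the bias literature is to train on a direct measure of health need (for example, the number of active chronic conditions) rather than on dollars spent, rather than merely adjusting the referral threshold.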

Another study this year demonstrated that using artificial intelligence to look for vocal changes to screen for mental health problems could exacerbate the racial bias that permeates diagnoses such as schizophrenia. Yet another study found that machine learning was no better than chance at detecting the presence of early psychosis.

Researchers have also been searching for artificial intelligence solutions and algorithms that could predict whether someone will respond to treatment after a psychiatric diagnosis. So far, that search has failed, possibly because response to “treatment” is largely driven by the placebo effect, according to researchers writing in JAMA Network Open.

 


You can access the Stanford letter here.

4 COMMENTS

    • My understanding is everyone is being tracked, is that not correct? For goodness sakes, my maps program tells me where it thinks I’m wanting to go, prior to my inputting any information into it. And after dropping my mom off at the dentist the other day, my phone had an advertisement for a different dentist on it, within minutes. And I do have friends who’ve said the same type of things are happening to them, so I think they are tracking / invading the privacy of all people. It’s creepy.


  1. And to add, it’s not as if the present healthcare is reliable, dependable or honest. Some really try, they try hard despite the politics involved among the system. It is becoming impossible for doctors and clients alike.
    It’s a mess. And I believe psychiatry really ruined so much of true healthcare.


  2. If the algorithms of these idiots out to lunch having a degree to punch at keys when there could have been those more famous people is beyond me. Why not just take them out completely, to restaurants, zoos, to the penguin museum and then take them along on vacation!?

