A recent article, published in Psychiatric Services in Advance, explores how digital technologies can be misused and employed coercively in psychiatry. The author highlights steps that can be taken to reduce coercion and the misuse of these technologies in psychiatric settings.
The author, psychiatrist Nathaniel Morris of the University of California, San Francisco, writes:
“Coercion is just one possible outcome among many, including loss of privacy, distress for patients and families, the transmission of stigmatizing information, and exacerbation of racial and socioeconomic disparities, related to digital technology use and misuse in psychiatry. At the same time, these technologies bring new opportunities for reconsidering and studying coercive practices to support the well-being of and respect for patients in psychiatric settings.”
While digital technologies were already on the rise in psychiatry before the pandemic, their use has increased dramatically during COVID-19. Such technologies, including telepsychiatry and mobile mental health apps, have been beneficial in expanding clients’ access to mental healthcare and information, but they also raise concerns about how they might infringe upon clients’ rights and be employed in coercive tactics.
Given that psychiatric clients are already at high risk for coercion, we must attend to how digital technologies may compound the problem.
Morris begins by addressing potential concerns associated with electronic medical record (EMR) flags, which can note high suicide or violence risk. The digitization of client records allows mental health professionals to easily access clients’ information and gain awareness of potential risks or concerns, enabling them to adequately address and assist those who might have a history of suicidal ideation or attempts. Such flags about histories of violence can also enable clinicians to take necessary safety precautions.
However, while beneficial in some ways, flagging clients’ records can also be used in the service of coercion. Morris highlights, for example, how drawing attention to a client’s risk for suicide or violence might lead to biased treatment, in which the physician focuses solely on the client’s mental health and neglects a broader medical understanding of the client, potentially missing medical issues.
Increased attention to mental health concerns may also lead clinicians to pursue coercive interventions, such as involuntary psychiatric hospitalization, that may not be necessary or helpful to the client. Further, EMR flags could be used to deny clients access to treatment or pressure clients into treatment that is not congruent with their own preferences.
For example, at the Veterans Health Administration, clients flagged with histories of violence may be required to follow certain treatment conditions, such as requiring a police escort or passing through a metal detector before entering the facility. Critics of EMR flags have also noted that most flagged behaviors are verbal, with some suggesting that flags serve to punish individuals who voice concerns or complaints about their treatment.
Morris also draws attention to the use of surveillance cameras on psychiatric units. While camera surveillance is often justified as serving client safety, the research evidence does not support this claim and, in fact, suggests that surveillance can contribute to psychological harm. Other concerns associated with video surveillance include “privacy, consent, dignity, data protection, and potential exacerbation of psychiatric symptoms.”
In addition to concerns about privacy and dignity, video surveillance in psychiatric settings can be used coercively. Clinicians could use behaviors recorded on camera, when the client presumed no one else was present, against clients in civil commitment hearings, potentially keeping them institutionalized. Along similar lines, clients may be monitored covertly, without their knowledge, which raises privacy concerns and risks rupturing clients’ trust.
Moreover, videoconferencing in psychiatric settings has been beneficial, especially during the COVID-19 pandemic: it has increased access to care, allowed clients to connect with their loved ones, and facilitated legal proceedings. Still, several concerns accompany this technology. Morris suggests that poor sound and video quality could impair a client’s ability to be fully present for and understand civil commitment hearings; clients often already struggle to understand why they remain hospitalized after such hearings, with or without videoconferencing.
Additionally, clients in forensic settings who struggle with mental health or substance addiction issues might not feel comfortable sharing personal or sensitive information in a videoconference with strangers, or may not feel they have the same ability to access and confide in their legal counsel.
While videoconferencing may allow family and friends to visit their loved ones in psychiatric settings, Morris also raises the concern that such access may lead them to choose televisitation over in-person visits. Televisitation may not allow the same sense of connection as visiting in person, where loved ones can more clearly see the impact of involuntary hospitalization and thus better advocate for their institutionalized friends or family members.
Lastly, Morris discusses risk assessment tools, which clinicians use to estimate the likelihood of outcomes such as suicide or violence, as potentially problematic and coercive. Although risk assessment predates digital technology in psychiatric settings, digital technologies are transforming these tools.
Risk assessment algorithms have been employed to predict suicide, violence, and other adverse events. While accurate predictions of such outcomes could be useful, the reality is that these tools are imperfect and not as accurate as they may appear.
Social media companies such as Facebook have also developed suicide risk assessment algorithms to detect concerning posts, raising significant ethical concerns and questions about the algorithms’ validity. This lack of accuracy has real-life implications: people may be involuntarily hospitalized, potentially on false grounds.
Not only may these algorithms be inaccurate, but they may also reinforce systemic inequities affecting marginalized racial, gender, and socioeconomic groups, as well as other disenfranchised populations, such as children, who are particularly at risk for coercion in psychiatric settings.
Morris writes:
“In a recent example, researchers found racial bias in a widely used algorithm for stratifying patients’ health risks and targeting high-risk patients for additional care management. Because less money often is spent on Black patients than on White patients with similar needs, and the algorithm stratified risk on the basis of costs rather than illness, the algorithm perpetuated less attention to the health needs of Black patients.”
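To make the mechanism concrete, here is a minimal sketch in Python, using hypothetical numbers invented for this summary (not drawn from Morris’s article or the study he cites). It shows how ranking patients by a cost proxy can under-prioritize a group whose care has been historically underfunded, even when illness burden is identical:

```python
# Minimal sketch, hypothetical numbers: ranking patients by predicted *cost*
# rather than *illness* under-prioritizes a group whose care is historically
# underfunded, even when its actual health needs are the same.

# Each record: (group, chronic_conditions, annual_spending_usd)
patients = [
    ("A", 4, 12_000),  # group A: higher spending at every illness level
    ("A", 2, 8_000),
    ("B", 4, 7_000),   # group B: same illness burden, lower spending
    ("B", 2, 3_500),
]

def flag_high_risk(records, score, n=2):
    """Flag the top-n patients for extra care management by the given score."""
    return sorted(records, key=score, reverse=True)[:n]

# Cost proxy (what the biased algorithm optimized): both flags go to group A.
print(flag_high_risk(patients, score=lambda p: p[2]))
# [('A', 4, 12000), ('A', 2, 8000)]

# Direct illness target: one flag for the sickest patient in each group.
print(flag_high_risk(patients, score=lambda p: p[1]))
# [('A', 4, 12000), ('B', 4, 7000)]
```

The proxy (cost) correlates with the real target (health need) but not perfectly, and because the gap falls along group lines, the ranking systematically diverts care management away from group B.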
In addition, risk assessment tools leave room for interpretation. If clinicians are not properly trained or do not know how to interpret or use a given tool, this, too, can contribute to the coercion of psychiatric clients.
Morris identifies steps that can be taken to reduce the abuse of digital technologies in psychiatric settings, such as disclosing to clients the technologies being used in their treatment. He also suggests that clients be given the opportunity to “opt out” of certain technologies when appropriate, for example by allowing clients to choose in-person rather than video observation when available.
Clients should also be given the ability to change or erase digital information, such as requesting the removal of EMR flags or the deletion of video surveillance records. Morris suggests that while such requests often will not, and in some instances should not, be granted, having formal procedures in place could allow for open discussion between clients and clinicians about the purpose of flags and other surveillance measures.
Morris also advocates for further guidelines, training, and support so that clinicians know how to use digital technologies properly and are aware of the coercion-related risks these tools carry, so that those risks can be avoided.
Morris concludes by pushing for a balanced approach to digital technologies in psychiatric settings, one that recognizes the potential benefits and possibilities of these technologies while remaining alert to and avoiding their misuse and abuse.
****
Morris, N. P. (2021). Digital technologies and coercion in psychiatry. Psychiatric Services in Advance, 1-9. (Link)
“Morris concludes by pushing for a balanced approach to digital technologies in psychiatric settings, one that recognizes the potential benefits and possibilities of these technologies while remaining alert to and avoiding their misuse and abuse.”
The implication being that the existence of psych wards is perfectly fine; it’s just a few folks mucking about with tech in the wrong ways that is the issue.
The fact that this guy doesn’t question the existence OF psychiatric facilities to start with is, to put it mildly, a huge problem.
The only real benefit of these spy cameras would be to catch the evil actions of the staff toward the patients. But somehow, I don’t think that’s the purpose of it.
Well, much evil comes in such silent forms. More evil comes in silence than in active disturbance. I think we need character meters for staff to wear.
There never has been privacy. That right is broken as soon as you talk to a doctor. So you either never see one, or you become known through whatever they write on the charts. And this includes the medical system too, since we all know that there is nothing medical about psychiatry.
There are two systems that own you: the legal and the “health” “care” systems.
Sometimes getting on your knees and kissing their feet helps.
I’ve had to get care from a medical hospital that has cameras in almost every room to monitor patients. There’s always one person in each area watching. They claim to do it to better protect patients and really only use it for people who are fall risks. Plus it’s easier than going to each room.
Needless to say, I didn’t relax once.
But, honestly, I treat every interaction with any kind of medical professional as if I’m being recorded (both video and audio) and that the recordings will be used against me in some fashion. This goes double for *any* kind of interaction with the psych system. I don’t tell any professional anything I wouldn’t broadcast to the world. I used to physically look for cameras before I realized that that behavior could be viewed as “suspicious.”
Paranoid? Maybe. I have trust issues and enough experience behind me to justify those issues. Being required to use an app or other surveillance would 10,000% drive me away from any kind of “help.”
It’s disgusting and makes these professionals look like asses.
No mention of the microchipped Abilify — have we forgotten so soon?
Seriously, that is CREEPY!
People do know that meds are now squealing on you if you don’t take them, right? I’m just gonna leave this statement put out in 2017 by the FDA right here:
“The U.S. Food and Drug Administration today approved the first drug in the U.S. with a digital ingestion tracking system. Abilify MyCite (aripiprazole tablets with sensor) has an ingestible sensor embedded in the pill that records that the medication was taken. The product is approved for the treatment of schizophrenia, acute treatment of manic and mixed episodes associated with bipolar I disorder and for use as an add-on treatment for depression in adults.
The system works by sending a message from the pill’s sensor to a wearable patch. The patch transmits the information to a mobile application so that patients can track the ingestion of the medication on their smart phone. Patients can also permit their caregivers and physician to access the information through a web-based portal.
“Being able to track ingestion of medications prescribed for mental illness may be useful for some patients,” said Mitchell Mathis, M.D., director of the Division of Psychiatry Products in the FDA’s Center for Drug Evaluation and Research. “The FDA supports the development and use of new technology in prescription drugs and is committed to working with companies to understand how technology might benefit patients and prescribers.”
It is important to note that Abilify MyCite’s prescribing information (labeling) notes that the ability of the product to improve patient compliance with their treatment regimen has not been shown. Abilify MyCite should not be used to track drug ingestion in “real-time” or during an emergency because detection may be delayed or may not occur.
Schizophrenia is a chronic, severe and disabling brain disorder. About 1 percent of Americans have this illness. Typically, symptoms are first seen in adults younger than 30 years of age. Symptoms of those with schizophrenia include hearing voices, believing other people are reading their minds or controlling their thoughts, and being suspicious or withdrawn. Bipolar disorder, also known as manic-depressive illness, is another brain disorder that causes unusual shifts in mood, energy, activity levels and the ability to carry out day-to-day tasks. The symptoms of bipolar disorder include alternating periods of depression and high or irritable mood, increased activity and restlessness, racing thoughts, talking fast, impulsive behavior and a decreased need for sleep.
Abilify MyCite contains a Boxed Warning alerting health care professionals that elderly patients with dementia-related psychosis treated with antipsychotic drugs are at an increased risk of death. Abilify MyCite is not approved to treat patients with dementia-related psychosis. The Boxed Warning also warns about an increased risk of suicidal thinking and behavior in children, adolescents and young adults taking antidepressants. The safety and effectiveness of Abilify MyCite have not been established in pediatric patients. Patients should be monitored for worsening and emergence of suicidal thoughts and behaviors. Abilify MyCite must be dispensed with a patient Medication Guide that describes important information about the drug’s uses and risks.
In the clinical trials for Abilify, the most common side effects reported by adults taking Abilify were nausea, vomiting, constipation, headache, dizziness, uncontrollable limb and body movements (akathisia), anxiety, insomnia, and restlessness. Skin irritation at the site of the MyCite patch placement may occur in some patients.
Prior to initial patient use of the product, the patient’s health care professional should facilitate use of the drug, patch and app to ensure the patient is capable and willing to use the system.
Abilify was first approved by the FDA in 2002 to treat schizophrenia. The ingestible sensor used in Abilify MyCite was first permitted for marketing by the FDA in 2012.
The FDA granted the approval of Abilify MyCite to Otsuka Pharmaceutical Co., Ltd. The sensor technology and patch are made by Proteus Digital Health.
The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.”
Big Brother is watching you and pharmaceutical companies seem to be the “biggest brothers.”
As has already been mentioned, privacy has never existed in psychiatry, certainly not in mental hospitals, where there are no doors on the rooms and spies in the form of psychiatric aides monitor every word and action. And any discussion of dignity in this context is a joke. I have far too much to say on this matter to put it here. You can read my article in The Mighty, “For People in Psychiatric Hospitals, Dignity Can Be Life-Saving,” for a fairly extensive discussion of the problem: https://themighty.com/2020/12/psychiatric-hospital-dignity-after-suicide-attempt/. Furthermore, it is absolutely true that risk assessment is far, far, far from being an exact science. Risk assessment definitely results in flags or labels that pigeonhole people, limiting their treatment options and often leading to forced hospitalization. These were the outcomes my son and I experienced. As the saying goes, “read all about it” in my book Broken: How the Broken Mental Health Care System Leads to Broken Lives and Broken Hearts, available on Amazon.