A recent article, published in Psychiatric Services in Advance, explores the use of digital technologies and how they can be misused and employed coercively in psychiatry. The author highlights steps that can be taken to reduce coercion and misuse of digital technologies in psychiatric settings.
The author, psychiatrist Nathaniel Morris of the University of California San Francisco, writes:
“Coercion is just one possible outcome among many, including loss of privacy, distress for patients and families, the transmission of stigmatizing information, and exacerbation of racial and socioeconomic disparities, related to digital technology use and misuse in psychiatry. At the same time, these technologies bring new opportunities for reconsidering and studying coercive practices to support the well-being of and respect for patients in psychiatric settings.”
While the use of digital technologies in psychiatry was already on the rise before the pandemic, it has increased dramatically during the COVID-19 pandemic. Although such technologies, including but not limited to telepsychiatry and mobile mental health apps, have increased client access to mental healthcare and information, they also raise a number of concerns about how they might infringe upon clients’ rights and be employed coercively.
Given that psychiatric clients are already at high risk for coercion, we must attend to how digital technologies may be used to further add to the problem.
Morris begins by addressing potential concerns associated with electronic medical record (EMR) flags, which can note high suicide or violence risk. The digitization of client records allows mental health professionals to easily access clients’ information and gain awareness of potential risks or concerns, enabling them to adequately address and assist those who might have a history of suicidal ideation or attempts. Such flags about histories of violence can also enable clinicians to take necessary safety precautions.
However, while beneficial in some ways, flagging clients’ records could be used in the service of coercion. Morris highlights, for example, how drawing attention to clients’ risk for suicide or violence might lead to biased treatment, in which the physician focuses solely on the client’s mental health while neglecting a broader medical understanding of the client, potentially missing medical issues.
Increased attention to mental health concerns may also lead clinicians to pursue coercive interventions, such as involuntary psychiatric hospitalization, that may not be necessary or helpful to the client. Further, EMR flags could be used to deny clients access to treatment or pressure clients into treatment that is not congruent with their own preferences.
For example, at the Veterans Health Administration, clients flagged with histories of violence may be required to follow certain treatment conditions, such as requiring a police escort or passing through a metal detector before entering the facility. Critics of EMR flags have also noted that most flagged behaviors are verbal, with some suggesting that flags are a way to punish individuals who express concerns or complaints about their treatment.
Morris also draws attention to the use of surveillance cameras in psychiatric units. While the use of surveillance cameras on psychiatric units is often justified as being in the service of the safety of the clients, research evidence does not support this claim and, in fact, suggests that surveillance can contribute to psychological harm. Other concerns associated with video surveillance include: “privacy, consent, dignity, data protection, and potential exacerbation of psychiatric symptoms.”
In addition to concerns about privacy and clients’ dignity, video surveillance in psychiatric settings can be used coercively. Clinicians could use client behaviors that occurred on camera, when the client presumed no one else was present, against them in civil commitment hearings that could potentially keep clients institutionalized. Along similar lines, clients may be monitored covertly without their knowledge, which raises privacy concerns in addition to potentially causing ruptures in clients’ trust.
Moreover, although videoconferencing in psychiatric settings has been beneficial, especially during the COVID-19 pandemic—increasing access to care, allowing clients to connect with their loved ones, and facilitating legal proceedings—several concerns accompany this technology. Morris suggests that poor sound and video quality could impair clients’ ability to be fully present for and understand civil commitment hearings; clients often already struggle to understand why they remain in the hospital following such hearings, with or without videoconferencing.
Additionally, clients in forensic settings struggling with mental health and/or substance addiction issues might not feel comfortable sharing personal or sensitive information in a videoconference with strangers or may not feel as if they have the same ability to access and confide in their legal counsel.
While videoconferencing may allow family and friends to visit their loved ones in psychiatric settings, Morris raises the concern that such access may lead loved ones to choose tele-visitation over in-person visits. Tele-visitation may not offer the same sense of connection as in-person visits, during which loved ones can more clearly see the impact of involuntary hospitalization on those they care about and thus better advocate for their institutionalized friends or family members.
Lastly, Morris discusses risk assessment tools, which clinicians use to estimate the likelihood of outcomes such as suicide or violence, as potentially problematic and coercive. Although risk assessment tools were employed in psychiatric settings before the advent of digital technologies, digital technologies are transforming these tools.
Risk assessment algorithms have been employed to assess for suicide, violence, and other negative events. While accurate predictions of such adverse outcomes could be useful, the reality is that these tools are imperfect and not as accurate as they may appear.
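Part of why such tools can look more accurate than they are comes down to base rates: when an outcome like suicide is rare, even a tool with high sensitivity and specificity flags far more people who will not experience the outcome than people who will. A minimal sketch, using hypothetical, illustrative numbers rather than figures from any real tool:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a flagged individual actually experiences the outcome."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A hypothetical tool with 90% sensitivity and 90% specificity,
# applied to an outcome affecting 1% of the screened population:
ppv = positive_predictive_value(0.90, 0.90, 0.01)
print(f"{ppv:.1%}")  # prints 8.3% -- most flagged individuals are false positives
```

Under these assumed numbers, more than nine out of ten flagged clients would never have experienced the predicted outcome, which is why a seemingly strong tool can still drive unwarranted coercive interventions.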
Social media companies, like Facebook, have also developed suicide risk assessment algorithms to detect concerning social media posts—which raises significant ethical concerns and questions about how valid such algorithms are. The lack of accuracy of these algorithms has real-life implications for those involuntarily hospitalized, potentially on false grounds.
Not only may these algorithms be inaccurate, but they might also reinforce systemic inequities affecting individuals from marginalized racial, gender, and socioeconomic groups, as well as other populations, such as children, who tend to be particularly at risk for coercion in psychiatric settings.
“In a recent example, researchers found racial bias in a widely used algorithm for stratifying patients’ health risks and targeting high-risk patients for additional care management. Because less money often is spent on Black patients than on White patients with similar needs, and the algorithm stratified risk on the basis of costs rather than illness, the algorithm perpetuated less attention to the health needs of Black patients.”
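The mechanism described in the quote can be sketched in a few lines. This is a toy example with hypothetical data, not the algorithm the researchers studied: when historical cost stands in as a proxy for illness and less money has historically been spent on one group, ranking by cost deprioritizes equally sick members of that group.

```python
# Hypothetical patients: (group, illness_severity, historical_cost).
# Groups A and B contain patients of identical severity, but less money
# has historically been spent on group B.
patients = [
    ("A", 8, 12000),
    ("B", 8, 7000),
    ("A", 5, 9000),
    ("B", 5, 5000),
]

# Two ways of choosing the top-2 patients for extra care management:
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
by_illness = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[0] for p in by_cost[:2]])     # ['A', 'A'] -- cost proxy selects only group A
print([p[0] for p in by_illness[:2]])  # ['A', 'B'] -- illness-based ranking includes both
```

The cost-based ranking passes over a group-B patient who is exactly as ill as the group-A patients selected, reproducing past underspending as future under-attention.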
In addition, risk assessment tools leave room for interpretation. If clinicians are not properly trained in how to interpret or use a given tool, this too could contribute to the coercion of psychiatric clients.
Morris identifies steps that can be taken to reduce the abuse of digital technologies in psychiatric settings, such as disclosing the technologies being used in treatment to clients. He also suggests that clients be provided with the opportunity to “opt-out” of certain technologies when appropriate, providing the example of allowing clients to choose in-person rather than video observation when available.
Clients should also be given the ability to change or erase digital information, such as requesting the removal of EMR flags or the deletion of video surveillance records. Morris acknowledges that many such requests will not, and in some instances should not, be granted, but suggests that having formal procedures in place could allow for open discussion between clients and clinicians about the purpose of flags and other surveillance measures.
Morris also advocates for further guidelines, training, and support so that clinicians know how to use digital technologies properly and are aware of, and can avoid, the risks of coercion these technologies carry.
Morris concludes by calling for a balanced approach to digital technologies in psychiatric settings, one that recognizes their potential benefits while remaining alert to, and guarding against, their misuse and abuse.
Morris, N. P. (2021). Digital technologies and coercion in psychiatry. Psychiatric Services in Advance, 1-9. (Link)