Emerging Digital Mental Health Technologies Raise Legal and Ethical Issues

Inadequate legal and regulatory structures allow for abuses with digital mental health technologies.


Experts have called for better legal and regulatory frameworks to address the expanding use of digital technologies in mental health initiatives. A new mapping study by disability-law expert Piers Gooding, published in the International Journal of Law and Psychiatry, provides a framework for organizing these new digital mental health technologies. His work also clarifies the yawning gap between current policies and regulations and the scale of the legal, ethical, and social risks that accompany these technologies’ use.

A key promise of digital mental health technologies is closing the “treatment gap” by expanding the scale and reach of mental health services. Researchers have also noted how the rise of digital psychiatry, particularly with machine learning and AI, is transforming philosophical assumptions underpinning who receives psychiatric treatment and why.

This radical potential of digital psychiatry has enticed governments worldwide to pour funds into digital or technology-enhanced mental health in hopes of increasing cost-effectiveness and improving the quality of patient care. Meanwhile, tech industry professionals estimate that the value of the market for mobile health apps will rise to US$102.35 billion by 2023. Mental health apps, which range from relatively non-invasive ‘lifestyle apps’ to wholesale tracking systems to software that effectively turns one’s smartphone into a medical device, represent nearly 30% of this market.

Digital mental health technologies have also spurred a transfer of ideas and concepts from the field of mental health into other social institutions like education, consumer industries, and the criminal justice system. There, they are used in the name of prevention and early intervention. Substantial empirical evidence now suggests that mental health apps disregard user privacy, open up a host of ethical issues related to their commercial status, and lack a firm basis in evidence.

Overarching debates about such trends converge on the recognition that “current legal and regulatory structures for digital mental health technologies are inadequate.”

This study by Gooding maps technologies that pertain to mental health in and beyond straightforward mental health contexts. It aims to identify major socio-legal issues, types of technologies, and their users as a preliminary means to make sense of legal, ethical, and social issues arising where digital technology meets mental health practice.

The map produced a typology structured by domains of use. Where industry perspectives tend to map only functionalities, “domains of use” maps put concern for the users and subjects of technologies at the center. The first major user category comprises clients, service users, and patients, who use technologies such as peer-group communication tools run by users or moderated by service providers, service user-directed personal records, and a variety of self-directed apps that manage symptoms, aid in self-diagnosis, and help users navigate professional offerings.

From a legal and ethical perspective, these technologies raise concerns that apply broadly to the consequences of aggregating health information: privacy, security, data ownership, and the potential for government intervention. Oversight here is notably opaque, mixing scant government regulation with industry self-regulation and consumer education.

Service provider technologies were also surveyed, including referral coordination, clinical decision support, electronic health records, client identification and registration systems, telemedicine, activity planning and scheduling, and provider training. Further examples include digital phenotyping, digital pills, tele-psychotherapy (such as remotely delivered cognitive-behavioral therapy), the interfaces through which clinical sessions occur, and chatbots and automated first responses (e.g., on suicide hotlines).

In each of these categories, service user groups’ concerns center on privacy, confidentiality, and the possibility of discrimination should personal health data be stolen, leaked, or commercialized. Professional consternation centers on the dangers of unproven, low-quality commercialized treatments. In this regard, health and data privacy laws in most countries omit requirements specific to digital mental health initiatives, and the requirements that do exist tend to lack detail and comprehensive oversight.

Health system management is a different beast, employing tools such as ‘Electronic Visit Verification’ (EVV). Exacting monitoring like EVV, along with e-prescription tracking, raises issues of compliance with core administrative law standards, such as legality, fairness, transparency, and accountability. “Welfare surveillance” is unique in its collection of secondary personal data, that is, personal data that reveals more than was intended for capture.

The map also includes data management services, stretching our understanding of the scope of mental health technologies. Data storage is relevant to every electronic record and refers to a central storage point between, for example, a mental health app and a healthcare provider. While these services are usually subject to existing health data protection laws like HIPAA in the U.S., Gooding provides examples of dangerous exceptions:

“Fast data transfer means that mistakes are hard to contain, and high connectivity increases the likelihood of unmoderated content and unintended recipients. In the UK, for example, the national database of medical files, the ‘Care.data scheme’ was shut down in 2016 after it was found to be selling patient data to drug and insurance companies.”

Data sharing by criminal justice agencies presents similar issues. Gooding presents the following example:

“The Office of the Privacy Commissioner of Canada found that the Toronto Police Service had released mental health and suicide data to the Canadian Police Information Centre (CPIC). CPIC had then shared the data with the US Department of Homeland Security, which in turn shared it with US Customs and Border Protection. Several Canadians were then deemed inadmissible to the US under the Immigration and Nationality Act (US).”

Whereas data management services act as ‘data warehouses,’ storing records for healthcare providers, corporations that collect user data relevant to mental health set up ‘data marketplaces.’ Gooding highlights how Facebook and Twitter use wellness screenings and AI-based suicide prevention efforts with little regulation governing how this data is connected to real healthcare systems.

Gooding’s review of these technologies points to issues that impinge on fundamental rights, like freedom from exploitation, and on ethical principles like transparency, accountability, and the minimization of harm. It is easy to see how the proliferation of these technologies leads to a concentration of power in the field, privacy and security problems, the diminution of human connection in care relations, and the extension of clinical, government, and commercial power deeper into the lives of individuals.

This broad-based survey research aids efforts to make sense of massive, tech-enabled shifts in mental health care, shifts that are currently taking advantage of the slow pace of legal and policy updates. Gooding reviews the legal and ethical issues at play, which may give policymakers a sense of the regulatory options available to ensure greater privacy, fairness, and dignity in the face of these trends.

 

****

Gooding, P. (2019). Mapping the rise of digital mental health technologies: Emerging issues for law and society. International Journal of Law and Psychiatry, 67, 101498. https://doi.org/10.1016/j.ijlp.2019.101498

8 COMMENTS

  1. We all know that this has absolutely nothing to do with care, or quality, or health.
    The “public” should start to THINK. They should be aware that each and every “public” is at risk of a lesser
    quality of life. This is simply a move to keep the world going EXACTLY as it has been, and if you are in the way of that plan, you are disposable. By disposable, I mean that you yourself become just another organism to feed on, and if you have no QOL or die in the process, you don’t even become a stat. I realize the public, which is all of us, is easily misled, because people think that these measures are always about someone else, WHICH is exactly what they want you to think. Or should I say, they know and count on the prejudice and biases of people. Except this is one time where you, the public, are no longer insulated by simply pointing a finger at your neighbour.
    No one is immune.


    • Yes, it (privacy) has been my beef for years. I remember when, in Canada, the shared info act came about. It was for “collaborative care.” I have found this more problematic than if I had a criminal record.
      People truly do become comfortably numb. We have always had difficulty with thinking and reasoning things out and with looking at long-term implications.


  2. I recall an experiment where someone designed a computer program to act like a non-directive therapist. It asked basic questions like, “Tell me about your childhood,” and gave generic acknowledgements like “I see” or “Uh huh…” If someone went off on a tangent, the computer would say something like, “Let’s get back to your mother.” The users reported feeling better from using the program. I think it showed how desperate people are just to have someone listen to them, even though the “listening” was totally artificial. Perhaps the machine had an advantage, in that it was unable to give unsolicited advice or demonstrate prejudice or evaluate for the user what the computer thought their “real problem” was.

    Anyway, I think we’re sinking down pretty low when we’re trying to have computers provide “therapy.” Though the computer program may have one big advantage over a psychiatrist with a prescription pad: at least it won’t make you worse.
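
    The program described sounds like Joseph Weizenbaum’s mid-1960s ELIZA. A minimal sketch of that style of keyword-matching responder, in Python, might look like the following; the keyword list and canned phrases here are illustrative guesses, not Weizenbaum’s originals.

        import random

        # Canned, content-free acknowledgements, as the comment describes.
        ACKNOWLEDGEMENTS = ["I see.", "Uh huh...", "Go on.", "Tell me more."]
        # Topics the "therapist" tracks; anything else counts as a tangent.
        KEYWORDS = {"mother", "father", "childhood", "family", "dream"}

        def respond(user_input: str) -> str:
            if set(user_input.lower().split()) & KEYWORDS:
                # On-topic input gets a generic acknowledgement.
                return random.choice(ACKNOWLEDGEMENTS)
            # Off-topic input gets steered back, ELIZA-style.
            return "Let's get back to your mother."

        print("Tell me about your childhood.")
        while True:
            reply = input("> ")
            if reply.lower() in {"quit", "exit"}:
                break
            print(respond(reply))

    The sketch makes the comment’s point concrete: nothing in the loop understands the user; a set intersection against a keyword list is all that separates “listening” from a redirect.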


    • Steve, do you sense this site as providing some degree of therapy? Or learning, though the impact of typing the words and transmitting them through e-tech may not really get beyond Elting E. Morison’s book, “From Know-How to Nowhere: The Development of American Technology”? To have been introduced and medicated without first really understanding the nature of “the medical practice” seemingly leaves the court docket wide open for potential legal challenges, if only there were attorneys who understood, or could understand, how rationality may not be so linear in nature. The real problems seemingly are surfacing now.


        • When I read your comment along with others, to advocate and to have advocated, knowing this moment is in part framed by the technologies of “communication” as typed, can leave me in awe of discovering said site. But also, without a verbal, an oral way to hear where my emphasis is, I wonder (as I type this, the seclusions, the wards, the yuk of the experiences encountered) whether one is required to go beyond. To hear or be in crowds or groups who choose the simplicity of words to realize race, or then in terms of writing as sharp delineations (Black versus White, or black to white as a range of grays), can be confusing as to the bias one can think. If I were thinking about how one might experience “light” or enlightenment from the painter’s perspective, could the argument be realized in terms of exclamation points? To be with and of the heart requires a great deal of being human. Oddly, when we realize the level of the other perspectives aligned with potential embarrassment, by the very words of mental health treated and treatment, there seems to be a point whereby enough lines have been drawn in the sand. Tks!

