Experts have called for better legal and regulatory frameworks to address the expanding use of digital technologies in mental health initiatives. A new mapping study in the International Journal of Law and Psychiatry by disability law expert Piers Gooding provides a framework for organizing these new digital mental health technologies. His work also clarifies the yawning gap between current policies and regulations and the scale of the legal, ethical, and social risks that accompany their use.
A key promise of digital mental health technologies is closing the “treatment gap” by expanding the scale and reach of mental health services. Researchers have also noted how the rise of digital psychiatry, particularly with machine learning and AI, is transforming philosophical assumptions underpinning who receives psychiatric treatment and why.
This radical potential of digital psychiatry has enticed governments worldwide to pour funds into digital or technology-enhanced mental health in hopes of increasing cost-effectiveness and improving the quality of patient care. Meanwhile, tech industry professionals estimate that the value of the market for mobile health apps will rise to US$102.35 billion by 2023. Mental health apps, which range from relatively non-invasive ‘lifestyle apps’ to wholesale tracking systems to software that effectively turns one’s smartphone into a medical device, represent nearly 30% of this market.
Digital mental health technologies have also spurred a transfer of ideas and concepts from the field of mental health into other social institutions like education, consumer industries, and the criminal justice system. There, they are used in the name of prevention and early intervention. Substantial empirical evidence now suggests that mental health apps disregard user privacy, open up a host of ethical issues related to their commercial status, and lack a firm basis in evidence.
Overarching debates about these trends converge on the recognition that “current legal and regulatory structures for digital mental health technologies are inadequate.”
This study by Gooding maps technologies that pertain to mental health in and beyond straightforward mental health contexts. It aims to identify major socio-legal issues, types of technologies, and their users as a preliminary means to make sense of legal, ethical, and social issues arising where digital technology meets mental health practice.
The map produced a typology structured by domains of use. Where industry perspectives tend to map only functionalities, a “domains of use” map puts the users and subjects of the technologies at the center. The first major user category comprises clients, service users, and patients, who use technologies such as peer-group communication platforms run by users or moderated by service providers, service user-directed personal records, and a variety of self-directed apps that manage symptoms, aid in self-diagnosis, and help users navigate professional offerings.
From a legal and ethical perspective, these technologies raise concerns that apply broadly to the consequences of aggregating health information (privacy, security, data ownership, and the potential for government intervention). Oversight of these technologies is notably opaque, mixing scant government regulation with industry self-regulation and consumer education.
Service provider technologies were also surveyed, including referral coordination, clinical decision support, electronic health records, client identification and registration systems, telemedicine, activity planning and scheduling, and provider training. Further examples include digital phenotyping, digital pills, tele-psychotherapy (such as online cognitive-behavioral therapy), interfaces through which clinical sessions occur, and chatbots and automated first-response systems (e.g., for suicide hotlines).
In each of these categories, service user groups’ concerns center on privacy, confidentiality, and the possibility of discrimination should personal health data be stolen, leaked, or commercialized. Professional consternation centers on the dangers of unproven, low-quality commercialized treatments. In this regard, health and data privacy laws in most countries contain no requirements specific to digital mental health initiatives, and the requirements that do exist tend to lack detail and comprehensive oversight.
Health system management technologies are a different beast. Exacting monitoring tools such as ‘Electronic Visit Verification’ (EVV), along with e-prescription tracking, raise issues of compliance with core administrative law standards, like legality, fairness, transparency, and accountability. This kind of “welfare surveillance” is distinctive in its collection of secondary personal data, or personal data that reveals more than was intended for capture.
The map also includes data management services, stretching our understanding of the scope of mental health technologies. Data storage is relevant to every electronic record and refers to a central storage point between, for example, a mental health app and a healthcare provider. While these services are usually subject to existing health data protection laws like HIPAA in the U.S., Gooding provides examples of dangerous exceptions:
“Fast data transfer means that mistakes are hard to contain, and high connectivity increases the likelihood of unmoderated content and unintended recipients. In the UK, for example, the national database of medical files, the ‘Care.data scheme’ was shut down in 2016 after it was found to be selling patient data to drug and insurance companies.”
Data sharing by criminal justice agencies presents similar issues. Gooding presents the following example:
“The Office of the Privacy Commissioner of Canada found that the Toronto Police Service had released mental health and suicide data to the Canadian Police Information Centre (CPIC). CPIC had then shared the data with the US Department of Homeland Security, which in turn shared it with US Customs and Border Protection. Several Canadians were then deemed inadmissible to the US under the Immigration and Nationality Act (US).”
Whereas data management services act as ‘data warehouses,’ storing records for healthcare providers, corporations that collect user data relevant to mental health set up ‘data marketplaces.’ Gooding highlights how Facebook and Twitter deploy wellness screenings and AI-based suicide prevention efforts with little regulation governing how this data is connected to real healthcare systems.
Gooding’s review of these technologies points to issues that impinge on fundamental rights, like freedom from exploitation, and ethical principles like transparency, accountability, and the minimization of harm. It is easy to see how the proliferation of these technologies leads to a concentration of power in the field, privacy and security issues, a diminution of human connection in care relations, and the extension of clinical, government, and commercial power deeper into the lives of individuals.
This broad-based mapping research aids efforts to make sense of massive, tech-enabled shifts in mental health care that are currently taking advantage of the slow pace of legal and policy updates. Gooding reviews the legal and ethical issues at play, which may give policymakers a sense of the regulatory options available to ensure greater privacy, fairness, and dignity in the face of these trends.
****
Gooding, P. (2019). Mapping the rise of digital mental health technologies: Emerging issues for law and society. International Journal of Law and Psychiatry, 67, 101498. https://doi.org/10.1016/j.ijlp.2019.101498