Digital Health Technologies Threaten Human Rights, Experts Warn

Global health experts warn that gains from digital health technologies risk being offset by violations of human rights.


New research published in the Health and Human Rights Journal outlines the potential harms that come with the hasty, unprincipled adoption of digital health technologies. Global health experts and advocates, led by Nina Sun from Drexel University, review strategies for mitigating risks of discrimination and violence inherent in digital health technologies and propose mechanisms for increasing accountability.

While not new, the use of digital technologies for health has risen with the COVID-19 pandemic. Our current circumstances make it even more difficult to argue against the allure of digital health technologies that stand to close the “treatment gap” by expanding the scale and reach of mental health services. Such potential has enticed governments worldwide to pour funds into digital or technology-enhanced mental health, with the aim of increasing cost-effectiveness in the long run and improving the quality of patient care.

Nevertheless, it is becoming more and more apparent that digital health technologies stand on slippery ethical grounds and that this expansion implies outsourcing important features of service provision.

The article provides an overview of potential harms related to these technologies. It describes the ethical and human rights standards that governments and other stakeholders may use to effectively mitigate the rights-related concerns associated with their use.

The authors identify three key areas of potential harm from digital health technologies, alongside concerns about access to health services and their privatization: data breaches, bias, and function creep.

Common in the health sector, data breaches occur when security efforts are circumvented, leading to the accidental or unlawful alteration, loss, or disclosure of personal data. Such breaches “violate an individual’s right to privacy and erode trust in the health care system.” Bias in algorithms and automated processes has been repeatedly documented:

“This phenomenon can, for example, amplify discrimination in criminal justice proceedings and predictive policing, facilitate discriminatory hiring decisions, or produce targeted online marketing campaigns with discriminatory effects.”

The accuracy of these processes also declines for minority populations, and algorithmic decision-making sometimes evades nondiscrimination laws by furthering the dynamic pricing of services. Function creep, meanwhile, occurs when data collected for one purpose are later used opportunistically for another.

The authors urge us to see ethics and human rights as separate yet complementary systems for protecting individuals and promoting accountability in the use of digital health technologies. Ethical approaches alone tend to lack specificity and have weak enforcement mechanisms.

This is where a human rights framework can be helpful. Within the context of health, discussions raised by COVID-19 and rights-related standards for populations at increased risk of HIV point to the most relevant standards: the right to health, non-discrimination, the right to benefit from scientific progress, and the right to privacy. Together, these support access to and availability of unbiased technologies, the imposition of rights due diligence and harm mitigation, and the right to recourse to non-digital services and to the erasure or correction of digital records.

On the issue of private actors, the authors recommend that governments hold the private sector to human rights commitments as a matter of legal compliance:

“States must protect against human rights abuses by third parties, an obligation that covers private actors. This includes ensuring access to justice when business-related human rights violations arise. Governments should also set expectations for businesses domiciled or operating within their jurisdiction to respect human rights, including through crafting, monitoring, and enforcing protective legislation, as well as conducting human rights due diligence that accounts for issues related to gender and marginalization.”

Regional data-protection frameworks already enshrine consensual processes meant to protect the rights of the “data subject.” These include the right to be informed about what data are and are not collected, to access stored data, to rectification and erasure, to restriction of processing, to be notified of any rectification, erasure, or restriction of processing, to data portability, and to object to the processing of one’s data.

However, one must remember that safeguards aligned with regional and global human rights and ethical standards in national legal frameworks on data collection and processing are minimum standards. These should be complemented to advance the right to health in an equitable, non-discriminatory manner.

The authors propose three mechanisms for complementing these standards, allowing countries to assess whether ethical principles are sufficiently considered and human rights protections integrated when digital health technologies are adopted: judicial review, used to its fullest extent; health technology assessments (HTAs); and national digital health strategies.

Health technology assessments evaluate the value of a health technology at different points in its life cycle, which is useful for informing policymakers and influencing decision-making in health care (e.g., the allocation of funds). HTAs pair well with country-wide strategies that help identify gaps and opportunities for leveraging digital technologies to improve health outcomes. National digital health strategies also serve to direct focus to the definition of human rights standards, to advance rights-based principles (e.g., participation via broad-based consultations), and to develop the trust necessary for effective implementation.

Rights-related issues notwithstanding, digital technologies hold promise for addressing barriers to health care quality and access. Their future development should take community ownership of such technologies more seriously. This tends to create a deeper alignment with the ethical principles of accountability and justice in their design and implementation.



Sun, N., Esom, K., Dhaliwal, M., & Amon, J. (2020). Human rights and digital health technologies. Health and Human Rights Journal, 22(2), 21–32. (Link)
