From The Guardian: “In 2017, Facebook started using artificial intelligence to predict when users might kill themselves. The program was initially limited to the US, but Facebook has since expanded it globally. It scans nearly all user-generated content in most regions where Facebook operates. When it identifies users at high risk for suicide, Facebook’s team notifies police and helps them locate users. It has initiated more than 3,500 of these ‘wellness checks’ in the US and abroad.
“Though Facebook’s data practices have come under scrutiny from governments around the world, its suicide prediction program has flown under the radar, escaping the notice of lawmakers and public health agencies such as the Food and Drug Administration (FDA). By collecting data from users, calculating personalized suicide risk scores, and intervening in high-risk cases, Facebook is taking on the role of a healthcare provider; the suicide predictions are its diagnoses and the wellness checks are its treatments. But unlike healthcare providers, which are heavily regulated, Facebook’s program operates in a legal grey area with almost no oversight.”