From The Guardian: “In 2017, Facebook started using artificial intelligence to predict when users might kill themselves. The program was limited to the US, but Facebook has expanded it globally. It scans nearly all user-generated content in most regions where Facebook operates. When it identifies users at high risk for suicide, Facebook’s team notifies police and helps them locate users. It has initiated more than 3,500 of these ‘wellness checks’ in the US and abroad.
Though Facebook’s data practices have come under scrutiny from governments around the world, its suicide prediction program has flown under the radar, escaping the notice of lawmakers and public health agencies such as the Food and Drug Administration (FDA). By collecting data from users, calculating personalized suicide risk scores, and intervening in high-risk cases, Facebook is taking on the role of a healthcare provider; the suicide predictions are its diagnoses and the wellness checks are its treatments. But unlike healthcare providers, which are heavily regulated, Facebook’s program operates in a legal grey area with almost no oversight.”