Facebook rolls out AI to detect suicidal posts before they’re reported
This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.
“Makes me want to kill myself,” he typed just to f*ck up Facebook algorithms.
But relax! I’m sure none of this will ever be misused.
It is for our own good.
‘i am from the government and i am here to help.’
And we get to learn that sometimes when “help” is coming, it’s time to run away, fast.
Would you like:
a) help
b) other help
c) intrusive help
d) government help
e) not help
f) a dictionary