Facebook's suicide prevention methods met with mix of praise and scrutiny

The social media giant uses artificial and human intelligence in its attempt to save lives

Facebook has allowed its users to report suicidal content in posts for years. But over roughly the last 18 months, the social media giant has ramped up those efforts, combining algorithms and artificial intelligence with human screeners to reach more people exhibiting signs of self-harm.

While some mental health experts and members of law enforcement have praised the expanded methods, critics say Facebook could be overstepping its bounds.

What are the details?

A computer algorithm monitors Facebook users' posts, comments, and videos, flagging language commonly associated with suicide and alerting specially trained team members to review the content. Those employees decide how to address each situation, if at all, including contacting law enforcement to conduct a welfare check when deemed necessary.
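To make that workflow concrete, here is a minimal, hypothetical sketch of a flag-and-escalate pipeline of that general shape. It is not Facebook's code: the phrase list, names, and structure are all invented for illustration, and Facebook's actual system reportedly uses trained machine-learning models rather than a fixed keyword list.

```python
# Illustrative sketch only; NOT Facebook's actual system. All names,
# phrases, and data structures here are hypothetical.

from dataclasses import dataclass, field
from typing import List

# Hypothetical risk phrases. A production system would use a trained
# classifier, not a hand-written list like this one.
RISK_PHRASES = ["want to end it", "no reason to live", "goodbye forever"]


@dataclass
class Post:
    author: str
    text: str


@dataclass
class ReviewQueue:
    """Flagged posts awaiting a trained human reviewer."""
    items: List[Post] = field(default_factory=list)

    def escalate(self, post: Post) -> None:
        self.items.append(post)


def flag_if_risky(post: Post, queue: ReviewQueue) -> bool:
    """Route a post to human review if it contains risk language.

    The algorithm only flags content; a human reviewer decides what
    action, if any, to take (e.g., contacting law enforcement).
    """
    text = post.text.lower()
    if any(phrase in text for phrase in RISK_PHRASES):
        queue.escalate(post)
        return True
    return False


# Example usage
queue = ReviewQueue()
flag_if_risky(Post(author="user123", text="There's no reason to live anymore"), queue)
print(f"{len(queue.items)} post(s) awaiting human review")
```

The key design point the article describes is that the software never acts on its own: it narrows the stream of content down to candidates, and the final judgment stays with trained people.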

According to a post from CEO Mark Zuckerberg in November, the alert systems were developed "after someone tragically live-streamed their suicide," and "in the last year, [the company] has helped first responders quickly reach around 3,500 people globally who needed help." Several people have killed themselves while using the Facebook Live application over the years.

Joseph Gerace, the sheriff of Chautauqua County in New York, hailed Facebook's quick-response systems, telling CNBC, "This is helping us in public safety. We're not intruding on people's personal lives. We're trying to intervene when there's a crisis."

The New York Times cited a case out of Rock Hill, South Carolina, in which a Facebook team member helped police find a man who was attempting to livestream his own suicide. The representative gave law enforcement the coordinates of the man's cellphone along with details from the video.

Courtney Davis, who was manning police telecommunications during the incident, told the Times, "Two people called the police that night, but they couldn't tell us where he was. Facebook could."

What are the critics saying?

Mason Marks, an attorney and medical doctor who is a fellow at Yale Law School, argued to the Times that Facebook's suicide risk assessments amount to practicing medicine, and he pointed out that involving law enforcement could lead to users being committed for psychiatric evaluations against their will.

"In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, 'Trust us here,'" Marks said.
