Facebook has begun using artificial intelligence to identify users who may be at risk of suicide, reports the BBC.

The company has long offered a reporting tool for those concerned that posts by a Facebook friend may indicate suicidal thoughts, and it is these reports that have been used to teach the AI system to detect worrying posts.

The social network has developed algorithms that spot warning signs in users’ posts and the comments their friends leave in response. After confirmation by Facebook’s human review team, the company contacts those thought to be at risk of self-harm to suggest ways they can seek help.

In the US, those posting messages flagged by the system can be offered the option of speaking with a crisis helpline via Facebook Messenger.

These pattern-recognition algorithms learn to recognise when someone may be struggling by being trained on examples of posts that have previously been flagged in this way.

Talk of sadness and pain, for example, would be one signal. Responses from friends with phrases such as “Are you OK?” or “I’m worried about you” would be another.
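Facebook has not published details of its model, but the description above resembles a standard supervised text classifier trained on previously flagged posts. Below is a minimal, illustrative sketch in that spirit; the training examples, labels, and the choice of a TF-IDF bag-of-words model with logistic regression are all assumptions for illustration, not Facebook’s actual system.

```python
# Illustrative sketch only: a generic text classifier trained on hypothetical
# examples of posts (plus friend comments) that were previously flagged.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each example joins a post with the comments
# friends left on it; label 1 means it was previously flagged for review.
examples = [
    "i can't take this pain anymore || are you ok? i'm worried about you",
    "great day at the beach with the kids || looks amazing, have fun!",
    "feeling so alone lately, nothing matters || please call me, are you ok?",
    "new job starts monday, so excited || congratulations!",
]
labels = [1, 0, 1, 0]

# Bag-of-words features over post + comment text, fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(examples, labels)

# Score a new post; in a real workflow, high scores would be routed to
# human reviewers rather than acted on automatically.
new_post = "i just feel so sad and tired of everything || i'm worried about you"
print(model.predict_proba([new_post])[0][1])
```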

The NYT reported last year that U.S. suicide rates had surged to their highest level in nearly 30 years.

The rise was particularly steep for women. It was also substantial among middle-aged Americans, sending a signal of deep anguish from a group whose suicide rates had been stable or falling since the 1950s […]

The rate rose by 2 percent a year starting in 2006, double the annual rise in the earlier period of the study. In all, 42,773 people died from suicide in 2014, compared with 29,199 in 1999.

Siri stepped up its own response to possible suicide references back in 2013.