Analytics and the Prevention of Suicide

Greg Horne describes how social media data can be used to identify groups at risk of suicide and to concentrate resources on them.

__________________________________________

Suicide is the second leading cause of death among youth in Canada. According to Statistics Canada, in 2011 it accounted for approximately 20% of deaths among people under the age of 25. The Canadian Mental Health Association reports that among 15- to 24-year-olds the percentage of deaths caused by suicide is even higher, a frightening 24%, the third highest in the industrialized world. Recent reports also suggest that suicide rates for First Nations and Inuit youth in Canada are five to eleven times higher than the national average. Yet, despite these disturbing statistics, it is difficult, if not impossible, for health care providers (or friends and family) to identify whether a young person intends to harm themselves or die by suicide.

The warning signs leading up to a suicide can be easy to miss. Consider, for example, the recent spate of suicides at the University of Guelph. Could the warning signs of escalating mental health issues at the university have been identified? Were there indications of a potential spike in suicides?

Some warning signs may be found online. Many people use social platforms like Facebook and Twitter to post detailed personal information about their health and their mental wellbeing. This information could help to identify groups who are at risk of self-harm or suicide.

SAS Canada, a data management, software development, and analytics company, is using a new artificial intelligence software solution to identify social groups at increased risk of suicide. Data collection and analysis begin with the manual selection of a group of target words that may indicate mental health issues or thoughts of suicide, for example “suicide”, “cut”, “bullying”, “family issues” and “depression”. After this initial stage, the set of target words grows as the software learns which concepts and discussions matter to the target demographic. To correctly identify the warning signs of suicide and eliminate false positives (cases where target words are mentioned but do not indicate a suicide risk), the software must understand the context and syntax of each post, and it must continually adapt to the language and colloquial terms in use.
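To make that first stage concrete, here is a minimal sketch in Python of seed-term matching with crude screening of false positives. Only the seed terms come from the article; the function names, the benign-context rules, and the sample posts are illustrative assumptions, not SAS's implementation, which relies on models that learn context rather than hand-written rules.

```python
import re

# Hypothetical seed lexicon based on the examples in the article; SAS's
# actual term list and expansion method are not public.
SEED_TERMS = ["suicide", "cut", "bullying", "family issues", "depression"]

# Hand-written benign-context rules to screen out obvious false positives.
# These are illustrative assumptions; a production system would use a
# trained text-analytics model instead.
BENIGN_CUES = [
    r"\bcut (the|my) (grass|hair|budget)\b",
    r"\bpaper cut\b",
    r"\bsuicide squad\b",  # movie title, not a risk signal
]

def flag_post(text: str) -> bool:
    """Return True if a post mentions a seed term outside a benign context."""
    lowered = text.lower()
    hit = any(re.search(rf"\b{re.escape(t)}\b", lowered) for t in SEED_TERMS)
    if not hit:
        return False
    # Screen out matches that the benign-context rules explain away.
    return not any(re.search(cue, lowered) for cue in BENIGN_CUES)

posts = [
    "I cut the grass this morning",
    "nobody would care if i was gone... been thinking about suicide a lot",
]
for post in posts:
    print(flag_post(post), "-", post)
```

In a real system, the benign-context screening would be learned rather than hand-coded, and the lexicon would expand automatically as the software discovers new terms that co-occur with confirmed risk signals.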

When data analysis identifies a group as being at high risk, public health units, university authorities, or other organizations with appropriate expertise can then design and implement preventive interventions.

Recently, SAS presented this work to various health care organizations in Canada. Although the details of these meetings are confidential, I can report that these organizations confirmed the pressing need to identify the early warning signs of behaviour that may lead to self-harm or death by suicide. SAS's artificial intelligence model for identifying people on social media who are at risk of suicide could fill this gap in health care data. The positive response from these organizations provides some validation and encouragement for the project.

The use of artificial intelligence on social media data is part of a movement called Data for Good, which seeks to use big data to help solve social and humanitarian problems.

Currently, SAS is just scratching the surface of what can be done with the data held in social platforms in the context of mental health and suicide prevention. Another way to use online data is to look at patterns and trends. This information can reveal whether specific regions of Canada have a problem, or point to a particular school or time of year, ultimately allowing for more targeted suicide prevention campaigns. It could show decision makers where effort and dollars are most needed to treat the people most at risk. I envision it being used to find not only at-risk teens but also others, such as first responders or veterans, who may be considering self-harm, self-medication or suicide. More information on the work done by SAS Canada is available online.
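As a rough sketch of what that trend analysis might involve, the snippet below counts flagged posts by region and by month; the records, field names, and values are all invented for illustration.

```python
from collections import Counter

# Hypothetical flagged-post records; in practice these would come from the
# social media analysis described earlier.
flagged_posts = [
    {"region": "Ontario", "month": "2017-03"},
    {"region": "Ontario", "month": "2017-03"},
    {"region": "Nunavut", "month": "2017-04"},
]

# Tallying by region and by month surfaces the geographic and seasonal
# patterns that could guide targeted prevention campaigns.
by_region = Counter(post["region"] for post in flagged_posts)
by_month = Counter(post["month"] for post in flagged_posts)

print(by_region.most_common())  # e.g. [('Ontario', 2), ('Nunavut', 1)]
print(by_month.most_common())
```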

Nevertheless, some big questions remain. Who should intervene when a group is identified as being at high risk of self-harm or suicide, and when and where should such an intervention occur? How should privacy concerns be managed when this kind of surveillance is involved? Who is accountable when no one intervenes and a death occurs? How should society use such data to drive positive outcomes for patients? And how should the success of such interventions be measured? These are questions that policy makers and academics are currently wrestling with.

__________________________________________

Greg Horne is the National Lead for Health Care at SAS Canada. @_greghorne
