The Future of Suicide Prevention

Do entrepreneurs and healthcare pioneers hold the key to unlocking more effective means for preventing suicide? We take a deep dive into what’s on the market and leading the charge in prevention.

Illustration by Geraldine Sy: how technology can be used to help a person living with a behavioral health condition

Thanks to technological innovation, we have new tools and processes to alleviate or solve our everyday challenges, affording us greater productivity, efficiency, and success. Unsurprisingly, then, healthcare is also in the midst of a technological transformation, as new advancements and ideas propel us toward cures and more effective, more efficient treatments. Illnesses, diseases, and everyday ailments are being re-examined and re-evaluated through the lens of innovation, with the brightest minds considering how applications, algorithms, big data, artificial intelligence (AI), and machine learning can advance society toward healthier lives. Improving our ability to prevent suicide is one such focus of healthcare technology, especially given annual suicide rates that continue to rise.

Entrepreneurs and healthcare pioneers have aimed their attention at three notable areas of opportunity for revolutionizing current methods in suicide prevention: identifying at-risk individuals, monitoring their risk, and building support networks that bolster connectedness.

Though many healthcare organizations, institutions, agencies, and other entities address warning signs, risk factors, and protective factors, they often lack the proximity, time, personnel, and resources needed to address all three comprehensively and effectively. Forward thinkers are trying to remove those barriers for better, broader, and more successful prevention efforts.

As a result, most of the new platforms, tools, and software aim to overhaul or improve identification processes, risk monitoring, and support-network building - or some combination of the three. To do this, tech leaders are creating apps and employing AI, algorithms, and machine learning, usually combining them with big data found in electronic health records (EHRs), on social media, and in other pre-existing troves of information.

Applying AI to healthcare settings

Let’s start with EHRs, which are essentially the digital version of paper medical charts. Unsurprisingly, EHRs contain a wealth of patient and health-system population information that could help tech leaders predict suicide risk. Psychologist and researcher Jessica Ribeiro and her team apply AI to EHRs to “map out the relationship between factors that lead to suicide.” The tool’s machine-learning-trained algorithm examines data ranging from “medication use to the number of ER visits over many years” to discover patterns that indicate suicide risk. With 80 percent accuracy for attempts within two years and 92 percent accuracy for attempts within the next week, the tool can surface patterns and at-risk patients that cost and resource constraints might otherwise prevent healthcare professionals from identifying.
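
To make the approach concrete, here is a minimal sketch of how a risk model like this might be trained on EHR-derived features. Everything below - the feature names, the data file, the outcome label, and the choice of model - is an illustrative assumption, not a detail of Ribeiro’s actual system:

```python
# Hypothetical sketch: training a suicide-risk classifier on EHR features.
# Feature names, file path, and labels are illustrative, not from the study.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row is one patient; columns are longitudinal EHR summaries.
ehr = pd.read_csv("ehr_features.csv")  # assumed per-patient extract
features = ["num_er_visits", "num_medications", "prior_dx_depression",
            "days_since_last_visit", "age"]
X, y = ehr[features], ehr["attempt_within_2y"]  # assumed binary outcome label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice, a system like this would draw on thousands of EHR variables spanning many years and would require careful clinical validation before influencing care.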

Applying AI to big data from phones and social media

But that leaves, of course, the people who never set foot in a medical office or facility, let alone reach out for help with a behavioral health issue. Behavioral science software company Cogito uses data from smartphone communication and movement patterns to power its mobile app, Companion. Drawing on information like where and how far the smartphone user travels, and how socially connected the user is through calls and messages, the app generates a risk score and shows it to a clinician, who can call the user to follow up if necessary. If the score drops for any reason, the app alerts the clinician. Naturally, users must opt in, but the power of this technology is that an individual doesn’t have to interact with a medical professional to be on a clinician’s radar.
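
Cogito hasn’t published its scoring model, so the following is only a toy sketch of what a phone-derived wellness score and drop alert could look like; the signals, weights, and threshold are invented for illustration:

```python
# Hypothetical sketch of a phone-derived score (not Cogito's actual model).
from dataclasses import dataclass

@dataclass
class DaySignals:
    km_traveled: float       # total distance moved (GPS)
    places_visited: int      # distinct locations
    calls_made: int          # outgoing calls
    messages_sent: int       # outgoing texts

def wellness_score(day: DaySignals) -> float:
    """Toy 0-100 score: more movement and social contact -> higher score.
    Weights are illustrative assumptions, not clinically validated."""
    mobility = (min(day.km_traveled / 10, 1.0) * 0.3
                + min(day.places_visited / 5, 1.0) * 0.2)
    social = (min(day.calls_made / 3, 1.0) * 0.25
              + min(day.messages_sent / 10, 1.0) * 0.25)
    return round((mobility + social) * 100, 1)

def should_alert(history: list[float], today: float,
                 drop_threshold: float = 20.0) -> bool:
    """Flag the clinician when today's score falls well below the recent average."""
    baseline = sum(history) / len(history)
    return baseline - today >= drop_threshold

history = [72.5, 68.0, 70.3]  # recent daily scores
today = wellness_score(DaySignals(km_traveled=0.4, places_visited=1,
                                  calls_made=0, messages_sent=1))
if should_alert(history, today):
    print("notify clinician for follow-up")  # assumed escalation path
```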

Social media is another plentiful data source, providing nearly unlimited publicly available information that technology companies can scan and analyze for indicators of risk. Tech giant Facebook has taken steps to do exactly that using its own data. Repurposing existing features, Facebook has created an integrated approach to prevention: improving its reporting process with crisis resources and dedicated review teams, partnering with crisis support organizations to assist via Live Chat, and, on the data side, using AI and pattern recognition to look for threats. When scanning users’ posts, Facebook uses AI to recognize those likely to include thoughts of suicide, then has its Community Operations team review them and reach out to the user if necessary.

Outside of Silicon Valley, Canada-based AI company Advanced Symbolics is partnering with the Public Health Agency of Canada to do something similar to Facebook’s post-scanning. In their pilot project, the two organizations will “[examine] patterns in Canadian social media posts, including suicide-related content” to research and predict suicide rates. Unlike Facebook, which uses what it finds to reach out to users and potentially “intervene,” this pilot project will use its findings to help the country with its mental health resource planning.

Tackling a social media platform more popular in China, a group of researchers led by Zhu Tingshao of the Institute of Psychology at the Chinese Academy of Sciences in Beijing has taken to Weibo to apply AI to its data. Their technology “uses a web crawler to scan posts and pattern recognition to find those that show suicidal tendencies. It does not look for keywords, but uses a prediction model that makes a judgment about the content.” If the content suggests risk, the team offers mental health resources to the individual. Using their AI system, Zhu and his team have identified more than 20,000 at-risk users, and they are in the process of introducing the system to Twitter in the U.S.
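
As a rough illustration of the difference between keyword matching and a learned prediction model, here is a minimal text-classification sketch; the training posts, labels, and threshold are placeholders, not the team’s actual data or model:

```python
# Hypothetical sketch: classifying posts by learned patterns, not keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed tiny corpus of posts hand-labeled by clinicians (1 = concerning).
posts = ["example post expressing hopelessness and withdrawal",
         "example post about an ordinary day at work"]
labels = [1, 0]

# The pipeline learns weighted word patterns from labeled examples, so a new
# post can score high without containing any single predefined keyword.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

risk = clf.predict_proba(["new post pulled in by the web crawler"])[0][1]
if risk > 0.8:  # assumed threshold for offering mental health resources
    print("flag for outreach")
```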

In general, combining social media with AI offers the potential to discover patterns of content and posting behavior that could help tech leaders, researchers, and caregivers alike better understand which online signals might indicate risk.

Using apps to help communities offer support

Apps are built to be mobile and accessible anytime, anywhere, which proves useful when tackling something like suicide prevention. Many up-and-coming apps span everything from educating caregivers and promoting connectedness to offering assistance in times of crisis.

One of the players in this area is the Suicide Prevention App, a “publicly distributed standardized screening and response planning tool that empowers [users] with the professional skills needed to help someone in time of need from anywhere in the world.” Using proprietary algorithms, the app gauges your level of concern based on the information you enter, then provides a response plan to help you support the person in need. The app touts itself as a tool “to build community support and capacity to reduce suicide by giving you the skills to help others and connect to the infrastructure already in place.” The Suicide Prevention App is less a means for someone to self-identify as at-risk and more a way for the community to better support people who might need help, ultimately putting resources in the hands of those who can make the greatest immediate impact.
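
The app’s algorithms are proprietary, so the following is purely a hypothetical sketch of what a screening-to-response-plan flow could look like; the questions, weights, tiers, and plan text are invented for illustration:

```python
# Hypothetical screening-to-response-plan flow (invented, not the app's logic).
SCREENING = [
    ("Has the person talked about wanting to die?", 3),
    ("Have they mentioned feeling like a burden to others?", 2),
    ("Have they withdrawn from friends or activities?", 1),
]

def concern_level(yes_answers: list[bool]) -> str:
    """Sum assumed weights for 'yes' answers and map the total to a tier."""
    score = sum(weight for (_, weight), yes in zip(SCREENING, yes_answers) if yes)
    if score >= 4:
        return "high"
    return "moderate" if score >= 2 else "low"

def response_plan(level: str) -> str:
    """Return an illustrative plan for the supporter, keyed by concern tier."""
    plans = {
        "high": "Stay with the person and contact a crisis line together now.",
        "moderate": "Start a direct, caring conversation and share crisis resources.",
        "low": "Check in regularly and keep resources on hand.",
    }
    return plans[level]

print(response_plan(concern_level([True, True, False])))  # -> high-tier plan
```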

Similarly, Kognito At-Risk aims to educate community members - specifically at schools and universities - on how to “identify warning signs of psychological difficulties, and practice initiating and leading real-life conversations that help motivate students to seek help.” Unlike other apps that simply offer resources, Kognito At-Risk employs online role-play simulation, enabling users to practice how they would handle real-life scenarios, with virtual coaches providing advice along the way. The program’s success led to “a 70 percent increase in the number of classmates approached to discuss concerns about their psychological state and 53 percent increase in referrals to school support services.”

Then there are apps like Bolster, which helps solve the problem of what to do after someone has been identified as at-risk. Many of these individuals do not necessarily need treatment in a hospital or inpatient setting, but they still require dedicated support and resources. Bolster bills itself as “the world’s first community for supporters,” offering an app that provides the caretakers and loved ones of at-risk individuals with “practical guidance and emotional reassurance” from both experts and others in the same situation. This emphasizes connectedness and the importance of readily available, trusted lifelines - both crucial factors in suicide prevention.

Other apps provide basic services like screening for suicidal ideation and behaviors, such as the Suicide Safe app launched by SAMHSA, or offer a quicker way to reach crisis resources and lifelines, such as the MY3 app in California. Still other apps, while not suicide-prevention-specific, aim to make therapy more geographically and financially accessible through teletherapy, providing preventative care that serves as a protective factor against suicide. Apps reduce many of the barriers that keep people from receiving care, initiating conversations, or supporting others, making prevention efforts mobile and readily available.

Analyzing behavior and movement

Beyond the apps, innovators are experimenting with some seemingly futuristic capabilities with potential application to suicide prevention. Just as tech leaders and researchers analyze text and content via smartphone apps, other innovators have begun analyzing offline speech, facial emotion, and even body movement to try to unlock valuable information about whether a person is at risk. In speech analytics, researchers look at things like pauses, speed, vocal quality, and other aspects of verbal communication, connect them to neuroscience, and then determine whether there are patterns that reveal a predisposition for a behavioral health issue.

With facial emotion analysis, researchers examine facial expressions “to determine suicidal thoughts based on how people react emotionally and physically to different stimuli” - an approach that, used in conjunction with “measures such as skin conductance and heart rate,” could provide insight into risk level. If applied accurately, this holds massive potential, given that suicidal ideation or other at-risk behaviors might never be verbalized or otherwise evident. Researchers have even experimented with wearable sensors attached to the body that collect real-time data and, when run through a predetermined algorithm and compared against clinical data, can assess an individual’s mental state and report results to appropriate parties. With any of these technologies, however, individuals must opt in and agree to participate, so the challenge of earning that consent remains despite the advancements.
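
For a sense of how wearable-sensor data might be run through such an algorithm, consider this hypothetical sketch; the sensor fields, summary features, cutoffs, and escalation step are invented stand-ins for a clinically calibrated model:

```python
# Hypothetical sketch: scoring wearable-sensor windows with a simple rule.
import statistics
from dataclasses import dataclass

@dataclass
class SensorWindow:
    heart_rate_bpm: list[float]       # samples over, say, a 5-minute window
    skin_conductance_us: list[float]  # microsiemens

def arousal_features(w: SensorWindow) -> dict[str, float]:
    """Summarize a window into features a clinical model might consume."""
    return {
        "hr_mean": statistics.mean(w.heart_rate_bpm),
        "hr_var": statistics.variance(w.heart_rate_bpm),
        "eda_mean": statistics.mean(w.skin_conductance_us),
    }

def flag_window(w: SensorWindow) -> bool:
    """Toy rule standing in for a validated, clinically calibrated algorithm."""
    f = arousal_features(w)
    return f["hr_mean"] > 110 and f["eda_mean"] > 8.0  # assumed cutoffs

window = SensorWindow(heart_rate_bpm=[118, 121, 115],
                      skin_conductance_us=[9.1, 8.7, 9.4])
if flag_window(window):
    print("report to the designated clinician")  # assumed escalation path
```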

With all of these innovations emerging as potential solutions for the challenges society currently faces in suicide prevention, much testing remains to be done on their efficacy. Still, the future of suicide prevention is ultimately a promising one - one in which the ideas of great tech leaders and entrepreneurs can support the needs of healthcare professionals, caretakers, and communities to ensure the safety and wellbeing of those who might be at risk for suicide. These advancements might be in their early stages or have yet to be widely adopted, but they offer a glimpse into a world where detection, treatment, and continued care are no longer limited by resources, geography, or cost, and are instead readily available and accessible to all who need them.