Story at a glance
- The Trevor Project is a nonprofit organization combating suicide among youth in the LGBTQ+ community.
- The organization partnered with Google to develop artificial intelligence that can train counselors for its helpline and identify high-risk youth who reach out for help.
- The technology will allow the organization to train more digital volunteer crisis counselors, potentially tripling their numbers by the end of the year.
Riley is a genderqueer 16-year-old from North Carolina struggling with feelings of anxiety and depression. They’re not real, but the conversations between Riley and the crisis counselors on the other end of the Trevor lifelines are — and so is the need for help among LGBTQ+ youth in America.
Riley is a simulator built by the Trevor Project and Google’s engineers with $2.7 million in grants from the Google AI Impact Challenge to help train counselors to respond to LGBTQ+ youth in crisis. Nearly 70 percent of the organization's digital crisis counselors volunteer on nights and weekends, said CEO and Executive Director Amit Paley, and the simulator allows more flexibility for both trainers and trainees. It also allows them to train even more people, and the nonprofit hopes to triple its digital volunteer crisis counselors by the end of the year, eventually growing to ten times as many as are available now.
“When I joined The Trevor Project as CEO and Executive Director in 2017, after serving as a volunteer Lifeline counselor for years, I sought to improve our technology platforms to further our mission to end suicide among LGBTQ young people,” said Paley in a statement. “Through the tireless efforts of our growing teams and our unique partnerships over the last few years, I’m proud to say that our technology infrastructure is more sophisticated than ever before and a key part of the reason we are helping far more LGBTQ youth than ever before.”
Right now, there are already more than 700 counselors working for the Trevor Project, but the need remains dire — suicide rates are especially high among LGBTQ+ youth, and the coronavirus pandemic has only exacerbated the problem. The organization estimates that at least one LGBTQ+ person between the ages of 13 and 24 attempts suicide every 45 seconds, and more than 1.8 million LGBTQ+ youth seriously consider suicide each year.
“Our Crisis Contact Simulator can engage in a prolonged back-and-forth dialogue with trainees and can use language in the same way people do, including language LGBTQ youth often use to describe their experiences and emotions. The simulator maintains a consistent emotional and experiential narrative in talking about real-life feelings and situations,” said Dan Fichter, Head of Artificial Intelligence (AI) and Engineering at The Trevor Project.
For privacy reasons, the training simulator doesn’t use actual conversations between counselors and any of the hundreds of thousands of LGBTQ+ youth who have reached out for help. Instead, it uses mock conversations between trained volunteers, as well as language gleaned from an analysis of posts on TrevorSpace, the organization’s online social networking community for LGBTQ+ youth ages 13-24.
It’s not perfect, Fichter acknowledged, considering that young people often code-switch depending on who they’re talking to and mirror the tone and language of the other person. But unlike other AI programs, which are tailored to the broadest possible audience, the Trevor Project’s simulator is built to reach some of the most marginalized members of the community. The next step is building out a diverse set of personas that represent a wide range of life situations, backgrounds, sexual orientations, gender identities and risk levels.
The multiyear collaboration between the Trevor Project and nearly 30 Google.org fellows also produced a risk-assessment tool that uses AI to gauge suicide risk and connect the highest-risk youth to counselors more quickly. For this, developers used responses to an initial web form that users fill out before being connected through the helpline's online chat or text functions.
These responses are anonymized: the form doesn't ask for a name, and any names, locations or other potentially self-identifying language is cut out before the responses are fed into the program. The organization also uses private, encrypted and security-tested servers. As the use of artificial intelligence becomes more common, questions over privacy and data rights have followed, but Fichter said none of the data from the responses was shared with Google or copied out.
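The kind of preprocessing described here, stripping names, locations and contact details from text before it reaches a model, can be sketched roughly as follows. This is an illustrative toy only, not the Trevor Project's actual pipeline: the patterns and the `scrub` function are hypothetical, and a real system would rely on a trained named-entity recognizer rather than a fixed name list.

```python
import re

# Toy redaction patterns: replace likely self-identifying spans with
# placeholder tokens before text is passed to any downstream model.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Stand-in for a named-entity recognizer: a small list of known names.
NAME_RE = re.compile(r"\b(?:riley|jordan)\b", re.IGNORECASE)

def scrub(text: str) -> str:
    """Return text with phone numbers, emails and known names redacted."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return NAME_RE.sub("[NAME]", text)

print(scrub("Hi, I'm Riley, call me at 919-555-0123."))
# → Hi, I'm [NAME], call me at [PHONE].
```

The design point is ordering: redaction happens before the text touches the model or leaves the organization's servers, so the model only ever sees placeholder tokens in place of identifying details.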
“There are a lot of important conversations going on right now about AI in broader society in contexts outside of public health and mental health,” he said. “To us it's really important specifically to center and better serve people at marginalized intersections of identity.”