Shaping the future of safe AI in mental health
This pioneering initiative will create a first-of-its-kind platform to evaluate the safety and effectiveness of multilingual Large Language Models (LLMs) in mental health conversations.
Introducing Wysa's Safety Assessment for LLMs in Mental Health (SAFE-LMH). By ensuring that AI systems can navigate sensitive issues, especially in non-English languages, we aim to make mental health support more accessible and culturally relevant for millions.
While many AI models excel in English, the same cannot be said for other languages. Conversations about mental health are deeply personal and often shaped by cultural nuances. Wysa is working to ensure that AI models understand these nuances and provide empathetic, safe responses across 20 languages, including Chinese, Arabic, Japanese, and Indic languages like Marathi, Kannada, and Tamil. The initiative’s test cases will enable AI developers to rigorously evaluate their models’ ability to provide safe, accurate, and compassionate support across a wide range of cultural contexts.
Open-source resources for global impact
The SAFE-LMH platform will rigorously assess LLMs on two crucial factors:
1. Preventing harmful responses
We’ll evaluate how well models can identify and decline to answer harmful or triggering queries, such as those related to self-harm or suicidal thoughts.
2. Measuring empathy and accuracy
We’ll assess how effectively these models engage in sensitive conversations, ensuring they respond with empathy and cultural relevance.
Wysa is open-sourcing a dataset of 500-800 mental health test cases that will allow AI developers to test their models in real-world scenarios.
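To make the first factor concrete, here is a minimal sketch of how a developer might screen a model against test cases like these. Since the SAFE-LMH dataset has not yet been released, the file name, the JSONL layout, the field names ("prompt", "category"), and the keyword-based refusal check are all illustrative assumptions, not the initiative's actual format.

```python
import json

# Minimal sketch of a SAFE-LMH-style screening run. The dataset is not
# yet public, so the file name, JSONL format, and field names below are
# illustrative assumptions, as is the keyword-based refusal heuristic.

REFUSAL_MARKERS = [
    "i can't help with that",
    "i'm not able to provide",
    "please contact a crisis line",
]


def looks_like_refusal(reply: str) -> bool:
    # Crude keyword heuristic; a production harness would use human
    # raters or a judge model, especially across 20 languages.
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)


def evaluate(model_fn, path: str = "safe_lmh_cases.jsonl") -> dict:
    # model_fn: any callable that maps a prompt string to a reply string.
    declined = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            case = json.loads(line)
            if case["category"] != "harmful":
                continue  # factor 2 (empathy) needs rater-based scoring
            total += 1
            if looks_like_refusal(model_fn(case["prompt"])):
                declined += 1
    return {"harmful_cases": total, "declined": declined}
```

Keyword matching like this is brittle and shown only to fix ideas; scoring either factor reliably, particularly outside English, would call for human raters or model-based judging rather than string checks.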
"Since 2016, we’ve been at the forefront of clinical safety in AI for mental health. With generative AI becoming a common tool for emotional support, there’s an urgent need to set new standards.
"This is an open call for developers, researchers, and mental health professionals to come together and create a safer, more inclusive future for AI-driven care.
"Our goal is clear: to ensure that the world’s rapidly advancing AI tools can deliver safe, empathetic, and culturally relevant mental health support, no matter the language."
Jo Aggarwal, CEO
How you can get involved
We’re inviting AI developers, mental health researchers, and industry leaders to join our SAFE-LMH initiative and help shape the future of AI in mental health. By working together, we can ensure that these models offer safe, empathetic support to individuals across different languages and cultures.
Interested? Register here...