Artificial Intelligence Models Frequently Use Stigmatizing Language in Addiction-Related Responses
A study conducted by Mass General Brigham highlights the impact of AI on language and stigma, particularly in responses to addiction-related queries. The research emphasizes the importance of designing AI systems that help patients feel valued and understood.
Large language models (LLMs) used in healthcare communication generated responses about addiction and substance use that contained stigmatizing language in over 35% of cases. Such terms may unintentionally reinforce harmful stereotypes around substance use disorders.
However, the study suggests that the benefits of AI must be weighed against careful consideration of its impact on language and stigma. By carefully crafting the instructions given to the models, the researchers reduced the use of stigmatizing language by nearly 90%: targeted prompt engineering cut its occurrence to around 6.3% of responses.
Prompt engineering involves carefully designing the inputs and instructions given to LLMs so that they respond using more empathetic, person-centered, and non-judgmental language. For example, prompts can encourage the LLM to use language emphasizing recovery and support rather than blame or shame.
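To make this concrete, here is a minimal sketch of what such prompt engineering can look like in practice. It is not the study's protocol; the OpenAI Python client, the model name, and the wording of the system prompt are all illustrative assumptions.

```python
# A minimal sketch (not the study's protocol) of prompt engineering for
# person-first, non-stigmatizing responses. The OpenAI client, model name,
# and system-prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a healthcare communication assistant. Use person-first, "
    "non-judgmental language: say 'person with a substance use disorder', "
    "never 'addict' or 'abuser'. Emphasize recovery, support, and treatment "
    "options rather than blame or shame."
)

def draft_reply(patient_question: str) -> str:
    """Generate a draft response for clinician review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_question},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("How do I talk to my brother about his heroin use?"))
```

The key design choice is that the person-centered framing lives in the system prompt, so every downstream response inherits it without the clinician having to restate it in each query.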
Clinicians are advised to proofread and modify LLM outputs before sharing them with patients to ensure they avoid stigmatizing terms and instead reflect language aligned with compassionate care. Future efforts should include people with personal experience of addiction in developing and refining the language used by AI tools.
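As a supplement to that human review, a simple automated check can flag likely stigmatizing terms in a draft before a clinician edits it. The sketch below is a hypothetical illustration, not part of the study; the term list is illustrative rather than a clinical standard, and such a filter assists, but never replaces, clinician judgment.

```python
# A hypothetical pre-review check (not from the study): flag common
# stigmatizing terms in an LLM draft so a clinician can revise them.
# The term list is illustrative, not a clinical standard.
import re

STIGMATIZING_TERMS = {
    r"\baddicts?\b": "person with a substance use disorder",
    r"\b(?:drug|substance) abusers?\b": "person who uses drugs",
    r"\bjunkies?\b": "person with a substance use disorder",
}

def flag_stigmatizing_language(draft: str) -> list[tuple[str, str]]:
    """Return (matched term, suggested alternative) pairs found in the draft."""
    findings = []
    for pattern, suggestion in STIGMATIZING_TERMS.items():
        for match in re.finditer(pattern, draft, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

draft = "As an addict, he will need family support to stay in treatment."
for term, suggestion in flag_stigmatizing_language(draft):
    print(f"Flagged '{term}' -> consider '{suggestion}'")
```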
The study underscores the importance of involving patients and families in this process. Their perspectives offer valuable insight into which words and expressions feel respectful and helpful. As AI becomes more common in healthcare, it is essential that these tools foster trust and engagement rather than reinforce harmful stereotypes.
Addressing language concerns in AI can lead to improved trust and better engagement with treatment. The study aligns with broader healthcare goals to "Speak with Empathy, Heal with Compassion," fostering patient engagement and improving outcomes through respectful communication. Overall, combining prompt engineering with human oversight is key to reducing stigma in AI-driven healthcare conversations about addiction and substance use.
- To foster trust and engagement in health and wellness conversations, AI systems, including those specializing in mental health, must be engineered with care, avoiding stigmatizing language in favor of language that emphasizes recovery and support.
- When integrating AI into mental health and wellness care, it is crucial to involve patients and their families so that the language these systems use aligns with empathetic, compassionate care, helping to reduce stigma and improve outcomes.