CompanionAIs
Understanding LLMs used for companionship or therapy and preventing harms
Large language models are increasingly being used for personal support, companionship or even as therapists. Recent reports suggest this may be one of the most popular uses of the technology among members of the general public.
However, there are substantial risks in LLMs being used this way, such as models failing to notice red flags or to suggest appropriate treatments, and models displaying harmful biases and stereotypes. There are also risks related to the development of attachments or perceived relationships with conversational systems, which are exacerbated by the tendency of such models to favour agreement, expressions of empathy and even sycophancy. Interacting with such models may have complex and unanticipated impacts on the humans who use them. Use of the technology by younger or more vulnerable individuals, such as those with mental health conditions, poses additional risks.
The Human-Centered Health AI group studies the use of LLMs for companionship and for therapeutic support, and is interested both in the human experience of such interactions and in how LLMs can be designed and deployed responsibly in order to mitigate and prevent harms.