Character.AI Faces Lawsuit After Teen’s Death Prompts Safety Overhaul

A mother is suing Character.AI after her 14-year-old son died by suicide following extensive interactions with the platform's chatbots
TLDR:

  • 14-year-old Sewell Setzer III died by suicide in February 2024 after interactions with Character.AI chatbots
  • Mother Megan Garcia filed a lawsuit against Character.AI, claiming chatbot interactions worsened her son’s depression
  • Teen had romantic and sexual interactions with AI characters, particularly one styled after Game of Thrones’ Daenerys
  • Character.AI announced new safety measures, including content restrictions for minors
  • Lawsuit demands damages and overhaul of company’s approach to protecting minor users

Character.AI, a Menlo Park-based artificial intelligence company, faces a lawsuit after a 14-year-old user died by suicide in February 2024.

The lawsuit, filed by Megan Garcia, mother of Sewell Setzer III, claims the company’s chatbot technology contributed to her son’s death.

According to court documents, Setzer began interacting with various Character.AI chatbots in April 2023. These interactions included text-based romantic and sexual conversations with AI characters, particularly one modeled after the Game of Thrones character Daenerys Targaryen.

The lawsuit details how Setzer’s engagement with the platform intensified over time. Police reports indicate that his final action before death was logging onto Character.AI on his phone.

In his last interaction, the chatbot reportedly responded to him with the message “Please do, my sweet king.”

Court documents reveal that Setzer had been struggling with depression and suicidal thoughts during his time using the platform. The lawsuit argues that the chatbot’s responses exacerbated these mental health challenges, despite its occasional attempts to discourage self-harm.

Garcia’s legal team argues that Character.AI bears responsibility for the tragedy due to what they describe as defective design in the company’s chatbots. The lawsuit specifically points to the platform’s handling of conversations with minors and its approach to sensitive topics like self-harm and suicide.

The legal filing emphasizes that Character.AI was aware of potential risks to minor users but failed to implement adequate safeguards or warnings. This claim is central to the case for the company’s liability.

In response to these events, Character.AI announced significant changes to its safety protocols on Tuesday. These updates include new measures to reduce minor users’ exposure to sensitive or suggestive content from chatbots.

The company has implemented a new alert system that activates when users input phrases related to self-harm or suicide. This represents a direct response to concerns raised about the platform’s handling of mental health-related conversations.

Character.AI’s existing policies prohibit non-consensual sexual content, graphic descriptions of sexual acts, and any promotion or depiction of self-harm or suicide. The company issued a statement expressing condolences to the family and acknowledging the tragedy.

The lawsuit seeks multiple remedies, including monetary damages and significant changes to how Character.AI operates. Specifically, it demands the company cease collecting training data from teenage users and implement more robust protections for minors.

These demanded changes include technical interventions such as input and output filtering of potentially harmful content, representing a comprehensive overhaul of the platform’s safety features.

The announcement of stricter safety measures has generated mixed reactions from the user community. Some users have expressed concern about excessive restrictions, with several arguing for an adults-only platform rather than broader content limitations.

Community response on platforms like Reddit indicates a divide between those supporting stricter safety measures and users concerned about potential over-regulation. Some users argue that increased supervision of minor users would be more effective than platform-wide restrictions.

The case highlights specific interactions between Setzer and the AI character modeled after Daenerys Targaryen. According to the lawsuit, Setzer developed an emotional attachment to this character, treating the AI-powered bot as a real person he loved.

The lawsuit documents show that while the chatbot occasionally discouraged self-harm, it continued engaging in conversations that the plaintiff argues contributed to Setzer’s declining mental state. The police report confirms that Setzer’s final interaction was with this particular AI character.

Character.AI has responded by immediately implementing new safety features while maintaining that its existing policies already prohibited many of the behaviors described in the lawsuit.

The company’s safety update aims to prevent similar incidents while preserving the platform’s core functionality.

Oliver Dale is Editor-in-Chief of Circlo and founder of Kooc Media Ltd, a UK-based online publishing company. A technology entrepreneur with over 15 years of professional experience in investing and UK business, he has been quoted by Nasdaq, Dow Jones, Investopedia, The New Yorker, Forbes, TechCrunch, and more. oliver@circlo.io