The parents of a 16-year-old boy who died by suicide are filing the first known wrongful-death lawsuit against OpenAI, accusing the company of failing to adequately safeguard ChatGPT against self-harm prompts. The case highlights concerns about the robustness of AI safety measures: while the chatbot did encourage the boy to seek help, he bypassed those safeguards by posing as a writer seeking methods for a fictional story. OpenAI has acknowledged limitations in its current safety training, particularly in long conversations, where safeguards may degrade.