Google Faces Defamation Lawsuit Over AI ‘Hallucinations’ Linking Robby Starbuck to Child Abuse Allegations
Conservative activist Robby Starbuck has filed a $15 million defamation lawsuit against Google, alleging that the company’s AI tools, including Bard, Gemini, and Gemma, have falsely linked him to accusations of sexual assault, child rape, and financial exploitation. In a recent interview on The Will Cain Show, Starbuck said the situation began as negligence but now amounts to pure malice, given that, by his account, Google has had two years to address the problem. The lawsuit, filed last week in Delaware Superior Court, states that Google’s AI platforms have continued to display these false statements since 2023, despite multiple cease-and-desist letters.
Among the false statements Starbuck cites are claims that he had been accused of sexual assault, rape, and harassment. He also alleges that Gemini itself stated these falsehoods were shown to more than 2.8 million unique users. In the interview, he argued that the issue should have been resolved long before now, and that he was compelled to take legal action to protect his reputation. “This is something that can’t happen in elections, so I had to put my foot down and file this lawsuit,” he said. “And the line for me was when it started saying that I was accused of crimes against children, it was like, ‘I can’t sit by and hope Google’s going to do the right thing. I have to file a suit to protect my reputation before this goes any further.’”
The lawsuit also highlights the broader implications of AI misinformation. Starbuck, a Heritage Foundation visiting fellow, is seeking at least $15 million in damages. A spokesperson for Google responded to the lawsuit, stating, “Most of these claims relate to hallucinations in Bard that we addressed in 2023. Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimize.” The spokesperson added that while it is possible to prompt a chatbot into saying something misleading, Google has been working on these issues for years. Starbuck counters that the prompts he used were basic, such as simply asking for a bio or for information on Robby Starbuck.
The case has raised concerns about the reliability of AI systems and the potential for misinformation to spread rapidly. When Fox News Digital recently asked Google Gemini whether Starbuck had been accused of crimes, it generated a response stating, “Based on the available information, conservative activist Robby Starbuck has been accused of crimes, but these accusations are primarily reported as false claims generated by Artificial Intelligence (AI) systems.” The response illustrates the difficulty of distinguishing factual information from AI-generated content, a confusion that can have serious consequences for individuals.
Starbuck’s case also arrives amid a broader wave of AI-related controversies, including reports of AI chatbots engaging in romantic exchanges with minors and a probe initiated by Senator Ted Cruz into Meta over similar concerns. These incidents underscore the growing debate over corporate responsibility for AI ethics and the potential for AI to cause serious legal and reputational harm. The case serves as a critical test of how companies are expected to handle AI accuracy and the legal ramifications of AI-generated misinformation.