Google has denied reports that it uses Gmail messages to train AI models, calling the claims misleading. The company says its Gmail Smart Features have been available for years and that it does not use user email content to train its Gemini AI model.
Google spokesperson Jenny Thomson told The Verge that the reports are misleading, stressing that no user settings have changed and that Smart Features have existed for years. She also confirmed that Gmail content is not used to train the Gemini AI model, directly addressing concerns about data privacy and user trust.
The controversy stems from viral social media posts and articles, including one from Malwarebytes, claiming Google had quietly changed its policies to use emails for AI training. Google rejected these claims, noting that users who wish to can still disable Smart Features without losing core email functionality.
Google’s response is intended to reassure users about their privacy and data security amid growing public concern over how AI systems are trained. The response aligns with the company’s stated commitment to user privacy, though the incident underscores the ongoing debate over data ethics and the use of personal information in AI development.
The episode reflects broader industry tensions over data collection practices and transparency. As AI technologies evolve, similar controversies are likely to recur, putting pressure on companies to communicate clearly in order to maintain user trust and comply with data protection regulations.
The incident has also highlighted the importance of verifying claims that circulate on social media or in third-party analyses. Users are encouraged to consult companies’ official statements for accurate, up-to-date information on data practices and AI developments.