Google has denied claims that it uses Gmail data to train its AI models, stating that its policies have not changed and that Gmail’s Smart Features have been available for years. The company says it does not use user emails to train its Gemini AI model.
The dispute centers on reports suggesting that Google uses the content of users’ emails to train its Gemini AI model, which has sparked concern among users and privacy advocates. Google has denied these claims, asserting that its policies remain unchanged and that the reports are misleading. The company notes that Gmail’s Smart Features, which include spell checking and similar tools, have existed for years and are separate from AI training. Google spokesperson Jenny Thomson stated that the company does not use user emails to train its AI models, calling the reports unfounded.
The controversy highlights growing concerns about data privacy and the ethical implications of AI development. It underscores the need for transparency in how companies handle user data and the risks users perceive in AI training practices. As discussions around data ethics continue, the debate over Google’s practices serves as a telling example of the broader challenge of balancing innovation with user privacy.