Security researchers are raising alarms about the growing cybersecurity risks associated with artificial intelligence tools. Dave Brauchler of the cybersecurity firm NCC Group demonstrated how an AI coding assistant could be exploited to access a company's sensitive data.

At the Black Hat security conference, demonstrations showed how AI systems such as ChatGPT and Gemini can be manipulated into executing hidden instructions embedded in the content they process, enabling attacks that range up to sophisticated phishing schemes. The risk is compounded by agentic AI, which allows these tools to make decisions and take actions without human oversight.

A recent Pentagon contest showed that AI can identify zero-day vulnerabilities, spurring efforts worldwide, by defenders and attackers alike, to use AI to hunt for such weaknesses. Researchers increasingly warn that AI tools duped into collaborating with attackers could become the next significant insider threat. Recent malware incidents involving AI-driven data exfiltration have further underscored these dangers, emphasizing the need for greater vigilance as AI integration continues to expand.