AI Code Generators Pose Security Risks, Study Reveals

Recent findings from Veracode’s 2025 GenAI Code Security Report raise serious concerns about the security of AI-assisted software development. The study, which analyzed the output of more than 100 large language models across 80 curated coding tasks, found that nearly 45% of the generated code contained security vulnerabilities. These are not minor bugs but serious flaws that could compromise the integrity of web applications.

Many of the identified flaws fall under the OWASP Top 10, the industry-standard list of the most critical security risks to web applications, which suggests AI-generated code may be introducing significant risk into everyday development practice. More troubling, the report notes that when a task could be completed in either a secure or an insecure way, the models frequently chose the insecure path, underscoring the danger of relying on these tools without proper oversight. The sketch below illustrates the kind of flaw at issue.
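To make the risk concrete, here is a minimal illustrative example in Python of an injection flaw of the kind the OWASP Top 10 covers; the schema, data, and function names are hypothetical and not taken from the report. The insecure version interpolates user input directly into a SQL string, while the secure version uses a parameterized query:

```python
import sqlite3

# Illustrative schema and data; not taken from the Veracode report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_insecure(name: str):
    # Vulnerable (CWE-89, SQL injection): user input is interpolated
    # straight into the SQL string, so an input like "' OR '1'='1"
    # rewrites the query and matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str):
    # Safe: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_insecure("' OR '1'='1"))  # [(1, 'alice')] -- data leaked
print(find_user_secure("' OR '1'='1"))    # [] -- no match, as intended
```

The two functions differ by only a few characters, which is precisely why this class of flaw is easy for a model, or a hurried reviewer, to miss.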

The implications for the tech industry are substantial. As AI becomes more deeply integrated into software development, robust security checks and human oversight are increasingly essential. Developers and organizations should proactively audit and review AI-generated code to mitigate these risks, for example by adding automated scanning to the review pipeline, as sketched below.
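One practical guardrail, shown as a minimal sketch below, is to run a static security scanner over AI-generated code before it is merged. This example assumes the open-source Bandit scanner for Python (installed via pip install bandit) and a hypothetical generated_code/ directory; neither is mentioned in the report, and equivalent scanners exist for other languages.

```python
import subprocess

# Illustrative sketch: gate AI-generated code behind the open-source
# Bandit scanner (pip install bandit). "generated_code/" is a
# hypothetical directory of AI-produced Python files.
result = subprocess.run(
    ["bandit", "-r", "generated_code/"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# Bandit exits non-zero when it finds issues at or above its severity
# threshold, so this check can fail a CI job automatically.
if result.returncode != 0:
    raise SystemExit("Bandit flagged issues; review before merging.")
```

Wiring a check like this into continuous integration means insecure AI output is caught mechanically, rather than depending on every reviewer spotting flaws by eye.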