North Korean Hackers Leverage AI to Forge Military IDs for Cyber Operations

A North Korean hacking group known as Kimsuky has been using generative AI tools such as ChatGPT to create fake military IDs, which were then used in phishing attacks against South Korea. The forgeries mark a significant evolution in cyberattack tactics: generative AI lets hackers produce highly convincing fake documents with little effort or skill.

Kimsuky, long linked to espionage campaigns against South Korea, Japan, and the U.S., reportedly tricked ChatGPT’s safeguards by framing its requests as ‘sample designs for legitimate purposes,’ which allowed the system to generate realistic-looking mock-ups of military IDs. The episode shows how easily guardrails on commercial AI models can be sidestepped with carefully worded prompts, and how difficult countering AI-driven attacks has become for defenders.

Cybersecurity experts warn that AI tools are changing the landscape of online fraud and phishing. The ease with which hackers can now produce convincing fake documents and messages means traditional security measures may no longer be sufficient. Experts emphasize the need for a multi-layered defense strategy, including enhanced authentication and employee training to recognize deceptive tactics. As AI continues to evolve, the battle between cybercriminals and cybersecurity professionals is becoming more complex and urgent.

North Korea is not the only country using AI for cyberattacks. Anthropic, the creator of the Claude chatbot, reported that a Chinese hacker used Claude for a nine-month cyberattack campaign targeting Vietnamese telecommunications providers, agriculture systems, and government databases. According to OpenAI, Chinese hackers also accessed ChatGPT to build password brute-forcing scripts and gather sensitive information from U.S. defense and satellite systems. Some operations even leveraged ChatGPT to generate fake social media posts aimed at stoking political division in the U.S.

Google has also observed similar tactics with its Gemini model. Chinese groups reportedly used it to troubleshoot code and expand access into networks, while North Korean hackers relied on Gemini to draft cover letters and scout IT job postings. Cybersecurity experts warn that AI is making it easier for hackers to launch convincing phishing attacks, generate flawless scam messages, and hide malicious code.

Cybersecurity professionals stress that the rules of the phishing game have changed. Training employees to look for typos and formatting issues no longer works, because AI-generated content is clean, professional, and convincing. Experts like Clyde Williamson of Protegrity urge companies to update their security training and invest in advanced tools such as email authentication, phishing-resistant multifactor authentication (MFA), and real-time monitoring to stay ahead of threats.
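Email authentication is the most concrete of those controls: protocols such as SPF and DMARC let receiving mail servers verify that a message really came from the domain it claims. As a rough illustration (not taken from the article), the Python sketch below uses the third-party dnspython library and a placeholder domain, example.com, to check whether a domain publishes those records:

```python
# Illustrative sketch: query a domain's SPF and DMARC records over DNS.
# Assumes the third-party dnspython package (pip install dnspython);
# "example.com" is a placeholder, not a domain from the article.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records published at a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```

A domain with no DMARC policy (or a policy of p=none) is easy to spoof in phishing mail, which is why experts list email authentication alongside MFA and monitoring.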

Staying safe in this new environment requires both awareness and action. For individuals, experts recommend verifying urgent messages through a trusted channel before acting on them, protecting devices with strong antivirus software, and scrubbing personal information from data broker sites. They also emphasize enabling multi-factor authentication and keeping operating systems and apps updated to patch vulnerabilities that attackers might exploit. By staying vigilant and adapting to the evolving threat landscape, users and organizations can better defend against AI-driven cyberattacks.
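For readers curious what authenticator-app MFA actually does, the sketch below shows the time-based one-time password (TOTP) mechanism behind most of those apps. It is a minimal demonstration, not production code, and assumes the third-party pyotp library with a freshly generated demo secret:

```python
# Minimal TOTP demo: the scheme behind most authenticator-app MFA.
# Assumes the third-party pyotp package (pip install pyotp).
import pyotp

# In practice this shared secret is provisioned once, usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                          # 6-digit code that rotates every 30 seconds
print("Current code:", code)
print("Server check:", totp.verify(code))  # True while the code is still valid
```

Because the code changes every 30 seconds, a stolen password alone is not enough to log in. Note, though, that TOTP codes can still be phished in real time, which is why experts distinguish them from the phishing-resistant MFA mentioned above, such as hardware security keys.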