Google reported on Thursday that state-backed hackers linked to North Korea have used its Gemini artificial intelligence model to research targets and support cyber operations. The disclosure highlights the growing weaponization of generative AI tools by advanced threat groups to accelerate attacks.
The company’s Threat Analysis Group attributed the activity to a hacking collective tracked as UNC2970, a group with ties to North Korea that employed Google’s Gemini AI to conduct reconnaissance on potential targets. The activity is part of a broader trend in which hacking groups are leveraging AI to speed up multiple phases of the cyberattack lifecycle.
How AI is Being Weaponized
According to Google, threat actors are using large language models like Gemini for several malicious purposes, including generating phishing emails, debugging malicious code, and researching software vulnerabilities and potential targets. These tools lower the barrier to entry for less skilled hackers and increase the efficiency of sophisticated groups.
Google stated that the use of its AI for these purposes violates its terms of service. The company has taken steps to disrupt such activities, including terminating accounts and implementing technical safeguards. However, the incident underscores the dual-use nature of powerful AI systems, which can be repurposed for harm despite built-in safety policies.
Broader Industry Concerns
The report from Google aligns with warnings from other cybersecurity firms and government agencies. In recent months, officials from the FBI and other national security bodies have cautioned that adversaries are experimenting with AI for malicious cyber activity. This includes not only reconnaissance but also social engineering and information operations.
Google’s findings specifically note that AI models are being used to enable information operations, which aim to manipulate public opinion. Furthermore, some advanced actors are attempting “model extraction attacks,” in which an attacker queries a model extensively in an effort to steal or replicate the underlying AI model itself.
Official Responses and Safeguards
In response to these threats, Google and other AI developers say they are continuously updating their models with safety mitigations, including restricting responses to queries on harmful topics and monitoring for suspicious usage patterns. The company emphasized that its safety filters blocked most policy-violating requests from the hackers.
Cybersecurity experts note that while filters can block direct requests for malicious code, determined actors can use more subtle, multi-step prompts to achieve their goals. This creates an ongoing cat-and-mouse game between AI developers and those seeking to abuse the technology.
Looking Ahead: The AI Security Landscape
The integration of AI into cyber offense is expected to continue evolving. Security analysts anticipate that state-sponsored groups will increasingly rely on these tools for target research, social engineering, and code generation, likely producing more personalized and convincing phishing campaigns at greater scale.
In response, the cybersecurity industry is developing AI-powered defensive tools to detect and respond to threats faster. Policymakers are also examining potential regulations to govern the use and export of powerful AI models. International discussions on establishing norms for state behavior in cyberspace are likely to include the role of artificial intelligence as a key topic.
Google has committed to sharing more detailed threat intelligence with the security community and to strengthening its AI safety protocols. The company’s report serves as a formal acknowledgment of the security challenges posed by the very technology it is helping to pioneer.
Source: Adapted from Google Threat Analysis Group disclosure