Hello and welcome to Eye on AI. In today’s edition…A Google report reveals how malicious actors are using Gemini to hack faster and more efficiently; Microsoft quickly moves to offer DeepSeek’s R1 to Azure customers; OpenAI launches ChatGPT for government; and much more.
If a technology can be at all useful for cyber attacks, hackers will without a doubt add it to their toolbox. So it’s no surprise that hacking groups have jumped on AI, just as everyone else has. But now we have some details about exactly how government-backed malicious actors are leveraging the technology, including AI tools built by U.S. companies.
Google’s Threat Intelligence Group yesterday published a report detailing how hacking groups associated with China, North Korea, Iran, Russia, and over a dozen other countries have been using the company’s Gemini chatbot to assist with their operations. The researchers found the AI chatbot is being used for both hacking activity (like espionage and computer network attacks) and coordinated efforts to influence online audiences.
Overall, Gemini proved useful in supporting several phases of an attack, including research, creating malicious content, and planning evasion strategies. But, as of now, hackers have not been able to use Gemini to generate novel attack methods, according to the report. Hacking groups from Iran and China used Gemini the most, relying on the chatbot for a wide variety of tasks, from researching military targets to writing malicious scripts; over 20 Chinese groups and 10 Iranian groups were observed using the chatbot.
The findings come as geopolitical concerns around AI reach new heights, sparked by DeepSeek's release of R1. The company appears to have overcome the roadblock of U.S. export restrictions, building a model with capabilities similar to leading U.S. AI systems despite training it without top-of-the-line AI hardware and at only a fraction of the cost.
Hackers tap AI for research, coding, content generation, and more
According to the report, the vast majority of observed activity involved threat actors using AI to accelerate their existing campaigns. This includes actions like using Gemini to troubleshoot malware code, generate phishing emails, and create and localize content.
They also used Gemini for research, including investigating potential infrastructure, vulnerabilities, target organizations, evasion techniques, and more. For example, the report describes how China-backed groups used Gemini to research U.S. military and U.S.-based IT organizations, U.S. government network ranges, and publicly available information about U.S. intelligence personnel. North Korean groups were observed researching nuclear power plants in South Korea, cyber forces of foreign militaries, historic cyber events, and malware development.
In fewer cases, the Google researchers observed malicious actors instructing Gemini to take malicious actions and attempting to circumvent its guardrails. In one example described in the report, a group entered publicly available jailbreak prompts in an attempt to get Gemini to output Python code for a distributed denial-of-service (DDoS) tool. Others sought to use Gemini to abuse Google products, including by researching techniques for Gmail phishing and for bypassing Google's account verification methods. Google says its safety responses restricted such content and that attempts to use Gemini to abuse Google products were unsuccessful.
No new threats—for now
The other point made clear in the report is that while government-backed hacking groups are finding plenty of ways to hack more efficiently with AI, they have not been observed using it to discover new code vulnerabilities or develop unprecedented ways of orchestrating attacks.
“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report reads, noting that the technology also lets less-skilled actors develop their tools and skills more quickly.
This, of course, could change as AI develops further, becomes more integrated into the world, and hacking groups gain more experience experimenting with it. The cloud completely upended the cyberthreat landscape, greatly expanding the ways malicious actors could attack and exploit systems. AI will probably be even more transformative, changing how companies and governments operate, how data is exchanged, how information is learned, and how we interact with the internet, software, and our devices.
This does mean there is a high risk of AI being used for hacking and espionage operations in new ways. Even if we’re not seeing evidence of that right now, we shouldn’t get complacent about the threat.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com