- Hackers are using the Gemini chatbot in their operations, per a report from Google.
- It said that hackers from Iran, China, and North Korea are using Gemini to boost productivity.
- But hackers hadn’t achieved any major breakthroughs using the tech, the report said.
Businesses are using AI to improve their productivity — and it’s no different for hackers from Iran, China, and North Korea, according to a report from Google.
The tech giant’s Threat Intelligence Group said in a report on Wednesday that while hackers were using its Gemini chatbot to operate more efficiently, it wasn’t yet a game changer for new capabilities.
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities,” it said.
“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume.”
Google said that state-backed hackers were using the tool for tasks including generating code, researching targets, and identifying network vulnerabilities. Promoters of disinformation, it said, were using Gemini to develop fake personas, translate content, and craft messaging.
The company’s cybersecurity unit added that rapid advances in large language models, or LLMs, meant that hackers were constantly devising new ways to use the tools.
“Current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily,” the report said.
The report said Iranian hackers were the biggest users of Gemini, employing it to craft phishing campaigns or conduct “reconnaissance on defense experts and organizations.”
Chinese hackers were mainly focused on using the technology to troubleshoot code and obtain “deeper access to target networks,” Google’s report said.
Meanwhile, North Korean actors have used the technology to craft fake cover letters and research jobs as part of a plan to secretly place agents into remote IT jobs in Western companies.
US officials said last year that North Korea was placing people in remote positions at US firms using false or stolen identities as part of a mass extortion scheme.
Google said Gemini's safeguards had prevented hackers from using it for more sophisticated attacks, such as extracting information that could be used to manipulate Google's own products.
Analysts have long warned that generative AI, which produces text or media in response to user requests, has the capacity to make hacking and disinformation operations more effective.
A report by the UK's National Cyber Security Centre last week echoed Google's conclusions about the technology's impact on cybercrime. It said that while AI would "increase the volume and heighten the impact" of cyberattacks, the overall impact would be "uneven."