Monday, December 23, 2024

Google Researchers Reveal The Myriad Ways Malicious Actors Are Misusing Generative AI


One of the great fears of modern times is that generative AI systems are giving malicious actors unprecedented power to lie, manipulate and steal on a scale previously unimaginable and that this will undermine our systems of trust, democracy and society.

Examples abound, from election interference to the mass production of fake reviews. Indeed, it’s easy to imagine that these are just a small fraction of the insidious endeavors currently undermining our way of life.

The truth, of course, is more nuanced. But it raises the broader question of how to better understand these malicious AI techniques: where they are being applied, by whom, on what scale and for what purpose.

Now we get an answer of sorts thanks to the work of Nahema Marchal at Google DeepMind and Rachel Xu at Google Jigsaw and colleagues, who have studied the misuse of generative AI and the way it has evolved in the last couple of years. Their approach has revealed a wide variety of malicious activities that they have categorized. “We illuminate key and novel patterns in misuse during this time period, including potential motivations, strategies, and how attackers leverage and abuse system capabilities,” they say.

Emergent Communication

In the process, they have also uncovered certain kinds of activity that sit at the boundary between acceptable and unacceptable use of AI. “These include the emergence of new forms of communications for political outreach, self-promotion and advocacy that blur the lines between authenticity and deception,” say the team.

Their approach is surprisingly straightforward. Marchal, Xu and co analyze over 200 media reports of the abuse or misuse of AI systems published between January 2023 and March 2024. They then categorize the types and patterns of reported abuse to create a taxonomy of tactics that malicious actors employ in their work.

The types of abuse fall into two broad categories: those exploiting generative AI systems and those attempting to compromise those same systems to reveal protected information or to perform otherwise prohibited tasks, say the researchers.

They then further subdivide these categories. The first and most common way of exploiting generative AI involves realistically depicting human likenesses for tasks such as impersonation, the creation of synthetic personas and the production of non-consensual sexual imagery. “The most prevalent cluster of tactics involve the manipulation of human likeness, especially Impersonation,” say Marchal, Xu and co.

One example is a story run on PBS News about AI robocalls attempting to suppress voting in New Hampshire by impersonating President Biden.

The second category involves the realistic depiction of non-human objects and includes falsifying documents like identity papers as well as creating counterfeits designed to pass as the real thing.

The final category, they say, focuses on the mechanisms of content production. This includes automating workflows, producing content at vast scale and targeting specific individuals. In one example, researchers used ChatGPT to mass-email legislators to raise awareness of AI-generated emails.

Despite the wide variety of abusive applications, Marchal, Xu and co conclude that most employ easily accessible generative AI capabilities rather than technologically sophisticated ones.

Perhaps most interesting is the emergence of new forms of communication that blur the boundaries of what is and isn’t acceptable use of generative AI. For example, during recent elections in India, political avatars emerged that addressed individual voters by name in whatever language they spoke, and various politicians used deepfakes of themselves to spread their message more widely, but also to portray themselves in a more positive light.

Few of these campaigns clearly acknowledged the way generative AI was used. “GenAI-powered political image cultivation and advocacy without appropriate disclosure undermines public trust by making it difficult to distinguish between genuine and manufactured portrayals,” say the researchers. “We are already seeing cases of liar’s dividend, where high profile individuals are able to explain away unfavorable evidence as AI-generated.”

Amplifying Monetization

Beyond efforts to impersonate humans and exert improper influence, the most common goal for malicious users of generative AI is to make money. Examples include the mass generation of low-quality articles, books and adverts designed to attract eyeballs and generate advertising revenue.

The production of non-consensual sexual imagery is also an active area of commercial activity, for example, the “nudification” of women as a paid-for service.

Of course, the research has some limitations that the researchers are keen to highlight. It is based entirely on media reports of malicious online activity, an approach that can introduce bias. The media tends to focus on the most outrageous examples, which may overestimate certain kinds of sensational activity while underestimating other activity that is less headline-grabbing but equally insidious.

But Marchal, Xu and co make an important start on studying the ecosystem of malicious uses of generative AI. Their work raises important questions about the far-reaching consequences of this activity and how it is altering the nature of communication and of society itself.

The team do not attempt to characterize the rate of change, but it is not hard to imagine how the impact of these activities could grow exponentially. Humans are not good at imagining the consequences of exponential change, which makes this even more of an issue of significant public concern.

“These findings underscore the need for a multi-faceted approach to mitigating GenAI misuse, involving collaboration between policymakers, researchers, industry leaders, and civil society,” conclude Marchal, Xu and co. The sooner the better.


Ref: Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data: arxiv.org/abs/2406.13843
