Wednesday, February 5, 2025

Google Ditches Pledge Not To Use AI For Weapons Or Surveillance

Google’s parent company Alphabet has redrafted the principles guiding its use of artificial intelligence (AI), doing away with a promise never to use the technology in ways “that are likely to cause overall harm”. Among the applications no longer ruled out are weaponizing AI and deploying it for surveillance.

The pledge to steer clear of such nefarious applications was made in 2018, when thousands of Google employees protested against the company’s decision to allow the Pentagon to use its algorithms to analyze military drone footage. In response, Alphabet declined to renew its contract with the US military and immediately announced four red lines that it vowed never to cross in its use of AI.

Publishing a set of principles, Google included a section entitled “AI applications we will not pursue”, under which it listed “technologies that cause or are likely to cause overall harm” as well as “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Surveillance and “technologies whose purpose contravenes widely accepted principles of international law and human rights” were also mentioned on the AI blacklist.

However, updating its principles earlier this week, Google scrapped this entire section from the guidelines, meaning the company no longer offers any assurance that it won’t use AI in ways that cause harm. Instead, the tech giant now offers a vague commitment to “developing and deploying models and applications where the likely overall benefits substantially outweigh the foreseeable risks.”

Addressing the policy change in a blog post, Google’s senior vice president James Manyika and Google DeepMind co-founder Demis Hassabis wrote that “since we first published our AI Principles in 2018, the technology has evolved rapidly” from a fringe research topic to a pervasive element of everyday life.

Citing a “global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” the pair say that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.” Among the applications they now envisage for AI are those that bolster national security – hence the backpedaling on previous guarantees not to use AI as a weapon.

With this in mind, Google says it now aims to use the technology to “help address humanity’s biggest challenges” and promote ways to “harness AI positively”, without stating exactly what this does and, more importantly, doesn’t entail.

In place of such specifics, the pair say that Google’s AI use will “stay consistent with widely accepted principles of international law and human rights,” and that they will “work together to create AI that protects people, promotes global growth, and supports national security.”
