Google quietly became more evil this past week.
The company has rewritten its AI responsibility pledge and no longer promises to keep its AI out of dangerous technology. Prior versions of Google’s AI Principles promised the company wouldn’t develop AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” or “technologies that gather or use information for surveillance violating internationally accepted norms.” Those promises are now gone.
If you’re not great at deciphering public relations technobabble, that means making AI for weapons and spy “stuff.” It suggests that Google is willing to develop, or aid in the development of, software that could be used for war. Instead of Gemini just drawing pictures of AI-powered death robots, it could be used to help build them.
This is a slow but steady shift from just a few years ago. In 2018, the company declined to renew its “Project Maven” contract with the government, which used AI to analyze drone surveillance footage, and declined to bid on a Pentagon cloud contract because it wasn’t sure the work could align with the company’s AI principles and ethics.
Then in 2022, it came to light that Google’s participation in “Project Nimbus” left some executives at the company concerned that “Google Cloud services could be used for, or linked to, the facilitation of human rights violations.” Google’s response was to bar employees from discussing political conflicts like the one in Palestine.
That didn’t go well, leading to protests, mass firings, and further policy changes. In 2025, Google isn’t shying away from the warfare potential of its cloud AI.
This isn’t too surprising. There’s plenty of money to be made working for the Pentagon, and executives and shareholders really like plenty of money. However, there’s also the more sinister notion that we’re in an AI arms race and have to win it.
Demis Hassabis, CEO of Google DeepMind, says in a blog post that “democracies should lead in AI development.” That’s not a dangerous idea on its own, until you read it alongside comments from Palantir CTO Shyam Sankar, who says an AI arms race must be a “whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win.”
Together, these ideas can bring us to the brink of World War III. A winner-take-all AI arms race between the U.S. and China seems good only for the well-protected leaders of the winning side.
We all knew that AI would eventually be used this way. When we joked about the Rise of the Machines, we were only half-joking, knowing there’s a real possibility that AI becomes some kind of super soldier that never needs to sleep or eat, stopping only to swap its battery and reload its ammunition. Today’s video game premise can become tomorrow’s reality.
And there isn’t a damn thing we can do about it. We could stop using all of Google’s (and Nvidia’s, Tesla’s, Amazon’s, and Microsoft’s … you get the idea) products and services to protest and force a change. That might have an impact, but it’s not a solution. If Google stops, another company will take its place and hire the same people, because it can offer more money. Or Google could simply stop making consumer products altogether and devote more time to very lucrative DoD contracts.
Technology is supposed to make the world a better place; that’s what we’re promised. Nobody ever talks about the evils and carnage it also enables. Let’s hope someone in charge values the betterment of mankind more than the money.