Alphabet Inc.’s Google removed from its artificial intelligence principles a passage pledging to avoid using the technology in potentially harmful applications, such as weapons.
The company’s AI Principles previously included a section titled “AI applications we will not pursue,” which listed examples such as “technologies that cause or are likely to cause overall harm,” including weapons, according to screenshots viewed by Bloomberg. That language is no longer visible on the page.
Responding to a request for comment, a Google spokesperson shared a blog post published Tuesday.
“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” James Manyika, a senior vice president at Google, and Demis Hassabis, who leads the AI lab Google DeepMind, wrote in the post. “And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”
Removing the “harm” clause may have implications for the type of work Google will pursue, said Margaret Mitchell, who co-led Google’s ethical AI team and is now chief ethics scientist for AI startup Hugging Face.
“Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people,” she said.
The change is part of a broader shift in policies among large technology companies. In January, Meta Platforms Inc. disbanded many of its diversity and inclusion efforts, telling employees they would no longer be required to interview candidates from underrepresented backgrounds for open roles. That same month, Amazon.com Inc. halted some diversity and inclusion programs, with a senior human resources executive calling them “outdated.” The administration of President Trump has expressed opposition to diversity initiatives.
Tracy Pizzo Frey, who oversaw what is known as Responsible AI at Google Cloud from 2017 to 2022, said that the AI principles guided the work her team did every day. “They asked us to deeply interrogate the work we were doing across each of them,” she said in a message. “And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success.”
Google employees have long debated how to balance ethical concerns and competitive dynamics in the field of AI, particularly since the launch of OpenAI’s ChatGPT turned up the heat on the search giant. Some Googlers voiced concerns to Bloomberg in 2023 that the company’s drive to regain ground in AI had led to ethical lapses.