Google, OpenAI among tech giants to create coalition for safe AI

Founding sponsors of the industry-led initiative launched yesterday include Google, IBM, Intel, Microsoft, Amazon, Anthropic, OpenAI and Wiz.

Some of the biggest names in tech, including Google, OpenAI, Microsoft and Amazon, have announced a new coalition focused on developing safety standards for AI tools.

The Coalition for Secure AI was unveiled at the Aspen Security Forum yesterday (18 July). The open-source initiative will give developers the guidance and tools needed to design safe, cybersecure AI systems using a standardised framework.

The goal of the coalition, hosted by the global standards body OASIS Open, is to enhance trust and security in the use and deployment of AI by addressing its “unique risks”.

David LaBianca of Google said that establishing the coalition was rooted in the “necessity of democratising” the knowledge and advancements essential for the secure integration and deployment of AI.

“With the help of OASIS Open, we’re looking forward to continuing this work and collaboration among leading companies, experts and academia,” said LaBianca, who is co-chair of the coalition’s governing board.

Omar Santos of Cisco, who is the other co-chair, said that the coalition’s goal is to “eliminate redundancy” and “amplify our collective impact” through key partnerships that focus on critical topics. “We will harness our combined expertise and resources to fast track the development of robust AI security standards and practices that will benefit the entire industry.”

Founding sponsors of the initiative include Google, IBM, Intel, Microsoft, Nvidia, Amazon, Anthropic, Cisco, OpenAI and Wiz (which is in advanced talks to be acquired by Google).

While the coalition represents an industry initiative and not a requirement from any government, there have been global governmental efforts to make the emerging technology safer and mitigate some of its dangers.

In May, major AI players including Anthropic, Google, Meta, Microsoft and OpenAI agreed to an expanded set of safety commitments relating to the development of artificial intelligence at the AI Seoul Summit in South Korea. The commitments built on the Bletchley Declaration that came out of a similar UK summit last year.

Meanwhile, Silicon Valley accelerator Y Combinator and a host of AI start-ups recently signed an open letter opposing plans to regulate the sector in California. The signatories spoke out against a bill that aims to put more safety responsibilities on AI developers.

Y Combinator argued that the responsibility for the misuse of large language models should rest “with those who abuse these tools, not with the developers who create them”.
