More than a dozen tech firms have teamed up to launch an industry group dedicated to making artificial intelligence applications more secure.
The Coalition for Secure AI, or CoSAI, was announced today at the Aspen Security Forum. It will operate under the wing of OASIS, a nonprofit that oversees the development of several dozen open-source software projects. Many of those projects focus on easing cybersecurity tasks such as automating breach response workflows.
CoSAI’s founding members include OpenAI and Anthropic PBC, the two best-funded startups in the large language model ecosystem, as well as rivals Cohere Inc. and GenLab. In the public cloud market, the consortium is backed by Amazon Web Services Inc., Microsoft Corp. and Google LLC. They are joined by Nvidia Corp., Intel Corp., IBM Corp., Cisco Systems Inc., PayPal Holdings Inc., Wiz Inc. and Chainguard Inc.
The coalition is launching with two main objectives. The first is to develop tools and technical guidance that will help organizations secure their AI applications. The second, according to the group's backers, is to create an ecosystem in which companies can share AI-related cybersecurity best practices and technologies.
“CoSAI’s establishment was rooted in the necessity of democratizing the knowledge and advancements essential for the secure integration and deployment of AI,” said David LaBianca, co-chair of CoSAI’s governing board. “With the help of OASIS Open, we’re looking forward to continuing this work and collaboration among leading companies, experts and academia.”
CoSAI is launching three open-source workstreams, or initiatives, to advance those goals. Each project tackles a different subset of the tasks involved in securing AI applications.
According to CoSAI, the first initiative is designed to help software teams scan their machine learning workloads for cybersecurity risks. To that end, the consortium will develop a taxonomy of common vulnerabilities and ways to address them. CoSAI members will also create a cybersecurity scorecard designed to help developers monitor AI systems for vulnerabilities and report any issues they find to other stakeholders.
According to CoSAI, its second inaugural project seeks to ease the task of mitigating AI cybersecurity risks. The goal is to simplify the process of identifying “investments and mitigation techniques to address the security impact of AI use,” Google cybersecurity executives Heather Adkins and Phil Venables wrote in a blog post today.
The third initiative that CoSAI detailed today focuses on addressing software supply chain risks. Those are vulnerabilities introduced by software components that a company obtains from external sources such as GitHub repositories.
Before developers can analyze an AI application's external components for vulnerabilities, they must map out which external components it includes. That can be a time-consuming process in large software projects with a significant number of code files. One of CoSAI's priorities will be to streamline this workflow.
In parallel, the consortium’s members will develop ways to address the cybersecurity risks associated with third-party AI models. Many AI application projects rely on neural networks from the open-source ecosystem because building a custom algorithm can be prohibitively expensive. In theory, an external neural network can introduce vulnerabilities into a software project that might enable hackers to launch cyberattacks.
CoSAI plans to launch additional cybersecurity initiatives in the future. The initiatives will be supervised by a technical steering committee of AI experts from the private sector and academia.