
OpenAI Drops AI Safety, Google Stays Firm


OpenAI has dominated the artificial intelligence news cycle and did so again with the launch of its new GPT-4o generative model. However, reports that the company has disbanded the team working on controls for advanced AI systems have caused a stir, all the more so because Google’s DeepMind is coming out with a new Frontier Safety Framework to prevent runaway AI capabilities.

Security considerations around AI have been making headlines for some time now. The US government has issued security guidelines on bolstering critical infrastructure against AI-driven threats, and the US and China recently held closed-door discussions in Switzerland on security concerns around AI and its use cases. Against this backdrop, OpenAI’s shift and DeepMind’s efforts are critical as countries attempt to rein in AI.

Altman’s focus shifts from safety to products

Before trying to figure out what DeepMind is attempting, let’s look at how OpenAI’s efforts to keep “super-intelligent” AI systems from going rogue panned out. The company had committed 20% of its compute resources to this effort last July, but last week several members of the team resigned over what they saw as broken promises.

Sam Altman and his team appear to have deprioritized safety research in favour of new products, such as the latest generative model that added emotive voice responses to the chatbot. The argument for doing so could be that super-intelligent AI remains theoretical; to date, opinion is divided on whether AI could ever accomplish all human tasks.

Be that as it may, industry experts argue that Altman’s focus is on revenue generation, especially given the funds he has raised to productize generative AI for commercial use. That means that, despite all the talk around safety, OpenAI will continue to launch new products in the months ahead, something the safety team did not approve of.

Reneging on safety could cost OpenAI

The outcome was the resignation of the team’s two co-leads, Jan Leike and co-founder Ilya Sutskever. Leike, a former DeepMind researcher, took to social media to share his disagreements with OpenAI’s leadership. “I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super) alignment, confidentiality, societal impact, and related topics,” he said in a series of posts on X (formerly Twitter).

Media reports suggested that Altman had also upset Sutskever by rushing to launch new AI-powered features at OpenAI’s first developer conference last November. Altman also reportedly clashed with former board member Helen Toner over a paper she co-authored on the company’s approach to safety, even attempting to remove her from the board.

Is Google trying to steal a march here? Maybe, yes

There have also been reports of OpenAI letting its GPT store become a dumping ground for spam, and of the company using data scraped from YouTube, the latter raising legal concerns as it went beyond that platform’s terms of service. It is against this backdrop that Google’s DeepMind is delving into these safety issues, also hoping to attract some of OpenAI’s disgruntled safety researchers in the process.

DeepMind’s initiative is geared towards identifying and preventing runaway capabilities that AI could develop, whether on the path to artificial general intelligence (AGI) or simply in a malware generator going off the rails. In a note shared publicly, DeepMind has laid out a three-step framework.

The steps are quite simple: identify potentially harmful capabilities a model could develop by simulating its development path, evaluate models regularly to detect when they approach known critical capability levels, and apply a mitigation plan to prevent exfiltration of model weights or misuse in deployment. While this might sound simplistic, formalizing these steps could prove critical, as many AI-led efforts appear to be just winging it for now.
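To make those three steps concrete, here is a minimal, purely illustrative sketch in Python of how such an evaluate-and-mitigate loop might be wired up. The capability names, thresholds, safety margin, and evaluation function are all hypothetical and are not taken from DeepMind’s published framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CriticalCapabilityLevel:
    """A hypothetical critical capability level (CCL)."""
    name: str
    threshold: float        # evaluation score at which the capability is deemed critical
    mitigations: List[str]  # mitigation plan to apply when the threshold is approached


# Step 1: identify potentially harmful capabilities up front (hypothetical examples).
CCLS = [
    CriticalCapabilityLevel("autonomous_replication", 0.8,
                            ["restrict weight access", "pause deployment"]),
    CriticalCapabilityLevel("cyber_offense", 0.7,
                            ["harden infrastructure", "limit API features"]),
]


# Step 2: evaluate the model regularly against each identified capability.
def evaluate_model(run_eval: Callable[[str], float]) -> Dict[str, float]:
    """run_eval stands in for a real capability-evaluation suite."""
    return {ccl.name: run_eval(ccl.name) for ccl in CCLS}


# Step 3: apply the mitigation plan before a score actually reaches a critical level.
def apply_mitigations(scores: Dict[str, float], safety_margin: float = 0.1) -> List[str]:
    actions: List[str] = []
    for ccl in CCLS:
        if scores[ccl.name] >= ccl.threshold - safety_margin:
            actions.extend(ccl.mitigations)
    return actions


if __name__ == "__main__":
    # Dummy evaluator; real use would plug in benchmark or red-team results.
    fake_eval = lambda name: {"autonomous_replication": 0.45, "cyber_offense": 0.65}[name]
    scores = evaluate_model(fake_eval)
    print(scores)
    # cyber_offense (0.65) is within the 0.1 safety margin of its 0.7 threshold,
    # so its mitigations are returned.
    print(apply_mitigations(scores))
```

The point of the sketch is simply the shape of the process: thresholds are declared up front, evaluations run on a schedule, and mitigations trigger before a score actually reaches a critical level rather than after.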

“We aim to have this initial framework implemented by early 2025, which we anticipate should be well before these risks materialize. The Framework is exploratory and based on preliminary research, which we hope will contribute to and benefit from the broader scientific conversation. It will be reviewed periodically and we expect it to evolve substantially as our understanding of the risks and benefits of frontier models improves,” says the note. 
