Friday, November 22, 2024

Google’s AI bug hunters sniff out two dozen-plus code flaws

Google’s OSS-Fuzz project, which uses large language models (LLMs) to help find bugs in code repositories, has now helped identify 26 vulnerabilities, including a critical flaw in the widely used OpenSSL library.

The OpenSSL bug (CVE-2024-9143) was reported in mid-September and fixed a month later. Some, but not all, of the other vulnerabilities have also been addressed.

Google believes its AI-driven fuzzing tool – which injects unexpected or random data into software to catch errors – found something that human-driven fuzzing would likely never have caught.

“As far as we can tell, this vulnerability has likely been present for two decades and wouldn’t have been discoverable with existing fuzz targets written by humans,” said Oliver Chang, Dongge Liu, and Jonathan Metzman of Google’s open source security team in a blog post.

If that’s correct, security research henceforth really ought to involve AI, for fear that threat actors are already using it – and finding flaws that would be invisible to the AI-deprived.

Another example cited by Google’s security team, a bug in the cJSON project, is similarly said to have been spotted by AI and missed by a human-written fuzzing test.

So AI assistance appears to offer substantial value to security professionals. The Chocolate Factory earlier this month announced that a separate LLM-based bug-hunting tool called Big Sleep had, for the first time, identified a previously unknown exploitable memory-safety flaw in real software.

And in October, Seattle-based Protect AI released an open source tool called Vulnhuntr that used Anthropic’s Claude LLM to find zero-day vulnerabilities in Python-based projects.

The OSS-Fuzz team introduced AI-based fuzzing in August 2023 in an effort to improve fuzzing coverage – the proportion of a codebase that actually gets exercised during testing.

Fuzzing starts with drafting a fuzz target – “a function that accepts an array of bytes and does something interesting with these bytes using the API under test” – then fixing any compilation issues, running the target to see how it performs, correcting any mistakes, and repeating the cycle to see whether crashes can be traced to specific vulnerabilities.
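By way of illustration, here is a minimal, hand-written libFuzzer-style fuzz target for the cJSON parser mentioned above. It is a sketch of the kind of function the quote describes, not the target Google’s tooling generated, and the file names in the build comment are hypothetical.

/* fuzz_cjson.c – illustrative, hand-written fuzz target for cJSON.
 * Build (typical libFuzzer workflow, assumed rather than OSS-Fuzz specific):
 *   clang -g -O1 -fsanitize=fuzzer,address fuzz_cjson.c cJSON.c -o fuzz_cjson
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#include "cJSON.h"  /* the API under test */

/* Entry point the fuzzing engine calls with an arbitrary array of bytes. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    /* cJSON_Parse expects a NUL-terminated string, so copy and terminate. */
    char *input = malloc(size + 1);
    if (input == NULL) {
        return 0;
    }
    memcpy(input, data, size);
    input[size] = '\0';

    /* Any crash or sanitizer report triggered in here is saved by the
     * fuzzing engine as a reproducer input for later triage. */
    cJSON *json = cJSON_Parse(input);
    if (json != NULL) {
        cJSON_Delete(json);
    }

    free(input);
    return 0;  /* non-crashing inputs always return 0 */
}

A target like this only ever exercises the code paths its author thought to call – which is exactly the limitation Google says its LLM-generated targets help overcome.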

Initially, the LLM-based tooling handled the first two steps of that process: 1) Drafting an initial fuzz target; and 2) Fixing any compilation issues that arise.

Then, at the beginning of 2024, Google released that LLM framework as an open source project and has since been working to improve how the software handles the subsequent steps: 3) Running the fuzz target to see how it performs, and fixing any obvious mistakes causing runtime issues; 4) Running the corrected fuzz target for a longer period of time, and triaging crashes to determine their root causes; and 5) Fixing vulnerabilities.
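To make step 4 concrete: once the fuzzing engine saves a crashing input, triage usually means replaying that input through the same fuzz target under a debugger or sanitizer. The standalone driver below is a sketch of that replay step, assuming the illustrative fuzz_cjson.c target shown earlier; the file names are hypothetical and this is not part of Google’s tooling.

/* repro.c – illustrative standalone reproducer for crash triage.
 * Build without the fuzzing engine so the main() below is used:
 *   clang -g -fsanitize=address repro.c fuzz_cjson.c cJSON.c -o repro
 * Usage: ./repro crash-input-file
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Provided by the fuzz target's translation unit. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <crash-input-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    if (len < 0) {
        fclose(f);
        return 1;
    }

    uint8_t *buf = malloc(len > 0 ? (size_t)len : 1);
    if (buf == NULL) {
        fclose(f);
        return 1;
    }
    size_t n = fread(buf, 1, (size_t)len, f);
    fclose(f);

    /* The crash or sanitizer report produced here points at the root cause. */
    LLVMFuzzerTestOneInput(buf, n);

    free(buf);
    return 0;
}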

According to Google, its LLM can now handle the first four steps of the developer’s fuzzing process, and the plan is to tackle the fifth shortly.

“The goal is to fully automate this entire workflow by having the LLM generate a suggested patch for the vulnerability,” said Chang, Liu, and Metzman. “We don’t have anything we can share here today, but we’re collaborating with various researchers to make this a reality and look forward to sharing results soon.” ®
