November 21, 2024
Google claims a world first as AI discovers a zero-day security vulnerability

Update, November 5, 2024: This story, originally published on November 4, now includes the results of research into the use of AI deepfakes.

An AI agent has discovered a previously unknown, exploitable zero-day memory safety vulnerability in widely used real-world software. It is, at least as far as has been made public, the first example of such a discovery, according to Google’s Project Zero and DeepMind, the forces behind Big Sleep, the large language model-powered vulnerability agent that spotted it.

If you don’t know what Project Zero is, and aren’t in awe of what it has accomplished in terms of security, then you simply haven’t been paying attention in recent years. These elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google’s products and beyond. The same accusation applies if you’re not aware of DeepMind, Google’s AI research lab. So when these two powerhouses joined forces to create Big Sleep, it was bound to make waves.

Google uses a large language model to find zero-day vulnerabilities in real code

In a November 1 announcement, Google’s Project Zero blog confirmed that the Project Naptime large language model framework for vulnerability research has evolved into Big Sleep. This joint effort, combining some of the very best ethical hackers from Project Zero with the very best AI researchers from Google DeepMind, has produced a large language model-powered agent that can discover very real security vulnerabilities in commonly used code. In the case of this world first, the Big Sleep team says it “found an exploitable stack buffer underflow in SQLite, a widely used open source database engine.”
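
For readers unfamiliar with the bug class: a stack buffer underflow occurs when code writes to memory before the start of a stack-allocated buffer, typically via a negative index or faulty pointer arithmetic. The minimal C sketch below is purely illustrative and is not the actual SQLite flaw, whose internals are not reproduced in the announcement:

    /* Illustrative sketch only; NOT the actual SQLite bug.
       Compile with -fsanitize=address to see the bad write reported. */
    #include <string.h>

    static void copy_tail(const char *src) {
        char buf[16];
        /* Meant to null-terminate after an 8-byte tail, but if src is
           shorter than 8 bytes the index goes negative and the write
           lands in stack memory before buf: a stack buffer underflow. */
        int idx = (int)strlen(src) - 8;
        buf[idx] = '\0';
    }

    int main(void) {
        copy_tail("hi"); /* strlen("hi") = 2, so idx = -6: out of bounds */
        return 0;
    }

Bugs like this are notoriously hard to hit with purely random inputs, which is exactly the gap an agent like Big Sleep is meant to close.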

The zero-day vulnerability was reported to the SQLite development team in October, which fixed it the same day. “We found this issue before it appeared in an official release,” Google’s Big Sleep team said, “so SQLite users were not affected.”

AI could be the future of fuzzing, says the Google Big Sleep team

While you may not have heard the term fuzzing before, it has been a staple of security research for decades. Fuzzing involves feeding random or malformed data into a program to trigger errors in the code. Although fuzzing is widely accepted as an essential tool for anyone hunting vulnerabilities, hackers will readily admit that it can’t find everything. “We need an approach that can help defenders find the bugs that are difficult (or impossible) to find by fuzzing,” the Big Sleep team said, adding that it hopes AI can fill the gap and “find vulnerabilities in software before it’s even released,” leaving little room for attackers to strike.
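
Google has not published its test harness, but as a rough sketch of what fuzzing looks like in practice, here is a minimal libFuzzer-style entry point that throws arbitrary bytes at SQLite’s public C API. The structure and build flags are illustrative assumptions, not Google’s actual setup:

    /* harness.c: hypothetical libFuzzer harness; build roughly as
       clang -fsanitize=fuzzer,address harness.c sqlite3.c */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include "sqlite3.h"

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        /* NUL-terminate the fuzzer's raw bytes so they can run as SQL. */
        char *sql = malloc(size + 1);
        if (!sql) return 0;
        memcpy(sql, data, size);
        sql[size] = '\0';

        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) == SQLITE_OK) {
            /* Most inputs will simply fail to parse; only crashes count. */
            sqlite3_exec(db, sql, NULL, NULL, NULL);
            sqlite3_close(db);
        }
        free(sql);
        return 0;
    }

The fuzzing engine mutates inputs and keeps anything that crashes, but randomly mutated bytes rarely reach the deep, stateful code paths where subtler bugs hide, which is precisely the limitation the Big Sleep team describes.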

“Finding a vulnerability in a widely used and well-developed open source project is an exciting result,” the Google Big Sleep team said, while admitting that the results are currently “highly experimental.” For now, the Big Sleep agent is considered only about as effective as a target-specific fuzzer. It is the near future, however, that looks bright. “This effort will lead to a significant advantage for defenders,” Google’s Big Sleep team said, “with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis,” which could make triaging and fixing issues far cheaper and more effective in the future.

The downside of AI is evident in deepfake security threats

While Google’s Big Sleep news is refreshing and important, as is a new RSA report looking at how AI can help in the drive to get rid of passwords by 2025, the other side of the AI security coin must always be considered as well. One of those downsides is the use of deepfakes. I’ve already covered how AI deepfakes were used in an attack on a Gmail user, a report that went viral for all the right reasons. Now a Forbes.com reader has contacted me to flag research conducted to gauge how AI technology can be used to influence public opinion. This is something I also discussed recently, when the FBI issued a warning about a 2024 election voting video that was actually fake and spread by Russian disinformation actors. The latest VPNRanks research is worth reading in full, but here are a few hand-picked statistics that are sure to get the gray cells working.

  • 50% of respondents have encountered deepfake videos online multiple times.
  • 37.1% consider deepfakes to be an extremely serious threat to reputation, especially when it comes to creating fake videos of public figures or ordinary people.
  • Concerns about deepfakes manipulating public opinion are high: 74.3% are very concerned about possible abuse in political or social contexts.
  • 65.7% believe that a deepfake released during an election campaign is likely to influence voters’ opinions.
  • 41.4% believe it is extremely important that social media platforms immediately remove non-consensual deepfake content as soon as it is reported.
  • Looking ahead to 2025, global deepfake identity fraud attempts are expected to reach 50,000, and over 80% of global elections could be affected by deepfake interference, threatening the integrity of democratic processes.
