ChatGPT Plus can exploit zero-day security vulnerabilities — why this should concern you

Cybercrime will soon be for the people.
By Chance Townsend
A person's hand holds an iPhone with the OpenAI ChatGPT app running GPT-4 visible
Credit: Smith Collection/ Gado / Contributor / Archive Photos

GPT-4, OpenAI's latest multimodal large language model (LLM), can exploit zero-day vulnerabilities independently, according to a study reported by TechSpot.

The study, conducted by researchers at the University of Illinois Urbana-Champaign, showed that LLMs, including GPT-4, can attack systems by exploiting undisclosed vulnerabilities, known as zero-day flaws. GPT-4, available through the ChatGPT Plus service, demonstrated a significant advance over its predecessors in penetrating systems without human intervention.

The study involved testing LLMs against a set of 15 "high to critically severe" vulnerabilities from various domains, such as web services and Python packages, which had no existing patches at the time.


GPT-4 displayed startling effectiveness by successfully exploiting 87 percent of these vulnerabilities, compared to a zero percent success rate by earlier models like GPT-3.5. The findings suggest that GPT-4 can autonomously identify and exploit vulnerabilities that traditional open-source vulnerability scanners often miss.

Why this is concerning

The implications of such capabilities are significant, with the potential to democratize the tools of cybercrime, making them accessible to less skilled individuals known as "script-kiddies." UIUC's Assistant Professor Daniel Kang emphasized the risks posed by such powerful LLMs, which could lead to increased cyber attacks if detailed vulnerability reports remain accessible.

Kang advocates limiting detailed disclosures of vulnerabilities and suggests more proactive security measures, such as regular updates. However, his study also found that withholding information has limited effectiveness as a defense strategy. Kang emphasized the need for more robust security approaches to address the challenges introduced by advanced AI technologies like GPT-4.


Chance Townsend
Assistant Editor, General Assignments

Currently residing in Austin, Texas, Chance Townsend is an Assistant Editor at Mashable. He has a Master's in Journalism from the University of North Texas with the bulk of his research primarily focused on online communities, dating apps, and professional wrestling.

In his free time, he's an avid cook, loves to sleep, and "enjoys" watching the Lions and Pistons break his heart on a weekly basis. If you have any stories or recipes that might be of interest you can reach him by email at [email protected].

