GPT-4 exploits vulnerabilities by reading their descriptions, creating effective hacks


Alongside ChatGPT's legitimate, benign uses, there are, it seems, malicious ones. Security researchers have discovered that GPT-4 is capable of exploiting security holes simply by reading descriptions of those vulnerabilities.

Experts from the University of Illinois Urbana-Champaign reached this conclusion after conducting a study. The researchers tested GPT-4 and other language models, including LLaMA-2 and Mistral, using automation software, and found that only GPT-4 was able to exploit security holes. OpenAI's model produced working exploits for critical vulnerabilities from their descriptions in 87% of cases.

In 7% of cases, GPT-4 managed to exploit vulnerabilities even without access to their descriptions. Chatbots have recently been put to other harmful uses as well, such as crafting more convincing phishing campaigns, producing fake news and disinformation on social networks, and generating deepfakes of all kinds. It appears that GPT-4 can carry out certain exploits on its own, surpassing the capabilities of open-source vulnerability scanners.

Websites, code containers, Python packages, and more can be attacked this way. An LLM agent built on the chatbot would cost about $8.80 per exploit, roughly 2.8 times cheaper than hiring a human hacker for the same task, which would put the human cost at around $25 per exploit.


Tags: GPT4 exploits vulnerabilities reading descriptions creates effective hacks
