Europol warns of ChatGPT’s potential criminal applications

What just happened? It’s amazing how much ChatGPT can do, from writing essays and emails to creating programming code. But its abilities are easily abused. The European Union Agency for Law Enforcement Cooperation (Europol) has become the latest organization to warn that criminals will use the chatbot for the likes of phishing, fraud, disinformation, and general cybercrime.

Europol notes that large language models (LLMs) are advancing rapidly and have now entered the mainstream. Organizations across numerous industries are adopting them, and criminal enterprises are no exception.

“The impact these types of models might have on the work of law enforcement can already be anticipated,” Europol wrote. “Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT.”

Europol notes that ChatGPT’s ability to draft text based on a few prompts makes it ideal for phishing attacks. These emails are usually identifiable by their spelling and grammatical errors or suspicious content, tell-tale signs that ChatGPT can avoid. The tool can also write in specific styles based on the type of scam, increasing the chances of a successful social engineering play.

Additionally, ChatGPT can produce authentic-sounding text at speed and scale, making it a perfect tool for propaganda and disinformation purposes.

But possibly the most dangerous aspect of ChatGPT is that it can write malicious code for cybercriminals who have little or no knowledge of programming. Europol writes that the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. “If prompts are broken down into individual steps, it is trivial to bypass these safety measures.”

Based on previous reports, OpenAI’s service is already being abused in this way. In January, security researchers discovered ChatGPT being used on cybercrime forums as both an “educational” tool and a malware-creation platform. The chatbot could also be used to answer technical queries about hacking into networks or escalating privileges.

ChatGPT’s uses aren’t limited to generating text or code. A would-be criminal could use it to learn about a particular crime area, such as terrorism or child abuse. While this information can already be found on the internet, ChatGPT makes it easier to discover and understand because it presents the results in a summarized, ready-to-use form. There’s also the potential for criminals to create a filter-free language model, trained on harmful data and hosted on the dark web.

Finally, Europol warns of the danger that ChatGPT user data, such as sensitive queries, could be exposed. This already happened a week ago, when the service was temporarily taken offline after a bug caused it to show some users the titles of other people’s chat histories. The conversations’ contents were not exposed, but it was still a significant privacy incident.

Europol isn’t the only agency to warn of the potential dangers posed by chatbots. The UK’s National Cyber Security Centre (NCSC) issued a similar warning earlier this month.

Masthead credit: Emiliano Vittoriosi


