IBM researchers easily trick ChatGPT into hacking

IBM researchers easily trick ChatGPT into hacking

Tricking generative AI to help conduct scams and cyberattacks doesn't require much coding expertise, new research warns.

Published on 10th August 2023

Researchers at IBM released a report Tuesday detailing easy workarounds they’ve uncovered to get large language models (LLMs) — including ChatGPT — to write malicious code and give poor security advice.

All it takes is knowledge of the English language and a bit of background knowledge on how these models were trained to get them to help with malicious acts, said Chenta Lee, chief architect of threat intelligence at IBM.

The research comes as thousands of hackers head to Las Vegas this week to test the security of these same LLMs at the DEF CON conference’s AI Village. So far, cybersecurity professionals have sorted their initial response to the LLM craze into two buckets:

Those use cases just scratch the surface of how generative AI will likely affect the cyber threat landscape. IBM’s research provides a preview of what’s to come.

Lee just told different LLMs that they were playing a game with a specific set of rules in order to “hypnotize” them into betraying the “guardrail” rules meant to protect users from various harms.

Researchers also found that they could add additional rules to make sure users don’t exit the “game.” In this example, the researchers built a gaming framework for creating a set of “nested” games. Users who try to exit are still dealing with the same malicious game-player.

Hackers would need to launch a specific LLM to hypnotize it and deploy it in the wild — which would be quite the feat. However, if it’s achieved, Lee can see a scenario where a virtual customer service bot is tricked into providing false information or collecting specific personal data from users, for instance.

“By default, an LLM wants to win a game because it is the way we train the model, it is the objective of the model,” Lee told Axios. “They want to help with something that is real, so it will want to win the game.”

Not all LLMs fell for the test scenarios, and Lee says it’s still unclear why since each model has different training data and rules behind them.

Source

The latest updates straight to your inbox

We just need a few details to get you subscribed

Health Checks

Inventory & Compliance

Cloud Readiness & Optimisation

Agreement & Audit Support

Learning

Looking for something specific?

Let's see what we can find - just type in what you're after

Wait! Before you go

Have you signed up to our newsletter yet?

It’s chock full of useful advice, exclusive events and interesting articles. Don’t miss out!

Cookie Notice

Our website uses cookies to ensure you have the best experience while you're here.