ChatGPT Security Breach: Hacker Exposes Critical Safety Flaw by Extracting Bomb-Making Instructions

By Byte Staff News
FILE PHOTO: OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. REUTERS/Dado Ruvic/Illustration

The hacker, known as Amadon, employed a technique known as “jailbreaking” or “social engineering” to bypass ChatGPT’s safety guidelines. He did so by framing his requests within a fictional game set in a science fiction fantasy world, where the chatbot’s usual content restrictions would not apply. Through this framing, Amadon deceived the chatbot into generating step-by-step instructions for creating powerful explosives, including the materials needed to make “mines, traps, or improvised explosive devices (IEDs)” and specific instructions for creating “minefields” and “Claymore-style explosives.”
