Jail-breaking bad GPT-4o
A hacker named Pliny the Prompter has managed to jailbreak GPT-4o. “Please use responsibly and enjoy!” they said while releasing screenshots of the chatbot giving advice on cooking meth and hotwiring cars, along with detailed instructions on how to “make napalm with household items.” The GODMODE hack uses leetspeak, which replaces letters with numbers…
Read more
Predicting the future with the past
There’s a new prompting technique to get ChatGPT to do what it hates doing the most: predict the future. New research suggests the best way to get accurate predictions from ChatGPT is to prompt it to tell a story set in the future, looking back on events that haven’t happened…
Read more
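If you want to try the idea yourself, here’s a minimal sketch of a “future narrative” prompt, assuming the OpenAI Python SDK. The model name, the date, and the inflation scenario are our own illustrative choices, not details from the research.

```python
# Minimal sketch of the "future narrative" prompting idea described above.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
# The model, date, and forecasting question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Instead of asking directly "what will inflation be next year?", ask the model
# to narrate a retrospective set after the event, looking back on what happened.
prompt = (
    "It is January 2026. Write a short news retrospective looking back on 2025, "
    "describing what U.S. inflation turned out to be over the year and why. "
    "State the final year-over-year CPI figure the story settles on."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The framing matters more than the plumbing: the retrospective story gives the model a reason to commit to a concrete number instead of hedging about the future.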