In May 2025, researchers discovered a universal jailbreak that works on GPT-4, Claude, Gemini, and more. It lets anyone bypass safety filters and extract harmful, even criminal, instructions using nothing more than a prompt.
They warned OpenAI, Google, Anthropic, and Microsoft.
Most did nothing.
In this episode, we break down what really happened, why it matters, and what it means for anyone using AI right now.
Sources:
📄 Official Research Papers:
https://arxiv.org/abs/2307.15043
https://arxiv.org/abs/2505.10066
📰 Article: https://www.techradar.com/computing/a...
🎧 Listen to the podcast: https://podcasts.apple.com/us/podcast...
📩 Get the AI for Everyone LinkedIn newsletter: / 7133153749287501824
🌐 LaunchReady.ai: https://launchready.ai/