In this episode, we examine the rapidly growing threat of AI jailbreaks — a cybersecurity challenge reshaping the landscape of large language models (LLMs) and enterprise chatbots. According to the IBM 2025 Cost of a Data Breach Report, 13% of all data breaches now involve AI systems, with the vast majority stemming from jailbreak attacks that circumvent developer-imposed guardrails.
A highlight of our discussion is Cisco’s “instructional decomposition” jailbreak technique, which shows how attackers can extract original training data — even copyrighted material — by manipulating conversational context and using incremental requests that evade security protocols. We’ll break down how this method works, why it’s so difficult to detect, and what it means for the future of enterprise AI.
Topics we cover include:
As enterprises accelerate AI adoption, the line between innovation and vulnerability is razor-thin. Jailbreaks prove that guardrails alone are not enough. To safeguard sensitive data and prevent catastrophic breaches, organizations must adopt layered defenses, continuous monitoring, and robust governance frameworks.
#AIJailbreak #LLMSecurity #Cisco #InstructionalDecomposition #ChatbotRisks #DataExfiltration #GenerativeAI #Cybersecurity #AICompliance #IBMDataBreachReport #PromptInjection #EnterpriseAI #SamsungDataLeak #DeepSeekR1 #ZeroTrustAI #AIRegulation
En liten tjänst av I'm With Friends. Finns även på engelska.