We explore new research on how large language models reason across text, code, images, and audio. From Llama 2’s English detour to a proposed semantic hub that binds meaning across modalities, we discuss what this reveals about inner reasoning, how researchers can steer outputs with English triggers, and what it means for transparency, translation, and the future of AI.
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
Intellectually Curious with Mike Breault is available on multiple platforms. The information on this page comes from public podcast feeds.
