Is Kimi K2 actually better than Claude? In episode 64 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Chris Hay and Kaoutar El Maghraoui. First, Moonshot AI released Kimi K2, its trillion-parameter MoE model, and our experts analyze the benchmarks and what the release really means. Then, we reflect on DeepSeek-R1 six months later: did it live up to the hype? Next, Google is investing $25 billion in AI infrastructure, and it's not just AI chips. How does this compare to its competitors? Finally, Anthropic announced an expanded partnership with Lawrence Livermore National Laboratory, bringing Claude for Enterprise to the lab's researchers. What AI safety concerns might this raise? Tune in to today's episode of Mixture of Experts to find out.
00:00 – Intro
01:18 – Kimi K2
12:07 – DeepSeek-R1 vibe check
28:49 – Google's data center investments
41:20 – Claude powers LLNL research
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.