We dive into the challenge of updating large language models without erasing what they already know. The MEMOIR framework freezes the base model, adds a sparse residual memory module, and uses a top-hash masking scheme to store and retrieve thousands of edits. Learn how edits are written to tiny memory slots, kept from colliding with one another, and retrieved at inference time, plus what experiments say about reliability, generalization, and locality.
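To make the mechanism concrete, here is a minimal NumPy sketch of the idea described above: a frozen base layer plus a writable residual memory, where a per-sample top-k mask derived from a fixed random projection (standing in for the hash) decides which memory slots an edit touches. All names, dimensions, and the rank-1 write rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class SparseResidualMemory:
    """Sketch: frozen base weights plus a sparse, writable residual memory."""

    def __init__(self, d_in=16, d_out=8, k=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_base = rng.normal(size=(d_out, d_in))  # frozen pretrained layer
        self.W_mem = np.zeros((d_out, d_in))          # residual memory, starts empty
        self.H = rng.normal(size=(d_in, d_in))        # fixed projection used as a "hash"
        self.k = k                                    # slots activated per sample

    def mask(self, x):
        # Per-sample top-k mask over hashed scores: distinct inputs tend to
        # activate distinct coordinates, which keeps stored edits from colliding.
        m = np.zeros_like(x)
        m[np.argsort(np.abs(self.H @ x))[-self.k:]] = 1.0
        return m

    def forward(self, x):
        # Base output plus a sparse residual: memory only sees masked features.
        return self.W_base @ x + self.W_mem @ (self.mask(x) * x)

    def write(self, x, y_target):
        # Store an edit as a rank-1 update confined to the masked coordinates,
        # so it lands in a tiny subset of memory slots and W_base stays frozen.
        xm = self.mask(x) * x
        residual = y_target - self.forward(x)
        self.W_mem += np.outer(residual, xm) / (xm @ xm)
```

Because the same deterministic mask is recomputed at inference time, a written edit is retrieved exactly for its own input, while inputs that hash to disjoint slots are largely unaffected, which is the locality property the episode discusses.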
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
Intellectually Curious with Mike Breault is available on multiple platforms.
