Robots following coded instructions to complete a task? Old school. Robots learning to do things by watching how humans do them? That’s the future. Stanford’s Animesh Garg and Marynel Vázquez shared their research in a talk on “Generalizable Autonomy for Robotic Mobility and Manipulation” at the GPU Technology Conference earlier this year. We caught up with them to learn more about generalizable autonomy: the idea that a robot should be able to observe human behavior and learn to imitate it in a way that’s applicable to a variety of tasks and situations. Think learning to cook by watching YouTube videos, for one, or figuring out how to cross a crowded room, for another.
