Learning Fast and Slow: Levels of Learning in General Autonomous Intelligent Agents and Semantic Robot Programming
General autonomous intelligent agents with ongoing existence face many challenges when it comes to learning. On the one hand, they must continually react to their environment, focusing their computational resources on making the best decision for the current situation using all of their available knowledge. On the other hand, they need to learn everything they can from their experience, building up their knowledge so that they are prepared to make the best decisions in the future. We identify two distinct levels of learning in general autonomous intelligent agents. Level 1 consists of architectural learning mechanisms that are innate, automatic, effortless, and outside of the agent's control. Level 2 consists of deliberate learning strategies that are controlled by the agent's knowledge and that create experiences for level 1 mechanisms to learn from. One hypothesis is that among natural systems, only humans employ level 2 strategies. We describe these levels and provide examples from our research with the Soar cognitive architecture and a task-learning agent. We also consider additional levels and discuss the implications of this approach for learning in autonomous agents.
John E. Laird is the John L. Tishman Professor of Engineering at the University of Michigan, where he has been since 1986. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1983, working with Allen Newell. From 1984 to 1986, he was a member of the research staff at Xerox Palo Alto Research Center. He is one of the original developers of the Soar architecture and leads its continued evolution. He was a founder of Soar Technology, Inc., and he is a Fellow of AAAI, AAAS, ACM, and the Cognitive Science Society.
Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science and Engineering at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins was named a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations.