Towards Learning the Foundations of Manipulation Actions with Unguided Exploration
This event is free and open to the public.
Virtual dissertation defense (Passcode: 728538)
ABSTRACT: Human infants are not born with the ability to reach and grasp, but after months of typical development they can do both reliably. During this time, the infant receives minimal guidance, learning primarily from its own autonomous experience through its developing senses. How is this learning possible, especially when that experience begins with seemingly random motions?
We present a computational model that allows an embodied robotic agent to learn these foundational actions in a manner consistent with infant learning. By examining the model and the resulting behaviors, we can identify the knowledge sufficient to perform these actions and how that knowledge may be represented.
Our agent uses a graph representation for peripersonal space, the space surrounding the agent and within reach of its manipulators. The agent constructs the Peripersonal Space (PPS) Graph by performing random motions. For each pose visited, a node stores the joint angles that produced it and an image of the arm in that configuration. Edges connect each pair of nodes that have a feasible motion between them. The PPS Graph provides a mapping between configuration space and image space, and the agent learns to use it as a powerful tool for planning manipulation actions.
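The PPS Graph construction described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: nodes store joint angles (and, in the full model, a camera image of the arm), and here feasibility between poses is approximated by a joint-space distance threshold, which is an assumption made for this sketch.

```python
import math
import random

class PPSGraph:
    """Hypothetical sketch of a Peripersonal Space (PPS) Graph:
    nodes hold a joint configuration (plus an image in the full model);
    edges link configurations with a feasible motion between them."""

    def __init__(self, feasible_threshold=2.0):
        self.nodes = {}   # node id -> {"joints": tuple, "image": None}
        self.edges = {}   # node id -> set of neighbor node ids
        self.threshold = feasible_threshold  # assumed feasibility test

    def add_pose(self, joints, image=None):
        nid = len(self.nodes)
        self.nodes[nid] = {"joints": tuple(joints), "image": image}
        self.edges[nid] = set()
        # Connect the new pose to every pose reachable by a feasible motion.
        for other, data in self.nodes.items():
            if other != nid and self._feasible(data["joints"], joints):
                self.edges[nid].add(other)
                self.edges[other].add(nid)
        return nid

    def _feasible(self, a, b):
        # Stand-in criterion: small joint-space distance.
        return math.dist(a, b) <= self.threshold

# Build the graph from random motions, as the agent does during exploration.
rng = random.Random(0)
graph = PPSGraph()
for _ in range(20):
    graph.add_pose([rng.uniform(-math.pi, math.pi) for _ in range(3)])
```

The threshold-based edge test is only a placeholder; any motion-feasibility check (e.g. collision-free interpolation between configurations) could be substituted.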
Initially, the only known actions are moves to random PPS Graph nodes. The agent selects a known action, performs it, and observes the result. When a result is unusual, the agent defines a new action whose goal is to achieve that result deliberately. Once a new action is defined, the agent learns features that help achieve its goal more reliably. At the conclusion of this work, the move, reach, ungrasp, and place actions are fully reliable, and the grasp and pick-and-place actions are semi-reliable.
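The action-discovery loop described above can be sketched as follows. Everything here is a hypothetical illustration of the bootstrapping idea, not the dissertation's algorithm: the outcome model, the chance of an unusual result, and the naming of new actions are all assumptions.

```python
import random

def typical_outcome(action):
    # Assumed model of the expected result of a known action.
    return f"{action}:expected"

def perform(action, rng):
    # Occasionally an action produces a surprising result
    # (e.g. a random move happens to contact an object).
    if rng.random() < 0.2:
        return f"{action}:unusual"
    return typical_outcome(action)

def explore(known_actions, steps, seed=0):
    """Perform known actions; define a new action for each unusual result."""
    rng = random.Random(seed)
    actions = list(known_actions)
    for _ in range(steps):
        action = rng.choice(actions)
        result = perform(action, rng)
        if result != typical_outcome(action):
            # New action: reproduce the unusual result on purpose.
            new_action = f"achieve({result})"
            if new_action not in actions:
                actions.append(new_action)
    return actions

# Start with only random moves, as the abstract describes.
discovered = explore(["move"], steps=50)
```

In the full model, "learning features" would then refine each discovered action; here `explore` only shows how the action repertoire grows from surprising outcomes.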