AI Seminar

How Do We Make Ethical Robots?

Benjamin Kuipers
Professor of Computer Science and Engineering
University of Michigan

Robots are playing increasing roles in our society, not just as tools through which people act, but as goal-seeking agents that decide for themselves which actions to take. This has raised concerns about robots behaving in destructive ways, whether inadvertently or deliberately. If robots are to participate in our society, we want them to behave ethically.

Ethics is a society's way of encouraging its individual members to be trustworthy, which in turn encourages cooperation. Cooperation leads to positive-sum interactions, making the society as a whole stronger and more successful. In contrast, behaviors that exploit trust for individual gain tend to produce negative-sum interactions: trust and cooperation are discouraged, and the society as a whole becomes weaker and less successful.

We consider examples of different levels of ethical reasoning, including responding to immediate desires; maximizing individual expected utility; following ethical principles and social norms; and resolving ethical dilemmas.

As technology advances, and as non-human agents including intelligent robots and other AIs increasingly act as autonomous decision-makers, they must be designed to follow ethical principles, demonstrate trustworthiness, and encourage cooperation among all members of society.

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He received his B.A. from Swarthmore College and his Ph.D. from MIT. He served as Department Chair at the University of Texas at Austin, and is a Fellow of AAAI, IEEE, and AAAS. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the QSIM algorithm for qualitative simulation; the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping; and methods whereby an agent without prior knowledge of its sensors, effectors, or environment can learn its own sensorimotor structure, the spatial structure of its environment, and its own object and action abstractions for higher-level interactions with its world.

Sponsored by