Book review: Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen.
This book combines the views of leading commentators on ethics, methods of implementing AI, and the risks of AI into a set of proposals for how machines ought to achieve ethical behavior.
The book mostly provides an accurate survey of what those commentators agree and disagree about. But there's enough disagreement that we need some insight into which views are correct (especially about theories of ethics) in order to offer useful advice to AI designers, and the authors don't have that kind of insight.
The book focuses mainly on the near-term risks of software that is much less intelligent than humans, and it is complacent about the risks of superhuman AI.
Considering the implications of superhuman AIs for theories of ethics ought to illuminate flaws in those theories that aren't obvious when considering only human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs search the space of possible ideas. This argument has serious problems: what stops an AI from tinkering with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.
Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder whether those advocates are trying to quell apprehension about AI risks, when I've observed fairly consistent efforts by those advocates to create apprehension among AI researchers.