A detailed examination of people's reactions to machine actions as compared to human actions. Through dozens of experiments, this book explores when and why people judge humans and machines differently. How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance?
How Humans Judge Machines compares people's reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions.
Are there conditions in which we judge machines unfairly? Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender?
César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI.
Although the book is mainly about how humans perceive the actions of fellow humans and machines, it also offers many insights and future directions for how AI product liability should be approached. The book models perceived wrongness mathematically, with moral dimensions as variables. I particularly liked the conclusion, where the author uses bureaucracy as a scenario and draws conclusions based on whether people view bureaucracy as a machine-like system or attribute its functions to the leaders running it.
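For readers curious what that kind of analysis might look like in practice, here is a minimal, purely illustrative sketch. It is not the authors' actual model, data, or coefficients: the variable names (harm, intention), the simulated ratings, and the effect sizes are all hypothetical. It simply shows the general shape of the approach the review describes, regressing judged wrongness on a randomly assigned human-vs-machine condition and on moral dimensions of the scenario.

```python
# Illustrative sketch only: NOT the book's actual model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # hypothetical number of survey responses

# Randomly assign each respondent to a "human" or "machine" version
# of the same scenario (the counterfactual comparison).
is_machine = rng.integers(0, 2, n)
harm = rng.uniform(0, 1, n)        # hypothetical perceived-harm score
intention = rng.integers(0, 2, n)  # hypothetical intentionality flag

# Simulated wrongness ratings on a 0-10 scale (made-up coefficients).
wrongness = (
    3.0 + 1.5 * harm + 2.0 * intention + 0.8 * is_machine * intention
    + rng.normal(0, 1, n)
).clip(0, 10)

df = pd.DataFrame(dict(wrongness=wrongness, is_machine=is_machine,
                       harm=harm, intention=intention))

# Linear model: does judged wrongness differ by agent type, and does
# the gap depend on the scenario's moral dimensions?
model = smf.ols("wrongness ~ is_machine * intention + harm", data=df).fit()
print(model.summary())
```

In a design like this, the coefficient on the machine condition (and its interactions with the moral dimensions) is what captures whether people judge the same action differently depending on who, or what, performed it.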
This book could easily have been a long paper. That does not mean it is not an important and (mostly) easy-to-read book about how people judge machines and their behavior compared to that of human counterparts. It is always good to have some numbers on things that seem more or less intuitive. As the authors note, it would be great to have similar data from different cultural contexts (the study was done in the US, using Mechanical Turk, with all the good and bad that implies).
In the end, we need more studies like this, across countries, age groups, and other demographics.