Book Review: How To Stay Smart In a Smart World
Gerd Gigerenzer is Director of the Harding Center for Risk Literacy at the University of Potsdam and a former Professor of Psychology at the University of Chicago. He has written several books on risk.
This book is in two parts. The first concentrates on the ‘human affair’ with AI and the second on the high stakes involved. The first chapter looks in detail at dating apps. These have been around for a long time (I remember filling out questionnaires for computer dating programs in the late 1960s). They still rely on the same principles: compare two people’s responses to questions, weight the responses and declare a match. The problems are obvious. People’s responses may not be honest. More importantly, people often react differently when meeting face to face. There are apps that provide photos, but similar reservations apply. And the suppliers of the apps may use bots to generate phantom users.
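The matching principle described above (compare answers, weight them, declare a match) can be sketched in a few lines. This is a hypothetical illustration, not the algorithm of any real dating app; the questions, weights and threshold are all invented.

```python
def match_score(answers_a, answers_b, weights):
    """Weighted fraction of questions on which both users agree.

    All question names and weights here are illustrative only.
    """
    total = sum(weights.values())
    agreed = sum(w for q, w in weights.items()
                 if answers_a.get(q) == answers_b.get(q))
    return agreed / total

# Made-up questionnaire responses for two users.
alice = {"smoker": "no", "pets": "yes", "lives_in_city": "yes"}
bob   = {"smoker": "no", "pets": "no",  "lives_in_city": "yes"}

# Some questions count more than others.
weights = {"smoker": 3, "pets": 1, "lives_in_city": 2}

score = match_score(alice, bob, weights)
print(round(score, 2))  # agree on "smoker" (3) and "lives_in_city" (2) out of 6
```

The review's objections apply directly: the score is only as good as the honesty of the answers, and no weighting scheme captures how two people behave face to face.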
In the second chapter, Gigerenzer introduces the concept of the stable-world principle. This is where AI excels: the games of chess and Go have fixed pieces and rules. The most recent version of Go AI was set up with only the rules, played against itself many times, and emerged as a player that can defeat any human or any previous AI. However, this software could not manage a game of chess.
He looks at how our perception of intelligence has changed. Originally, mathematicians such as Gauss were considered paragons of intelligence. But as computers, first human and later machine, became capable of intricate computation, our perception shifted towards artistic and creative people as the exemplars of intelligence.
Gigerenzer looks at autonomous vehicles. Here the stable-world principle does not apply. A vehicle cannot predict the likely behaviour of pedestrians; indeed, the vehicle does not even have a concept of what a pedestrian is. He uses the example of a small child at the edge of the road. If a human driver sees a parent with the child, they understand that it is less likely that the child will run into the road. The AI cannot make such an evaluation.
This leads to a discussion of common sense, which cannot be coded into any AI. The final chapter of part one looks at big data. One drawback of big data collections is that they are stale and riddled with biases. Gigerenzer cites the recency principle: a single current data point may be more relevant than a large collection of past data. His example is Google’s attempt at flu prediction software, where the most accurate predictor of flu infection turned out to be the most recent data point.
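The recency idea is simple enough to show numerically. This is a minimal sketch with made-up weekly case counts, not Gigerenzer's data or Google's model: when infections are surging, forecasting "next week looks like this week" tracks the surge, while an average over a long history lags badly.

```python
# Invented weekly flu case counts during the start of an outbreak.
cases = [100, 120, 90, 300, 900]

# Recency heuristic: predict next week from the latest observation only.
recency_forecast = cases[-1]

# Big-data-style baseline: average over the whole (stale) history.
average_forecast = sum(cases) / len(cases)

print(recency_forecast)   # 900
print(average_forecast)   # 302.0
```

If cases keep rising, the recency forecast is far closer to reality than the historical average, which is the point of the flu example.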
In the second part, Gigerenzer examines transparency. Most of the data collected for AI is not available for examination by the casual user. Worse still are the results of neural networks, where not even the coders know how conclusions are reached. He has a chapter titled “Sleepwalking into Surveillance”: by apathetic default, we seem unconcerned that our movements and habits are being tracked.
Gigerenzer describes the psychology of getting users hooked. Research has shown that people perform poorly at tasks if their mobile is nearby, even when it is turned off; when required to leave their phones inaccessible, they do better. This leads into a concern for safety: allowing mobile phones in cars leads to distracted driving.
Finally, he discusses the increasing difficulty of distinguishing fact from fake. We cannot assume that digital natives have the skills to evaluate all the information they encounter.
This is another important book that should be read by anyone with an interest in current technologies.