As we approach the end of 2022, and the beginning of a new year, I’m posting my last two book reviews of this first year of my blog, which I began in February. This is usually a season for reflection, and for looking forward to next year’s projects and challenges, so it’s fitting that I’m reviewing two books I’ve read recently that take a similar approach to evaluating aspects of the technology-steeped world in which we live.
In The Loop, NBC News Technology Correspondent Jacob Ward coins the term “the loop” to symbolize an iterative dynamic in our society and lives whereby artificial intelligence, and the computerized algorithms that increasingly influence and control many aspects of our modern lives, are in fact shrinking our capacity for personal choice and individual decision-making.
He begins with several chapters on recent behavioral psychology research, which has demonstrated the extent to which we as humans respond unconsciously to stimuli in our environment of which we aren’t fully aware. We process information and uncertainty in the “reality” we experience on two different levels: one fast and impulsive, and thus prone to being influenced and misled by past experiences and beliefs; the other slower, more analytical, and fact-based.
From an evolutionary standpoint, this two-level thinking process worked reasonably well to allow us to survive and prosper in the primitive world in which we evolved. System 1, the more automatic and frequently used one, allows us to make fast decisions and act immediately without a lot of thought, while System 2 allows us to apply slower, more “critical thinking” evaluation to difficult information and stimuli. The combination of the two allows for quick response to threats, but also the ability to grow, learn, and change.
Unfortunately, the rise of computer algorithms, increasingly built on vast quantities of data and on artificial intelligence meant to help us make choices, has created a dangerous vulnerability for us. The businesses building these systems, Ward suggests, have studied the weaknesses of our human decision-making processes and tailored their algorithms to exploit our emotions and impulses for the benefit of their bottom line, or in support of hidden political objectives, rather than to help us make better decisions.
This is not a startling new revelation at this point. It’s common knowledge by now that smartphones were designed to use visual and aural rewards to keep us looking at them, and that social media’s algorithms were designed to maximize our emotional engagement with their feeds by favoring and promoting posts that engender fear and anger. But Ward does an excellent job of demonstrating how this dynamic plays out in a variety of other contexts and real-world situations.
For example, in one chapter, he describes how the online gaming industry has built incentives into its games that are specifically designed to create addiction in their users. Along the way, he introduces us to Nir Eyal, a Stanford MBA who wrote the popular book Hooked on how to build “habit-forming” products, which became a bible for Silicon Valley entrepreneurs, and to Robert Cialdini's classic work Influence: The Psychology of Persuasion (previously reviewed here) on how to get people to do what you want.
Ward goes on to explore many problems in applying algorithms and artificial intelligence tools in areas such as education, therapy, policing and crime, and discusses troubling aspects of these attempts that surface repeatedly across the different application areas.
He raises the fact that we often don’t know or understand how many of these algorithms arrive at their decisions. They're usually "black boxes" that we're expected (and often required) to accept at face value as valid, even though some have been challenged in court and ultimately found to be faulty or unreliable. In many cases, these algorithms are protected by intellectual property rights, so that the individuals negatively affected by them are explicitly barred from understanding the basis for the decisions the algorithms rendered.
Ward also points out that organizations like courts, law enforcement agencies, insurance companies, credit agencies and banks often use computer algorithms to make life-altering decisions based on underlying big data which may reflect existing social biases and injustices. This process of using real data based on unfair conditions to make new unjust decisions simply perpetuates and reinforces the existing injustice, under the misleading appearance of the algorithm's "objectivity".
This is a wide-ranging and interesting exploration of how the alluring promise of machine intelligence and algorithms to enrich our lives has instead too often been used to limit our choices, and to manipulate us for the benefit of the wealthy and powerful. Recommended.