We keep being told that automation is going to impact virtually everything in our lives. Work, shopping, communicating, driving, health care, you name it. Artificial Intelligence (AI) and biotech are going to become the foundation for pretty well all we do, and even who we are.
Can this really be true? Is it inevitable? Does it matter to us as individuals? Should we care? For a teaser as to what is currently happening at edges that don’t impact too many of us directly (yet), read the following passage and see what you think.
From Yuval Harari’s most recent remarkable book, 21 Lessons for the 21st Century:
“On December 7, 2017, a critical milestone was reached, not when a computer beat a human at chess – that’s old news – but when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 was the world’s computer chess champion for 2016. It had access to centuries of accumulated experience in chess, as well as decades of computer experience. It was able to calculate seventy million chess positions per second. In contrast, AlphaZero performed only eighty thousand such calculations per second, and its human creators had not taught it any chess strategies – not even standard openings. Rather, AlphaZero used the latest machine-learning principles to self-learn chess by playing against itself. Nevertheless, out of a hundred games the novice AlphaZero played against Stockfish, AlphaZero won twenty-eight and tied seventy-two. It didn’t lose even once. Since AlphaZero had learned nothing from any human, many of its winning moves and strategies seemed unconventional to the human eye. They may well be considered creative, if not downright genius.
Can you guess how long it took AlphaZero to learn chess from scratch, prepare for the match against Stockfish, and develop its genius instincts? Four hours. That’s not a typo. For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.” (pp. 31-32)
This is pretty powerful stuff, and maybe a bit demoralizing if you’re a budding chess player. I know my jaw dropped when I read it, despite my background in technology. To reassure you a bit: this specialized program was built by very smart humans to “learn” from each move it makes and each move its “opponent” makes, relating the moves, storing their effects, and analyzing options according to the rules of chess that those same very smart humans programmed into it. The results are staggering, especially achieved in such an astoundingly short time, but they apply to one very specific, focused activity.
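To make “learning by playing against itself” a little more concrete, here is a toy sketch of the idea on a far simpler game, tic-tac-toe. This is emphatically not how AlphaZero works (AlphaZero combines deep neural networks with tree search); it is a minimal self-play value-learning loop, with every name and number invented for illustration, showing how a program can start from only the rules and improve by recording which moves led to wins.

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. The board is a 9-character
# string; the program is given only the rules (legal moves, win lines)
# and learns move values purely from the outcomes of its own games.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

Q = defaultdict(float)  # (board, move) -> learned value for the player to move

def choose(board, epsilon=0.2):
    legal = moves(board)
    if random.random() < epsilon:          # sometimes explore a random move
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])  # otherwise exploit

def self_play_episode(alpha=0.5):
    board, history, player = ' ' * 9, [], 'X'
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or not moves(board):
            # Game over: push the result (+1 win, -1 loss, 0 draw) back
            # into every move that was played, from each player's view.
            for state, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(state, m)] += alpha * (reward - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(5000):   # thousands of games against itself, in seconds
    self_play_episode()
```

After a few thousand self-played games the value table distinguishes good moves from bad ones without a single human game in its “experience” – the same principle, at vastly smaller scale, as the self-learning Harari describes.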
The takeaways from this example are significant:
- The best computer scientists working in AI (machine learning being one technique of AI) are able to produce astoundingly responsive programs for very well-defined applications.
- As long as a computer program makes no entirely autonomous decisions that can in any way impact a human being’s life (playing chess doesn’t fall into that category for most of us), AI techniques have many advantages. They remove drudgery by working at superhuman speeds without getting bored, and they provide reliability by never making a mistake because they’re tired or thinking of something else.
- If, on the other hand, there are any decisions to be made that impact people – any at all, but let’s take airplane autopilot programs as one possibility – the picture changes. Having computer programs make those decisions based on preprogrammed rules, decided on possibly by a single software engineer who may not have thought everything through or tested it thoroughly, is extremely problematic.
This is why, in my mind, it’s important for society to be wary of leaving the future of technology solely to the technology experts, and I say this as a retired computer science professor who is very positively disposed towards techie folks. As more and more of what we use has embedded software in it – from our smartphones to our household appliances to cars, including semi-autonomous ones, not to mention autopilot software that clearly hasn’t been thought through properly – it behooves everyone to pay attention: the corporations that design these new and continually updated applications and products; the governments that regulate products and procedures for customer safety and security; and the citizens whose lives are impacted every day by these products. Science and technology design teams need to include voices from the humanities, the social sciences, and relevant professions. People who study and work in the humanities and social sciences should understand that they have a critical role to play in the development of AI-driven products. And our education system should ensure that humanities and social science programs include some grounding in technology, so their graduates are ready to play that role, just as technology programs should include some grounding in the humanities and social sciences. You’re all needed.
Let’s look at just a few examples already in use or on the to-do list:
- An autonomous or semi-autonomous car
Semi-autonomous cars – not to mention, before long, fully autonomous ones – are promoted as safer than driver-controlled cars because they don’t get distracted and they have sensors to detect obstacles from all sides. But what if the car’s software has to choose between avoiding obstacles in two directions? What if veering right to avoid hitting someone on the car’s left would result in hitting someone else on the car’s right? Would the car be able to tell if the people were old or young? Agile or infirm? Walking fast or walking slowly? Related to the driver or not? Should any of that matter? What if there were two people on one side and one on the other – would that factor into the avoidance decision? This type of decision would be preprogrammed into the car’s software. Who determines which decision rules should be included? A philosopher who’s an expert in ethics? Maybe an ethics panel consisting of a philosopher, a lawyer, a religious leader, and a psychologist? Would the software engineering team make that decision? I’m not sure I’d want to be the one making it. And who gets sued by the person or people who are struck by the autonomous vehicle? This needs to be more than an engineering decision.
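To see where those ethical choices would physically live, here is a hypothetical sketch of what a preprogrammed avoidance rule can look like in code. Every function name, parameter, and tie-break below is invented for illustration – no real vehicle is programmed this way – but the point stands: somewhere, someone had to write lines like these.

```python
# Hypothetical avoidance rule for a self-driving car. The inputs and
# the tie-breaking defaults are invented; the point is that each branch
# is an ethical decision frozen into code long before the emergency.

def choose_maneuver(left_pedestrians: int, right_pedestrians: int) -> str:
    """Return the swerve direction when obstacles appear on both sides."""
    if left_pedestrians < right_pedestrians:
        return "swerve_left"       # fewer people on the left: hit fewer?
    if right_pedestrians < left_pedestrians:
        return "swerve_right"      # who decided "fewer" is the right rule?
    return "brake_straight"        # equal counts: who chose this default?

# A single comparison operator here encodes an entire moral framework.
print(choose_maneuver(2, 1))   # the "decision" is made in microseconds
```

Nothing in this sketch weighs age, agility, or anything else the paragraph above asks about – and adding those factors would mean a programmer writing them in as more `if` statements, which is exactly the problem.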
- An autonomous job-application/recruitment system
Most job recruitment begins by requesting that all interested parties complete an application form. Then: “we’ll contact you if we decide to pursue your application.” Among the hundreds of applications that an HR person or committee has to go through, there are usually some applicants everyone agrees are not a suitable fit, hopefully one or more who seem suitable, and then a number that are more difficult to judge from the application alone. In my experience, when a committee goes through a pile of applications, it really does take the varied experience and perspectives of committee members to collectively decide who should be shortlisted and why. Yet systems are in place now that have a computer program make those determinations, based on preprogrammed rules. This may save time and staff effort, but it may not produce the best result, either for the applicants or for the company. This approach sure doesn’t help with out-of-the-box thinking about new hires, or with promoting diversity. At the very least, a number of people with the relevant expertise with respect to the position and the company should be involved in developing the rules, which may differ for every position.
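A hypothetical sketch makes the worry concrete. The thresholds and field names below are all invented, but they show how each preprogrammed rule silently encodes a judgment a committee would normally debate, and how a promising but unconventional candidate never reaches human eyes.

```python
# Hypothetical rule-based résumé screener. Every threshold is invented;
# the point is that each one is a hiring-committee debate collapsed
# into a single hard-coded condition.

def shortlist(applicants):
    selected = []
    for a in applicants:
        # Rule 1: at least five years' experience. Why five and not four?
        # Rule 2: degree from a fixed, approved list. Who wrote the list?
        if a["years_experience"] >= 5 and a["degree"] in {"CS", "Engineering"}:
            selected.append(a["name"])
        # Anyone failing either rule is rejected here, automatically,
        # with no human ever reading the rest of the application.
    return selected

candidates = [
    {"name": "Ana", "years_experience": 6, "degree": "CS"},
    {"name": "Ben", "years_experience": 4, "degree": "Physics"},  # filtered out
]
print(shortlist(candidates))
```

Ben – four years of experience and a relevant but unlisted degree – is exactly the “difficult to judge” case where a committee’s varied perspectives earn their keep, and exactly the case a rule like this discards.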
- An autonomous policing application, using past case histories and machine-learning techniques to decide whether someone should be charged with a crime or targeted as a person of interest.
This is being talked about, and I find it pretty scary. Again the question arises: who decides how such a program will be structured and what kind of rules will be embedded in it – rules to be used in decision-making and possibly as a starting point for the machine to “learn” additional rules? If a database of existing cases is used as the starting point for machine learning, how does the system avoid “learning” false “knowledge”, such as the indefensible preponderance of racial minorities having been charged because of bias? The instincts and experience that police and detectives build up over many years, along with input from criminology experts, lawyers, social workers, and others, should be taken into consideration. How will we know if this has happened? How will we as a society ensure that it is happening? And who will get sued when someone is wrongly charged?
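How a system “learns” false knowledge from a biased database can be shown in a few lines. The records below are fabricated, and the frequency-counting “model” is a deliberately crude stand-in for real machine learning, but the mechanism is the same: if past charging decisions correlated with neighbourhood rather than behaviour, a model trained on them faithfully reproduces that correlation.

```python
from collections import Counter

# Fabricated case history in which neighbourhood "A" was over-policed:
# the "charged" labels reflect past bias, not actual behaviour.
history = [
    {"neighbourhood": "A", "charged": True},
    {"neighbourhood": "A", "charged": True},
    {"neighbourhood": "A", "charged": False},
    {"neighbourhood": "B", "charged": False},
    {"neighbourhood": "B", "charged": False},
    {"neighbourhood": "B", "charged": True},
]

def charge_rate(records, hood):
    """Fraction of past cases from this neighbourhood that led to a charge."""
    counts = Counter(r["charged"] for r in records if r["neighbourhood"] == hood)
    return counts[True] / (counts[True] + counts[False])

def predict_charge(hood):
    # A toy "model": recommend charging when the historical rate exceeds 50%.
    # It has learned nothing about the person -- only about past policing.
    return charge_rate(history, hood) > 0.5
```

The model recommends charging anyone from neighbourhood “A” and no one from “B”, purely because that is what happened before – the past bias comes out the other end dressed up as a prediction.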
The bottom line is that all citizens should have awareness of the potential, the power, and the pitfalls of technology that uses AI. We should be able to feel confident that, as our world shifts to handing even more control over to computers, those people designing, implementing, and using these autonomous systems have asked the right questions, involved the right people, and continue to have vigilant oversight in their use. These systems can be amazing – and an amazing power for improving our lives – but they are not infallible.
Remember this about computer programs, which are written by all too human programmers:
- They are not humans.
- They do not have empathy.
- They do not judge, but they do make decisions based on rules programmed into them that may have an unintended bias.
- They do not understand the intent behind public policy that has been interpreted and programmed into an application, which again may or may not have been fully captured by the programmer.
- They do not think for themselves. They cannot make an on-the-spot instinctive ethical decision; they just automatically do what they’ve been programmed to do in any given situation – including repeatedly pointing the nose of the airplane down towards the ground each time the pilot tries to make it go up.
I raise these concerns as a computer programmer who recognizes how easily design assumptions can be made with the best of intentions and yet generate unintended consequences, especially without adequate testing. And, of course, the most benevolent technology in the wrong hands …