Talk of machines replacing human thinking has been going on for a long time despite the enduring robustness of the “Turing test”, named after the British scientific genius, Alan Turing. Far from disappearing, such talk has gained ascendancy with the increase in computing power and access to “big data.” What are we to make of this discussion?
Let’s hear from a renowned expert: Leslie Valiant, the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University. Valiant is also a winner of the Turing Award, the most prestigious prize in computer science. I will summarize Professor Valiant’s views on the question of machine intelligence based on my reading of his highly informative book (Probably Approximately Correct, Basic Books, 2013).
The most audacious view among Artificial Intelligence (AI) enthusiasts is that a machine (a computer, robot, or the like), once it has been taught to learn, will simply be executing some fixed program. Hence, the argument continues, a programmer could create such a learned program directly, a kind of “meta learning program.” In other words, push the right buttons on this meta program and you get the desired learning outcomes, no matter how complex the task.
What does Professor Valiant think? One word describes his response: spurious. The argument of the AI camp will not work because, to quote him, “there is no alternative way known for obtaining a program learned, with all the associated parameter values, other than by a learning process.” In other words, the very concept of a ready-made learned program makes a mockery of the learning process, which, by its nature, is messy.
Humans are uniquely wired to adapt to the messiness of the real world. Our brain architecture ensures that. Add in evolution over eons, and we are what we are: “experts” in learning and adaptation by every available means: trial and error, learning by doing, learning by socializing, and so on. Valiant would argue that Homo sapiens wouldn’t have survived on this planet for so long if it were otherwise. He has a fancier term for this adaptive learning process; he calls it PAC, for Probably Approximately Correct (the title of his book).
With PAC as our calling card, Valiant is adamant that no machine can replace human learning because this is the gift of evolution, forged over billions of years. PAC learning doesn’t always give the right answers, of course, but when the stakes are high, we quickly learn from it, adapt and move on. To quote the professor: “The artificial intelligence challenge has always looked enticing as a scientific problem because it attempts only to emulate systems that already exist in nature. It is made difficult, I believe, by the fact that the systems that exist are the results of learning over billions of years from experience, all traces of which may have vanished.”
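The PAC idea can be made concrete with a toy example. The sketch below is not from Valiant’s book; it is a minimal illustration, assuming a one-dimensional threshold concept and a uniform distribution of examples. A learner that simply returns the smallest positively labeled sample it has seen is, with high probability, only approximately wrong, and the more examples it sees, the smaller its likely error: “probably approximately correct.”

```python
import random

def pac_learn_threshold(m, true_t=0.5, seed=0):
    """Learn the concept c(x) = [x >= true_t] from m uniform random examples.

    Hypothesis: the smallest positively labeled sample seen (1.0 if none).
    """
    rng = random.Random(seed)
    samples = [rng.random() for _ in range(m)]
    positives = [x for x in samples if x >= true_t]
    return min(positives) if positives else 1.0

def error(h, true_t=0.5):
    # The hypothesis disagrees with the target exactly on [true_t, h),
    # so under the uniform distribution the error equals h - true_t.
    return h - true_t

# More data -> smaller error, with high probability.
for m in (10, 100, 1000):
    h = pac_learn_threshold(m)
    print(f"m={m:5d}  learned threshold={h:.4f}  error={error(h):.4f}")
```

The learner never recovers the true threshold exactly; it only gets probably approximately correct, which is the whole point of the framework.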
Comforting thoughts. Still, fans of AI would argue that there have been tremendous strides in machine learning in areas such as language processing and computer vision. So, who is right?
There is no contradiction, according to Valiant. Machines excel in some domains but not in others. Indeed, Turing addressed this question by isolating four domains where he believed computers would easily outperform humans: game playing (chess is the prime example), cryptography, translation of languages and mathematical computations. In Turing’s view, computers excel in these tasks because they require “little contact with the outside world.” He singled out cryptography as the field he considered to be “most rewarding” for computer science. He was highly prescient, as all of us who operate online bank accounts can appreciate. He also thought that learning of languages would be a most challenging task for computers since it is the most human of activities. Valiant agrees, and adds that the same argument applies to any task that requires the accumulation of “common sense knowledge,” like that gained from experience.

Crucially, evolution is on our side as far as the design of intelligence is concerned. We may be inferior to computers or robots in tasks that require mind-boggling memory and grunt work, but we are smart enough to delegate those to the machines. “It gives us hope that we may endow robots with intelligence superior to ours but only in directions that are useful and not threatening to us …There is no good reason to want to make robots that are exactly like humans, [assuming we can].” He ends with a punchline: “We have limited fear of domesticated animals. We do not necessarily have to fear intelligent robots either.” We shall see.
The Turing test was introduced by the British computer scientist Alan Turing (1912–1954) in his 1950 paper, “Computing Machinery and Intelligence,” written while he was working at the University of Manchester. It is a test for recognizing whether a machine can be considered to think. A simplified definition of the test is as follows: a machine passes the Turing test if a person, conversing with it via remote electronic interactions, cannot distinguish it from a person. So far, AI researchers have not succeeded in building machines that can pass the test, or in showing it to be irrelevant.
The late distinguished physicist Stephen Hawking presented an alternative view of AI. See, for example, http://theconversation.com/stephen-hawking-warned-about-the-perils-of-artificial-intelligence-yet-ai-gave-him-a-voice-93416