This week on Data Skeptic, we begin the episode with a skit performance to introduce the topic of this show: The Imitation Game. We open with a scene in the distant future. The year is 2027, and a company called Shamony is announcing its new product, Ada, the most advanced artificial intelligence agent. To prove Ada's superiority, the lead scientist announces that it will take the Turing Test that Alan Turing proposed in 1950. Along the way, we introduce the "objections" Turing outlined in his famous paper, "Computing Machinery and Intelligence."
Most of us have probably heard of the Turing Test. Suppose an interrogator is communicating by keyboard with entities that are hidden from view. Some are people and some are computers. The interrogator has to guess which is which. If a computer manages to fool the interrogator, it can be said to think.
At least that's the way the Turing Test is usually described. But Turing himself didn't call it a test; he called it the Imitation Game, which involves a human and a machine each trying to pass as human. While most people associate the game with testing the limits of machines, Turing was actually grappling with the question: Can a machine think? Midway through his paper, Turing wrote, "The original question, 'Can machines think?,' I believe to be too meaningless to deserve discussion." He found the question too murky to answer directly, so he proposed replacing it with a concrete one: can a machine win the Imitation Game?
The Imitation Game involves a judge engaging in two conversations - one with a human acting naturally and one with a machine pretending to be human. But what does it mean to imitate? To explore that question, we invited a professional performer and improv instructor, Holly Laurent, on our show.
The third segment of this episode is an interview with Peter Clark from the Allen Institute for Artificial Intelligence about efforts to create question answering software and the corpus that they've put together to further these efforts. What is a question answering algorithm? The basic task is, given a question, try to find an answer. Systems face different types of questions. Some questions come with a passage of text, and the system's job is to locate the answer within a large corpus; such questions assume the answer is already written down somewhere in that corpus. But Peter and his colleagues at the Allen Institute are focusing on questions whose answers are not explicitly written down anywhere; instead, the system has to combine a couple of bits of information to come up with the answer.
The type of challenge question Peter is interested in requires knowledge that's difficult to pin down. In other words, general common-sense knowledge about the world is needed, combined with the information in the question. An example of a challenge question is posed in a multiple-choice format: Which property of a material can be determined just by looking at it? (A) luster, (B) mass, (C) weight, or (D) hardness. The answer here is luster. Here's another example question: Which item below is not made from a material grown in nature? (A) a cotton shirt, (B) a wooden chair, (C) a plastic spoon, (D) a grass basket.
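To make the task concrete, here is a toy sketch of one simple baseline for multiple-choice question answering: score each answer choice by its word overlap with sentences retrieved from a corpus of facts. This is only an illustration, not the Allen Institute's actual system, and the mini-corpus below is invented for the example.

```python
import re

def tokenize(text):
    """Lowercase the text and return its set of alphabetic tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score_choice(question, choice, corpus):
    """Best word overlap between (question + choice) and any corpus sentence."""
    query = tokenize(question) | tokenize(choice)
    return max(len(query & tokenize(sentence)) for sentence in corpus)

def answer(question, choices, corpus):
    """Return the choice with the highest overlap score."""
    return max(choices, key=lambda c: score_choice(question, c, corpus))

# A hypothetical mini-corpus of common-sense facts (made up for illustration).
corpus = [
    "luster is a property of a material you can see just by looking at it",
    "mass and weight must be measured with a scale or balance",
    "hardness is determined by scratching or pressing the material",
]

question = "Which property of a material can be determined just by looking at it?"
choices = ["luster", "mass", "weight", "hardness"]
print(answer(question, choices, corpus))  # picks "luster" with this corpus
```

Note how brittle this baseline is: it works only when the corpus happens to state the answer in words close to the question's. The challenge questions Peter describes are designed so that no single retrieved sentence suffices, which is exactly why combining several bits of knowledge is required.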
Finding the answers to such questions taps into a common-sense picture of the world that even a child possesses. And it is this common sense that the AI behind voice assistants, chatbots, and translation software lacks.
Kyle will be giving a talk on Artificial Intelligence at SkepticalCon in Berkeley, CA on Sunday June 10th.
Kyle will be giving a talk on AI, Machine Learning, and the Blockchain at the University of Chicago Gleacher Center on Saturday May 19th. Get tickets here.