CX Spotlight by Fifth Quadrant

Should we be making chatbots that will pass the Turing Test?

You are probably familiar with the Turing Test, but in case you aren't, here is a quick refresher. Proposed by Alan Turing in 1950, the test asks whether an artificial intelligence (AI) can successfully trick a human into believing that it, too, is human. It is typically administered by having judges communicate via text with a mix of human and AI participants and then try to correctly identify the AIs among them.

The imitation game

Since the 1950s, many chatbots have tried and failed to pass the test. In 1990, Hugh Loebner, in conjunction with the Cambridge Center for Behavioral Studies, established the "Loebner Prize". It offers US$100,000 to any AI that can fool the judges. To date, the grand prize has not been won, but lesser prizes are given out each year to the chatbots that come the closest.

Recent years have seen phenomenal advances in Artificial Narrow Intelligence (ANI) - AI that is very good at one specific task. For example, ANIs have beaten us at chess, poker, Go, and Ms. Pac-Man. Beyond games, ANIs have also shown us up in law, with one program recently beating human lawyers at spotting errors in legal documents.

But despite these advances, passing the Turing Test still eludes AI. The main reason is that convincingly playing the role of a human requires chatbots to take the next step up in AI evolution, to Artificial General Intelligence (AGI). We humans take it for granted, but holding a natural, human conversation is actually a very tricky thing. Backing all our interactions is the wealth of context and understanding we have accumulated over years of living. For an AGI to "play human", it has to mimic emotion and human quirks such as humour, while also hiding some of its computational strengths.

Computer researchers aren't exactly sure when AIs will acquire AGI - intelligence that allows them to equal or better humans at any task - with predictions ranging from about 2030 all the way up to never. Famed futurist and inventor Ray Kurzweil is the most bullish of all, claiming the Turing Test will be passed by 2029. Beyond that, he sees AI reaching the "singularity" - the moment when Artificial Super Intelligence (ASI) starts doing things we can't even imagine - by 2045.

In the early days of the Loebner Prize, chatbots were given specific topics to talk about for five minutes, which is something an ANI should be able to handle. However, after the chatbots came close to passing, the rules of the competition were expanded to allow conversations about any topic for 25 minutes. This requires something closer to AGI and, for now, remains out of reach.

Here is a short conversation I had with Mitsuku, the 2017 winner of the prize for the most human-seeming chatbot. I was already suspicious by the third statement, as it is a little too formal and polite for the context:

 

[Screenshot: conversation with Mitsuku, part 1]

 

A few seconds later, Mitsuku's cover was blown when it asked me to clarify what I meant by "that" - something any human would have easily understood. I thought I'd give Mitsuku a shot at hiding its strengths, so I asked it to do a tricky sum. The responses further confirmed that I was dealing with a robot. Few humans would be able to multiply numbers of that size instantly, and even fewer would think it's cool:

 

[Screenshot: conversation with Mitsuku, part 2]
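For a sense of just how lopsided this particular strength is, here is a minimal Python sketch (the numbers are hypothetical - not the ones from my actual conversation) showing that multiplying large numbers takes a machine essentially no time at all:

```python
import time

# Hypothetical large numbers, for illustration only - not the ones
# from the actual Mitsuku conversation.
a = 48_723_915
b = 67_301_288

start = time.perf_counter()
product = a * b  # Python integers have arbitrary precision, so no overflow
elapsed = time.perf_counter() - start

print(f"{a} x {b} = {product}")
print(f"computed in {elapsed * 1_000_000:.2f} microseconds")
```

On ordinary hardware this prints a 16-digit product in a few microseconds. A chatbot trying to pass for human would have to deliberately slow down, or feign an inability to answer at all.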

 

Fifth Quadrant recently looked at how 2018 may be the year of the chatbot. Major tech companies, such as the so-called FAANG group (Facebook, Apple, Amazon, Netflix, Google), are investing heavily in the technology, and one of the key takeaways from our recently released 2018 Australian Contact Centre Benchmark Report is that almost a third (31%) of contact centres will invest in webchat this year. So should we be making chatbots that will pass the Turing Test? Or is it better to acknowledge they are machines right off the bat? These are questions AI developers, and now the brands that use chatbots, are having to consider.

More human than human

The reason chatbots are designed in our image comes down to understanding. For chatbots to be effective, it is crucial that they understand what we say to them and respond accordingly. But then the question is "how human is too human?"

In robotics, there is a concept known as the "uncanny valley": the idea that there is a relationship between the degree to which an object resembles a human and our emotional response to that object. In other words, if something is a little humanlike, with eyes and a face of sorts, it's cute. But once it gets too close to looking actually human, it repels us. Here is an example:

[Image: Wall-E - "Awwww, cute!"]

[Image: Repliee Q2 android - "Kill it with fire!"]

 

Perhaps the same concept can be applied to chatbots. We want a chatbot that can do our bidding, but at the same time, it is creepy for it to act "too human", and even worse if it tries to pass for human unbeknownst to us. After all, there are some things you might tell a robot that you wouldn't tell a human, and vice versa. Appealing to a robot's emotions, for example, might have disastrous results.

What about CX?

When it comes to Customer Experience (CX), it seems like a good idea to maintain transparency with your customers and let them know whether or not they are conversing with a bot. Advisory firm CEB, now part of Gartner, recently surveyed thousands of customers and found that there are six omnichannel factors that matter most to them. Here they are, first as ranked by what service executives predicted would be the order of importance:

  1. Channel Consistency
  2. Service Continuity
  3. Transparency
  4. Customer Recognition
  5. Proactivity
  6. Relationship History

Interestingly, the survey results from customers revealed a much different ranking, with transparency at the very top (by a margin of over 40 percentage points):

  1. Transparency (58.1%)
  2. Proactivity (15.3%)
  3. Service Continuity (9.8%)
  4. Relationship History (6.1%)
  5. Customer Recognition (5.4%)
  6. Channel Consistency (5.3%)

The moral of the story is this: if you lie to your customers or jerk them around, you run the risk of angering them or losing them forever. This suggests that all bot interactions should start with some kind of disclaimer stating whether or not the customer is dealing with a bot.
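As a concrete illustration, here is a minimal sketch of a webchat greeting that leads with a bot disclosure. The message wording and function names are hypothetical, not drawn from any particular platform:

```python
# A minimal bot-disclosure greeting. The message text and function
# names here are illustrative assumptions, not a real chat platform API.

BOT_DISCLOSURE = (
    "Hi! I'm a virtual assistant - a bot, not a human. "
    "I can answer common questions, or hand you over to a person at any time."
)

def start_conversation(send_message):
    """Open every chat with an explicit bot disclosure before anything else."""
    send_message(BOT_DISCLOSURE)

# Stand-in transport for demonstration; a real webchat client would
# supply its own send function.
start_conversation(print)
```

The point is less the code than the ordering: the disclosure is the very first message sent, before the customer has typed anything.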

Chatbots that augment rather than replace humans

With chatbots improving fast and seeing increased uptake in the contact centre, a clear strategy is emerging: have them augment, rather than replace, humans.

Last year, Fifth Quadrant attended an event at which Dr Nicola Millard, Head of Customer Insight & Futures at BT Global Services Innovation, spoke about the use of bots in the contact centre environment. Millard noted that large corporate clients often complain to her that call handling times for human agents increase after the adoption of chatbots. She believes this is actually a positive thing, as it indicates that the chatbots are taking care of the lower-level queries and handing off the more complex issues to the humans. The contact centre of the future is one that blends AI with more highly trained agents.
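That hand-off pattern is simple to express in code. Here is a minimal triage sketch (the intent names and the confidence threshold are hypothetical assumptions) in which the bot resolves routine queries and routes everything else to a human agent:

```python
# Hypothetical triage logic for the "augment, not replace" pattern.
# The intent names and the 0.7 threshold are illustrative assumptions.

SIMPLE_INTENTS = {"opening_hours", "reset_password", "track_order"}

def route(intent: str, confidence: float) -> str:
    """Return who should handle the query: the bot or a human agent."""
    # If the intent classifier isn't confident, don't let the bot guess.
    if confidence < 0.7:
        return "human_agent"
    # The bot handles routine queries; everything else escalates.
    return "bot" if intent in SIMPLE_INTENTS else "human_agent"

print(route("track_order", 0.92))      # -> bot
print(route("billing_dispute", 0.88))  # -> human_agent
```

Under this kind of routing, the calls that do reach humans are, by construction, the harder ones - which is exactly why average handling time can go up even as overall service improves.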

To summarise: we want chatbots that are smart enough to understand us, but from a CX point of view it is likely neither necessary nor desirable that they be able to pass the Turing Test. It's probably best that chatbots openly admit they aren't human in order to maintain trust and transparency. With high-profile hacks, breaches, and privacy compromises taking place worldwide, trust and transparency are more important components of CX than ever.

 


Written by Stefan Kostarelis

Stefan is the Content Manager at a Sydney-based investor relations firm, and a freelance writer whose work has appeared in Techly, Paste Magazine, Lost at E Minor and Tech Invest.

