Nobody likes getting bamboozled.
In the world of customer experience (CX), trust is a major currency and is frequently ranked as the most important attribute a brand can cultivate in its customers.
When your customers trust you, they are more likely to buy your products or services, refer you to a friend, share information with you, and even forgive you after the occasional PR nightmare.
The question of transparency and trust is an interesting one when applied to the world of chatbots. As chatbots become more human-like, companies and bot developers will have to decide whether or not the bot “discloses” that it is, in fact, a robot.
Speaking from my own experience, I’ve recently had several conversations on a website’s live chat feature in which I couldn’t tell if I was dealing with a human or not. In these cases, my queries were relatively simple and my problem was solved. No harm, no foul, right? Perhaps not, as after the exchange, it kind of bothered me that I didn’t know who or what was on the other end. In short, I felt bamboozled. I would have been much more comfortable if the conversation had begun with some kind of bot disclosure.
As we’ve discussed before, the state of California has taken a step in this direction, passing a law this year which makes bot disclosure mandatory. The Electronic Frontier Foundation opposed the original bill on ethical grounds, and worked with the state to revise it to focus on bots used to influence commercial transactions or politics.
What does good bot disclosure look like?
My view on this is that chatbots should be human-like rather than faux-human. Yes, we want chatbots to understand us and help us, but they need to do this without tricking us.
The optimal place for bot disclosure is in the greeting. Having a human name is fine, and perhaps even preferable, as we’d rather start a conversation with ‘Jamie’ than the ‘JCL-5000’. But what comes next is crucial, and here is a good example:
This greeting checks all the boxes. The chatbot has a relatable name, albeit a cheesy one, immediately states that it is a bot, conveys what it can do, and then offers the user an initial set of options that create a clear pathway to help. It is even able to inject a little humour into those options, which further endears it to the user.
Although chatbots have been around since the 1950s, they have only really taken off as a business tool in the past few years. The tipping point came in 2016, when Facebook launched its bot messaging platform. Two years later, Facebook announced that the platform had been used to create 300,000 bots facilitating 8 billion messages a day between businesses and their customers. Given the growth trajectory, that number is likely much higher now.
So why the appeal? Put simply, chatbots save businesses and customers time and money. As chatbots’ killer application is to handle simple queries, they tend to be most useful in customer service-heavy verticals such as financial services, retail, and travel.
As a chatbot’s primary function is to serve and assist people, it stands to reason that it should mimic human behaviours. But chatbots cannot recreate truly natural human interactions, so once they venture off-script they can begin to generate nonsensical, frustrating and even offensive answers, particularly bots that are still in a learning phase. Bot disclosure not only improves transparency, but also sets realistic expectations for the capabilities of a chatbot.
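The pattern described above can be sketched in a few lines of code: disclose up front, answer only what is scripted, and escalate the moment a query falls outside that script rather than improvising. This is a minimal illustration, not any vendor’s actual implementation; the bot name, intents, and messages are all assumptions.

```python
# Illustrative sketch of disclosure-first, escalate-when-off-script design.
# All names ("Jamie", the intents, the handover line) are hypothetical.

SCRIPTED_ANSWERS = {
    "reset password": "No problem! I've emailed you a password-reset link.",
    "opening hours": "We're open 9am-5pm, Monday to Friday.",
}

def greet() -> str:
    # Disclosure happens in the greeting, before any other interaction.
    return ("Hi, I'm Jamie, a chatbot. I can help with password resets "
            "and opening hours.")

def respond(user_message: str) -> str:
    query = user_message.lower().strip()
    for intent, answer in SCRIPTED_ANSWERS.items():
        if intent in query:
            return answer
    # Off-script: hand over to a human instead of generating a
    # nonsensical or frustrating answer.
    return "Hang on, I don't get you. Let me go get a human for you."
```

The key design choice is that the fallback path is explicit and honest: the bot never pretends to understand, which is exactly the expectation-setting the disclosure is meant to support.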
This idea of getting upgraded to a human for more complex tasks is becoming more prevalent. Conversational marketing platform company Drift found in its 2018 State of Chatbots Report that just over a third (34%) of respondents would like to be connected to a human if a chatbot doesn’t have all the answers. The top uses included getting quick answers (37%), resolving a complaint or problem (35%), and getting detailed answers or explanations (34%).
Putting it all together
Open Universities Australia has successfully combined the concepts of bot disclosure, humour, and handing over to a real human.
"From day one we've been very deliberate about the fact that this is not a human, but we have a bit of fun with it -- if it doesn't understand something it'll say 'hang on, I don't get you, let me go get a human for you'," head of digital at Open Universities Australia Lyndon Summers said this month, as reported by ZDNet.
The chatbot, StudyBot, provides students with information, solves simple problems such as password resets, and passes basic authentication to call centre staff. Like most chatbots, StudyBot was trained using the transcripts from call centre conversations between students and human agents.
Summers said that the end result is a chatbot that scores only a few points behind call centre agents when it comes to user satisfaction.
"If you set the right expectation then its okay, but if you dress it up .... and you are led down this path of thinking you are talking to a person, and it's not, that's only going to lead to disaster," he said.
To better understand chatbot use among Australian consumers, Fifth Quadrant has launched a new research study looking specifically at consumer interactions with chatbots in retail. The research, which has been sponsored by LogMeIn, is expected to be released in early July, with preliminary findings and insights to be shared during a 27 June webinar. The study is being led by Agi Metcalfe, Account Director.
While it is clear that chatbots are useful in providing real-time solutions to simple problems, in their current state they should first disclose that they are not human, and then be programmed to hand over to a human immediately when a query falls outside their realm of expertise. From a CX point of view, transparency trumps any benefit you would reap from deceiving your customers. Why lie in the first place? People don’t really mind talking to robots at this point, and in some cases they might even prefer it.
Are you looking to integrate chatbots into your business or wanting to improve your CX? Contact us today.