However, it looks like the battle over whether bots should form a core part of a company's customer experience strategy has been comprehensively won. Business Insider's survey of 800 business decision-makers found that 80% already used or planned to employ chatbots by 2020, believing that AI has reached a stage where it can increasingly be used to drive engaging and human-like conversations to support businesses at scale. Studies regularly find that consumer receptiveness to bots is strong and the market is maturing rapidly. Even if good customer experience is still hard to find, examples are proliferating across financial services, retail, utilities, transport, education, healthcare, media, and law.
Spotlight on Financial Services
Early last year, Capital One launched "Eno", a virtual assistant to support basic banking transactions and credit card management. This adds to a growing roster of banking platforms introducing some element of CUI: Erica from Bank of America, the Facebook Messenger Amex bot for American Express, Royal Bank of Scotland with its virtual assistant Luvo… the list goes on.
As well as being a boost for 24/7 customer experience, a key factor in the race to "botify" financial services is the clear savings that are possible by replacing expensive resources like brokers, insurance agents or traders with algorithms. Codifying even complex rules and then training across large data is leading to digital agents that can outperform humans in speed, quality and cost across a series of customer interactions.
The reasons are compelling. Messaging platforms have customer adoption and usage levels that any website or application can only dream of, outperforming social media since late 2015 in a now ubiquitous reference chart. That and zero friction to learning how to interact (conversation was universally human the last time I checked) massively reduce barriers to use. I don't foresee a large market for "how to use our chatbot" video production anytime soon.
Another factor is the more dynamic nature of engagement that this form creates. It just feels more interactive and relational, even when the outcome is as transactional as filling out a form on a website. The shared conversation history and ability to pick up where you left off perversely makes the experience more human than web or call centres would, and the micro-interactions create a far more meaningful and accessible bank of data for analytics and CX improvements.
The upshot of all this is that the questions around bots now seem to be shifting from “should we?” to “how should we?” and – as if on cue – a raft of conversational businesses have emerged from the woodwork, eager to proclaim themselves masters of designing for bots. As these specialist agencies position their work as leading a new discipline, more established design agencies simply add chatbot services to their offering list and carry on as before.
As you'd expect, the reality is somewhere in the middle. Much of the expertise needed here is nothing more or less than good design, but there are important differences in both approach and technical execution when working with bots. A conversation is not like navigating an app menu, and messaging and voice platforms bring design considerations of their own.
A bot by any other name…
To start with, not all bots are created equal, so to steal shamelessly from a presentation Pete Trainor gave:
Bots = UI
Bots ≠ AI
Chatbots are really just a presentational form laid on top of any application: a banking service, a knowledge base, a holiday booking engine… anything. It just happens to be messaging based (voice or text). That's very different from Artificial Intelligence, where a machine is able to mimic key human cognitive capabilities for a given task, such as learning and problem solving.
The chatbot presentational form can be extremely valuable in itself (e.g. being able to set a kitchen timer by voice while cooking, hands-free, or typing a specific coffee order as if you were at the counter rather than working through a series of steps). And this is great. Chatbots that provide a new channel of interaction can be extremely useful and don't necessarily need any form of ‘intelligence’ to drive them. In fact, I'd hazard a guess that only a tiny proportion of companies delivering conversational experiences actually use any form of AI to power them… instead relying on keywords or a simple menu structure with options to determine how customers navigate through a process using messaging.
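To make the distinction concrete, here's a minimal sketch of the keyword-driven, non-AI approach described above. All of the routes, replies and the fallback menu are illustrative examples, not any real product's behaviour:

```python
# Minimal sketch of a keyword-routed (non-AI) chatbot.
# The intents and canned replies below are purely illustrative.
KEYWORD_ROUTES = {
    "balance": "Your current balance is £123.45.",
    "opening hours": "We're open 9am-5pm, Monday to Friday.",
    "timer": "OK, starting a kitchen timer for you.",
}

MENU = "You can ask about: " + ", ".join(KEYWORD_ROUTES)

def route(message: str) -> str:
    """Return a canned reply for the first matching keyword,
    or fall back to re-presenting the menu of options."""
    text = message.lower()
    for keyword, reply in KEYWORD_ROUTES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't catch that. " + MENU
```

There is no language understanding here at all, just substring matching plus a menu to fall back on, yet for a tightly scoped service this pattern can deliver a perfectly usable conversational experience.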
However, the real power of chatbots comes when you combine the conversational form of interaction with features - such as natural language processing and machine learning - that make up Artificial Intelligence. To add my own equation:
Bots + AI = where the magic happens
Within a relatively short period of time, creating online experiences (including those accessed through conversational interfaces) without having AI baked in will be as short-sighted as ignoring smartphone customers today. However, we're definitely not there yet, and with good reason. Working with current AI technologies and chatbots to support customer experience stretches the definition of intelligence pretty far. The tools are available but extremely immature, and changing at pace as the ecosystem and platforms try to keep up with customer demand.
This is the exciting time we're in, and this is the area where my interest lies. Taking the shiny technology of AI and conversational interfaces and watching it succeed or fail when it comes into contact with messy human behaviour. For most scenarios we don't need to create an experience where the bot appears sentient, but we do need the customer experience to be compelling and effective, and that means good conversational design.
At a recent conference in the US, I heard the team at IBM compare the state of current conversational capability to that of late ‘90s websites… a wild west of design with few rules and little adherence to those conventions that do exist. So what does it mean to create good design for conversations involving intelligent bots?
Designing for intelligent agents
The first challenge is defining what we mean by "design" in a scenario where many of our previous landmarks (brand identity, graphical style, on-page hierarchy of information, SEO, etc.) are conspicuously absent. With messaging platforms the medium is literally the message. You can get a long way just by adhering to general design principles and those aimed at universal design.
However, there are some specific challenges when designing for messaging platforms and voice. To help when designing conversational experiences I've collated six questions it can be helpful to ask. They are by no means comprehensive, but they will at least help you avoid some of the common pitfalls when introducing a chatbot into the customer experience mix.
Is this a good idea? No really, is it a good idea?
There are many great reasons for applying a conversational interface to an online experience, and a lot of terrible ones. Make sure that the form of interaction and/or the addition of intelligence through cognitive services (e.g. natural language processing, image recognition/processing) adds real value for the customer. I've seen too many travel bots where the need to iterate, change parameters and flex options makes the conversational experience far more complicated and frustrating than the existing interface with its sliders, lists and date pickers. There are way too many cases of "throw a bot at it and see if it sticks", and while this is an inevitable stage in the technology maturing, the mistakes don't have to be made by you.
What does your bot want to be when it grows up?
With a little mission statement for your chatbot it's much easier to derive a discrete set of use cases and start small, rather than setting expectations far too high. Then focus on communicating those use cases clearly to the customer at the start. Depending on the channel your bot works on there are often multiple options for this – persistent menus, initial intro text, example choice buttons etc. Putting these in place helps frame the conversation customers have with the bot and sets it up for success.
Having clear purpose also helps prioritise any backlog for future development. The nice thing about conversational interfaces is that people can always tell you directly what they do want the bot to be able to do.
Will it talk too much?
"Chatbot" is an unfortunate term, as it sets the expectation that bots should be the protagonists when in reality they are nothing more than sidekicks helping the real hero of our story – the customer. We don't want our bots to chat, we want them to understand and do things on our behalf. That means far more listening than talking. Of course, this can only be a principle rather than a hard-and-fast rule, but if the bot is more interested in the sound of its own voice than in getting what it needs from the customer to help, that's a warning sign.
Typical messaging interfaces are designed for short, snappy exchanges of 2-3 lines maximum, and this pressure is amplified with voice as the cognitive load of simultaneously listening and processing increases. That means we need to be really careful about how we break up information, what's important to say, and how we balance brand and tone of voice with the need for efficiency. There's a growing body of evidence around optimal message length for different channels (such as Messenger, Slack, SMS, Skype, Amazon Echo etc.), but until those findings stabilise, a rough rule of thumb is to stay under Tweet length without a good reason to go beyond it.
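As a practical illustration of that rule of thumb, a bot's reply pipeline can break long responses into a sequence of short messages rather than one wall of text. This is a hypothetical sketch (the 280-character default is just the Tweet-length heuristic above, and channels impose their own limits):

```python
def chunk_message(text: str, limit: int = 280) -> list[str]:
    """Split a long reply into separate messages at word boundaries,
    each no longer than `limit` characters (default: Tweet length)."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # A single word longer than `limit` is still sent whole.
            current = word
    if current:
        chunks.append(current)
    return chunks
```

Sending each chunk as its own message bubble preserves the short, turn-based rhythm messaging platforms are built around, rather than forcing the customer to scroll one long monologue.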
Does it use the right language?
Related to the last item… is the language the bot is using relevant to the audience? Is it really clear? Is it relevant to the brand? Is the language engaging enough to carry the interaction through to completion? Wording is more critical here than in any other channel (including call centre scripts) because this is the only interface – copy is the designer's tool for CUI.
Is it opening up or closing down? One simple example is that quick option choices or buttons give the impression of closing down, a useful psychological mechanism to support a linear process even if the bot is still able to cope with wider responses.
We're still some distance from bots exhibiting emotional intelligence and personalising language per customer in real time the way a good human agent would. That means our best line of defence against poor experience is testing and retesting the language across use cases and making sure it holds up. That includes manners. As with any child, how well we've taught it manners, turn-taking and respect reflects on the parent.
Another perspective on this question is how well the bot understands customer language. This could be as crude as multilingual detection and real-time translation, but just as importantly, does it understand shorthand, slang, and different dialects across regions and social groups? Layer onto this the need for bots to accept relevant emoji, the fastest-growing language (for example, Capital One's Eno returns your current balance if you send it the 💰 moneybag emoji), and the need is clear for training on a wide enough set of data to represent the amorphous and evolving nature of language.
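Emoji handling can be as simple as a lookup that maps recognised symbols onto existing intents before any NLP runs. The sketch below is illustrative only, in the spirit of the Eno 💰 example – the mapping and intent names are invented:

```python
# Hypothetical mapping of emoji onto existing bot intents.
EMOJI_INTENTS = {
    "💰": "check_balance",
    "💳": "card_status",
    "👍": "confirm",
}

def detect_emoji_intent(message: str) -> "str | None":
    """Return the intent for the first recognised emoji in the
    message, or None so the message falls through to normal NLP."""
    for emoji, intent in EMOJI_INTENTS.items():
        if emoji in message:
            return intent
    return None
```

The nice property of this pre-pass is that emoji shortcuts stay cheap and predictable while the messier job of understanding slang and dialect is left to the trained language model behind it.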
How can it fail better?
When you interact with a chatbot and things are going well, it's easy to think you're dealing with an intelligent wonder. It's what happens when bots aren't getting it that makes or breaks the experience. Therefore how you plan for failure is one of the most important parts of designing a bot. This can be as simple as being more tentative in responses and ensuring that the tone clearly positions the bot, not the customer, as at fault for not understanding. Every bot platform comes with some form of confidence metrics that make it possible to adjust the way the conversation progresses based on how likely the bot is to be right. That means there's no excuse for not supporting people better when the bot inevitably gets it wrong. Default options, nudges towards a possible path forward, backing up in the conversation, starting over, and escalating or handing the conversation off to a human are all mechanisms that may be appropriate to avoid customer frustration or reputational damage. If someone gets stuck and the bot simply reinforces that sense of being trapped and not understood, the result is almost always irreparable.
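Those confidence metrics translate naturally into tiered response strategies. Here's a hedged sketch of the idea – the thresholds, wording and escalation rule are all illustrative and would need tuning per platform:

```python
def respond(intent: str, confidence: float, turn_failures: int) -> str:
    """Pick a reply strategy from the NLU confidence score and the
    number of consecutive misses the customer has already hit.
    Thresholds (0.8 / 0.5) and copy are illustrative only."""
    if confidence >= 0.8:
        return f"[handle intent: {intent}]"
    if confidence >= 0.5:
        # Tentative tone: the bot owns the possible misunderstanding.
        return f"I think you're asking about {intent} - did I get that right?"
    if turn_failures >= 2:
        # Don't trap people: escalate after repeated misses.
        return "I'm clearly not getting this. Let me connect you to a colleague."
    return ("Sorry, that's my fault - I didn't understand. "
            "You can say 'start over' or pick an option below.")
```

The key design choice is the last branch: tracking consecutive failures means the bot never keeps a stuck customer looping forever, which is exactly the "trapped and not understood" outcome to avoid.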
One of the best long-term solutions here is analytics data. Unlike many web click metrics, the natural language in chatbot data means you're guessing less about what a user really wanted to achieve when things go wrong. That means you can drive improvements and features directly from the data, including using the failure conversations as training data to ratchet up the bot's performance in future. One word of caution here too… the tools are pretty immature at the moment, so it's important to work with people who take measurement for conversational interfaces seriously, and who are committed to training and improving your bot for the long term. It's going to fail, and the only way to deal with that is by incrementally making it better over time.
Is it being dumb or creepy?
If I start a conversation with someone in person, I have certain advantages and insights even if it's the first time I've ever spoken with them. From a quick glance I would probably be able to assess some obvious features such as gender, likely ethnicity and rough age, as well as other social and demographic cues. As soon as I started interacting I'd be able to better gauge their current emotional state and further refine those initial assumptions. Imperfect, sure, but we use these cues unconsciously in everyday life for a simple reason: they're better than the alternative, which is to ignore context and risk getting things very wrong. At best I might come across as socially awkward; at worst that failure to appreciate the right context could provoke violence!
With chatbots we are clearly operating in a different landscape, but many of the same or similar insights may be available to us through data points (e.g. profile information, other online activity, CRM data, the conversational history to date etc.) and we ignore this at our peril. However, it's clear there is a fine line we need to walk. Generally as humans we intuitively know the right amount of context to bring to any conversation. We take it for granted as a social skill. In the context of chatbots then, it's frustrating when we have to somehow prove or reiterate who we are when this could be inferred or applied. Equally, it's creepy or uncomfortable when a bot seems to know or wants to know more than it should about us.
Therefore, the crucial task in design around context is to judge this well for the bot's purpose and channel of communication (what may seem fine via Facebook Messenger may be uncomfortable spoken aloud via Amazon Echo), and do this in a way that meets expectations and regulatory constraints. In light of the looming changes around GDPR this will likely be a hot topic for future discussion.
While these six questions are only the start, they should help frame the way we think about designing chatbots with intelligence and – hopefully – a little realism.
I've found the following to be helpful when thinking about CUI and intelligent agents…
What a conversation really is: Paul Pangaro's "Conversation is more than interface" video and the accompanying slides (PDF).
Why chatbots fail:
CUI design principles: