Perspectives on AI and the Future of UX
Posted by AnswerLab Research on Sep 6, 2018
Last week, AnswerLab hosted three experts in artificial intelligence and conversation design at our San Francisco office to share their take on the future of AI and user experience. Coming from a variety of backgrounds, ranging from journalism and conversation design to engineering and development, our three panelists brought perspectives on topics we as researchers aren't always experts in. Despite their differences, we found they aligned on a few key principles that drive successful AI. In case you missed it, you can watch the full panel here or read our key takeaways below.
Fail fast and gracefully
"Humans are less forgiving with chatbots than they are with humans. So if the bot responds with ‘I’m sorry I don’t understand’ then we need to offer other outlets, whether that’s presenting self-service tools, offering a hand off to a human agent, or simply letting them know that’s not a related domain or task."
- Yizel Vizcarra, Conversation Engineer, Autodesk
When it comes to chatbots, users have less patience and higher expectations. If your chatbot doesn’t understand a user’s request or can’t answer the question, don’t get stuck in an error loop by repeating the same error message over and over. After a few failed attempts, offer a friendly apology, acknowledge that the bot can’t help with that kind of request yet, and give the user other ways to continue the conversation and solve their problem. Creating a route for the user to get what they need, even if it means passing them off to a human customer service agent, not only increases overall customer satisfaction but also leaves the user more open to future interactions with the chatbot for other tasks.
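To make the pattern concrete, here is a minimal sketch of that "fail gracefully" flow in Python. The class and method names are hypothetical, not from any panelist's product; the point is simply to cap repeated "I don't understand" replies and then offer other outlets, as Yizel describes above.

```python
# Hypothetical sketch: cap repeated error messages, then escalate gracefully.
MAX_FAILED_ATTEMPTS = 2

class FallbackHandler:
    def __init__(self):
        self.failed_attempts = 0

    def handle_unrecognized(self, user_message: str) -> str:
        """Return a response when the bot can't match the user's intent."""
        self.failed_attempts += 1

        if self.failed_attempts <= MAX_FAILED_ATTEMPTS:
            # First misses: apologize briefly and invite a rephrase.
            return "Sorry, I didn't catch that. Could you rephrase your question?"

        # After repeated failures: acknowledge the limit and offer other outlets
        # (self-service tools, a human agent) instead of looping on the same error.
        return (
            "I'm sorry, I can't help with that yet. "
            "You can browse our help center, or I can connect you with a support agent."
        )

    def reset(self):
        """Call this once an intent is successfully recognized."""
        self.failed_attempts = 0
```

In practice, the escalation message would link to real self-service tools or trigger a live handoff; the key design choice is that the conversation always has somewhere to go after the bot admits it can't help.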
Be willing to invest time and resources in AI
"I sell AI as an enterprise solution, and my biggest challenge is that AI is research. Everything you do in AI will always be an experiment and they will mostly fail…Changing the mindset of first, the executive level, and then development and engineering to accept that it’s built to fail and you’re going to be building experiments constantly is a difficult shift for organizations to make."
- Noelle LaCharite, Principal Program Manager, Developer Experience Azure Applied AI and Cognitive Services, Microsoft
AI is still early in its development, and companies have to understand that building it is very experimental. You could spend six months building a model without it ever producing anything that turns into profit. But that doesn’t mean you should give up altogether. Building AI is a process, and if your team is willing to invest in it, it could make a big difference down the road. Get on board now before it’s too late.
Human context is an ever-present challenge
"You’re building for voice, but you have to build for all of these different things from tiny screens to huge screens in many different contexts. Knowing the context in which your users are going to be experiencing your skill in is actually very critical."
- Noelle LaCharite, Principal Program Manager, Developer Experience Azure Applied AI and Cognitive Services, Microsoft
Noelle built an Alexa skill called Mindfulness that lets users take a minute to stop, breathe, and receive affirmations through the device. She built it when the Echo was the only way to access it, but when Alexa for the Car came out, she was told her skill was broken: its very first interaction tells the user to close their eyes. As new ways to access these skills emerge, we have to respond with a deep understanding of the new context and continue to adapt and build with many situations in mind.
"You might feel super comfortable at home talking aloud to a speaker because no one’s listening, but the minute you go to a public place, it’s going to be much more uncomfortable for you to pick up your phone and have a voice interaction. It’s important to design for the mode of communication, but it’s also important to think about the type of communication you’re having based on your communication."
- Eva Steele-Saccio, Conversation Designer, Google
While we often think about designing for multi-modal experiences, we also need to think about location-based context when it comes to voice. If you’re in the car, your interactions will be very different from those in a public place or in your home. We have the data to understand where users are and how they’re accessing these devices. Now it’s time to use that information to build a better and more inclusive experience.
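As a rough illustration of that idea, the sketch below adapts a response to the user's context. The context fields and categories here are hypothetical, not tied to any specific assistant platform; real products would draw these signals from their own device and location data.

```python
# Hypothetical sketch: pick output channels and verbosity based on context.
from dataclasses import dataclass

@dataclass
class InteractionContext:
    device: str       # e.g. "smart_speaker", "car", "phone"
    location: str     # e.g. "home", "public", "in_transit"
    has_screen: bool

def choose_response_style(ctx: InteractionContext) -> dict:
    """Decide how a voice experience should respond in the current context."""
    if ctx.device == "car":
        # Hands and eyes are busy: keep it short, voice only, no visual prompts.
        return {"voice": True, "screen": False, "verbosity": "brief"}
    if ctx.location == "public":
        # Speaking aloud may be uncomfortable: prefer the screen when available.
        return {"voice": not ctx.has_screen, "screen": ctx.has_screen, "verbosity": "brief"}
    # At home on a speaker: voice-first, fuller responses are fine.
    return {"voice": True, "screen": ctx.has_screen, "verbosity": "full"}

# The same skill answers differently in the car vs. on a phone in public.
print(choose_response_style(InteractionContext("car", "in_transit", has_screen=False)))
print(choose_response_style(InteractionContext("phone", "public", has_screen=True)))
```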
Transparency builds trust
"People don’t like to be fooled and we need to provide that transparency...We wanted to emphasize that this is a robot and not a human, so it’s okay if you don’t have that complete trust in intelligence because AVA is a robot."
- Yizel Vizcarra, Conversation Engineer, Autodesk
"If you tell the Google Assistant to text your spouse for the first time, it will ask for their name. And if you come back and say what’s my husband’s name, the Google Assistant will say, 'You told me that his name is X.' [...] The user probably forgot that they told the assistant [...] so, you’re showing that the assistant is aware that it got that information from [the user] and that was a personal decision that [the user] made to share that information […] It’s all about being clear about moments in which you’re soliciting info and returning information."
- Eva Steele-Saccio, Conversation Designer, Google
AI is still very unknown, and with that comes apprehension. It’s our duty to build experiences that are transparent and honest. Building trust demystifies AI for users. All of our panelists pointed to ways you can foster trust, from disclosing where information is being gathered to creating honest depictions of the technologies users are interacting with. Pulling back the curtain gives users a more sincere experience with these technologies and the companies that produce them. Building ethical AI is a conversation that must be fostered and prioritized at all stages of the development process to create inclusive, helpful experiences for all users.
Ticket proceeds from this event and all of our events this quarter are being donated to AI4ALL, a nonprofit working to increase diversity and inclusion in artificial intelligence. AI4ALL creates pipelines for underrepresented talent through education and mentorship programs around the U.S. and Canada, giving high school students early exposure to AI for social good. Learn more about AI4ALL and find out how you can join us at an upcoming event!
AnswerLab Research
The AnswerLab research team collaborates on articles to bring you the latest UX trends and best practices.