
Tried-and-Tested Methodologies for UX Research on Artificial Intelligence

Posted by Ben Hoopes on Oct 16, 2023

There is deservedly much excitement today about AI tools and experiences, and as many questions as there are answers about how AI will impact our lives, products, and customers. While the backend engineering of large language models is highly complex, the front end is still a user experience, and user experiences are generally best when simple and straightforward.

With so many questioning how to begin researching AI-powered experiences, we’re here to demystify UX research for AI. Luckily, established UX research methods still work great for studying these new types of experiences. Below are ways some established methodologies can be applied to studying AI experiences, along with specific considerations and research questions to explore.

Tried-and-Tested Methods for AI Research

Persona development

Personas can help us better understand the different types of users of our products, their needs and wants, typical behaviors, and goals. As with all new products and technologies, we are likely to see some classic user types, such as early adopters and laggards. Conducting in-depth interviews can help add nuance to our perception of our users. In the case of new AI experiences, personas can help us understand what early adopters need, and the reasons behind those needs, as well as what users with less AI experience need to help them adopt new products, and why.

Research questions for AI persona studies might include: 

  • Which of our users are currently using AI experiences?
  • What do they like about them?
  • What challenges do they face when using AI products?

Jobs-to-be-done

Jobs-to-be-done research digs beneath the surface-level reasons why users leverage AI products. Beyond completing the primary task, there might be underlying goals that AI experiences are helping users achieve. For example, they may use AI to become more efficient at a time-consuming task so they can focus on work of even higher leverage. Better understanding the range of goals your users have, and how important each is, can help you prioritize which new features to tackle first.

Research questions for AI jobs-to-be-done studies might include: 

  • What primary and secondary jobs are AI tools helping users complete?
  • What about the AI experience helps users achieve this?
  • What are examples of AI experiences that complete these jobs better than others?

Journey mapping

Journey mapping helps us understand the experience of a user over a set time period. For example, we could look at the journey of how users decide they are ready to try an AI product. Or, we might study the journey of how users use AI products over the course of a week. This research can help you identify the distinct phases of the journey, pain points along the way, and opportunities for the products to shine in each phase.

Research questions for AI journey mapping studies might include: 

  • What are the steps in the AI-specific journey?
  • How long do users spend in each phase, and why?
  • What other resources are consulted before or after AI tools?

Concept testing

Concept testing is a way to test ideas before sinking resources into development. One highly effective method for this type of research, particularly with AI chatbots and conversational AI tools, is Wizard of Oz testing. Wizard of Oz testing involves a real person pretending to be the AI while the participant interacts with a prototype. It’s difficult to simulate the range of potential responses and interactions through prototyping tools, so having a human operator in the next room gives you the flexibility to test a wide variety of concepts, responses, and situations. There are many ways to execute this type of research, and it allows us to see the product in action before spending valuable resources on building it. And by the time you start actual development, you have research-based next steps and priorities to focus on.

Research questions for AI concept testing studies might include: 

  • How do users interact with our product?
  • What are user expectations from the AI? What types of control do they want over it?
  • What are their reactions to proposed AI-generated content? What features are needed in a minimum viable product?

Read about a concept testing project we conducted for an ecommerce company launching AI shopping assistants. 

Diary studies

Diary studies allow us to see what users do over longer periods of time. We may want to know how participants use an AI product, or products, over the course of a week. We may even want to learn something more general, such as the types of challenges they have, perhaps around a specific task and how they solve it currently, to see how an AI-powered product could solve it better.

Research questions for AI diary studies might include: 

  • How do users use AI over the course of a period of time? Why do they use it? What challenges and successes do they have?
  • How do they use a particular product over a period of time (either one they currently use, or one we tell them to try out)? What is their experience like?
  • How do users go about solving a specific problem over a period of time (regardless of whether they currently use AI tools)? 

Benchmarking

Benchmarking enables us to capture a snapshot of how users are experiencing a product at a particular point in time. We can run benchmarking studies at various intervals to better understand whether a product is improving. During benchmarking studies, we can also learn how users themselves are evolving. In a rapidly growing field such as AI, where the general public is becoming more comfortable with AI products every day, benchmarking can help us understand whether the experience is meeting users where they currently are in terms of their comfort and understanding. When building AI-powered experiences, we need to be flexible and adaptable enough to shift with a rapidly changing landscape of user needs and expectations. Benchmarking helps you keep a pulse on that landscape in a measured and controlled way.

Research questions for AI benchmarking studies might include: 

  • How easy or difficult is it for users to achieve their desired tasks on our platform (e.g., can they phrase requests in a way the product understands)?
  • How aware are users of the features and capabilities the product offers?
  • What is the perceived accuracy of the product today, compared with the last time we ran a benchmark study?

While researching AI can feel intimidating, especially as so many brands try to make their mark in this space, we recommend going back to basics and employing your tried-and-true research methodologies. These methods are versatile and can be mixed and matched into single studies or multiple studies over various lengths of time to grow your team’s understanding of your products and users.

Learn more about AnswerLab's AI UX research capabilities.

Written by

Ben Hoopes

Ben Hoopes is a Principal UX Researcher at AnswerLab where he leads research to answer our clients’ strategic business questions and create experiences people love. He is passionate about the power of UX research to shine a light on unseen paths forward.

