Imagine this scenario.
A parent is struggling to keep their toddler entertained when a kettle boils over. They desperately grab their phone to play a cartoon that will hopefully keep the toddler occupied. The spill will take a while to clean up, so the parent takes solace in the queue of “relevant” content on autoplay. But when they return, they find to their horror that the next video playing is a violent rendition of the original cartoon.
Recommendation systems, recommendation engines, and recommended content are all variations of the robust set of algorithms that influence many of our behaviors and much of the media we consume today. We see them at work when we get recommended products while shopping on Amazon or new videos in our “Up Next” queue on YouTube. Here at AnswerLab, we’ve watched the emergence, successes, and failures (like the example above) of recommendation systems over time. We all know that they aren’t always reliable or beneficial. With so many variables working in conjunction, how can we best gather qualitative insights to inform the design process?
What should we use to examine a recommendation system that isn’t meeting a need?
When these systems fail, it is often because the content they recommend is neither useful nor relevant enough to meet users’ needs and expectations. Jobs to be Done (JTBD) is a user-centered research framework that focuses on uncovering user goals – a way to get to the jelly filling. JTBD places significant emphasis on discovering the obstacles that prevent a user from achieving their goals, while helping us gather context, reveal mental models, understand motivations, and define effective success metrics.
If you’re planning to use JTBD in your research, there’s a multitude of ways to implement it. Ethnographies, diary studies, in-depth interviews, journey maps, and contextual inquiries are all perfect vessels for collecting the qualitative data needed to address the burning questions that will inform the next stage of iteration. In recent years, companies have found that qualitative insights from JTBD can help get them out of innovation ruts and pursue ideas with a renewed frame of reference.
Given that user needs are always changing, JTBD is the perfect match for iterating on recommendation systems. To effectively aid users in their decision-making process, recommendation systems should address a user’s needs and desires in context, rather than relying on historical preference (e.g., a user’s history of interacting with the system).
For example, imagine a shopper who usually buys household goods online but is currently looking for a new monitor for their home office. The recommendations they see shouldn’t be for their favorite paper towel brand, but for items related to the current search: perhaps keyboards, a laptop stand, or a mouse.
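To make the distinction concrete, here is a minimal, hypothetical sketch of that idea. All of the names, categories, and weights below are invented for illustration; the point is simply that signals from the current session outweigh long-term purchase history:

```python
from collections import Counter

def recommend(history, current_session, catalog, top_n=3):
    """Rank catalog items by category weight, letting the current
    session's categories dominate long-term history."""
    weights = Counter()
    for item in history:
        weights[item["category"]] += 1   # weak long-term signal
    for item in current_session:
        weights[item["category"]] += 10  # strong in-context signal
    return sorted(catalog,
                  key=lambda item: weights[item["category"]],
                  reverse=True)[:top_n]

history = [{"name": "paper towels", "category": "household"}] * 5
session = [{"name": "27-inch monitor", "category": "office"}]
catalog = [
    {"name": "paper towels", "category": "household"},
    {"name": "keyboard", "category": "office"},
    {"name": "laptop stand", "category": "office"},
    {"name": "mouse", "category": "office"},
]
print([i["name"] for i in recommend(history, session, catalog)])
# → ['keyboard', 'laptop stand', 'mouse']
```

Despite five past household purchases, the single in-session signal pulls office accessories to the top – the behavior the shopper in the example would expect.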
When conducting research with JTBD, here are some questions to ask yourself to get the right information from users:
- What do they need and when do they need it?
With technological improvements like increased computing power, faster internet, and 5G, ask users what they need and when they need it. How can you better satisfy those needs?
- What are the circumstances of the users’ struggle?
For example, a mobile billing system is convenient for all users, but it might be especially valuable for users who are physically unable to retrieve a bill from their mailbox. Discovering the underlying struggles of all of your users can help you create a more customized and complete recommendation system.
- What obstacles are in the way of the person getting closer to solving the problem?
- Are consumers making do with imperfect solutions through some kind of compensating behavior?
Rather than typing directly into a shared document, a user might prefer the privacy of drafting it in Word or Pages. This solves one complaint, but once they paste the text into the shared document, they might encounter formatting errors that create unnecessary work. Is there a way we could create a system that solves the main problem right off the bat?
- How would they define a better solution? What tradeoffs are they willing to make and what aren’t they willing to sacrifice?
A user might say, “I want to do ___ without ___ and ___.” This can help you ideate on a solution that meets their needs without compromising on their “must-haves.”
Whenever recommendation systems are evaluated qualitatively, it is crucial to prioritize and define success metrics. Incorporating these metrics throughout the study keeps the research plan organized and concise from start to finish, and makes synthesizing the data much faster.
Designing a study
Now that you have an idea of what you want to learn, it’s time to build your study. As we’ve noted, the number of variables needed to run a recommendation system can complicate how user research is conducted, so weigh what you’re trying to learn against how those variables might influence it. The right approach depends on the situation, for example:
- When time is a major variable in your recommendation system
- When you just need reactions to content fast
- When recommendations aren’t sticking
What sample size should I use?
With qualitative research, we rely on data saturation to determine whether the results are reliable or require further testing. To select the right sample size, consider all the variables within your system and your research goals. With recommendation systems, nuances among participants are a big deal: they can either provide invaluable personalized insight or muddle the focus of the study.
When determining sample size, ask yourself:
- How many variables are we testing?
- How intertwined are they?
- How many user segments are we including?
- Am I testing traditional stimuli (e.g., buttons, layout)?
- How many participants will each variable need in order for data to feel sufficient?
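As a rough illustration of the arithmetic behind those questions, here is a back-of-the-envelope sketch. The per-cell minimum of five participants is a common qualitative-research heuristic, not a rule, and the function and its numbers are invented for illustration:

```python
def estimate_sample(segments, variants_per_segment, per_cell=5):
    """Back-of-the-envelope participant count: every segment sees
    every variant, with a minimum number of participants per cell."""
    return segments * variants_per_segment * per_cell

# e.g., 3 user segments x 2 recommendation variants x 5 participants each
print(estimate_sample(3, 2))  # → 30
```

In practice you would treat a figure like this as a starting budget, then keep recruiting until the data reach saturation.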
Recommendation systems are often a behemoth to test, and the Jobs to be Done methodology can help uncover the different needs a product can meet and spark ideas for how to meet them.
Need help tackling research for a recommendation system? Contact us!