Posted by AnswerLab Research on Apr 1, 2021

Too often, under-represented groups do not have great digital experiences because they’ve been historically excluded by outdated research and design processes. It’s time to change that. We recently hosted a webinar with AnswerLab’s Gina Tenenbaum and David Muñoz to share how AnswerLab re-examined and reformed dated research operations processes to create more inclusive recruiting for research.

We had so many thoughtful questions on this topic from our attendees that we couldn’t cover them all during the event. To help you take a more inclusive approach and to continue the conversation, we asked our panelists to respond to some of the most common questions we heard.

Watch the recording and then continue reading for additional Q&A:


1. How do you approach asking about race and ethnicity with international audiences where it varies based on cultural context?

In international markets, we rely on our local vendors to help us understand the local cultural context. No one person can be an expert in every country’s culture, so we rely on those who are there on the ground. To help our local vendors understand why an inclusive approach is important, we share our philosophy and standardized US questions and choices. We then work with them to alter these to fit with local norms and include appropriate race and ethnicity options. We also make sure that the screener and any prototypes, moderator guides, etc., all translate correctly with an inclusive lens. If you’re conducting international research and want to be inclusive, make sure you’re speaking to local experts who understand what participants may expect or want from your screeners.

2. How do you avoid tokenizing POC participants to “represent” their race or ethnicity, especially as experiences can vary so much?

As with everything we do, we communicate that our findings are based on a small sample size. While we want everyone’s voice to be heard, each participant is still one person, and their lived experience can vary greatly from that of someone with the same racial or ethnic background.

To avoid tokenizing participants, we recommend you do the following:

  • Conduct experience gap research, which is more generative and focuses entirely on recruiting a specific diverse population. In this type of research, you’ll hear a variety of perspectives from within your target audience, allowing a wider range of experiences and backgrounds to inform your product development. For example, you might only recruit those who identify as women to better understand their specific experiences using your product and how you can improve it for other women as well.
  • Educate your stakeholders on how and why you are including more diverse perspectives in your research. For example, are you consistently including more diverse representation in your usability tests while also conducting experience gap research for more focused findings? Explain the reasoning behind that. Giving your stakeholders and teams the background can help them learn how to digest the findings without tokenizing participants of specific groups.

3. How do you make the case to clients to broaden the idea of the "perfect participant"?

In our work, we communicate the why behind our inclusivity focus by sharing materials, articles, and UX case studies showing how this makes a difference. We’ve found most of our clients are not only open to the idea but also excited to get started. Often, after an initial study using this framework, our clients are so impressed by what their participants bring to the conversation and the findings they uncover that they want to make this a standard across studies.

We also recommend encouraging stakeholders to attend multiple sessions (or watch as many recordings as possible) so they can see the range of opinions and experiences your participants provide. Seeing that range will also help your stakeholders recognize the value of all participants, not just those they consider perfect.

Review participant criteria carefully with your stakeholders, asking probing questions about different aspects or requirements as you go. By asking why, you might discover your perfect participant comes from outdated preconceptions of who is using your product, and as a result, you can expand your criteria in your screener.

4. How do you ensure participant privacy, especially when it relates to sensitive information, in this tracking process? Is the information only at the aggregate level?

We understand some questions can be sensitive, such as those about race/ethnicity and LGBTQIA+ identity. We want our participants to be comfortable throughout the process and to build trust wherever we can. For those two questions, we always include “Other” and “Prefer not to answer” as answer choices for those who don’t want to share that information.

In our tracking system, we do not include names or any other PII, just the six demographic data points. When we run reporting and analyses, we only see those six metrics at the following levels: by study, by client, and across all studies we conduct. Because we remove any PII that could identify our participants with these demographic points, we don’t believe we are ever compromising sensitive or private information.
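The separation described above can be sketched in a few lines. This is a hypothetical illustration, not AnswerLab’s actual system: the record type, field names, and aggregation keys are all assumptions. The point is simply that each record carries only demographic fields plus study/client context, so any reporting view is aggregate by construction and never touches PII.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record: no name, email, or other PII -- only study/client
# context and demographic fields (field names are illustrative).
@dataclass(frozen=True)
class ParticipantRecord:
    study_id: str
    client_id: str
    race_ethnicity: str  # includes "Other" and "Prefer not to answer"
    gender: str

def aggregate(records, key):
    """Count demographic responses at the aggregate level only."""
    counts = {}
    for r in records:
        bucket = getattr(r, key)  # e.g. group by study_id or client_id
        counts.setdefault(bucket, Counter())[r.race_ethnicity] += 1
    return counts

records = [
    ParticipantRecord("S1", "C1", "Black", "Woman"),
    ParticipantRecord("S1", "C1", "Prefer not to answer", "Man"),
    ParticipantRecord("S2", "C2", "Latinx", "Woman"),
]

# Reporting by study, by client, or across all studies -- never per person.
by_study = aggregate(records, "study_id")
by_client = aggregate(records, "client_id")
```

Because the records are PII-free from the start, there is nothing to redact later; the aggregation key is the only thing that changes between reporting views.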

5. Why are you only including quotas for Black and Latinx participants and not Asian Americans or other ethnicities? 

To start, we selected a few key metrics and have been optimizing those first. In the future, we may add metrics such as additional races/ethnicities. Black and Latinx people are historically underrepresented in the U.S. not just in research participation, but also in technology jobs in general. In the United States, Black and Latinx workers account for 7% and 8% of the tech industry, respectively, even though they make up 13% and 18% of the U.S. population. This is why we chose to focus on Black and Latinx participants for our initial inclusivity success metrics.

However, this doesn’t mean we aren’t including other populations in our studies. We do include Asian Americans in our studies but do not currently set a quota. This is because their share of the population (5.4%) does not equate to one whole participant until a study reaches roughly n=24, and most of our studies have smaller sample sizes than that. We are looking across many clients and verticals, though, so certain companies with larger Asian American representation in their user group might want to emphasize that metric first.
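The sample-size reasoning above is simple arithmetic, sketched below. The 5.4% share comes from the answer itself; the sample sizes are illustrative of typical qualitative study sizes, not actual AnswerLab studies.

```python
# Expected number of participants from a group, given its population share
# and a study's sample size. 5.4% is the Asian American share cited above;
# the n values below are illustrative qualitative study sizes (assumptions).
def expected_participants(population_share: float, n: int) -> float:
    return population_share * n

for n in (6, 8, 12, 24):
    print(f"n={n}: expected = {expected_participants(0.054, n):.2f}")
```

At common qualitative study sizes (n of 6 to 12), the expected count stays well below one participant, which is why a hard quota is deferred rather than enforced while the metric is still tracked.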

And lastly, this is also where our tracking system supports our ongoing accountability and process iteration. We watch our metrics across different groups over time to see where we’re doing well and where we can do better. Even without standardized quotas, this monitoring helps us adjust what we’re doing to ensure we’re not leaving anyone out.

6. What are the most effective ways to communicate the value of intentionally recruiting a diverse sample when your organization is in favor of taking a “colorblind” approach to research? How can I better communicate the business impact?

To start, we recommend summarizing any quantitative data (site analytics, survey data, etc.) that shows who your user base is, and then calling out how well (or poorly) your research studies currently match those analytics. Your product most likely has many diverse users. If your research doesn’t reflect that diversity, your product is likely not addressing all of your customers’ needs. If your competitors are addressing those needs, you won’t be able to keep up.

To continue building your case, call out findings you wouldn’t have learned without speaking to more diverse participants. For example, feedback from a 60-year-old participant that a design’s font sizes and colors are too difficult to read creates a better experience for older adults and likely helps all of your users. Keeping a running list of these findings and their impact can help show the value of bringing in diverse participants.

7. Do you have any suggested strategies in reaching participants who lack access to technology?

Reaching participants who lack access to technology or WiFi can be tricky, but these participants can provide meaningful perspectives on a range of products and services. We recommend a few strategies to reach these participants, including:

  • Buddy Recruiting: In this method, we recruit a lower-tech participant, along with a family member or friend who is more tech-savvy to help them get onto the videoconferencing platform for the session. Having a buddy to help them get situated and prepared for the study can make a big difference.
  • Grassroots Recruiting: Often you can’t reach these folks through typical research databases, so we employ grassroots recruiting (e.g. flyers in libraries, local community centers, coffee shops, etc.) to access participants who aren’t reachable through our typical methods.
  • Referrals: Once you have some lower-tech participants in your database, ask for referrals to their friends!
  • Mobile Hotspots: If needed, mail participants a mobile hotspot to use for the session. This can help participants who don’t have easy access to WiFi or whose bandwidth is lower than the study’s videoconferencing requires.

Download your copy of our guide on Inclusive Research Operations to learn more.

