Posted by Sylvia Bargellini on Sep 11, 2019

Despite the buying power of aging populations (people 50+ control roughly 83% of the wealth in America), we find that many AI-driven products, both hardware and software, generally aren’t made with aging people in mind. That’s a lot of people having bad experiences who otherwise stand to benefit from products that can help them accomplish their goals and maintain or increase their autonomy. We know that if the general population already experiences pain points with a product, it’s going to be even worse for marginalized or minority segments. On the other hand, products designed to be more accessible and inclusive for specific populations often benefit other segments of society as well (e.g., the curb cut effect or the development of OXO Good Grips). Ultimately, taking an inclusive approach to research leads to products that positively impact everyone.

With this in mind, AnswerLab conducted primary research to add to the conversation with best practices for researching and designing for aging populations. We conducted a three-part study with ten older, retired adults between the ages of 60 and 71 to understand their needs and wants for voice assistants. We started with in-home visits to observe the onboarding experience, followed by a two-week diary study with assigned tasks they had to complete using their smart speaker. We wrapped up with in-person interviews in our labs to reflect on their experience after two weeks of use and discovery.

At the end of this three-part study, we had insights and best practices ranging from conversation design to hardware considerations.

Here’s what we discovered about our older adult participants:

Participants had the desire to learn, but current mental models acted as a barrier.

Many participants lacked the pre-learned mental models or prior experience with technology that younger populations typically have. As a result, their expectations and instincts around how these products work were inaccurate, causing them to struggle. All of our participants had the capacity and desire to learn, but because the devices made assumptions about what they would inherently understand, they were frustrated by a number of tasks. Several participants commented that they felt these devices are made for the technologically advanced and made them feel ‘technologically impaired.’

Participants expected physical interactions but were unsure where to start.

Most participants expected to need to physically touch the device to operate it. Some expected to find some form of volume control buttons on the Google Home Mini and were disappointed when they couldn’t locate them. (The volume can be adjusted by tapping the sides of the device; however, there is no physical indication of a button or “+” and “-” labels.)

One participant held the Google Home Mini in their hand, not realizing they were pressing the play/pause button since it wasn’t visibly marked; the device turned the music on and off repeatedly, which left them confused. Additionally, two participants with the Amazon Echo Dot placed the device upside down at first.

Expectations were high, but the devices failed to establish value.

Voice is an intuitive UI for humans, and older adults are no exception. They had no apprehension about using the device and were eager to experiment with phrases that might help them succeed. Participants had a general understanding that they needed to talk to the device, but poor onboarding documentation left them with little guidance on what to say and how to say it. In fact, many of our participants expected to have more of a conversation with the device, rather than issuing commands for it to simply carry out.

The unboxing and set-up process excluded older adults.

The more steps users have to take without a clear understanding of what’s happening, the more confusion results, and even more so with this population. These participants struggled most with finding the right app (“Is it the Amazon Alexa app or the Amazon app?”), figuring out how to interact with the device physically and verbally, and connecting the device to their WiFi network and existing accounts, largely because of difficulty remembering passwords.

We believe any form of usability testing with elderly or minority populations is a great first step in creating a more inclusive AI product. You’ll discover insights and pain points that you may not have thought of during development that will help create better experiences for all of your users. Even simply incorporating participants from these populations into your standard usability research can provide diverse perspectives; there’s no need for additional rounds of research dedicated to specific populations.

A four-element framework to keep aging populations in mind when designing AI products.

Organized by the human senses a user relies on when interacting with your product, these recommendations are informed by this primary research and learnings from our team over time. Start here to explore inclusivity in each part of your product design process.

Sight - Vision impairments can affect how users experience your product.

When working with the size, color, font, and placement of text, icons, and even LEDs, keep human vision limitations in mind.

  • Be aware of visual degradation and impairments.
    285 million people worldwide have some form of visual degradation. With a wide variety of impairments, this number only grows as you home in on the elderly population (65% of people who are visually impaired, and 82% of all blind people, are 50 years and older). It’s also worth noting that much of the general population experiences temporary visual impairment on a daily basis, depending on context; driving into the sun, for example, can temporarily impair anyone’s vision. In these contexts, even those with perfect vision can benefit from features or devices catered to those with more permanent visual impairments.

  • Keep size and color in mind when choosing fonts and icons.
    Many sources suggest that legible font size hovers between 16pt and 30pt. The range is wide because of a few factors, including age, distance from the material (which determines visual angle), contrast with the background, overall luminance, and even font design. You may have noticed it’s harder to read information on a computer screen than on paper because of the difference in luminance. Make sure you’re taking these factors into account when exploring font options (the sketch after this list shows how font size and viewing distance translate into visual angle).

  • Include multiple visual cues to account for impairments.
    Instead of relying on just an icon or a change in color to alert the user to a feature or link, consider combining multiple visual cues so people who have visual degradation in one area (for example, color blindness) can still understand the cue using the additional visuals. For example, buttons may be different colors but also have different patterns (e.g., stripes or dots) to give users multiple ways of distinguishing them.
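
To make the distance factor concrete, here’s a minimal sketch (our illustration, not part of the original study) that estimates the visual angle a given font size subtends at a given viewing distance. The example sizes, distances, and any target threshold you compare against are assumptions to validate against your own accessibility standards.

```python
import math

def visual_angle_degrees(font_size_pt: float, distance_in: float) -> float:
    """Approximate the visual angle (in degrees) subtended by text of a given size.

    font_size_pt: nominal font size in points (1 pt = 1/72 inch); actual glyph
                  height varies by typeface, so treat the result as a rough guide.
    distance_in:  viewing distance in inches.
    """
    height_in = font_size_pt / 72.0
    return math.degrees(2 * math.atan(height_in / (2 * distance_in)))

# 16pt text read on a phone at ~14 inches vs. the same text on a screen ~10 feet away.
print(round(visual_angle_degrees(16, 14), 2))    # ~0.91 degrees
print(round(visual_angle_degrees(16, 120), 2))   # ~0.11 degrees -- likely too small to read
```

The same 16pt label that reads comfortably at arm’s length becomes tiny across a room, which is why viewing distance belongs in the conversation alongside font size.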

Hearing - Ensure sound levels and cues are accessible to all users.

Sound levels should be accessible for people of all ages and backgrounds. But the older we get, the more our hearing degrades, and typically more so for men than for women. Average conversational speech at 1 meter is about 60-70dB in amplitude at around 1000Hz, though it can fluctuate based on a variety of factors.

  • Context is key.
    Sound levels are always affected by the level of background noise, so consider all the contexts in which your AI product will be used and what level is appropriate in each to find your product’s range of volumes (see the sketch after this list for one way a product might map ambient noise to an output level).

  • Keep loud sounds to a minimum.
    Although loud noises get people’s attention quickly, they can also startle users, making them a risky solution. In our study, while unboxing the Google Home Mini, we found it starts up at a volume that was too loud even for our older participants. At this moment in particular, a loud noise wasn’t necessary to get participants’ attention, since most users are already focused exclusively on the device during onboarding.

  • Combine sound cues with visual or text-based cues to improve communication.
    Similar to including multiple visual cues, combining sound cues, alerts, and/or verbal prompts with visual cues can help clearly communicate to your users what has happened and how they can respond.
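
As a rough illustration of letting context drive volume, here’s a hypothetical sketch of mapping an ambient-noise estimate to an output level. The thresholds and volume steps are illustrative assumptions, not values from our research or from any specific device’s API.

```python
def choose_output_volume(ambient_db: float, min_step: int = 3, max_step: int = 8) -> int:
    """Map an ambient noise estimate (in dB) to an output volume step on a 0-10 scale.

    The thresholds below are illustrative assumptions: a quiet bedroom sits
    around 30-40 dB, ordinary household background noise around 40-65 dB, and
    a kitchen with appliances running can exceed 70 dB.
    """
    if ambient_db < 40:                      # quiet room: keep responses soft, never startle
        return min_step
    if ambient_db < 65:                      # typical household background
        return (min_step + max_step) // 2
    return max_step                          # noisy environment: speak up, but cap the level

print(choose_output_volume(35))   # 3 -> quiet bedroom at night
print(choose_output_volume(72))   # 8 -> kitchen with the range hood running
```

Capping the maximum step reflects the earlier point: a level loud enough to cut through a noisy kitchen should never be the level a device uses to greet someone during unboxing.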

Touch - Create clear ways for the user to interact with your device through touch.

In our research, we found that the aging population often expects physical interactions due to prior experiences with older devices and technology. So, even though voice is easy to use, physical interactions are still part of their broader experience. If you’re not testing with a variety of users with different abilities and hand or body shapes and sizes, you won’t fully understand what users can accomplish with your product. When designing for the physical realm, keep the following in mind.

  • Consider a variety of physical and motor impairments.
    People have a wide variety of motor and physical impairments, including, but not limited to, hand tremors, limits on the force they can exert, or even no use or control of their hands. As a general rule, think about which parts of the body will be interacting with the product and consider the different impairments that might affect that experience.

  • When dealing with force, take distance and angle of interaction into account.
    Although someone might easily press a button straight on in a lab setting, different contexts affect physical abilities. Smart speakers are used in a variety of contexts within people’s homes: high on a shelf, in the kitchen, or on a bedside table. Consider these contexts when defining the shape, size, placement on the device, and level of force an interaction requires, so the device is easier to use wherever it ends up.

  • Make it clear by design what physical interactions are available for your device and what they help the user accomplish.
    Your device should give the user clear clues (affordances) as to what interactions and buttons are available to them, how they work, and what they do. Ultimately, this means that where and how to interact with an object should be intuitive by design (e.g. Where do you touch the device to prompt an action? Do you swipe across it or push the button? Norman Doors are a prime example of how design can fail on this front.)

    As we mentioned above, in our research, many participants couldn’t locate the volume controls on the Google Home Mini, as they’re unmarked, leaving them unsure of how to adjust the volume. Taking this concept a step further: once you interact with the object, you should know exactly what action it will take and how it affects your experience. For example, one of our users didn’t understand what turning the microphone off on the Google Home Mini did.

Cognition - Consider comprehension times and minimize cognitive load.

Although cognition is not one of the five senses, it’s crucial to account for when designing AI products, especially for the aging population.

  • Expect slower comprehension times, but thorough reading.
    The aging population generally has slower comprehension times, and many of our participants read everything! While many of us toss the instructions when opening up a new device and “just figure it out,” our participants read the setup documentation in its entirety, from start to finish. Some of our participants even found a few spelling errors during their onboarding experience with both the Google Home Mini and Amazon Echo Dot!

  • Memory is short. Reduce cognitive load.
    All humans have limited working memory; the demand a task places on it is known as cognitive load. With elderly populations, the effects of high cognitive load are amplified. There are a number of tips and tricks you can use to better design for different levels of cognitive ability. For example, we love this article on designing interfaces for short-term memory. Consider techniques like chunking, which breaks long strings of information into smaller pieces to minimize cognitive load and aid the learning process (see the sketch after this list).

  • Give examples to prompt commands.
    Although voice is an intuitive UI for humans, and older adults are no exception in their eagerness to learn and use AI devices, the elderly are often left with little or confusing guidance on how to get the most out of these speakers. They may easily be able to say, “Okay Google, what’s the weather outside?” but if they don’t know what commands and features are available to them, how will they know to try them?

    Even further, the way Google Home’s instructions explain how to interact with the device confused some participants. For example, when the instructions said “Say ‘okay Google,’” one participant said, “Say Okay Google.”
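
To make chunking concrete, here’s a small sketch (our illustration, not code from either device) that breaks a long string, such as a confirmation code, into short groups before it’s read aloud or displayed.

```python
def chunk(value: str, size: int = 3) -> str:
    """Break a long string into space-separated groups to reduce cognitive load."""
    groups = [value[i:i + size] for i in range(0, len(value), size)]
    return " ".join(groups)

# "483 915 026" is far easier to hold in working memory (and repeat back)
# than the unbroken string "483915026".
print(chunk("483915026"))   # 483 915 026
```

The same principle applies to voice prompts themselves: offering one or two example commands at a time is easier to absorb than listing every feature at once.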

We’ve seen a surge in market commitment to digital assistants and smart speakers, prompting a fundamental change in how we interact with brands across industries. But if you can’t reach all of your users, regardless of age, ability, or demographic, you’ll fall short of creating great user experiences and products.

Interested in learning more about inclusive product design or voice? Contact us.


This is a big topic, and we’ve only scratched the surface here. If you’re looking to dig in further, start with these additional resources:

Vision and Sight:

Touch:

Hearing:

Cognitive:

General Resources:

Written by

Sylvia Bargellini

Sylvia Bargellini, a member of our AnswerLab Alumni, was a Senior UX Researcher during her time at AnswerLab where she led client research that identified and prioritized insights that improve their business results. Sylvia was also an integral part of AnswerLab's Emerging Tech practice with 6+ years of experience in research, design, and human factors, working with both software and hardware products. Sylvia may not work with us any longer, but we'll always consider her an AnswerLabber at heart!
