Posted by AnswerLab Research on Feb 6, 2020

One common axiom you hear in user experience research is ‘Build the right thing, then build it right.’ This immediately comes to mind when considering the events that unfolded during the Iowa caucuses this past Monday night.

What happened?

In an effort to report precinct results to the state Democratic Party more quickly, precinct chairs planned to record and send results through an app called Shadow. Unfortunately, many had trouble downloading and using it. To make matters worse, the data coming out of the app was suspect. When precinct chairs turned to their backup method of reporting results over the telephone, many were placed on long holds or couldn't get through at all.

Significant delays in reporting results led to confusion among the candidates competing in the nation's first contest in the race for the Democratic Party's nomination. In short, the app did exactly what it wasn't supposed to do: introduce complexity and increase reporting delays. We believe a robust user experience research program could have helped avoid this mess.

Did the developer build the right thing?

To make our case, let's investigate the first part of this common axiom: build the right thing. As stated, the goal of the Shadow app was to expedite the reporting of results. But was an app the best choice for this? To answer that, we'd have started by investigating the target users, in this case precinct chairs. We'd explore the following questions:

  • How comfortable are they with technology? Where are they on the technology adoption curve? Mobile apps are a little more complex to use than desktop tools, and if the typical precinct chair is less tech-savvy, an app may not be the best solution.

  • How important is mobility to a precinct chair's key tasks? How would an app make their task easier compared with another form of recording and sending data electronically? If the alternative was phoning in results, could a desktop-based form of data entry have been a viable option?

  • Would the smaller screen size lead to user frustration? Entering a lot of data through a mobile interface is more error-prone than entering it on a desktop, so a small screen makes data-entry mistakes more likely.

  • Importantly, we'd want to know what kinds of phones would be downloading the app. Apps behave differently across ecosystems (i.e. iOS and Android) and are constantly being updated (how many apps on your phone have updates available right now?). In addition, not everyone downloading the app will be running the same version of their device's operating system; depending on the specs the app was built to, some users might have difficulty with it while others might not.

Beyond exploring user needs and context, another factor to consider is distribution: how would precinct chairs download the app? Apps generally have to be accepted into an app store before they can be downloaded to a device. Alternatively, they can be distributed through a beta testing platform. In the latter case, a user's phone will likely issue a warning that what they are downloading may carry risks (e.g. security or performance). Considering heightened concerns over outside interference in US elections, such a warning might have scared some users away from downloading the app.

We'd have argued that an app wasn't the right tool for the task at hand. Developers not only had to account for different operating systems (and versions) when building the app, they also had to create a database to house the results. Further, they couldn't distribute the tool in a reliable (and more standard) manner. Given that the development cycle was short (reported to be two months) and the budget was small (estimated at $60,000), an app wasn't the right approach for this task. This would have been uncovered in the early stages of a user experience research program.

Did the developer build it right?

AnswerLab tests websites, apps, and other interactive tools at all stages of development, from early concept sketches to clickable prototypes to the final product. Building it right means that end users can complete tasks as expected and that all systems work as intended. A variety of elements can affect the user experience of the products we test: copy, imagery, navigation, layout, and more. Little has been reported about the usability of the app itself; most reports coming out of Iowa cite generalities such as "users had some difficulty with the app." We do know, however, that a final release was pushed just a day before the app was to be used in the caucuses, which means end users had very little time to familiarize themselves with it before showtime.

Clearly, the Shadow app wasn't built right, given the difficulty users had downloading it and the fact that the data coming out of it was considered incomplete. A proper UX research program would have identified these issues prior to launch. Instead of being able to seize the momentum of the first contest of the season, Democratic contenders (and those who voted for them) were left guessing as to who won.

The Road to Hell

My father used to say, "The road to Hell is paved with good intentions." We have no reason to question the Iowa Democratic Party's sincerity in trying to provide a solution that would make reporting more efficient. Some might argue that the time and budget simply weren't there for testing. These are challenges we face all the time as UX professionals; we are constantly balancing the time invested in research against the speed of development. That said, we believe that with a robust user experience research program, from idea development through usability testing, the debacle we witnessed in Iowa this week could have been avoided.

Written by

AnswerLab Research

The AnswerLab research team collaborates on articles to bring you the latest UX trends and best practices.