Harassment, abuse, and unwanted behavior are unfortunately extremely common on online platforms and apps. A recent study with thousands of young women and girls in 22 countries worldwide found that 58% of them had experienced harassment or abuse online. In our research, we’ve seen these kinds of interactions play out among a wide variety of groups, settings, and situations.
Brands need to focus on creating better experiences for their customers by preventing harm and abuse from happening and, if it does happen, by offering users the tools and resources to report those instances and get support in emergency situations. But offering those tools is not enough: users need to be educated on what’s available and how to use it. Prioritizing these features helps users feel safe and included, and can often lead to higher engagement in the long term.
From our research on privacy, safety, and harm across various industries, products, and clients, we’ve surfaced a number of insights on what users need. These are just the first things you and your team should consider. There are many more nuances and specifics to explore depending on your industry and the product you’re building. We hope these recommendations will help you start thinking deeply about what next steps to take.
1. Make building easy-to-use tools for combating unwanted behavior your top priority
Many participants we’ve spoken to have said that without effective tools to stop abuse, harassment, and other unwanted behavior, they’d stop using social apps altogether. Others have told us that when they can’t prevent or effectively deal with abuse online, they don’t feel safe being themselves on these apps. For example, to deter unwanted behavior, female users might use male-appearing avatars.
Being able to rely on reporting tools and safety resources is a vital part of your users’ experience, regardless of what kind of app or product you’re building. We conducted research with a gaming client on women’s safety and found that every participant had previously experienced harassment or negative interactions while online. These participants shared that even a short blurb about reporting features would give them a sense of control and safety.
2. Ensure you’re communicating what safety resources and tools are available
If users can’t locate and use your safety resources at the right moment, it doesn’t matter that you have them. These situations are often high-pressure and stressful, so educating your users on what’s available before trouble happens is critical. Make sure your tools and options are easy to find and access (e.g., a brief help blurb upon log-in). But don’t overdo it: some participants have said that too many warnings send the message that abuse happens frequently on the platform, which may drive them away altogether.
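One way to balance "make it findable" against "don’t overdo it" is to cap how often an educational blurb appears. The sketch below is purely illustrative; the function name, cap, and interval are assumptions, not a recommendation of specific numbers.

```python
from datetime import datetime, timedelta

def should_show_safety_blurb(times_shown: int, last_shown, now,
                             max_shows: int = 3,
                             min_gap: timedelta = timedelta(days=30)) -> bool:
    """Show the safety-education blurb a few times only, spaced well apart,
    so it informs users without implying abuse is constant on the platform."""
    if times_shown >= max_shows:
        return False                      # lifetime cap reached
    if last_shown is not None and now - last_shown < min_gap:
        return False                      # shown too recently
    return True
```

In practice, the right cap and spacing are research questions in themselves, worth testing with your own users.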
3. Clarify what emergency reporting means and what steps will be taken
When building reporting features, make sure your users know about all the options and exactly what each action does. Menus or buttons that just say “Report” or “Flag this as inappropriate” aren’t descriptive, and we’ve heard time and time again that users will hesitate to select an option without knowing exactly what will happen as a result. This is especially important for emergency situations.
We conducted research for a company whose platform connects users with service providers they then meet in person. In-person interactions carry real safety risks, and transparency around safety features can help users feel confident in their options. You don’t want your customers to hesitate in a potentially dangerous scenario. Users want to know whether their harasser will learn who reported them, whether that user will be blocked, and so on. Clearly outline what each process looks like so your customers know what to expect and can make decisions accordingly.
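One way to make this concrete in a product is to pair every report action with plain-language consequences, so the UI never shows a bare "Report" button. The option names and copy below are hypothetical, for illustration only:

```python
# Hypothetical reporting menu: each action carries an explicit description
# of what will happen, including whether the report is anonymous.
REPORT_OPTIONS = {
    "block": {
        "label": "Block this user",
        "consequences": [
            "They can no longer contact you.",
            "They are not notified that you blocked them.",
        ],
    },
    "report": {
        "label": "Report to our safety team",
        "consequences": [
            "A human reviewer will look at this report.",
            "Your report is anonymous; the reported user won't see your name.",
        ],
    },
    "emergency": {
        "label": "Get emergency help",
        "consequences": [
            "You will be connected to a live agent immediately.",
        ],
    },
}

def describe_option(key: str) -> str:
    """Render a label plus its consequences for display in the reporting menu."""
    option = REPORT_OPTIONS[key]
    lines = [option["label"]] + [f"  - {c}" for c in option["consequences"]]
    return "\n".join(lines)
```

The data structure matters less than the principle: the consequence text ships alongside the action, so no option can reach the UI without an explanation.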
4. Communicate with your users throughout the reporting process
Make sure you’re clearly communicating updates at every step of the reporting process. Many participants told us they valued receiving confirmation that the report was filed, as well as updates throughout the process. They need to know it’s not just going into a “black hole,” but that someone is actually reviewing it and taking steps to resolve it. Otherwise, users can feel as though reporting was pointless.
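A minimal sketch of what this can look like in a backend: model the report as a small state machine that notifies the reporter on every transition, so no state change happens silently. State names, transitions, and message copy here are assumptions for demonstration.

```python
# Illustrative report lifecycle: every state change triggers a reporter-facing
# update, so reports never disappear into a "black hole".
ALLOWED_TRANSITIONS = {
    "submitted": {"under_review"},
    "under_review": {"action_taken", "dismissed"},
}

STATUS_MESSAGES = {
    "submitted": "We received your report and will review it shortly.",
    "under_review": "A member of our safety team is reviewing your report.",
    "action_taken": "We reviewed your report and took action on the account.",
    "dismissed": "We reviewed your report; it didn't violate our guidelines.",
}

class Report:
    def __init__(self, reporter_notify):
        self.state = "submitted"
        self._notify = reporter_notify          # callback, e.g. email or push
        self._notify(STATUS_MESSAGES[self.state])

    def advance(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self._notify(STATUS_MESSAGES[new_state])  # update at every step
```

Because notification is coupled to the transition itself, a report can’t move forward without the reporter hearing about it.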
Many participants we’ve spoken to have also wanted a reported user to be told the reason why they were reported, so they had the chance to reflect on and potentially change their behavior.
5. Don’t rely on algorithms and chatbots to do all the work
In an emergency, users may need a live agent instead of a chatbot. Many participants we’ve spoken to expected and wanted a human helping them during the reporting process and reviewing their cases, rather than an algorithm making the decisions. Chatbots can be useful for letting people know their options. However, interacting with a bot and a series of automated questions can be frustrating in a time-sensitive situation. Many participants shared they wouldn’t have time to go through all those steps before being connected to an actual person. Other participants felt that algorithms make mistakes, potentially causing a misstep when reviewing reported posts.
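In routing terms, this suggests an escape hatch: let urgent categories, or an explicit user request, bypass the automated intake entirely. The categories and channel names below are illustrative assumptions, not a prescribed taxonomy.

```python
# Hypothetical routing rule: urgent reports, or an explicit "talk to a person"
# request, skip the chatbot and go straight to a live agent.
URGENT_CATEGORIES = {"threat", "stalking", "in_person_meeting"}

def route_report(category: str, user_requested_human: bool) -> str:
    """Return which channel should handle an incoming report."""
    if category in URGENT_CATEGORIES or user_requested_human:
        return "live_agent"       # no automated questions in urgent cases
    return "chatbot_intake"       # bot can explain options for routine cases
```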
6. Be aware that unwanted behaviors can jump from platform to platform and your tools may be used in an unintended way
Online abuse and cyberstalking can happen across multiple platforms. If a harasser is blocked or reported on one platform, they might resort to a different app to continue, allowing this kind of behavior to jump from place to place. As a result, tools you developed might be used in an unintended way to cause harm. Many of us saw this recently in Slack’s newest DM feature: people were able to send harassing messages through email invitations, and recipients had no way to opt out or block those users. Be aware of how your new features might enable behavior like this and be ready to implement quick solutions when unexpected outcomes arise.
7. Build checks and balances into crowd-sourcing features
Some participants expressed they appreciated "crowd-sourcing" features when dealing with abusive behavior. For example, many gaming platforms have a feature allowing users to initiate a group vote to decide whether someone should be removed from the area or activity. This can sometimes help when a group is being impacted, rather than an individual blocking an offender or waiting for results from filing a report. However, the downside is that users can misuse these tools. For example, a group of friends could vote to kick out anyone who isn’t a part of their group.
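Two safeguards that can limit this kind of misuse: require a supermajority of everyone present (not just the voting clique), and flag repeated removals of the same person for human review. The thresholds below are illustrative assumptions, not tested values.

```python
# Hypothetical checks for a group-vote removal feature.

def vote_passes(votes_for: int, users_present: int,
                min_share: float = 0.6) -> bool:
    """Require a supermajority of everyone present, so a small clique
    can't remove someone on its own."""
    return users_present > 0 and votes_for / users_present >= min_share

def needs_review(removals_of_user: int, threshold: int = 2) -> bool:
    """Escalate to human moderators when someone keeps getting voted out;
    that pattern can indicate targeted abuse of the tool itself."""
    return removals_of_user >= threshold
```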
8. Ensure reporting and response language is kind, caring, and compassionate
Your users want to feel like you genuinely care about them and will listen to their needs. In reporting tools and subsequent response messages, ensure your copy and language are kind, compassionate, and welcoming. This sends the message that you are willing to take the appropriate steps and help the user to the best of your ability. We’ve heard participants use words like approachable, caring, and empathetic to describe brands that have done this successfully.
9. Conduct additional research to make sure users with accessibility needs can use safety tools and emergency resources when their safety is at risk
Accessibility research is an important step in the development of every part of your product, and reporting and safety tools are no exception. Make sure your tools and processes are accessible to ensure everyone can take advantage of what’s available.
Additionally, everyone benefits from accessibility improvements. We know this from the curb cut effect, along with many other examples over the years. When people are stressed or feel threatened, it’s often more difficult to process new information. Reducing cognitive load by creating streamlined processes can be vital for everyone. Reporting needs to be as easy as possible. Don’t make your users jump through hoops or struggle to figure out what’s happening.