Instagram

Designed new systems to prevent bullying, encourage more positive interactions, and empower targets of harassment on Instagram.

Instagram is strongly committed to leading the fight against online bullying. As the designer on Instagram's Anti-Bullying team, I created new systems to prevent online harassment and empower targets of bullying.

Though these tools were made available to everyone and are useful in many different situations, our target audiences were teens and creators, the groups most vulnerable to bullying and harassment both online and offline.

My design work was deeply informed by qualitative and quantitative research, as well as our team's machine learning work. We spent a lot of time talking with teens and large creators about their problems and concerns with our existing tools, and co-designed new tools with them. Working with machine learning and AI to design new products is always an interesting challenge, especially in the case of bullying, where behaviors and relationships are constantly changing and human language is filled with nuance and subtlety. I worked closely with our ML engineers to apply AI to these problems at massive scale, while understanding its tradeoffs and designing against them.

Note: I also did brief stints on Instagram's Mental Well-being and Teen Safety teams when they needed additional design support.

A few example projects

Limits—Helps protect people when they experience or anticipate a rush of abusive comments and DMs, designed for creators and public figures who sometimes face a sudden spike of unwanted comments or messages.

Multi Block—Allows people to block not only a single account, but also any other accounts that person may have or create. This feature helps targets of persistent harassment, where a bully may keep creating new accounts to reach their target.

Post Caption Warnings—Asks people to reflect and edit their caption if Instagram's AI detects that the caption is potentially offensive or harmful.

Hidden Comments—Automatically hides comments similar to others that have been reported. We learned from research that, while people didn't want to be exposed to negative comments, they wanted more transparency into the types of comments being hidden.

Comment Warning—Warns “repeat offenders” who regularly post potentially offensive comments. Through research, we found that the most effective way to shift behavior was to remind people of the consequences to their accounts and provide real-time feedback. These warnings prompt repeat offenders to take a step back and consider the potential consequences before proceeding to post a harmful comment.

Well-being Guides—Enabled creators to connect with expert organizations to share resources at the start of the COVID-19 pandemic, including tips on looking after your well-being, maintaining connection with others, and managing anxiety or grief.

Safety Notices—Prompts teens to be more cautious about messages from adults who have exhibited potentially suspicious behavior that could lead to grooming. These notices provide actionable steps (e.g., ending the conversation, blocking, or reporting) and safety tips.

Limits is a feature designed for public figures and creators who experience a sudden influx of abusive comments or messages.

Multi Block helps targets of persistent harassment and allows them to block someone as well as any other accounts that person may have or create.

tokatherineliu[at]gmail[dot]com

Copyright © 2024 Katherine Liu
