Thoughts about Identity, Privacy and Moderation in the Age of Fake News and Emboldened Bad Behavior

I am currently teaching systems design at California College of the Arts, and we have been talking about the social architecture of social interfaces. Foundational to social experiences is online identity and an understanding of the spectrum of ways that people can and want to represent themselves online. As designers, we can give people the right tools to manage identity by understanding their needs and the various contexts within which they may be participating.

With the rise of fake news and the vitriol surrounding discussions on sites like Twitter, Reddit, Facebook (heck, pretty much everywhere), online identity and the pros and cons of real names versus pseudonyms versus total anonymity are more important for designers to understand than ever before.

That there are real-life consequences to having a real name attached to political or social opinions that never really go away is something many of us designing these systems hadn't thought deeply about before. We thought that owning your words and owning your reputation, along with social norms, a good set of community standards, and moderation tools, was incentive enough for most people to behave in a civil manner.

Unfortunately, as the country has gotten more divided, even normally civil people have erupted, lashed out, made threats (idle and real), and generally exploited the fact that, even under their real names, they can still hide behind their computer screens.

On the other side, though, the person receiving those threats (idle or otherwise) has no way of knowing that the sender is hiding, using the computer as mediation to express opinions they would never voice standing next to that person. As someone who uses my real name on most social sites, I have occasionally had to leave because of death threats, wishes for my death, and other bad behavior launched at me, behavior that, until the last couple of years, I had never really experienced.

For some, this type of behavior is not new. Whenever someone "other" (whether because of gender, race, sexual orientation, or even opinion) enters a community where a majority of privileged people feel they own the experience, the backlash of those people feeling threatened in their privilege will and does surface. One only needs to look at the treatment of women in gaming communities to see what I mean.

Many of the experiences I had heard about before were in more insulated communities, often layered under the guise of pseudonyms or anonymous trolls. What's happening now feels bigger and more widespread. These behaviors are coming out of the darkness for all of us to see. And of course this exposes my latent privilege of working in tech and being white. Despite being a woman, I hadn't seen much of it before because I hadn't really looked.

So what are we supposed to do about this? For some experiences, offering the ability to create pseudonyms totally divorced from our real names and real lives can make the difference. There will still be bad actors, but the ease with which they can connect a persona to real addresses or family members lessens.

We can start by making sure that people are allowed to represent themselves in the way that works best for them: perhaps they are best known by a pseudonym in the context of certain topics, and that is how they identify online. That should be an allowed option for participation. Over the years, as activity has migrated from blogs to more mediated spaces like Facebook, which requires real names (a rule many still work around; my dog has a profile, showing that until someone complains, the process for new accounts isn't too strict), many voices have disappeared because their online identity, with major reputation attached to it, was a pseudonym. Frankly, the internet is a sadder place without these voices.

The ability to be totally anonymous should also be an option. We know what happens here (lots of trolls, lots of fake accounts), but there can also be a lot of safety for the marginalized, giving them the opportunity to have their voices heard.
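To make that spectrum concrete, here is a minimal sketch, in TypeScript with entirely hypothetical names, of how a product might model these identity modes and what each one exposes. It is an illustration of the design space, not a prescription or any real service's data model.

```typescript
// A minimal sketch of the online identity spectrum as a data model.
// All names here are hypothetical illustrations, not a real API.

type IdentityMode = "realName" | "pseudonym" | "anonymous";

interface IdentityProfile {
  mode: IdentityMode;
  displayName: string;        // what other members see
  verifiedLegalName?: string; // held privately by the service, if collected at all
  reputation: number;         // history/karma attached to the display identity
}

// Each mode trades off accountability, safety, and continuity differently.
function describeTradeoffs(profile: IdentityProfile): string {
  switch (profile.mode) {
    case "realName":
      return "High accountability; real-life consequences attach to every post.";
    case "pseudonym":
      return "Reputation and continuity without exposing a legal identity.";
    case "anonymous":
      return "Maximum safety for the marginalized; no persistent reputation.";
  }
}

// Example: a long-time community member known only by a handle.
const member: IdentityProfile = {
  mode: "pseudonym",
  displayName: "nightowl42",
  reputation: 4200,
};

console.log(describeTradeoffs(member));
```

The point of modeling it this way is that a pseudonym can carry just as much reputation as a real name; the legal identity is a separate, optional field rather than the key to participation.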

As I have been thinking about this more and more, one of the things I think is missing entirely is a more human (and humane) way of moderating and governing our social spaces. We, the tech industry, have made so many improvements with artificial intelligence and algorithms that we have forgotten there are real people on both sides of these encounters, and that not all issues can be solved with technology. Sometimes it takes a person to really see the bad actor, especially if that bad actor is technically following the rules.

In many of the stories I read about the wrong person being thrown off a service, or being "timed out," it seems to be because technology was making the decisions and not real people. Escalation is possible in many cases, but why should someone have to work so hard when they weren't the one in the wrong? Why should it take escalation for the problem to be recognized and addressed?
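As a sketch of what a more humane pipeline could look like, the snippet below (hypothetical names and thresholds, not any real platform's system) routes automated decisions below a confidence threshold to a human reviewer instead of auto-enforcing, so the burden of escalation never falls on the target of the bad behavior.

```typescript
// A minimal sketch of human-in-the-loop moderation, assuming a hypothetical
// classifier and review queue; not any real platform's implementation.

interface FlaggedItem {
  itemId: string;
  reporterId: string;
  authorId: string;
}

interface ClassifierResult {
  violates: boolean;
  confidence: number; // 0..1 from some automated model
}

type Decision =
  | { kind: "auto"; action: "remove" | "keep" }
  | { kind: "humanReview"; reason: string };

const CONFIDENCE_THRESHOLD = 0.95; // an assumed cutoff for illustration

// Only act automatically when the model is very sure; everything
// ambiguous goes to a person rather than triggering an automatic
// removal or time-out.
function route(item: FlaggedItem, result: ClassifierResult): Decision {
  if (result.confidence >= CONFIDENCE_THRESHOLD) {
    return { kind: "auto", action: result.violates ? "remove" : "keep" };
  }
  return {
    kind: "humanReview",
    reason: `Model confidence ${result.confidence.toFixed(2)} below threshold`,
  };
}

// Example: a borderline case is queued for a human instead of a time-out.
const decision = route(
  { itemId: "post-123", reporterId: "u-9", authorId: "u-7" },
  { violates: true, confidence: 0.62 }
);
console.log(decision);
```

The design choice being illustrated is simply that ambiguity defaults to human judgment; a rules-following bad actor scores as ambiguous, and a person, not an algorithm, makes the call.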

I see many companies, especially smaller ones, go through their checklist and generally just abdicate the identity question to Facebook or Google without offering more appropriate options. Because they take the FB or Google shortcut, they also don't offer or build in the checks and balances on the moderation/norms/standards side to keep things civil. In an early-adopter community this generally isn't needed, thanks to the self-selection and friends-of-friends growth that usually happens. But once past that phase, the tools and people should already be in place, prioritized over sexier new features, so that as growth happens and the cultural makeup of the service changes, the community can survive for the long term.

I'd like to see UX designers working on social features really dig into the online identity spectrum and understand the pros and cons of each possible approach and the issues on both sides [read more details about it here and here]. Teams need to understand their target users, not just the early adopters, and those users' concerns in context about how they are represented and how they are protected [read more about online harassment here], before settling on the easiest solution. I am trying to raise the ethical issues and the pros and cons of each option with the next generation as I teach. But in the meantime, the rest of us need to step up.

We've been talking about these issues for some time now [a rape in cyberspace from 1993] [a list apart article from 2006], and we are still debating the pros and cons of real names vs. pseudonyms vs. anonymous presentations of online identity. With the rise of police states across the world and newly emboldened online bad behavior, it's imperative that we offer better choices for our users: choices that allow both participation and accountability while still offering some semblance of privacy, as well as safety through more robust moderation tools.

You can also find me on medium.com 

erin
current: experience matters design :: senior-level interaction design and systems strategy consulting. former: partner, tangible user experience; founder of the public and internal Yahoo! pattern library; design director of UED teams responsible for designing solutions across key Yahoo! platforms: social media, personalization, membership, and vertical search.