If you've spent much time online—playing games, talking on social media, using message boards—chances are you've gotten some abuse. Someone's called you a fag, or a dumb bitch, or suggested they might find out where you live and skullfuck you to death.
When that happens, we generally take it as a cost of doing business online. "It's the Internet," we say, licking our wounds and trying to focus on other things. "That's how it goes."
But that's not really okay. It's not a good enough answer, and on some level, we all know it. The Internet is where so many of us live, play, work, and socialize. It's an essential part of our lives, as "real" to many of us as any school or office or city street. Is it really okay that we've somehow passively decided, as a community, that this sort of behavior is par for the course?
In a new feature at Wired, writer Laura Hudson (disclosure: we're dating) has shared the results of months of research and interviews about online harassment. In it, she focuses not on harrowing stories of victims (believe me, there are plenty), but rather on possible solutions. She argues that long-term solutions won't be found just by banning trolls or removing the few most toxic members of a community. Rather, it's about a community deciding what its values and norms are, clearly communicating those norms, and giving community members the tools to help enforce them.
Imagine how it'd be if you walked up to someone at your job and screamed in their face that they were a faggot jew and you were going to track them down at home and kill them. It wouldn't be okay, right? People would say something, there'd be outcry as soon as you started yelling, you'd get called into an office, reprimanded and possibly fired. That's because your office community has a set of cultural norms in place that clearly state that it's not okay to behave that way. Most real-life communities have these sorts of norms. A lot of internet communities, however, have no such established norms, and many users have come to assume that anything goes. So it has been, and so it shall forever be. Welcome to the Internet.
Laura and I have spent many hours over the last several months discussing the article as she worked on it—her interviews, new perspectives, upended assumptions, and so forth. It's been instructive for me, and I've come away with a greatly refined perspective on the issue. Like her, I'm convinced that the best way for communities to improve their level of discourse and curb abuse and harassment is for the people in charge of those communities to establish better, more consistent social norms, and to put in place tools that let community members help enforce them.
League of Legends may be famous for having a toxic community, but the game's developers at Riot are actually doing some groundbreaking work in their attempts to address and improve things. Among their more successful endeavors is the Tribunal System, which allows a jury of players to vote on offending behavior and mete out punishments, including bans. Riot has also assembled a team to analyze player behavior, and brought together staff members with degrees in psychology and neuroscience to help better understand the dynamics at play in League games and in the community at large. It's working. From Wired:
This process led them to a surprising insight—one that "shaped our entire approach to this problem," says Jeffrey Lin, Riot's lead designer of social systems, who spoke about the process at last year's Game Developers Conference. "If we remove all toxic players from the game, do we solve the player behavior problem? We don't." That is, if you think most online abuse is hurled by a small group of maladapted trolls, you're wrong. Riot found that persistently negative players were only responsible for roughly 13 percent of the game's bad behavior. The other 87 percent was coming from players whose presence, most of the time, seemed to be generally inoffensive or even positive. These gamers were lashing out only occasionally, in isolated incidents—but their outbursts often snowballed through the community. Banning the worst trolls wouldn't be enough to clean up League of Legends, Riot's player behavior team realized. Nothing less than community-wide reforms could succeed.
Some of the reforms Riot came up with were small but remarkably effective. Originally, for example, it was a default in the game that opposing teams could chat with each other during play, but this often spiraled into abusive taunting. So in one of its earliest experiments, Riot turned off that chat function but allowed players to turn it on if they wanted. The impact was immediate. A week before the change, players reported that more than 80 percent of chat between opponents was negative. But a week after switching the default, negative chat had decreased by more than 30 percent while positive chat increased nearly 35 percent. The takeaway? Creating a simple hurdle to abusive behavior makes it much less prevalent.
The team also found that it's important to enforce the rules in ways that people understand. When Riot's team started its research, it noticed that the recidivism rate was disturbingly high; in fact, based on number of reports per day, some banned players were actually getting worse after their bans than they were before. At the time, players were informed of their suspension via emails that didn't explain why the punishment had been meted out. So Riot decided to try a new system that specifically cited the offense. This led to a very different result: Now when banned players returned to the game, their bad behavior dropped measurably.
Riot's approaches are fascinating, and as the company has demonstrated by testing them on its massive userbase, a lot of them actually work. Lin's GDC talk was great; you can watch a video of the whole thing here, and I really recommend it.
Games, with their closed communities and well-funded community management teams, have a terrific opportunity to blaze trails that (hopefully) might be followed in some ways by larger social networks like Twitter and Facebook, where harassment is still rampant and many users, particularly women, are besieged so constantly and unpredictably that they opt to forgo the service completely. Obviously no two online communities are created equal, and things that work for League of Legends might not work or might even be destructive if implemented someplace like Twitter. But the overall philosophy remains: If services like Twitter and Facebook want to get serious about reducing abuse and harassment on their networks, they need to be investing in solutions as heavily as Riot is.
Here at Kotaku, we've had plenty of our own challenges over the years. I've worked here almost three years, and during the first year or two I got the sense that readers didn't always understand Kotaku's commenting and community policies. Sometimes we'd ban readers who were abusive or spewed hate-filled language, but other times the space below a post would be festooned with awful garbage and anonymous hate-speech with nary a moderator in sight.
That said, our Editor-in-Chief Stephen Totilo's post last year, "A Note About 'Brutal' Comments and a Kotaku For Everyone," was actually very much in the spirit of community norm-enforcement that Riot and others advocate.
Rather than just tell Kotaku staffers to continue to unfollow and block abusive commenters, Stephen laid out what Kotaku's community norms should be, and who this site is for:
We still want readers to feel free to agree or disagree with our articles and say so on the site. We still encourage wit, smart argument and bold opinions. We still welcome debate. We still, as before, will diminish or even block the visibility of comments by those who simply attack Kotaku writers or readers.
Today I am also committing to expanding our discussion moderation to push back against any tide of comments that fail the test of being things that we believe you'd say to the face of the people you're commenting about. We imagine that any of our more than five million readers per month might disagree with something on our site, and we are confident that any of those five million can find a way to say so while getting over what is still a low bar.
In other words, this is a community, so act like it. Of course, that one post didn't solve everything—it's still on us writers to moderate conversation, get rid of spam and abuse, and promote the best discussions. And unlike League of Legends, Kinja doesn't have a built-in Tribunal-like way for users to police one another, short of responding to and shooting holes in lousy comments—which you guys do often and admirably—though readers can report abusive comments to firstname.lastname@example.org, and we encourage you to do so. But I do think that in the wake of Stephen's article, discourse on our site has improved significantly, and that these days it's better than it's ever been. We've miles to go, of course, but every other internet community beyond a certain size has miles to go with us.
In order to make online spaces safer and more welcoming, community members themselves do need to get involved. But first, the people who own and run those communities need to decide what kind of an environment they want, to clearly articulate what that looks like, and to give users the tools to help make it happen.
An earlier version of Laura's article started out with a metaphor that I really liked. It went something like this: When we talk about abuse and harassment on the internet, we talk about it the same way we talk about natural disasters. We throw our hands up and say, hey, what can you do? We can't stop internet abuse any more than we can stop the rain from falling. "And on the Internet," she wrote, "it's always raining."
It doesn't have to be that way. Yes, there will always be jerks out there. Somewhere, it'll always be raining. But we don't have to just suck it up and weather the storm; together we can build shelter.