Discord, the only way young people communicate via voice after we collectively agreed to stop answering phone calls several years ago, is trying to be more open about how it handles harassment, threats, doxxing, and other forms of abuse on its platform. To that end, it plans to release regular "transparency reports," the first of which is available to peruse now.
The idea, says Discord, is to keep people clued into the decision-making underlying the 250-million-user chat service so that people can understand why things do and, in some cases, don't get done.
"We believe that publishing a Transparency Report is an important part of our accountability to you; it's kind of like a city's crime statistics," said the company. "We made it our goal not only to provide the numbers for you to see, but also to walk you through some obstacles that we face every day so that you can understand why we make the decisions we do and what the results of those decisions are."
The numbers are fascinating, albeit not wholly unexpected if you display symptoms of being Too Online. First, a graph of reports received by Discord from the start of the year until April:
"Other" is the most reported category, but harassment comes in behind it at 17.3 percent. Significant pieces of the multi-colored pie also go to hacks/cheats, threatening behavior, NSFW content, exploitative content, doxxing, spamming, raiding, malware, and self-harm. Discord does not mince words in describing what these categories refer to. Exploitative content is defined as "A user discovers their intimate photos are being shared without their consent; two minors' flirting leads to trading intimate images with each other," while one example of NSFW content is "A user joins a server and proceeds to DM bloody and graphic violent images (gore) to other server members."
After Discord receives a report, the company's trust and safety team "acts as detectives, looking through the available evidence and gathering as much information as possible." Initially, they focus on reported messages, but investigations can expand into entire servers "dedicated to bad behavior" or historical patterns of rule-breaking. "We spend a lot of time here because we believe the context in which something is posted is important and can change the meaning entirely (like whether something's said in jest, or is just plain harassment)," wrote Discord.
If thereâs a violation, the team then takes action, which can mean anything from removing the content in question to removing a whole server from Discord. However, the percentage of reports that caused Discord to spring into action earlier this year was relatively small. Just 12.49 percent of harassment reports got actioned, for example. Other categories saw Discord intervene more often, but most percentages were still relatively small: 33.17 percent for threatening behavior, 6.86 percent for self-harm, 44.34 percent for cheats/hacks, 41.72 percent for doxxing, 60.15 percent for malware, 14.74 percent for exploitative content, 58.09 percent for NSFW content, 28.72 percent for raiding, and the big outlier, 95.09 percent for spam.
Discord explained that action percentages are lower than you might expect because many reports simply don't pass muster. Some are false or malicious, with people taking words out of context or banding together to report innocent users. Others demand too much for too little. "We may receive a harassment report about a user who said to another user, 'I hate your dog,' and the reporter wants the other user banned," said the company. Other reports that Discord doesn't action might include information, but no concrete evidence. Lastly, users sometimes file reports under the wrong category; Discord says it still actions those, though they may not count toward the per-category action percentages.
From January to April, the biggest contributors to bans were spam and exploitative content. Spam accounted for 89 percent of account bans, a total of 119,244 accounts. On the exploitative content side of things, Discord banned 10,642 accounts, and it says it's doing its best to squelch that issue, perhaps due to its well-documented troubles with child porn in the past.
"We've been spending significant resources on proactively handling Exploitative Content, which encompasses non-consensual pornography (revenge porn/'deep fakes') and any content that falls under child safety (which includes child sexual abuse material, lolicon, minors accessing inappropriate material, and more)," the company wrote. "We think that it is important to take a very strong stance against this content, and while only some four thousand reports of this behavior were made to us, we took action on tens of thousands of users and thousands of servers that were involved in some form of this activity."
Discord also removed thousands of servers during the first few months of the year, mostly focusing on hacks/cheats. However, servers focused on hate speech, harassment, and "dangerous ideologies" (again, an area where Discord has struggled in the past) are also a big focus. On a related note, the company also discussed its response to videos and memes born of the March 14 Christchurch shooting.
"At first, our primary goal was removing the graphic video as quickly as possible, wherever users may have shared it," the company said. "Although we received fewer reports about the video in the following hours, we saw an increase in reports of 'affiliated content.' We took aggressive action to remove users glorifying the attack and impersonating the shooter; we took action on servers dedicated to dissecting the shooter's manifesto, servers in support of the shooter's agenda, and even memes that were made and distributed around the shooting… Over the course of the first ten days after this horrific event, we received a few hundred reports about content related to the shooting and issued 1,397 account bans alongside 159 server removals for related violations."
Discord closed out the report by saying it believes this sort of transparency should be "a norm" among tech companies, so that people can "better determine how platforms keep their users safe." The Anti-Defamation League, an organization dedicated to fighting bigotry and defending civil rights, agrees.
"Discord's first transparency report is a meaningful step toward real tech platform transparency, which other platforms can learn from," the organization wrote in a statement about Discord's report. "We look forward to collaborating with them to further expand their transparency efforts, so that the public, government and civil society can better understand the nature and workings of the online platforms that are and will continue to shape our society."