Today, Twitch released its first-ever transparency report, a lengthy, stat-based look at the platform’s safety initiatives over the past year. It contains some interesting, albeit granular, information about Twitch’s efforts to cut down on hateful conduct, sexual harassment, and even terrorist propaganda. But it also fails to clear the haze from the question that has surrounded many of Twitch’s most perplexing decisions: Why?
Certainly, the report contains many interesting numbers. Encouragingly, the company says it has achieved “a 4X increase in the number of content moderation professionals” over the past year, meaning that if users file a report, someone is more likely to respond to it in a timely manner. Twitch did not, however, say how many content moderation professionals it currently employs, nor whether they’re in-house or contractors (aka the Facebook method, which has led to all sorts of issues over the years).
Twitch also pats itself on the back for greater moderation coverage, noting that between its AutoMod software and human moderators, 95% of live content on the platform was viewed by a moderator of some sort by the end of 2020. Most sections of the report focus on similar increases: chat messages removed by AutoMod and the blocked terms functionality, both of which allow streamers to automatically pre-screen messages for specific words and phrases, rose 61% between the first half of the year and the second. Manual message deletion on the part of creators and moderators was up a whopping 98% relative to the first half of the year, which Twitch attributed to the elephant in the room: a 40% increase in the overall number of channels on Twitch between the two halves of 2020.
Twitch also pointed to increases in the number of rule enforcements against reported users and channels. Total enforcements rose 41% over the course of the year, and the numbers reflect that in categories like hateful conduct and sexual harassment, violence and gore, nudity, and terrorist propaganda (Twitch claims this is extremely rare on its platform, but it also depends on what you classify as terrorism). The company also pointed to progress on the part of its Law Enforcement Response team, which made over 2,000 reports to the National Center for Missing & Exploited Children in 2020. Twitch, however, continues to have issues with young users making channels, leaving themselves open to potential predation.
The report contains a handful of other, similar data sets, most of which paint Twitch in a favorable light. Certainly, they’re a useful measure of Twitch’s growth in these areas, and broadly, the report mirrors similar documentation provided by platforms like Discord, Facebook, and Twitter. The problem with these kinds of reports, however, is that they have a way of appearing to say a lot while revealing very little. Twitch has offered numbers and a small amount of context, but streamers and viewers remain in the dark on major issues that came to light last year.
Replies and quote tweets on Twitch’s Twitter post about the transparency report, for example, are filled with questions about the status of Twitch’s investigations into reported sexual harassment (the ongoing nature of which has benefited accused harassers, some of whom can still stream on the platform), specific high-profile bans like that of Dr Disrespect, the lack of a trans tag and other discoverability tools for underserved communities, lengthy turnaround times on ban appeals (and data surrounding successful appeals vs denials), the Twitch employee who Kotaku reported last year was no longer with the company after accusations of sexual assault, data about DMCA takedowns, and the process by which Twitch applies its rules, which frequently leads to inconsistent outcomes.
Twitch concluded its post about the report by saying it will “look closely at the feedback we receive to inform how we can refine these reports moving forward.” If nothing else, it now has plenty of feedback to work with.