
The Wikimedia Foundation, a nonprofit group that hosts, develops, and controls Wikipedia, has announced that it won’t be moving forward with plans to add AI-generated summaries to articles after it received an overwhelmingly negative reaction from its army of dedicated (and unpaid) human editors.
As first reported by 404 Media, Wikimedia quietly announced plans to test AI-generated summaries on the free online encyclopedia, which has become one of the last great bastions of knowledge on the modern internet. In a page posted on June 2 in the backrooms of Wikipedia titled “Simple Article Summaries,” a Wikimedia rep explained that, following discussions about AI at a 2024 Wiki conference, the nonprofit would run a two-week test of machine-generated summaries. These summaries would sit at the top of the page and be marked as unverified.
Wikimedia intended to start offering these summaries to a small subset of mobile users starting on June 2. The plan to add AI-generated content to the top of pages received an extremely negative reaction from editors in the comments below the announcement.
The first replies, from two different editors, were a simple “Yuck.”
Another followed up with: “Just because Google has rolled out its AI summaries doesn’t mean we need to one-up them. I sincerely beg you not to test this, on mobile or anywhere else. This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source.”
“Nope,” said another editor. “I don’t want an additional floating window of content for editors to argue over. Not helpful or better than a simple article lead.”
A day later, after many, many editors continued to respond negatively to the idea, Wikimedia backed down and canceled its plans to add AI-generated summaries. Editors are the lifeblood of the platform, and if too many of them get mad and leave, entire sections of Wikipedia would quickly rot, likely leading to the slow death of the site.
“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation rep told 404 Media. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course. We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”
In other words: We didn’t give anyone a heads up about our dumb AI plans and got yelled at by a bunch of people online for 24 hours, and we won’t be doing the bad thing anymore.
Wikipedia editors have been fighting the good fight against AI slop flooding what has quickly become one of the last places on the internet not covered in ads, filled with junk, or locked behind an excessively expensive paywall. It is a place that contains billions of words written by dedicated humans around the globe. It’s a beautiful thing. And if the Wikimedia Foundation ever fucks that up with crappy AI-generated garbage, it will be the modern digital equivalent of the Library of Alexandria burning to the ground. So yeah, let’s not do that, okay?