When the text-heavy fantasy worlds of multi-user dungeons first invaded the mainframes of Essex University and the dial-ups of CompuServe, there were few rules in place and even fewer ways to enforce them.
But today's virtual worlds include cutting-edge technology designed not just to provide an immersive experience but also to hunt down the potential real-world predators, bullies and criminals lurking in online games.
The idea of actively policing how people play massively multiplayer online games didn't really take hold until the genre became not only a part of gamer parlance but also a viable commercial category, with the 1997 launch of Ultima Online.
Ultima Online struggled very publicly to find the proper balance between player freedom and the need to enforce rules. While the game was still in beta, developer Richard Garriott took his character in-game to address other players live. He was promptly assassinated.
Early in its life, Ultima Online became known for player-on-player hostility. When developers stopped allowing overt attacks, players came up with other ways to harass each other.
Garriott once told me how players would use crafting skills, like the ability to build chairs in-game, to quickly wall other players into makeshift prisons and demand ransom.
For many, Ultima Online was the online equivalent of the wild, lawless west.
But as the popularity and potential revenue of these games grew, so did the importance of moderation. Modern massively multiplayer online games have to deal not just with cheating and player hazing, but with gold running, character theft and even real-world crimes, such as players targeting underage players for sex or discussing crimes they want to commit or already have committed.
Carefully balanced moderation also helps sustain the life of an MMO. Too much can kill off player freedom and the desire to play; too little can allow troublesome players to scare off those who are in the game to have fun.
"MMOs are often compared to theme parks, they are fun destinations you can go hang out in with lots of other people," said Ryan Seabury, one of the founders of MMO developer NetDevil. "Statistically speaking, with a large enough group of people, you are going to get a small percentage of trouble makers, and as the saying goes one rotten apple spoils the bunch. Most theme parks I've been to have safety and security staff on hand monitoring all areas to ensure that inevitable small percentage does not disrupt the enjoyment of the majority patronage. So monitoring in MMOs is very much along the same lines of thinking. "
Seabury points out that while Auto Assault, NetDevil's car-themed post-apocalyptic massively multiplayer online game, had almost no moderation, the studio's latest MMO, LEGO Universe, sits at the other end of the spectrum.
"We didn't put very much effort into that side of things in Auto Assault from the development side other than very basic GM tools," he said. "So something we learned from that is investing in a bit of automation where it makes sense can go a long way."
LEGO Universe, which launched in late October, uses a state-of-the-art combination of live and automated moderation.
"You want to use automation where it makes sense, for example when someone has clearly and blatantly violated a policy," he said. "However, you can't rely on automated solutions to cover every case, or even most cases of poor conduct, as they too often are shades of gray infractions, not black and white. There are some things you just need humans to look for and handle."
Where games like Ultima Online eventually relied on rooms filled with people watching gamers play through their virtual world, LEGO Universe relies on a complex set of programs and artificial intelligence to spot potential problems.
The key to the monitoring program, which Seabury says includes LEGO trade secrets, is closely watching how people behave in-game to try to figure out who the person behind the avatar really is.
"Like pulling weeds, you want to take bad players out at the root so they don't come back," Seabury explained. "We look for behavioral trends that correlate with known patterns of play for adults and children, and based on this human moderators are automatically notified of potential problem cases to investigate in real time. The moderators are trained and empowered to take action immediately as appropriate. "
That means if a player is signed in as an 8-year-old boy, but the program believes they are chatting or behaving like a 40-year-old, moderators will be alerted to watch them.
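Seabury won't detail the actual signals, which he says are trade secrets, but the idea can be sketched with invented features: compare a player's observed chat and play habits against rough "typical child" and "typical adult" profiles, and alert a moderator when the declared age and the observed behavior point in different directions. Everything in the sketch below, features, profile numbers and thresholds alike, is hypothetical.

```python
# Hypothetical age/behavior mismatch detector. The real LEGO Universe signals are
# trade secrets; these features, profiles and thresholds are invented to show the idea.

from dataclasses import dataclass

@dataclass
class BehaviorSample:
    avg_word_length: float      # longer words tend to correlate with older players
    avg_message_length: float   # characters per chat message
    late_night_fraction: float  # share of play time between 11pm and 5am local

# Toy "known patterns of play" for two broad groups.
CHILD_PROFILE = BehaviorSample(3.8, 22.0, 0.02)
ADULT_PROFILE = BehaviorSample(5.1, 58.0, 0.25)

def distance(a: BehaviorSample, b: BehaviorSample) -> float:
    # Simple normalized distance between two behavior profiles.
    return (abs(a.avg_word_length - b.avg_word_length) / 5.0
            + abs(a.avg_message_length - b.avg_message_length) / 60.0
            + abs(a.late_night_fraction - b.late_night_fraction))

def check_account(declared_age: int, observed: BehaviorSample) -> bool:
    """Return True if a human moderator should be alerted to watch this player."""
    expected = CHILD_PROFILE if declared_age < 13 else ADULT_PROFILE
    other = ADULT_PROFILE if declared_age < 13 else CHILD_PROFILE
    # Alert when behavior looks closer to the *other* group than to the declared one.
    return distance(observed, other) < distance(observed, expected)

# Example: an account registered as an 8-year-old whose chat behaves like an adult's.
if check_account(8, BehaviorSample(5.3, 61.0, 0.3)):
    print("alert: declared age does not match observed behavior")
```

The real system presumably weighs many more signals and runs continuously, but the core move is the same: match behavior against known patterns of play for each group and hand mismatches to a human in real time.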
This cuts down significantly on the human workforce needed to keep an eye on so vast a virtual world. It also means moderation can be applied almost universally, instead of relying on spot checks.
"Ten years ago it was all very manual, non-real time and very exploitable, which allowed for more than a few memorable anecdotes," Seabury said. "As an industry we quickly realized the value of eliminating one bad subscriber to keep the nine they would have caused to quit."