How Much It'd Currently Cost To Set Up Cloud Gaming For The US

People have been skeptical for quite some time about how well game-streaming technology can possibly work. Early reports from the PlayStation Now beta aren't doing much to help consumers feel better. Many folks say the technology just isn't there yet, but the real question is: will it ever be?


In a phrase? No, not for a long time yet. Even the best of setups have enough lag to cause dropped frames or missed inputs. Making everything comparable to playing on a console or a PC in your home would require a high-quality fiber optic connection to a datacenter that is as close as possible to you. All of that is very expensive. According to research from the Fiber to the Home Council, a fiber optic advocacy group, a project of that scale would take something on the order of $89.2 billion.

That was in 2009, but the paper was estimating the cost for 2015, assuming that 34.5 million households would be connected by then. So far we've been keeping pretty close to those estimates, with 25% of the 114 million households in the US having active fiber connections. With some of the changes in how the business of fiber optics is done, those costs come down for people in urban environments, but connecting the rural 20% will be just as expensive. The total cost of giving everyone in the US access to fiber shouldn't be too far off from that 2009 estimate.
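As a rough sanity check on those figures – ignoring the urban/rural cost split, and using only the numbers cited above:

```python
# Back-of-envelope check on the fiber build-out figures cited above.
# All inputs come from the article; the even split is a simplification.
TOTAL_COST = 89.2e9          # FTTH Council estimate, USD
US_HOUSEHOLDS = 114e6        # total US households
ACTIVE_FIBER_SHARE = 0.25    # share already on active fiber

cost_per_household = TOTAL_COST / US_HOUSEHOLDS
remaining = US_HOUSEHOLDS * (1 - ACTIVE_FIBER_SHARE)

print(f"Average cost per household: ${cost_per_household:,.0f}")
print(f"Households still to connect: {remaining / 1e6:.1f} million")
```

That works out to a little under $800 per household on average – before you account for rural connections costing far more than urban ones.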

But why is this even necessary? To explain that, I'll be looking at every step of playing a game: seeing an image on screen, reacting to it appropriately, having the game update the image, having the monitor display the new image, and then reacting again. That's the minimum cycle you go through every fraction of a second while playing, and the longer that cycle takes, the worse your gameplay experience is going to be.

Last week Tina Amini posted a video about how humans actually have quite a bit of lag built into our bodies and minds. Typically, it takes about 80 milliseconds for our eyes to see something and our brain to realize that we've seen it. That's not a hard rule, because, contrary to popular perception, your eyes work nothing like cameras. Instead of taking snapshots, you always have some cells sending chemical signals based on the kind of light hitting them. Each of them has its own refresh rate based on how quickly it can take up the chemicals it needs to function again, and they are all just slightly out of sync.

That's good for us because it allows our brain to get at least a partial image immediately. Then, based on prior experience, it starts filling in the extra bits. All of that still takes an enormous amount of brain-power, and that's why almost 30% of your brain cells are devoted to processing all of that visual information. It allows us to be masters of pattern recognition and dominate the animal world, but that comes at a cost – time. When we contrast that with our ears, which only need 2% of our brain power but can react in less than 1/1000th of a second, we start getting a clear picture (pun intended) of how tough this is. Anyone who's ever used a video editing program knows that modifying thousands of images all at once is tough work for any machine. Your brain isn't any different.


When you're playing games, though, you're not just watching. You're also interacting. That takes time too. There's even a whole website devoted to it. At Human Benchmark, you can test your own reaction time, and you'll notice it's probably a lot slower than you think. I came in at 170ms, and I'm known among friends for having superb reflexes. The median score for the site is around 215 milliseconds, and more than 10.4 million people have tested themselves there – a huge sample size. Those numbers align with some of my own hardware tests and some data from computer peripheral manufacturers.

Today's hardware is pretty fast. An average mouse or keyboard can send updated information to the computer between 125 and 1,000 times every second. Regardless of the settings, most of that lag is on you. Once the input is in the computer, it only takes a few more microseconds to get processed. The next big bottleneck is actually the input lag of the monitor. Again, video data is really tough for anything to handle – it's a huge amount of information. Monitors and graphics cards have to work extremely hard to display anything at all, and as Cnet's tests have shown, most setups take around 40ms to display a new frame. Then that updated visual data needs to be interpreted by your brain.
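That polling-rate range translates directly into a worst-case added delay – at N polls per second, an input can wait at most 1000/N milliseconds before it's sent:

```python
# Worst-case delay between device updates at common polling rates.
# At N polls per second, an input waits at most 1000/N ms to be sent.
for hz in (125, 500, 1000):
    gap_ms = 1000 / hz
    print(f"{hz} Hz polling: up to {gap_ms:.0f} ms between updates")
```

Even at the slow end of that range, the device adds at most 8ms – tiny next to the roughly 215ms median human reaction time.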


If you're keeping track, at this point in the process, just going through a full cycle of seeing something on screen, reacting, and then seeing and understanding the changes on screen would take about a third of a second. With that kind of latency, it's a miracle any of us can really play games at all. If you're playing a game locally with a console or computer, your troubles end there. If, however, you're trying to stream a game, there's a bit more to do.
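One plausible way to total the figures from the sections above (exactly how the 215ms reaction time overlaps with the 80ms of visual processing is debatable, so treat this as a rough budget, not a measurement):

```python
# A rough local-play latency budget using the article's figures.
reaction_ms = 215   # median Human Benchmark reaction time
display_ms = 40     # typical monitor/GPU display lag (Cnet tests)
visual_ms = 80      # brain processing the updated frame

local_total_ms = reaction_ms + display_ms + visual_ms
print(f"Local cycle: ~{local_total_ms} ms")
```

That lands at about 335ms – the "third of a second" cited above.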

So the most important number moving forward is ping: how long it takes for your computer to send a signal to some other computer and hear back. OnLive is still probably the most well-known streaming service we've seen. According to its own FAQ, you want a latency below 25ms. That makes sense, because the higher that number gets, the more likely you are to have some pretty big problems. I tested my own internet connection and, depending upon which server I picked, ended up with some very different results.


I live in Minneapolis, so the closest OnLive data center to me was in Illinois. My first ping test showed a latency of 18ms to a server around 350 miles away. To get some comparisons, I also checked some closer servers and found… higher, not lower, pings. In trying to find out why, I learned about what's called the "last mile" problem – something OnLive also mentions in its support FAQ.

Quite a few people think of fiber optics as a magical future technology, but unless you live in one of a few remote locations, you already use fiber optics for your internet. Connections between major cities like LA, New York, Tokyo, and London are all made with the absolute best optic cables money can buy right now. It makes sense: laying a new cable from LA to Tokyo is a lot of work, but both cities use a lot of bandwidth, and it's also just a few really expensive cables.


The "last mile" that connects you to the backbone of the internet is by far the most expensive part to build and maintain. You need tens of millions of super-expensive cables, and in many cases that means ripping up old houses, laying cables under busy city streets, and so on. That's construction, that's traffic jams, that's a lot of money. That's why trying to communicate with a server that's close to me but in the suburbs or out in a rural area can take twice as long or more than connecting to someone in Chicago. There's already a massive data link between the two cities that keeps it all nice and quick, but there's another problem.


I noticed something else when I ran my tests: I had much higher "jitter" when connecting to Chicago than to somewhere local. Jitter is the variance between ping results over repeated checks. Mine was 19ms, meaning that at worst I'd be sitting at 37ms. That's quite a bit higher than what OnLive suggests, and my internet connection is faster than 95% of those in the US. The reason for that occasionally higher latency is that while the link between the two cities is really fast, there are also a lot of individual connections, routers, and servers along that line, and if those get hit by intermittent traffic, your signals will be delayed and held for a short while before they can be routed through. During high traffic loads, it's entirely possible to get delayed by 25 milliseconds or more.
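The jitter idea is easy to sketch with a handful of ping samples. These values are made up for illustration (chosen to match the 18ms best case and 19ms spread reported above), and min-to-max spread is just one simple way to measure jitter:

```python
# Hypothetical repeated ping results to a distant server, in ms.
samples = [18, 21, 37, 19, 24, 30]

best = min(samples)
worst = max(samples)
jitter = worst - best  # simple spread measure across repeated checks

print(f"Best ping: {best} ms, worst: {worst} ms, jitter: {jitter} ms")
```

A single 18ms reading looks great, but it's the 37ms worst case that decides whether a streamed game feels responsive.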


That's not something likely to improve much as time goes on, either. As long as people use the internet and telecom companies scale infrastructure to demand, you'll always have bursts of excess data use that cause intermittent issues. That leaves us with 372 milliseconds of latency in the whole process. That might not sound like much more than what we had before, but it's more than 10% above what you'd have otherwise. You can always choose to play games locally, but your brain will never respond to inputs faster. It physically can't.
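Adding the streaming round trip on top of the roughly third-of-a-second local cycle described earlier gives the 372ms figure:

```python
# Streaming latency: the local cycle plus the network round trip.
local_ms = 335    # seeing, reacting, displaying (the "third of a second")
ping_ms = 18      # best-case ping to the Illinois data center
jitter_ms = 19    # observed worst-case variance on that route

streaming_total_ms = local_ms + ping_ms + jitter_ms
added_fraction = (ping_ms + jitter_ms) / local_ms

print(f"Streaming cycle: {streaming_total_ms} ms")
print(f"Added latency: {added_fraction:.0%} over playing locally")
```

The network only adds 37ms in this best case – but that's 37ms stacked on top of a budget that's already maxed out by human biology and display hardware.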

There's really only one viable solution to make cloud gaming work on a large scale: get everyone wired up with fiber while also moving data centers closer to consumers. If the server doing all of your gaming calculations is 350 miles away, you're never going to get the ping and the latency low enough to make it comparable to gaming on your computer or console. Based on a conversation with a local fiber ISP, if all of the hardware were local, all the connections were fiber, and everything were properly maintained, you could introduce as little as 3ms of additional lag – but the amount that infrastructure would cost is so far beyond reasonable it's frightening.


Cloud gaming is hypothetically possible, but even if you got past the fiber optic cost, you'd still need server farms with the hardware to run all of those games. Last year Nvidia began selling GPU boards designed for a large number of concurrent users – directly targeting cloud computing markets. Just one board ran $3,599. If we add in processors, RAM, and everything else, we're looking at a cost of more than $350 per customer – for the server-grade hardware alone. That figure doesn't include maintenance, datacenter construction costs, and so on. At that point, you, as a consumer, might as well buy a console and be done with it. It's less economically efficient to buy expensive hardware, put it all in one place, and deliver a product that is still lackluster.
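Here's one way the per-customer figure could shake out. The number of players sharing a board and the overhead multiplier are my assumptions, not from Nvidia – they're just plausible values that land near the $350 mark:

```python
# Hypothetical per-player server hardware cost.
BOARD_COST = 3599          # one multi-user GPU board, USD (from the article)
players_per_board = 12     # assumed concurrent players per board
overhead_factor = 1.2      # assumed markup for CPUs, RAM, storage, etc.

per_player = BOARD_COST / players_per_board * overhead_factor
print(f"~${per_player:,.0f} of server hardware per concurrent player")
```

And that hardware has to be replaced on a refresh cycle, kept cool, and kept powered – none of which the consumer pays for when the console sits under their own TV.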

More than five years ago, Richard Leadbetter, a well-known hardware critic and numbers guy at Eurogamer, said OnLive was impossible at the time. Unfortunately, it's still not viable, and it doesn't look like it will be shaking up the gaming world for a long time yet.


You're reading Numbers, a blog on Kotaku that examines games and culture through the lens of math and statistics. To contact the author of this post, write to or find him on Twitter @dcstarkey.


Al Roderick

Here's a number for you: 16 billion. That's a useful scaling factor for any kind of public expenditure project. (This is a public-spending kind of project even though all of this infrastructure is going to be funded and owned privately, because it's national infrastructure, and that always acts like a government project even when it isn't.)

Why 16 billion? It's 300 million citizens times 52 weeks times a dollar, rounded up. So about one dollar per resident per week. That $89 billion comes out to $5.50 a week per person. I know these kinds of things aren't evenly split among everyone like that, but it gives you an idea of how small the number really is. Big amounts of money aren't really that big in a big country.
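The arithmetic behind that scaling factor, spelled out:

```python
# The commenter's scaling factor: $1 per resident per week.
population = 300e6
weeks_per_year = 52

scaling_factor = population * weeks_per_year * 1   # ~15.6e9, rounded to 16e9
per_person_week = 89e9 / 16e9                      # fiber cost on that scale

print(f"Scaling factor: ${scaling_factor / 1e9:.1f} billion per year")
print(f"Fiber build-out: about ${per_person_week:.2f} per person per week")
```

So the whole national fiber build-out, spread over a year, is roughly the cost of a fancy coffee per person per week.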