What Good Is A Computer Processor That Does Bad Math?


One of the core functions of a computer processor is to perform the basic arithmetical functions of the system. Why would we want a processor that gets those basic calculations wrong? Carnegie Mellon's Joseph Bates has the answer.


Your standard computer processor is on the larger side and consumes a fair amount of power, but it gets the job done as far as math is concerned, crunching numbers with pinpoint accuracy.

But not all computations a computer performs need pinpoint accuracy, a fact that Bates, an adjunct professor of computer science at Carnegie Mellon University, took advantage of in designing a processor that can perform tens of thousands of calculations simultaneously using sloppy arithmetic.


A meeting with MIT Media Lab researcher Deb Roy helped Bates come up with one potential use for this highly parallel, slightly erroneous processor: video processing. The pair knew that algorithms used to process visual data were more failure-prone than most, with a 50 percent success rate considered good. Loosening the margin of error by a percentage point or so shouldn't make much difference in the results.

Roy and Bates, funded by the U.S. Office of Naval Research, got together in May of last year to simulate the sloppy-math processor, tweaking an existing algorithm that distinguishes background from foreground objects in images so that the results of its arithmetic were randomly perturbed by between zero and one percent.
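The simulation approach described above can be sketched in a few lines. The following toy is my own construction, not the team's actual algorithm: it perturbs the arithmetic of a crude background-subtraction test by a random relative error of up to one percent and counts how many pixel classifications change. The pixel values, threshold, and error model are all illustrative assumptions.

```python
import random

random.seed(0)

def sloppy(x, max_err=0.01):
    # Stand-in for low-precision hardware: perturb a value by a random
    # relative error of up to one percent. (Illustrative only; not a
    # model of the actual chip's arithmetic.)
    return x * (1 + random.uniform(-max_err, max_err))

# Toy background subtraction: a pixel counts as "foreground" if it
# differs from the stored background by more than a threshold.
N = 200_000
background = [random.randint(0, 255) for _ in range(N)]
frame = [b + random.randint(-25, 55) for b in background]
THRESHOLD = 20

exact = [abs(f - b) > THRESHOLD for f, b in zip(frame, background)]
noisy = [abs(sloppy(f) - sloppy(b)) > THRESHOLD
         for f, b in zip(frame, background)]

# Only pixels whose difference sits right at the threshold can flip.
mismatches = sum(e != n for e, n in zip(exact, noisy))
print(f"disagreeing pixels: {mismatches} of {N} "
      f"({1e6 * mismatches / N:.0f} per million)")
```

Because a one-percent perturbation can only flip pixels whose difference lies within a point or two of the threshold, the two classifications agree almost everywhere, which is the effect the researchers observed.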

"The difference between the low-precision and the standard arithmetic was trivial," says George Shaw, a graduate student in Roy's group who ran the simulations. "It was about 14 pixels out of a million, averaged over many, many frames of video." "No human could see any of that," Bates adds.

What makes the sloppy-math processor more efficient than a normal one? For one, the lack of precision allows for much smaller cores: roughly 1,000 of them can fit on a single chip, whereas today's most advanced standard CPUs top out at around 12 cores.

The way the chip communicates is more efficient as well. Where regular multicore chips allow any core to talk to any other, Bates's chip only allows communication between adjacent cores, which is far cheaper. Of course, that means the focus of the processor's calculations must be much tighter, limited to workloads that can be divided into small, local tasks, like video editing or image processing.
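The restriction to neighbor-only communication suits stencil-style computations, where each output depends only on nearby values. Here is a minimal sketch, with the tile size and smoothing kernel chosen arbitrarily for illustration, of how a job can be split so that each simulated "core" exchanges only boundary values with its adjacent neighbors:

```python
def smooth_tile(left_halo, tile, right_halo):
    # 3-point moving average: a core needs only its own tile plus one
    # boundary value received from each adjacent core.
    padded = [left_halo] + tile + [right_halo]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

signal = list(range(16))                          # data to process
tiles = [signal[i:i + 4] for i in range(0, 16, 4)]  # one tile per "core"

result = []
for k, tile in enumerate(tiles):
    # Each core fetches one value from its left and right neighbors only;
    # edge cores reuse their own boundary value.
    left = tiles[k - 1][-1] if k > 0 else tile[0]
    right = tiles[k + 1][0] if k < len(tiles) - 1 else tile[-1]
    result.extend(smooth_tile(left, tile, right))

# The same computation done globally, for comparison:
padded = [signal[0]] + signal + [signal[-1]]
expected = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, 17)]
assert result == expected
```

The tiled version reproduces the global result exactly, yet no core ever needs data from a non-adjacent core, which is why problems of this shape map well onto such a chip.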


Former Intel architect Bob Colwell suggests that the chips could also be used in similar fashion to the graphics processing units found on today's video cards, producing 3D images that "probably don't need to be rendered perfectly."

The chip would work in conjunction with a standard processor, taking on heavy, narrowly focused tasks to lighten the regular chip's load.


It's a wonderful idea that makes perfect sense, but Bates sees one potentially major flaw in his design.

"There's going to be a fair amount of people out in the world that as soon as you tell them I've got a facility in my new chip that gives sort-of wrong answers, that's what they're going to hear no matter how you describe it," he says. "That's kind of a non-technical barrier, but it's real nonetheless."


The surprising usefulness of sloppy arithmetic [Physorg.com]



Might be exposing some ignorance now, but I thought CPUs were just switches. Really really small, silicon diode switches. And all they really do is turn on and off, hence computers being in binary.

Hence, they don't actually /do/ anything. The software /does/ the stuff, via the results of the switches (organised into logic gates by the software?)

If so, is this just new CPU software or some new way of switching?