Intel Sees Raytraced Games In The Near Future


Raytracing is a method of generating a computer image by tracing rays of light through an image plane. The process is similar to how light bounces off objects in nature, determining color, sheen, luminosity, and so on. Whereas other rendering methods have to fake their special effects, shadows, bloom, and other popular lighting techniques all occur as a natural byproduct of raytracing. The problem is that raytracing is very resource intensive, making it great for pre-rendered applications but not-so-great for on-the-fly applications like games. According to Intel's Michael Vollmer, that's a fact that could change sooner than we think.
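As a rough sketch of the idea described above (the single-sphere scene, function names, and light direction are all invented for illustration), a minimal raytracer shoots one ray from the eye through each pixel of the image plane and shades whatever surface it hits:

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the
    # nearest positive t (direction is assumed normalized, so a == 1).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(px, py, width, height):
    # Shoot a ray from the eye through pixel (px, py) on the image plane.
    origin = (0.0, 0.0, 0.0)
    x = (px + 0.5) / width * 2 - 1
    y = 1 - (py + 0.5) / height * 2
    length = math.sqrt(x * x + y * y + 1.0)
    direction = (x / length, y / length, -1.0 / length)
    # Toy scene: one sphere sitting in front of the camera.
    center, radius = (0.0, 0.0, -3.0), 1.0
    t = ray_sphere(origin, direction, center, radius)
    if t is None:
        return 0.0  # background is black
    # Shade the hit point with simple Lambertian (cosine) lighting.
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    light = (0.577, 0.577, 0.577)  # direction toward a distant light
    return max(0.0, sum(n * l for n, l in zip(normal, light)))
```

Running `trace` for every pixel of a 64x64 grid yields a shaded sphere; a real renderer would recurse at each hit to pick up reflections and shadows, which is exactly where the cost explodes.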

We keep in touch with companies all over the world. I dare say that in two to three years' time we will see something. There are already some individual approaches, especially in the science sector, which show that raytracing algorithms scale very well with the number of cores. But the migration to a new programming technology takes years; raytracing is still at an early stage.

We've already seen crude attempts at raytracing Quake 4, with pretty spectacular results. Those of you wondering where graphics could go from here now have your answer. Raytraced games in 2 to 3 years, says Intel [PCGH]

@pasquinelli: No, I'm saying to find a surface, then shoot rays to find more surfaces that could contribute light to the first surface. That is recursive, since those surfaces could require more rays to be fired from them to find the illumination of them in addition to primary light rays. For some cases, like the red and green walls transferring colour to the white ceiling, it can be fairly inexpensive because the walls are low frequency, and thus you can sample them fairly sparsely without getting aliasing problems. For things like the caustic, or light bouncing off one or more mirrors before reaching a diffuse surface, the light is high frequency, which requires lots of rays to avoid aliasing. Shooting 1000 rays from a single ray-surface intersection doesn't stop being raytracing, it stops being efficient. This is why people came up with stuff like photon mapping in the first place.
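The commenter's point about sampling rates can be sketched as a toy Monte Carlo gather (the scene, numbers, and function names here are invented for illustration): incoming light at a surface point is estimated by averaging random rays, and a narrow, high-frequency feature like a caustic needs far more rays before the estimate stops being noisy.

```python
import math
import random

def incoming_light(angle):
    # Toy "scene" over [0, pi): a dim hemisphere plus one narrow,
    # bright, caustic-like spike, i.e. high-frequency illumination.
    return 50.0 if abs(angle - 1.0) < 0.01 else 0.5

def gathered_illumination(n_rays, rng):
    # Monte Carlo gather: shoot n_rays random secondary rays from the
    # hit point and average what they see, scaled to estimate the
    # integral of incoming_light over [0, pi).
    total = 0.0
    for _ in range(n_rays):
        angle = rng.uniform(0.0, math.pi)
        total += incoming_light(angle)
    return (total / n_rays) * math.pi

# The exact integral is 0.5*pi + (50.0 - 0.5) * 0.02, roughly 2.56.
# With only a handful of rays the spike is usually missed entirely
# (aliasing/noise); with many rays the estimate converges.
```

A few rays almost always miss the 0.02-radian spike and report only the dim base light, which is the aliasing problem the comment describes; this is why low-sample gathers look noisy and why techniques like photon mapping were invented.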