Develop - Issue 92 - March 2009


OPINION | ALPHA

COMMENT: CODING

Deferred Shading Revisited by David Jefferies, Black Rock Studio

Our deferred shading system got its first real work-out for our Vertical Slice recently, so I thought it'd be interesting to look back and see if the technique justifies the hype. To recap, a deferred shading system is one in which the lighting is deferred until the post-processing phase. When the geometry is initially rendered into the frame buffer, simple shaders without lighting are used, while at the same time the GPU writes information about the material of each pixel into a G-buffer. Then, during the post-processing phase, the lights are rendered in screen space using the information from the G-buffer.

For the programmers, the deferred shader has opened up post-processing avenues that hadn't been available to us before. Per-pixel motion blur was the stand-out example and something that fits our game perfectly. Because we're already maintaining multiple render targets, the additional cost of motion blur is only writing the motion vectors into the G-buffer and performing the final blur pass. Having all the information in the G-buffer available at the post-processing phase enables more advanced post-processing than was previously possible. For example, one of the entries in the G-buffer is the pixel normal (required for the lighting calculation), which means we can do higher-quality varieties of screen-space ambient occlusion or screen-space directional occlusion.

As expected, our post-processing phase is now far more important (and expensive) than before. In the past our post-processing would consist of tone mapping, some colour filters and maybe a touch of depth of field, with everything else going through the vertex units. Now we add diffuse and specular lighting, light-scattering, screen-space ambient occlusion, screen-space shadow maps and a colour cube to the phase. By moving these effects into the post-processing phase we make the initial geometry rendering phase faster and gain the desirable property that lighting and shadow receiving become independent of the complexity of the underlying mesh. Also, because the lighting is applied in screen space, the lighting shaders only run on pixels which are visible. We expect that by the end of the project we will be spending 50 per cent of our render time in the post-processing phase.

We have 96 bits available in our G-buffer, spread over three 32-bit render targets, and with this we're able to represent almost all the materials we need. Any materials that can't be represented in this way get handled separately by reserving eight bits of the G-buffer for a material ID.

For the artists, deferred shading is all about the extra lights. With our old system we had a light for the sun and a kicker, and that was about it. Any other light effects had to be baked into the geometry as an off-line process, which meant the lighting didn't react well when the environment changed. With the deferred shader the artists can place down literally hundreds of lights for illuminating the geometry in real time.

There are restrictions, however. Firstly, because the lights are rendered in screen space, their expense is proportional to their size on screen. Lots of localised lights are fine, but if you start putting in big lights that can get close to the camera then you'll quickly eat up your fill rate. Secondly, remember that strong lights look strange unless they cast shadows and, while deferred shading can help with the cost of receiving shadows, your engine will still pay the cost of casting them in the first place.

It's clear that disentangling the lighting from the vertex processing is conceptually the right thing to do, and Microsoft and ATI have both predicted that we'll all be doing it in the future. But the nagging question has been whether the performance costs outweigh the gains on the current generation of hardware, and the answer really depends on the type of game you're making. If you do decide to go deferred then, speaking from experience, it won't be long before your artists start requesting the game be set at midnight so they can make the most of the mood lighting. So don't put the moonlight tech on hold just yet.

David Jefferies started in the industry at Psygnosis in Liverpool in 1995, eventually working on Global Domination and WipEout 3. He later moved to Rare, where he worked on the Perfect Dark and Donkey Kong franchises. Next came a move down to Brighton to join Black Rock Studio (then known as Climax Racing) in 2003. On this generation of consoles he has been the technical director of MotoGP'06 and MotoGP'07 before starting work on new racer Split/Second.
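To make the 96-bit G-buffer concrete, here is a minimal C sketch of one possible packing across three 32-bit render targets, with eight bits reserved for a material ID as the column describes. The channel layout itself is an assumption for illustration; the column does not specify Black Rock's actual format.

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical 96-bit G-buffer texel: three 32-bit render targets.
 * The column gives only the totals (96 bits, 8-bit material ID);
 * this particular channel assignment is an illustrative guess. */
typedef struct {
    uint32_t rt0; /* albedo  R:8 G:8 B:8 | material ID:8        */
    uint32_t rt1; /* normal  x:8 y:8 z:8 | specular power:8     */
    uint32_t rt2; /* motion vector x:16 y:16 (for motion blur)  */
} GBufferTexel;

/* Map a component in [-1, 1] to 8 bits and back. */
static uint8_t encode_snorm8(float v)
{
    if (v < -1.0f) v = -1.0f;
    if (v >  1.0f) v =  1.0f;
    return (uint8_t)((v * 0.5f + 0.5f) * 255.0f + 0.5f);
}

static float decode_snorm8(uint8_t u)
{
    return (u / 255.0f) * 2.0f - 1.0f;
}

/* Pack a surface normal and specular power into the second target. */
static uint32_t pack_rt1(float nx, float ny, float nz, uint8_t spec_power)
{
    return ((uint32_t)encode_snorm8(nx) << 24) |
           ((uint32_t)encode_snorm8(ny) << 16) |
           ((uint32_t)encode_snorm8(nz) <<  8) |
           (uint32_t)spec_power;
}

/* The reserved 8-bit material ID lives in the low byte of RT0. */
static uint8_t material_id(const GBufferTexel *t)
{
    return (uint8_t)(t->rt0 & 0xFFu);
}
```

Materials whose ID is flagged as "special" would then be shaded by a separate pass, exactly as the column suggests for materials the 96-bit encoding can't represent.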
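The screen-space lighting pass itself would be a pixel shader on the GPU, but the C loop below (reading the same hypothetical snorm8 normal encoding) shows the key property the column highlights: the pass runs once per visible screen pixel, so its cost depends on resolution and light coverage, not on the complexity of the underlying mesh.

```c
#include <stddef.h>
#include <stdint.h>

/* Decode an 8-bit channel back to [-1, 1] (matches a hypothetical
 * snorm8 G-buffer encoding; the real format is not given in the column). */
static float decode_snorm8(uint8_t u)
{
    return (u / 255.0f) * 2.0f - 1.0f;
}

/* Deferred directional-light pass over an already-rendered G-buffer.
 * rt1[i] holds the normal as x:8 y:8 z:8 in its top 24 bits; out[i]
 * accumulates the diffuse term max(0, N.L) * intensity.  Note the loop
 * visits each screen pixel exactly once, regardless of how much
 * geometry was rendered into the G-buffer. */
static void deferred_directional_light(const uint32_t *rt1, float *out,
                                       size_t pixel_count,
                                       float lx, float ly, float lz,
                                       float intensity)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        float nx = decode_snorm8((uint8_t)((rt1[i] >> 24) & 0xFFu));
        float ny = decode_snorm8((uint8_t)((rt1[i] >> 16) & 0xFFu));
        float nz = decode_snorm8((uint8_t)((rt1[i] >>  8) & 0xFFu));
        float ndotl = nx * lx + ny * ly + nz * lz;
        out[i] += (ndotl > 0.0f ? ndotl : 0.0f) * intensity;
    }
}
```

Each additional light is another accumulation pass over (at most) the pixels it covers, which is also why the column warns that large lights close to the camera eat fill rate: their screen-space footprint, and hence their cost, balloons.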

www.blackrockstudio.com


