Issue 231, October 2009
Illustration: Peter Draper
Does DX11 matter?

One of the key changes in Windows 7 – behind the fancy desktop dressing and pared-down control panels – is DirectX 11. As the name suggests, it's the successor to DirectX 10, Microsoft's home-grown API for software developers to interact with graphics hardware. It still builds on the codebase of its predecessor, DX10, and because of this it won't be available to people on Windows XP, but it will run on both Vista and 7. It's not a huge rewrite of the codebase, but it brings a couple of really big steps forward when it comes to the GPU pipeline.
It'll be popular, too. Virtual shelf-stackers at the world's biggest online hypermarket, Amazon, reckon that more copies of Windows 7 were pre-ordered in the first eight hours of links going live than Vista got in its entire 17-week build-up period. In addition, some of the key new features will work on DX10 hardware too, so developers won't lose much and will gain a lot by switching to DX11 codepaths relatively quickly. If Windows 7 is the subject of a more muted marketing campaign than Vista – currently there's no 'Wow starts now!' on the cards, thankfully – then DX11 is a mere whisper on the breeze. Compared to the hype built up around DX10, fuelled by hugely anticipated games such as Crysis, Supreme Commander and Alan Wake, it's an unknown quantity.

Is it because DX10 was as disappointing as watching any Woody Allen film post-1995? Is someone afraid of overhyping a technology that won't deliver anything new in terms of games for a long time yet? We don't know; we're journalists, not programmers (okay, a couple of us are programmers). So we've asked two people who are privy to all its secrets, one from AMD and one from NVIDIA, what they think of its most significant features and what kind of impact it will have. We should note that our experts weren't interviewed together and aren't responding to one another. We cornered them alone; we've found that it's safer that way.
WE’RE NOT IN GRAPHICS ANY MORE
There are five key new features that arrive with DX11, and the first has nothing to do with gaming graphics. DirectX Compute is Microsoft's GPGPU language, joining NVIDIA's CUDA and OpenCL in allowing graphics cards to be used for things like PhysX, AI and video transcoding – the latter of which is supported as an impressively quick drag-and-drop feature in Windows 7. Importantly, any DX10 graphics card will be able to run some DX Compute code, although because of hardware differences it's likely that most in-game routines will target DX11-class cards only.
TONY
Name: Tony Tamasi
Job title: Head of Content & Technology, NVIDIA
What he does: Oversees NVIDIA's interaction with games engine designers and incorporates their wish lists into future hardware plans
RICHARD
Name: Richard Huddy
Job title: Senior Manager, Developer Relations, AMD
What he does: Heads up the team of AMD software engineers that visit games studios and help programmers use the latest hardware features
GPGPU, in general, is the most important initiative for the whole graphics card industry, period. First of all, it's an entirely new market of potential software developers and customers that we can reach. The market for people who play games is obviously very large and very important to us, but the market for people who watch video, record things on YouTube and use Flash is much larger still, and in that market today there's no strong motivation for those people to buy high-end graphics processors. In GPGPU computing there's a whole realm of application possibilities and potentially large growth. NVIDIA believes, at its core, that having more ways to make use of the graphics processor for any kind of parallel process is a good thing. The exciting thing is that now we have Microsoft getting into the game and standardising the process, and completely legitimising GPGPU computing to the extent that Windows 7 has core functionality that will be accelerated. Using the GPU for general computing isn't just about Photoshop or video transcoding, though. It's also about game functions outside of rendering.
We’re helping developers work on physics and artificial intelligence, for driving animation systems and so on. It can be used for a whole host of things.
I've got 30GB of video on my hard drive and my portable video player just died, so I'm probably going to have to move it to another platform and reconvert my video. Transcoding that on the GPU is typically three times as fast as doing it on the CPU, and better than real time. In Windows 7 it's so easy that my mother could transcode video just by dragging and dropping one file into another folder. It's a thing of beauty; it hides all of the tech from you. Compute shaders in games are very difficult to write at the moment, though: even my engineers are struggling with them. They can run at anything from half the speed of the original code, if you mess it up, to a factor of three or four times faster if it works well. A member of my staff now has his compute shader running three or four times faster than the pixel shader, and we like that. It means that things like post-processing effects will typically benefit from using compute shaders from now on.
“[We now have] compute shader running three or four times faster than the pixel shader”
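To make the compute model concrete, here's a minimal sketch in Python (every name here is hypothetical, and plain Python stands in for the HLSL compute code the article doesn't show): a per-element 'kernel' dispatched in parallel over a whole data set, which is the shape of work a DirectX Compute dispatch hands to the GPU.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    """Per-element 'kernel': runs independently for every item,
    the way one compute-shader thread handles one element."""
    return min(pixel + 40, 255)

def dispatch(kernel, data, workers=4):
    """Toy stand-in for a compute dispatch: map the kernel over
    the whole data set in parallel, preserving element order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))

frame = [0, 100, 200, 255]
print(dispatch(brighten, frame))  # [40, 140, 240, 255]
```

The point of the model is that each element is independent, so the same code scales across however many execution units the hardware offers.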
As part of the DirectX overhaul, the High Level Shading Language (HLSL) developers use to write shaders is being expanded to allow for longer shaders and subroutines, and for more to be done in a single pass.
This is a big deal because now developers can write what I call 'uber-shaders'. It sets the stage for increasingly sophisticated shadow and lighting models, and the ability to have subroutines allows you to have increasingly complex code, but in a manageable way.
The thing that strikes me as the most instantly useful part of it is the ability to pick up four texels at once in a single clock cycle. Typically, when you look down at the fine-grain detail of a GPU, we're limited by one of two things: math operations and texture fetches. Either one of those will be the killer for any shader, the bottleneck. Texture fetches are a real problem: they're limited by bandwidth, and they're limited by the number of texture instructions you can fit into the shader and push through the pipeline. If you can fetch four texels at once, you may be able to do four times as much texture work, especially if you're working with information that's in the texture cache. This can be a big win in some cases.
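To illustrate what picking up four texels at once buys you, here's a toy Python sketch (the function name is hypothetical; DX11's real Gather instruction returns one channel of the 2x2 quad a bilinear fetch would read, with hardware addressing rules this sketch glosses over): one call returns the whole 2x2 block instead of four separate fetches.

```python
def gather4(texture, u, v):
    """Toy sketch of a DX11-style Gather: return the 2x2 block of
    texels starting at integer texel coords (u, v) in a single
    operation, instead of issuing four separate fetches.
    `texture` is a 2D list indexed as texture[row][column]."""
    return (texture[v][u],     texture[v][u + 1],
            texture[v + 1][u], texture[v + 1][u + 1])

tex = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(gather4(tex, 0, 0))  # (1, 2, 4, 5)
```

Shadow-map filtering and post-processing kernels, which read many neighbouring texels, are the obvious beneficiaries of this kind of fetch.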
COME IN THREAD ONE, YOUR TIME IS UP
Most games developers are only now really getting their heads around the intrinsic difficulties of multi-threading their engines to make use of dual-core, triple-core and quad-core CPUs, and then along comes DirectX 11 and makes the whole affair much easier for them. Spoilsports.
Multi-threading is really a driver-level thing, which means the core runtime can spread itself across the CPU cores better. It's inefficient and fairly awkward to do that with the existing DirectX, but with DirectX 11 it will be quite easy to maximise the benefit of multi-core systems. This will help GPUs, because in many cases they're so fast that apps are limited by the CPU, not the GPU. Even if you added more CPU cores you wouldn't be able to make it go any faster. DX11, on the other hand, will let you make better use of both.
We've found that for about 50 per cent of games developers, multi-threading is the number one most attractive feature of DirectX 11. It's a case of 'If I just recode in DX11 and use the multithreaded display lists, I can get an average of 20 per cent extra performance?' If you're struggling, as many games developers are, to use more than a single core, DX11 makes the missing cores easy to get at. Not only is DirectX 11 itself multi-threaded but, most importantly, when you talk to DirectX through one CPU core, that very often becomes the bottleneck. Now you can talk to it from all of the CPU cores, and they can push data to the graphics card more efficiently. That's a purely software benefit that anyone with DirectX 9 or above hardware can enjoy.
“For about 50 per cent of devs, multi-threading is the most attractive feature of DX11”
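The multithreaded display-list idea can be sketched in Python (a toy model, not the real D3D11 API, where worker threads record into deferred contexts and a single immediate context submits): each thread records its own command list with no contention, and only the final submission to the 'GPU' is serialised.

```python
import threading

def record_commands(object_ids):
    """Each worker thread records its own command list -- there is
    no shared state to fight over while recording, which is the
    deferred-context idea in miniature."""
    return [f"draw:{oid}" for oid in object_ids]

def render_frame(batches):
    """Record one command list per batch on its own thread, then
    submit them in order from a single point, as the API requires."""
    lists = [None] * len(batches)

    def worker(i):
        lists[i] = record_commands(batches[i])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(len(batches))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    submitted = []           # only this submission step is serialised
    for command_list in lists:
        submitted.extend(command_list)
    return submitted

print(render_frame([[1, 2], [3], [4, 5]]))
```

The recording work, which is the expensive part on the CPU, scales with core count; only the cheap final submit remains single-threaded.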
YOU WILL BE TESSELLATED
The most significant image-quality advancement in DX11 is the requirement for hardware-accelerated tessellation. With this, a graphics card can rebuild extraordinarily detailed models from a low-polygon-count mesh and surface information like displacement maps, adding in the missing polygons on the fly. It should eliminate the difference between the beautiful characters and scenery artists design and the pointy-headed models that make it into the game.
In my opinion, the tessellation shader feature is probably the most important new feature outside of GPGPU. All previous attempts to do some form of tessellation in games were basically fixed function and pretty constrained. They had a large number of issues associated with them and almost no adoption as a result. So with the tessellation shader in DirectX 11, if developers want to do higher order surfaces they can essentially use whatever surface formulation they want to code up – say, for example, a sub-division – and run that on the GPU, so there are some nice benefits that accrue. One of the things about putting tessellation in a game, though, is that you have to change your production pipeline a fair bit, because every game engine is based off of tessellated polygons and not surfaces. So to take a simple example, if you have a character in a game and you want to shoot him, you need to do a hit test against that
For at least half of this decade I've been talking to games developers about how one day we were going to give them sub-division surfaces and displacement maps. And I remember back in 2000, when I worked for NVIDIA, I talked to games developers about how the next generation would get halfway to supporting sub-division and displacement. There was no chance. It turned out to be much, much harder than we thought back then. One good way of getting close to sub-division is tessellation. You take a single triangle and you smash it up into thousands of pieces and use a displacement map to put back all the surface detail that the artist has had to squeeze out to get it onto the PC. We're starting to see content generated by games developers that takes a parallax occlusion map, which is essentially height displacement information for a surface, and interprets that as detailed information for a tessellator. You can
polygonal mesh. In a world with higher order surfaces, the representation for that character is actually a mathematical equation of surfaces, which could be curvy or bumpy. So a hit test has to be done in quite a different way, because you have to resolve the result of that surface equation test before you can decide whether the bullet has hit the character or gone pinging off somewhere else. The issues and the technology are pretty well understood. Higher order surfaces have been used in film production for a decade, but implementing the process in a game is still going to take some time. You're likely to see it used for the detail work: where today people use bump mapping or parallax mapping, they'll be able to go to true displacement mapping and use higher order surfaces for character detail. That'll happen quickly, but pervasive use of tessellation throughout the pipeline is a philosophical shift in the way things work and will take a lot longer.

"Use of tessellation throughout the pipeline is a philosophical shift in the way things work"
take a surface which was bump mapped, and now you can reconstruct it. Before, it was just a lighting trick, which was a bit lame and didn't work with anti-aliasing; now you can reconstruct it as geometry as finely as you want. If you use tessellation where you didn't used to tessellate, you'll see a performance hit. But parallax occlusion mapping or bump mapping or any of the other techniques requires some extra maths per pixel, and the tessellation code runs per primitive – or vertex – which is typically lower frequency than the pixel frequency. So you get this odd situation where tessellation is slower than not using tessellation, but using tessellation with a parallax occlusion map, which you use to control how much you tessellate, is faster than parallax occlusion mapping alone, because that uses incredible amounts of mathematical calculation. By comparison, tessellation is a doddle. We're wildly, insanely excited about tessellation.
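The tessellate-then-displace step described above can be sketched in one dimension in Python (a hypothetical illustration, not engine or shader code): a single flat edge is split into many vertices, and each new vertex is pushed out by the matching sample from a height map, restoring the detail the artist squeezed out of the mesh.

```python
def tessellate_and_displace(p0, p1, heights):
    """Split one flat edge from x=p0 to x=p1 into len(heights) - 1
    segments, then push each new vertex 'up' by its height-map
    sample -- tessellation plus displacement mapping, in 1D."""
    n = len(heights)
    verts = []
    for i in range(n):
        t = i / (n - 1)
        x = p0 + (p1 - p0) * t   # interpolated position along the edge
        y = heights[i]           # displacement read from the map
        verts.append((x, y))
    return verts

# A flat edge from x=0 to x=4, rebuilt as five displaced vertices.
print(tessellate_and_displace(0.0, 4.0, [0.0, 0.5, 1.0, 0.5, 0.0]))
```

The same idea applied per triangle, with the GPU's tessellator generating the intermediate vertices and a domain shader doing the displacement, is what DX11 makes a standard pipeline stage.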
There are two reasons we suspect DX11 isn't being shouted about. First off, no-one wants a repeat of the DX10 hype fiasco, with just six disappointing games using it in the first six months. Secondly, not as many people care about graphics any more: the latest games don't look very different to the DX9 generation.
DX10 got a bad rap because it was overhyped, but that goes for every generation of graphics API. There's always been a gap between the API and chip launch and the arrival of games for it. It takes time for developers to produce that content, and I don't think DX10 was really any better or worse, other than that it was strictly tied to Vista, so developers took the pure, available-market perspective into account when deciding whether or not to code for DirectX 10.

PCF: A lot of people look at current DX10 games and think graphics tech hasn't improved much since the DX9 days. Is it dual development for consoles that's holding engineers back?

I wouldn't draw that conclusion myself. If you look at, say, Crysis or Far Cry 2 on a PC and turn all the effects up, it looks pretty fantastic compared to any console. That's a function of the GPU capabilities as well as the general horsepower of the PC as a platform. For first-person games or strategy games, the PC is still the platform of choice and is leaps and bounds ahead of the console.
The real danger moment for DX10 was when it became clear that Crysis could be run in DX9 mode and almost everything looked the same as it did in DX10; that was just a disaster. Microsoft will not let that happen again – and I don’t mean that they’ll hide it better, I mean that they’ll make sure there’s a real differentiation. Sub-div surfaces have been the way Hollywood has created effects – like Gollum in Lord of the Rings – for ten years, and we can do that now, virtually, in real time. With DX11 you’ll be able to do a comparison between the previous and the current stuff and it will stand up to a detailed analysis of whether it’s different or not.
TO BUY OR NOT TO BUY?
It wasn't so long ago that a new graphics card launch was a really big deal and loads of people would rush out to be the first to upgrade. Can DirectX 11 recapture that kind of excitement?
The most important thing for us is whether or not the developers are making DirectX 11 content, and if they are, the users will buy their games and if they need to, hopefully buy GPUs to run them. As with every API before, though, it’ll be a trickle, then a stream and then a flood.
People will upgrade fairly quickly, when they see the kind of stuff that we’re working on. There’s absolutely no doubt that games developers and publishers will insist on a good quality fallback for DX10 or 9 hardware, but I think that when you do the direct comparison – the DX11 version of a game versus the DX9 version – you’ll realise it’s a significant jump forwards.
One of DirectX 11’s greatest features is its ability to render more realistic pebbles. Yay!
WILL THERE BE GAMES?
Really, the only question that matters is: will there be any games in the first six months, and will they be worth upgrading for? It's worth noting before reading on that AMD is widely tipped to have a DX11 card ready fairly soon after launch, while rumour has it NVIDIA's GT300 will arrive around Christmas.
What’s more likely to happen is that you’ll get a wave of content that uses the new API but isn’t radically different. It’ll be the second wave of content, which is typically a year later, that really pushes some of the features. There’ll certainly be some new games that use DX11 in the first six months, but the real landslide of content is going to be in the second half of 2010.
We're going to have games which arrive early, we're going to have games this year, and we're going to have games which are significant and impactful, which look different to previous games. Look at Codemasters. They moved the shipping time [of DIRT 2] from September to November and in so doing switched from one financial year to the next, which hurt them. As a public company, they had to give a reason, and they said: "Because we're going to DX11." Which means that it's less than two months on the schedule for them to include all the recoding and the QA. It's no wonder that games developers are so enthusiastic.
OpenCL
Like NVIDIA's CUDA, this is a multi-platform language for getting at GPGPU functions on a graphics card. As the name suggests, it's open source and, unlike CUDA, also runs on AMD chips. DX Compute won't replace it, because many high-performance industrial apps run on Linux systems.

HDR
Short for 'High Dynamic Range'. The 'dynamic range' of an image corresponds to the number of values between absolute black and whiteout: the larger, or higher, this is, the more detail can be retained in shadows and displayed in highlights.

Sub-division surfaces
A 3D trick used by cinematic CGI experts whereby the large polygons in a low-resolution mesh are split up into many smaller polygons during the rendering process, allowing more detailed surfaces to be created. The new tessellation shaders in DX11 implement this kind of geometry creation in real time on the GPU.

Displacement mapping
In order to tell the graphics card how to build the new surface detail, a 2D displacement map – like a greyscale texture – is generated from the original high-resolution model drawn by the artist. From this data, the GPU can rebuild detail using a low-resolution model and tessellation.
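The HDR entry can be made concrete with a toy tone-mapping step in Python, using the simple Reinhard operator x/(1+x) – a standard choice for illustration, not one the article specifies – which compresses an unbounded luminance range into [0, 1) so it can be shown on an ordinary display.

```python
def tonemap(hdr_values):
    """Map an unbounded HDR luminance range into [0, 1) with the
    simple Reinhard operator x / (1 + x): shadows keep their
    separation while very bright values are compressed rather
    than clipped to white."""
    return [x / (1.0 + x) for x in hdr_values]

# Mid-grey (1.0) lands at 0.5; a 100x brighter highlight is
# compressed close to, but never reaching, pure white.
print(tonemap([0.1, 1.0, 10.0, 100.0]))
```

The wider the range a renderer keeps internally, the more of this kind of detail survives into the final image, which is the whole point of HDR rendering.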