
FREE 8GB OF VIDEOS & MORE Practical inspiration for the 3D community






Power up your models with Maya Page 22

Colton Orr Software Maya, ZBrush, Substance Painter


Future Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ +44 (0) 1202 586200 Web:


Editor Steve Holmes 01202 586248

Features Editor Carrie Mok Art Editor Newton Ribeiro de Oliveira Editor in Chief Amy Hennessey Senior Art Editor Will Shum Photographer James Sheppard Contributors Orestis Bastounis, Tim Bergholz, Paul Champion, Ian Failes, Philippa Grafton, Thomas Hall, Alex Hindle, Jude Leong, Reinier Reynhout, Patrick van Rooijen, Joel Zakrisson Advertising Digital or printed media packs are available on request. Advertising Manager Mike Pyatt 01225 687538 Account Director George Lucas Advertising Sales Executive Chris Mitchell

International 3D Artist is available for licensing. Contact the International department to discuss partnership opportunities. Head of International Licensing Cathy Blackman +44 (0) 1202 586401

Subscriptions For all subscription enquiries: 0844 249 0472 Overseas +44 (0) 1795 592951 Head of Subscriptions Sharon Todd Assets and resource files for this magazine can be found on this website. Register now to unlock thousands of useful files. Support: filesilohelp@imaginepublishing.co.uk Circulation Circulation Director Darren Pearce 01202 586200

Production Production Director Jane Hawkins 01202 586200

Management Finance & Operations Director Marco Peroni Creative Director Aaron Asadi Editorial Director Ross Andrews

Level up your character modelling Page 38

Printing & Distribution William Gibbons & Sons Ltd, 26 Planetary Road, Willenhall, West Midlands, WV13 3XT Distributed in the UK, Eire & the Rest of the World by Marketforce, 5 Churchill Place, Canary Wharf, London, E14 5HU 0203 787 9060

Distributed in Australia by Gordon & Gotch Australia Pty Ltd, 26 Rodborough Road, Frenchs Forest, New South Wales 2086 + 61 2 9972 8800



The wheel keeps spinning – the start of a new year brings with it the exciting prospect of a whole 12 months of innovation and excellence in computer graphics for 2017. There’s an awful lot to look forward to, from substantial advances in new fields like VR and AR, to big updates in your favourite tools and a wealth of tantalising new projects in the VFX, games and visualisation worlds. We’ll be working day and night to keep up with all the latest developments. Our 2017 mission starts here in a spectacular issue 103, with an awesome offering that’s sure to get you motivated for the year and help you beat the post-holiday blues. First up is our essential Maya workout – five diverse projects broken down into their component parts to help you make the most of the tool at home and in the studio. One studio that has worked tirelessly to drive the industry forward for years is Image Engine, the visionary bunch behind amazing work on Jurassic World, District 9 and plenty of other huge shows. We’ve had a chat with the team about its body of work and the things that make it tick. Elsewhere, don’t miss our character modelling masterclass and a tutorial section that’s fit to burst as usual. Enjoy! Steve Holmes, Editor

Sign up, share your art and chat to other artists at Get in touch...



The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Future Publishing Ltd. Nothing in this magazine may be reproduced in whole or part without the written permission of the publisher. All copyrights are recognised and used specifically for the purpose of criticism and review. Although the magazine has endeavoured to ensure all information is correct at time of print, prices and availability may change. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to Future Publishing via post, email, social network or any other means, you automatically grant Future Publishing an irrevocable, perpetual, royalty-free licence to use the material across its entire portfolio in print, online and digital, and to deliver the material to existing and future clients, including but not limited to international licensees for reproduction in international, licensed editions of Future Publishing products. Any material you submit is sent at your risk and, although every care is taken, neither Future Publishing nor its employees, agents or subcontractors shall be liable for the loss or damage.

© 2017 Future Publishing Ltd ISSN






The world’s most advanced visual effects and motion graphics software is now available on Linux as well as Mac and Windows! For over 25 years Fusion has been used to create visual effects on thousands of blockbuster films, TV shows and commercials. Fusion features an easy to use and powerful node based interface, a massive tool set, true 3D workspace and GPU accelerated performance all in a single application! Now with support for Linux, Fusion 8.2 is easier than ever to integrate into your existing VFX pipeline!

Incredible Creative Tools

Whether you need to pull a key, track objects, retouch images, animate titles, or create amazing 3D particle effects, Fusion has the creative tools you need! You get a true 3D workspace, the ability to import 3D models and scenes from software like Maya and 3ds Max, along with hundreds of tools for compositing, paint, animation and more!

Hollywood’s Secret Weapon

Fusion has been used to create groundbreaking visual effects and motion graphics for Hollywood films such as The Martian, Thor and The Hunger Games, as well as on hit television shows like Orphan Black, Breaking Bad, Grimm and Battlestar Galactica! If you’ve ever gone to the movies or watched television, then you’ve seen Fusion in action!

Scalable Studio Power

Fusion’s GPU acceleration gives instant feedback while you work, so you spend more time being creative and less time waiting! Fusion 8 Studio also includes optical flow and stereoscopic 3D tools, along with unlimited free network rendering and tools to manage multi user workflows, track assets, assign tasks, review and approve shots, and more!

Work Faster with Nodes

Fusion uses nodes to represent effects and filters that can be connected together to easily build up larger and more sophisticated visual effects! Nodes are organized like a flow chart so you can easily visualize complex scenes. Clicking on a node lets you quickly make adjustments, without having to hunt through layers on a timeline! *SRP is Exclusive of VAT


Free Download

For Mac OS X, Windows and Linux

FUSION 8 STUDIO For Mac OS X, Windows and Linux


This issue’s team of pro artists




COLTON ORR Game art enthusiast Colton has joined us along with four other incredible artists to reveal some essential Maya workflows this month. Check out his tips for better retopology on p29. 3DArtist username n/a

ALEX HINDLE We came across Alex’s work a few months ago and were immediately enamoured with his unique style. Over on p48 he highlights his ZBrush and MODO pipeline in 17 steps. 3DArtist username Alex Hindle

TIM BERGHOLZ Tim, previously of Ubisoft, is now senior weapons artist at Digital Extremes and creates amazing game art tutorials in his spare time. Enjoy his advice for creating game-ready mechs on p56. 3DArtist username ChamferZone



PATRICK VAN ROOIJEN & REINIER REYNHOUT Patrick van Rooijen and Reinier Reynhout join us from Hectic Electric Eindhoven this month to show you how to build pro-standard shaders in 3ds Max. Check out their tutorial on p64. 3DA username PatricvanR / ReinierReynhout

THOMAS HALL We’ve borrowed Tom from Escape Studios in London and asked him to highlight a cool Houdini workflow for creating a splash effect. You’ll find his guide over on p68. 3DArtist username n/a

PAUL CHAMPION Paul’s put his compositing hat on this issue to explore some of the recent, more advanced tools in NUKEX 10. Head to p72 now to discover how to master Motion Vectors.



JUDE LEONG RenderMan 21 is out now, and as has become the norm over the last couple of years, Pixar has released a completely free non-commercial version of its production renderer. Jude takes it for a spin on p80. 3DArtist username JudeMarv

JOEL ZAKRISSON No doubt you’ll have seen Joel’s amazing real-time dioramas online or caught his tutorial in issue 102 – he really knows his stuff. Head to p82 to find out what he thought of Marmoset Toolbag 3. 3DArtist username JoelZakrisson

ORESTIS BASTOUNIS Just when he thought that he might have room to move around in his house, we remorselessly packed up a massive Chillblast workstation and sent it to Orestis for testing. His review is on p84. 3DArtist username n/a

What’s in the magazine

News, reviews & features 12 The Gallery A hand-picked collection of incredible artwork to inspire you

22 The Essential Maya Workout Fire up your own creativity with five amazing projects to enjoy, including modelling, rigging and more

30 Insights From Image Engine Famous for its work on huge shows like Elysium and Independence Day: Resurgence, the team reveals all

38 Level Up Your Character Modelling Discover how to create game-ready assets, adapt existing 2D concepts and much more in this masterclass

46 Technique Focus: Broken Pencil Deepak VN reveals his experiment with Thinking Particles

A huge benefit of utilising XGen is that it has a good viewport representation of the hair

78 Subscribe Today! Save money and never miss an issue by snapping up a subscription

80 Review: RenderMan 21 Jude Leong tests Pixar’s latest non-commercial offering

Anneli Larsson discusses creating realistic renders in Maya Page 27

82 Review: Marmoset Toolbag 3

Joel Zakrisson takes the real-time renderer for a spin

84 Review: Chillblast Fusion Pascal P5000 Orestis Bastounis breaks down the latest workstation from the renowned retailer

86 Technique Focus: LB-378 motorcycle Patrick A Razo details a bike concept with MODO, KeyShot and Photoshop

98 Technique Focus: Bison Alin Bolcas explains how he quickly developed a 2D concept into a 3D render

Realistic procedural shading and texturing

Save up to 25%




Turn to page 78 for details

Simulate a splashdown

Develop abstract character concepts

Insights from Image Engine

The Pipeline

I created a basic skeleton in Max, skinned the character to it and then did a bunch of test poses

48 Step by step: Develop abstract character concepts Create interesting caricature-style renders in ZBrush and MODO with Alex Hindle

Baj Singh outlines his pro character design workflow Page 44

56 Step by step: Construct a game-ready battle mech Discover Tim Bergholz’s battle-proven approach to real-time texturing and asset building

64 Pipeline techniques: Realistic procedural shading & texturing Reinier Reynhout and Patrick van Rooijen get technical in 3ds Max and V-Ray

68 Pipeline techniques: Simulate a splashdown Thomas Hall joins us from Escape Studios to create a water splash effect in Houdini

72 Pipeline techniques: Master Motion Vectors

Construct a game-ready battle mech

Join Paul Champion as he explores some of the latest features in NUKEX 10

The Hub 90 Community news We chat to the top three entrants in the latest Humster3D Car Render Challenge and check out their dazzling creations

92 Industry news The Foundry has launched MARI 3.2, Unity 5.6 will be the last point release of the cycle and V-Ray 3.5 for Max makes an entrance

Master Motion Vectors Visit the 3D Artist online shop at

94 Project Focus: The Famous Grouse

Flaunt Productions reveals how it built complex feather systems for the blended scotch’s new ad

96 Readers’ gallery

The very best images of the month

Free with your magazine: Maya video tuition from Pluralsight, 4 Premium CGAxis models

25 textures from

Plus, all of this is yours too…

Learn to create stunning medical animations

Download plenty of high-quality textures

• A ZBrush scene file, Photoshop brushes and ice cream textures from our abstract character tutorial

Brand new food models to use in your work

• Loads of useful content to accompany our compositing masterclass, including NUKE scene files and a video tutorial
• 3ds Max scene and texture maps to help you build better shaders
• Mountains of screenshots to guide you through our expert tutorials

Log in to Register to get instant access to this pack of must-have creative resources, how-to videos and tutorial assets

Free for digital readers too! Read on your tablet, download on your computer

The home of great downloads – exclusive to your favourite magazines from Future! Secure and safe online access, from anywhere Free access for every reader, print and digital Download only the files you want, when you want All your gifts, from all your issues, in one place

Get started Everything you need to know about accessing your FileSilo account

Unlock every issue


Follow the instructions on screen to create an account with our secure FileSilo system. Log in and unlock the issue by answering a simple question about the magazine.

Subscribe today & unlock the free gifts from more than 30 issues Access our entire library of resources with a money saving subscription to the magazine – that’s more than 400 free resources


You can access FileSilo on any computer, tablet or smartphone device using any popular browser. However, we recommend that you use a computer to download content, as you may not be able to download files to other devices.

Over 50 hours of video guides

More than 800 textures

Hundreds of 3D models

The very best walkthroughs around

Brought to you by quality vendors

Vehicles, foliage, furniture… it's all there

Head to page 78 to subscribe now

If you have any problems with accessing content on FileSilo take a look at the FAQs online or email our team at the address below

Already a print subscriber? Here’s how to unlock FileSilo today… Unlock the entire 3D Artist FileSilo library with your unique Web ID – the eight-digit alphanumeric code that is printed above your address details on the mailing label of your subscription copies. It can also be found on any renewal letters.

More than 400 reasons to subscribe

More added every issue

Have an image you feel passionate about? Get your artwork featured in these pages

Create your gallery today at


Benoit Regimbal

Downtown Life is the fifth image in my ‘Life’ series. This time I wanted to create a city that felt alive, from the big buildings to the dirty sidewalks, with characters going about their lives – doing mundane things such as going to work, to the dentist, or being a vigilante in a suit

Benoit is co-founder and art director of MilkPresso and makes 3D illustrations Software 3ds Max, mental ray, Photoshop

Work in progress…

Benoit Regimbal, Downtown Life, 2016

Ara Kermanikian Ara is a concept designer with a focus on characters, vehicles and set design Software ZBrush, KeyShot, Photoshop

Work in progress…

VKK is a render from the model I sculpted at the 2016 ZBrush Summit sculptoff. It was modelled in three hours using ZModeler and kitbash parts. I used the model with some changes to render some new images, including this one. It is a tribute to my father, Vahe Khatchadour Kermanikian, VKK Ara Kermanikian, VKK, 2016


Match Day is a personal project that was born as a class exercise. I love character design and to give them life and generate everyday situations in which we can express some love for the things we do Andrés Felipe Reyes Español, Match Day, 2016

Andrés Felipe Reyes Español Felipe is a 3D artist, illustrator and graphic designer from Colombia Software Maya, Substance Designer & Painter, V-Ray

Work in progress…


Ryo Asakura Ryo is a Japan-based freelance 3D artist and CG director known as Seventhgraphics Software Cinema 4D, Photoshop, OctaneRender

Work in progress…

This is my experimental work for photorealism. Photorealism is a genre of art that encompasses painting, drawing and other graphic mediums. I study a photograph and then attempt to reproduce the image as realistically as possible in CG Ryo Asakura, CONTAX I, 2015


Geoffroy Calis Geoffroy is an environment and lighting artist at Ubisoft working on The Division Software 3ds Max, Unreal Engine 4

Work in progress…

I wanted to guide the viewer’s eye using shadows, lights and colours. For this scene, my main goal was to have good control over Unreal Engine 4’s Light Intensity and Global Illumination Intensity options. I worked on screenshots in Photoshop to try new colours, contrasts and shapes without having to bake the light each time I wanted to try something new Geoffroy Calis, Eye Of The Stars, 2016


In depth

Pari Rajulu Pari Rajulu is a 3D artist who loves to challenge herself to re-create the real world in CG Software ZBrush, Maya, Mudbox, Arnold, NUKE

Work in progress…

I have always been fascinated by the anatomy of the scorpion and wanted to explore its beauty further. My process of building a character involves exploring many facets of the creature to understand it better. Being able to combine my love for CGI and studying the animal world makes it a gratifying experience Pari Rajulu, The Stinger, 2016



When it came to putting all these pieces together and presenting a composited image, I was keen to achieve an engaging composition. I wanted to place the scorpion in a natural environment where the rest of the scene does not dominate the main subject. I wanted to use negative space to emphasise the scorpion and draw the viewer’s eye to it Pari Rajulu, The Stinger, 2016

SCULPTING RIGHT I divided the scorpion into separate sections – pedipalps, abdomen, legs and tail – and sculpted each section starting with a DynaMesh sphere. When the entire sculpt was ready, I retopologised all sections as a single mesh.

SURFACE DETAILS RIGHT By sculpting each kind of surface detail on a separate layer, I had total control over the intensity of each individual detail and could change it at will. As a result, any required LOD changes or loss of detail due to SSS can be handled with ease.


TEXTURE PAINTING ABOVE I first chose a texture reference where I loved the contrast between the mild body colour and the dark sting. It added more character and made the scorpion appear more venomous. Next, I picked a colour palette and painted all of the required maps in Mudbox.

LOOK DEV & COMPOSITING LEFT I had my partner do the look dev and compositing as I am not very good at it yet. We used alShaders created by Anders Langlands for the scorpion body and hair, then rendered in Arnold and composited in NUKE.





MAYA WORKOUT Hone your skills today across five crucial areas including hair and fur, retopology and modelling


Maya is one of the most widely used tools across the VFX and games industries, with a wide-reaching range of practical applications too. You could just model with it, but there’s so much more the standalone tool can do. “I use Maya every day at work,” says character artist Anneli Larsson. “Right now it’s the 3D package that suits me best since it allows me to use many different disciplines within a single package. For character work I use Maya for things like modelling, shading, blendshapes, rendering and, of course, hair.” Other features range from effects and motion graphics to simulations and animations.

Add in scripts such as froRetopo for retopology, which creates geometry over a high-poly mesh with just the one tool, and plugins like mental ray and Arnold, and you’ve got an even more empowered app. “Maya is my main 3D application since I started my CG journey,” states Dušan Ković of his render, Long Exposure. “I relied heavily on its tools through the entire creation process.” He also likes using alShaders for Maya over Arnold’s AiStandard, which has given him a lot more flexibility in his workflow. So why not be like Ković and explore what Maya has to offer? Get stuck into these five amazing projects and enhance your workflow today.



EMBELLISH YOUR MAYA SCENE Cinematic extraordinaire Dušan Ković teaches us to texture ivy and discusses how he uses Maya for almost everything

Dušan Ković Lead cinematic artist at Eipix Entertainment Dušan is a CG generalist from Novi Sad, Serbia. His favourite areas of CG are lighting, lookdev and rendering

A man packs his life into bags and suitcases, loads them into a Beetle, and sets off on a new journey. One day he stumbles upon a beautiful sunset, and he can’t help but capture the moment with his old camera. This is Long Exposure, created by Dušan Ković for the Humster 3D Car Render Challenge 2016. It’s a tranquil scene, with Ković scouting locations for his scene virtually on Google Earth, as he explains. “I had multiple interesting locations, but then I found Lake Bled, a beautiful lake in the northern part of Slovenia. There is a small island with a church in the middle of the lake, accessible only by boat. When I saw that, I got the basic idea for Long Exposure.” Ković is a long-time Maya fan, using it for this entire piece of art for modelling, UVs, base meshes and more. “The car, tripod and camera were entirely modelled and UV’d in Maya, most of the plants and vegetation were done with Maya’s Paint Effects. Also, the base meshes for the ground and rocks were done with Sculpting tools, and then sent to ZBrush for detailing. I also used nCloth for simulating the cloth on the camera.” The only exceptions were the trees (SpeedTree), car textures (MARI) and post-production (Photoshop).



Create base ivy Create a plane, scale it up, and go to Generate>Make Paintable. Open Maya’s content browser and go to Examples>Paint_Effects>PlantsMesh. Find Ivy.MEL, double-click it and draw a stroke on top of the plane. Pick the stroke and plane, and go to Generate>Make Collide.




Texture leaves Google ivy leaves and arrange them 3x3 with basic adjustments to create the rest of the maps – Diffuse, Opacity and Backlight Strength. Use djPFXUVs to randomise the leaves. Select the meshes on the ivy, run the script and click Layout UVs.

Edit attributes We need to edit some attributes to customise the ivy brush. The best way is to play with the attributes until you get an ivy look you’re happy with. You can also play with the Behavior attributes to give it more randomness. Convert PaintFX to polygons and delete its history.

Shade and render We’ll use an HDRI from As well as Skydome, use one directional light for a key light. With alRemapColor and alRemapFloat nodes, colour correct the Diffuse texture and use it in other shader slots so you can better control them.
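For readers who prefer the Script Editor, the plane setup and the final Paint Effects conversion from the steps above can be sketched in a few lines of MEL. This is a rough sketch, not Ković’s actual script: the object names are placeholders, and the doPaintEffectsToPoly argument list mirrors the Modify > Convert options, so verify it against the doPaintEffectsToPoly.mel that ships with your Maya version.

```mel
// Ground plane for the ivy to grow over (name is a placeholder)
polyPlane -w 24 -h 24 -sx 20 -sy 20 -n "ivyGround";

// ...paint the Ivy.MEL stroke interactively, then convert it to
// polygons (same as Modify > Convert > Paint Effects to Polygons)
select -r "strokeIvy1";
doPaintEffectsToPoly(1, 0, 1, 1, 100000);

// Delete construction history on the converted mesh to keep the
// scene light before shading and rendering
delete -ch "ivyMesh";
```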

RIG A DYNAMIC CHARACTER Get to know rigging, use scripts and master corrective blendshapes

Mehmet Fatih Usta Character artist mehmetfatihusta Mehmet is based in Istanbul. He has worked at various advertising agencies in Ankara, Turkey, gaining skills in 3D

To animate and rig a character, getting inspiration from a lively reference makes all the difference. Mehmet Fatih Usta was enthused by Beyonce’s dancers, and came up with the concept for Bianca very quickly. He used Maya’s own tools like XGen for creating dense and sparse hair clumps, as well as scripts for rigging, such as Advanced Skeleton.

“Advanced Skeleton is a very successful script. It meets a lot of needs, including the facial rig. It has many proxies for rigs,” he says. “You can adapt these proxies to your model and easily create your rig. The important thing is to put joints in the right places. Basic knowledge about human anatomy and skeletal systems is enough for this. For me, a basic body rig is enough. Skinning, pose deformation and face rigging steps can also be done with this script. I do skinning, face rigging and pose deformations. I use the ngSkinTools script for skin. It lets you achieve beautiful skinning without much effort. You can work with layers and masks, just like in Photoshop.

“I use the Sculpt Inbetween Editor script for blendshapes, corrective blendshapes and combo blendshapes. Then I finish by combining the body rig with my face rig and corrective blendshapes.”

Adding muscle deformation with corrective blendshapes was important to Usta, as it added extra value and quality to the character. “It gives me a different excitement when the arms and legs are bent correctly. That’s why I care about muscle deformations.”

To create the face rig, Usta used a skinned head and jaw joints. Then he created blendshapes, such as smile, frown and pucker, for expressions, and connected them to the controls that will drive them. He used Sculpt Inbetween Editor again and made blendshapes to be added to the expression for the eyebrows, nose and cheek. For the eyes, Usta used the smart blink rigging method by Marco Giordano. Finally he placed extra controls with the help of joints on the places necessary: the mouth, eyebrows and cheeks.
The most essential aspect of rigging for Usta was that the model was right in the first place. “The quality of the model directly affects the rigging. The model’s proportions and topology should be done with animation in mind. “It is very useful to simply draw a skeleton over the character when placing the joints.”
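Advanced Skeleton, ngSkinTools and Sculpt Inbetween Editor all sit on top of ordinary Maya commands, so the core of a bind plus one corrective blendshape can be sketched in plain MEL. This is an illustrative sketch, not taken from Usta’s Bianca rig – the joint, mesh and target names are hypothetical:

```mel
// Bind the mesh to two placeholder joints; -toSelectedBones limits
// the bind to exactly the joints selected here
select -r "joint_shoulder" "joint_elbow" "bodyMesh";
skinCluster -toSelectedBones -bindMethod 0 -maximumInfluences 4;

// Add a sculpted corrective shape as a blendshape target;
// -frontOfChain evaluates it before the skinCluster so the
// corrective deforms with the joints rather than fighting them
blendShape -frontOfChain -name "armCorrectives" "bodyMesh_bentFix" "bodyMesh";

// The target's weight attribute is aliased to its name; in a real
// rig this would be driven from the elbow bend, e.g. with a set
// driven key, rather than set by hand
setAttr "armCorrectives.bodyMesh_bentFix" 1;
```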






Learn how to set up curves, change modifiers and tweak density

Discover how Anneli Larsson uses Maya in her character work to groom hair

Anneli Larsson Character artist Anneli started learning 3D at gscept in Sweden. She is currently based in Stockholm after working as a texture artist at MPC and Framestore.

Can you tell us a bit about your work as a character artist, and how you developed your visual style? I used to do a lot of traditional art but never felt quite like a traditional artist. All my sketches and ideas were focused around people and characters, and when I started studying 3D I realised it was a great way for me to develop my interest in anatomy and character creation. My inspiration comes from all kinds of things – I have a great interest in antiques and history in general and get a lot of ideas from those subjects.


Create curves After looking at references, start making some curves using Maya’s CV Curve tool. Start by drawing out a few that provide the overall shape of the hairdo – adding too many curves will only make things harder later. When happy with the curves, it’s time to jump into XGen.

Set it up As XGen creates files that will be updated as you work, it’s important to make sure the project is set to the right directory. Select the head geo and choose Create New Description. To make Spline Primitives for long hair, generate hairs randomly and control them using guides.

XGen settings Paint a Density map that controls where the hair will be denser or more sparse. I set the thickness of the individual hairs and add a Random function to make some hairs thinner. I paint a Region map as I want the hair to be parted in the middle, giving it a combed effect.

Modifiers and expressions The foundation is set, but the hair still looks unnatural. Go to Modifiers and add Clumping and Noise to break up the hair and give it a realistic look. To add more refinement I’ll add some Expressions to give the hair more colour variation.

Finishing touches Since I’m rendering in Arnold, I use the alHair shader by Anders Langlands to give Sofia’s hair an elderly white colour. alHair is a great shader that’s included in the alShaders pack that you can download for free from Anders Langlands’ website. Next, I render my scene with a fairly high setting to make sure the hair stands out with as little noise as possible. Finally I bring my rendered passes into Photoshop and create the final image.

Where did you get the idea from for the character of Sofia? I’ve had the idea somewhere in the back of my mind for a while. My personal style is usually more stylised, so I thought this would be good practice to do something realistic. I wanted to try and make something where the viewer would get a sense of the subject and her personality without it becoming too literal.

Can you tell us the benefits of using XGen for Sofia, and were there any areas that could have been improved? A huge benefit to utilising XGen is that it has a good viewport representation of the hair. What you see is pretty close to what you get. It saves a ton of render time. For my purposes it’s been easy to use and I think it gets me the results I want. Since XGen is now included in Maya it’s just extra convenient.

What did you learn from the process? Laying out the initial curves is a really important part of the hair creation process. If you build your hair with a few well-placed curves from the start, the rest will go so much faster. I also learned a lot about which parameters are needed to create something that looks natural.

A huge benefit to utilising XGen is that it has a good viewport representation of the hair. What you see is pretty close to what you get


BUILD BETTER BASE MESHES Explore modelling tools and find out how to save time when sculpting scales

Antoine Verney-Carron Student at ESMA antoineverneycarron Antoine worked as an intern at MPC in Summer 2016, which cemented his wish to work in the VFX industry.

Antoine Verney-Carron’s Dwarf Caiman used a combination of Maya, ZBrush, MARI, Photoshop and Arnold, with the modelling all done in Maya. To start, he gathered many photo references. “I knew that I would have to animate this model, so the topology was really important,” he reveals. “I modelled the base mesh in Maya first so the topology and the overall proportions were pretty good before sculpting. I found this method convenient but I also could have done a DynaMesh in ZBrush, for example, and retopologised it later. “I did a quick setup on the base mesh to make sure that the topology was good enough for


animation. If the topology of the base mesh was changed during the sculpt, I would still be able to update the model in ZBrush without losing details (with the Morph Target or Reprojection).” The reason for using Maya over ZBrush for this was that he found it easier to match references with the image plane in Maya than in ZBrush. “I needed a strong base to work on and Maya was perfect for this. I think it’s just a question of preference; now I would probably do it differently.” The mesh was around 4,000 polys, so Verney-Carron knew that he could subdivide it six times to reach around 16 million polys (each subdivision quadruples the count, so 4,000 × 4⁶ ≈ 16.4 million). He then unwrapped the UVs to get four 4K UDIMs and imported the mesh in ZBrush. To ensure that the topology and overall proportions of the Dwarf Caiman were good before sculpting in ZBrush he “made sure that the silhouette matched the photo references as much as I could in Maya, by overlapping the two.” Secondary shapes were then

sculpted on a layer, with heads and side scales done by hand for more control over the flow of scales. The big scales on the back were custom Alphas created in ZBrush. Instead of continuing to sculpt the scales for the legs in ZBrush, he went back to Maya and sculpted a single scale, which was exported as a Z map from the top view to use as a Displacement map. The process of sculpting in Maya was simple: “I sculpted a subdivided plane with the Maya Sculpting tool and really quickly I extracted a Z map. I didn’t spend much time on this part; I could have, but I knew that I would tweak the map later in Photoshop, adjust the pattern in MARI and clean the sculpt in ZBrush. Again, you have to follow the right flow of the scales, and each scale to be a bit different from another. I saved a lot of time with this technique!” Though Verney-Carron didn’t end up animating the Caiman, the rig he had set up made it “really convenient for putting it in poses”.

nd 3ds Ma th Bonus T

ene First, b from ZBrus SubTools i m retopolo combined m and ďŹ na

Colton Orr Character artist

y hiding eve Then make aw to retop ile I work, I l the end to ump out of eck the low odel.


For the past few years Colton has been pursuing a Game Art and Animation degree. He created the Lunar Suit as a final for a 3D modelling class.

Create a ZBrush low-poly

I export SubTools from ZBrush at the lowest subdivision level and attach them to the low-poly in Maya. I use this technique on cylindrical parts such as the clip release on the back. This saves time and guarantees that the low and high will line up exactly for baking.


Clean up Once the retopology is complete, I will then select the vertices and merge them at a low threshold. Finally, I will then select the mesh and use the Clean Up option to check for errors.


For Elysium, Image Engine reunited with Neill Blomkamp to deliver several vast space station environments. This is the original plate

The torus ring interior as added by Image Engine


For Chappie, actor Sharlto Copley performed the title role of the robot in a grey marker suit

Image Engine then animated the CG Chappie based on that performance, not with direct motion capture, but taking as many nuances as possible from Copley’s actions

Insights from

Image Engine Already known for its ultra-realistic CG robots, Vancouver’s Image Engine has added extensive creature and effects work to its bow


Image Engine seemed to burst onto the scene in 2009 almost out of nowhere with its visual effects for Neill Blomkamp’s District 9. The studio’s organic aliens helped garner that film an Oscar nomination, and began a multi-film collaboration with the director, with the equally impressive space visuals in Elysium and the titular robot in Chappie to follow. But Image Engine actually had its beginnings all the way back in 1995, working in design and television visual effects before branching out into film effects. Now it is one of the go-to full-service

VFX facilities, with experience on blockbusters such as Jurassic World, Independence Day: Resurgence and Fantastic Beasts And Where To Find Them, and the hit TV series Game Of Thrones. In 2015, Image Engine also merged with Cinesite, a stalwart in the visual effects world. 3D Artist sat down with three visual effects supervisors at Image Engine to talk about the studio’s latest projects, and how innovations in the way artists work and the tools they use – especially for creatures – have given the company a unique advantage in the industry.



CREATURE FEATURES A series of recent creature-heavy projects has helped Image Engine take leaps and bounds in how it creates CG animals, aliens and other animated effects. Along with Jurassic World, Independence Day: Resurgence and Fantastic Beasts And Where To Find Them, the studio was also responsible for delivering complex creatures in films like Teenage Mutant Ninja Turtles and Kingsglaive: Final Fantasy XV. Each project has propelled VFX development at Image Engine, both from an artistic perspective and a technical standpoint. But the studio has made some very deliberate moves to improve its creature pipeline. One area is modelling, texturing and look development. “For Jurassic World, we realised early on that we needed to push the level of detail in our modelling and texturing even further,” outlines Image Engine visual effects supervisor Martyn Culpitt, who oversaw some dynamic dinosaur chase scenes at the studio for the iconic film. “It needed to hold up to the scrutiny the creatures would come under from the audience, considering how close we get to them on screen.” That manifested itself in Image Engine’s crew

We realised early on that we needed to push the level of detail even further Martyn Culpitt, Image Engine

having to work out how to get high-resolution maps working over the entire creature, from Displacement to the final colours and look dev. The result was that artists could obtain the highest level of detail needed to view the creature at any size on screen. On Jurassic World and on subsequent features, Image Engine has also taken its creature FX approaches to new levels. Examples here include the tentacled aliens in Independence Day: Resurgence and the magic creatures in Fantastic Beasts And Where To Find Them. Much of the building of these creatures begins with real-world reference and creation from the inside out, following all the anatomical structures that a real-world version might have. Realistic approaches to rigging the bones of the creatures, and then having plausible muscle and skin animation systems, are where Image Engine has been emphasising its artistry. “We have a very strong R&D department who are always helping to make the systems better,” notes Culpitt. “We learned a lot from this exploration and have now


Image Engine delivered several shots of the aliens for Independence Day: Resurgence, bringing what had been done practically in the 1996 film into the digital realm

BREAKING DOWN THE BEASTS One of Image Engine’s most recent and complicated projects was David Yates’ Fantastic Beasts And Where To Find Them, a new entry into the Harry Potter universe. Here, the studio tackled several magical creatures, including a family of graphorns and a flying beast that likes to eat brains. Here’s a step-by-step look at how the studio modelled the graphorns.


Shoot the scene Newt Scamander (Eddie Redmayne) reveals the inside of his magically

expanding suitcase to non-wizard Jacob Kowalski (Dan Fogler), which is home to scores of dangerous creatures. That includes the graphorns – horned, hump-back beasts with face tentacles. Redmayne and Fogler were filmed against a blue screen observing the animals. For some shots where Scamander touches it, he acted against a blue ‘stuffie’ in order to achieve the right kind of interaction.


A shot from Kingsglaive: Final Fantasy XV in which Image Engine built a complex building and background environment for one of the film’s giant knights to break through in the dramatic final battle

Build the beast After researching several animal references for graphorns (including rhinos, elephants, ostriches and caterpillars), Image Engine modelled, rigged and textured its CG creature. A traditional pipeline involving ZBrush, Maya, Yeti and 3Delight was relied upon, as well as a suite of new tools the studio has been working with. “The team has also been researching new ways to build rigs and how to better use cloth, muscles, skinning and deformations,” explains visual effects supervisor Martyn Culpitt. “All of this feeds into the complex movements we require from our creatures. Two of the new systems we used on Fantastic Beasts are called Ziva and Vital Skin. There is still a lot of preparation and work involved to get these functioning within the pipeline, but they have been giving some very promising results. The hero creatures for Fantastic Beasts used a combination of this new technology as well as our old systems.”


Fantastic final Not only was the finalising of the graphorns from a CG perspective a complex challenge, but the suitcase sequence plays out in almost one long shot. That means Image Engine had to pay close attention to hundreds of layers for compositing for long frame counts, something it also managed with a deep compositing approach for actors, creatures and extra effects.

Photo: Greg Massie


TV VERSUS FILM – A NEW VISUAL EFFECTS PARADIGM

Rising budgets and lofty ambitions have brought TV VFX in line with the movies

There’s no doubting the rise of television as a medium that is captivating audiences as much as film these days. Partly that’s due to recent TV shows being made with incredibly high production values, which means that several VFX studios that formerly only loaned their services to film are now being called upon to deliver film-quality shots for TV. And Image Engine is one of them, working on seasons five and six of Game Of Thrones, and crafting shots for The Man In The High Castle and The X-Files.

But how does TV effects work differ from film? Visual effects supervisor Thomas Schelesny says, “The number of shots we deliver on a project has gone through the roof, while post-production schedules have become compressed. In response, the industry has evolved to support a fairly standardised pipeline. Full-featured off-the-shelf software, robust hardware and high-speed internet have revolutionised the workflow.” The result is that movie-quality shots can still be produced on tight TV schedules.

Since TV schedules do tend to be more compressed than film schedules, one way that Image Engine has been able to work within both mediums – often on several large-scale projects concurrently – is by relying heavily on software for media management and organisation. “Real-time asset tracking would have been impossible using old-school databases and spreadsheets,” notes Schelesny. “Today, however, we can automate the entire process, each downstream department being automatically alerted of any inputs which require updating. Thanks to this, artists spend less time managing their shot elements and more time making creative decisions. This allows our team to take on larger shows, while remaining focused on the quality of the final image.”
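The automated dependency alerts Schelesny describes can be pictured as a simple publish/subscribe graph: departments register interest in an asset, and any new version notifies them all. This is an illustrative toy, not Image Engine’s actual tracking system; every name here is hypothetical:

```python
from collections import defaultdict

class AssetTracker:
    """Toy dependency graph: when an upstream asset is republished,
    every downstream department watching it is alerted."""
    def __init__(self):
        self.watchers = defaultdict(list)   # asset -> [callback, ...]
        self.versions = {}                  # asset -> latest version

    def watch(self, asset, callback):
        self.watchers[asset].append(callback)

    def publish(self, asset):
        self.versions[asset] = self.versions.get(asset, 0) + 1
        for notify in self.watchers[asset]:
            notify(asset, self.versions[asset])

# Hypothetical usage: lighting and comp both depend on a creature rig.
alerts = []
tracker = AssetTracker()
tracker.watch("creature_rig", lambda a, v: alerts.append(("lighting", a, v)))
tracker.watch("creature_rig", lambda a, v: alerts.append(("comp", a, v)))
tracker.publish("creature_rig")
```

In a production tracker the callbacks would flag out-of-date renders or queue re-publishes, but the shape of the problem — a dependency graph with automatic downstream alerts — is the same.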


Image Engine’s CG alien work for District 9 was enough to get the studio nominated for Best Visual Effects at the Oscars

been able to automate a lot of these processes, including for the tentacles on Resurgence.” A further area in which the studio has concentrated for creatures is rendering. That seems even more pertinent in an age where several physically plausible renderers are now available, as Culpitt highlights. “The R&D team has done a lot of work to bring the render times down and figure out ways to work with highly detailed models that have thousands of polygons. We’re always looking at new renderers for the facility to see how they work compared to our current workflow. I think the latest renderer we are testing is making huge jumps for our work, in terms of both the render times and the final quality.”

including a mountain-side explosives sequence. Here, Image Engine was asked to simulate the explosions and an avalanche of dirt and rubble in Houdini, and realise vehicles and surrounding areas in CG to be integrated directly into the live-action plates. “As the scale of the scene was massive, we needed to add a huge amount of detail to sell the believability of these elements,” states VFX supervisor Dave Morley. “When it came to texturing the avalanche, the team developed some extremely clever ways to handle the large amount of geometry. Everything was generated procedurally, which allowed for a huge amount of organic variation in what was witnessed on screen – every rock and stone was a unique object.”

STORY BUILDING But it isn’t just fantastical creatures that Image Engine is nailing right now. The studio has always been a close collaborator with the filmmakers it works with, and on a number of recent projects it has been called on to advise on and solve many crucial story points during the visual effects production process. One of these projects was Ericson Core’s Point Break – a gritty remake that featured many practical stunts and invisible digital effects,

The team developed some extremely clever ways to handle the amount of geometry Dave Morley, Image Engine

A CG pass of the exterior view of the torus ring in Elysium

Image Engine ramped up its rendering CG toolset to enable all of the detail for the torus

Importantly, Image Engine worked closely with the film’s editorial group on post-vis for the shots, helping with the scene’s narrative tension. The idea was that the digital dirt avalanche would be nipping at the heels of the bikers. “Thanks to that,” continues Morley, “and the intricate approach to the simulation work, we were able to deliver a real nail-biting moment.” On Independence Day: Resurgence, Image Engine had to translate what had been practical aliens on the first film to CG creatures, complete with much more horrifying but still legacy-aware tentacles. Artists also built huge environments for a landing platform scene on the alien mothership – it was an area roughly 1.5 kilometres in diameter, in CG. Image Engine’s team was on set during filming and would recommend ways for the shots to be filmed and staged, all part of the necessary collaboration in the filmmaking process. Another film in which artists from Image Engine not only crafted extensive CG and visual effects, but also worked hand-in-hand with the filmmakers on delivering the best story possible, was Kingsglaive: Final Fantasy XV from director Takeshi Nozue and Square Enix. On this project the studio made a final magical and fiery fight sequence with giant battling knights.

Again, Image Engine had a hand in post-vis with what the shots – which were all completely CG – would look like before launching into them, a way of efficiently working out which areas to concentrate on. Compositors even roughed in pieces of practical fire early on to establish the look and feel of the scene. Then, a combination of motion capture, extensive matte painting and digital environments, and effects simulations, made the battles in the scene a reality.

FLEXIBILITY AND INNOVATION Image Engine is not one of the largest visual effects facilities out there in terms of personnel or otherwise, but it still manages to win shows with hugely complex VFX work. Much of that can be put down to creating efficiencies in how the studio operates its pipeline. Helping to keep things running smoothly are some in-house proprietary systems and plugins. An example is the open-source Gaffer framework developed at Image Engine. It’s part of a unified approach to making the most compelling characters possible and also includes Cortex, another open-source project that deals with the core libraries that plug into tools such as Maya, NUKE, Houdini and renderers.

I know without the tools that we have, the job would be a lot trickier Dave Morley, Image Engine

“The pipeline here is by far the best I’ve worked with,” declares Morley. “The flexibility and speed at which shows can be set up and completed is phenomenal. The in-house R&D team make it look easy. I know without the tools that we have the job would be a lot trickier!”

WORK AND LIFE AT IMAGE ENGINE Then, of course, there are the artists of Image Engine, who come from Vancouver and across Canada as well as from around the world. They’re known to be a tight crew, both with each other and with the clients they work for. VFX supervisor Thomas Schelesny is a recent hire at the studio and was looking for that kind of environment.





Martyn Culpitt has supervised visual effects for Image Engine on several high-profile films. Here he runs down things any VFX artist should look for on-set


HDRI for each environment, capturing the different lighting conditions. This should be placed where the creature or main CG component will be in the shot. A grey/chrome ball and Macbeth chart should also be used in the same conditions. If you have a creature moving through the environment, you need to move the grey and chrome ball along the path in-camera to best capture the lighting conditions, as well as multiple HDRIs along that path.


On-set photography of areas that may be used for creation of the environments, and also photogrammetry of objects that may be used.

“When I look back on my career,” he says, “I find myself remembering the friends I’ve made, the times we shared, and clients I’ve particularly enjoyed working with. The films themselves are simply an extension of these personal relationships. So, when I started talking with Image Engine, I wasn’t looking for a gig; I was looking for a personal and creative connection. Image Engine fit the bill.” The kind of employees Image Engine tends to take on are obviously the reason for this connection. Culpitt, who interviews and hires many of these employees, has a clear kind of person in mind when recruiting. “I look for people who work well in a team environment, and who want to push their work forward, as well as help in the creation of the shots and work,” Culpitt explains. “I think wanting to make the best work possible has always been a key to artists at Image Engine, and then having a solid work-life balance around that. We try to be a family and include everyone in all that we do.” That’s something echoed again by Schelesny. “Image Engine has a reputation for encouraging a long-term sustainable work/life balance, as opposed to the perpetual deadline-to-deadline emergency mode I’ve encountered elsewhere.”

Visual effects supervisor Dave Morley

I wasn’t looking for a gig; I was looking for a personal and creative connection Thomas Schelesny, Image Engine

“Also, every tool they use is seamlessly integrated into a common workflow, which allows us to generate many shots without drama or confusion among artists.” It helps that Image Engine is firmly embedded in Vancouver, a city where so much VFX work takes place and several studios have made their home or second home. This, says Dave Morley, makes for a healthy and competitive industry. “There’s a great industry presence and a lot of insanely talented people in Vancouver, with lots of TV and film being shot all the time. That means there’s a great opportunity for people to move around to different places – but who knows why you’d want to work anywhere but Image Engine!”

A break-out area of the Image Engine office

LIDAR of the environment and key measurements/survey to go along with it. LIDAR scans are becoming more standard practice as time goes on, but they are costly. Ultimately it helps so much to lock down a shot’s camera and the details within each scene – it saves so much time.


Lens grids for each individual lens used on the shoot – this is especially important when shooting with an anamorphic lens. 3D scans of people and props that will be used within the CG component of a shot.

Be shot aware – outside of the key components, which mainly pertain to capturing data, there is also a need to be aware of each shot and exactly what is needed for it. Being close to the on-set VFX supervisor and being able to answer difficult questions and make decisions on the fly is also key. This really comes from being on set often and learning from each experience.


Photos: Greg Massie

Witness cameras of each shot, as these help with tracking and placement of creatures or other CG components into the shot.

Inside Image Engine

>> When asked who we are and what makes us unique, we thought there is no one thing. Here are five...


We are Master Technicians. We are Bespoke Solutions. We are Expert Consultants. We are Uncompromising Quality. We are Intel Platinum Partners.

We are the Workstation Specialists. Frequency Enhanced Form Factor (SFF)













CHARACTER MODELLING Discover how to create realistic portraits, game-ready characters and stylised models with advice from triple-A character artists


For those working in and hoping to enter the videogames industry, character modelling is perhaps one of the most attractive and visually engaging aspects of any portfolio. It’s also perhaps one of the most divergent: creating a realistic human model is distinct from an alien with an exo-suit, for example. Having a great imagination and concept is only one half of the equation, too, as our contributing artists will testify. Baj Singh, lead character artist at Creative Assembly, is adamant that there should be a mix of incredibly high-poly models and game-ready characters and assets – but this balance shouldn’t be a detriment to the


artist’s portfolio: “I think it’s less about sacrificing details and more about learning how to work effectively within your limitations,” begins Singh. “For example, which parts of the character are going to be focused on in the game? How much attention would be shown to elements such as the feet as opposed to the face and upper body? What can be made out of solid geometry versus using alpha testing? When making a game-ready asset, all these issues are taken into account during the creation process to ensure that no details get missed out.” So get out there and learn how to upgrade your characters from these modelling masters!


Adam Sacco -------------------Freelance 3D character artist

NO SUBDIVISION ------------------------------With this project I wanted to experiment with some new workflows that are used on both game and animation models. I used them here to avoid using subdivisions. This was very handy when texturing, and I found making the parts was a lot faster. I used Displacement maps on the chest and shoulder plating and the head. The rest used Normal and Bump maps.

SMOOTH SURFACES ------------------------------I used chamfers, smooth groups and topology to smooth out the surfaces so subdivisions were not required. It works really well for hard-surface objects and keeps the poly count low. Also it helped to see the final model in the viewport without slowing the software down.

SCULPTING THE FACE ------------------------------The face was sculpted in ZBrush with topology and UVs done in 3ds Max. The final details were done in ZBrush for Displacement, and in Mudbox using skin scans for the Micro Bump. I just used the middle-resolution mesh inside 3ds Max along with V-Ray Displacement.

SKIN WRAPPING ------------------------------For rigging I used 3ds Max’s biped rig. I copied the low-poly base mesh and split it up into parts so each bone had an adjacent piece of mesh to skin to. I skin-wrapped the full low-poly base mesh and blended the weights with the Paint Weights tool to smooth them, and it worked pretty well. From this I just skin-wrapped all the final parts. Some parts required extra skinning or their own standard bones to pose correctly.
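Under the hood, the weight blending described here rests on standard linear blend skinning: each deformed vertex is a weighted sum of that vertex as transformed by every influencing bone. A toy 2D sketch with translate-only bones (hypothetical data; not 3ds Max’s actual Skin Wrap implementation):

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Linear blend skinning with translate-only bones:
    v' = sum_i w_i * (v + offset_i). Weights should sum to 1."""
    x, y = vertex
    out_x = sum(w * (x + ox) for w, (ox, _) in zip(weights, bone_offsets))
    out_y = sum(w * (y + oy) for w, (_, oy) in zip(weights, bone_offsets))
    return (out_x, out_y)

# A vertex weighted 50/50 between a bone that moves +2 in x and one
# that stays put follows half of the moving bone's offset.
print(skin_vertex((1.0, 0.0), [(2.0, 0.0), (0.0, 0.0)], [0.5, 0.5]))
# -> (2.0, 0.0)
```

Real bones carry full transforms (rotation and scale as well as translation), but smoothing weights with a tool like Paint Weights is exactly adjusting the `w_i` values in this sum so neighbouring vertices blend gradually between bones.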



Dan Roarty -------------------Lead character artist, Microsoft


Master of faces Dan Roarty discusses the use of Mudbox vs ZBrush

Can you tell us a little bit about your background and how you ended up becoming a professional character artist? Sure! I’ve actually always been obsessed with drawing characters [since I was] a kid. When I was 13 my parents bought me a 3D program called Truespace. From there I was hooked. I literally spent every minute I had trying to create 3D characters and usually realistic [ones]. I knew pretty much from the age of 13 that if I could get paid to make 3D characters, then that’s exactly what I wanted to do. How did you come up with the concept for Roxey November? To be honest, the inspiration came from some of the characters we created for Gears Of War 4. There were a few characters with the stylish ‘shave and a haircut’ look to them. It looked very appealing to me, so I decided to create a realistic portrait of a female with this type of hairstyle. I didn’t really have any specific reason or idea for creating it, I just hit a stride and decided to complete it. The name Roxey is my wife’s middle name, so I thought why not. You typically use Mudbox for sculpting your characters, so what did you learn from your use of ZBrush over Mudbox for Roxey? I found them quite similar in regards to sculpting; not a huge difference. The one large advantage of using ZBrush though is the DynaMesh tool and the ZRemesher. These are extremely handy in ZBrush. However, I still find texturing and surfacing a little easier and better in Mudbox, but this could also be because I’m not quite familiar enough with ZBrush. I do intend on using ZBrush much more (I used it quite a lot on Gears Of War 4). Moving forward, though, I think ZBrush probably will be my tool of choice.  Do you have any tips for creating realistic character portraits? I think it’s important to find a subject you are not going to get bored with. It’s also really important to show your work during a WIP phase to get some honest feedback. I tend to lean more towards portraits just because I love to do them. 
There also has to be a lot of patience when creating them. They can be time consuming and frustrating, but the end results are very rewarding. Focus on the expression of the face and ensure the hair and eyes look realistic. As a previous boss of mine said, “Take care of the little things, and the big things will take care of themselves”.



Alex Figini -------------------Senior concept artist, BioWare

3D CHARACTER CONCEPT ART Going beyond flat concepts

Who could imagine that just drawing for fun could lead to becoming a senior concept artist at a triple-A game studio? Well that’s exactly what happened to Alex Figini. Now based in Canada with over a decade’s experience, Figini is a concept artist originally hailing from London, England. His use of 3D for concepts has resulted in jaw-dropping, unique characters, but also a whole host of benefits to production workflow: “I’ve noticed the benefits in a production environment of creating concept sculpts as opposed to a 2D image. The application for the work goes far beyond what you can achieve with 2D. For instance, the models can be used to test in-game almost immediately, which is beneficial to many departments even beyond art. It also removes any room for misinterpretation of volume, something which is often a problem when a 2D image is translated to 3D.” Figini’s latest character is Black Lotus: Bio Hacker Assassin, a gang member in a cyberpunk vision of the future that is a hybrid of two motifs, old and new, hard and soft surfaces. He used a base human mesh to start with to help with the plausibility of the character. He then

Guillaume Mollé -------------------Freelance senior character artist

loosely adhered to the proportions, flows and volumes. “For a mech like this, I often try to mirror and abstract the shapes of the human body, keeping in mind areas that need to move, that compress or extend. “Being aware of the areas that need to move and knowing how movement can be affected, for instance [by] armour plating, is essential. I also covered the major joints with fabric and simulated in Marvelous Designer using Morph Targets for the many poses. This gave the impression of natural and organic movement and contrasted well with the hard-surface elements.” Perhaps the most distinctive aspect of the Bio Hacker is the hard-surface head that sits on top of the human mesh. ZBrush’s Clay Buildup and hPolish were used to build up volume, with Dam_Standard used to cut in details and establish design elements, and the Planar brush used for finishing hard surfaces and edging, alongside the Masking and Transpose tools. The process of creating Black Lotus: Bio Hacker Assassin is covered in Figini’s course ‘Concepting in ZBrush’ at Learn Squared ( courses/concepting-zbrush).


“It’s pretty rare in the freelance world,” says Guillaume Mollé of being able to choose a witch as his subject for a render promoting the launch of Allegorithmic’s Substance Source library. To start with, Mollé modelled the head for the witch “since it’s really important in the final composition and for a character in general.” He began with a sphere in ZBrush with DynaMesh on: “I have some base meshes for heads that I could have used, but I decided not to,” he explains. “Existing topology tends to bind you to the technical side of 3D in the design process. The hand, on the contrary, used a base mesh. For the body, I used a mannequin from ZBrush to pose it, then converted it to a DynaMesh object and worked on it. It’s really important to pick up the correct workflow for a piece if you want to work fast. The outfit was

designed with Marvelous Designer and imported into ZBrush for touchups.” Exaggerating some key forms, such as the hands and the witch’s nose, was next. “The size and the design itself helped to focus on the pose and really helped the stylised look of the character,” Mollé explains. “Same thing goes for the head and in fact the rest of the project, since you want everything to be consistent in the final illustration.” As Mollé’s portfolio is so varied, ranging from re-creating the likeness of French comedian Jean Rochefort to working on the model for the spindly form of Street Fighter V’s FANG, intensifying some parts of the witch’s form didn’t change his workflow too much. “I work on characters with different styles. But they all have in common a credible anatomy (I hope, at least). It’s a matter of knowing your anatomy and using it as a tool.”

The size and the design itself helped to focus on the pose and really helped the stylised look of the character










BREAKING INTO THE THIRD DIMENSION Guillaume Mollé explains how his passion for drawing 2D helps with his 3D models Mollé has always been drawing: “When I grew up I drifted toward digital drawing/painting first. I began 3D when software like 3ds Max became a bit more user-friendly, but as far as I remember, I’ve always been into creating characters both in 2D or 3D. “I always have a sheet of paper or a sketchbook in front of my screen. So most of my personal projects are indeed a sketch first. I don’t spend too much time on it since I know that the translation from 2D to 3D can change your design drastically, especially when it’s just a quick doodle.” Mollé will also do 3D sketches in the modelling process. For example with the hair, instead of going straight into FiberMesh, Mollé did a blockout first. “I found that sketching the important part of a character, without consideration for the technique you will use, helps you to really block the style and art direction very quickly. You will definitely save time knowing where you are headed.” Speaking of his influences and reference points for the witch, Mollé lists Carter Goodrich and Peter de Sève as his inspirations for their 2D

cartoony and stylised designs. For 3D, he says, “The first name that comes to my mind for 3D cartoony characters are Pedro Conti and Frederik Storm. These guys are surely an amazing reference to be inspired from.” But creating your own style as an artist isn’t as clear-cut as some may think, he says: “I think style finds you; you don’t really choose 100 per cent what your style is going to be. All the references you gather in your head and the things you like most become your style. People usually think style is visual but not completely I think. The ideas behind your visuals are really worth something in the definition of a style.”



Baj Singh -------------------Lead character artist, Creative Assembly


Baj Singh reveals how to make a videogame character with attitude

SCULPTING AND MODELLING SOFTWARE -------------------------------------------------It took roughly four months to complete Robin (longer than usual as we were fairly busy at work). The main programs included 3ds Max (for hard surfaces, retopology and unwrapping), ZBrush (for sculpting organic elements and PolyPainting), Substance Painter and Photoshop for texturing and setting up Alpha maps and, finally, Marmoset Toolbag to showcase the piece

DYNAMIC POSE ------------------------------

To pose Robin, I created a basic skeleton in Max, skinned the character to it and then did a bunch of test poses to see what would work well. I then took her into ZBrush to clean up any issues that were missed during the skinning process. I felt the final pose amplified her confident (and somewhat brutal) personality

SCULPTING SKIN PORES --------------------------------------Her body and skin in general were sculpted using ZBrush’s default brush set. I use Standard, Move and ClayBuildup to achieve most of the forms, while the skin pores were added using the Standard brush with DragRect enabled with custom Alpha maps for the different pore sizes and spots

CONVERTING THE MESH ---------------------------------------------I used a variety of tools in Max to build and optimise the game mesh, allowing it to be used in a real-time environment such as a game engine. Max has a robust set of Retopology tools that allow you to quickly create strips of polygons that conform to the high-poly mesh perfectly

LAND A JOB WITH GAME-READY ASSETS
Baj Singh tells us why artist portfolios need more than just high-poly sculpts
“We get a lot of applications for character artists who don’t actually have any game-ready assets at hand. If you want to work in the games industry – specifically as a character artist – then you have to understand that a high-poly asset is only half of the process. Retopology, unwrapping and texturing need to be present to really show that you understand the entire process.”


RETOPOLOGY -------------------------------To make the character game ready, the high-poly elements had to be created first. Once I had everything built and sculpted, I organised my meshes in ZBrush, decimating and transferring them into 3ds Max where I use its toolset to retopologise the mesh



Incredible 3D artists take us behind their artwork

PARTICLES This was research and development with Volume Breaker in Thinking Particles. I wanted to use this shot as a part of my future showreel, so I developed the shot with V-Ray and After Effects. It is a single frame render, which I took from the four-second shot. It took three weeks to create this animation.


3DArtistOnline username: killswitch Software 3ds Max, Thinking Particles, V-Ray, After Effects, Photoshop



Expert advice from industry professionals, taking you from concept to completion

All tutorial files can be downloaded from FileSilo.


ALEX HINDLE Mr Swirls, 2016 Software

Develop abstract character concepts Use MODO and ZBrush to create comical, caricature-style renders to a professional standard

MODO, ZBrush, Marvelous Designer, Photoshop

Learn how to
• Sculpt cartoony characters
• Rig and pose a character in MODO
• Use real-world clothing to dissect patterns for Marvelous Designer
• Create texture maps for skin shading in MODO
• Create stylised hair in MODO
• Set up a simple scene

Concept
Edward Swirls, a veteran ice cream man. He’s a big old softie. I set out to create a comical and abstract character, someone misshapen and endearing.


Over the next few steps, we’ll look at the stages involved in creating this cartoonish character. Taking the familiar figure of an ice cream man, we’ll look at ways we can abstract him through a play on associated shapes and texture. Using an initial loose 2D concept, we will first block out and find our character in ZBrush. As he will be clothed, we’ll concentrate our detailing efforts on the main visible parts, such as the head and hands. After that, we’ll hop over to MODO, where rigging and posing will take place, readying him for clothing in Marvelous Designer. With detailing then taken care of in ZBrush, and texturing in Photoshop, we’ll bring everything together in MODO for final scene building, lighting, shading and rendering.


Block in the body To establish a body shape,

append a Sphere3D SubTool and block out a torso. For the rest of the body we’ll append a ZSphere tool, draw (Q) and pull out (W) some arms and legs. We’ll keep the body in a neutral A-pose for rigging later. Once we’re happy with proportions, convert the ZSpheres to a Polymesh. On the torso SubTool add some hands from the IMM BParts brush, then go to SubTool>Groups Split to separate them to a new SubTool. Adjust their shape to fit the overall design. Append a Cube3D, DynaMesh it and block in some teeth as a placeholder.


Block in the head The low head asset from MODO provides a good base mesh. Let’s export an OBJ and bring it into ZBrush. Having your concept as an Image Plane or in a background app can help in the early stages of a block out. Being able to dial down the canvas opacity or the model directly is really useful, so use it to overlay the model and help determine some proportions. In these initial stages we’ll also stay in Symmetry mode until we’re ready to break it.


Landmarks Let’s subdivide up a few levels and establish some primary features. Using ClayBuildup and the Move brush, place the largest facial landmarks first, such as eye sockets, nose, cheekbones, ears and jowls. Add a couple of spheres for eyeballs and sketch in some eyelids. Use Dam_Standard to score in some lip shapes. Nothing needs to look refined because we’re still searching for the right proportions for our character; it might look sketchy but that’s okay. Mask out a hairline and eyebrows, extract SubTools and establish an overall hair shape.






Download from FileSilo
• Photoshop brushes
• ZBrush scene file
• Texture maps
• Tutorial screenshots




Retopology With our blocked out SubTools in place,


let’s create workable models from them. ZRemesher can work, since this model won’t be animated, but for the face we’ll retopologise in MODO. Bring in a decimated copy of the head and torso and switch to the Topology tab. Animation-ready topology isn’t necessary here, but an edge flow that holds detail nicely will benefit us later on. With the decimated mesh in the background, draw out an initial polygon using Pen, then Shift+drag edges out with the Topology pen. Retopologise the face and torso as a single mesh.


Break symmetry This is where we can really find our character. One of the ideas for Mr Swirls is that his head resembles a melting whipped ice cream. To introduce this characteristic, his facial features will need to slope. Transpose Master is great for the task of breaking symmetry on all your visible SubTools at once, while the Move tool is ideal for shifting stuff around. Let’s make him lopsided, but while skewing features also try to maintain a balanced relationship between them. Transfer TposeMesh to SubTools and continue to develop the character sculpt.



Pose for Marvelous Designer Export OBJs of all SubTools at their lowest resolution and bring them into MODO for rigging and posing. Draw out a skeleton and heat bind each mesh individually. It may help to also block in part of your scene that you want the character to interact with. We will bring the posed model into Marvelous Designer so we’ll need to animate from a neutral pose to an end pose. Keyframe all joints on the first frame, move the timeline on and set the character’s pose. The Pose tool is great for this. Export an OBJ of the neutral pose, freeze the end and export that too.


Make clothes in Marvelous Designer Import

the neutral pose and arrange patterns for the coat. The thinking for the coat was that it also had melting ice cream characteristics. For this purpose we’ll make it oversized, especially in the sleeves, as this will simulate some nice, big draping folds when posed. Once we have the coat arranged and simulated, we can switch from Simulation to the Animation editor. Import the OBJ pose from earlier, and load it as a Morph Target with a Morphing Frame Count of 50 or higher. The resulting posed clothing can be exported as an OBJ with UVs and imported into ZBrush.
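Under the hood, a morph target of this kind is just a per-vertex linear blend from the neutral pose to the end pose over the Morphing Frame Count. A minimal sketch of that idea in generic Python (our own function, not Marvelous Designer’s API):

```python
# Hypothetical sketch: each vertex is linearly interpolated from the
# neutral pose to the end pose over the morphing frame count.

def morph_vertex(neutral, end, frame, frame_count=50):
    """Linearly blend one vertex position between two poses."""
    t = min(max(frame / frame_count, 0.0), 1.0)  # clamp to [0, 1]
    return tuple(n + (e - n) * t for n, e in zip(neutral, end))

# At frame 25 of 50 the vertex sits exactly halfway between the poses.
print(morph_vertex((0.0, 0.0, 0.0), (2.0, 4.0, 0.0), 25))  # (1.0, 2.0, 0.0)
```

A higher frame count simply spreads the same blend over more simulation steps, which gives the cloth solver more time to settle as the body moves.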


Polypainting in ZBrush Switching to the Skin material in ZBrush and laying a foundation of colour early on can really help direct a character’s design. During the blockout stages, use Polypaint to get a feel for how the design works in colour. Assign a hotkey to ‘Colorize’ to switch Polypaint on and off during your sculpt. Your Polypaint can be the basis of your texturing workflow later in the process.



Finding the right pattern Marvelous Designer works by laying out cutting patterns to then stitch and simulate on your model, and you’ll find a lot of patterns online. Actually having a version of the clothing to hand can really help in understanding how patterns connect, how their shape affects draping and where to introduce folds. For Mr Swirls a similar coat was sourced on the cheap from eBay. Photo references were taken, from which a pattern was designed, firstly as a layout in Photoshop and then drawn out in Marvelous Designer.




Start UV mapping Before any further detailing,

now is a good time to get UV maps in place. ZBrush’s UV Master will do the trick in some cases, but MODO in combination with UV Master works just as well. For the head and hands, MODO was used to first select the seam edges and UV Unwrap. The resulting mesh would be imported back to ZBrush where going to UV Master>Use Existing UV Seams would organise the UVs nicely. The coat already has generated UVs, so they can be further organised in MODO.


Detail in ZBrush Now we can sculpt in some tertiary details into the head and hands. The Clay Buildup brush with a soft, round Alpha is good for building up fatty tissue. Use it in combination with Dam_Standard to establish wrinkle direction and folds. Alpha brushes for the skin details were extracted from photography of melted ice cream. The coat will need a degree of detailing in order to enhance and stylise what’s there and add some extra folds and wrinkles.


Bring it together in MODO We don’t have to rely on Displacements to capture the detail for this character. Instead, let’s use Decimation Master in ZBrush to generate and preserve detailed lower-resolution meshes – around 1 to 2 million polys for the head and hands will do. Import them into MODO. We’ll generate a Displacement map for the coat; over in MODO, use Convert To Multiresolution to bake the Displacement details into a multi-resolution mesh. Bypassing Displacement maps in this way will reduce preview and render times while maintaining sculptural detailing.





Texture skin We’ll take our Polypaint from earlier as a


base for our skin textures. Also generate a Displacement map in ZBrush and export both images. In Photoshop, open the base texture and overlay the Displacement, as this will give us a skin-creasing effect, which will need adjustments in some areas. Using Overlay and Soft Light modes, begin to layer up your textures. We’ll need four skin texture maps: Epidermis Diffuse, Subsurface Color, Upper and Lower Dermal Subsurface Color. MODO’s Skin material has default colour settings for each, which is helpful as a colouration guide.
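Blend modes like Overlay and Soft Light are simple per-channel formulas, which is worth knowing when judging how the Displacement overlay will crease the skin texture. A sketch of the standard Overlay formula plus a common Soft Light approximation (the “pegtop” variant, an assumption on our part; Photoshop’s exact Soft Light is piecewise and slightly different):

```python
def overlay(base, blend):
    """Photoshop-style Overlay for one channel in [0, 1]: darkens
    where the base is dark, brightens where the base is light."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

def soft_light(base, blend):
    """Pegtop soft-light variant (an approximation of Photoshop's
    Soft Light): gentler than Overlay, identity at blend = 0.5."""
    return (1.0 - 2.0 * blend) * base * base + 2.0 * blend * base

# A mid-grey blend leaves the base untouched in both modes.
print(overlay(0.25, 0.5), soft_light(0.25, 0.5))
```

This is why a mid-grey Displacement export acts as a neutral layer: only values above or below 0.5 push the skin colour around.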


Make it hairy We’ll create a hair cap from our lowest subdivision head mesh. Then, in the Topology tab, use the earlier hair block as a base to draw B-Spline curves onto it. These will be the guides for the hair. Copy and paste the guides to the hair cap and turn Snapping on to snap the guide ends to the cap. Assign a new material to the base and add a fur material. In the Guide Options Properties, change Guides to Range. Uncheck Use Guides From Base Surface. Open a preview window and adjust the guide options until the hairs follow the guides.


Add eyeballs and teeth Similarly to the hair, create

caps for the eyebrows, chest hair and stubble. This time use Hair Tools>Create Guides and sculpt them into shape. For the eyeballs we’ll adapt MODO’s Eyeball-Realistic preset, adding our own iris texture and yellowing the sclera to add a bit of age. Replace the earlier teeth placeholder with some box-modelled versions, UV them and paint some yellowish and brown decayed textures. Duplicate the texture, set it to Subsurface Color and dial up the Subsurface amount to emulate an enamel material.


Sourcing abstract textures Sourcing interesting texture imagery to work with can really help in defining the style of your work. Try to use alternative textures to create Skin maps. For Mr Swirls there were some interesting textures extracted from photography of ice cream. The raspberry sauce made for some great rosy cheeks and blemishes, and bubbles in the melting ice cream looked a lot like moles. Alpha skin brushes were also sourced from the ice cream shots and put to use detailing the skin in ZBrush.



Texture the coat It’s a white coat, so we’ll want to add a bit of grubbiness to take some of the CG sheen off. Let’s export the UVs as an EPS and open in Photoshop. Layer in an off-white colour, followed by some yellow-ish grease splodges. A rust texture, inverted and desaturated, adds a nice overall grubby texture. Think about where his hands might rub at his coat after serving, and then add in some smeared and dripped raspberry sauce for good measure.


Make a string vest We’ll first model a guide for the vest. In Topology mode, model in where the vest will be visible under the coat. Unwrap some UVs, take them into Photoshop and draw in a criss-cross pattern for the vest. We’ll now use this as a guide to model the vest’s strings. Again, in Topology mode, draw out the angled segments of the vest. Select all the polys and bevel with Group Polygons unchecked. Delete the remaining selected polys and run the Thicken tool. Run Auto Retopo for a more organic mesh to work with.


Build the ice cream van Now we’ll model and

texture the van chassis, replacing any earlier scene placeholders. Remember to model only what will be visible in the shot. Beginning with a cube primitive, subdivide and cut window shapes and model the window frames and dividers. Once in place we can unwrap UVs and then look at reshaping the van with MODO’s Sculpt tools to better fit the character’s cartoon style. Create texture maps in Photoshop. Paint in dirt accumulation and some tatty old sign writing. In MODO, add a limescale texture to the windows and set it as Transparency Amount and Transparency Color.






Compose and light As 3D isn’t


always a linear process, these last steps could begin as soon as we have all the elements in MODO. Compose the shot, choose a camera angle and lens, and lock it off. Lighting for this character and scene will be quite simple. Let’s go into the Environment presets and add Ditch River, as this adds a nice outdoor HDRI setup and a directional sun light. Rotate it if need be. The only other two lights will be area lights, acting as back and rim lights. You can exclude all but the character from these lights.

Alex Hindle A freelance 3D artist from Manchester, England, Alex divides his time between 3D character modelling for animation, 3D design and visualisation projects and animation work. From an early age, creating characters has always been a passion.


Render Adding an additional camera to your scene is handy for test renders. MODO’s

fast preview render capabilities allow for a lot of freedom in quickly developing the final look of your piece. Use this additional camera to pick out areas of focus as the character develops. Setting up a Surface ID Render Output can help us if we want to colour correct specific parts individually afterwards. Finally, let’s render our scene, set the Indirect Bounces to 2 to add some extra light within the van and hit Render.


Moon Man, 2016 ZBrush, MODO Personal character design. He was a little play on the ‘man in the moon’ tradition. Imagined as a moonish, solitary figure staring into space.


Construct a game-ready battle mech The image style is dark and grim just like the mech itself. Discover how to make a mechanical monster ready to strike its prey


Inspired by the recent battle mech trend in videogames such as Call Of Duty: Infinite Warfare and Titanfall 2, this article covers all the essential modelling and texturing steps for a game-ready battle mech. The two main software packages used are 3ds Max 2017 and Substance Painter 2. We’ll cover base shape refinement as we work on our low-poly model, learn about instances and the Symmetry modifier, and see how to create realistic-looking edge panelling on the high-poly model. Learn how to bake by naming convention for the best texturing experience in Substance Painter 2 as we continue to create a stunning-looking texture with plenty of wear and tear on it. At the end we’ll take a look at the Export settings to get our textures ready for creating a striking render in Marmoset Toolbag 3.





Block it out At the beginning we’ll start with a rough


block out, which we’ll detail out more and more as we go. It doesn’t matter too much which element we tackle first, but the main body makes the most sense to get the initial feel for our mech. The blockout is the base on which we’ll build our low-poly model. Let’s keep the poly count low in the beginning to have maximum flexibility in shaping it until we like it.

TIM BERGHOLZ Rhino Mech, 2016 Software 3ds Max 2017, Substance Painter 2, Photoshop, Marmoset Toolbag 3

Learn how to
• Make a base blockout
• Refine the silhouette
• Use instances to save work and time
• Create the low-poly model
• Use different modifiers to achieve a high-poly model
• Unwrap UVs
• Bake support maps
• Texture game-ready models in Substance Painter
• Create wear and tear
• Export textures to specific game engines
• Render out your work in Marmoset Toolbag 3

Concept
The mech is entirely my own creation, inspired by movies such as RoboCop and recent videogames such as Titanfall 2 or the latest Call Of Duty.


Refine the silhouette Once we have the main

elements blocked out with simple geometry it’s time to take a step back, squint our eyes and look to see if we are headed in the right direction. Does anything look out of proportion? Now would be a good time to address it. Consider adding a pitch-black material to your geometry so far. That will help you to see the silhouette without any distractions.


Instances Our mech has a couple of identical pieces,

such as the feet and the legs. The best way to approach them is to create one element first and then copy it as an instance. The beauty of that is that once we put them in place and want to further change them, those changes will automatically be applied to our other instances. That saves us plenty of work and allows us to concentrate on one piece only.


Symmetry modifier As we get more into the

detailing of our geometry, it’s time to use the Slice tool in the vertical centre of our model. Cut everything in half and apply the Symmetry modifier, which lets us do all the work on one side only and automatically applies it to the other side. Our mech is perfectly symmetrical, which makes that step easy. If you find yourself with an asymmetrical concept, it is still good practice to do as much work as possible with the Symmetry modifier and only collapse it at a later stage, when it makes sense to make your custom changes.

Baking time A good practice is to bake the Normal map first, and only once everything is quality-checked, commit to the rest of the maps. Anti-aliasing has a huge impact on the baking time. Usually it’s good to bake in two separate passes, all maps aside from the AO and Thickness map (which take the longest) in 8x8 AA and then a second bake for these named two in 4x4. The difference is barely noticeable but a lot of time can be saved.
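As a rough back-of-the-envelope check on why this split helps, assume the “8x8” setting means 64 anti-aliasing samples per pixel and that bake time scales roughly linearly with sample count (both simplifications on our part):

```python
# Hypothetical cost model: bake time ~ anti-aliasing samples per pixel.
def samples(aa):
    """'8x8' anti-aliasing -> 8 * 8 = 64 samples per pixel."""
    return aa * aa

full_quality = samples(8)  # 64 spp, used for the fast maps
split_pass = samples(4)    # 16 spp, used for the slow AO/Thickness maps
print(full_quality // split_pass)  # the slow maps get roughly 4x cheaper
```

Since AO and Thickness dominate the total bake time, cutting just those two maps to a quarter of the samples saves most of the wait for a barely visible quality difference.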





Download from FileSilo
• Tutorial screenshots







Make the high-poly Once we are happy with our progress from a low-poly model perspective it’s time to create a copy of all our geometry and paste it into a new folder that we’ve named ‘high-poly’. Make sure you don’t copy it as an instance, as we want to keep our current-state low-poly geometry. For the high-poly model we can make a lot of use of floaters. In combination with Smoothing groups, the Chamfer modifier and the TurboSmooth modifier, we’ll be able to create our high-poly model in record time.


Edge panelling Usually at the very end of the high-poly modelling phase comes the edge panelling. This is a crucial step to get a lot of realistic-looking depth in our model, which we want to have in our Normal map later on. A good practice is to make use of the already existing edges. Pick them one by one while holding Shift to make a selection where you want your panelling to appear, which is usually at the border of two elements. With our selection active, we can then extrude in by a small margin and get a precise machine-cut look that represents our panelling.



Unwrap UVs After the high-poly modelling phase

it’s time to unwrap all our individual UV shells. There are a lot of different approaches to doing that and a proven way can be to start with a Flatten Unwrap. Right after that we can start stitching all the pieces back together how we want them to be laid out. Later on, we’ll make use of the Packing tool that 3ds Max comes with. Make sure to leave enough space in between your UV islands, as we want to prevent intersections at all costs. At the end of unwrapping, we’ll use the TexTools script to generate Smoothing groups based on the unwrapped UV islands. That guarantees the perfect baking result.
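What the TexTools step does can be pictured as giving every UV island its own smoothing group, so shading only smooths across faces that share an island. A hypothetical sketch of that mapping (the data layout and function are ours, not the TexTools API):

```python
def smoothing_groups_from_islands(face_island):
    """Map every UV island to its own smoothing group ID (1-based),
    so faces smooth together only within their island."""
    group_of = {}   # island name -> group ID
    result = {}     # face index -> group ID
    for face, island in face_island.items():
        if island not in group_of:
            group_of[island] = len(group_of) + 1
        result[face] = group_of[island]
    return result

faces = {0: "head", 1: "head", 2: "arm", 3: "arm", 4: "torso"}
print(smoothing_groups_from_islands(faces))  # {0: 1, 1: 1, 2: 2, 3: 2, 4: 3}
```

Aligning smoothing splits with UV seams matters because a Normal map bake encodes shading relative to the low-poly normals; a hard edge inside a UV island would otherwise produce a visible gradient seam in the bake.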

Copy, paste and hide 3ds Max makes it very easy to have a non-destructive workflow thanks to the Modifier Stack. It enables us to toggle different effects on our model without having to worry about collapsing everything down while we still work on it. Regardless of that, it is a good idea to copy your current mesh every once in a while, then hide that copy and continue working. That way you can always roll back to an earlier version if necessary and we don’t need to save out a ton of different Max back-up files.





Smart masking The key to a non-destructive workflow is based on masks. The Smart Masks in Substance Painter enable you to drag a predefined mask onto any new material and give it a specific look. You can also create your own mask effects and save them as a Smart Mask, ready to reapply to another element of a model.


Export for bake In earlier times, people would explode the mesh out into the scene. The reason for this was that otherwise the rays projected onto our meshes during the bake would intersect with each other, resulting in Normal map intersection errors, which are ugly to look at. The good news is that nowadays we can keep everything in the same place by simply naming every element with _low and _high for our low and high-poly elements. Later on in Substance Painter we’ll bake based on mesh name, which bakes every piece individually, so we can keep our low-poly model in one place.

Bake the essential maps After exporting our high-poly and low-poly meshes, it’s time to boot up Substance Painter, where we start by adding our low-poly model. In the bake window we’ll add our high-poly to it and bake out our so-called support maps. All the maps that we initially bake out will help us get the maximum out of our upcoming texturing work: the different generators and filters in Substance Painter require these maps and produce their effects based on them.
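The naming convention boils down to pairing each `_low` mesh with the `_high` mesh sharing the same base name, so each pair is baked in isolation. A small illustrative sketch (the function and data are hypothetical, not Painter’s implementation):

```python
def pair_bake_meshes(names):
    """Pair meshes by base name using the _low/_high suffix
    convention used for baking by mesh name."""
    low = {n[:-4]: n for n in names if n.endswith("_low")}
    high = {n[:-5]: n for n in names if n.endswith("_high")}
    # Only meshes with both halves present form a bake pair.
    return {base: (low[base], high[base]) for base in low if base in high}

meshes = ["arm_low", "arm_high", "head_low", "head_high", "antenna_low"]
print(pair_bake_meshes(meshes))
# {'arm': ('arm_low', 'arm_high'), 'head': ('head_low', 'head_high')}
```

A mesh without a matching counterpart (like `antenna_low` above) simply never receives projected detail, which is a common cause of “empty” patches in a first bake.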


Clean up Normal map errors In nine out of ten

cases you will discover errors in your initial bake. The more complex the geometry is, the more likely it is that some parts didn’t bake properly or you may even discover some parts that require some model adjustment. Whether it’s in the unwrap or on the mesh itself, now is the time to address these changes. We should only start texturing once we have the perfect bake result in front of us. This is the last control check.


Different material types With our textures all baked properly, it is now time to get started on the actual material setup. Just as we started out by modelling our basic shapes, it is the best approach to lay out all our different material types that we want to use. In our case, that is a yellow-coated metal with a medium-range Roughness on it, a matte dark metal and a shiny chrome-type metal. All these materials come out of the box in Substance Painter. The Polygon Fill tool enables us to mask these materials to only the regions where we want them to appear.







Include an Emissive channel We want to have cool-looking laser eyes on our mech, so in order to get the glow we’ll have to add an Emissive channel under our TextureSet settings. This enables us to make any material glow. All we have to do is enable it in the material and mask it to wherever we want it to appear as glowing. This step can still be counted as setting up our base layers.


Build some wear and tear with oil smears Once the base layers are all in place, it’s time to have fun with the many powerful generators that Substance Painter comes with. One that always creates great effects on machine-looking objects is the MG Leaks generator. Let’s drag a fresh fill layer into our scene, change it to black, make it very glossy and add a black mask to it. The mask will make it disappear until we add the generator onto it. Now we can see oil leakage forming up, and we can control the length, variety and many other parameters until we achieve something we like.


Add metal damage After adding the oil smear, we’ll add another fill layer into the scene, this time as a chrome-looking metal. With the same procedure we just used to create our oil, we can now use the MG Metal Edgewear generator, which looks very good on top of our black and yellow metal and gives it a lot of contrast. Additionally, we can drag a few Smart Materials into the scene, such as Dust. This generates tiny speckles on our mech and makes it look like it’s seen some heavy service.



Add text and symbols Substance Painter comes with plenty of Alpha symbols, which we can apply quickly and easily through the Projection tool. Specific text is best created in Photoshop, saved out as an Alpha PNG and simply dragged into the shelf of Substance Painter where we can then project it onto our model.

Post-processing in Photoshop To make a stunning-looking render even better, it’s sometimes worth loading it into Photoshop. Before you do that, create two renders: one with a background and one with a transparent background that contains only the model. Search “dirt lens” in Google Images and pick any high-resolution image that you like. Add this image in between the transparent and solid renders and set it to Screen mode. Photoshopping in these particles adds further depth to the render.
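Screen mode is why the dirt-lens trick works: per channel it can only brighten, so the black body of the dirt image disappears while the bright specks survive. A sketch of the standard Screen formula:

```python
def screen(a, b):
    """Screen blend for one channel in [0, 1]. The result is always
    at least as bright as either input, so black pixels in the dirt
    layer change nothing while bright specks lift the render."""
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.2, 0.0))  # black dirt pixel: render value passes through
print(screen(0.2, 0.9))  # bright particle: the pixel is lifted strongly
```

Placing the dirt layer between the solid and transparent renders keeps the particles behind the mech itself, so they read as atmosphere rather than as smudges on the model.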





Export textures for target engines After a final polish pass, it’s time to go to Export

Textures. In here we can export our textures to almost every possible target engine out there. Substance Painter will automatically create the textures based on the needs of the chosen engine. In our case, we want to export for Marmoset Toolbag where we’ll be able to create stunning-looking renders.

Tim Bergholz Tim started his career at Crytek in Germany. After moving to Canada and working for Ubisoft, he started his new job as senior weapons artist at Digital Extremes. In his spare time he writes comprehensive tutorials in which he shares his skills with others.


Ultimate Weapon Tutorial, 2015 3ds Max, Substance Painter Tim’s most comprehensive tutorial so far, this was a 17-hour course that teaches the full creation process of a triple-A first-person weapon without skipping any steps.


Blade Tutorial, 2016 3ds Max, Substance Painter The kukri tutorial is Tim’s latest tutorial and, just like the grenade, it’s available as a free download on his website or free to watch on the ChamferZone YouTube channel.


Light and render in Marmoset Toolbag 3 Marmoset just got so much more

exciting with its excellent version 3 release. Let’s drag our low-poly model into the scene and plug our exported textures into the materials. If anything looks odd compared to Painter then you’ll want to make sure to invert your Normal map’s Y channel and invert the Roughness – that should fix it. For the best possible render, add a few lights to the scene. Turn on Global Illumination and Local Reflections. In our Camera settings, we’ll add some bloom, lens flares, depth of field and a bit of sharpening.
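Both fixes mentioned above are, at heart, the same one-liner: flipping a Normal map’s Y (green) channel and inverting a Roughness map each replace a channel value with its complement. A minimal sketch, assuming 8-bit channel values (our own helper, not a Toolbag feature):

```python
# Hypothetical sketch: per-pixel complement of one 8-bit channel,
# usable both for a Normal map Y-flip and a Roughness invert.
def invert_channel(pixels):
    """Invert one 8-bit channel: v -> 255 - v."""
    return [255 - v for v in pixels]

green = [0, 64, 128, 255]
print(invert_channel(green))  # [255, 191, 127, 0]
```

The mismatch exists because some tools store tangent-space normals with Y pointing “up” (OpenGL convention) and others with Y pointing “down” (DirectX convention); the flip converts between them.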


Ultimate Grenade Tutorial, 2015 3ds Max, Substance Painter The grenade tutorial currently has over 200,000 views on the ChamferZone YouTube channel, where it’s available free to watch for everyone.


Techniques Our experts


The best artists from around the world reveal specific CG techniques

3ds Max, V-Ray Patrick van Rooijen Patrick is a 3D artist with a passion for movies, games and comic books – Batman in particular!

Reinier Reynhout Reinier is a professional 3D artist with a long-standing passion for both photography and movies.

Houdini Thomas Hall Tom works at Escape Studios, one of Europe’s leading VFX, animation and game art academies.

NUKE Paul Champion A trainer at Bournemouth, Paul splits his spare time between freelance work and shopping for art.

Realistic procedural shading and texturing




Download from FileSilo
• Tutorial screenshots
• HDRIs
• Shaders


To illustrate the workflow of texturing and shading, we wanted to create a tutorial based around a helmet. We’re not big fans of creating UV maps, so we did it all in a procedural workflow driven by custom-made textures. This way it’s easy to reuse, easy to change and suitable for both low and high-res imaging. We experimented with a lot of different workflows, and every time we bumped into unwrapping and a lack of texture detail. In order to combat this, we decided to switch to a procedural workflow. There are multiple advantages to this approach, which we will explore over the next few steps. The shaders are layered, so the amount of dirt or damage is easy to control without changing textures in Photoshop – so if you want more dirt but less damage, this can be easily adjusted within the shader itself. On top of that, you’ll have a good setup for your next project! For this purpose the shader is built for a helmet material, but if you change the base material to car paint you could use it for a dirty, damaged car or similar. We’ve also supplied the setup for this scene, which you can download from FileSilo. Hopefully this will be enough to get you going!


Make a concept We wanted to create a dirty and

damaged, procedural texture-driven shader and we needed a subject to apply it to. Since we saw quite a lot of beach shots in the latest Star Wars movie, we decided to create an image that feels like it could belong in that particular fictional universe. It is intended to show the aftermath of a fierce battle.


Model the scene We started modelling the helmet

by setting up orthographic views from the front and side of the helmet to use as reference. Afterwards we made a plane and, using Edge Extrusion, quickly created the basic shape of the helmet, keeping the overall topology in mind without being too concerned with it. When we were done with all the basic shapes, we finalised the topology to get it to smooth the way we wanted using the TurboSmooth modifier. If needed, you can apply one more level of TurboSmooth to get more geometry to work with for extra detail. The water and ground surface is modelled with the Waves texture in a high-res displaced poly plane until it looks nice enough. Just stack new Displace modifiers with other wave seeds to get enough randomness. The water splashes and droplets are models that we put in the scene for a stronger depth of field and motion effect. The fish has been modelled to slightly resemble the Rebel Alliance logo.


Create your own HDRIs For this tutorial we decided to create our own HDR. We did it with common gear, so it is easy to re-create. We used a Samsung Gear 360 for the HDRI, Canon 70D for textures and backplates and a living tripod named Patrick. We went out at 12.30pm so we had a nice, high winter sun. We took 13 pictures with the Gear 360, stitched them on a mobile phone and cleaned them up in Photoshop so you can’t see the top of Patrick’s head and shoulders. Normally we’d use more expensive equipment and this process would take way longer, but for what we had in mind this would do the trick. It all happened in 20 minutes, including the drive to the park.
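Stitching aside, merging bracketed exposures into an HDR comes down to weighting each shot’s radiance estimate per pixel. A simplified sketch of the idea (the hat-shaped weight is an assumption; real merges also calibrate the camera response curve):

```python
def merge_hdr(brackets):
    """Merge bracketed shots into one HDR radiance value per pixel.
    brackets: list of (pixel_value_0_to_1, exposure_time_seconds).
    Each shot estimates radiance as value / exposure_time; a hat
    weight trusts mid-tones more than clipped shadows/highlights."""
    num = den = 0.0
    for value, t in brackets:
        w = 1.0 - abs(2.0 * value - 1.0)  # 0 at the extremes, 1 at mid-grey
        num += w * (value / t)
        den += w
    return num / den if den else 0.0

# A mid-grey pixel at a half-second exposure implies a radiance of 1.0.
print(merge_hdr([(0.5, 0.5)]))
```

This is why a bracketed set beats a single shot: each pixel is reconstructed from whichever exposures captured it without clipping.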




Set up the lighting in 3ds Max In 3ds Max we

created the V-Ray dome light, added the VRayHDRI texture and loaded the HDRI we just made in the park. Increase the Overall Multiplier to boost the sun and change Inverse Gamma to 0.9 to get a bit more contrast in the shadows and highlights. With colour correction we reduced the amount of blue in the HDR. Rotate the dome to the correct angle corresponding to the backplate. Add a Material Override with a standard V-Ray material to check your lighting setup. The camera settings in 3ds Max are the same as our real camera, so we used the same aperture, shutter speed, ISO and lens.
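Matching the CG camera to the real one can be sanity-checked numerically with the standard exposure value formula, EV100 = log2(N^2 / t) - log2(ISO / 100); identical aperture, shutter speed and ISO give identical EV100. A small sketch (our own helper, not a 3ds Max feature):

```python
import math

def ev100(f_number, shutter_s, iso):
    """Exposure value normalised to ISO 100 from aperture (f-number),
    shutter time in seconds and ISO sensitivity."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125s and ISO 100 gives an EV100 of about 12.97,
# typical for bright winter sun.
print(round(ev100(8.0, 1 / 125, 100), 2))  # 12.97
```

If the physical camera metadata and the CG camera compute the same EV100, the render’s overall brightness should sit close to the backplate before any grading.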




Shade the base material For the basic material we


created a new VrayMtl, set the BRDF to GGX and because we knew we wanted to do everything procedurally, we added a Composite map to the Diffuse channel so that we could build our Diffuse texture in there. We added a VrayColor with the base colour of our helmet. After that, we gave the shader some slight reflections and lowered the Glossiness a little bit to use as a base to start from.


Shade the dirt Now that we had the basic shader set up, we decided to tackle every part of the shader one step at a time, so we started with the Diffuse channel. We added a second layer in our Diffuse Composite map and gave it another VrayColor, using a darker colour for the dirt. To keep the dirt from appearing everywhere we added a VrayDirt in the Mask of Layer 2 in our Composite map. For the settings we adjusted the Distribution to 0.1 to get the dirt closer to the edges of the model. We wanted to get a bit of randomisation in our dirt, so in the Radius channel we added a VrayTriplanarTex with a Dirt texture in the Texture slot. After that we adjusted the Radius of the VrayDirt and the scale of the Triplanar map until we were happy with the result.



Add more dirt Dirt in just the edges wasn’t enough

for this model, so we added some more layers to the Composite, setting them to Multiply, and using VrayTriplanar texture maps with black and white images to give the shader a dirtier look. We adjusted the Opacity of the layers to make the effect subtle but still visible.


Reuse maps Now that we had most of the Diffuse finished, we moved on to the Reflection/Glossiness and Bump channels of the shader. We wanted the dirtier parts of the Diffuse to be less reflective and glossy, so we copied the work we'd done there to those channels and adjusted the Opacity values where needed. For the Bump we created another Composite map and added a neutral 50% grey colour in the first layer. Again, we reused the maps we made before so the Bump only shows up in the parts that need it.




Use procedural shaders to save time We try to work procedurally as often as we possibly can in order to save time on our next projects. The setups are easy and extremely adjustable, and it avoids the painstaking UV-unwrapping process.



Create weathered edges and damage To add damage, we added another layer in the Composite slot in our Diffuse Map channel, using a VrayColor to add a black colour for the metal material we want to come through. As a mask, we added a VrayDirt with the options Consider Same Object Only and Invert Normals ticked. We also swapped the Occluded and Unoccluded colour slots, as the mask only makes the white parts visible. We applied a VrayTriplanarTex map into the Radius slot and added a dirt texture in the texture slot to get a bit of distortion in the damage. Afterwards, we tweaked the Radius until we reached an effect that we liked. We then plugged another layer into our Bump, using a noisy grunge texture and masked it by using the VrayDirt that we just created to give the damage a little more of a random look.


Adjust the IOR In the previous step we added the

damaged edges to our Diffuse channel. After this, we wanted to make it appear as if there was bare metal under there, so we added a Composite map to the Fresnel IOR slot in our material, adding a VrayColor set to pure white. To adjust the IOR values using a VrayColor, put the IOR value you want to use into the RGB Multiplier of the VrayColor – in our case, we used 2.5 as a starting point for the base of our helmet. After the basic IOR value was set up we added a second layer and duplicated the VrayColor, but put a different value into the RGB Multiplier – in this project, we used 6 to get it to reflect more. As a mask we reused the VrayDirt map that we made in the previous step.
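The IOR values picked here map directly to normal-incidence reflectance (F0), which most physically based renderers derive with the same standard formula. A quick sketch to build intuition for why an IOR of 6 reflects noticeably more than 2.5 (this is general shading maths, not a V-Ray API call):

```python
def f0_from_ior(ior):
    """Normal-incidence reflectance: F0 = ((n - 1) / (n + 1))^2."""
    return ((ior - 1) / (ior + 1)) ** 2

for ior in (1.5, 2.5, 6.0):
    print(ior, round(f0_from_ior(ior), 3))
```

An IOR of 1.5 gives the familiar 4% dielectric reflectance, 2.5 roughly 18%, and 6 about 51% – which is why the masked damage areas read as bare metal.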


Add shaders for sand The last detail we added is a layer of sand on top of the whole helmet shader. We created a bunch of sand textures to drive the VrayDirt radius and the sand colour. The Radius and Diffuse are loaded in the Triplanar to avoid unwrapping. In the VrayDirt we change the Z Bias to -3, turn Ignore GI on and leave the rest off. The Radius is set to approximately 10cm, but you'll want to tweak this setting a bit. Now the sand stacks up like it fell from above. In case you want it the other way around, set the Z Bias to 3. What's more important is the Triplanar Radius Texture Scale. This controls the shape of the sand in the occluded areas. The amount of whites and blacks in relation to the sand grain scale is important in order to get the correct feel in terms of scaling and distribution of the sand on the geometry.


Do your post-production When the render is done, the fun begins. Importing the render into Photoshop as a 32-bit EXR is recommended. With Camera Raw, adjust the Contrast, Levels, Color Balance, Shadows and Highlights, Clarity and Vibrance of the image – these make a big difference. The whole image has a blue-ish tone except for the helmet, which increases the focus on that part of the image. We added fire embers behind the helmet to heighten the wartime feel. With Color Dodge and Burn, we painted volumetric beams in the water to put focus on the fish.

All tutorial files can be downloaded from:



Simulate a splashdown

This tutorial will simulate a ship breaching through a body of water and splashing down, similar to a scene you might have seen in Miss Peregrine's Home For Peculiar Children. It will run through a basic FLIP simulation, as well as secondary whitewater elements. We will run through a workflow of generating accurate collision geometry using VDB volumes, as well as optimising the simulation parameters. We will then cache the simulation to disk and use these cached volumes to simulate the secondary whitewater elements: foam and spray. We will look at a method for meshing the FLIP simulation so that it can be rendered, and finally generate Displacement maps to simulate an ocean surface.






Create FLIP Tank We will firstly use the FLIP Tank

shelf tool, located in the Particle Fluid shelf tab. Drop a FLIP Tank at the origin of the scene. We can hide the Fluid Interior node at the object level for the time being. We can see that our FLIP tank is too small. However, before you start resizing, jump into your Auto DOP network and change the particle separation (located in the FLIP Tank node) to 3. We can now adjust the size of the FLIP Tank with quicker viewport feedback – we'd recommend 250 x 250 x 250 – and increase the water level until you are happy with the depth of

the FLIP Tank. The large Y value of the tank is necessary to ensure no splash particles collide with the limits of the tank.
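The reason coarsening the particle separation speeds up feedback is simply particle count: for a box-shaped tank it scales with the filled volume divided by the separation cubed. A rough estimator (the water depth below is a hypothetical value for illustration):

```python
def flip_particle_estimate(size_x, size_z, water_depth, separation):
    """Approximate particle count for a box-shaped FLIP tank:
    filled volume / separation^3."""
    filled_volume = size_x * size_z * water_depth
    return int(filled_volume / separation ** 3)

# 250 x 250 footprint, hypothetical 60-unit water depth
print(flip_particle_estimate(250, 250, 60, 3))  # coarse separation for blocking
print(flip_particle_estimate(250, 250, 60, 1))  # fine separation: 27x more
```

Dropping the separation from 3 back towards 1 for the final simulation multiplies the count by 27, so keep the coarse value while you block out the tank size.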


Generate accurate collision geometry Firstly

we need to clean up the ship geometry using the Poly Cap node. In order to allow our ship to collide with and influence the particles of our FLIP simulation, we must make it a static object. Create a VDB from a Polygons node to resolve finer details on the animated ship – although it’s worth noting that while using a smaller Voxel size will resolve finer details on the geometry, the disk size of the resulting VDB can get extremely large extremely quickly. Finally, by creating a Trail SOP you can calculate point velocities on the ship geometry.
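The warning about disk size is easy to quantify: voxel count grows with the cube of resolution, so halving the voxel size roughly multiplies the storage cost of a volumetric region by eight. A sketch of that scaling, using a hypothetical bounding box (real VDBs are sparse and store far fewer voxels, but the cubic trend holds):

```python
def dense_voxel_count(bbox_size, voxel_size):
    """Upper-bound voxel count for a cubic bounding region.
    Sparse VDBs store far fewer voxels, but the cubic growth trend holds."""
    per_axis = int(bbox_size / voxel_size)
    return per_axis ** 3

# Hypothetical 50-unit ship hull bounding box
print(dense_voxel_count(50, 0.5))   # voxel size 0.5
print(dense_voxel_count(50, 0.25))  # halving voxel size: ~8x more voxels
```

This is why it pays to use the smallest voxel size that still resolves the hull detail you actually need, rather than the smallest value the machine will tolerate.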


Use VDBs for collisions Once we have the ship as a static object, we need to tell Houdini to use the animation on the ship. Ensure Use Deforming Geometry is enabled on the Static Object node. We can visualise the collision geometry generated automatically by turning on the Collision Guide check box and turning off the Display Geometry option. As you can see, finer detail areas are not correctly resolved. This is where we use the VDBs created in the previous step. Change the Mode dropdown box to Volume Sample, and reference your VDBs in the Proxy Volume



parameter. Increase the Uniform Divisions until you are happy with the resulting collision guide.


Refine our FLIP simulation The parameters on the FLIP solver are largely dependent on the look you’re trying to achieve with the simulation. Firstly, under the Particle Motion tab of the FLIP solver, change Collision Detection to Move Outside Collision – also, check Kill Unmoveable Particles. Secondly, under the Volume Motion>Collisions setting, enable Stick On Collision. This will give the appearance of the liquid sticking to the faces of the ship mesh. You can play around with the Normal Scale and Stick Scale parameters until you achieve the desired look. Decreasing the CFL condition under the Substeps tab can give a more accurate result, but may take longer to simulate.
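The CFL condition mentioned above ties the solver's maximum timestep to how many voxels the fastest particle may cross per substep: dt_max = CFL x dx / v_max. Halving the CFL number therefore roughly doubles the substep count, which is where the extra simulation time comes from. A sketch of the relationship, with hypothetical grid-spacing and splash-velocity values:

```python
import math

def substeps_per_frame(cfl, grid_spacing, max_velocity, frame_dt=1 / 24):
    """Estimate solver substeps needed so particles move at most
    `cfl` cells per substep: dt_max = cfl * dx / v_max."""
    dt_max = cfl * grid_spacing / max_velocity
    return max(1, math.ceil(frame_dt / dt_max))

# Hypothetical: 1.5-unit grid spacing, splash velocity of 200 units/s
print(substeps_per_frame(cfl=1.0, grid_spacing=1.5, max_velocity=200.0))
print(substeps_per_frame(cfl=0.5, grid_spacing=1.5, max_velocity=200.0))
```
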


Cache compressed FLIP simulation A relatively


new feature of Houdini gives us the ability to compress particle fluid data prior to the data being written to disk. This results in minimal loss of quality, and gives you the ability to cull particles a certain distance from the surface of the fluid. If you look in the Fluidtank Fluid node at the object level, you can see that a Fluid Compress node and a Compressed Cache node have automatically been created for you. Save the entire frame range of the simulation to a logical location in your project directory.

Enhance your FLIP workflow To find out more about enhancing FLIP fluid workflows watch a great video from the SideFX team at www.vimeo.com/145178660. The video goes into detail regarding the new fluid compression pipeline in Houdini, and ways to minimise disk usage while still maximising cached simulation quality.






Mesh the FLIP Simulation Now that we have a cached FLIP simulation, we can generate a mesh from the particle fluid. Generating a mesh allows us to render the fluid. Firstly, connect a Particle Fluid Surface node to the simulation cache. We can now use a similar workflow to how we created our collision geometry. Drop a VDB From Particle Fluid node, and then a Smooth VDB into our network. This generates a VDB from our particle simulation. The Smoothing node works by removing any small imperfections that are on the surface of the fluid. We can now connect this to a Convert VDB node, ensuring Convert To is set to Polygons. You can write this to disk using the same workflow as we did with writing out our base FLIP simulation.


Prepare the white water simulation Houdini's off-the-shelf tools are great building blocks to achieving a realistic simulation. Use the White Water shelf tool in the Particle Fluid shelf tab in order to create a white water simulation setup. Houdini knows to use the compressed fluid data stored in the cache. We can see this if we look at the Merge Fliptank node in our White Water Source. The compressed cache contains the volume data needed to compute the white water elements. The default parameter settings for the white water elements are satisfactory. However, one parameter you might want to change is the Constant Birth Rate in the White Water Emitter node. Be mindful that white water simulations can sometimes involve millions of particles, and small changes to this parameter can increase memory usage drastically.
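The warning about Constant Birth Rate is worth quantifying: particles are born every frame and live for several seconds, so the steady-state count is roughly birth rate multiplied by average lifetime in frames. A back-of-envelope sketch with hypothetical values:

```python
def steady_state_particles(birth_rate_per_frame, avg_life_seconds, fps=24):
    """Rough steady-state particle count once births balance deaths:
    rate per frame * average lifetime in frames."""
    return int(birth_rate_per_frame * avg_life_seconds * fps)

# Hypothetical: 50,000 births/frame, 4-second average whitewater lifespan
print(steady_state_particles(50_000, 4.0))
```

Doubling the birth rate doubles the resident particle count, which is why a seemingly small slider change can blow up memory usage.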



Cache the white water simulation Use a ROP Output driver to save the white water simulations to disk. We have split the white water simulation into 'bubbles and foam' and 'spray'. This is not absolutely necessary, but it can save you time further down the line as you would not need to completely resimulate if you decide to change a parameter in either of the two simulation categories. It also means that you can have more control over these different elements when it comes to shading and rendering. To cache the white water elements, connect two separate ROP Output drivers to the Attribute Wrangle node in the White Water import node at object level.



Generate Vector Displacement for fluid surface To replicate the waves and details of an ocean surface, we can use the Ocean Waves shelf tool. The Ocean Spectrum node uses Jerry Tessendorf's approach to simulating ocean waves. The resulting Displacement can then be exported to Vector Displacement maps. These maps are then used at the point of rendering in order to generate realistic surface detail.
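Tessendorf's approach builds the surface from a statistical wave spectrum; the classic Phillips spectrum from the same paper gives a feel for how wind speed shapes wave energy. This is a simplified single-wavevector evaluation for intuition, not Houdini's exact implementation:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phillips_spectrum(kx, ky, wind_speed, wind_dir=(1.0, 0.0), amplitude=1.0):
    """Simplified Phillips spectrum from Tessendorf's ocean paper:
    P(k) = A * exp(-1/(k*L)^2) / k^4 * |k_hat . w_hat|^2, with L = V^2/g."""
    k = math.hypot(kx, ky)
    if k < 1e-6:
        return 0.0
    L = wind_speed ** 2 / G  # largest wave arising from a continuous wind
    k_dot_w = (kx * wind_dir[0] + ky * wind_dir[1]) / k
    return amplitude * math.exp(-1.0 / (k * L) ** 2) / k ** 4 * k_dot_w ** 2

# Stronger wind puts far more energy into the same wavevector
print(phillips_spectrum(0.1, 0.0, wind_speed=5.0))
print(phillips_spectrum(0.1, 0.0, wind_speed=15.0))
```

Waves travelling perpendicular to the wind get zero energy from the directional term, which is why the rendered swell lines up with the wind direction.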

Simple Expression Often when objects are created in Houdini they are placed with the centre at the origin. You can place a shelf tool at the origin by Cmd/Ctrl+clicking the shelf icon. However, a lot of the time, it is useful to have the base of the object placed on the grid. Use the expression ch("sizey") * 0.5 in the Centre channel of an object to place it on the grid. Any scaling on the object will keep the object placed with its base on the grid.


All tutorial files can be downloaded from:

From the makers of

3D printing is our future. Whether it be a simple decoration or a fully working prosthetic, 3D Make & Print will take you through how this phenomenon is going to change everybody's lives and help with your own projects!

Also available…

A world of content at your fingertips Whether you love gaming, history, animals, photography, Photoshop, sci-fi or anything in between, every magazine and bookazine from Imagine Publishing is packed with expert advice and fascinating facts.


Print edition available at Digital edition available at



Master Motion Vectors

Over the next 12 steps we will be compositing with the newly introduced Smart Vector toolset found in NUKEX 10. At the heart of the process are two new nodes: SmartVector and VectorDistort. The SmartVector node is designed to analyse movement in a shot and then automatically generate Motion Vectors, which can be used to distort patches and textures on complex moving surfaces. The VectorDistort node reads the rendered Motion Vectors and then applies warping to the source imagery. For really heavy scene files you can output ST-Maps for previews, then hook up an ST-Map node to do the actual warp. The warping is reliant on both good vectors from the SmartVector node and a good reference frame in the VectorDistort node. SmartVector provides an ideal solution for both day-to-day production tasks, such as removing branding and logos, and dealing with complex shots like face replacements, removing the need for Trackers and Camera Projections. For fixing broken renders in CG it's priceless, as it saves huge render time in production or when revisiting older renders to add polish where the scene files are perhaps not available anymore. In fact, it's a useful alternative for anywhere that requires tracking. Since we have all done our fair share of logo replacements, we will instead use the Smart Vector toolset in this tutorial to add a couple of computer ports to an actor's

head. In the past you would likely use tracking and manual warping with Grid and Spline warps to achieve this, but with a higher cost in time. Instead we will let NUKE do the donkey work for us, so our only concerns are where to best place the assets for the desired look. Changing placement is as simple as using a Transform node to move the asset around, with NUKE taking care of the rest. As a general note, we will introduce nodes throughout the tutorial by placing the mouse cursor in the Node Graph pane, hitting the Tab key, typing the node's name in the pop-up window, hitting Enter and then adjusting the parameters in the Properties pane. Note that by default the SmartVector node doesn't currently output motion, forward and backward channels. If you need them for fast-moving shots, you can add a VectorToMotion node after the SmartVector. This converts the vectors so that a VectorBlur can use them to create motion blur, without using a VectorGenerator.


Import and prepare the footage Select Edit>Project Settings and in the Properties set the Frame Range to 50 and 125 and change Full Size Format to HD_720 1280x720. You can also set the project directory to the NUKE tutorial folder. To load the footage, create a Read node. Ensure the Sequences checkbox is enabled then select

SmartVector provides an ideal solution for both production tasks and dealing with complex shots


the vid.####.TIF 50-125 footage and hit Open. Next, add a Sharpen Node to reduce softness in the footage. The Sharpen settings can be left at the default or adjusted as desired.


Generate Motion Vectors Add a SmartVector node and connect its Src input to the Sharpen node. In Properties click the folder icon alongside the File Parameter to choose a location to write the Motion Vectors. You’ll need to output using the .EXR format, so append the end of the file path with vect.###.EXR, for example. Set Range to Input so that From is 50 and To is 125. Next, we will set the vector detail.
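NUKE expands the run of # characters in the file path per frame, so vect.###.EXR becomes vect.050.exr on frame 50, and so on. A small sketch of that naming convention – this mimics the padding rule rather than calling NUKE's API, and is handy when checking a rendered vector sequence on disk:

```python
import re

def expand_padding(path, frame):
    """Replace a run of '#' characters with the zero-padded frame number."""
    return re.sub(r"#+", lambda m: str(frame).zfill(len(m.group(0))), path)

print(expand_padding("vect.###.exr", 50))
print(expand_padding("vect.####.exr", 125))
```
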


Vector Detail parameter The Smart Vector node’s Vector Detail parameter has a default value of 0.3. This parameter defines the amount of detail you choose to generate for the Motion Vectors. Increasing it to the maximum of 1 will allow you to capture more detail, but also takes longer to produce. For footage with low detail and little movement a lower value works well. For this tutorial we will set the value to 1, but you can experiment with different values here to evaluate the results. Hit Render in the SmartVector properties. To view the Motion Vectors, be sure to switch the Viewer’s channels to SmartVector.



Set the reference frame Revert Channels back to RGBA. Disconnect the Smart Vector node from the Sharpen, then add a VectorDistort node and connect via the SmartVector input. Alternatively you could delete the SmartVector node, and use a Read node to pipe in the .EXR sequence. At frame 125, click Set To Current Frame in the VectorDistort Properties or manually enter 125 in the Reference Frame parameter. The VectorDistort node takes imagery from the Src input at the reference frame and uses the Motion Vectors to position it through the sequence.






Set up the USB asset Create a Read node that is

separate from both networks you have created so far and browse to USB.JPG and then hit the Open button. As this is a still image, you don’t have to enable the Sequences checkbox. To help give the appearance that this is a 3D image, connect an Emboss node. Change the Angle to 111 and the Width to -10. The slider doesn’t let you go below 1 for Width, so enter that value manually.


Add contrast and clean up To increase the contrast, connect a Grade node to the Emboss and set the Black point to 0.4 and White point to 0.85. Next, add a Merge node, set its Operation to Multiply and connect the A input to the Grade and the B input to the Read for USB.JPG. We now have an image that looks a lot less flat. You could further clean up any problem areas at this stage using a Rotopaint node’s Painting toolset.
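The Blackpoint/Whitepoint controls on a Grade node remap the input range so the chosen black point maps to 0 and the white point to 1, steepening contrast. A simplified version of that remap (NUKE's full Grade formula also includes lift, gain, multiply and offset terms, so treat this as intuition rather than the node's exact maths):

```python
def grade_bw(value, blackpoint, whitepoint):
    """Simplified black/white point remap: blackpoint -> 0, whitepoint -> 1.
    Values outside the range over/undershoot, as they do in NUKE."""
    return (value - blackpoint) / (whitepoint - blackpoint)

# With the tutorial's settings (black 0.4, white 0.85):
for v in (0.4, 0.6, 0.85):
    print(round(grade_bw(v, 0.4, 0.85), 3))
```

Anything at or below 0.4 in the embossed image is crushed to black, which is what makes the flat plate read as a contrasty 3D surface.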


Rotoscoping the asset Add a Roto node and connect its BG input to the output of the Merge node. Use the Roto's Bezier tool to draw around the USB port image. Marquee-select all the points once you've closed the rotoshape and press Z to smooth, then tweak the handles as desired. Set Premultiply to RGBA so we only keep what's inside of the rotoshape. In the Lifetime tab, ensure Lifetime Type is set to Frame Range from 50 to 125.



Composite onto the footage For the VectorDistort to work, you need to match the input formats between the footage and any added elements. Do this by adding a Reformat node to the Roto and set the Output Format to HD_720 1280x720. Next, connect the VectorDistort Src input to the Reformat node. Create a Merge node. Connect its A input to the VectorDistort output and the B input to the Sharpen output to composite the image onto the footage. If you scrub the timeline while viewing the Merge you’ll see the image is now being warped by the Motion Vectors.


Understanding the frame distance parameter For this tutorial we left the VectorDistort node Frame Distance parameter at the default setting of 0. This setting informs NUKE to calculate warping based on a range of frames near to the reference frame. If needed, you can increase this parameter to extend the range with the ultimate goal of helping to resolve warping issues. However, it’s not possible to animate this parameter, so you may find an alternative workflow in splitting the sequence into manageable clips instead, then rendering .EXRs with different values and switching to whichever looks best at each point of movement. For best results overall, avoid warping over 200 frames per SmartVector.





Place the USB port Add a Transform node to the

network between the Reformat and VectorDistort. Check you are on frame 125 and then in its Properties set Scale to 0.1. Grab the Transform handle from the middle and move the image to the back of the actor’s head or set Translate X to -145.5 and Y to 135. You’ll want to rotate to match the actor’s head tilt – do this by hovering the cursor in the upper-right quadrant of the handle until two curved arrows appear, or type 5.63 in the Rotate parameter.


Add another asset If you want to add additional

elements into the composite you can reuse the Motion Vectors. For example, to quickly add a second USB port, duplicate the Transform and VectorDistort nodes. As before, pipe the Transform from the Reformat. Connect the VectorDistort to the SmartVector and composite with a Merge (over). Use the Transform node to offset the image and voila!


Clean up inconsistencies You may find that when the

actor’s head is fully turned the USB port becomes warped on the green background. To fix this we will use a Roto node. Draw and animate a rotoshape on the frames where the problem occurs and then connect the Roto to a Merge node B input with its operation set to out. Connect the A input to the corresponding VectorDistort and pipe the Merge (out) to the Merge (over) A input.



Render the compositing With the compositing complete, hit Play to see the results. To render the footage add a Write node to the Merge (over) at the end of the network. Click the folder alongside the File Parameter and choose where to output. Append the path with (filename).###. (format), for example render.###.TIF and hit Render. In the pop-up box, check you’re happy with the settings and hit OK.

Using a mask with Smart Vectors When you render from the SmartVector node, by default the entirety of each frame is calculated. This isn’t very efficient when you know, for example, that you only intend to work on a specific area. The solution is to use NUKE’s Keyers or Roto tools to define the area you need. This will speed up rendering and reduce file sizes. The easiest approach to isolate the working area is to add a Roto or Keyer and then place it between the footage and SmartVector, draw the rotoshape or key the footage then Premultiply and hit Render.

All tutorial files can be downloaded from:


What makes a great portfolio? Let Fabrik do the heavy lifting for your new site

Gallery layouts feature a large primary image with thumbs underneath – perfect if your project media is all the same aspect ratio and covers varied shots or subjects

Lightbox layouts let you browse projects as a slideshow, catering for different shapes and image aspect ratios. Start and stop at any point from a grid of your thumbs


The value of having a stunning portfolio site that shows off your best work and, crucially, you as an individual, is often overlooked. Fabrik is a platform aimed at creatives looking for an intuitive and powerful solution that lets your work do the talking. Already a trusted companion for artists, designers, directors and more, Fabrik strips away the nonsense to help you create a portfolio site that works for you. Maxx Burman is a veteran in the VFX scene, having worked with some of the most innovative studios around the world including Blur Studio, The Mill and Elastic. As a freelance VFX supervisor and artist, he relies on Fabrik to provide a platform for him to showcase his astonishing creations. “The first thing I noticed was how beautiful the websites are,” he says. “It makes sense – it’s a website builder created by designers, for designers. It’s built around how to best present great work.” Accessibility is at the heart of Fabrik. Elegant themes allow you to highlight important areas of your portfolio and they’re fully responsive, meaning that they’ll work the way you want them to on any device. You can also say goodbye to cropping images, as Fabrik optimises them perfectly for you. If you want your showreel to sit front and centre, Fabrik gives you the power to embed video content with ease. “I’m always looking for ways to be as creative as possible with few technical hiccups in the way,” continues Burman. “My favourite part about Fabrik is how intuitive and effortless it was for me to create a great-looking portfolio with all of the features I envisioned.” Burman’s personal site is a shining example of the beautiful portfolios that can be built with Fabrik. Check out his four key tips for building a great portfolio site on the next page and explore Fabrik for yourself at


Spotlight layouts allow you to tell a story with your project media – helpful for explaining linear stories through screenshots, or showing your process as a case study

Hook “What’s the very first thing a viewer will see when they go to your portfolio site? Does this have enough impact to make them want to keep looking?”

Flow “Is it immediately apparent how to navigate through the site easily? Is it simple to cycle through different projects quickly and access what you need?”

Utility “Is your portfolio website viewable and easy to use from any device you choose and with any software or browser you prefer, anywhere in the world?”

Display “Is the work being showcased in the best way? Does the portfolio page enhance or distract from the work that you’re trying to show to your viewers?”



Ultra-quick portfolio website builder

Customise fonts, colour and styles in one click

Import content from other creative services

Fully responsive for different screen sizes

Intelligent themes adapt to project media

No need to crop and resize imagery

No coding or technical knowledge required

One-click layouts instantly individualise your site













Harness the power of Maya’s robust fluid toolset and achieve unbelievable results








Utilise ZBrush and 3ds Max to generate triple-A standard tileables


*US Subscribers save up to 40% off the single issue price.

See more at:


Every issue packed with…
Exclusively commissioned art
Behind-the-scenes guides to images and fantastic artwork
Interviews with inspirational artists
Tips for studying 3D and getting work in the industry

Why you should subscribe…
Save up to 37% off the single issue price
Immediate delivery to your device
Never miss an issue
Available across a wide range of digital devices

Subscribe today and take advantage of this great offer!

Download to your device now


RenderMan 21 How does the latest free version of Pixar’s pro renderer hold up?


The new non-commercial RenderMan 21 brings many new and exciting features to artists, as well as continuing to improve upon some of its older features. It is clear that Pixar is looking towards the future with its latest release of RenderMan. It has now officially dropped the REYES rendering engine and is fully adopting the RIS path-tracing renderer. This change in direction means more focus will be directed towards developing a physically accurate ray-tracing engine. One of the biggest changes in RenderMan 21 is the introduction of the new PxrSurface material. This is where the bulk of the shading is done. PxrSurface is a production-ready übershader that provides artists with all the settings required to create a vast array of materials, from simple plastic and metal materials to more complex glass and skin shaders. The shader can be both artist-friendly and approachable, but can also be more technical for those that require it. This is evident in the controls for its specular parameters, for example. Artists can choose one of two modes: Artistic or Physical. Artistic specular controls give users a more intuitive way to change the look of the shader, while Physical controls let artists input real-world values for IOR and Extinction Coefficient parameters in order to re-create physically accurate materials. Controls for Subsurface have also changed, allowing you to pick between four different SSS algorithms in order to have full control of the shader. The introduction of the PxrSurface shader means that the older LM shaders are now considered ‘legacy materials’. Along with the PxrSurface shader comes a new layering workflow. RenderMan 21 lets you layer together multiple PxrSurface shaders with intuitive controls in order to re-create virtually any material imaginable. This, combined with new Pixar Utility and Pattern nodes, makes shading a rewarding and enjoyable experience. RenderMan 21 also introduces the new Preset Browser.
It is essentially a library where you have access to pre-made materials and lighting rigs that you can quickly view and assign to different geometry in your scene. The browser also lets users save custom-made shaders or perhaps even import shaders acquired from other artists. This new feature can help artists adopt the ‘create once, use many’ ideology, and can drastically improve the shading workflow. Unfortunately, the content within the Preset Browser resides locally on your computer. It would have been nice if it could access an online community where shaders and light rigs are shared.


Other new features include new integrators for rendering out occlusion passes, as well as a new texture baker that allows you to bake rendered results as texture maps. This is useful for when you have complex node networks that comprise a shader and you want to compress it all into a texture in order for RenderMan to process it faster. If all those new features weren’t enough, Pixar has also improved upon some of the features introduced with the previous versions of RenderMan. One of these features, the Denoiser, has been updated to support GPU acceleration for CUDA-capable video cards. This feature allows artists to achieve production-quality renders with fewer samples, resulting in faster render times. With the full switch to RIS, and with the introduction of a variety of new features such as the PxrSurface shader, RenderMan is continuing to become an extremely good ray-tracing renderer, and considering that the non-commercial version is free to use, RenderMan is software that every artist needs to experience. Jude Leong

One of the biggest changes with RenderMan 21 is the introduction of the new PxrSurface material

Essential info
Price: Free
Website:
OS: Windows 7 or newer
RAM: 32GB recommended
Plugins: Maya, KATANA, Houdini, Blender

Summary Features Performance Design Value for money

MAIN The new non-commercial RenderMan 21 brings many new and exciting features to artists and continues to improve upon some of its older features. ABOVE Glass layered with dust and scratches TOP RIGHT Node Network for a Glass shader with separate layers for dust and scratches to control the look of the material


LEFT One of the new features is the Preset Browser, which is stored locally on your computer

This iteration is an extremely capable renderer that is both artist-friendly and technically complex



Marmoset Toolbag 3 Marmoset moves into new territory with animations, global illumination and baking in its latest real-time renderer


We’ve been waiting for the third instalment in Marmoset’s Toolbag series for three years now, and we’re glad to say that it was well worth the wait. Toolbag 3 delivers highly requested features, such as animation support and global illumination, but also pleasant surprises like a baker and exporters linked to ArtStation, Unity and Unreal Engine. It’s clear from the start that Marmoset uses its predecessor as a foundation and mainly builds new features upon it. The renderer itself is more or less the same, and while it’s still great, the visual impact can be disappointing when importing your first models. For us, it wasn’t until we imported a cellar project by Niclas Nettelbladt that we really got to see how global illumination changes real-time rendering. The new bouncing lights greatly enhance scenes like Niclas’, where occluded areas and vibrant lights and colours dominate the environment. As a side effect, we’re also provided with crisp specular reflections and some spectacular ambient occlusion. Global illumination isn’t, however, the renderer’s most prominent selling point. Lack of animation support has been holding Marmoset back for a long time and was the major downside when comparing Toolbag to rendering in game engines. With Toolbag 3 that’s about to change, since animators get the ability to import their animations, tweak them and keyframe new ones with ease. It’s even possible to animate light properties, cameras and post-process settings. Turntables have also become part of the animation system, rather than just being a standalone feature. One of the most surprising features is Marmoset’s baker, and while it seems like every program gets a baker nowadays, Toolbag 3’s is already among the best. We get several unique features here, but the ability to paint the height of your cage and the skewing of Normals in real time are some of the most revolutionary. Baking times take no more than a few seconds, even at 4K resolution for complex models.
You’re also able to generate bake groups instead of exploding your mesh into separate pieces. It’s definitely something to consider incorporating into your workflow, especially since your exported high- and low-poly meshes are updated automatically.

Other interesting additions are the new exporters for ArtStation, Unreal Engine and Unity. While these were never especially critical implementations, they are all major improvements over the previously non-existent ability to export meshes. The game engine exporters are especially helpful, considering the pain it would take to set up new materials for complex scenes.

While all of the features mentioned are great components of Toolbag 3, Marmoset has also implemented countless small options within the slick redesigned UI. The interface is still simple and easy to navigate, but it’s also a lot more fleshed out, which is welcome since Toolbag 2 could feel a bit limited in its options. To name a few minor additions, you can now export videos directly, import and export materials and presets, clear unused materials, and apply shadow catchers. There are also updates to the materials themselves, with improvements to subsurface rendering standing out the most.

All in all, Marmoset Toolbag 3 is both a great improvement and a competitive renderer in its own right. It’s easy to recommend to any creator of real-time rendered graphics, but also to those looking for a fast and intuitive alternative to offline renderers. At its core it’s similar to what we’re used to, but several new features – animation especially – bring something fresh to the table. All of this, in combination with the interactive Marmoset Viewer, turns Toolbag 3 into something truly extraordinary. Joel Zakrisson

MAIN Toolbag 3 is fully compatible with Toolbag 2 and we’ve tested a variety of projects, such as Fishing Trip BOTTOM LEFT We didn’t expect a baker, but we got it and it works fantastically well. You can even tweak cage offset and skewing in real-time BOTTOM MIDDLE Global illumination is Toolbag 3’s strongest visual upgrade. The bouncing light is especially useful in small scenes like Workbench by our talented friend Niclas Nettelbladt BELOW The Animation feature adds a new dimension of possibilities for artists. Projects get life and energy like never before

The interface is still simple and easy to navigate, but it’s also a lot more fleshed out

Essential info
Price £140 / $189
Website
OS Windows 7 (64-bit), Mac (pending)
Graphics Direct3D 11 GPU (GeForce 470, Radeon HD 5800, Intel Iris)
DirectX Version 11
RAM 8 GB
Storage 535 MB available space

Summary Features Performance Design Value for money

Verdict Toolbag 3 is a fresh but familiar iteration of the realtime rendering tool, with several exciting features that make it a worthy upgrade



Chillblast Fusion Pascal P5000 How does this mammoth of a professional 3D editing workstation hold up under testing?


The Chillblast Fusion Pascal comes in a great-looking case with a ten-core CPU, a PCI-Express SSD and one of NVIDIA’s latest Pascal graphics cards, capable of demanding 3D design tasks. It might be a hefty package, but the Chillblast goes some way to show that big is still beautiful, offering powerful performance with plenty of room for expansion if its offering isn’t quite enough.

From the courier struggling to carry the box to our door, to the photographer heaving it into place, to the reviewer almost breaking his foot, this is a heavy beast of a PC, with an incredibly solid build and a large case that offers massive amounts of internal airflow. It’s gorgeous, too. The Phanteks Enthoo Evolv ATX tower that Chillblast has chosen is one of the most attractive PC cases on the market. Tinted glass panels adorn each side of the tower, providing a view into its techy innards without any of the garish lighting you might get with a gaming rig, save for some subtle blue LEDs on the memory modules. It shows off the fantastic attention to detail that Chillblast has applied to the internal build, with not a cable out of place in the main section of the case, while behind the motherboard the bulk of the power and data cables are neatly tied away with Phanteks-branded cable ties. It looks absolutely professional, as should be expected when spending such a sum on a pre-built workstation.

The specification is high-end, but Chillblast has sensibly avoided including anything in this build that would drive the cost to ridiculous levels. It has a single desktop CPU, an Intel Core i7-6950X from the Broadwell-E generation, with ten cores and 20 threads running at 3GHz, or 3.5GHz in Turbo mode – enough to crunch CPU-bound rendering tasks fantastically well.
The review sample sent to us by Chillblast does not have an overclocked processor – a puzzling choice, given that the Corsair Hydro H100i V2 liquid cooler fitted to it offers plenty of thermal headroom and enough cooling capacity for at least some additional speed. When we asked why, Chillblast responded that unlike gamers, designers and professional customers prize reliability over raw performance, and this configuration is more typical of the sort people ask for. In any case, a free speed bump to 4GHz is available if you ask the team when ordering.

There’s a 512GB Samsung SM961 PCI-Express NVMe SSD, the professional variant of the firm’s lightning-quick 960 Pro drive, slotted into the lower M.2 slot on the Asus X99-E WS/USB 3.1 motherboard. The drive managed an absolutely ludicrous 3,480MB/sec burst read and 1,686MB/sec write speed in our tests – well above the capability of a standard 2.5-inch SATA SSD. 64GB of DDR4 memory is included as four 16GB modules, leaving four more slots empty for expansion up to 128GB. A 4TB Seagate hybrid hard disk is also mounted inside the case, again with room for more if you so choose.

We’ve also been given a taste of NVIDIA’s Pascal-based workstation GPUs here in the form of the Quadro P5000, whose full 16GB of GDDR5 memory doubles the capacity of the GeForce GTX 1080 while matching its specification in other areas. We’ll be examining the P5000 in more detail soon. The P5000 shows its strength in benchmark results, with scores in SPECviewperf and Cinebench that even exceed those from systems with NVIDIA’s M6000 card installed.

However, the stock-clock CPU performance was a touch disappointing. The Chillblast Fusion Pascal chewed through CPU-bound rendering tasks, boosted by its high core count, but was unable to match the results of overclocked workstations with the same chip, with (slightly) lower overall results. That’s the only fly in the ointment here, though. In all other respects, from the choice of storage, case and cooler to the processor and up-to-date NVIDIA graphics card, the Chillblast Fusion Pascal represents the workstation configuration of choice for high-end rendering.
With Chillblast’s excellent build quality and five-year warranty, too, it’s another trump card in the deck of a company that is fast becoming one of the UK’s premier independent workstation vendors. Orestis Bastounis

MAIN The modern glass look on the case is very attractive, but in use we found it easily attracts fingerprints. A microfibre cloth and cleaning fluid is recommended BOTTOM LEFT The Corsair RM850i power supply and its modular cabling are neatly hidden in a compartment inside the case, for that extra-clean look BOTTOM RIGHT Plenty of cores running at a respectable clock speed, with a powerful GPU, makes this a dream configuration for 3D workflows BELOW There’s a lot of room for internal expansion, whether a second graphics card, more memory or storage

Tinted glass panels adorn each side of the tower, providing a view into its techy innards, but without any garish lighting

Essential info
Price £5,299 inc VAT / approx $6,448 US
Website
CPU Intel Core i7-6950X processor
RAM 64GB DDR4 memory
GPU NVIDIA 16GB Quadro P5000
SSD 512GB Samsung SM961
HDD 4TB

Summary Features Performance Design Value for money

Verdict A superbly built PC lavished with high-end hardware that pushes polygons at speeds that justify its pricing



Patrick A Razo (Nino) Incredible 3D artists take us

behind their artwork

DETAILING CONCEPTS The LB-378 motorcycle was an asset used in the short film Lostboy, directed by Ash Thorp and Anthony Scott Burns. The concept created by Ash did not contain a ton of detail but had a balance of shapes and form. I took some liberties with the silhouette to create the bike as you see it.  As a bike rider myself, I’m interested in these machines.

86 Nino is a designer based in the LA area, working in concept art, costume design, fashion, illustration and graphic design Software MODO, KeyShot, Photoshop

LB-378, LOSTBOY, 2015/6


Special offer for readers in North America

6 issues FREE

resource downloads every issue

When you subscribe

Practical inspiration for 3D enthusiasts and professionals


Online at *Terms and conditions This is a US subscription offer. You will actually be charged £80 sterling for an annual subscription. This is equivalent to $105 at the time of writing, exchange rate may vary. 6 free issues refers to the USA newsstand price of $14.99 for 13 issues being $194.87, compared with $105 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 30 April 2017.

Quote USA2 for this exclusive offer!

The inside guide to industry news, VFX studios, expert opinions and the 3D community

090 Community News

Humster 3D car challenge winners The render contest revs up with this year’s incredible winners!

092 Industry News

MARI 3.2 out now Plus, Unity reveals a new numbering system for its engine

We needed a rig that provided animators with flexible controls over single wing feathers, without overwhelming them with too many options

094 Project Focus

The Famous Grouse Flaunt Productions takes 3D Artist behind the scenes of its iconic ad

096 Social

Hudson Martin, Flaunt Productions

Readers’ Gallery 94

To advertise in The Hub please contact Simon Hall on 01202 586415 or

The latest images created by the community



In first place was Nööburgring by Piotr Tatar

Humster 3D Car Render winners revealed The competition’s three victors of the month-long contest explain the driving force behind their awe-inspiring renders


This year’s Humster 3D Car Render Challenge resulted in yet another crop of incredible vehicle designs. Though stricter instructions were in place (works were only accepted if they were in a 3D environment), there were still 156 entries in the contest. Twelve jury members were given the task of choosing three winners: first place went to Piotr Tatar for Nööburgring, second place to Lukasz Hoffmann for Inside The Inventor’s Barn and third place to Aldison Ymeraj for Toyota GT86 – Formula Drift.

First place winner Piotr Tatar had always dreamed of driving a supercar on the most dangerous racetrack in the world – the Nürburgring in Germany. Realising that this would take a lot of money (and courage), he decided to re-create the race in 3D instead. “So the initial idea was to re-create this as a short animation and show my friends on social media that I am actually there,” begins Tatar. “I wanted to finish it for around the challenge deadline, but I had to focus more on the final entry to finish it on time. The name ‘Nööburgring’ is a simple wordplay, because I imagine my first ride as a guy with more equipment than skills – a typical noob!”

Tatar’s driver was fully rigged for the animatable scene. In terms of his working process, Tatar explains: “I knew that the driver wouldn’t make any heavy moves, just simple body shaking and rotation of a steering wheel. So I used the native 3ds Max biped and a simple Skin modifier to rig it. Both cars were animated using Craft Animation Tools – they have a nice free plugin to simulate vehicles. In this case, a simple spline as a route does the job. I also prepared animation for some additional small parts such as cables, the sport seat, belts and so on. Those elements behave quite well with just an animated Noise modifier.”

The entry awarded second place, Inside The Inventor’s Barn, was inspired by the work of Alejandro Burdisio. “In my interpretation I tried to create an image which tells a story placed in the near future or an alternate world with developed technology,” begins the work’s creator, Lukasz Hoffmann. “Placing the scene inside the barn with old tools and choosing the Wartburg

Aldison Ymeraj, Third place

Details like the cameras, the burnout smoke and the scratches around the body I think contributed to make this Toyota GT86 look so realistic


In third place Toyota GT86 – Formula Drift by Aldison Ymeraj

Sport as my main subject was supposed to enhance the retro-futuristic feel.” Hoffmann reveals that the lighting setup was challenging, with a very fine balance between making the scene too dark or too light. “I always like to think about my light setup as a photograph,” he continues. “Photos, [just like a] 3D render, can be realistic, but it does not necessarily mean it is pleasing to the eye. Both in photography and 3D art, sometimes it’s necessary to use additional light sources. Aside from two main light sources I added three area lights to produce a rim light and to separate the main object from the background. [I also included] two area lights for reflections on the car body, and one very subtle light to enhance the foreground of the scene.” Like overall victor Tatar, third-place winner Aldison Ymeraj is also a fan of race cars and made his own interpretation of a Toyota GT86. He researched Formula Drift cars, SpeedHunter modifications and Tokyo Drift movie cars heavily to prepare for his project. Ymeraj explains how he made his Formula Drift modified car so realistic: “After I finished the modification I added many stickers on the body to make it look like a race car. Also, details like the cameras, the burnout smoke and the scratches around the body I think contributed to make this Toyota GT86 look so realistic. “I like the modified cars to keep most of their original body shape. This is the reason why this Toyota looks like an existing car but in a unique way. Also, the more you model something the better it becomes; I made the body kit on my Toyota three times!” Congrats to all the winners!

Get in touch…

In second place was Inside The Inventor’s Barn Lukasz Hoffmann

@3DArtist 91


Unity 5.6 will be last in cycle New versions in 2017 will receive a new naming system

Other enhancements in MARI 3.2 include a maximised, widened space between the node graph’s pipes to enable easy selection

MARI 3.2 out now Node graph more efficient plus OpenEXR 2.2 support The latest release of MARI is now available. Version 3.2 has an enhanced Node graph, improved Bake Point performance, connected mesh selection and reduced file sizes for OpenEXR. Users can now create and edit channels and layers using a visual Node graph that enables you to select and connect nodes directly. MARI’s Node graph has a left-to-right layout option called the Show Port List, intuitive navigation, configurable

Session Scripts supports requested 3.1 features The Session Scripts feature in MARI 3.2 lets users save and load the node graph and graph layers along with a greater number of MARI project features. The only limitations, according to a forum post by QA engineer John Crowe, are locators and the Weta addition of shelves in the image manager.

shortcuts, zoom-dependent level of detail and other welcome enhancements. MARI users also benefit from better performance, as Bake Points free up space on your GPU with new non-destructive nodes. A Bake Point converts all upstream nodes, including procedurals, to a texture. If this texture becomes out of date as you work, you’ll be alerted and invited to update it. With the new Smart Type mode for Smart Selection – a connected mesh selection – independent islands of polygons within a single mesh can be easily selected by extending any face selection to all connected faces. OpenEXR 2.2 is now supported, including DWA lossy compression, giving smaller UDIM texture map files without noticeable degradation in quality. Uniform scaling is now supported by MARI’s locators, and the visibility of objects, patches and faces can all be inverted. For more information, head to

Unity has revealed that the Unity 5 series will come to an end in 2017. From March 2017, customers with a Unity 5 licence will stop receiving updates – unless 5.6 releases after March, in which case that release will still be included in the cycle’s updates. New versions will then follow the 2017.x numbering system. Having made version 5.5 available in November 2016, Unity also released the beta of version 5.6 of its engine. The Unity 5.6 beta features an improved editor with a new progressive lightmapper, better graphics performance, a new video player with 4K playback and support for Google Daydream, Google Cardboard, Facebook Gameroom, Vulkan and Compute on Metal. Overall graphics performance has been improved in the 5.6 beta, including the Particle System and GPU Instancing. Built by the Khronos Group, Vulkan is a graphics and compute API providing cross-platform access to GPUs on desktop and mobile. It takes advantage of multiple CPU cores to run multiple threads in parallel.

Some of the current features in Unity 5.5 include the experimental Look Dev

HAVE YOU HEARD? Marvel’s EVP Victoria Alonso will receive the Visual Effects Society’s Visionary Award in February 2017 92

V-Ray 3.5 for 3ds Max revealed New features include a lighting method for quicker scene rendering with multiple lights

Chaos Group has revealed V-Ray 3.5 for 3ds Max’s latest features, with the production renderer now usable in interactive production rendering mode. Other features include resumable rendering, which enables V-Ray to continue from a previous render state even when interrupted – useful for backups or for completing fast preview renders. Adaptive Lights is a new method that accelerates the rendering of scenes with many light sources. The AlSurface material is now available, and NVIDIA’s Material Definition Language is also supported. There is now live VR rendering with V-Ray GPU for HTC Vive and Oculus Rift. The default bit depth for individual render elements can be set, too – for example, setting Diffuse to 16-bit and ZDepth to 32-bit.

The interactive production rendering mode joins a host of new rendering options in V-Ray 3.5

Mocha Pro 5.2

3ds Max world-first announced

Cross-platform licences supported in new release

Plugin creates a Photoshop adjustment layer matching 3ds Max gamma correction

Planar tracking and VFX tool Mocha Pro has released version 5.2. Floating licences are now available, with a render licence also supported for rendering projects on a network render farm or a separate workstation without using a full licence. GPU tracking has been enhanced with faster motion tracking that takes advantage of supported video cards. OpenCL GPU tracking is faster, too, with support for both 16- and 32-bit source images. On top of this, a brand-new Apply Tracking Feature After Effects plugin applies generated tracking data to other layers via the plugin interface, forgoing the need for any clipboard exporting.

Software shorts Elementacular 1.5 Alexandra Institute’s cloud rendering and sculpting plugin now supports renderers Arnold and RenderMan through OpenVDB. Cloud shapes can now be designed in Maya and viewed directly in the viewport interactively. Version 1.5 of the volumetric cloud simulator will support Maya 2014 to 2017 ext 2.

The new version of PSD-Manager allows for correctly reproduced beauty passes with a linear workflow in 8- or 16-bit PSDs, thanks to a new adjustment layer that can match gamma corrections from 3ds Max. “[PSD-Manager] is designed so users usually just need a few clicks, but they still have the flexibility if they need it. That is why users love it – it saves so much time and provides them with ease of mind for their creative decisions,” says creator Daniel Schmidt. PSD-Manager enables easy and flexible render element output for 3ds Max with hundreds of built-in presets. 3D Artist readers can get 15 per cent off until 21 February with the code PSD43DARTIST.

… that can write image data on the fly as it is created

Bringing you the lowdown on product updates and launches Substance Designer 5.6 New filters have been added to version 5.6 of the Material Author tool. The filters, which include Normal map, Ambient Occlusion and Snow cover, are dedicated to the blending and weathering of scanned materials. The filters come with their source SBS files and can be modified. For more information, visit

KeyShot 6.3 The final version in the KeyShot 6 cycle from Luxion is a small release. It includes support for Alias 2017, Maya 2017, Inventor 2017, Siemens NX 11, Solid Edge ST9 and Solidworks 2017. New import libraries are also in 6.3, with KeyShot Rendering Network updates and under-the-hood preparations for KeyShot 7.

DID YOU KNOW? Thanks to continual support from sponsors, Blender 2.8 will soon be assigned ten full-time developers 93


The Famous Grouse
Website
Location UK
Project The Famous Grouse
Project description Flaunt Productions worked with the Leith Agency on two adverts for The Famous Grouse campaign: ‘Perfectly Balanced’ and ‘Smooth’
Studio Flaunt Productions
Company bio Based in the UK with a new studio in London, Flaunt creates content for film, TV, games and commercials. Flaunt is part of the Axis Group and recent projects include Mattel’s ‘Monster High’, design, art direction and look development for the Amazon Studios’ Lost In Oz, design and animation for Dixi and television commercials for Goodgame’s Goodgame Empire
Contributors
• Andrew Pearce, executive producer
• Jon Neill, head of lighting
• Hudson Martin, head of FX


Flaunt Productions talks complex feather systems for the blended scotch advertising campaign



Appearing in dozens of commercials and campaigns, the eponymous mascot of The Famous Grouse adverts, Gilbert, has typically appeared as an animated creature against a white, clinical blank background. All this has changed, however, with Flaunt Productions’ newest campaign. Working with the Leith Agency, Flaunt has created two adverts led by director Ben Craig and head of lighting Jon Neill that place Gilbert in his Highlands habitat. The two ads, ‘Perfectly Balanced’ and ‘Smooth’, embody Flaunt’s advertising portfolio in just 20 seconds each with their spectacular vistas, animation work and humour. You get a real sense of the charm thanks to the twinkle in Gilbert’s eye just before he winks at the camera.

To re-create the environment that The Famous Grouse would be calling his home, Neill explains that Flaunt employed VR and drones to shoot panoramic reference textures for the dramatic hilly backgrounds in Glencoe. These images were then stitched together to create a full 360-degree environment. “We then imported this into the Oculus VR system, which gave the client an accurate image of what material we had. They were able to input how they wanted to enhance this image through traditional matte painting. Once we had created these matte paintings we took them into NUKE and projected them onto simple geometry so we got a strong feeling of parallax in the mountains.”

When it came to the CG grouse itself, Hudson Martin, head of FX, explains how the team used a bespoke feather system with real textures to drive procedural hair and shaders. “From the start, we knew this shouldn’t be a 100 per cent photorealistic bird. Rather, a very expressive one that would be realistic enough to blend with the environment, but still able to move in the distinctive way that the brand fans love.”
The animation work didn’t stop there, though: a complex rig was then used to control and keyframe individual feathers. “We needed a rig that provided animators with flexible controls over single wing feathers, without overwhelming them with too many options,” continues Martin. “This led to the wing feathers having a detailed rig with additional procedural animation controls for wind motion.

“Next was to make sure these animated feathers had the same look as those on the bird’s body, which were created procedurally. So, we created an in-house tool that allowed the modeller to convert feathers with texture and Alpha to actual fur-based feathers. Thus the animator had full control over the movement of the big wing feathers, and those feathers blended well with the body.”

The campaign was released in time for the festive period, with 3,000 bus shelters in the UK adorned with The Famous Grouse posters. With the run being so successful, executive producer Andrew Pearce reflects on the project: “Following the Grouse campaign and two other live projects with birds, our feather expertise has grown fast! We’d love to do more.”



THE PERFECT BLEND Executive producer Andrew Pearce explains the tools that Flaunt is looking forward to working with in the future: “All of our clients are interested in VR, to some degree. It’s a natural step for us, since our work is predominantly pure CG; creating a VR version of some content is a relatively small step.

“Our R&D in GPU and real-time rendering should have a big impact for all clients. For the first time, CG artists will get instant results as they light shots. This means more iteration in less time.”



01 The campaign featured in 3,000 bus shelters across key cities in the UK
02 The team shot drone footage in Glencoe, allowing real-life textures to be applied to the CG world
03 Flaunt needed to direct the main procedural shader, so the team created various maps and combined these with some randomness
04 The movement of the feathers was a mix of simulation and hand animation
05 Each feather was unique, achieving the desired organic look without huge amounts of work
06 Flaunt is part of the Axis Group’s three collaborating studios


Share your work, view others’ and chat to other artists, all on our website

Register with us today at

Images of the month These are the 3D projects that have been awarded ‘Image of the week’ on in the last month 01 Steampunk Gun by Pascal Deraed 3DA username Pascal Deraed Pascal Deraed says: “This was a really fun steampunk project with the purpose of studying advanced texturing. The entire scene was created and textured inside Blender Cycles and the lighting was achieved using Pro Lighting Studio from BlenderGuru.” We say: This is a really intricately modelled and well-textured image by Pascal, and we’re big fans of the steampunk aesthetic.

02 Locomotive by Ruslan Anisimov 3DA username ars1024 Ruslan Anisimov says: “This is for my son, who is a big fan of the Chuggington children’s TV series. As a reference for my train I’ve chosen the amazing Soviet Locomotive 2-3-2V, which was built back in 1938.” We say: The sense of motion in this scene is really tangible thanks to clever use of warping towards the right-hand side. This is a really fun render and we hope Ruslan’s son likes it!


03 Lobstamonsta by Arnaud Lonys 3DA username Arlo Arnaud Lonys says: “This cute but fierce Lobstamonsta is based on a concept by Creature Box. It was created using ZBrush, KeyShot, Photoshop and Lightroom, and I had a lot of fun sculpting it.” We say: What’s really striking about this image is that Arnaud has managed to transpose a cartoon-style character into a wonderfully realistic render. The quality of the sculpt is also readily apparent.

04 Apothecary by Christian Otten 3DA username IgnisFerroque Christian Otten says: “I tried to re-create the intricate details on shop furnishings and apparatus from around the 19th/20th Century period here, and also the kind of dusty atmosphere found in some museum exhibitions or reconstructions.” We say: Christian’s apothecary could be a really cool game environment or something similar, and the attention to detail throughout is startling. The haphazard posters at the back are a nice touch, too. 03


Image of the month

Chaika by Nail Khusnutdinov 3DA username nailgun3d Nail Khusnutdinov says: “A black limousine was delivered to its owner in a small airport, where he was waiting for a private jet. The rays of the setting sun illuminated the forms of two swift vehicles built for flight – one to fly in the sky, the other to fly on the road.” We say: This is a really interesting car render and we’re particularly keen on the lighting and the colours coming off of that HDRI. Great work!


Attic Room by Morteza Yadegari 3DA username Morteza Yadegari Morteza Yadegari says: “An attic that I planned in the corner of my mind after a busy week gives me peace.” We say: Morteza has clearly built this interior scene with a crisp, clean vision in mind. This attic room really does look like a peaceful place and little touches like the kinks in the rug and sofa fabric add an extra layer of realism.


Styracosaurus by François Boquet 3DA username Zackb François Boquet says: “I like to sculpt the concepts of Alberto Camara. His designs are so fantastic. I really enjoyed making this dinosaur.” We say: This dinosaur sculpt could pass for a classic Disney character thanks to how wonderfully expressive it is, and utilising a grey background has allowed the colour palette to really speak for itself. 97


Alin Bolcas

Incredible 3D artists take us

behind their artwork

QUICK CONCEPTS The focus of this project was to create a really quick concept of the character from 2D to 3D. One thing I learnt from this is the importance of working loosely and not focusing on all the details. Polishing is important, but only if it is going to end up in the final frame. Many thanks to Chris Ayers for letting me use his 2D concept art!


Alin is a student at Bournemouth University working to become a character artist in the industry Software ZBrush & Photoshop

Bison, 2016


5000 CPUs

© Daniel Linard Character modelling by Marina Soares

Making your deadline in time was never easier, faster and safer than now. Unleash the power of 5000 CPUs right into your desktop. If you are new to our online render service feel free to use this coupon code 3DA-RF-NY103V to receive 35 Renderpoints worth 35€. Go to our website, register and enter your coupon code. Download our advanced RebusFarm software and send some test frames to find out yourself how fast and easy it works. The RebusFarm render service is processing 24/7.



3d Artist 103 2017