VERTEX 3
Featured Artists: 2D Concepting (Benjamin Last), 3D Environment (Paul Pepera), Marvelous Designer, Alien Isolation Art, Destiny FX Art, Dragon Age Art, 2.5D Environments, 3D Concept Art, Houdini Pipeline, Indie Tips & Tricks

Cover Image By: Joakim Stigsson


About The Editor

Ryan Hawkins is deeply involved in the game-art community. Previously known as Aftermath, Ryan was an administrator at Game-Artisans.org as well as a key contributor and co-organizer of world-renowned contests such as Dominance War, Unearthly Challenge, and the Comicon Challenge. His passion for contributing to the game-art community is unparalleled: he is always looking for ways to give artists more exposure as well as opportunities to prove themselves and improve their skill sets. Consequently, he was invited to join the Polycount.com team to help develop and improve Polycount's contests and challenges, and he was heavily involved with the BRAWL and Darksiders II contests. On top of all of this, he has spent years investing his own time and effort into developing the Vertex books, with the assistance of a cavalcade of talented artists and game-art veterans, to bring you one of the most comprehensive resources that belongs in every game artist's library.

We hope that you find VERTEX 3 enjoyable! VERTEX 3 is, by far, one of our more variety-packed volumes, and each year it is a challenge to try to outdo the last book. Please let your friends, families, and co-workers know that all of the VERTEX books are still very relevant in our industry and should be shared to help our industry grow! Special thanks to Derek Thor Burris for helping out with the early text edits of the book!

Ryan Hawkins, Editor ryan@artbypapercut.com



Table of Contents

Going Indie: Going from AAA to being an Indie Developer. By: Joseph Mirabello (PAGE 08)
Creating Wings: Low Poly Wings from concept to final. By: Thiago Vidotto (PAGE 20)
World Interaction FX: FX techniques used on Destiny. By: Ali Mayyasi (PAGE 34)
Pattern Drafting: Using ZBrush to Create Patterns for Marvelous Designer. By: Andrei Cristea (PAGE 42)
Mountain Lodge: Creating the Mountain Lodge. By: Joakim Stigsson (PAGE 50)
Fusion 360 for Concept Artists. By: Kirill Chepizhko (PAGE 66)
Regular to Lead: About leading the environment department for Risen 3. By: Sascha Henrichs (PAGE 74)
Sci-Fi Corridor: Concepting in 3DS Max and Vray. By: Paul Pepera (PAGE 86)
Houdini Cables & Pipes: Procedural UV-mapping of Cables/Pipes. By: Magnus Larsson (PAGE 100)
Marvelous Designer: Marvelous Designer Tips and Tricks. By: Xavier Coelho-Kostolny (PAGE 108)
2.5D Texturing: Texturing using a Fixed Camera Perspective. By: Matt McDaid (PAGE 122)
Booleans: Hard Surface Boolean Workflow. By: Hunter Rosenberg (PAGE 132)
The Raider: Creating The Raider Image. By: Steven Stahlberg (PAGE 148)
Silhouette Modelling: Back to the Basics of Character Modeling. By: Steffen Unger (PAGE 162)
Voxel House: Breaking Down the Voxel House Demo. By: Oskar Stålberg (PAGE 170)
Matte Painting: Alien: Isolation's Matte Painting Workflow. By: Creative Assembly (PAGE 178)
Hard Surface: Efficient Highpoly Modeling Approaches and Techniques. By: Simon Fuchs (PAGE 198)
Art Direction: Becoming your own Art Director. By: Josh Herman (PAGE 220)
Post Marvelous Designer: Techniques for adding details after Simulation. By: Seth Nash (PAGE 234)
Storytelling: Storytelling with vehicles. By: Matthias Develtere (PAGE 246)
Challenge: Designing to Increase Challenge (Not Punishment). By: Drew Rechner (PAGE 266)
Set Dressing: Using Set Dressing to Sell the Believability. By: Devon Fay (PAGE 274)
The Service Industry: Life as a Technical Animator. By: Matthew Lee (PAGE 282)
Iron Bull: Dragon Age Inquisition character art. By: Patrik Karlsson (PAGE 288)
Concept Illustration: Introduction to a Concept Illustration Workflow. By: Benjamin Last (PAGE 302)
Content is King: A game designer's take on content. By: Tom Mayo (PAGE 310)
Helping you speed up your workflow. By: Guilherme Rambelli (PAGE 316)
3D Concept Design. By: Vitaly Bulgarov (PAGE 324)
Photogrammetry
Making of "Atom-Eater"



Christoph Schindelar

www.artstation.com/artist/chris-3d



Gliulian

www.artstation.com/artist/gliulian



Going Indie

Going from AAA to being an Indie Developer By: Joseph Mirabello

Over two years ago I formed Terrible Posture Games, and in March of 2014 I launched my first project as an indie developer, a bullet-hell, rogue-lite first-person shooter called Tower of Guns. It was well received, being called "absolutely endearing" by Rock, Paper, Shotgun, and a "surprisingly addictive...beautiful marriage of two genres" by Destructoid. It received a 77 on Metacritic, was featured by a bunch of popular YouTubers and Twitch streamers, was part of a major Humble Indie Bundle, and was eventually ported to Mac, Linux, and consoles.

Looking at it objectively, Tower of Guns was a small project, but it still took two years to build. More precisely, it took 3,850 hours to get the game to launch (I track my time pretty obsessively). I'd spent years in triple-A as an artist and a tech artist, and while I had a good grasp of the tools and technology, I was not nearly as equipped as I should have been to build a full game, let alone start a company. After all, "making a game" is only a single component of actually making a game. A great deal of time needs to be spent on business development and promotional tasks. Given that the tools are increasingly becoming democratized, many a Vertex reader might find themselves tempted by the indie road, so consider this a brief primer on a handful of things you might not have considered, intrepid future indie developer!

Are you incorporated?

Starting a company is more than just getting together with a few friends and jamming on a game idea. Before you ever try to sell the game, it's wise to incorporate. Terrible Posture Games started its life as a sole proprietorship, which is about the simplest form of "official" you can get, but really is only good for getting a P.O. Box and sounding professional in front of relatives. In order to properly handle taxes and to have some liability assurances, an LLC (which is what Terrible Posture Games is now) or an S-Corp is what you'll need, depending on the circumstances. In fact, some partners and publishers, like Steam/Valve along with the major console manufacturers, actually require incorporation. They simply don't want to work with "individuals." The paperwork for filing for an LLC isn't terribly complex for a single-person company like Terrible Posture Games, but things get increasingly complicated depending on the country, the state, and the number of people involved. Preparing Articles of Organization, handling state fees, terms of employment and termination, ownership details... a good lawyer or tax consultant's experience in those matters can save you endless headaches later.


Do you have a lawyer and an accountant?

Lawyers and accountants, despite their reputations, are both worth the money. Did you know you may be able to deduct percentages of your utility bills if you work from home? Or that some purchases can have portions of their value deducted multiple years in a row? Do you know what a 1096 is and when you need to file it? Do you know how much of your earnings you can set aside for retirement annually? Do you know what to do if you are being audited? Find a knowledgeable accountant and you'll not only save money, but you'll also have the peace of mind that the tricky red-tape parts of running a company are handled properly.

Lawyers are trickier. A good lawyer's worth is hidden within the negotiations of contracts and in the advice they give you, but it can make the difference between being successful and being taken advantage of. Experienced game/entertainment lawyers are not cheap (you're looking at several hundred USD an hour), but indie development has become a full industry in and of itself, and with that come people who make a killing preying on uninformed indies. A good lawyer can help you navigate this rather depressing minefield of exploitation and leverage. Fortunately, you don't need to consult a lawyer and an accountant on everything. The more contracts you look at and the more you learn, the more you'll spot problems early. Your eyes get accustomed to hunting for unusual periods of exclusivity, parity clauses, odd dispute locations, and extraneous rights. It doesn't mean you don't need that lawyer's number, but it does mean you can save your money if things seem suspicious right off the bat.

How are you going to fund your game?

With Tower of Guns, I was in a very fortunate situation. My wife and I had been living well below our means and had been saving up for a long time with the idea of me someday trying the indie life. Then the company I'd been working for (38 Studios) imploded. I thought about running a Kickstarter for further funds and was approached by several other crowdfunding platforms who wanted me to adopt their platform. I was also approached by several publishers who were interested in giving me money to complete Tower of Guns. In the end my wife and I did the math and decided we could get by on our savings and her income for the project's duration rather than divide the game's revenue. I might not make that same decision today, given today's indie climate, but I stand by it as being a wise decision back in 2012.


Kickstarters were, once upon a time, a great way to get press attention, which would lead to traffic and ultimately funding. Nowadays it takes a certain kind of game to gain the press' eye and, not surprisingly, it seems that there's a much smaller chance of success on Kickstarter. A special project with modest goals can still successfully find revenue via crowdfunding, but it's a very saturated landscape where success often requires a large chunk of the project to be visually complete long before launching a Kickstarter, which can be counterintuitive for a crowdfunding campaign. A well-run Kickstarter campaign takes frequent updates, a lot of commitment, and some really strong tiered incentives that take time to fulfill. If you'd focused on development rather than a Kickstarter, how much more work could your team have gotten done? Would that work have convinced a publisher to fund you instead?

The publisher situation is an interesting one. When I first started Tower of Guns, several publishers were interested in talking with me. I could arguably have made a better game with publisher money and gotten it into more hands, but I did not feel like I knew enough to run a larger studio properly (yet). Additionally, I wasn't sure if I needed a publisher. Some publishers will help you sell your game. They'll help you get it onto Steam or onto the consoles and help you promote the game. They'll pay for devkits, help you pass console certification requirements, or help you show at tradeshows. This is all true, but in the last couple of years Steam has expressed a strong preference for indie developers self-publishing through its Greenlight system, and it encourages indie developers to retain as large a share of their own profits as possible. That makes sense, since a publisher might take 30%, which after Steam's similar cut leaves you, the developer, with very, very little. Given Valve's stated goals of making Steam even more open, it seems reasonable that they'll continue to promote indies self-publishing.


11 And that’s why I think publishers have a second life in them. As development tools become more democratized, and as the indie circle grows, the hurdles won’t be about getting your game on Steam anymore. They’ll be about getting your game in front of Youtubers like Total Biscuit. It won’t be about publishing, but about managing SKUs across nine different platforms, across multiple consoles, across multiple countries. Publishers, I suspect, will be focused on making your game make a splash, rather than being a drop in the bucket. I’m not certain if the dollar value they’ll ask for is worth it, but given the direction of today’s climate I’ll be very curious to see what publishers can offer in the coming years.

Where are you going to sell your game?

After you’ve got your company set up, and after you’ve paid for and built your game, you still have to convince people to buy it. This is not insignificant work—and it’s work that ramps up dramatically as the project nears completion. For Tower of Guns about 40% of my time was spent on these sorts of tasks in the last few months of development when the game needed the most crucial attention. During the course of the project’s entire development cycle, about 25% of my time was spent on these sorts of tasks. Let’s say you made your game. It’s complete, looks good, and you’re ready to sell it. Congrats! Unfortunately, it’s highly unlikely you’ll be able to sell your game successfully with just a widget on your website, the way Minecraft did seven years ago. You still CAN sell that way, and it’s certainly better than nothing, but the most realistic way to sustain a team of developers is to get your game on a few of the bigger platforms (aka storefronts).

Getting your game onto platforms can be tricky. Some platforms, like Desura and itch.io, allow you to add your game yourself with varying levels of approval gating. Others are more curated, like GoG.com and the Humble Store; there you need to submit your game and hope they like you. More platforms will only add you if you're already on another platform, or only by referral, or only by them approaching you, or only by community vote (such as with Steam's Greenlight system, which may very well be retired by the time you read this, as it seems like it's on its way out). But just because you CAN be on a platform doesn't mean you should put your game there. There's a good argument that by having your game on many platforms there are more places that might feature your game, that you'll receive more frequent paychecks from multiple sources, and that the game will be available to larger communities of thousands or millions of gamers. Those are the arguments the platform holders themselves will tell you.


What they don’t tell you, and I can mostly only speak of PC platforms here, is that you’ll have to manage those multiple income streams, track whether or not those platforms have actually paid you, whether they report their sales honestly, and that they might only sell a dozen copies... ever. You have to upload and manage patches with multiple vendors, every one of which has a different process for pushing content. You have to deal with annual tax forms from all these platforms, have to send them invoices in order to get paid (as is customary when working with EU based companies). You need to make sure they have a supply of Steam keys if they are selling them and you have to support customers who picked up the game via that service. And then, you have to worry about price erosion, one platform undercutting another (which can quickly drive your game’s worth to nothing) and how many “marketing copies” a platform can simply give away. Oh, and they’ll take 30%, usually. Fortunately, most of the people I’ve worked with for Tower of Guns have been absolutely fantastic. Most platforms won’t try and undercut others (by drastic amounts). But around the holidays, where everyone is offering sales at the same time, it can get tricky. Knowing you had a good lawyer and that price approvals are in the contracts can help you sleep easier.

Now that I’ve scared you, it’s worth mentioning all the good things platforms can do for you. It IS worth being on multiple platforms—particularly if they will help you feature the game. Steam might never put your game on the front page (they never did for Tower of Guns), but some of the other platforms might feature your game prominently (and indeed did for me). This can drive interest on all other platforms too, since their customer bases tend to overlap, including Steam. With some platforms however, there’s no overlap at all; GoG users for example, tend to be philosophically opposed to the DRM aspects of Steam and are a completely different user base than you’d find on Valve’s service. In the end it can be absolutely beneficial to be on multiple platforms.

Spreading the word about your game?

Even if you are on every platform there is, people might never pick up your game. The days of the press covering every Kickstarter, game jam winner, or indie foray of a former triple-A developer are long gone, but there are tons of ways to get your game noticed by the press. First, obviously, is making a good game. However, there are also trailers you should make, websites to post to, game jams and joint promotional events to join in on, streamers and YouTubers to send your game to, festivals to submit to... the list goes on and on. And, as the indie developer circle grows ever larger, you can be certain everyone else will be trying all those other tactics too… but not all of them can find ways to make it to tradeshows.


Tradeshows can be extraordinarily expensive. Some of those large publisher booths you see at E3 likely cost millions of dollars. Even the smallest space at a popular trade show, a ten-foot-by-ten-foot booth, can cost several thousand. The electrical drop is a few hundred on top of that. And renting large televisions? Well, there's a reason some booths just buy them and then give them away to conference-goers as raffle prizes at the end of the show: it's cheaper! And I hope you don't need internet access for your game; that'll be another thousand. Then there's carpet rental—nothing comes for free, you know. And insurance. And drayage, if you have anything big. Oh, yeah, then there's travel, lodging, food, and the actual task of manning the booth. Oh, and the cost of, you know, actually making a concise playable demo version of your game.

There was a solid argument to be made that, for a long time, trade shows simply weren't worth the money or time for indies. You'd simply get lost amid the much larger booths. But picture the current indie developer landscape from the perspective of a gaming journalist. The indie scene has grown so mammoth that some journalists get dozens, if not hundreds, of emails a day about various games. They simply don't have time to sift through a hundred presskit() pages (which, by the way, are pretty important to set up, as they're a current standard method of conveying your game's info to the press).


Journalists ARE still paid to walk through trade show expo halls, though. They need to be there for the big announcements and big games, but they'll wander the indie sections too. Being in the right place when that happens is exactly how Tower of Guns showed up on Penny Arcade, Destructoid, Joystiq, Kotaku, HardcoreGamer, Rock, Paper, Shotgun, and a slew of other sites. And here's a secret: it actually doesn't need to be super costly. There are a lot of great opportunities for indies at tradeshows if you look closely: showcases, ambassador programs, contests, deals with partners to show in their booth, etc. For example, I can't speak highly enough of the Indie MEGABOOTH's work. I worked with them for PAX East and they helped me negotiate rentals, secure sponsors, and figure out paperwork. They hacked away at the otherwise prohibitive costs of showing a game. It's largely to their credit that I was able to have a full booth with four playable stations of Tower of Guns, a large flat-panel HD TV for showing the trailer and mirroring gameplay, and Alienware-sponsored computers, mice, headsets, and keyboards. The MEGABOOTH offered tech support, provided me with volunteers to help run the booth, and even scheduled press to come around and see the game. In the end, I showed Tower of Guns at five different tradeshows—that was five opportunities for the press to see the game—and I actually paid very minimal amounts for it (travel, mostly). Of course, I also stayed on friends' couches, built signage out of PVC pipe by hand, shared hotel rooms, volunteered a lot, and did a lot of deal shopping. The point is, you can do these things on a budget. You don't even have to do a lot of detective work, you just have to keep your eyes open. Oh, and be on Twitter.

How do you interact with the public?

Jeff Vogel of Spiderweb Software once said, "Twitter is a loaded gun pointed at your career." That's absolutely true. I've seen people become blacklisted, seen entire projects pick up hate campaigns against them, and seen games removed from Steam entirely. You can destroy an entire team's hard work with a single insensitive tweet taken the wrong way. This is especially true if you're outspoken or thoughtless; if so, you probably should reconsider being the public face of your company. But you should still be on Twitter... just don't talk. Listen. Twitter is more than just people complaining about things. There's a global conversation going on about indie development on Twitter: What are people talking about? Where should you sell your game? What bundling sites are going to rip you off? When do submissions for certain showcases open? How is YouTube screwing up now? What are the currently hot-button issues to step carefully around? It's also a fantastic way to reach fans, to tell them about promotions for your game, answer their questions, or simply remind them that the person who made that game they like is a real person. While having a public identity is a scary part of being an indie developer, it's also one of the only advantages indie developers have over bigger-budget titles.


Essentially, much of what I've been talking about (tradeshows, getting your game featured on platforms, interacting on Twitter) is marketing. I've never paid a single dime for traditional advertising for Tower of Guns—although I might in the future, just so I can learn a little bit about how it works. Instead I've relied on word of mouth, Twitch streamers, YouTube, joint promotional events with other developers, participating on forums, pay-what-you-want bundles, giveaways, joining in charity events, etc. The simple fact of the matter is that there are many, many good games out there. No one will have heard of yours, no matter how good it is, unless you show them why they should give it a try.

When is it done?

Here’s a quick way to incur the wrath of the internet: release your game, start working on the next. The final part to making a game, and one that often goes forgotten, is support. No one makes a perfect game, particularly for PC where there are endless hardware and driver combinations to thwart any rollout. Since launching Tower of Guns, I’ve rolled out at least six bug patches, each with some minor content updates. I’ve responded to questions on the forums diligently and interacted with fans as regularly as I can, and the one thing that comes up repeatedly is the customer’s surprise at that support. It’s amazing to me that a developer simply answering a question on the forums or helping someone run the game properly is interpreted as unusual these days. While it IS a large time investment, it is also appreciated by the customers who perhaps will become repeat customers of future games I make. But these days, more than ever, releasing the game into the wild is just the first step. For single player games that road is shorter than for multiplayer games which require constant nurturing to grow. But even a straightforward single player game like Tower of Guns required bug fixes, price monitoring, sale and bundling schedules, and a forum presence. Given that you won’t receive any income from your game for 30-60 days after shipping anyways, depending on the platform, you certainly should plan on reserving funds to support yourself during any post-launch maintenance.


Putting it all together

In the end, it cannot be overstated how much rides on the game you make being actually decent. Nothing will kill your chances of success faster than a bad product; that much should be obvious. And while you don't need to be the shrewdest businessperson in order to succeed as an indie developer, it's becoming increasingly important to educate yourself at least somewhat in the aspects of this industry that lie peripheral to the actual development. To ignore these aspects, especially in today's climate, risks your game being washed aside amid the ever-growing tide of indie games. There's a daunting amount to learn, yes, but what's funny about this indie-development world is that it's not a cutthroat business, at least not yet. Indie developers genuinely care about making good games, looking out for one another, and working together to build a strong community. If you reach out to the indie community you'll find that oftentimes other indies will reach back. So ask questions. Help others. Participate in that community. Much of this knowledge about building, selling, and promoting your game can actually be soaked up just by that active participation. And remember to enjoy the process. That's why we make games, after all.

Where can I learn more?

I would be remiss if I didn't leave you all with some of the great resources I've leaned on over the past two years. This article was just the briefest of surveys, and these resources go into much more detail.

Game Development Business and Legal Guide by Ashley Salisbury: This book is largely outdated, as it reflects a pre-indie landscape, but some of the individual sections about running a studio, navigating legalese, and what to expect are incredibly helpful.

Pixel Prospector (http://www.pixelprospector.com/): This is a treasure trove of information about all things indie-game related. I wish it had been as complete when I set out on my indie-development mission, as it would have made many things easier. Still, it has been, and I know it will continue to be, a fantastic resource for me.

GamaSutra Expert Blogs (http://www.gamasutra.com/blogs/expert/): From inspiration to concrete revenue numbers, Gamasutra's expert blogs are a cornucopia of information. Just be sure to check the date on the articles, as the wisest course of action changes fast in this industry!

Independent Contractor, Sole Prop, and LLC Taxes by Mike Piper: This is a great primer on all the stupid boring stuff about staying proper with your taxes. It's short, to the point, and easy to read.

Additionally, I've found that Gotprint.net has consistently great prices on printing services for signage, business cards, flyers, and other needs, although you have to plan well in advance to account for shipping. I also use Toggl.com for my time-tracking needs and Presskit() for my marketing hub (dopresskit.com/), both of which are free to use. Good luck!


About Me

Joe Mirabello started out as a game artist working on such games as Titan Quest, Titan Quest: Immortal Throne, and Kingdoms of Amalur (as well as the occasional side gig helping out an indie). Along the way he picked up a lot of technical chops and has worn many, many, many different hats. After the last studio he worked for, 38 Studios, met with a very catastrophic demise, Joe decided that he wanted to explore working on his own games for a little while. Tower of Guns is his first solo endeavor.

Joe Mirabello

www.towerofguns.com


Morgan Yon

www.morgan-yon.com



Ruan Jia

www.ruanjia.com



Creating Wings

Low Poly Wings from concept to final By: Thiago Vidotto

This tutorial is intended for artists who would like to improve their low poly asset production workflow. It will cover some parts of the making of a low poly model, starting from scratch. I will talk briefly about the importance of the camera view, the size of the model on the screen, how to get the most out of your mesh based on the animation, and some common mistakes. For this project I had a limit of 600 triangles and a 512x256 texture for both wings.

Camera and Detail Density

Detail density is really important, and sometimes it does not receive the attention it deserves. If you don't have enough, your asset may look too simple; but if you go too far, it may look noisy and confusing. A good way to plan it is by considering how your asset is going to be displayed: things like the camera distance/angle and the actual texture size you have available to work with. This project was made for Dota 2, a game that has a top-down camera angle most of the time. The character usually covers less than 10% of the screen, although at times it can be displayed at a closer view. Therefore, it is important to create details that work in both situations.

Example of the size of the character in game


The second situation mentioned, where you can see the character in a closer view.

For the distant camera it is good to focus on simple and recognizable shapes; this can be achieved through value contrast, specular, and saturation. It is important to keep this readability inside the game environment, and in-game tests are necessary during production.

Simple shapes that I wanted to be recognizable from the top-down camera.

The closer view requires finer details. These details should be subtle to the point where they are not distracting in the top-down view; otherwise the asset might seem noisy and it could affect the silhouette recognition. The subtle details must also not be too small: the texture size and the UVs need to be considered when planning them. I will show an example of a wrong use of detail density in the next topic.


Feathers

For the high poly model I tend to use ZBrush and start with its basic shapes, but the same results can be achieved with other similar programs. I started the feathers with a sphere and got to the "blade" shape using the Move and Trim Dynamic brushes. I then used ZRemesher to get a better edge flow (top mesh on the next image). Then I used mostly the Slash2 and Clay Buildup brushes to create the barbs (for the cuts on the barbs, two strokes with the Slash2 in opposite directions and an alpha mask were enough). On some of the feathers I used the MAHcut mech B brush (http://mahstudios.com/) with LazyMouse on to create the rachis (the middle part of the feather).

It is better not to exaggerate with the subdivisions and details, as mentioned previously. Those feathers were duplicated more than a hundred times for the final high poly, and since the texture is relatively small, it is good practice to avoid too many details. The normal mapped version tends to look more subtle than the high poly; for that reason I like to exaggerate the details' height a little more. The cross section of my feathers is thick and diamond-shaped instead of flat like a normal feather. This allows me to use both sides of the mesh and make it more distinguishable when the specular map is added.

It is also important to add some variation to the feathers, especially the most visible ones. I slightly modified the shape and the barb cut positions of almost all of them. Since these are not complex details, it's easy to do and just requires some patience. To achieve more contrast in the feathers I basically made two different styles: shorter and curved, with the idea of being lighter and more flexible, and a longer version with a straight tip and a stronger look.


Wings

References are really important and they help to create a connection with something that the player will easily recognize, but it is even more important not to copy straight from them. As the name suggests, a reference should be only a reference. The amount of detail present in those references would be impossible to recreate with this project's limitations. I like to simplify the reference to its main shapes and then reproduce most of it with volumes in the high poly. They might look exaggerated in volume when compared to the references, but after they are translated to the bakes they turn out less intense.

When doing a repetitive process, like adding a hundred feathers to the wings, it is really easy to tunnel vision on the details and ignore the big picture. This is one of the reasons I sketched the wings on a single intermediary mesh, which helped me picture the feathers' placement and the overall thickness. I also blocked in the main shapes and tested them in game to see if they were visible from afar.


To create the high poly mesh, I used this intermediary mesh (number 1) as a guide and started layering the feathers over it, replacing the previous volume. For each new feather I made some slight changes to the size, main shape, and barbs. I also considered the adjacent feathers when making the changes, to keep a constant flow.

Low Poly Workflow

My lowpoly workflow looks a little strange to people used to a straightforward process in which you have a defined order of work and animations are the last thing to be done. I work more like I am prototyping the asset: I keep jumping from one stage to the other, trying to work on everything in parallel and using placeholders whenever needed. I try to let the project dictate what is most important to work on. For this project I was worried about the effect the animations would have. I was expecting a lot of mesh and texture deformation, so I decided to jump to the animations early and create at least one idle cycle. So I went in the opposite direction of a regular workflow: I finished the lowpoly only after I had the animations and knew all the extreme poses. Sometimes the animation requires different extreme poses than were planned, and that happened on this project. Animating with an intermediary low poly mesh saved me from wasting a lot of work when I decided to change its topology; in a more straightforward production I would have wasted all the UV and texture work by doing that. Working this way gives me a better picture of the entire project and allows me to spot some problems before they get bigger. It also allows me to switch between areas whenever I get tired, renewing my motivation. (If you are not going to animate the mesh yourself, consider sending it to the animator for some feedback before finishing it.)

UV Layout

The first "rules" when talking about seams are usually to hide them and use as few as possible. I've seen a lot of projects where people use less than 50% of the texture size just because they didn't want to create a new UV island.


Usually a seam is considered a bad thing that you are forced to use, but avoiding it is not always the best answer. The seam we are going to discuss was created to allow more resolution on the most important area of the mesh. I know that an excessive number of UV islands can have an impact on performance, but one extra cut may allow a much better usage of the texture size. What I am trying to say is that a seam is not always a bad thing; it can also be useful. When you have limited texture space, getting more resolution is extremely important. I wanted the outside of the wings to have more resolution and be as big as possible in the UVs, since it was going to be the most visible in game; but I was having a hard time achieving that due to the texture being constrained to a rectangular shape, so I decided to split that island in two parts.

Doing that cut usually leads to a visible seam on the final mesh, which is caused by a couple of factors. The first one I want to talk about is the pixel density. No matter what you do with your texture, if it is not a flat color and the pixel density doesn't match on each side of the seam, it is going to be noticeable. When you have a higher resolution to work with, the contrast tends to be smaller, but that is not the case with this texture. A way I found to solve this was by keeping the same pixel density on both sides of the seam and the same angle on the island borders (see image below). I do that by using the seam to drive my effort in unfolding the mesh, instead of letting the unfold decide the position of the seam on the UV. The red line on the image is showing my cut, and that both islands have the same size and angle on the edges. After making the cut, it is possible to move the islands without affecting the density just by keeping them proportional; if you need to rotate them, it is easy to do so in multiples of 90 degrees. Making the UV island border straight is not necessary, but in this case it made my work easier.

The seam was getting better but still visible when looking closer, especially with a normal map applied. Another problem is the edge padding, which helps a lot with the mipmaps by growing the texture outside the islands, but it is not the best option in this case. A solution is duplicating the lowpoly mesh when baking and using a different UV placement to let the texture continue on both sides of the seam, instead of just repeating the last pixel color.
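If you want to sanity-check the "same pixel density on both sides of the seam" idea numerically, here is a minimal sketch of comparing texel density across a seam (my own illustration, not part of the original workflow; the helper name and numbers are made up):

```python
# Hypothetical helper: compares texel density on both sides of a UV seam.
# Texel density = texture pixels covered per world-space unit along an edge.

def texel_density(uv_edge_length, world_edge_length, texture_size):
    """uv_edge_length is in 0..1 UV space, world_edge_length in scene units."""
    return (uv_edge_length * texture_size) / world_edge_length

# Example values (illustrative only): the same 10 cm edge as it appears
# on each side of the seam, on a 512 px wide texture.
side_a = texel_density(uv_edge_length=0.125, world_edge_length=10.0, texture_size=512)
side_b = texel_density(uv_edge_length=0.125, world_edge_length=10.0, texture_size=512)

print(side_a, side_b)  # equal densities -> the seam is far less likely to show
```

If the two values differ, the island that comes out smaller is the side that will show a visible resolution jump at the seam.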


The brown islands represent the duplicated lowpoly mesh used to extend the bakes on the cuts, instead of stopping and repeating the last pixel color.

The difference in the seam after extending the baked area. This took more time to do, but the seam is nearly invisible. This technique is also useful on flat surfaces like blades, and allows you to split the mesh in the middle of its length to fit better on the UV layout.


Mesh with the texture applied. (300 triangles and 512 x 256 pixels for the texture. Diffuse + Alpha + Specular)

Exact same angle showing where the seam is located.


Diffuse Texture

For the textures I usually try to keep it simple and I use the bakes as a shortcut. They help me block out the texture and work as a guide for my color masks. I started with a flat grey color, occlusion, the green channel of the bent normals, a value gradient based on the up axis and a cavity map generated from the normal map.

I wanted to give a metallic effect to the diffuse, so I used NDo and generated a new occlusion and the diffuse from the normal map; I layered them over my flat gray as multiply and overlay respectively.

Then I painted the values and added some details to help distinguish both styles of the feathers. Using one of the bakes as a reference makes it a lot easier during the process of masking the flat grays for the values.


I like to use a lot of masks on my layers, as they make it easier to edit and recognize things in Photoshop's layer panel. I made some darker details on the feathers, and since I used the same island border size explained in the previous topic, I could measure the distance of the details and match the painting across the seam directly in Photoshop.

The colors were mostly flat colors using the same masks I used on the values. They were placed over the value layers using the "Color" blending mode.


For the specular map I used a similar process where I layered the occlusion, cavity map and the values with some slight adjustments.


The alpha was created using, as a base, a random bake from xNormal with 0 edge padding, which gave me the contour of the high poly. I just had to mask the parts that I didn't want to be black and paint some extra details on the barbs, and it was done.

Conclusion

I just wanted to emphasize how important it is to consider the way your assets are going to appear in game and how much attention they should attract. An asset needs to be consistent with the environment, and the concept must take into consideration the size on the screen and the angle at which it will be displayed. This may sound a little obvious, but testing it in game is extremely necessary and should be done often during production, not just when it is finished. The project on a clean canvas works differently than it does in game, and it is really easy to zoom in on the model and end up adding tons of detail that get lost in the textures or become noise later. I would also like to point out the importance of having some basic knowledge about areas that are not your specialty. This will make you much more valuable to the team and you will be able to spot and avoid problems in the pipeline before they get bigger. Try to be flexible and allow yourself to prototype the assets. It is really frustrating when you finish a model and discover that the silhouette is not working at the camera angle, or that you need to redo the lowpoly because the animations changed. Try to focus on the big picture before really committing to finishing the asset, and allow yourself to make mistakes instead of trying to fix something that started wrong.

About Me

I am a Brazilian self-taught generalist artist with a passion for games and for learning new things. Working with games started as a hobby more than 15 years ago, making maps for Quake, and it took me years to discover that I could make a living doing it. In the meantime, I graduated as an architect and urban planner, and worked with traditional modeling and animation for television. I owe a lot of what I know to the Polycount community and will always be grateful.

Thiago Vidotto

www.tvidotto.com


Andrei Cristea

www.undoz.com



Martin Teichmann

www.martinteichmann.com



World Interaction FX

FX techniques used on Destiny By: Ali Mayyasi

In this article, I will talk about the setup and FX techniques used on Destiny for player-world interaction. These FX were intended to believably ground the player’s actions in the world, enhancing immersion. As players move around the environment, their feet kick up dust, leave footsteps, shake foliage, and ripple water. Similarly, grenade bounces, detonations, supers, and hover vehicles all affect the environment.

Sand footstep particles and decal

Grenade bounce particles

Snow footstep decal


Arc "Super" impact particles and decal

Overview

Vehicle water hover particles and ripples

In fleshing out player-world interaction, we first had to define the following:

1) Player impacts that mattered
2) Virtual materials we cared to represent
3) Desired response for each material, for each impact

The Audio, Design, and FX departments got together to flesh out these lists from each of their perspectives. The different departments had different, yet overlapping, needs. Consider grenade bounces, for instance. From an FX perspective, grenades bouncing off thick or thin metal will look the same; however, from an Audio perspective, bouncing off thin metal will sound very different than bouncing off thick metal. From a Game Design perspective, grenades bouncing off thick metal might bounce further than grenades bouncing off thin metal. As another example, grenade bounces against Martian sand look different than those on lunar sand, but they sound and bounce the same. Similarly, bullets may pass through thin metal, but not through thick metal. The bullet impact FX might look the same for both, but would most likely sound different.


Virtual Materials

The FX team narrowed down the list of surface materials requiring unique visuals, and ended up with something like this: Flesh, Sand, Dirt, Snow, Water, as well as a general Default. Flesh was used for players and combatants receiving melee or bullet hits, both primary interactions in an FPS. Sand was used heavily for lunar regolith and Martian deserts. Water was used abundantly on Earth and Venus. The Default was used as a catch-all, as well as for any hard surfaces that didn't need any kick-up, like concrete, rock, or metal. The other departments generated similar lists, after which we set up data files representing each of the desired materials. Then it was up to the Environment Art team to disseminate these materials throughout the virtual world. As part of their process, Environment artists create renderable geometry as well as lower-resolution collision geometry. They then create and assign shaders to the renderable geometry, and tag each shader with an appropriate material from the established material library. At that point, when the environment asset is placed in the game world, it can be collided with, and its material looked up at runtime.

Player Actions

The Design team defined all the player impacts: walk, land, hover, slide, body fall, grind, melee, bounce, bullet, and detonate. Programmatically, each impact fires a raycast against the virtual world and knows what environmental geometry it collided with. The geometry's shader is then checked for its previously mentioned material tag. Each action is configured with a "material -> result" table, which triggers the appropriate result for the detected material. In the case of player foot impacts, the player rig was given markers on each foot. Each of the player's movement animations was tagged with trigger events at the exact frame where each foot contacts the floor. The marker was used as the source of the above-mentioned programmatic raycast. Player melee was set up similarly, raycasting out from the player's hand marker at a specific frame of the melee animation.
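As a rough illustration of that lookup, here is a minimal Python sketch (my own reconstruction of the idea, not Bungie's code; the material and effect names are invented):

```python
# Minimal sketch of the impact -> material -> response lookup described above.
# Material names and effect names are illustrative, not Destiny's actual content.

from dataclasses import dataclass

@dataclass
class RaycastHit:
    material_tag: str          # material tag read from the shader that was hit
    position: tuple            # world-space hit point

FOOTSTEP_RESPONSES = {
    "sand":    {"particles": "fx_footstep_sand",  "decal": "decal_footstep_sand"},
    "snow":    {"particles": "fx_footstep_snow",  "decal": "decal_footstep_snow"},
    "water":   {"particles": "fx_footstep_water", "decal": None},
    "default": {"particles": "fx_footstep_hard",  "decal": None},
}

def on_footstep(hit: RaycastHit):
    # Fall back to the catch-all "default" material, as the article describes.
    response = FOOTSTEP_RESPONSES.get(hit.material_tag, FOOTSTEP_RESPONSES["default"])
    print(f"spawn {response} at {hit.position}")   # stand-in for spawning particles/decals

on_footstep(RaycastHit(material_tag="sand", position=(10.0, 2.0, 0.5)))
```

The important part is the catch-all entry: any shader that isn't tagged with a special material still resolves to a sensible default response.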

Desired Responses

Gameplay FX should always complement and enhance the gameplay experience and communicate information like: Did I take damage? Did I cause damage? What is the damage type? Where did an attack come from? Gameplay FX also communicate an action's anticipation and follow-through, which is critical in fast-paced, action-packed games. Anticipation and follow-through can also be thought of as charge-up and aftermath. A grenade usually flares up, gives a quick short explosion, and leaves lingering dust and settling debris. The color of the explosion typically communicates the damage type of the grenade. Additionally, lingering scorch marks on the ground also communicate the damage type of the explosion. It's easy to miss the quick explosion flash in high intensity combat, but these lingering elements communicate that an action of a certain type just happened here. FX responses typically involve the following elements: particles, lights, lens flares, screen effects, controller rumble, camera shake, decals, wind impulses, and water ripples. I worked closely with the FX team to build a library of efficient, sharable elements for many of the above responses. Grenade detonations, for example, can have different sizes (small, medium, large, extra-large) and different damage types (Solar, Void, Arc...).

Particles

We used traditional camera-facing billboard particles to fake kick-up dust. We also used particle meshes for kick-up debris. To stay memory-efficient, we made heavy use of shader variants to tint the same sand kick-up particles differently for each planet. For example, Earth and Venus sand particles defaulted to a yellowish color, but they were tinted red on Mars and white on the Moon. This really helped cut maintenance and memory cost for the sand kick-up content.
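The shader-variant idea can be pictured as one shared particle asset plus a per-destination tint parameter; a tiny illustrative sketch (colors and names invented for the example):

```python
# Sketch of the "one particle asset, per-planet tint variant" idea (illustrative values).
SAND_KICKUP_TINT = {
    "earth": (0.85, 0.75, 0.45),   # yellowish
    "venus": (0.85, 0.75, 0.45),
    "mars":  (0.75, 0.35, 0.20),   # reddish
    "moon":  (0.90, 0.90, 0.90),   # near-white regolith
}

def kickup_variant(planet: str):
    # Same base particle content everywhere; only the tint parameter changes.
    return {"effect": "fx_sand_kickup", "tint": SAND_KICKUP_TINT.get(planet, (0.8, 0.8, 0.8))}

print(kickup_variant("mars"))
```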


Rocket launcher sand detonation particles. A common core explosion and surface specific kick-up.

Decals

Decals are basically shaders that project onto world geometry. For impacts, we dynamically spawn decals positioned at the point where the programmatic raycast hit the world collision geometry. Similar to particles, a different shader is used based on the detected material type. Sand footstep decals were the most complicated to nail, due to sand's fluid nature. I had to put in a lot of texture variety for the decals to look natural. At first I prototyped the different elements as separate, stacked decals: a main depression element, a displacement element, and small kick-up elements. Once I was happy with the look, I baked the layers down into a small set of variants in order to reduce the texture memory cost on the GPU as well as the rendering overdraw. We also used parallax occlusion mapping to fake depth. Bullet and detonation decals were typically composed of a common shared scorch element and a damage-type-specific glow element.


Sand footstep decals

Wind Impulses

Thermal detonation scorch decals

Sand bullet decal

On next-gen consoles, we used "wind particles" to shake foliage and grass. Each detonation played invisible wind particles that were rendered to a separate buffer. These wind particles were horizontally aligned and encoded a two-dimensional wind direction (X in the red channel and Y in the green channel). This wind buffer was sampled by the foliage geometry in its vertex shader, pushing it out of the way. Each foliage mesh had stiffness painted into it as vertex color, effectively weighting how much wind each vertex can receive, resulting in realistic bending of the upper leaf elements while the base stayed grounded.
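A rough CPU-side sketch of what that vertex displacement boils down to (my own illustration of the idea, not the actual shader; names and numbers are invented):

```python
# Decode an XY wind direction from the red/green channels of the wind buffer and
# push a vertex along it, weighted by a per-vertex value painted as vertex color.

def decode_wind(red: float, green: float):
    # Channels are assumed to store direction remapped from [-1, 1] into [0, 1].
    return (red * 2.0 - 1.0, green * 2.0 - 1.0)

def displace_vertex(position, wind_rg, wind_weight, strength=0.3):
    """wind_weight: per-vertex weight from the painted vertex color;
    near 0 at the anchored base, near 1 at flexible leaf tips."""
    wind_x, wind_y = decode_wind(*wind_rg)
    x, y, z = position
    return (x + wind_x * strength * wind_weight,
            y + wind_y * strength * wind_weight,
            z)

# Leaf tip bends, base barely moves, for the same sampled wind texel.
print(displace_vertex((0.0, 0.0, 1.0), wind_rg=(0.9, 0.5), wind_weight=1.0))
print(displace_vertex((0.0, 0.0, 0.0), wind_rg=(0.9, 0.5), wind_weight=0.1))
```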

Player wind pushing out foliage


Wind buffer



Water Ripples

Similarly, next-gen consoles benefited from special "ripple particles". These ripple particles were invisible, horizontally aligned, and rendered to a dedicated buffer. They encode ripple frequency and displacement amplitude. The ripple buffer is sampled by the water shader each frame, so the water always ripples and distorts with the different impulses. Additionally, when characters run through the water, we spawn V-shaped ripple particles that orient along the character's velocity, creating a nice V-shaped water wake.
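To make the frequency/amplitude encoding concrete, here is a small illustrative sketch of displacing a water vertex with a decaying sine wave driven by those two values (an assumption-laden reconstruction of the idea, not the game's actual shader math):

```python
# The buffer stores a frequency and an amplitude per texel; the water surface
# can then be offset by a decaying sine wave around each impulse.
import math

def ripple_height(distance, time, frequency, amplitude, speed=2.0, falloff=1.5):
    """Height offset for a water vertex at `distance` from the impulse center."""
    phase = frequency * (distance - speed * time)
    return amplitude * math.sin(phase) * math.exp(-falloff * distance)

for d in (0.0, 0.5, 1.0, 2.0):
    print(round(ripple_height(d, time=0.4, frequency=8.0, amplitude=0.1), 4))
```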

Conclusion

As a technical artist, I had to coordinate this effort across multiple departments, implement all the data setup for each action, and maintain all the material-to-interaction mappings for each interaction. Ultimately, interactive impact functionality involved the collaboration of many departments:

• Programmers adding raycast functionality to the key actions
• Animators tagging action animations with frame triggers
• Environment artists tagging shaders with the appropriate material tags
• FX creating the interaction elements per material per action
• Sound designers setting up the appropriate impact sounds per material per action
• Graphics engineers supporting the different render features like decals, wind, water, particles, and parallax occlusion mapping
• Technical art setting up the material-to-interaction mappings

About Me

I work at Bungie as a Sr. Technical Artist on the FX and Spectacle teams. I graduated with a Computer Science bachelor's degree in Lebanon, and landed my first job as a Tools Programmer at the Jim Henson Company in Hollywood. Working closely with the content developers there, I became increasingly fascinated with 3D, and so I signed up at the Vancouver Film School. There, I experienced the full breadth and depth of the 3D pipeline, and I knew without a doubt that I wanted to become a Technical Artist. After graduating, I was hired by TimeGate as their first Technical Artist, and there I stayed, eventually becoming their Lead Tech Artist. It has been a remarkable journey, and I look forward to the fun problem solving to come!

Ali Mayyasi

www.psionicpixels.com


Brian Sum

www.briansum.com



Rasmus Berggreen

www.rasberg.blogspot.com



Pattern Drafting

Using ZBrush to Create Patterns for Marvelous Designer By: Andrei Cristea

If you gave up on Marvelous Designer after looking through your grandma's sewing pattern book trying to find a template for that cool sci-fi suit you wanted to make, then I might have a tip for you. Pattern drafting for complex garments is a craft in itself, and it can take a lifetime to master. Tailors use a handful of techniques for pattern making, and most of these are very hard or almost impossible to replicate in the digital medium. Finding and modifying a template for generic apparel is probably the most common approach, but when you are facing a complex design, this can become a very frustrating process. If you are under a tight deadline and you don't have an established workflow, things can get tricky. I encountered the same problem when I wanted to use Marvelous Designer to create a costume after an established concept. After many failed attempts at coming up with a template that fit my model, I took a step back and decided to experiment with a different approach. If you think about it for a second, UV unwrapping is probably the closest thing to the reverse process of tailor-fitting a piece of fabric over a form template. So, in theory, if we have a polygonal model that fits our body form and looks similar to our concept, then we should be able to cut it along the seams and unfold it. This should give us a base pattern that we can use to build our garment. Overall, the process is very simple, and I will try to illustrate it on the next page for you.


The Blockout

The following steps can be reproduced in any 3D modeling package. I personally prefer using ZBrush, since it has all the tools I need to generate a quick blockout model without worrying about topology, and a good UV unwrapping tool that supports large polycounts. I usually start by masking out the area of the garment from my body mesh and extracting it to a Dynamesh Subtool. When converting the model to Dynamesh I have to keep the tessellation high enough to allow marking and cutting out the seams. Afterwards, I start modeling the apparel. In the beginning I use the Move and Trim brushes to cut away the extremities of the mesh, and then I switch to the Clay, Move, Inflate, Pinch, and Smooth brushes to build out the main shapes. My main focus is on the general volume and the primary forms. At the same time I have to make sure that my model fits well on my reference body mesh and the design looks similar to my concept art. As a final touch I give a slight hint of the seam placement so I can easily trace the seams in the following stage. I don't spend any time figuring out the details, since I am not going to reuse this model again.


The Cut

After finishing the blockout, I have to prepare the model for unwrapping. The problem I found with UV Master is that its seam painting algorithm doesn't separate the mesh where it doesn't consider a cut necessary. For this reason I have to cut up the model manually to brute-force it. I start by painting the seams onto my model (1), and then I generate Polygroups from Polypaint (2). Afterwards I hide the Polygroups of the marked seams and any other parts that I don't need (e.g. the ends of the sleeves, or the collar if the geometry is capped), and delete the hidden geometry (3). At the end of this stage my model is single-sided, composed only of individual panels, and ready for UV mapping.

The Pattern

If everything was done right in the previous stages, then all that’s left to do is to unwrap the model using UV Master and save the Flattened UV layout. All the preparation work was done for this image. The initial flattened output from ZBrush can be hard to read, so I take an extra step in Photoshop to rearrange the pattern and make it easier to reference in Marvelous Designer. As long as I didn’t resize the individual islands I should be fine.


The Fitting

Now that I have my custom base pattern drafted, I import the layout image into Marvelous Designer and trace each section of the garment. Minor shape adjustments are needed to obtain a better fit, but from here on I have a solid foundation to work on and flesh out my design. I try not to lose myself in details and focus on the big areas. To be able to make quick adjustments without slowing down the simulation, I keep my patterns as simple as possible, with little or no layering. I always save a copy of this stage in case I over-complicate my design and have to start over with a new approach.

Once my base fitting is done, I start working towards my final design. In this particular example I split up the upper sleeves, layered the shoulder areas, and created a couple of interior seams to obtain a more interesting fold pattern. I only create details like piping, stitches, zippers, buttons, and buckles when I need them to affect the fabric simulation in one way or another. In most cases these are placeholders that I rebuild in the final ZBrush pass.


The Export

Before jumping back to ZBrush, I like to run a few simulations with different fabric settings, layering setups, and fittings. By doing so, once I am in ZBrush I have the option to mix between different areas of the simulations to achieve something that is more eye-pleasing. This is done by storing projections in Morph Targets and blending between them. Below is an example of two raw exports that were used for the final model.
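The blending step can be pictured as a simple per-vertex interpolation between two simulation exports that share the same topology; here is a tiny illustrative sketch (my own example, not part of the original workflow):

```python
# Blend two simulation exports per vertex, similar in spirit to storing one
# result in a Morph Target and dialing between them.
# (Assumes both exports share the same vertex order; numbers are made up.)

def blend_vertices(sim_a, sim_b, weight):
    """weight = 0 keeps sim_a, weight = 1 keeps sim_b; values in between mix the folds."""
    return [tuple(a + weight * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(sim_a, sim_b)]

sim_a = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.0)]
sim_b = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.3)]
print(blend_vertices(sim_a, sim_b, weight=0.5))
```

In ZBrush the blend weight is typically painted locally (for example with the Morph brush after storing a Morph Target) rather than applied as one global value, which is what lets you pick the best folds from each simulation per area.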

The Retopology

For the retopology stage I prefer to have the imported MD model unwelded and single sided. First, I clone the model and separate each area group into its own Subtool. Then I use ZRemesher at fairly low settings (1) and Panel Loops to add thickness to my geometry (2). In the end I Subdivide the model a few times (3) and Project the details back onto my new topology (4).

The Detailing

Now onto the most enjoyable part of the process. This is where most of the love goes and where you can add a personal touch to the model. In this particular case I tweaked the overall proportions in order to bring the rhythm of the shapes closer to what I originally intended. Then I remodeled the accessories and the stiffer parts of the garment. Once I had everything in place, I did a secondary modeling pass to achieve a better distribution of the secondary forms, and finally, I added a layer of micro details consisting of memory folds, scratches, wear and damage.


Conclusion

As you can see, the workflow is very simple and could have been summarized in one phrase, but I wanted to give you a glimpse of the entire process so you can foresee what to expect when using Marvelous Designer, and how much work is put into every stage. I think that a lot of the frustration comes from expecting a one-click solution, but like anything else, the quality of the final result is relative to the amount of time put into it and how carefully you approach every step of the process. After all, it's just another tool to add to your arsenal and you are the one who decides when it makes sense to use it or not. On this particular character I chose to model the rest of the clothing directly inside ZBrush since I considered it more time efficient.

About Me

My name is Andrei Cristea and I am currently the Lead Character Artist at CCP Games in Iceland. Ever since I can remember I've been passionate about art, video games, and technology (especially in areas that touch closely on science fiction, dystopian worlds, posthumanism and cyberpunk). I have 16 years of experience in CG, and throughout my career I've worked on a variety of projects including video games, cinematics, commercials, advertising, publishing, TV, animation, architectural visualization, product design and concepting.

Andrei Cristea

www.undoz.com




Victor Hugo Queiroz

www.artstation.com/artist/vitorugo



Caroline Gariba

www.artstation.com/artist/carolinegariba



Mountain Lodge

Creating the Mountain Lodge By: Joakim Stigsson

When I was a kid I used to go up to northern Sweden with my family and ski for a week or two. We used to stay in one of those wooden lodges and I remember thinking there was something special about it. I always felt at home in this warm and cosy house and I wanted to stay there forever. I said to myself that one day I would build my own dream lodge. 15 years later I still don't have enough money to build a lodge like this, but with my experience in 3D art I decided that I could at least build a virtual one and make it into my future dream house. In this breakdown I will go through how I achieved the final result in my project "Mountain Lodge", which was created using the latest version of CryENGINE. I'm not going to go into too much detail on the content creation since it's a pretty large project that includes a lot of different workflows, tools and techniques. This breakdown is mainly going to be an overview of how I was thinking about planning, structure, design, lighting, and final execution. I hope that this article can inspire and give you some ideas when you start working on a new project.

Getting Started

Since I knew I wanted to create a realistic home, the first thing I did was gather as much relevant reference as possible on mood, lighting, structure and assets. There is always plenty of good reference material and I think it's very important to really take some time trying to find something you like. Houzz.com is a really good website for interior design and has a lot of good references of buildings and assets. During this stage, my general ideas of what I wanted to create became much more defined. It helped me figure out which objects I needed to create to achieve the final result. Once I was done looking for reference I decided I was going to create an interior environment with three main rooms and an exterior with mountains in the background. I've mainly worked on damaged/dirty environments before, so I wanted this one to be clean and tidy, like one of those photographs you see in an IKEA catalogue. I decided to go for a rustic contemporary style. Rustic interiors often have a good variation of materials, which I thought would look great with the new PBR in CryENGINE. I wanted the lodge to be based somewhere in Montana or Canada, where you can find amazing scenery with big mountains next to a blueish clear lake.


Planning and Blockout

Since I was going to build this environment in my spare time I didn't really pick a time frame for when it needed to be finished. I wanted the quality to be as good as possible so I didn't want to rush things just to get it done. I also knew that I wanted to try new workflows and tools to get better, which makes it even harder to know exactly how long things are going to take. In this case I was planning more around what I was going to do next, instead of how long it was going to take. I always find it much easier to get a rough blockout into CryENGINE as soon as possible. Since the engine lets you drop into the game really quickly, it's much easier to run around and get a sense of scale and proportion in game instead of looking at the structure in 3ds Max. I tried to keep the whitebox as simple as possible but still get the main features in. This is where I play around with the layout to get something that I think will look believable and could work in reality. This is also the stage where I try to get the first lighting pass done, which helps when starting to place assets and tweak materials. Once I had something that I liked I started with the first prop, just to get something in there at the level of quality I wanted. As you can see, there isn't much difference between the first rough blockout and the final environment.


Structure

When creating a complex and big project like this with many assets and materials, I think it's very important that you have a nice folder structure and keep everything nice and tidy. I always try to name and structure my project's files as if someone else could pick it up at any time and continue working. From my professional experience this is something that happens all the time. By this I mean everything from naming folders on your computer, objects in your 3ds Max scene, layers in your Photoshop file, etc. It may take some time to set everything up at first but it will definitely speed up your workflow when you don't need to spend time trying to find one of your 26 overlay layers in Photoshop.

Design and Create Assets for Games

Creating an environment from scratch also means that you have to pick which assets you want to have in the scene, turning yourself into an interior designer. I put a lot of effort into choosing which kind of style and design I wanted for each object by looking at references and blocking it out first. Even though I was creating a realistic environment with objects that could exist in reality, I still had to make some design decisions so they would fit in the scene. If you just create an object exactly how it looks in reality you can bump into some issues, since the way you see an environment in a game engine is not the same way you see reality. For example, sometimes you might have to scale up smaller details, like bolts, studs, and seams, to be a little bit bigger than normal. If not, they would simply disappear or create bad aliasing once the object is rendered in the engine. You might also have to compromise on your design if you want to stay modular, optimize your UV layout or mirror your object to create more variations. All the assets were created by first making a high-poly mesh and then baking all the information down to a low-poly version that was used in CryENGINE. Some of the assets have a few more triangles than what's normally necessary for a game environment, just because of the detailed close-up shots.


What I think is great about the rustic style is that it uses a lot of different materials and mixes them together: everything from brass, copper, wood and leather to stone. I thought this would be a great combination for showing off the PBR technique inside CryENGINE and also create a nice contrast within each object. Since the scene contains a lot of props it can easily feel noisy and chaotic. That's something I had in mind when I created the design for each asset. I tried to make them as clean as possible to avoid an overall noisiness. Even though there are a lot of unique assets in the scene, I tried to be as smart as possible by reusing objects, textures and materials. I always had in mind that this would be an environment that could be used in a game, so I tried to optimize each asset as much as possible.


Lighting and Mood

I knew pretty early that I wanted to create two different kinds of lighting for my environment to show off the materials in different situations, and also to see how much difference the lighting would make to the mood and atmosphere. CryENGINE's lighting tools are pretty powerful if you quickly want to create different lighting conditions for a scene. Since my scene contains both an interior and an exterior I had to set up my lighting to work for both. First I created the environment during daytime with a pretty low and strong sun that would light up the interior. As a complement I also had the lamps on during daytime. This is something you often see in those presentation images of a house, which was the look I was going for.

The light setup itself is pretty simple. For the exterior I'm using an environment probe with a big radius to cover the whole space. For the interior I used a probe for each room/area. In CryENGINE you can set the priority of the probes to get them to blend between each other the way you want. I didn't want to fake the lighting by adding extra fill lights. Instead I focused more on placing light sources to naturally light up the whole interior. Each lamp in the scene got its own light entity, which is tweaked with radius, bulb size and intensity to give a believable result. I am using a VisArea for my interior with portals around the windows to stop the interior probes from bleeding out onto the exterior. This also helps with performance, since only what's seen inside the portals is drawn.

For the night time I used the moon as the outside light source and created a cosier interior with candles and a fireplace. I also had to switch the environment probes for night time to get the right light and reflection. I managed to set up a schematic that disabled the daytime probes and enabled the night time probes as soon as I switched to the night setup. This definitely saved me some time as I was going back and forth tweaking the different lighting.

Materials & Texturing

I personally think material definition is one of the most important things for an asset to feel realistic. You can create the most awesome, realistic-looking high poly and a perfectly baked low poly, but if you don't put some effort into the texture and material, the object is never going to feel real and believable. After I was happy with the high poly of an object, I always tried to render it with materials in KeyShot. KeyShot is an awesome tool if you quickly want to throw some materials on your object and render it. I exported my high poly using a KeyShot plugin for 3ds Max. Keeping your objects separate will make it easier to assign materials in KeyShot. I always find it much easier to get a sense of an object's form and design if there are some materials on it. KeyShot has a pretty large library of materials that you can tweak very easily.


High Poly Chair Wires

High Poly Chair in KeyShot

When I was happy with my result I exported the asset from KeyShot back to 3ds Max. All the materials I assigned are imported into 3ds Max along with it. I created my low poly and rendered the maps with a cage in 3ds Max. I then baked a color map from the assigned materials together with the normal map, ambient occlusion and heightmap. By baking a color map I could make sure the colors are the same as the ones I assigned in KeyShot.

Low Poly Chair Flat Color

Chair Texture with Wires


I started texturing my objects with the rendered color map as a base, also using it as a mask for the different materials. Then I started adding details and photo sources in both the albedo and smoothness map. With PBR it's so much easier to achieve a realistic result since there are already predefined values for different materials; it's a matter of tweaking the smoothness map to get the right feeling for a surface. A good tip is to create swatches in Photoshop for the different materials. I tried to keep the diffuse as simple as possible and add more details in the gloss texture for material definition, and I used this method on a large number of my assets.

Chair Texture Color

HighPoly in KeyShot

LowPoly in CryEngine

I was keen on doing a high poly for all my objects and textures. I know there are plenty of tools to generate a normal or heightmap from a diffuse, but since this project was a way for me to develop myself I wanted to go with a more handcrafted workflow.


When I was creating my tileable textures I tried to tweak them in the engine using a sphere with the final lighting. Having all the materials next to each other made it easier to compare them and to make sure they all fit together. I also added a sphere with full gloss and spec to see how my cube map looks in reflection. I was using the standard values for PBR-materials when creating the textures. All textures have an Albedo and Normal map with gloss in the alpha channel. Since there is often just one specular value for a tileable texture, I could instead set this value in CryENGINE and save some texture memory and draw calls.
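For the channel packing itself (a greyscale gloss map stored in an alpha channel), a small script can do the merge outside Photoshop. Below is a minimal Pillow sketch; the file names are assumptions, and you would pack the gloss into whichever map your engine setup expects.

# A minimal Pillow sketch of the channel packing described above; file names are
# assumptions, and you would pack the gloss into whichever map your setup expects.
from PIL import Image

base = Image.open("wood_planks_normal.png").convert("RGB")
gloss = Image.open("wood_planks_gloss.png").convert("L")   # greyscale gloss map

# The gloss map has to match the base texture's resolution to be used as alpha.
if gloss.size != base.size:
    gloss = gloss.resize(base.size, Image.LANCZOS)

packed = base.copy()
packed.putalpha(gloss)       # RGB = original map, A = gloss
packed.save("wood_planks_normal_gloss.png")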


For some objects in my scene I used a tessellation/displacement map to make them look more organic and realistic. One example is the fireplace stones that are used in some areas of the interior. Instead of creating a mesh with a lot of triangles to form the shape of the fireplace, I used the baked heightmap from my stone texture to drive the tessellation. The base mesh itself is pretty simple. I added some extra splits in my mesh, trying to keep the faces square, just so the tessellation would be smooth and even. It's also important that there are no splits in the UVs, as those will create artifacts in the mesh where the splits are. I also added extra support loops on the edges to get more resolution around the corners. In CryENGINE's material editor you have options where you can tweak the tessellation. Using tessellation is of course more expensive than just using a normal map, and it only works on high settings, but as you can see the final result looked much better than just using a normal map. I also used the same method for some exterior trees and rocks.


Landscape and Foliage

To create the landscape terrain I used World Machine. I first sculpted the basic shapes in CryENGINE and then exported the heightmap into World Machine. I simulated some erosion to get a more organic look that is almost impossible to create manually in the engine. I did some final tweaking in ZBrush before importing the terrain back into CryENGINE again. I also exported a color map from World Machine that I used as a base layer in CryENGINE when painting out the terrain materials.
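The heightmap round trip can also be handled with a few lines of code. Below is a minimal sketch, assuming a square 16-bit RAW heightmap (a format both World Machine and CryENGINE can exchange) and placeholder values, that remaps the eroded result back into the original height range so the terrain still lines up with the placed assets afterwards.

# A minimal sketch, assuming a square 16-bit RAW heightmap and placeholder
# values: remap the eroded World Machine result back into the original height
# range so the terrain still lines up with the placed assets after the round trip.
import numpy as np

SIZE = 2048  # assumed terrain resolution
height = np.fromfile("terrain_eroded.raw", dtype=np.uint16).reshape(SIZE, SIZE)

ORIGINAL_MIN, ORIGINAL_MAX = 4000, 52000   # placeholder range from the blockout terrain
h = height.astype(np.float64)
h = (h - h.min()) / (h.max() - h.min())    # normalize to 0..1
h = h * (ORIGINAL_MAX - ORIGINAL_MIN) + ORIGINAL_MIN
h.astype(np.uint16).tofile("terrain_final.raw")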

When creating the vegetation I wanted to see how close I could get to a realistic look by modelling a high poly manually, instead of just using a photo texture with an alpha. By doing it this way I was also getting a better normal map, which plays a big part for vegetation. You want to make sure you have some angle variation in your normal map to get the vegetation to look more realistic in game. For the fir branches I started by creating the twigs with the Branches tool in 3ds Max. The Branches tool lets you create different shapes really quickly using a box as a start. I made a few variations and applied a texture. I also created three needles with different colors and gradients.


To paint the needles on the twigs I used the Object Paint tool in 3ds Max. I set the needles to be painted randomly on the selected twig. In the Object Paint tool you can control size, ramp, rotation, distance, etc. to be able to achieve something you like.

I did a few smaller twigs for variation. I then created a main branch and did the same procedure as with the needles, but this time I painted the twigs onto the main branch. When I had all the twigs roughly in place I started to bend and rotate them to get some variation.
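To make the scattering idea concrete, here is a generic sketch (not the 3ds Max Object Paint tool itself) that generates randomized placement values for needle instances along a twig; all ranges are illustrative placeholders.

# A generic sketch (not the 3ds Max Object Paint tool itself): generate randomized
# placement values for needle instances along a twig. All ranges are illustrative.
import random

def scatter_needles(count=80, twig_length=12.0,
                    scale_range=(0.8, 1.2),
                    spin_range=(-180.0, 180.0),
                    tilt_range=(25.0, 65.0)):
    """Return (distance_along_twig, spin, tilt, scale) for each needle instance."""
    placements = []
    for _ in range(count):
        placements.append((
            random.uniform(0.0, twig_length),   # distance along the twig
            random.uniform(*spin_range),        # spin around the twig axis
            random.uniform(*tilt_range),        # tilt away from the twig
            random.uniform(*scale_range),       # per-needle size variation
        ))
    return placements

for p in scatter_needles(count=5):
    print(p)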


I made some more branches and rendered everything down to a plane. Since I had everything set up in 3ds Max I could tweak the colors and materials to render out a gloss map and a translucency map.

With the low poly plane, I separated the branches and then used the Twist and Bend modifiers. I made a few different sizes of the tree, again using the Object Paint tool to paint the branches onto the trunk. I used this method of creating foliage for pretty much all my vegetation in the scene. It might be a little bit time consuming to make high polys for your vegetation, but the final result often gets better with a properly baked normal map. I looked at references for the kinds of vegetation that normally grow in this kind of area and created a library with grass, flowers, bushes and shrubs to get enough variation to make the place look believable. I placed a few man-made assets in the outdoor area to create a few interesting points and also add some contrast to the vista. I wanted the landscape to feel massive and to be a place that the viewer wants to explore. By adding a road that disappears in the distance the viewer is drawn into the scene, wondering what's behind the trees. I wanted to create some depth in my background by having the road in the foreground, the water in the middle and the mountains in the background. I also added some distance fog using the Time of Day settings in CryENGINE to make the vista feel more believable and add some more depth.


Finalising & Presentation

If you have been working on a project for an endless amount of time and put a lot of effort into it, I think it's important to present it in a nice way. It's always hard trying to wrap up your project. Is this enough? Can I do more? What will people think? Many times I see people who have done awesome stuff but rushed the final presentation just because they are tired of it or want to get it out as soon as possible. I think if you spend some extra time on the presentation, people will see that you have put a lot of effort into it and hopefully it's going to get more attention. I wanted to show my environment as a nice package, with images from CryENGINE and a video capture to present the environment as a whole. One of my main goals with the project was to stay consistent in quality, everything from modelling, lighting and composition to presentation. I think no matter what your skill level is, if you try to push everything to the same quality the final result will be much more solid. I also think that they all depend on each other; if you only focus on one thing, the other stuff will drag the overall impression down.

Conclusion

I must say I have learned plenty of things during this project, both when it comes to workflow and what my strengths and weaknesses are when working on a big environment like this. When you work on a project for a longer time I think it's important that you have a lot of patience and stay dedicated to be able to finish it. I worked on the project on and off in my spare time during the last year, but it wasn't until the last month that I really pushed to get it finished. I must admit there have been a couple of times when I've wanted to throw the project and the computer out of the window just because I was so sick and tired of it. I think a good approach is to break your project up into smaller pieces that will later come together and become the final environment. If you always look at all the stuff that is left to do, it's easy to get scared and lose interest. Finally, I must say I'm really proud of what I managed to achieve. There was a lot of hard work, and the challenges have helped me become a better artist. I also want to take the opportunity to say thanks to all the people that gave me feedback during the process. Now I am really looking forward to starting something new. If you want to know more details about the project and my workflow, don't hesitate to contact me. I'll happily try to answer all your questions and share my knowledge.


About Me

I grew up in a small village in the deep forests of Sweden. I'm currently working at DICE in Stockholm as an environment artist. I've been in the industry for about 3 years now and I am enjoying every single day. I remember when I was younger I got 3 big boxes full of Lego. Since I didn't get any manuals with it, I had to come up with different layouts and designs myself. I remember building small islands with houses and roads, occupying the whole living room floor. When I got older I started to develop an interest in photography and video editing. In high school we had an introductory course in 3D, and since I've always had an interest in video games I thought it was really awesome to be able to create your own ideas on the computer. After getting my degree at university, I started working at a smaller company as an artist doing mobile games. Then I got an offer to come to Crytek in the UK and I said yes without hesitation. I always try to expand my knowledge in the field by trying different workflows, learning from people and pushing myself to do my best. My goal is to create inspiring environments that people want to explore and remember for a long time. Hopefully I will meet some of you in the future who share the same passion for making video games.

Joakim Stigsson

www.joakimstigsson.se




Joshua Wu

www.artstation.com/artist/joshuawu



Liz Kirby

www.kirbyhasaportfolio.com



Hard Surface

Fusion 360 for Concept Artists By: Kirill Chepizhko

"Every new technique, every trick is critical to getting better." -Shaddy Safadi

This article is primarily intended for those concept artists who are looking for new ways to create their designs. If you are designing lots of hard surface props and looking to expand your portfolio with 3D work, or you are looking to improve your workflow by adding 3D elements to your art, keep reading. If you are a 3D modeler who is looking to speed up or even completely change your hard surface modeling pipeline, this article is going to be useful for you as well. Throughout the text you will find randomly placed protips that can help improve your workflow if you are already familiar with Fusion. This way you can quickly extract the most valuable technical information from the article; the rest of it mostly focuses on theory.

Protip: Assign materials to bodies to help with the design process. Assigning different materials to bodies will help you change the look of the model, which in turn will help you improve the visual complexity of the design. The default visual mode that has shadows and visible edges is helpful but extremely deceptive and will make you think you have a really complex model when in reality you don't. Get used to changing visual modes and assigning different materials to train your eye to see past the lines.

What can I Model in Fusion?

I would like to answer this question right away because I myself ask it a lot when I hear about any new software. What does it do? How can it be useful for me? So the answer is: in Fusion you can model pretty much anything, including but not limited to hard surface. As you already know, the term "hard surface" usually describes objects that are constructed or manufactured by man, often related to industrial design, such as robots, architecture, tools, weapons, vehicles, various mechanisms and props, etc. Hard surface objects can also have organic shapes, for example cars, modern medical equipment, designer furniture and even procedural architecture. Technically the line between organic and hard surface modeling is blurry and merely a matter of opinion. So hopefully this should give you an idea of whether or not you should give this software a shot.

Nurbs are for Nerds

Supposing you think the things you design can be modeled in Fusion and you have decided to keep reading: in order to make an informed decision on whether to use Fusion 360 in your pipeline or not, I will quickly remind you what NURBS modeling is, as well as the logic behind using Fusion as the primary modeling software for designing hard surfaces in concept art.

Protip: Exporting to .obj directly from Fusion. In model mode, open the Create menu and find the Create Base Feature option at the bottom. When inside that mode go to Modify and select the Mesh command, choosing BRep to Mesh. This will open a new context window where you will be able to preview the triangulated mesh and play with the refinement settings like surface deviation and normal deviation. When done hit OK, find the mesh in the browser tree and right-click to export it in .obj format. This works great for exporting separate bodies.

NURBS (non-uniform rational basis spline) is a mathematical model used for constructing surfaces and models which allows for high flexibility and fine control over the shapes. The approach is often referred to as parametric modeling (controlled with parameters) and is commonly used in CAD (computer-aided design), CAM (computer-aided manufacturing) and CAE (computer-aided engineering) - pretty much anything that requires precision. In other words, nurbs are for nerds.
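As a scripted alternative to the manual BRep-to-Mesh route, Fusion 360's Python API can export a body straight to STL (which most DCC packages read, or which can then be converted to .obj). The sketch below is meant to be run as a Fusion 360 script; the output path is an assumption.

# A hedged sketch (assumed output path), meant to be run as a Fusion 360 script:
# export the first body of the active design straight to STL via the API.
import adsk.core
import adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)

    export_mgr = design.exportManager
    body = design.rootComponent.bRepBodies.item(0)   # first body in the design

    stl_options = export_mgr.createSTLExportOptions(body, '/tmp/exported_body.stl')
    stl_options.meshRefinement = adsk.fusion.MeshRefinementSettings.MeshRefinementHigh
    export_mgr.execute(stl_options)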

You can see that most of the shapes were built using T-Splines and sketch lines. I find both most useful in model mode when you are trying to get complex shapes. How is this really useful for designing concept art, which belongs to a completely different industry? The latest technological advancements in gaming and the movie industry, mainly the new hardware capabilities, have been pushing the quality and fidelity of designs, demanding highly detailed concept art that satisfies the ever-hungry eye of the consumer. Our monitor resolutions also keep increasing, now able to display a seemingly infinite amount of detail.


This is when the regular approach to creating concept models began to deteriorate and artists started looking for ways to improve their workflow, making it more efficient and adaptive. Fusion 360 is a great solution to the problem. It offers a fast and intuitive way of combining organic modeling with the precision and incredible amount of detail of solid modeling, exactly because it uses NURBS. Models that come out of Fusion can be ultra detailed and unbelievably complex while offering small file sizes and extremely fast rendering times (especially in KeyShot). Not only that, but parts done in Fusion 360 are perfect for creating IMM brushes for ZBrush.

Protip: Hollow out your bodies for 3D printing. Select the target body and find the Shell command in the Modify menu. It will remove material from the interior of the body, creating a cavity inside. You can specify the desired wall thickness as well.

Not every CG studio has embraced the fact that using 3D software for concept art is a truly revolutionary (or maybe evolutionary?) step of fusing 2D and 3D that leads to a more efficient PLM (product lifecycle management). In other industries CAD and CAM have been tightly integrated into every step of a product's production, while for most games and movies the concept stages are still done using exclusively Photoshop. Indeed, it used to be faster to create a design and its iterations in 2D programs. But when the industry is demanding higher fidelity, non-draft quality concept art and we have software like Fusion 360 to produce said premium quality hard surface concepts within a similar timeframe, I think we need to use it as much as we can. Because modeling hard surfaces for concept art in gaming and movies mainly refers to industrial design and man-made objects, it is only natural to use the same modeling tools that are used in real life for engineering and manufacturing those objects. This can be compared to how character artists are using Marvelous Designer (a software package for designing and simulating clothes), because it offers a real-world approach to the design process of clothing, from sketches to using seams and actual material properties.

No Universal Solution

Of course, the CG industry has yet to establish a universal approach to designing hard surface parts that will perfectly fit all workflows. A number of software packages such as Autodesk 3ds Max, Modo and even ZBrush offer great solutions for this, and so does Autodesk Fusion 360, but most 3D artists will prefer polygon modeling to NURBS modeling in Fusion as it best fits the rules dictated by the animation, gaming and movie industries. However, no such limitations exist in the world of concept art. I have chosen Fusion 360 to help me bring my concept designs to life because in my workflow I couldn't care less about polycounts or UVs when texturing. All I care about is an engaging design that looks fairly real and hopefully interesting and charismatic.

Engineering Design with Concept Art

I believe in functional designs - designs that make you think that a prop has been engineered and manufactured. You will be able to find other examples of workflows in Fusion in this issue, so I will focus on the theory behind making objects interesting and, more importantly, believable.

Protip: History mode vs. Direct mode. Turning off history mode will prevent Fusion from saving your every step, but it will open up some new tools for you, like editing faces and much more. To turn off history mode click the cog icon on the bottom right and select "Do not capture design history".

First, let's take a look at an impression of an engine block I have modeled to illustrate how the metal fabrication process can be imitated in 3D to give your models an extra layer of realism. (see right image)


Fusion has a powerful set of tools aimed at tasks ranging from sketching to filleting, chamfering and performing boolean operations. Pretty much anything that can be done during the process of fabrication in real life can be done in Fusion, and even taken beyond that. Things like cutting, chiseling, welding and machining are not a problem. For example, creating a thread on a cylindrical surface is only a matter of a couple of clicks.

Creating a thread: in model mode, select the Thread tool from the Create menu, choose the cylindrical surface you want to apply it to and set the desired parameters of the thread.

A lot of the time, simple things like filleting or chamfering your edges are what make the difference in the final look of your prop when you are rendering it (see below). The cylinder on the left looks like it is not really connected to the block, compared to the cylinder on the right that looks like it has been welded onto it. Things like that add to the visual complexity of the design as well as sell the manufactured look.
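For completeness, the same kind of fillet pass can also be scripted. The following is a hedged sketch using Fusion 360's Python API, not the author's workflow: the radius is a placeholder, and filleting every edge of the first body is purely for illustration, since in practice you would filter the edges you actually want to soften.

# A hedged sketch, not the author's workflow: a fillet pass done through
# Fusion 360's Python API. The radius is a placeholder, and filleting every
# edge of the first body is purely for illustration.
import adsk.core
import adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)
    comp = design.rootComponent
    body = comp.bRepBodies.item(0)

    # Collect the edges to round off - here simply all edges of the first body.
    edges = adsk.core.ObjectCollection.create()
    for i in range(body.edges.count):
        edges.add(body.edges.item(i))

    fillets = comp.features.filletFeatures
    fillet_input = fillets.createInput()
    radius = adsk.core.ValueInput.createByString('1.5 mm')       # placeholder radius
    fillet_input.addConstantRadiusEdgeSet(edges, radius, True)   # True = tangent chain
    fillets.add(fillet_input)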


Of course, the trick to designing cool hard surface props is not only knowing how to use fillets and chamfers or any other program tools. Great hard surface design starts way before you open a 3D program. When I start a new design I spend at least a day looking for references to help me put the idea together, and a lot of the time that includes looking for cutaways of things or simply reading about the subject in order to understand how it works.

Protip: The best way to transfer your model from Fusion to KeyShot is to export it in .step or .iges format. After that, when importing it into KeyShot, make sure to check the accurate tessellation option and the import NURBS data option. The most important thing is to set the tessellation quality slider as high as possible. The highest value is 1 and it is what I recommend, but depending on the model you may have to adjust it a little. Sometimes high values of tessellation can break the model.

Once I have a general idea I will start breaking the design down into components in my head, thinking of how they fit together and how they work with each other. For example, when you look at rifles you rarely think about what is inside, but if you do, you study how the mechanism works and it can give you some new ideas on how to alter it to fit into your design or to make something new. Concept artists are not engineers of course, but we can get as close to real engineering as possible. This in turn will help improve the designs and make them believable. (see figure below)

Knowledge of how things work in real life usually inspires the way I design in Fusion. This influences shapes, joints, necessary cuts and additions.


This shape of the lower part allows for a 90 degree rotation without limiting the movement or parts touching each other.

Protip: Converting the project into .obj for import into ZBrush. Save your project as .stl, selecting the desired surface deviation and refinement level (higher levels will produce more triangles per face). Reopen the .stl file in MeshLab (free) or any similar software, preview it to make sure it looks the way you want, then save it as .obj. This way you won't have to save each body of the project separately.
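If you prefer to skip the MeshLab step, the conversion can also be scripted. Below is a minimal sketch using the third-party trimesh library (pip install trimesh), with assumed file locations, that batch-converts exported .stl files to .obj.

# A minimal sketch with assumed file locations: batch-convert exported .stl
# files to .obj with the third-party trimesh library (pip install trimesh).
import glob
import trimesh

for stl_path in glob.glob("exports/*.stl"):
    mesh = trimesh.load(stl_path)
    obj_path = stl_path[:-4] + ".obj"
    mesh.export(obj_path)      # trimesh picks the output format from the extension
    print(stl_path, "->", obj_path)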

Conclusion

Every efficient workflow, as I see it, is a matter of your combination of skills, habits and the time spent learning how to apply them. I initially started working in 2D but soon realised the potential of 3D applications and the way they can help speed up the way I work. Fusion is great for hard surface work, but like any other software it has its drawbacks and limits, and some people will find it unnecessary for concept art. At the end of the day it doesn't matter what software you are using as long as your concepts are inventive and compelling. This article is my invitation to all concept artists to explore more, as there is always space for improvement. Be as creative in your workflows as you are with your designs. Sources: Wikipedia, Digital Modeling by William Vaughan, George Cox (Cox Review, 2005)

About Me

I am a concept artist who has been working in the entertainment and gaming industry since 2010, and now I am shifting my attention towards industrial design and product/prop design to learn more about manufacturing and improve my concepts. I am fascinated by designing real products and 3D printing as well. Fact: I am actually not a human but a really smart Russian bear dressed in human skin who has taught himself how to use a computer.

Kirill Chepizhko

www.kirillmadethis.com




Andrei Pervukhin

www.pervandr.deviantart.com



Edon Guraziu

www.edonguraziu.blogspot.com



Regular to Lead

About leading the environment department for Risen 3 by: Sascha Henrichs

Screenshot of an early prototype of Risen 3

Being an artist means working at the frontlines of game development, and most of the time your art is crucial to the reception of the game. Therefore you constantly have to evolve and stay up to date. No problem so far, you've got your ducks in a row. But what happens when you suddenly have to switch from your daily content creation routine to a job that also means leading and managing? Filling a lead position right after having had a full-time art job can be overwhelming and often hard to handle. Let me give you an idea, based on my personal experience, of what to expect and how to react in certain situations.

A Brief Summary

I have been into environment art since 1998 and work at Piranha Bytes, a small company in Germany. The games we forge here are single-player RPGs in a medieval fantasy setting: Gothic 1, 2 and 3 and Risen 1, 2 and 3, plus add-ons. I worked on all of them and saw the whole genre of the 3D open world RPG come to life in this period of time. From the very beginning I was an environment artist and did all kinds of level work: from object modeling and texturing to object placing/staging, lighting, event placing, inventing/concepting locations, modeling landscapes (yes, in the early years we modeled every bloody landscape by hand; heightmaps were only introduced in our company with Risen 2), creating SpeedTrees and whatnot. Since our company was and is relatively small, we also had a small environment team. Therefore the tasks were always very diverse and everyone had to carry a lot of responsibility.


After some successful years in development - around 2001 or 2002 - I made my first attempt at taking over the lead position of our art department, and failed. So why did I fail? To be honest, I simply didn't know how to organize myself, let alone my team. I didn't create a schedule, I talked too little and I buried my head in my own artist tasks like I always had before. I didn't ask others for help or get outside information on what I could do to be more successful as a leader. I was also not self-confident enough to report to the management at eye level. In the end I still saw myself as a regular artist. I did my artist work, which I was good at, gave some short-term goals to others, but had no eye for the big picture. I had no idea what to keep in mind as a lead. Fortunately though, I was able to see that something was wrong, and although I still didn't know how to improve the situation, I made the decision early enough to quit my position as lead. This position was not mine. Not yet.

So how did it happen that I went for department lead again? When Risen 2 (on which I wasn't the lead) was in its last days of content development in autumn 2011, we ran into some serious trouble with one of our game levels - the Ghost Island. We recognized a major flaw in the planning: the level had simply been forgotten. And suddenly we had to finish a whole level from scratch in 10 working days, having nearly no art to reuse for it - a level that would normally have been over a month of work for 3-4 people. New textures, new architecture, some new props, and also one new cave. Everyone in the team was knee deep in some urgent last-minute shit and no one really had time for it. So in the end it was up to 3 of us environment artists to make this level, and we were pumped to accept the challenge. Since I was the senior of this small "tag team", it was mandatory for me to do the administrative work and insist on communication. This was the starting point of getting into leadership again. And this time it felt right. We made a plan for how we could solve this task in the given amount of time and I created a schedule.

Another shot of the prototype. The goal was to create an interesting lava landscape within our in-house engine.

We thought the key to success would be to simplify the art design to a minimum and maintain the idea of a ghostly location. We also decided to work more with post production, environmental and particle FX to save some time. So we just concentrated on the essential idea to get it done. The tight timeframe required daily communication and high-frequency updates. "Staying together" was crucial; no one was allowed to get lost in details. In the end we finished the task in time and I realized that it was a pleasure for me to work that way. The management took notice of how we performed, and shortly after this incident I was asked if it was OK for me to take the lead of our department. We then headed off and started prototyping for Risen 3 in this new structure. We had a few months to create prototypes for some level parts of Risen 3 with our new environment department. This time was particularly useful for us to get used to one another in this structure and to find good strategies for planning and executing our level tasks.


Defining the Parameters

The start of a project might be the best time to begin your new lead position, because you get plenty of opportunities to put things on the right track: discussing design decisions, the size of the project vs. manpower/budget, and the chronological approach - meaning when certain parts have to be done throughout the project with respect to other departments and outsourcing. E.g. you want to get your blockouts done as early as possible so the story department can place their characters to test dialogue, quests and pacing. And if you find that some engine features will land too far away in the schedule for your department to start on (like a heightmap module or an object placing feature), you can intervene and set things straight.

This is one of the tropical islands Risen 3 featured. Kila was inhabited by pirates along with natives.

Though the stress factor might not be that intense in the first few months, you may be busy getting comfortable with the project and your department. It's useful to take this time and get an eye for the big picture: how many resources do you have, and what are the individual skills of these artists? Being aware of your department will help you estimate workloads and identify risks. Also, leave your precious assets and textures to your colleagues and step back. Become aware of the kind of product you are going to ship and focus on relevant tasks. I can only speak for our open world game development, so what we did was cut the bigger project locations into sections and discuss their importance and priority. This affects when you work on each section. Then dissect the sections again and assign the subtasks to the artists. You know your people, and therefore you assign e.g. the architectural modeling to someone who likes it, or who needs to learn it, and the vegetation or whatever else to another artist.

Meeting, Communicating, Nursing

There are always things no one wants to do, like creating collision meshes or dummy meshes or fixing buggy stuff. You want to distribute those tasks equally across the department. Be fair and keep your cards on the table. Meetings are the way to go for assigning tasks - also the boring ones. When you are transparent about your schedule in an open meeting and willing to discuss problems a colleague might have with certain tasks, people will hardly ever complain afterwards about getting the annoying ones.


You want to create the best working environment. Besides simple task assignment and discussing project-related stuff, meetings are a perfect tool to build trust and respect from lead to artist and vice versa. You get the chance to display your understanding of the project, how you cope with it, and your appreciation of your colleagues' work. By addressing individual artists or small "tag teams" personally, you'll make them feel respected and recognized. Show that you report to the management on behalf of the whole department. Here and there you'll share some non-crucial insider information. A high level of trust is a great way to gain loyalty. Also make it clear that you will try to shield the department from any possible threat, distraction or sanction from the management or publisher. So, e.g. art-related feedback from outside will only be forwarded when you consider it relevant. You are in charge, and your ass will be on the line when there are delays because of endless "art directing" iterations. Concerning sanctions, you are also responsible for your department. E.g. if a co-worker constantly comes in too late to work, it is on your shoulders to get things in line again. Your boss should never bother your co-worker personally. In fact, your boss approaching your buddy personally should only be necessary when he has failed big time (and this should only happen after your own ass has been kicked first).

An old mining location on Calador, which was created on the fly.

In a meeting, your colleagues will sometimes have good arguments for deviating from your plan. It is on you to give in or discuss it further. However, always be aware that you have the last call, and when you use it, it is appreciated if you can explain why. You don't always have to make everyone happy by explaining the shit out of things, but it will be noticed when you make the last call only because you were not prepared enough, just egocentric, or weak. When you give in on a decision, it should only be for a better reason or for psychological reasons, not because you want to be the good guy and avoid problems. Your ability to change your mind based on new information, and to not abuse your status, will be recognized positively by your colleagues and will reinforce your position. So a high amount of credibility can only be obtained by being prepared and having a concrete plan. But it is not necessary to enforce your way; in fact you will also need to give certain things and choices away. There might be design choices you leave open to the artist. Many, many times I left the outcome of a complete task open when it was assigned to one of my artists. E.g. we had so much terrain to fill with natural objects like vegetation and stones that after some time into the project it was no problem to just let people go without further instructions. When you know your colleagues, you can easily estimate whether they have a weird taste in some areas and channel tasks accordingly.


A tropical storm on Kila. Unfortunately it was not featured in the final product.

So how did we handle our to-dos? Schedules are tight most of the time and you need your people to finish on time. You want your department to be the best performing in the company, so you need to be clear about your schedules. When you hold your meetings for the milestones, discuss every single to-do with your colleagues. Ask them how much time they need for every single task and write it down. You should be able to estimate whether their suggestions are realistic; if you are unsure, add some extra time. Be clear that these estimations are binding. It also lies in your hands to help them keep to their timetable. You will be required to ask about progress every now and then, and perhaps make some corrections in the schedule or the priorities when things get out of control. Whatever you do, keep yourself informed about what your department is working on.

Sometimes it will slip out of your hands and situations begin to stagnate. Whenever this happens, something is wrong. It might be missing art direction for the department on a particular thing, pervasive exhaustion, or missing guidance from the people you report to. Perhaps it is only you who is exhausted, and through your own paralysis the whole department is dragged down. Or you are just uninspired. Often this happens in the later days of a project. Whenever it happens, it helps a lot to change a thing or two. It is best to avoid something like this in the first place by throwing in some interesting bits every now and then. This could be an "education Friday" where you give a hands-on lecture to your colleagues, perhaps on a topic that has nothing to do with your project. Or go to a pub after work. Establish some kind of "department sub-culture", decorate your room with your colleagues, have some quality time.

We gave our airsoft collection some love with serious Christmas decoration stuff!


You can put together a package of to-dos that two colleagues can work on together simultaneously. These small teams, who work on one "sub-system" and can organize themselves, are usually highly productive and well motivated. You can also show your appreciation by allowing your colleagues to work on their own designs and ideas for the project, as long as these don't collide with the main schedule.

Egos and Criticism

Every artist regularly gets his ego kicked. And if you have a sensitive one like me, I can tell you that it will not toughen up easily, only gradually. You will be hurt and you have to get over it. You want the critique to be reasonable and friendly when articulated. This is exactly how your fellow artists in your department also want to be treated. You want your artists to be self-confident and motivated. Be fair and friendly with your critique. Also, before articulating any premature nonsense about your co-worker's art, think twice. Sometimes their ideas are not too different from yours. Don't give too much critique when it is not necessary. Ask yourself whether your colleague might be right in their perception when, in a particular case, you think a green sky is better than a red one. When you feel that the situation is more about who is right and who is not, and not about the sky color, just let it go and make a concession.

Fog Island was a DLC location.

Let the skies be red and evaluate the whole scene later, when you both have had a chance to think about it. Perhaps you will see that they were right, or the other way round. On the other hand, you have to talk straight when you see that things have gone utterly wrong. It is easy to identify situations where egos are colliding or where others just had a bad day in making artistic decisions. If you feel hurt and angry and your authority is being undermined, it might be ego stuff. If you just facepalm and have great arguments against the bad-looking asset, you are probably right.

Outsourcing

You will always have people who are better at scene dressing and some who are better at technical modeling; some are more sculptors and some are better at making textures. Keep this in mind and use these insights for scheduling and outsourcing preferences. If you have no dedicated hard surface modeler, you need to arrange more time in the schedule for that kind of work. It's as simple as that. Also try to assign matching tasks that satisfy your individual artists - not only to make them happy, but also to get better quality through motivated artists.


It might be a good idea to outsource highly detailed assets and other potentially time-consuming stuff. Let the gameplay-relevant stuff stay inside the company. Try to outsource big connected chunks of work that are easy to manage and that do not depend much on gameplay. E.g. you don't want your heightmap needs to be outsourced; it is simply not practical. You also don't want to outsource special kinds of topics like lava landscape stuff. This is highly bound to shader tech, which typically can only be done on-site.

The last location, "Skull Island", was a challenge. We decided to create unique assets for it, to create a special looking lava landscape.

Especially when outsourcing to bigger companies, plan to have enough time for detailed outsourcing management tasks. The descriptions of your assignments have to be 100% clear. Big outsourcing companies act differently from your local freelancer. Communicating with a company might need more time. You also might not get the quality you agreed on in the first place. Some companies give out top-notch test assets to get the contract and then let the intern do the rest of the assets. Then you have to intervene to get your ducks back in a row. This is all very time consuming. When working with Asian companies you might run into language and cultural barriers. Everything has to be explained in great detail in documents and concepts. There might be a different language of "art" and it might take some time to get them on track for your project.

Dealing with the Management

The best starting situation for your lead position is to have been a regular at the company for quite a bit of time. You will have had plenty of time to observe your company's internal problems, you will have learned about the deficiencies of the management and about difficult personalities, but you can also strengthen your impact and credibility through proactive and high-quality work. You will have a better view of these aspects when not involved in management tasks yourself. Management is a special social caste with special needs. Often they act reasonably and things work out perfectly, because this is what they do: make things work. However, sometimes the management will also act as if it were faultless and above any criticism. Here it is not usually helpful to insist stubbornly on your opinion. Do not try to change obviously bad decisions from higher management levels. Speak out, but do not start a fight; find a workaround. Often this will seriously hurt, but you are working for a higher cause and this will be respected in the long run.


The "Dark Tower" on Skull Island was essentially built out of three sculpted stone assets.

On the other hand, there are situations where you absolutely must not give in. This is when you negotiate the schedules for your department. Usually the meetings for the schedules are held very early in the project and then receive some updates throughout. However, take your time to write down an accurate plan of the things to do and lay down a realistic time plan for the whole project. You will always forget some aspects, but these can be fit in later in the project. Anyway, when the project lead wants a bigger project than your department is capable of delivering, it is in your hands to veto. No one will save you here but yourself. Make clear that your department has a certain amount of manpower and that this cannot be stretched to fantastic heights. Do not trade quality for a content monster; trade the high quantity for good project quality and a sane schedule. Either the project has to be cut down to a reasonable size or the manpower needs to be increased. When you give in too early, you might have a terrible time all the way through the project or you will fail completely, also damaging your credibility. Remember: graphics are a BIG selling point. If you deliver poor quality but managed to hold the impossible deadline, holding the impossible deadline won't be the thing you will be remembered for - the poor quality will. So hold a reasonable deadline with the best possible quality.


Concerning communication with the management, it might be helpful to establish some kind of monthly round-up of your department's to-dos, dones and vision, put them in a short email and send them around the company's mailing lists. This way the management and everybody else in the company is steadily informed about the current status of the environment department throughout the whole project. Also honestly mention delays along with things that went surprisingly well. I did this for our project and the feedback was exceptionally positive; it also led to a trust benefit coming from the project management.

Learn to Teach

There are people who like to teach and there are those who don't. Being delighted by teaching seems to be a natural perk for some people: either you chose it in your character screen when starting your life, or not. But even when you like to give advice and teach, there is plenty for you to learn too - not only in the field you are teaching but also concerning the teaching itself. Giving lectures can help you quite a bit. It might be the way you explain things, or the way you act. Do you speak too fast? Are you boring? Whatever it is, you can improve it by giving lectures alongside your regular job. If you are lucky enough to live near a town with a school for game development, do not hesitate to apply as a lecturer there. Most of the time the schools are glad to get teachers who come from the industry itself. Since I started lecturing in 2006 I have learned a ton about how I appear to people and how I act in front of them. Ask for feedback on your lectures and also talk to other teachers about the lectures and how to improve. It will also help you be a mentor to your colleagues at your company.

We are Family but not Friends

It is helpful to monitor your own role in the department. It can happen that you let your buddies down because you are too busy creating content yourself and forget about your lead position. People who have no lead will go astray; their productivity will stagnate and no one will be satisfied. Then it's time to leave your own assets alone and lead again. Feel for the different moods that come up, and think about how you could change the situation by changing your own behaviour.


To help hold everyone's attention, it is useful to drop a bomb from time to time in the form of a very cool asset or level design: an asset that is top-notch and that the others can compare themselves to. It is good to have some kind of rivalry going on about who can make the coolest art. It's also good to set an example of dedication. Spend some extra hours when suitable and motivate the others. Do not come in to work too late; if you do, the others will follow your example faster than you think. Make clear that you are more interested in the project than in pleasantries, so be more of a mentor than a friend. However, when you look for new artists, try to find "friends" for your buddies. I focus on soft skills when searching for artists, because one psycho can poison the whole department's atmosphere.

How to Save your Department's Ass

Last but not least, I'd like to mention how we stayed on schedule and what we did when things got tight. One option when you get yourself into trouble is to simplify an idea to its minimum. Often this is sufficient, and the simplified asset or location fits into the setting flawlessly. For example, cut the art style of the problem child down to simple shapes and almost no special details. Often it is enough to deliver a cliché with a realistic material rather than a fully art-directed, game-designed and plausible piece. In Risen 3, we had an island named "Fog Island" where we wanted many shipwrecks around the beaches. Unfortunately, those shipwrecks were not listed in any schedule. So I dropped the idea of different full-blown shipwrecks with cabins, doors and below-deck arrangements, modelled only a few wood pieces of the ships' framework and built several different skeletons with them in 3ds Max. Then I added some ropes as objects and some PhysX flags, and the effect was convincing; it looked really good and no one ever missed any "in-depth" gameplay or a special art direction for the ships. The other thing worth mentioning is not to lose yourself in endless revisions. Always try to achieve a quality that is very good, but come to an end when the effort is too high compared to the benefit. Some artists do not like to hear this, but in the heat of production you sometimes have to accept the next-best version of an asset. Many improvements will not stand out or add to a scene when pushed to the max. So yes, "do not trade quality for quantity", but when you are in trouble, this might be an option to work with.

Conclusion

With this last paragraph I should come to an end, even though I could write endlessly on this whole topic. Hopefully this long read was applicable to you and perhaps gave you some ideas. Feel free to contact me via email.

About Me

Sascha Henrichs is an environment artist, born in 1975 in Duisburg, Germany. Since 1998 he has worked at Piranha Bytes, contributing to all of the company's published productions. In 2007 he also started lecturing and now regularly gives lessons at different schools all over Germany.

Sascha Henrichs

www.saschahenrichs.blogspot.com


Sven Juhlin

www.daybreakcg.com



Seung Ho Henrik Holmberg

www.somniostudios.com



Sci-Fi Corridor

Concepting in 3DS Max and Vray by: Paul Pepera

Introduction

For this article, I wanted to create a series of three interior spaces and talk a bit about some design ideas for each one while using Max and Vray to concept them out. Even though I'll be using Vray to do pre-rendered images of the scenes, I'm hoping that the concepts I talk about can easily translate to any other medium, such as real-time game work. Ultimately, the principles of design are the same regardless of implementation, though work for video games does have the added multiplier of the viewer being able to control the game. The point of this article isn't to teach you how to model using subdivision surfaces or how to get the best quality render out of Vray; there are plenty of very well written articles by far more capable people on how to achieve those things in this fine publication. For this piece, I am more interested in the thought process behind a work, largely independent of tool considerations or technical restraints, to get right down to the essence of designing a sci-fi room. Ultimately, the goal of an artist is to ensure their designs look as if they were not created by an artist but rather engineered for a purpose.

In science fiction, few things are as ubiquitous as the corridor; almost every work of fiction in the genre has one. There are a few things I specifically wanted to incorporate into these pieces. One of them is using cloth and soft-bodied surfaces as an integral component of the work. In actual spacecraft interiors, cloth is ubiquitous: in places such as the ISS, the Soyuz spacecraft, and the no-longer-existing Mir space station, every nook and cranny of a space was used for storage.

So let's get started. There are some decisions I made before I began. I set the aspect ratio of the compositions to 2.35:1, which corresponds to a common cinematic ratio used in motion pictures. I love composing scenes to this aspect ratio, and when working in a series I tend to keep all aspect ratios the same. It also creates interesting design challenges when you are forced to adhere to a consistent aspect ratio. For this first piece I was inspired mainly by the interiors of the C-130 Hercules and C-5 Galaxy. Both aircraft are large cargo planes, with the C-130 being extremely versatile. I want to keep things functional and believable, so wherever possible I try to avoid details and elements that are strictly ornamental in scenes like this. Also, have photo reference up when designing and always pull inspiration from real life; your designs will be more grounded and believable.


I knew bags would be a large motif in this scene, so I immediately started blocking out the room with them in mind. I quickly made a proxy bag just to have an element I could play around with in 3ds Max. I also take some elements already modeled for previous projects and drop them in to have more elements to play around with. Things are very sloppy at this point, with meshes colliding into each other, normal-smoothing errors, etc., but that is all fine as I am only interested in extracting an interesting composition at this stage.

Once I get something I find interesting, I start dropping in lights. Throughout the whole process, modeling and lighting will be inextricably linked. I constantly do test renders to see how the composition is taking shape, as lighting best informs what is working and what isn't. One thing I wanted to experiment with was disorientation and zero gravity. I did not want there to be a true up or down in the space, but rather an M.C. Escher quality to it, with conflicting planes of movement. I wanted ladders to go up to the same point but from different directions; in a zero-gravity space such things are possible and can help create an interesting feeling of disorientation. To help create an impression of zero gravity, I add a dutch angle to the camera. It also tends to add more dynamism to the scene by making sure that the vertical parallel lines are not flush with the edges of the frame. It's a relatively subtle effect that goes a long way toward adding interest to the composition while helping reinforce the backstory of the setting.


To create the cloth elements in these scenes, I used Marvelous Designer. It's an extremely powerful and versatile cloth simulator originally created for fashion design, but you can apply it to almost any subject matter that is soft bodied. You can make anything from a full character outfit to a duffle bag or a beach ball. Apart from the storage bags, I also use Marvelous Designer to create some cloth covering the bulkheads, such as the frames around the main door on the left, and for the subsequent scenes in this tutorial.

I only create 3 different bag variants for the scene. To save some time and to increase the mileage of existing assets, I just rotate them around in the scene to create the impression that they are unique. I imagined the crew of this ship would store various things in them - food supplies, tools, clothing, etc. Every square inch of the spacecraft would be maximized for efficiency like in real life.


One thing I try to avoid incorporating into my designs is the random 45-degree angle. By random, I mean a shape that is purely ornamental. Every element in a scene should serve some sort of purpose apart from itself; minimize noise and get down to the essence of a design by distilling it to its purest form. I start to place the bags into the scene, piling them up against the sides of the walls, similar to how such bags would be stored in my aircraft reference. You can already tell how good reuse of a few assets gets a lot of mileage. With such rapid concepting work, you want to expedite the process wherever you can; as long as the design, the mood, and the story of the scene are communicated, you have done your job as an artist.

Originally, I was planning on putting some sort of storage canisters in the center of the space, but later realized there was an opportunity to put something more interesting there, so I started to concept out a quick spacecraft idea. Just like how vehicles are stored inside a C130 Hercules, I imagined that vehicles could also be stored in this space.

When modeling out the hard surface areas, I keep the geometry rather messy. I want to prototype these rooms rather quickly, so I'm not terribly concerned with keeping proper mesh topology. As long as the smoothed result looks fine, pretty much anything goes. In a full production environment, such ways of working are probably not as acceptable, especially if animation and deformation are a concern.


You always want your designs to help reinforce the story and setting of the environment. Since this is a zero-G space, I wanted the door to be a design that could be operated from two different sides. I put the locking mechanism in the lower right corner where it could be reached from both access ladders. A large hydraulic arm would lift it open and make room for people to climb up into it from the ladders. I put some warning symbols on it to reinforce the idea that it is a crush hazard.

In terms of modeling, the scene is quickly approaching completion, so I start doing the final material pass on the elements. Orange materials are used to help draw interest to certain points in the composition, mainly the door on the left and the spacecraft. I wanted the spacecraft to be industrial in nature, so I put a very bright color on it along with some warning stripes for added visibility. I add some text on the bags and door to create some secondary and tertiary detail reads and to help ground the elements in some sort of functionality.


The project has entered its final stages at this point. I do last-minute adjustments to materials and lighting. I like very specific points of light, so I only use a small handful of physical lights in the scene. I let the GI and bounce lighting extract most of the forms in the scene and only put lights where I want specific attention drawn. The final render is output in .EXR format, a raw, high-dynamic-range image format that allows a very wide range of values. From this file, I generate several different exposures of the final render and composite them in Photoshop. I do some fine color balancing and curve adjustments in Photoshop to make sure all the right forms are extracted and to emphasize certain areas where I want the viewer's attention to go.
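To make the exposure-bracketing idea concrete, here is a minimal Python sketch of one way it could be done outside Photoshop. It is only an illustration: the file names and stop values are made up, and reading .EXR with imageio requires an EXR-capable plugin to be installed; this is not the exact process used for the renders in the article.

import numpy as np
import imageio.v3 as iio

# Load the linear HDR render (assumed float data; name is illustrative).
linear = iio.imread("corridor_final.exr").astype(np.float32)

def expose(img, stops, gamma=2.2):
    # Shift exposure in photographic stops, clamp, then apply display gamma.
    out = np.clip(img * (2.0 ** stops), 0.0, 1.0)
    return (out ** (1.0 / gamma) * 255).astype(np.uint8)

# Write an under-exposed, normal and over-exposed variant for compositing.
for stops in (-2, 0, 2):
    iio.imwrite(f"corridor_exposure_{stops:+d}.png", expose(linear, stops))

The resulting images can then be layered and masked by hand, exactly as the article describes doing in Photoshop.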

For the next scene, I wanted to explore a more claustrophobic feeling with a tighter corridor. I was heavily inspired by the movie Gravity and used screenshots from that film for lighting and modeling reference. I immediately just start throwing down elements: cylinders, spheres, tubes, etc. I also take some components from other scenes and throw them in here. An advantage of concepting in 3D is that you can change perspective and field of view very easily. For this tighter scene I used a wider FOV, around 25 degrees, to help bring more of the space into the frame and make the perspective more dramatic. I'm playing around with round shapes (hatches, lids, canisters) to contrast with the mostly square-shaped motif of the prior image. I also wanted the scale to be tighter than the previous concept, a space only a single person would be able to traverse at a time. For the focal point I wanted a large, pressure-sealed hatch.


As with the previous scene, I wanted to incorporate cloth into the design. Functionally, I imagined this space would get rather cramped with storage; movement would be difficult and you would probably bump against things traveling down this corridor. I figured some of the walls would be padded to protect against injury or concussion in the event of an emergency, with the crew of this ship needing to move quickly. Visually, it adds a nice contrast with the hard-surface elements such as the metal frames and doors. By now, I can already see something interesting. I set the main door shape on the right third of the frame to create a better composition. The ladder on the lower left creates a good diagonal line to draw the viewer's attention to the focal point of the door. It also helps reinforce the perspective and scale of the scene by giving an indication of roughly how large a human would be in this space. I also take some of the bags modeled for the previous scene and drop them in as placeholders so I can continue to compose the shot. I take the blockout mesh and start cutting some shapes into it that will comprise the cloth sections. Using Marvelous Designer again, I take the cut-outs from the mesh and recreate them in the software. Alternatively, I could have simply used a tessellated version of the cut-out meshes in MD directly. Once I get a good result in Marvelous Designer, I drop the pieces back into Max and finesse the elements. I set my camera angle and aspect ratio so I know the bounds of the composition and what perspective is doing to the scene.

I adjust the composition again to keep the focal point on the right third of the frame. I want a bit more perspective in the scene to help sell the space and scale so I increase the FOV a bit. At this point, I’m pretty happy with the overall layout of the scene and will be modeling accordingly from this viewpoint. To help speed up the concepting process, I really only worry about geometry that is visible in the frame.


At this point the scene is quickly coming together. I start working in some materials and textures, mainly around the door frame. Narratively, I want to reinforce the door's importance. I put giant red and white warning stripes on it to indicate that there is a danger present when this door operates, since it swings open. Fictionally, I wanted this scene to take place in a ship that is a joint American-Korean space initiative. To tell some of this backstory, I put bilingual signage on certain objects to help communicate the idea that the crew probably knows one, if not both, of the languages. It's always a good idea to add secondary reads like that in a scene, to create interest and depth, and to reward audiences that take a slightly longer time to view a scene.

I'm pretty satisfied with the side walls of the scene, so I turn my attention back to the main focal point: the pressure door. To add a bit more interest to the scene, I make it half open to partially reveal the adjacent room. Also, to add some more supporting lines, I make some floating tubes that lead their way into the partially open passage. I was inspired for this particular detail by pictures of the interior of the old Russian Mir space station, which was a mess of tubes circulating all around.


Most of the scene is illuminated by the 6 small light fixtures flanking the frame of the door. There is another light source out of frame, above the door, that adds a blue hue to the scene. As in most of my work, I try to keep my light sources localized and minimal - letting the bounce lighting in the GI do most of the work. Again, I looked at scenes from the movie Gravity as inspiration for the lighting and mood.

For the final piece, I wanted to play around with scale a bit more and make something far larger than the previous two rooms. I took the theme of the first room and decided to apply it to a space that was a lot bigger in size, a room around 200 feet in diameter. I used a spaceship I had already modeled as the focal point of this scene. Fictionally, I wanted this room to be a sort of maintenance hangar where these types of spacecraft would dock and operate out of. I started modeling the scene with a series of concentric circle shapes that would lead the eye of the viewer to the spacecraft. As with the previous scenes, things are kept simple and messy to allow easy changes to the composition and elements. I like bold, repeating shapes and wanted the motif of this scene to be the circular ribs flanking the edges of the tubular space. The goal here is to make high-level decisions on composition and overall design. I go through a few different layouts quickly before deciding on a direction I want to explore.


I like bold, contrasting colors. I found some reference images of interesting funnel stacks from power plants that had a red and white stripe pattern on them. Taking that shape and making it a cylindrical room with a similar paint scheme seemed very interesting to me. I imagined such a paint scheme would serve the logical function of helping pilots maneuver in this tight space, acting as a reference point for position and speed. The numbers painted on the sides of the walls indicate specific positions a pilot should aim for when docking. The control room at the top would house a hangar operator who helps guide the giant craft into proper position. Also, at this point I set the final camera position and build out the space accordingly from this vantage point. To respect the rule of thirds, I set the focal subject (the spacecraft) on the left third of the image. All the elements should reinforce the focal subject and lead the eyes to it. The surroundings of a viewer, whether in a 2D image or in a real-time game environment, should also reinforce where you want the viewer to look or proceed.

Some of the subject matter I am referencing is industrial locations such as refineries and power plants. The walkways on the sides of the space are there mainly to reinforce the scale of the scene, but also to add more eye-leading lines in the composition back toward the spacecraft. To add some more interest to the scene, I start modeling some stairways and additional catwalks. As the geometry becomes more developed, I introduce some more lights into the scene and start to play with the color palette. As with the previous scenes, I always do regular renders to see how the overall image is taking shape. Watch where highlights are appearing, where shadows fall, whether ambient occlusion is causing a shape to 'pop' well, etc. All this information will help inform you where to put your attention and maximize efficiency and speed.


Fictionally, I wanted to set this scene as if the spacecraft had just recently docked after finishing its daily work. The airlock door has closed and the atmosphere has been re-introduced into the environment. This reintroduction of atmosphere, combined with the temperature change, creates a misty fog in the scene as water condenses with the change in humidity. I achieved this "misty" effect by rendering out a Z-depth pass in V-Ray. I also colorized this fog layer to add some more hue variation to the composition. The Z-depth render can then be layered on top of the .EXR render to create that foggy atmosphere; in this case I used a Hard Light blending mode on a layer set to 75% opacity. I also add some patchy cloud texture to this layer to bring more of a sensation of condensation to the space. At this point, I start a deeper material and lighting pass. I pick out which surfaces are highly reflective or metallic and try to contrast them against the more matte cloth elements. I also start doing basic UV unwraps of surfaces and begin working on diffuse textures. I want to keep things relatively clean and simple to make sure the elements read well. I add some more walkways and stairs on the left of the image to create further foreground elements and help add depth to the scene.
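For readers curious what the Hard Light composite boils down to, here is a small Python sketch of the same blend outside Photoshop. It is only an illustration with made-up file names and an assumed greenish fog tint; both images are assumed to be plain RGB.

import numpy as np
import imageio.v3 as iio

beauty = iio.imread("hangar_beauty.png").astype(np.float32) / 255.0
zdepth = iio.imread("hangar_zdepth.png").astype(np.float32) / 255.0  # brighter = farther

if zdepth.ndim == 2:                              # grayscale depth -> three channels
    zdepth = np.repeat(zdepth[..., None], 3, axis=2)

fog = zdepth * np.array([0.75, 0.85, 0.80], dtype=np.float32)  # colorize the depth pass

def hard_light(base, blend):
    # Photoshop-style Hard Light: multiply where the blend layer is dark, screen where it is bright.
    return np.where(blend <= 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

opacity = 0.75                                     # the 75% layer opacity from the article
comp = beauty * (1.0 - opacity) + hard_light(beauty, fog) * opacity
iio.imwrite("hangar_foggy.png", (np.clip(comp, 0.0, 1.0) * 255).astype(np.uint8))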


By now, all the major elements are in the scene; at this point I added further elements to help sell the scale as well as make sure the lighting takes the eyes of the viewer where I want them to go. I wanted another foreground element in the scene, so I added a giant arm suspended from the ceiling. This arm would grip and hold the craft in place, and it was inspired mainly by the giant straddle carriers that are a common sight at seaports around the world. For the final render, I warmed up the tones and added some more secondary details, such as warning stripes on the ceiling and decals on the walls. I do some more curves adjustments and color balancing and add some subtle vignetting to draw attention back into the center of the composition. The final color palette is warmer with some greener hues, with the intention that it communicates the idea of humidity a bit better than the cooler colors of the prior renders.

Conclusion

Whether you are building a game environment in Unreal Engine 4 or a pre-rendered static scene in V-Ray, the same principles apply. Pull ideas from real-life reference; it will help ground your designs and add functional depth to them. Unless there is a good stylistic reason for doing so, avoid details that are purely ornamental; keep things utilitarian and logical as it applies to the function of a design. The most important possession a person has is their time, so you want to make sure you reward them by making your designs meaningful. Ask yourself questions such as "Why am I putting this into the design?" or "How does the design benefit from this decision?" As a designer it is important not to just make a pretty illustration, but to communicate an original idea.

About Me

I am a 3D artist/designer that has been working full-time in the games industry since 2007. I currently work at Oculus VR and have previously worked at Valve, 343 Industries, id Software, and Timegate Studios. Some of the projects I have contributed to are Halo 4, Team Fortress 2, Red Orchestra Heroes of Stalingrad, and Section 8. I was born in Poland and currently live in Seattle, USA.

Paul Pepera

www.peperaart.com


Frederic Daoust

www.artstation.com/artist/fredericdaoust23



Kimmo Kaunela

www.artstation.com/artist/kimmokaunela



Houdini Cables & Pipes

Procedural UV-mapping of Cables/Pipes by: Magnus Larsson

Well, I am not sure what compelled me to make a cable tool in Houdini, but suddenly it just clicked in my head that I knew how to make all the parts, so I sat down and did it for that simple reason. I've used Houdini for a couple of years and really like the procedural nature of the program, since it can cut down tons of time on repetitive work. Also, I think it is way more fun to make a little tool for generating the art I want than actually sitting and doing the same thing over and over again. The image above shows just some of what the cable tool can do, and I imagine I will polish it more over time. Right now, what you do is put down a curve (or spline, depending on what you call it), and from that single curve it generates cables. There are easy controls for how many "sub" cables it should generate and whether they should use hangers or fall right onto the ground. You have min/max settings for cable width and tension, and you can define "holder" geo if you want. The tool uses pre-rolled wire simulations, so cables collide against geo and against each other. But since it is pre-rolled, you don't have to press play and wait for it.


However, if you have tons of cables, you will have to press a refresh button from time to time. The curves can be imported from other programs, and if your program supports Houdini Engine you could just make a Digital Asset out of the tool and use it in one of those programs. Maya, Max, Cinema 4D and Unity support Houdini Engine, with Unreal support coming soon. One of the things the tool does is procedurally UV-map the deformed curves. How I did that is what I will describe here, because going through the rest would probably eat the entire magazine. This being Houdini, you can do this a million different ways, but this is how I did it.

Building the Backbone

Start by putting down a NURBS curve. In this example, it is just one simple hand-made curve, but you can easily apply the same steps to any type of generated curve. Then put down a Resample SOP, which converts the curve to a poly curve and adds more points. Then put down a Refine SOP, activating both the First U and Second U options and setting them to 0 and 1, so it refines the whole curve. This is done to reduce the number of points. The "Tolerance U" value will be your slider for how much detail of the curve remains, which means straight sections of the curve will have fewer points, translating to fewer polyloops in the final "cable".

Now, let's convert the optimized poly curve back to a NURBS curve. We do this so that, in the next step, we can UV-map it. The important thing here is to set the "U Order" option to 2, which works best for UV-mapping the curve. So, on to the UV-mapping. Put down a UV Texture SOP, setting the Texture Type to "Arc Length Spline" and the Attribute Class to "Point". We set it to "Point" because we will later use a VOP SOP, which only handles point attributes. Finally, put down a Measure SOP set to Perimeter, which will later be used to manipulate the UVs. Basically, in this case, it measures the length of the spline and adds that as a perimeter attribute. With that, the backbone of the cable is done: it has its UVs, it has been optimized by the Refine SOP, and it has the perimeter attribute. Now we turn to the cross section.


Building the Cross Section

The cross section starts with a Circle SOP set to "Polygon" mode and another UV Texture SOP, also set to "Point" mode. After that, add an Attribute SOP and rename the point attribute "uv" to "uv2"; otherwise it will collide with the previously set-up uv attribute. Finish off with another Measure SOP, which will measure the circumference of our circle. An important thing here is that your backbone UVs end up in the U direction and your cross-section UVs in the V direction.

Doing the Sweep

With both the backbone and the cross section done, it is time to put down a Sweep SOP, adding the backbone nodes to the middle input and the cross section to the left input. In the properties of the Sweep SOP, under the "Output" tab, set Skin Output to "Skin Unclosed".


At this point, we should have a "cable" in the 3D viewport and some ugly-looking UVs, as seen in the images. Since we set both UV Texture SOPs to point UVs earlier, both the regular uv attribute from the backbone and the renamed uv2 from the cross section have "survived" going through the Sweep SOP. That means we can now use a VOP SOP to split the UVs up and combine them back together.

VOPSOP Trickery

After the Sweep SOP, put down a VOP SOP, dive into it (by double-clicking, or selecting it and pressing I) and create two Parameter nodes, one for the uv attribute and one for the uv2 attribute. The type of these needs to be set to "Vector".

After each of the Parameter nodes, split the float3 with a "Vector to Float" node, then combine them back together with a "Float to Vector", using the U values from both (which will be fval1 on the previous nodes) as the new U and V values (fval1 and fval2). Connect the "Float to Vector" into an "Add Attribute" node, where the attribute is set to "uv" (it needs to be lowercase), and that should make the UVs map correctly along the "cable".

During the writing of this article, Houdini 14 was released; in H14 a "Bind Export" has to be used instead of the "Add Attribute". The reason for setting up a Measure SOP for the perimeter on both the cross section and the backbone is that if you are later copy-stamping several cables with different lengths and widths, this ensures they get consistent UV values, which we will see in more detail later.
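The same attribute shuffle can be expressed in a few lines of Python. This is only a sketch of the idea behind the VOP network, not a replacement for it: it assumes a Python SOP dropped right after the Sweep SOP, and it relies on the uv and uv2 attribute names set up above.

import hou

node = hou.pwd()          # the Python SOP itself
geo = node.geometry()

for pt in geo.points():
    uv_len  = pt.attribValue("uv")    # backbone UV: U holds the arc length along the curve
    uv_ring = pt.attribValue("uv2")   # cross-section UV: U holds the position around the circle
    # New U comes from the backbone, new V from the cross section.
    pt.setAttribValue("uv", (uv_len[0], uv_ring[0], 0.0))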


Perfecting the UVs

Now that we have fixed our UVs with the VOP SOP, we need to scale them right. Before that can be done, we have a couple of things to fix. The first is to put down an Attribute Promote SOP and promote our uv attribute from "Point" to "Vertex". The reason for this is that point UVs will have a seam where one side of our "cable" stretches from start to end. We can only fix that if our UVs are vertex attributes.

With our UVs now as vertex attributes, put down a UV Texture SOP and set the "Texture Type" to "Modify Source". Check the little toggle called "Fix Boundary Seams", and the stretching UVs will magically snap into place. The final step is to scale our UVs so that we use the whole 0-1 space around the surface and scale based on the length. I am assuming here that you have a texture that is tileable in one direction and that you want to use the entire 0-1 range in the other direction.

Put down a UV Transform SOP; we are going to edit the translate and scale values. By adding some expressions, this becomes procedural and reacts to the width and length of the incoming curve.

For the translate:
X: -$XMIN
Y: -$YMIN
Z: 0

For the scale:
X: prim("../Measure_Curve", 0, "perimeter", 0) / prim("../Measure_Circle", 0, "perimeter", 0)
Y: 1 / $SIZEY
Z: 1

You might wonder why we are working with XYZ in UV space, but that's just how it is: XY is UV in the UV Transform SOP. So -$XMIN is actually the U minimum. If U min is 0.5 and we put -$XMIN, it will move it by -0.5, snapping it to 0. The same goes for -$YMIN.


The scale part is perhaps a bit more complicated. The X value uses our previously set-up perimeter measurements, dividing the length (the first perimeter we set up) by the width (the second perimeter). This ensures that the U value is as long as it needs to be based on the length and width of the cable, meaning that several cables will all look right. The Y value is just a simple 1 / $SIZEY, ensuring that it fits within the 0-1 space. This setup will give a different pixel density depending on the width of your cable; that is because we are using the 1 / $SIZEY. I set it up like this because textures for cables often want to use the full 0-1 range around the circumference of the "cable", but if you prefer not to do that, you can use the perimeter we measured to scale that instead.
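As a quick worked example with made-up numbers (my own illustration, not values from the tool): suppose the backbone's Measure SOP reports a perimeter of 200 units and the cross-section circle measures 12.5 units.

U scale = backbone perimeter / cross-section perimeter = 200 / 12.5 = 16
V scale = 1 / $SIZEY, so V is normalized to the 0-1 range around the cable

In other words, the tiling texture repeats 16 times along this cable, and each repeat covers a length equal to the circumference, so the density along the length matches the density around it; a longer or thinner cable automatically gets proportionally more repeats.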

Conclusion

It is now done, but you can put down a UV Quick Shade SOP if you wish to see the result in the viewport, or apply a material with the correct texture. Since Houdini is procedural, you can now go back to the original Curve SOP and edit it; everything will be automatically updated. You can also do as I did in the main image at the beginning: spawn several curves with copy stamping, feed them different widths and lengths, simulate them as wires, etc., and then just feed the curves into this setup and they will be procedurally UV-mapped. This way Houdini saves you time and opens you up to new things you might never even have considered before. To get more insight into Houdini and procedural thinking for games, there is a great article on gamasutra.com called "Go Procedural - A Better Way to Make Better Games".

About Me

Name: Magnus Larsson
Current: Senior Artist at MachineGames
Past: Technical Art Director at Ubisoft Massive, Senior Technical Artist at King.com

Magnus is a self-taught 3D enthusiast who began playing around with Lightwave and Blender in 1998. He got his first job in the game industry in 2000 and has released several titles since then. He designed much of the workflow experience in the Snowdrop engine. His main focus now is on Houdini and Modo, which complement each other rather nicely (at least in his mind). And he prefers figuring out why a ForEach SOP in Houdini is not working over writing about himself.

Magnus Larsson

www.MagnusL3D.com


Sung Choi

www.sung-choi.com



Joe Tuscany

www.artstation.com/artist/crazyhorse



Marvelous Designer

Marvelous Designer Tips and Tricks by: Xavier Coelho-Kostolny

Marvelous Designer is a cloth simulation and garment design program. Its greatest strength lies in its ability to help artists quickly iterate on designs and eliminate some of the more challenging aspects of constructing realistic clothing in 3D. While Marvelous Designer is a very powerful program that is capable of producing incredible results, it is not designed to produce 100% finished products. Rather, it is at its best when paired with ZBrush and other 3D tools. As mentioned before, Marvelous Designer is best used as a starting point for sculpting. While it is capable of exporting quad meshes, they are not ideal for sculpting and adding additional details to on their own, and they require some cleanup before they're fully usable. Marvelous Designer really shines in its ability to eliminate the most difficult parts of creating clothing: laying out and sculpting realistic panels, seams, and folds. It does this in a way that should be intuitive for most experienced 3D artists: starting with a low-resolution triangulated simulation, the artist lays out the general shapes and arrangement of the garment and iteratively increases the resolution to run the simulation in greater detail. This allows you to work in smaller and smaller detail, arranging folds and seams as you go.


Creating Patterns

In designing clothing patterns from scratch, it helps to think of the process in terms of how various 2D shapes can be bent and folded to create 3D shapes. In this way, the process of creating clothes in Marvelous Designer is very much like creating a 3D model from its UV map. In fact, many of the same shapes you would commonly see in a UV map, such as distorted cylinders (like sleeves for a jacket) and other similar shapes, are used in Marvelous Designer to create panels for clothing.

The similarity between UV island shapes and clothing panel shapes is no coincidence; these shapes are the most efficient way of converting a 3D shape to two dimensions and vice versa.


Sewing panels together is as easy as selecting the Segment Sewing tool and clicking on two edges to sew them together, though it’s possible to have more complex connections. The Edit Sewing tool lets you select, delete, and switch where different seams connect to each other. Reversing sewing (Right click>Reverse Sewing or Ctrl+B) is helpful when working with panels that connect to multiple other panels, such as the tops of sleeves.


The Free Sewing tool allows you to connect sub-sections of a single line in your pattern to sub-sections of another line. This can be helpful for partially unzipped zippers and for adding bunching to certain seams.

Properly arranging your garment patterns in 3D space is critical to getting a good fit and simulation. Changing your Gizmo coordinate system (Preferences>Gizmo) to your preferred behavior will make this task easier. In addition, doing an initial simulation with no gravity will keep your garments from falling away from the avatar before they have a chance to wrap around and fit to it (Object Browser>Scene tab>Simulation Property>Gravity). Beyond adjusting the gravity value to help with your garment arrangement, deactivating and freezing panels helps to save time when potentially problematic areas need to be simulated for the first time. Deactivating a panel removes it from the simulation and keeps it from interacting with or affecting other panels. Freezing panels keeps them from moving during the simulation, but other panels will still sew to and collide with frozen panels. Freezing and deactivating panels can be done by right clicking on them in the 3D window.


Complex Clothing and Simulation

There are multiple ways to join different panels together in Marvelous Designer. There is, of course, the more realistic method whereby two edges are laid over each other and then stitched together to form sandwiched layers. This can lead to attractive results, but it’s extremely time-consuming and difficult to manage with more complex garments and seam layouts. By contrast, it’s also possible to simply weld the edges of two panels together with little consideration for where the panels would realistically be joined. Even though this is a much less realistic method of working with seams, it can end up looking very similar to the realistic alternative after a small amount of cleanup in your chosen sculpting software. It is also much less time consuming to both arrange and simulate than more realistic seams.


Interior shapes are the backbone of believable bunching, elastics, folds, and interior stitching, and they're the easiest way to add features like quilting and patches. By creating interior shapes in panels of clothing, you can easily add details that would otherwise be impossible (or at least time-consuming and difficult) using only regular stitching and panel construction.


Interior shapes make it easy to add bunching for elastic and draw strings without having to actually make panels for each small piece and layer.

Interior shapes are also good for adding structured folds and creases that are not possible otherwise without sculpting. In this example, the interior shapes form the basis for the folds, and then the simulation adds additional shapes and wrinkles around them.


One of the best applications of interior shapes is creating seams that panels can be sewn to. In this case, an exterior pocket can be easily sewn to the outside of a jacket where an interior shape has been drawn on a panel.

Another way to take advantage of interior shapes is to create holes and darts. Holes have various applications, especially if you're working with garments that will have things like space suit valves or stitched tears. Darts are useful for reshaping flat panels into complex curves like you might see on dresses and fitted jackets.


Using multiple materials for your garments is a great way to get different levels of detail and wrinkling in the same way that you might want to use different specular or gloss values for material definition. In addition to adding visual interest, using multiple materials also allows you to add structure to areas that would otherwise be saggy or shapeless. The best areas to use thicker or thinner materials are in stiff collars, zippers, inner linings, and patches. In this example, only the materials have been swapped; no other changes were made, including to the dimensions of the panels.

Layering is a key component of many types of clothing, but it can be difficult to effectively create layered clothing in Marvelous Designer. There are a couple of different applications of layers, namely for creating things like patches and thick garments like stuffed jackets. When working with layered garments, you may run into situations where different layers fight to be on top of each other, similar to z-fighting. The best way to fix this is to adjust which layer a given piece is assigned to. Higher-numbered layers will try to push themselves outwards from the avatar.


Working with layers drastically increases the number of collision calculations necessary for good final results, so there are a couple of different options for decreasing CPU load in these situations. Using lower resolution for interior panels is a good option in many cases, especially if the internal panels won’t be visible in the final product. As you can see in the above example, the results are indistinguishable as long as there isn’t a huge disparity in resolution that could lead to lower layers showing faceting.

Buttons, zippers, and other attachments can be simulated by breaking them down to their most basic shapes and using those as additional panels and internal shapes. In the above examples, snaps are simulated with ring-shaped internal shapes and zippers can be simulated by using a stiff cloth preset and a long, thin panel.


With many types of garments, you may occasionally see polygonal faceting on pressure points such as shoulders and elbows. To minimize cleanup later on, it can be helpful to subdivide these areas while leaving other areas at lower resolution to maximize performance. In this example, the smaller facets on the right will smooth out much more cleanly and easily during sculpting than the larger facets on the left.


By adding a slight bend to your avatar’s knees, elbows, and other joints, it’s possible to add believable folds and wrinkles in those areas without having to worry about sculpting them later on. Smart usage of blend shapes and UV mapping will also allow you to create wrinkle maps by using differently posed versions of the same avatar with the same garment. And there you have it — a brief overview of Marvelous Designer along with some of the key tools and concepts you need to get started making believable clothing.

Conclusion

As you can see from this brief overview of Marvelous Designer's key tools and features, it's a very in-depth and powerful program. Despite its depth, however, creating realistic and convincing cloth has never been easier. Expect that within the next few years, knowledge of Marvelous Designer and similar programs will be an essential part of the character artist's toolkit.

About Me

My name's Xavier Coelho-Kostolny. I'm a freelance 3D character artist currently working with Facepunch Studios on Rust. After many years practicing modeling and texturing, I got my start in the industry making items and weapons for Team Fortress 2 in 2010. From there, I moved on to studio jobs on MOBAs and MMOs, and finally went freelance full time at the beginning of 2014. I've always enjoyed seeing where the industry is going and I love being on the cutting edge by learning about innovative new techniques, tools, and software.

Xavier Coelho-Kostolny

www.xavierck.com


Antonis Karidis

www.artstation.com/artist/roen911



Drew Hill

www.artstation.com/artist/drewhill



2.5D TEXTURING

Texturing using a Fixed Camera Perspective by: Matt McDaid

We have all seen those street artists who create works of art on the ground that give the illusion of three-dimensional depth, and those skewed advertisements on sports fields that read perfectly when focused on by the television cameras yet appear visually broken from any other angle. We know it's because they are only intended to work from a specific viewing angle. The technique I am going to share focuses on achieving a painterly aesthetic used in conjunction with a fixed camera perspective. I'll elaborate a little on my texturing techniques along the way, but this shouldn't really be deemed a 'hand-painted texturing' tutorial. It is more about the 2.5D texturing principle as a whole; something I learnt whilst working within a fantastic team as an artist on 'Diablo III: Reaper of Souls'. I would like to pay it forward by sharing this with the VERTEX art community. The majority of this procedure can be accomplished in 3ds Max / Maya and Photoshop. However, I like to include 3D-Coat for the ability to paint directly onto the geometry, enabling me to visualize the results instantly. In Diablo, due to having the camera position fixed at 45 degrees to the grid (Fig. 1), we are able to obtain greater depth in our textures, often referred to as '2.5D texturing.'


Fig.1

2.5D Explained

This style of texturing is best illustrated in the three directional templates in Fig. 2. Each plane contains a grid of 2D squares that have been made into cubes by adding the necessary line work at a 45-degree angle, corresponding to the appropriate surface: red = left wall, blue = right wall, and green = floor. When attempting to use this technique, it's important to adhere to these specified angles; otherwise it will have the same effect as the camera being moved and will consequently break the illusion. Depending on how you paint light information into the textures, UVs can be flipped on the walls, enabling you to share the same texture.

Fig.2


VERTEX


2.5D Texturing

124

Scene Preparation

To begin creating a diorama scene like this, I import the core geometry on a per-material basis (no props etc.) into 3D-Coat. This step can also be achieved in Photoshop, but you would be required to export the textures frequently in order to read how they look under game-camera conditions. Once 3D-Coat has initiated, as the models have their UVs laid out, I begin by selecting the 'Paint directly over UV'd model' option. I fill all the diffuse textures white and then begin sketching very rough structural information into my textures, indicating where I want my main elements to sit within the scene. During this step, I also suggest where I want my 2.5D angles to fall on my textures. It is important to lock down the camera angle throughout this sketching process. The further you find yourself zoomed in while painting details, the easier it becomes for your viewing angle to break. In those circumstances, usually later on in the process, I get into the habit of regularly pulling the camera out to enable me to evaluate the overall look. Once complete, I export the textures and use them as guides in Photoshop.

Fig.3

Painting

Once in Photoshop, I overlay my sketched textures for reference. I also create a few guidelines at 45 degrees as visual aids. Again, I rough in the 2D shapes and ensure the texture tiles accordingly (Fig. 4). Once I'm happy, I go in and start introducing depth to the textures by painting the angles along a 45-degree vector (Fig. 5). Due to the in-game camera being some distance away, I keep any high-frequency detail to a minimum in order to maintain a better overall read of the medium and low-frequency details. Should I feel the need to include small details, I try to keep them within a similar value range to their surrounding pixels. This reduces unwanted contrast and prevents them from fighting for visual attention with the larger details. When all my angles are defined, I then start to push and pull some elements to really emphasize the depth that I want to achieve (Fig. 6).

Fig.4


Fig.5

Fig.6

I won't talk too much about values here, but on the traditional value scale of black = 10 and white = 1, I mostly stay between 3 and 8. I only ever delve into the 1, 2 and 9 range for the brightest accent details and the darkest of shadows respectively. As a base, I always like to start with a mid-grey value and then slowly introduce a layer of broad shadows and highlights, adding extra layers of refinement one after the other until I'm content with how the contrast is reading (Fig. 7). After this step, I like to throw in a handful of other hues in order to get some variation going in my diffuse. Having a texture with the same uniform hue throughout can appear very flat and, consequently, a little boring. It doesn't need to be anything of great contrast, but rather a very subtle variation of colors. So much of this texture painting is about laying down darks next to lights in order to define the shapes. Take advantage of the fact that you are using textures instead of geometry, and don't feel constrained to just painting flat, extruded-like forms. Tilt, buckle, and break the details to create a more natural, painterly aesthetic. The older and more worn the material, the more likely it is to have undulating details. Remember, it is an important part of our role as game artists to be able to substantiate our game's narrative and to provide a level of instant, deep context. Adding character and story-related details to our textures really helps support the whole narrative of our games while adding a layer of charm.

Fig.7

While there isn't much call for physically accurate shaders for this painterly look, that shouldn't stop you from having a strong understanding of how materials react to light. Material definition is a key part of successfully creating textures. Knowing how to paint light information into different material types can make or break how your visuals read overall.


At this point in the process, I am constantly going back and forth between Photoshop and 3D-Coat, checking the texture in situ from the game camera angle to ensure that it's reading OK. Once I'm happy with it in Photoshop, I'll finalize it in 3D-Coat and tighten up any wayward angles by painting in some deft amendments. A typical video game texture may start off looking like the one in Fig. 8, but by applying the techniques outlined above you can really achieve that extra layer of depth (Fig. 9).

Fig.8 2D

Fig.9 2.5D

Game Cam view Fig. 10


Geometry Tips!

A useful tip that really helps sell the overall illusion: when butting these textures up against adjacent surfaces, try adding cheap, basic cuts to the geometry (Fig. 11). The idea is that you want to affect the surface's silhouette slightly and give it some extra character. Add some topology around a couple of the features and pull them out a little, being careful not to cause too much UV distortion. When the two surfaces are together, it can trick the player into thinking that there is more going on in the scene than just texture work (Fig. 12).

Fig.11

Fig.12

Fig.13 Flat Shaded with Lighting

Fig.14 Wireframe on Flat Shaded


Scene Review & Lighting

When I feel the textures are nearing their finishing stages, I take a screenshot of the scene into Photoshop. The screenshot gets positioned on its own layer above a separate layer of black (0,0,0), with the layer mode set to 'Luminosity.' From this, I am able to determine how the overall scene reads in terms of value. I specifically look for light and dark contrast and try to ensure that each component complements the others (Fig. 15). Once I feel satisfied with the way the values are looking, I'll turn my screenshot layer back to 'Normal' mode and begin to assess how well the hues are working with one another; the overall goal being that I want my hues to sit well in the scene and be distributed in a way that complements rather than competes. There are also areas of focus that we need to draw the player's eye towards. In this case, it's the table and the many books on it. I have mostly achieved this with complementary lighting. With the bulk of the cool lighting coming from the exterior window light, I have been able to direct the eye towards the table with the placement of warm candle lights. This is complementary lighting in its simplest form (Fig. 13).
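If you want to run the same value check outside Photoshop, here is a tiny Python sketch that strips a screenshot down to pure luminance, which is essentially what the Luminosity-over-black trick shows you. The file names are only placeholders.

from PIL import Image

# Convert to greyscale so only the value structure remains for review.
shot = Image.open("diorama_screenshot.png").convert("L")
shot.save("diorama_values.png")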

Props

With regards to props, for this scene I have used the 2.5D method as I don't need to re-use them elsewhere. For a full game environment, if you want maximum re-usability from your props, then I'd recommend texturing them the traditional way so that they can be positioned in any orientation. To make them stand out a little from the background, I create a top-to-bottom (white-to-black) ambient occlusion gradient, which helps them pop a little while still grounding them in the scene (Fig. 16). From then on, it's just a case of refining and iterating until I'm happy with the overall scene.

Texture Example

Fig.15

Fig.16


Conclusion

I hope this article offers valuable insight into the 2.5D texturing process and provides some inspiration for your future art projects. Whether you're making a top-down AAA PC game, a tablet game, or you're just a hobbyist with the intention of creating hand-painted visuals, I feel that this process can really help you achieve that painterly direction.

About Me

Since my older brother introduced me to video games, and my older sister introduced me to art at a young age, I'm pretty sure there was a degree of fate that led me to become an artist in the games industry. I was born in Wales, UK and have been working in the games industry for just under a decade. Before games, I spent five years as a 3D modeler for military simulation programs. Whilst I got to make lots of cool stuff for the aviation industry, I always had a passion for stylized games, so I really wanted to push myself artistically and consequently pursued my dream of making art for games. The first company I worked at was the late, great Free Radical Design, which later became Crytek UK. From there I joined the excellent team at Splash Damage in London, where I got to work on some great projects alongside some seriously talented artists. I currently find myself surrounded by talent in every direction again, this time at Blizzard Entertainment, where I work within the World of Warcraft Dungeon Team as a Senior Artist. I have recently worked on both 'Diablo III: Reaper of Souls' and 'World of Warcraft: Warlords of Draenor.'

Matt McDaid

www.artstation.com/artist/mattmcdaid


Ilya Kuvshinov

www.artstation.com/artist/kr0npr1nz



Jose Daniel Cabrera Peña

www.artstation.com/artist/thrax



Booleans

Hard Surface Boolean Workflow by: Hunter Rosenberg

Have you ever thought to yourself, "There has to be a faster way to make hard-surface models with all those complicated cuts and bevels!"? Well, there is! It's called Boolean-based modeling. When most people hear this, they nod and smile, but this isn't your grandma's Booleans! It's what I would consider a new era of Boolean-based modeling. As the years have progressed, more and more programs have embraced Boolean-based methods of modeling, and I think you should, too! Once I explain how Booleans are meant to be worked with, modeling with them will be fun and explorative!

Boolean Thoughts and Methodology

A Boolean, by definition, is an operation with a set of modes, such as add and subtract. Originally, Booleans were introduced in the NURBS-based modeling used in CAD design. In the program that I use, Autodesk Fusion 360, they are called Combine, Cut, and Intersect, and each operation produces a different result. The addition operation joins two bodies, letting you merge them and then chamfer or fillet between them. You can keep removing more shapes using the Cut operation, which is excellent for taking out chunks with Boolean shapes that are premade or custom; you can use them to cut or add intricate shapes into a body extremely fast. Last, but not least, there is the Intersect operation, which uses the volume of the actively selected body to slice away everything outside (or inside) of it, leaving the body that remains inside intact. Each of these Boolean operations usually has multiple options to reverse the selection.
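For readers who script Fusion 360, the same Cut operation can be driven from its Python API. This is only a rough sketch from memory and not part of the author's workflow; property and method names may differ slightly between API versions, and it assumes the first body in the root component is the target and the second is the cutter.

import adsk.core, adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)
    root = design.rootComponent

    target = root.bRepBodies.item(0)           # body to keep
    tools = adsk.core.ObjectCollection.create()
    tools.add(root.bRepBodies.item(1))         # body used as the cutter

    combines = root.features.combineFeatures
    cut = combines.createInput(target, tools)
    cut.operation = adsk.fusion.FeatureOperations.CutFeatureOperation
    cut.isKeepToolBodies = False               # consume the tool body after the cut
    combines.add(cut)

Swapping the operation enum to join or intersect gives the other two modes described above.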


Additive and Subtractive Boolean Process

Booleans without the trash geometry

When doing these operations, you are putting serious computational stress on the file, and this is a killer in your boolean workflow. You want to keep the object as clean as possible; in other words, you really want to create your model in the least amount of moves. Whenever an add, subtract, or intersect operation is initiated, the computer has to recalculate every cut you’ve made, visible or otherwise. The more cuts, additions, and subtractions, the heavier the load the object carries: the CPU/GPU effort needed to mathematically calculate a cylinder with 0 booleans is substantially less than for a cylinder with 6 booleans.

Comparatively, body A puts less stress on the graphics load than body B does.


If you aren’t careful, by the end of your project the boolean object will tend to look faceted and pockmarked. Sometimes it ends up with weird normals, pinches, seams, and marks that conflict with the overall flow and cleanliness of the model, and when it’s time to export the selected object you want it to look as clean as possible. With every program or modeling method there are upsides and downsides. Sometimes, when you merge or fillet between objects that have been boolean-combined together, you can get the dreaded star pinches and n-sided edges. If you have a lot of overlapping starred corners, it causes a lot of issues later on. It’s best to avoid this as much as possible.

The upside to booleans is the amount of time you save: the number of iterations you can pump out for your model increases ten-fold. With booleans, you don’t have to worry about patching the faces that you end up cutting out or slicing, because boolean bodies are mathematically solid objects.

This is why you can keep cutting into them without having to worry about maintaining good polyflow, and can just focus on your forms and shapes. You can punch holes through the most complex of models with ease in a single operation.


How to make non-flexible planes/faces bend

Instead of tweaking verts and cutting polygons till the cows come home, it’s about obtaining the shape that you want in the least amount of cuts and fillets. When booleaning, less is more! The fewer the cuts, the cleaner the overall result will be in Fusion, and this carries over to any program you end up exporting the body to. Booleans are powerful, but there are cons: they are not flexible. In traditional polygonal modeling, you have the option to push and pull vertices and edges if you are unsatisfied with the shape of the poly object. Booleans do not contain actual movable/editable vertices. They contain what I would call planes, with edges that act like hinges if you want the plane (face) rotated to an acute angle or anything in between; you can also extend and shorten a plane, similar to an extrusion. If you are making a major cut/boolean and it ends up the wrong shape or too small, there is very little you can do beyond making the hole bigger or smaller, especially if you are past the point of doing an undo. This doesn’t exactly apply to boolean operations in Maya or Max; there are polygons in those programs, whereas Fusion is planes, not polys. Personally, if I am booleaning in poly-based programs, I like to keep the polycount of the objects I’m using for booleans as low as I can get and just boolean with that. The great part about booleaning with polys is that you do have the flexibility to adjust the inner shape you punched out. In Fusion, there are no deformers to bend or warp the bodies, so it is critical that you plan your moves ahead of time when it comes to creating shapes with booleans.

Slicing the block to get the curved form I need, because there are no deformers in Fusion.

If I want a curved body, I have to cut the curve into the body shape I want. As the above photo shows, I cannot take a solid rectangle and bend it with a deformer into the shape that is wanted. Whether it’s going to be used for VFX/movies, gaming, or commercial/personal work, it will benefit you to have cleaner shapes to bake maps from, to retopologize, or just to clean up and use as-is. It is important to understand, and actually experiment for yourself, how these booleans behave and interact when you start cutting, slicing, and filleting. A good thing to note is that this program tries to make operations that would function in the real world, so it is important to try and think “realistically”.


Protip: it is extremely helpful to use planes from existing bodies as the base for your sketches, whether for slicing a body in half or for starting another body; this improves the accuracy of your boolean. When creating a boolean object, it’s best to approach it as if it were a solid block of metal and your modeling tools were the bits in a milling machine. Notice in the photo on the previous page that when making those slices (Slice being the named process in Fusion), I treated them just as described. You are trying to emulate the manufacturing process. Booleans offer more than just subtractive modeling; it is an additive and subtractive process. Knowing that, you can take two or more separate bodies and blend between them using a loft to create very unique and hard-to-model shapes, as shown below.

Lofting between two different bodies to create a single, unified body that blends between the two.


When modeling using booleans, it is important to start extremely basic with your shapes and move up to more and more complex cuts. Think of it in ZBrush terms: you want to start with your blocky basemesh, and after you nail down all those shapes, you start adding the features. Whatever you want to build will come from a large mass that you either create out of the standard bodies or from curves/sketches that generate a block mass to cut into. When I start modeling, I begin with a giant chunk. It’s usually bigger than what is needed, and it gets cut and trimmed down to size, or added onto if need be. If I need more control and accuracy, I’ll use curves or sketches. These are what I consider the most effective methods of cutting booleans to shape. If you are going to build the mass from curves, which I prefer most of the time, there are rules that will make the body it outputs easier to change and handle. When drawing sketches or using curves, it is important to place as few curve points as necessary. This allows you to edit the shape on a more detailed level and speeds up the workflow.

This is how I start 90% of my projects. For curved parts with an angle, you are going to want to make use of the fillet and bezier functions on sketches/curves for nice rounded sides, but it’s best to hold off filleting any sharp edges that you can and fillet them once the sketch becomes a solid body. If you look at the picture, most of the edges are left hard and sharp. This is because I have more plans for them later. The only edges filleted are the edges of that cylinder, and that is it for now.


Blocking in shapes and using the sketches/curves to trim down or add to sections.


From this point, you can use sketches/curves to trim down the body, or you can start booleaning and combining more bodies to get the shape and volume that is desired. You do this by selecting multiple bodies, combining them, and filleting between them. This creates those nice TIG-welded areas that end up catching light in your final render. Bodies that intersect and were combined are now a single object (body), allowing you to fillet and chamfer between edges that would otherwise take a ton of time welding verts and polys to patch everything up, and then applying edge loops to get the desired effect. In Fusion, this process is almost instantaneous. You can chamfer as well; it’s really up to personal preference.

Emulating Life with Chamfers and Fillets

It is very tempting to chamfer and fillet all of your objects early on. This is understandable, because it makes the model look amazing. The problem is that when you fillet, you are creating a rounded edge that is mathematically generated. Once the fillet is set, you really can’t work that edge anymore besides pulling it to create a sharper fillet, and even then it might not agree and function. In fact, in Fusion this makes modeling around multiple fillets a big inconvenience. So, it’s best to hold off and only chamfer your edges until you absolutely need a curve to define the shape of that body. Personally, I like to add chamfers and then worry about the fillets later; that way you don’t have to look at unrealistically sharp, unchamfered edges that don’t catch light. Chamfers are a little more forgiving: a chamfer creates a flat face between the two faces adjoining an edge, and you can adjust its width and angle to your liking. This can’t be done with fillets when Fusion’s history is turned off, so be aware. You can, however, fillet the chamfered edges much later on, giving the model a really sophisticated look with all those edges catching light in a realistic way.

When I’m this far into the model is when I start to add a nice amount of chamfers.


You can start adding a fillet here and there to define a big shape’s outline. It’s important that your model catches light realistically so you can get a feeling for how the chamfers and bevels will read. You want them to read properly, not as a ton of randomly sized chamfers and fillets. You can boolean with an object of any complexity, but if you are planning to add it later on, when the object is already very worked over, it is best to use it as a boolean once it is already filleted and chamfered to your liking. My reasoning is that you don’t want to spend all your time filleting and chamfering all those tight and hard-to-reach corners after the shape is tucked inside a crevice; filleting after booleaning on a complex object is extremely time consuming in itself. To make all those offset rings on the inside, I usually build sketches on the planes to use not only as guides, but as slice templates to slice the object in half. The amazing thing about this is that there is no patching or welding when you slice or cut, which cuts out the down-time of patching polys and speeds up your workflow immensely. When doing this, it should always be done on clean edges. The edges that I would consider clean are the edges that have not been filleted at the corners of the body.

When it comes to offsetting edges, it’s best to leave the fillet till the end.


It is important to note that using booleans that already have issues to cut other objects/bodies will cause problems that compound later on in your file. This ranges from export issues, to polys exploding in some areas come export time, to just a good ol’ fashioned crash.

Stamps: The Ultimate Boolean

Stamps... what are stamps? They are the amalgamation of all your booleaning knowledge into, you guessed it... a stamp. A stamp is perfect for when you have blocked in the main forms and you want to start punching in complex details immediately with fantastic results. This is what I consider booleans’ strongest selling point. These prefabricated (by you) boolean shapes, known as ‘stamps’, are used to either combine, subtract, or intersect into the model. This will speed up your modeling workflow by leaps and bounds.

Above is the stamp I will be using to boolean out these shapes.


When creating stamps, it’s important to know that you will be punching out the inverse of whatever the stamp’s shape is. If you do not want that, punch the shape into a clean block first and then trim off the excess. You now have the original shape as a convex stamp that can be punched in, giving the desired effect.

At this point in the model I have just been merging other bodies, filleting between them, and stamping with pre-made shapes.


Exporting

The one thing you want to keep in mind when using booleans in a program such as Fusion is how you are going to deal with the body after you are finished creating it. Exporting your boolean to other programs with proper normals is very important. While this type of modeling is invaluable, the raw output does not work well in a game engine or in production for movies/VFX: it’s triangulated and extremely ugly. Be that as it may, there are two exporting methods I have personally experimented with to great success.

.STL Just do it

First, there is STL. I love the STL format. It usually has the fewest issues when you export, and you can set the density of the mesh; there are multiple settings and a slider, so you can get really specific with poly density. This is awesome for quick cleanup for baking. Its only downside is that you can only export one .stl at a time, and with a scene with a lot of pieces that can take a long time. It comes out triangulated, but if it’s set to a low enough resolution and the body only has chamfers, cleanup is pretty easy. For anything more complex that has been filleted to the nth degree, that’s where your retopo programs come in handy, i.e. TopoGun, 3D-Coat, Maya, and ZBrush, to name a few.

Getting the file ready for export as .STL.


When importing the STL into Max, make sure to select “quick weld” in the import options and don’t touch anything else. This is the best way to make sure your normals look as good as they did when you were working on the boolean. I have found that anything else leaves you with weirdly facing normals that will ruin the look you are going for. You want to take this beautiful boolean from one program and preserve its normals and facing angles from program to program. Maya imports the STL format fine; the problem is that there is no equivalent of Max’s quick-weld option in its import menu, resulting in weird, pinched-looking normals. Oddly enough, if you go through Max to Maya it ends up looking great!

IGES for Big and Dense Files

Then there is IGES, which is great, but it comes with a few caveats. First off, it doesn’t seem to import well into Maya, but does just fine for KeyShot and Max. Also, when IGES files are imported into 3ds Max, they come in as their own IGES group and must be removed from that group by literally clicking and deleting the IGES group until there are just bodies left in the browser. Then you can apply an ‘Edit Poly’ modifier to get them ready for cleanup or UVs, and they are ready for OBJ export. Otherwise, the IGES format is amazing if you have a ton of pieces in your boolean set that you need exported ASAP as a single file, as opposed to piece by piece, file by file, like STL.

This is all the IGES data now converted to an OBJ. This geo is great to retopo and bake.

This is how you import .stl files cleanly while preserving all the normals information.


Conclusion

This type of modeling is the dawn of a new, detail-based modeling era. It’s fairly new, and just like ZBrush once was, I see this type of modeling – especially in Fusion – as a real up-and-coming addition to your workflow. While retopo programs have yet to perfect retopologizing hard-surface models, it isn’t too hard to clean up the triangles and make a nice clean mesh to use in-engine for baking the normals back onto. The benefits of using boolean-based models are undeniable, from baking sharp, very detailed normal maps to simply getting the concept into 3D as a guideline to use in other programs. The purpose of this article is to shed light on a workflow that is very young but yields amazing promise. I assure you that if you take the time to explore this type of modeling, you will be able to speed through complex parts that would take days to do in other programs, with an unprecedented amount of precision.

About Me

I have been in the industry for 4 years, working on feature films, games, and VFX. I primarily work as a Modeler/Generalist in California, United States. I like getting down and dirty with my polygons and I never sleep. I have always used the NURBS modeling mode in Maya, and it translates over nicely to Fusion 360; it’s all the same ideals and methodology. I have a huge passion for working on environments and hard-surface pieces and props. I’m always trying new programs, because you never know when that next big program will be worked into the pipeline.

Hunter Rosenberg

w w w . h r o s e n b e r g . c o m


Maciej Kuciara

www.facebook.com/showtimebook



Rafael Grassetti

www.grassetti.wordpress.com



THE RAIDER

Creating The Raider Image by: Steven Stahlberg

Why did I make this image?

I do a lot of boring, repetitious work for pay – especially back then for the Elder Scrolls: hair styles, beards, jewelry, etc. Every once in a while, I try to make an image just for myself. This idea came from the 2013 Tomb Raider Reborn contest, to create an iconic Lara Croft image in time for the launch of the new game. I never planned to submit it because it broke the rules right from the start – it used the old Lara outfit and not the new one. The idea just jumped out at me when I read about the contest and had a passing thought like, “I wonder what I would do if I were entering the contest”, and bang... there it was. That doesn’t happen too often, so you have to catch it when it does.

Getting Started

I started with a pen sketch on paper. I always steal a stack of copy machine paper and leave it near my work station ready to catch ideas. Later, at home, I traced the sketch in Photoshop on my Wacom and then colored it. The colors naturally grew out of the lighting idea and the lighting idea came from my habit of trying to integrate the lighting inside the image and if possible even make it a main feature of the image. In my opinion, this improves logic, makes the image more self-contained and more interesting, and the elements more integrated. Interesting lighting is beautiful lighting. In this case, the flash of the guns is pretty much the only source of light, which I think works well aesthetically. It also increases the drama, emphasizing the violence of the explosions and freezing the moment as if with a camera flash.


But ironically, a little later, the idea changed into “Not Lara” - instead some generic protagonist with a laser rifle in place of hand guns. I thought the laser rifle looked more impressive than the hand guns. The laser bolt allows for even more creative lighting and for smoke rising, not right in front of the protagonist (obscuring her), but off to the side, and for a continuous burn – which in turn allows for a long stretched out wound and more tension in the action. Other changes included a tongue that now touches the protagonist, a bigger monster, and a stronger perspective (larger legs, smaller body). Problems I noted at this stage: the perspective of the monster is wrong. I haven’t decided on clothing yet because the character is still generic in my mind. The monster is also too generic.

Adding Details

Below, I worked out some more details. I gave her a pleated skirt (obviously because I’m a perv) and a generic background made from an interesting photo of trees I googled. I strengthened her hair blowing backwards, since I imagined the monster exhaling forcefully, maybe roaring in pain. This also happens to flip the skirt up. Hey, internal logic is everything! But the perspective is still wrong, giving the monster head a twisted, badly constructed feel. In fact, I only made it worse with my efforts. I think it was around this time that I decided I would eventually build the whole thing in 3D, even if just roughly, to make sure I got that damned perspective right. Blocking things out in 3D also helps with lighting.


I tried moving the eye down and changing the direction of the burn scar, but I don’t think it improved anything, rather the opposite. So…

Adding 3D

I took the scene into 3D by modeling some simple shapes in Maya. This only took a couple of hours, as I already had a poseable female figure handy. The rigging is crappy, but the shape and proportions are pretty good, and the topology is good for ZBrush sculpting. Concept artists and matte painters do this all the time, and I find it helps a lot. In this case it solved my perspective problem, though the pose is not quite resolved yet; she doesn’t have both feet firmly planted in a logical way. I also added rough lighting – it doesn’t take long and can also help a lot. I just added 4 point lights: 3 positioned along the upper part of the laser beam, and 1 placed where the beam exits.
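In Maya’s Python (maya.cmds), that kind of rough lighting pass only takes a few lines to script. The sketch below is hypothetical – the coordinates, colors, and intensities are made up – but it shows the idea of spacing point lights along the beam:

```python
import maya.cmds as cmds

# Made-up endpoints of the laser beam, purely for illustration.
beam_start = (0.0, 160.0, 40.0)
beam_end   = (0.0, 120.0, -20.0)

# Three lights spread along the upper part of the beam...
for i in range(3):
    t = i / 2.0  # 0.0, 0.5, 1.0
    pos = [beam_start[k] + (beam_end[k] - beam_start[k]) * t for k in range(3)]
    cmds.pointLight(position=pos, intensity=2.0, rgb=(1.0, 0.35, 0.2))

# ...and one where the beam exits the monster.
cmds.pointLight(position=(0.0, 90.0, -40.0), intensity=3.0, rgb=(1.0, 0.3, 0.15))
```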


I moved the tongue to be behind the girl, for several reasons. It obscures her less, seems more natural for the creature, and follows the flow of composition better.

Then, instead of spending a week sketching, painting, and re-painting detail in Photoshop, continuously guesstimating shading, color, and shape, I decided to spend a couple of days sculpting the simple Maya shapes in a 3D sculpting package.


I took the scene into ZBrush and started adding details. This is always easier to do in 3D than by painting, for several reasons: 1) you’re only thinking about one thing, the shape; 2) you can use symmetry and only need to model half the model; 3) coloring is a separate pass, making it possible to focus on it separately; 4) lighting is also separate, and automated, and therefore always better than what we can imagine and paint; 5) experimenting is quicker and easier, allowing ideas to flow more freely in a shorter time. At this stage, I started thinking that maybe, in addition to a 2D final image, I might also be able to make a 3D print out of it.

I added the character, refined the pose, detailed the gun, sculpted anatomy and started the hair. I made some simple fallen-down sock shapes – because I had now decided to make the character a Japanese school girl. It was the best I could come up with, to fit with the pleated skirt. Perhaps I should have made her a super-hi-tech sci-fi assassin in a tight cyber suit instead… but I’m so bored with those I could cry. Or, perhaps a medieval fantasy theme, complete with illogical non-functional female armor? Also incredibly boring to me. (Sorry if that’s your thing.) I added more scales to the monster. Also note the edits to the area that would be just above the upper lip on a human – adding some more interest instead of a boring straight line, which is what it was before.


Here I’ve sculpted the socks (from image-googled reference), finished the hair in a kind of stylized, cartoony way, made the tongue pointier, and detailed the gun further. I also made shoes for her, but they can’t be seen from this angle. The above is a hardware rendering from inside ZBrush, using Wax Preview and a skin shader downloaded from the internet. Here I’ve also been doing some painting; particularly effective is painting with the different automated masking options such as “Detail”, “Curvature”, and “Occlusion”, which take advantage of all the hard sculpting work that has already been done. This worked somewhat like Ambient Occlusion to really bring out the shapes, darkening grooves and highlighting scales.


Here, I’ve added the skirt modeled in Maya and a torn jacket, then exported the mesh back into Maya and rendered it in Mental Ray. I used Final Gather with a background color of medium grey and placed about 7 or 8 point lights along the path of the laser beam. I used a simple Phong shader native to Maya. (I only bothered to texture the monster head at this stage.) Now I started a series of renders that I planned to layer in Photoshop to better control the lighting. First I rendered one with no falloff on the point lights.

Then I switched the linear falloff back on (and increased intensity by a lot, otherwise there’s hardly any light at all).


Then I rendered a third image with no lights and only Final Gather.

These were then combined in Photoshop like this, layered with Screen mode. This is what all the layers look like at 100%. I also added a specular pass. As you can see this is much too bright, and also rather ugly with disgusting colors – it’s also too low resolution, both in the exported textures, and the overall rendering size. So, now begins a very extensive repainting process in Photoshop. First, I tuned the layers by making some of them more transparent, until the whole looked…


More like this. Then I collapsed the layers, raised the resolution and zoomed in to paint colors and cleaner details.
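As a side note, the Screen blend used for this layering is easy to reproduce outside of Photoshop if you ever want to script the compositing of render passes. Here is a small NumPy sketch (my own illustration, not part of Steven’s workflow; the pass names and opacities are invented):

```python
import numpy as np

def screen(base, layer, opacity=1.0):
    """Photoshop-style Screen blend: 1 - (1 - base) * (1 - layer),
    faded in by 'opacity' like lowering a layer's opacity."""
    blended = 1.0 - (1.0 - base) * (1.0 - layer)
    return base + (blended - base) * opacity

# Float images in the 0-1 range; random stand-ins for the actual render passes.
no_falloff   = np.random.rand(4, 4, 3)
with_falloff = np.random.rand(4, 4, 3)
final_gather = np.random.rand(4, 4, 3)
specular     = np.random.rand(4, 4, 3)

comp = screen(no_falloff, with_falloff, 0.6)
comp = screen(comp, final_gather, 0.5)
comp = screen(comp, specular, 0.35)
```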


The final image, but still quite low resolution.


Next, I had to raise the resolution and the only way to do that after the fact is by painting. I probably should have kept the Zbrush and Maya renders higher res, but the computer was struggling and I don’t mind painting. I had to make lots of little adjustments anyway, in Photoshop. It only took about 2 days. The final painting is on the left, on the right is the image before I began.


Here’s the final high-res image; notice I cropped the eye out. I felt that the skin of the monster was just too samey and boring when it occupied almost 70% of the image area.

Conclusion

I recommend using 3D to inform your 2D work, and I also recommend using 2D to augment your 3D work. I’ve heard some artists claim that both methods are cheating, but those artists are almost always not professionals. Also, keep practicing your drawing skills! Thanks for reading, I hope you liked it! Check out my graphic novel on Facebook, or on the website with the same name, “Android Blues”!

About Me

Hi, I’m Steven Stahlberg, a Swedish/Australian artist, 55 years old, with a wife, 2 grown sons, 3 motorbikes, and 3 cats. I’ve lived in Hong Kong, Malaysia, and Texas. When I made this tutorial I was working in Baltimore on the Elder Scrolls Online MMO; I currently work with Darkside Games in Florida. I’ve been interested in, and created, art my whole life, as far back as memory serves. My training happened in a couple of different art schools and then on the job. My influences are mostly comic artists, sci-fi book cover artists, and record cover artists. I dig music and play a bit, too. Comics, movies, and music are my big interests. I used to read a lot of sci-fi until I was about 30; after that I guess I just sort of ran out of free time.

Steven Stahlberg

www.patreon.com/Stahlberg


Khyzyl Saleem

www.artstation.com/artist/khyzylsaleem



Vince Rizzi

www.artstation.com/artist/vincerizzi



Silhouette Modelling

Back to the Basics of Character Modeling by: Steffen Unger

Intro

With ZBrush getting better and better at creating geometry, a lot of people, especially beginners, tend to forget simple and effective modeling methods which are easy to use. By creating basic shapes, it’s easier to follow concepts and iterate on them upon request. I do use DynaMesh when concepting in 3D, in cases where shapes are not clear from the start, but when working with already designed concepts consisting of complex and clean shapes, I would definitely not choose DynaMesh as my preferred approach. There is a time and place for all these methods, but this procedure suits me best for working on client projects with a given concept. When improvising, I usually use a mix of this method and DynaMesh.


I call it Silhouette Modelling as it starts with the object’s most prominent silhouette. I have always preferred it over other classical poly modelling methods such as box modelling and edge-by-edge modelling. Box modelling is usually a backwards thought process – you take a box and subdivide it to deform it into shape. Edge-by-edge starts with the topology in mind, making it easier to create certain shapes, but I don’t want to worry too much about topology at the early shape-definition stages. I concentrate on the big shapes instead of taking care of topology and edge flow. I will now demonstrate it on one of the characters created for the Microsoft & Moon Studios game “Ori and the Blind Forest”. This is one of the concepts for Ori by Johannes Figlhuber, which got changed during the process and iterated upon in 3D, so the result will not follow this concept entirely. I picked Ori as he is a pretty simple character to demonstrate the approach on. The game is based on easily readable silhouettes, so it is a perfect fit for this demo. I use this technique for almost anything, be it the models I created for Halo, Crysis, or even complex stylized looks like Settlers. I usually start by drawing out my model parts from their most important angle. This is the side view for pieces such as the torso, head, ears, tail, and leg. I start with the angle that has the majority of pieces in it, so I have a lot of the landmarks set early on.

First I create one silhouette from the side view


The next step is to create the pieces that cannot be captured from the side. In the case of this model, this would be the arm and possibly the antlers. But in this example I will ignore these details to keep the visualisation of the workflow clean and simple.

Now it is time to establish the 90° angles from our initial silhouettes to add some depth to the whole model. While topology slowly starts to play a role here, this stage is still very simple and things are easy to change. At this stage I play a lot with the silhouette of things to make them match the concept but also work well in 3D. Oftentimes, things can get lost in translation from two dimensions into the third.

This is pretty much the core of the method; from here on it is only about adding more geometry to support the shapes. After this is done, I start adding most, if not all, back sides and make sure all parts are at least 4-sided. This way I make sure that the silhouette holds up at all possible angles. While still abstract, the model starts to take shape in all dimensions and can be viewed from pretty much any angle.


This makes it easy to tune things if needed. This is also the stage where I start taking care of the distances between edges so I get a nice and even topology later on.

Once you have everything set up to be 4 sided, tools like flow connect or set flow make the creation of in-betweens a lot easier

All in-betweens modelled in. From there on it is only about cutting in details, making connections between the joints, and just subdividing surfaces more to add detail and extra pieces like his antlers. So far most pieces have been kept to powers of 2: we started with 2 edges, made the pieces 4-sided and then 8-sided. Subdividing this way makes sure that connections between joints will be easy to create.


Final Mesh


Conclusion

Every method has its upsides and downsides, but with new approaches being introduced, old and established ones get neglected more and more, even though they still have a lot of substance. Beginners starting in our field only see the latest stuff and follow those approaches, without ever learning how simple it is to create clean meshes in a very fast way. DynaMesh, as great as it is, is not the best workflow when it comes to following a concept, iterating from it, and making it work in 3D. It is always simpler to have a handful of polygons define a certain shape and then subdivide it to make it smooth, rather than having to push and pull around thousands of polygons and smooth out the bumps one creates in that process.

About Me

I am the founder and lead character artist at Airborn Studios, a small game-art outsourcing company from Berlin, Germany. I joined the industry in 2001, starting as an environment artist but quickly switching over to character art. After 5 years of working on the same project, it was time for a change, so I went freelancing and had the chance to work on many more projects than I would have had I stayed in-house. By now I have worked on nearly 50 game productions, though only around 50% of those became real products or will soon. I am very happy I chose this path: I got to work with giant studios on huge projects, but also on smaller ones with more creative freedom, like Ori and the Blind Forest.

Steffen Unger

www.artstation.com/artist/airbornstudios


Layna Lazar

www.artstation.com/artist/lazar



Markus Neidel

www.artstation.com/artist/markus



Voxel House

Breaking Down the Voxel House Demo by: Oskar Stålberg

Introduction

My projects typically revolve around some central idea that I want to explore. Here, that central idea is a particular content-driven approach to modular tilesets that I’ve had on my mind for a while. This project could have been created as a Python script in Maya or a node graph in Houdini. However, since I don’t want my final presentation material to be a dull narrated YouTube clip set in a grey-boxed Maya scene, I created an interactive web demo instead. As a tech artist, the breadth of my skill set is crucial; I’m not a master artist nor a proper coder, but I’ve got a slice of both in me. I’m most comfortable in the very intersection of art and tech, of procedure and craftsmanship. A web demo is the perfect medium to display those skills.

Figuring out the Tiles.

The core concept is this: the tiles are placed at the corners between blocks, not at the centers of the blocks. The tiles are defined by the blocks that surround them: a tile adjacent to a single block in one corner would be 1,0,0,0,0,0,0,0; a tile representing a straight wall would be 1,1,1,1,0,0,0,0.


Since each corner is surrounded by 8 possible blocks, each of which can be in one of 2 possible states – existence or non-existence – the number of possible tiles is 2^8 = 256. That is way more tiles than I want to model, so I wrote a script to figure out which of these tiles were truly unique, and which were just rotations of other tiles. The script told me that I had to model 67 unique tiles – a much more manageable number.

I could have excluded flipped versions of other tiles as well, which would have brought the number down even further. However, I decided to keep those so that I could make some asymmetrically tiling features. The drain pipes you see in the concave corners of the building are one example of that.
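A script along these lines can do that counting. The following is a minimal Python sketch of the idea (not the actual Maya script): each corner configuration is packed into 8 bits, rotated around the vertical axis, and reduced to one canonical representative per rotation group:

```python
# Each corner sees a 2x2x2 neighbourhood of blocks. Index the 8 blocks as
# (x, y, z), with x and z in the horizontal plane and y pointing up, and pack
# their filled/empty states into an 8-bit number.
def rotate_y(config):
    """Rotate a corner configuration 90 degrees around the vertical axis."""
    rotated = 0
    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                bit = (config >> (x * 4 + y * 2 + z)) & 1
                nx, nz = z, 1 - x  # 90-degree turn in the horizontal plane
                rotated |= bit << (nx * 4 + y * 2 + nz)
    return rotated

def canonical(config):
    """Smallest value among the four rotations = one representative per group."""
    best = config
    for _ in range(3):
        config = rotate_y(config)
        best = min(best, config)
    return best

unique_tiles = {canonical(c) for c in range(256)}
print(len(unique_tiles))  # 70 groups under rotation; dropping configurations
                          # that need no geometry (empty, fully enclosed, etc.)
                          # is presumably what brings it down to the 67 tiles
                          # that were actually modeled.
```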

Boolean setup in Maya.

Being the tech artist that I am, I often spend more time on my workflow than on my actual work. Even accounting for rotational permutations, this project still involved a large number of 3D meshes to manually create and keep track of. The modular nature of the project also made it important to continuously see and evaluate the models in their proper context outside of Maya. The export process had to be quick and easy, so I decided to write a small Python script to help me out. First, the script merges all my meshes into one piece. Second, a bounding box for each tile cuts out its particular slice of this merged mesh using Maya’s boolean operation. All the cut-out pieces inherit the name and transform from their bounding box and are exported together as an FBX. Not only did this make the export process a one-button solution, it also meant that I didn’t have to keep my Maya scene that tidy: it didn’t matter what meshes were named, how they were parented, or whether they were properly merged or not. I adapted my Maya script to allow several variations of the same tile type, and my Unity script then chose randomly from that pool of variation where it existed. In the image below, you can see that some of the bounding boxes are bigger than the others. Those are for tiles that have vertices that stretch outside their allotted volume.
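In maya.cmds terms, the heart of such a script could look something like the sketch below. This is a simplified, hypothetical reconstruction (the names are made up and the FBX export step is omitted), but it shows the merge-then-intersect idea:

```python
import maya.cmds as cmds

def cut_tiles(source_meshes, bounding_boxes):
    """Merge all tile meshes into one, then boolean-intersect a copy of the
    merged mesh with each tile's bounding-box cube to cut out its slice."""
    merged = cmds.polyUnite(source_meshes, ch=False, name='tiles_merged')[0]
    slices = []
    for box in bounding_boxes:
        mesh_copy = cmds.duplicate(merged)[0]
        box_copy = cmds.duplicate(box)[0]
        # op=3 is the intersection boolean in polyCBoolOp
        result = cmds.polyCBoolOp(mesh_copy, box_copy, op=3, ch=False,
                                  name=box + '_slice')[0]
        slices.append(result)
    return slices
```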


Ambient Occlusion

Lighting is crucial to convey 3D shapes and a good sense of space. Due to the technical limitations of the free version of Unity, I didn’t have access to either real-time shadows or SSAO – nor could I write my own, since free Unity does not allow render targets. The solution was found in the blocky nature of this project: each block was made to represent a voxel in a 3D texture. While Unity does not allow me to draw render targets on the GPU, it does allow me to manipulate textures from script on the CPU. (This is of course much slower per pixel, but more than fast enough for my purposes.) Simply sampling that texture in the general direction of the normal gives me a decent ambient occlusion approximation. I tried multiplying this AO on top of my unlit color texture, but the result was too dark and boring, so I decided on an approach that took advantage of my newly acquired experience with 3D textures: instead of just making pixels darker, the AO lerps the pixel towards a 3D LUT that makes it bluer and less saturated. The result gives me great variation in hue without too harsh a variation in value. This lighting model gave me the soft and tranquil feeling I was aiming for in this project.
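Conceptually, the per-texel work looks something like the NumPy sketch below (my own illustration of the idea; the real version was a Unity script running on the CPU, and the sampling distance and shadow tint here are made-up values):

```python
import numpy as np

def bake_ambient_term(occupancy, positions, normals):
    """occupancy: 3D array of 0/1 block voxels; positions/normals: (N, 3) arrays.
    Step a short distance along each normal and sample the occupancy grid:
    occupied space in that direction means the point is more occluded."""
    ao = np.empty(len(positions))
    upper = np.array(occupancy.shape) - 1
    for i, (p, n) in enumerate(zip(positions, normals)):
        sample = np.clip(np.round(p + n * 1.5).astype(int), 0, upper)
        ao[i] = 1.0 - occupancy[tuple(sample)]
    return ao

def apply_ao(albedo, ao, shadow_tint=np.array([0.55, 0.6, 0.8])):
    """Instead of multiplying towards black, lerp towards a cooler, desaturated
    tint where the AO term is dark (a 1D stand-in for the 3D LUT idea)."""
    t = (1.0 - ao)[:, None]
    return albedo * (1.0 - t) + (albedo * shadow_tint) * t
```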

Special Pieces


When you launch the demo, it will auto-generate a random structure for you. By design, that structure does not contain any loose or suspended blocks. I know that a seasoned tool-user will try to break the tool straight away by seeing how it treats these types of abnormal structures. I decided to show off by making these tiles extra special, displaying features such as arcs, passages, and pillars.


Floating Pieces

There is nothing in my project preventing a user from creating free-floating chunks, and that’s the way I wanted to keep it. But I also wanted to show the user that I had, indeed, thought about that possibility. My solution was to let the free-floating chunks slowly bob up and down. This required a fun little algorithm to figure out, in real time, which blocks were connected to the base and which weren’t: the base blocks each get a logical distance of 0; every other block checks whether any of its neighbors have a shorter logical distance than itself, and if so, adopts that value plus 1. Thus, if you disconnect a chunk, there will be nothing grounding those blocks to the 0 of the base blocks and their logical distance will quickly go through the roof. That is when they start to bob. The slow bobbing of the floating chunks adds some nice ambient animation to the scene.
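The rule is essentially an incremental flood fill. Here is a small Python sketch of it as described (my own reconstruction, with made-up data structures; blocks are identified by keys in a few dictionaries):

```python
INF = 10**9

def update_logical_distances(blocks, neighbors, is_base, distance):
    """One relaxation pass: base blocks stay at 0, every other block adopts
    (shortest neighbour distance + 1). Repeated every step, so blocks in a
    disconnected chunk quickly drift towards very large values."""
    for b in blocks:
        if is_base[b]:
            distance[b] = 0
        else:
            best = min((distance[n] for n in neighbors[b]), default=INF)
            distance[b] = min(best + 1, INF)
    return distance

def is_floating(block, distance, threshold=1000):
    # Once a block's logical distance "goes through the roof", start bobbing it.
    return distance[block] > threshold
```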

Art Choices

Picking a style is a fun and important part of any project. The style should highlight the features relevant to a particular project. In this project, I wanted a style that would emphasize blockiness and modularity rather than hiding it. The clear green lines outline the terraces, the walls are plain and have lines of darker brick marking each floor, the windows are evenly spaced, and the dirt at the bottom is smooth and sedimented in straight lines. Corners are heavily beveled to emphasize that the tiles fit together seamlessly. The terraces are supposed to look like cozy secret spaces where you could enjoy a slow brunch on a quiet Sunday morning. Overall, the piece is peaceful and friendly - a homage to the tranquility of bourgeois life, if you will.

Animation

It should be fun and responsive to interact with the piece. I created an animated effect for adding and removing blocks. The effect is a simple combination of a vertex shader that pushes the vertices out along their normals and a pixel shader that breaks up the surface over time. A nice twist is that I was able to use the 3D texture created for the AO to constrain the vertices along the edge of the effect - this is what creates the bulge along the middle seen in the picture.


Conclusion

The final result is like a tool, but not quite. It’s an interactive piece of art that runs in your browser. It can be evaluated for its technical aspects, its potential as a level-editor tool, its shader work, its execution and finish, or just as a fun thing to play around with. My hope is that it can appeal to developers and laymen alike. In a way, a web demo like this is simply a mischievous way to trick people into looking at your art longer than they otherwise would.

About Me

I’m Oskar Stålberg. I grew up in the university town of Uppsala, Sweden. I have two brothers, and the three of us share a love for math, science, and problem solving. I do, however, beat them at drawing, so it was obvious to me from a young age that I had to do something related to that. After secondary school, I spent two incredibly stressful yet fruitful years at The Game Assembly learning all the basics of game art. Ubisoft Massive picked me up as a technical artist, and that’s where I’ve been since. I like creating things that are beautiful, interactive, and responsive, and I value elegant solutions. I am most comfortable in the very intersection between tech and art, and I’m always on the lookout for new such intersections.

Oskar Stålberg

w w w . o s k a r s t a l b e r g . c o m


Dmitriy Barbashin

www.artstation.com/artist/dmitriy_barbashin



Jang wook Kim

www.artstation.com/artist/azure



Matte Painting

Alien: Isolation’s Matte Painting Workflow by: Creative Assembly

Introduction

During the development of Alien: Isolation, we faced many interesting technical challenges. One of them was that in a game focused on highly detailed, small interior environments, we really needed to have areas where the player is able to see “outside” of the Sevastopol space station, which the game takes place on. Being able to see the exterior of the station and the vast emptiness of space outside was key in contextualizing the player’s location in the world, as well as signposting and foreshadowing the player’s journey through Sevastopol.


There was no way we could model, texture, and light pieces of architecture which were in some cases up to 0.5km tall in real world scale using the same technology and methods we were using to build the rest of the game. We needed a solution to render detailed distant backgrounds – and cheaply. The solution we settled on was to use digital matte paintings – a technique often used in the Visual Effects industry to extend the backgrounds of shots in film and TV. Our approach was to composite a digital image generated offline (rendered using VRay from highly detailed scenes, and then painted over by a concept artist) into the game at runtime.

An example of a matte painting used to depict a distant background in film VFX (Cloud Atlas, 2012).


Process Breakdown

Our process for the production of each matte painting would broadly break down into several stages:

· High-res geometry and detailed lighting are set up in 3DS Max
· The scene is rendered at high resolution using a high-end offline renderer, in our case VRay
· The composited renders are passed on to a concept artist to do final paintover and detailing
· The final matte paintings are projected back from the original camera, onto low-res geometry to be used in the game

Camera and hi-res geometry setup


VRay Render



Paintover

Proxy Geometry

The process of creating a render from VRay and then working up to the final matte painting in Photoshop was relatively straightforward, but when it came to correctly rendering the final projected matte paintings in-game, there were several obstacles to overcome.


Final Matte Image


Final Matte Image


First steps: Camera-aligned UVs

So our goal was to project a single, visually coherent painting onto arbitrary geometry from a camera. Our first approach to the projection was to simply align the UV coordinates in 0-1 space relative to their position in the camera’s screen-space using the “Camera Map” modifier in 3DS Max. This approach caused a number of artefacts.


These issues manifested as all kinds of triangular stretching artifacts which appeared when the texture was applied to the geometry, especially on faces whose normals were near perpendicular to the camera view angle and/or with high-FOV cameras, as illustrated below:

Test Scene

Test Camera View

What we quickly realized was that the traditional quadrilateral UV interpolation was not suitable for what we were trying to do.


Result

Desired Result

Technical Hurdle #1 – UV Interpolation

Using quadrilateral texcoord interpolation, we get the typical diagonal seams – this is exacerbated by using sparsely tessellated geometry. If we take our “desired result” from the previous diagram, you’ll see that what we want is for our UV interpolation to be linear in the screen-space of the camera. However, this means that the UV interpolation will be non-linear in world-space:


What we see here is a typical and well-known situation where quadrilateral texcoord interpolation breaks down. What we need to do is use projective interpolation – the inverse of the math transform commonly used to map a 3D scene onto a 2D image. In short, the fix was to do the inverse of the world-view-projection for our UVs on a per-pixel basis in our shader, based on the camera which was being used to project the texture. In our final implementation, we used a standalone script in 3DS Max to bake the projection matrix of the camera into the constants of the shader which would be used on the geometry. The final shader doesn’t actually use the UV coordinates from the model at all (we didn’t even export UV coords to the final models, to save on memory). Instead, the shader uses the stored projection matrix from the camera to reconstruct projected vertex positions in object-space. This means that the geometry can be edited and the projection will update in real time; because we store the projected vertex positions in object-space, it also means that we are free to move and reposition the matte paintings (in-game geometry) at runtime.

Technical Implementation / Shader Code:

Let’s take a quick look at the math behind perspective projection. We will start with a simplifying assumption about the camera and then extend our solution to work for all cases. Suppose that the camera is located at the origin, facing along the Z axis. We want to project a given point P onto a virtual screen situated somewhere in front of that virtual camera. Let us also assume that the screen is one unit away from the camera. It is a rectangle 2 * a units wide, and 2 * b units tall, where a and b depend on the field of view:

a = tan(horizontal fov / 2)
b = tan(vertical fov / 2)

Our goal is to find the coordinates of point P', which lies at the intersection of the virtual screen and a line from the origin towards P. From Thales' theorem, we get:

P'.x = P.x / P.z
P'.y = P.y / P.z
P'.z = 1

We can discard the z coordinate, as it will always be 1, but we are going to use x and y to calculate the projected texture coordinates. The x coordinate of all visible points will be between -a and a, whereas the y coordinate will be between -b and b. Assuming that the texture origin is in the lower left corner, the u and v coordinates will be:


u = P'.x / (2*a) + 0.5
v = P'.y / (2*b) + 0.5

Substituting the equations for P' into the above, we get:

u = (0.5/a * P.x) / P.z + 0.5
v = (0.5/b * P.y) / P.z + 0.5

Lastly, let us also rearrange the terms a bit:

u = (0.5/a * P.x + 0.5 * P.z) / P.z
v = (0.5/b * P.y + 0.5 * P.z) / P.z

We now have the formulae to calculate texture coordinates from any vertex in the scene. If we perform this calculation for every pixel, we will have the correctly interpolated coordinate to use in a texture lookup. In that case, our vertex shader would simply pass P into the pixel shader. But we can move a part of the calculation into the vertex shader, so that we calculate as little as possible per pixel. The parenthesized expressions above are only linearly dependent on P, and anything that is linearly dependent on the inputs to the pixel shader can be moved to the vertex shader. Therefore, the vertex shader will calculate a vector V:

V.x = 0.5/a * P.x + 0.5 * P.z
V.y = 0.5/b * P.y + 0.5 * P.z
V.z = P.z

Then, the pixel shader must only perform the final division:

u = V.x / V.z
v = V.y / V.z

Finally, the system of equations for calculating V can be expressed in matrix form:

[ 0.5/a    0      0.5   0 ]
[   0    0.5/b    0.5   0 ]
[   0      0       1    0 ]
[   0      0       0    1 ]

This projection matrix can then be concatenated with a camera transformation matrix, thereby allowing arbitrary positioning of the matte painting viewport. We used a MaxScript tool to calculate the concatenated view-projection matrix and save it as a shader constant. Our final shader performs two scalar divisions in the pixel stage, and a single vector-matrix multiplication in the vertex stage (in addition to the calculations necessary to render the geometry, of course).
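To make the math above concrete, here is a small NumPy sketch (an illustration only – the production version was a MaxScript exporter plus the in-game shader; the FOV values and the test point are made up) that builds the projection matrix and performs the per-pixel division for a camera-space point:

```python
import numpy as np

def matte_projection_matrix(h_fov_deg, v_fov_deg):
    """The 4x4 matrix derived above, mapping camera-space P to V."""
    a = np.tan(np.radians(h_fov_deg) / 2.0)
    b = np.tan(np.radians(v_fov_deg) / 2.0)
    return np.array([
        [0.5 / a, 0.0,     0.5, 0.0],
        [0.0,     0.5 / b, 0.5, 0.0],
        [0.0,     0.0,     1.0, 0.0],
        [0.0,     0.0,     0.0, 1.0],
    ])

def project_uv(point_camera_space, proj):
    """'Vertex stage': V = proj * P; 'pixel stage': divide by V.z."""
    p = np.append(point_camera_space, 1.0)  # homogeneous coordinate
    v = proj @ p
    return v[0] / v[2], v[1] / v[2]

M = matte_projection_matrix(90.0, 59.0)
print(project_uv(np.array([0.4, 0.3, 2.0]), M))  # u, v in the 0-1 range
```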

Technical hurdle #2 – Parallax

Another problem we encountered was that geometry overlapping from the POV of the camera would cause the far geometry to receive the projection intended for the object in front of it. This is fine when viewed from the exact position the matte painting was projected from, but as the viewer moves further away from that point, the artifacts become more obvious:


In offline rendering, you would typically separate each layer of parallax into a separate image/layer, but in realtime rendering we have to be mindful of texture memory constraints so this isn’t a viable approach:

The ideal solution would be to support multiple projections in one texture, with manual or automatic packing. Unfortunately, due to game production pressures, we couldn’t justify the time to create a proper workflow to support that, and the amount of wasted or “unused” pixels in the texture is likely to be quite large anyway. Therefore, our solution was to use regular UV mapping for some parts of the mesh (usually anything which did not produce the kind of UV artifacting which required us to use projective UV interpolation). So, our matte painting shader was split into two in-game shaders which are identical, except that one uses regular UVs with quadrilateral interpolation and one uses our previously discussed projective approach. These will be referred to as the Projection Shader and the UV Shader (in real-world terms we used an ubershader approach with multiple discrete features). What we would do here is render our overlapping parallax layers out as two separate images; we would then composite them in such a way that the objects in each painting are not overlapping (in this case by moving only the red teapot into unused space on the image):


Now, using the same approach as our very first initial render, we do a camera-space projection for the object UVs (in our case using the Camera Map modifier in 3DS Max) and then manually offset the UVs by eye to align them to the correct part of the texture:


This results in an image which doesn’t have incorrect projection artifacts when viewed from an angle that is quite different to the original camera’s position and orientation:

However, using the UV Shader does reintroduce the triangular seams we had before we added perspective UV interpolation to the Projection Shader, as we’re using quadrilateral UV interpolation again. Luckily, those interpolation artifacts don’t appear universally across all pieces of geometry, so our approach was to use the Projection Shader as infrequently as possible, only where UV interpolation artifacts were obviously present, and to use the UV Shader as the dominant method. This worked very well for us and allowed us to get the same quality from one matte painting texture where, in one particular case, we would otherwise have needed up to 11 separate paintings to eliminate the same parallax artifacts. In some cases, though (below), if the shot presented too-severe perspective distortion issues or there wasn’t enough unused space on the texture sheet, we did split different depth layers up into separate paintings:


Finished Matte Paintings

Top: Artist-delivered matte painting with several overlapping layers. Bottom: Final texture used in the game. Note that the bottom-left and bottom-right segments of the final texture remain in the same place as they are projected using the Projection Shader, all other depth layers in the image have been repositioned to fit on the same texture page with no overlapping so that they can use the UV Shader.


Green: Uses Projection shader. Red: Uses UV Shader

Ingame screenshot.


Top: In-game screenshot. Bottom: Green areas are matte painted. Note: all of the matte paintings were stored in DXT1 (or BC7 on next-gen), including a 1-bit alpha mask stored in the alpha channel of the texture. This adds a lot of silhouette information beyond the geometry silhouette and allows us to use just a plane for some parts of the projection – very low-poly!

Conclusion

In summary, this approach allowed us to use extremely cheap lowpoly geometry, shaded using an extremely cheap shader (yet, unfortunately, using quite a high resolution DXT1 texture, usually 1024x2048 or above on last-gen) to provide very convincing distant vistas in the game. We provided the game with richly detailed backdrops for a reasonably minimal realtime performance cost and I would strongly recommend the approach to anyone looking to fulfill similar goals on any time frame, budget, or platform. The development time for each matte painting was reasonably minimal, averaging around a week for offline geometry/shading/lighting setup and rendering, a day or two for concept art paintover and a couple of hours for in-game implementation.



Special Thanks To:

Stefano Tsai - Bradley Wright - Ben Hutchings - David Foss

About Me

I worked on Alien: Isolation as a graphics programmer for a bit over three years. I mostly focused on lighting and character rendering systems as well as developing a global PBR rendering implementation, but also enjoyed crafting custom workflows and scripts for the art team. I worship the teapot and sometimes wear a lab coat at work.

Tomasz Stachowiak

About Me

I’m a Technical Artist at Creative Assembly and have been working in games for around 5 years. I am 26 years old and currently live in Brighton, UK. I’ve always been a huge fan of science fiction, so the chance to work on an Alien game was a dream come true for me. I worked on Alien: Isolation for around 3.5 years, mostly on PBR materials/shading development, art tools, matte paintings, and performance optimisation.

Mark Sneddon

Alien: Isolation, Alien, Aliens, Alien 3 TM & © 2014 Twentieth Century Fox Film Corporation. All rights reserved. Twentieth Century Fox, Alien, Aliens, Alien 3 and their associated logos are registered trademarks or trademarks of Twentieth Century Fox Film Corporation. Alien: Isolation game software, excluding Twentieth Century Fox elements © SEGA. Developed by The Creative Assembly Limited. Creative Assembly and the Creative Assembly logo are either registered trade marks or trade marks of The Creative Assembly Limited. SEGA and the SEGA logo are either registered trade marks or trade marks of SEGA Corporation. All rights reserved.


Gilberto Magno

www.artstation.com/artist/gilbertomagno



Cathleen McAllister

www.artstation.com/artist/catconcepts



Efficient Highpoly Modeling

Approaches and Techniques by: Simon Fuchs

Introduction

In this tutorial, I will describe my approach to creating highpoly models that are used as source meshes for ingame bakes. I will show you how I handle creating highpoly assets from the initial design to the final model. In addition, I will go into detail on a few key techniques that will help you quickly add detail to your models and become more efficient while working. Specifically, you will learn how to spline-deform and UV-deform geometry, and how to use ZBrush to cut holes into your meshes and fill them with detail pieces.

Mindset

Being an artist on a game team usually means that I have to deal with tight deadlines and can't afford to waste time at any stage in the process. I always need to be aware of how much time I have to finish a task. As a rule of thumb for estimating how long it takes to finish an asset, I usually multiply my initial estimate by a factor of 1.5 in an established production environment (defined art style, working pipeline, no technical challenges expected) or by a factor of 2.5 when working in a new production environment (art style/design not set in stone, pipeline not fully in place, technical challenges expected), as there are always challenges that come up during production. A good mindset when giving time estimates is to underpromise and overdeliver.
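As a trivial illustration of that rule of thumb (a rough sketch added for this article, not a studio tool), the padding could be expressed like this:

# Rough sketch of the estimation rule of thumb described above: pad your gut
# estimate by 1.5x in an established production and by 2.5x in a new one.

def padded_estimate(days, established_pipeline=True):
    factor = 1.5 if established_pipeline else 2.5
    return days * factor

print(padded_estimate(4))                              # 6.0 days
print(padded_estimate(4, established_pipeline=False))  # 10.0 days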


Game development is, by nature, an iterative process. Consequently, I need to be flexible and work in a way that allows me to react to changes quickly. Whenever I create something, I try to do it in a non-destructive way and set up my work so that I can modify it easily later on. Lastly, I want to be able to hit a certain level of detail and quality without spending a lot of time doing so. This is where experience and technique are important. You want to focus on working smarter, not harder. I will explain several useful techniques later on that help with this. To summarize, here is what you should focus on when creating any game asset:
- Underpromise and overdeliver when giving time estimates
- Be flexible and work non-destructively so that you can react to changes quickly
- Work smarter, not harder by learning some tricks and techniques

Blocking out the Initial Design

Modularity: When designing an asset, I'll make sure to use as much modularity within it as possible. This saves me a lot of time, as I only need to add detail to each module once. I work with references so that each module automatically updates when I change it. In addition, modules will save me modeling, baking, and texturing time later on in production. The following image shows a breakdown of the individual modules used on this asset. Notice how simple the general shapes are. It is not very time consuming to model these shapes, and because they are simple I can iterate on them very fast, relying on my detail library to add interesting design elements later. At this stage, I am purely focusing on the basic shapes, going from big shapes to small shapes until I have something that reads well from various distances. I am focusing on the silhouette, the general shape language, and readability without getting caught up in any detail work.

Symmetry: Symmetry is another powerful design technique that allows me to quickly achieve results without having to model many unique parts. I try to make use of it as much as possible given that the design of the asset allows it. Keep in mind that you can always make individual symmetrized elements unique by adding floaters or giving them their own unique texture space in the final lowpoly.


When working from preexisting concepts, I usually talk to my art director and concept artist before starting to model and ask if I can simplify elements or add modularity and symmetry. As long as the general look and feel of the asset stays the same, people are usually pretty open and will allow you to change things to make your life a little easier. Time is money, so I always try to be as efficient as I can: if slightly changing the look of the asset will save me a great deal of work, I won't hesitate to suggest it and get approval.

Flexibility: When modeling, I want to be able to modify my mesh quickly so that I can iterate fast. This means that I try to keep the control cage as light as possible. I try to avoid control edges and dense tessellation as much as I can so that I can modify the major shapes without having to deal with a lot of geometry.

There are various ways of achieving this. Most modeling tools today support OpenSubdiv, which allows you to crease your edges, giving them weight/hardness without having to use support edges. If you are on an older version of 3D Studio Max, you can also use different smoothing groups on your geometry with two TurboSmooth modifiers on top. Wherever two different smoothing groups meet, you will get a hard edge in your highpoly. In the following image, notice how light the control cages are on the left side and how you can still achieve hard edges in the highpoly using OpenSubdiv or two TurboSmooth modifiers.
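If you prefer to set that stack up through script rather than by hand, the hedged sketch below shows the idea using pymxs (which assumes 3ds Max 2017 or newer; the workflow described in this article is manual and does not depend on it). The property names are assumptions and worth verifying with showProperties on your version of Max.

# Hedged sketch: set up the "two TurboSmooths on a light control cage" stack.
# Assumes 3ds Max 2017+ with pymxs; verify property names (iterations,
# sepBySmGroups) with rt.showProperties() before relying on them.
from pymxs import runtime as rt

node = rt.selection[0]            # the lowpoly control cage, already selected

ts1 = rt.TurboSmooth()
ts1.iterations = 1
ts1.sepBySmGroups = True          # keep smoothing-group borders as hard edges
rt.addModifier(node, ts1)

ts2 = rt.TurboSmooth()
ts2.iterations = 2                # the second pass rounds off the creased result
rt.addModifier(node, ts2)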


Adding Detail to the Mesh

Create and use a detail library: Part of working smart, not hard, is working with a library of model pieces that can be reused throughout your project. After finishing an asset, I will go through the model and identify the elements I can reuse on other meshes. I will then save all of these parts in a separate model library file that I will reuse throughout the entire production. After doing this for some time, you will have a large number of pieces to work with. This method has several advantages:
- You will save a lot of time because you are able to quickly add detail to your model without having to model each new detail piece from scratch.
- By using the same detail pieces on a set of meshes, you can make sure that your art is consistent and fits within the defined art style.
- You can quickly iterate on the look of your model by trying out different detail pieces in the same area.
- You can share the model library with all the artists on your team so that everyone can contribute and benefit from pre-existing meshes. This can also be done between game teams or departments to get an even greater library. Think about asking other teams if they are willing to share their assets and try to reuse as much as possible to make your life a little easier.

In order to get the most out of your library, make sure to stick to the following rules for your library pieces:
- Evenly tessellate each detail piece so that it will support deformation without showing any pinching or artifacts. Each detail piece should be able to deform into extreme angles like circles or spirals without showing any pinching or mesh artifacts.
- Create a beginning, middle, and end piece for pieces that can vary in size. Think about creating a starting piece, a tiling middle piece, and an end piece for pieces that will not always be used tiling with themselves, like a handrail.
- Design tiling pieces and patterns when you can in order to allow them to be used in as many situations as possible. Each detail piece should be generic enough that you can change the length/height easily.
- Add unique detail pieces to your library as well. If you add these, other people can try them in different locations and use them to quickly kitbash a new asset.


Deforming library pieces and using them on the model: There are many techniques that I use to deform my detail pieces. The most basic one is using a Bend modifier in combination with an FFD modifier. First, I select the flat detail-piece geometry. Then, I apply an FFD modifier and give it some vertical deformation by moving the FFD control points up. After that, I apply the Bend modifier to get the radial deformation. Once I am happy with the shape, I apply TurboSmooth modifiers to subdivide it, using the smoothing-group technique described earlier to create hard edges on the geometry.
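For completeness, here is a similar hedged pymxs sketch of the Bend stage (again assuming 3ds Max 2017 or newer; the FFD step is left to be applied interactively, since its control points are easiest to shape by hand, and the property names should be double-checked against your Max version).

# Hedged sketch: radially deform a flat detail strip with a Bend modifier and
# subdivide it, mirroring the manual FFD + Bend + TurboSmooth workflow above.
# Assumes 3ds Max 2017+ with pymxs; property names are best verified first.
from pymxs import runtime as rt

strip = rt.selection[0]           # flat detail piece, FFD already applied by hand

bend = rt.Bend()
bend.angle = 360.0                # full circle for ring-shaped detail pieces
bend.axis = 0                     # 0 = X, 1 = Y, 2 = Z; pick the strip's long axis
rt.addModifier(strip, bend)

smooth = rt.TurboSmooth()
smooth.iterations = 2
rt.addModifier(strip, smooth)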


Almost all of the circular-shaped detail pieces on my mesh have been created this way. Having an FFD modifier on the mesh before applying the Bend modifier is great, as it allows you to modify the angle of your detail piece later on and helps you easily line it up with other objects. If you are working in 3D Studio Max, check out the Bend of Brothers maxscript. This script makes deforming assets super quick and is quite user friendly. There is a great tutorial for it here.

Using paths to deform your geometry: Another way I deform geometry is with splines. Splines are quite powerful and give you very precise deformation. Almost all 3D authoring tools have a spline deformation function, so you should be able to replicate the results by following these simple steps. On my mesh, I want to add a handlebar that follows the curvature of a specific edge. First, I create an edge loop on the target geometry that represents the curvature I want the detail geometry to follow.

Since I want to use a spline deformer to deform the handlebar, I need to convert the selected edge into a spline. With the edge on my object selected, I click the "Create Shape from Selection" button under "Edit Edges" in the "Editable Polygon" modifier. This opens a dialog that asks whether I want to create a "Smooth" or "Linear" shape type. In this case, I choose "Smooth", as I want a spline based on the smoothed version of the selected edge loop. If I selected "Linear", it would create an exact copy of the edge selection, including the hard transitions between the vertices, which I want to avoid.


After clicking OK, the spline will be created in the exact location of the selected edge loop. I make sure to reset X-Form on the spline, otherwise the next step will not work correctly. I select the handlebar mesh and apply a “Path Deform WSM” modifier to it. In the modifier, I click on “Pick Path” and select the spline that I have just created.

In the modifier, I make the following adjustments:
- I set the Path Deform Axis to "Y". After that, I click on "Move to Path". This makes the handlebar follow the spline in the correct orientation and moves the handlebar onto the path based on its pivot point. Each time you change the pivot point, you need to click "Move to Path" again for your changes to update.
- I set "Percent" to 50, which centers my mesh in the middle of the spline. By changing this value, I can adjust where along the spline the mesh is applied.
- I leave the Stretch value at 1. Stretch scales your mesh along the spline and allows you to make your object longer or shorter. As this scales the detail geometry non-uniformly, make sure to only use it on objects whose design supports it. Using it on the handlebar would not look good, as it would change the thickness of the bar unevenly, which does not look realistic.
- I adjust the rotation to 64 to rotate the object to the right angle.
- I leave "Twist" at zero, as I do not want my mesh to twist along the path. With a different detail piece than a handlebar, you could achieve interesting results by playing with this value.
Once I am happy with the shape of my mesh, I duplicate it using the "Snapshot" function in the "Tools" menu. This creates an "Editable Mesh" copy of the selected geometry without the Path Deform modifier, which I can easily convert to an "Edit Poly" object to add any further detail. You can find a video tutorial on how I use the Path Deform modifier on my blog.


Using UVs to deform your geometry: One of my favorite ways of deforming detail geometry is using UV coordinates. Here is an overview of the process:
1. Unwrap the area of your highpoly mesh that you want to add detail to.
2. Convert the UV coordinates into geometry using the SlideKnit script.
3. Arrange the detail geometry on top of the UV geometry and apply a Skin Wrap modifier using the UVs as a control surface.
4. Morph the UV coordinates back into their original shape, which will deform the detail geometry into the same shape.
Step One: First, I need to select an area on the highpoly that I want to add my detail geometry onto. For this example, I chose the front face of the following piece. I create a UV layout for this area by using a quick planar map and aligning the UVs. I make sure the final UV layout has a rectangular outline so that it matches the outline of the detail geometry that I want to deform.

Next, I detach the area that I have just unwrapped and turbosmooth the new mesh. I am doing this for two reasons. Firstly, the turbosmoothed shape is the shape that I want to morph the detail geometry into. Secondly, I need the UV geometry to be dense enough that it has enough points for deforming the detail geometry. Turbosmoothing gives me that density. If the UV geometry is not dense enough, the final deformation will show artifacts and will not give you a smooth result.


Step Two: First, you need to install the SlideKnit script. Once it's downloaded, place it in the following folder: "C:\Program Files\Autodesk\3ds Max 2015\UI_ln\MacroScripts". Run it once by selecting MAXScript > Run Script and then selecting SlideTools-SlideKnit.mcr from the folder above. This will make the script appear in the "Main UI" under the "SlideTools" category. Either assign a hotkey or create a button in your UI for the script.

With the turbosmoothed and unwrapped geometry selected, I run the slideknit script which brings up the SlideKnit Script Dialogue. As I have used UV Channel 1 to unwrap my mesh, I will use it in the first box. The second box - UV scale - allows you to scale the newly created piece of geometry. If you unwrap your geometry in the 0-1 area, a value of 100.0 usually works pretty well. Next, I click on “Unwrap selected” to create the Deformer Piece based on the UV Geometry. I make sure that the newly created piece roughly matches the scale of the highpoly in order to avoid too much scaling when morphing pieces. If you ever encounter too much scaling, then play around with the UV Scale value in the SlideKnit script until you get geometry that matches in scale.

Step Three: Once I have created the UV geometry, all I need to do is move the detail geometry on top of the UV geometry and align their outlines. I do this by scaling the detail geometry to roughly match the shape of the UV geometry. Once they are in the same ballpark, I apply an FFD modifier and match their outlines. I then collapse the FFD modifier and Reset X-Form on my detail geometry to get rid of the scale values on it. This is important, as any scale value other than 100 will give you errors in the following steps. I make sure that the two pieces are close to each other and then select the detail geometry. I need to apply a "Skin Wrap" modifier to it.


Under Parameters, click on Add and select the UV geometry that is located underneath the detail geometry. This tells the Skin Wrap modifier to use the UV geometry as a deformer for your detail piece. Set the Deformation Engine to Face Deformation and set the Falloff to 0.001, the smallest value possible. If you get any errors in your deformation, you can try Vertex Deformation as the Deformation Engine, but I have gotten the best results with the settings just described. If any vertices in your detail geometry don't get deformed at all, increase the Threshold value and apply the modifier again. This increases the radius within which the modifier looks for points to deform. I leave all other settings at their defaults.

Step Four: All that is left to do now is to select the UV geometry. The SlideKnit script has automatically applied a Morpher modifier on top of the Editable Mesh object. Go into the channel list, select the first entry, and increase the value from 0 to 100. This will morph your UV geometry back into its original shape and, at the same time, deform your detail geometry with it. In the following image, you can see the Morpher modifier on the UV geometry on the right side. You can also see four different stages of deformation: 0, 33, 66, and 100. The upper row shows UV geometry and detail geometry on top of each other, whereas the lower row shows only the UV geometry itself morphing back into its original shape. Since this can be a fairly complex process with quite a few steps involved, I recommend watching my video tutorial on this technique. I go into detail on all the steps involved, and there is a troubleshooting section at the end of the video as well.
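Conceptually, the Morpher plus Skin Wrap combination boils down to linearly blending the flattened control surface back toward its original shape while the detail mesh rides along. The toy NumPy sketch below is purely illustrative (it is not part of the 3ds Max workflow), with a crude nearest-vertex binding standing in for Skin Wrap's face deformation:

import numpy as np

def morph_control(flat_verts, original_verts, t):
    # Blend the UV-flattened control surface back toward its original 3D shape.
    # t = 0.0 is the flat UV layout, t = 1.0 the original shape: the same idea
    # as dragging the Morpher channel from 0 to 100.
    return (1.0 - t) * flat_verts + t * original_verts

def carry_detail(detail_verts, control_flat, control_morphed):
    # Crude stand-in for Skin Wrap: every detail vertex simply inherits the
    # displacement of its nearest control vertex. The real modifier binds to
    # faces with a falloff and deforms far more smoothly.
    dists = np.linalg.norm(detail_verts[:, None, :] - control_flat[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return detail_verts + (control_morphed - control_flat)[nearest]

# Tiny usage example with made-up data:
flat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])    # flattened control verts
orig = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0]])    # the same verts, but curved
detail = np.array([[0.8, 0.0, 0.1]])                   # one detail vertex on the flat layout
morphed = morph_control(flat, orig, 1.0)
print(carry_detail(detail, flat, morphed))              # the detail vertex follows the curve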


Adding Details to the Mesh in ZBrush

Checking in with the art director: Up to this point, everything on my mesh can be changed easily, as most of my pieces are built using modifiers or have been deformed in a non-destructive way. The control cage of each module is very light, which means I can easily change the major shapes. Before moving forward with adding additional small-scale detail, I make sure to talk to my art director and get him or her to sign off on the mesh in its current state. All of the remaining work will be done in ZBrush on dynameshed subtools, which means the design of the asset needs to be locked down to prevent major rework later on.

Using Dynamesh to create evenly tessellated objects: At this stage, I bring every object into ZBrush and dynamesh it so that I have an evenly tessellated basemesh to work with. Dynameshing gives me the flexibility of not having to worry about poly distribution while polymodeling. Once I run Dynamesh, it creates an evenly tessellated object for me. In order to make sure that each dynameshed object is dense enough for me to sculpt on, I use Dynamesh Master with a resolution of around 3-4 million polygons, depending on the size of the object.

Using alphas and brushes to add detail: Comparable to the detail mesh library, I have a big library of Zbrush alphas generated from various meshes. These vary from fairly small and generic geometric shapes like triangles or circular indents to more custom alphas like exhaust turbines or air intakes.


Creating a custom alpha for ZBrush is really easy. First, I create my geometric shape in 3D Studio Max. I make sure that the shape is embedded in a flat surface so that I get an alpha with a flat background when generating it later in ZBrush. On the left, you can see how simple the geometry actually is, and on the right you can see the turbosmoothed version.

Once I have the turbosmoothed shape, I export it as an OBJ. Next, I open ZBrush and set the document resolution to 2048x2048, as this is the texture resolution that I like to use for all the alphas that I generate. To do this, I open the Document menu, turn off Constrain Proportions, and set the resolution to 2048x2048. Once everything is set to the correct values, I click on Resize.

Next, I need to bring in the highpoly geometry for the alpha created earlier. Once I have it in the viewport, I scale it up until the geometry fills the entire screen. The easiest way to do this is to import the OBJ, place it in the viewport, and then scale it using the Scale button on the right side of the screen. After that, I click on the Alpha button on the left side of the screen and select "Grab Doc" on the lower right side of the Alpha menu. This saves the viewport as an alpha image in your alpha palette. If you want to save the alpha, use the Export button in your Alpha palette, or just create a new brush for the alpha and save that.


Before I can use the alpha correctly, I need to adjust the mid value for this brush. To do this, I go to the Alpha menu and set the Mid Value to 100 under the Modify tab. Mid Value allows me to set the zero displacement value. If I set it to 100, then white is considered zero displacement and all sculpting will push in on the model. If I set it to 0, then black is considered zero displacement and all sculpting will push out of the model. Since the background on my alpha is white, I need to set the Mid value to 100 for this brush.
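As a rough mental model of what Mid Value does (a simplification for illustration, not ZBrush's exact math): the alpha value equal to the mid point displaces nothing, values above it push out, and values below it push in. With a white background and a Mid Value of 100, the background stays put and every darker pixel of the shape carves inward.

# Rough model of ZBrush's alpha Mid Value (an illustration, not ZBrush's code).
# Alpha values run 0..255; mid_value runs 0..100 as in the Alpha > Modify tab.

def displacement(alpha_value, mid_value, intensity=1.0):
    zero_point = mid_value / 100.0 * 255.0      # the alpha value that means "no change"
    return (alpha_value - zero_point) / 255.0 * intensity

print(displacement(255, 100))   #  0.0   -> white background does not move
print(displacement(128, 100))   # -0.498 -> darker shape pushes into the surface
print(displacement(255, 0))     #  1.0   -> with Mid Value 0, white pushes out fully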

To paint with this brush, I either use the "Drag Dot" stroke mode if I want to keep the size the same across the mesh, or the "Drag Rect" stroke mode if I need to be more precise in placing the alphas. You can achieve great results just by using alphas and Zadding/Zsubtracting them onto your mesh. The advantage of adding detail this way instead of with floating pieces of geometry is that when you bake down the texture onto your mesh, you will not have to deal with shadow artifacts from floaters in your ambient occlusion bake. In addition, if you want to use a height map for tessellation or parallax occlusion mapping, you will not have to adjust the height value of your details for correct displacement.


Carving out areas of your highpoly: Oftentimes, I want to carve specific detail pieces into the highpoly mesh on a curved surface. Polygon-modeling this would be very time consuming. A quicker way is to create a piece of geometry that matches the outline of your detail piece and subtract that piece from your highpoly mesh in ZBrush. Using this technique, I don't have to deal with any pinching or other types of artifacts on the highpoly geometry. In the long run, this saves me a lot of modeling time. In the following image, you can see the basic setup for this operation. I have the highpoly mesh (grey, front transparent for visibility), the detail object (red), and the outline object (blue). I position the detail object with the carving object on the highpoly mesh at the location where I want it carved into the mesh. I used the spline deformer technique described earlier to position it and make it follow the curvature of my highpoly mesh.

Next, I export all objects to Zbrush individually and append them to the same subtool. I position the highpoly mesh on top of the carving object and hide the detail object in the “SubTool” palette for now. I select the carving object and make sure the second button from the left is enabled in the “SubTool” palette. This tells Zbrush to subtract the carving object from the highpoly when dynameshing. With these settings, I select the highpoly and merge it down onto the carving object. All I need to do now is run dynamesh with the desired settings and the new mesh will have a hole subtracted where my carving object used to be. As I want to refine the edge some more, I mask it and run “Polish” on this area to get a nicer bevel on the edge. I can see the final result by unhiding my detail geometry.


Panel lines: To add panel lines to my mesh, I use the MAHcut brushes. I specifically like the "MAHcut Mech A" brush. This brush is great as it creates a sharp outline for the panel lines, which is what I want. Since I am working on symmetrical objects, I can use the Radial Symmetry feature with those brushes to achieve some nice-looking results in certain areas. From this point on, all I do is add further detail using alphas, the MAHcut brushes, and the described carving technique until I am happy with the level of detail on my object.


Summary and Conclusion

I've now given you an overview of my general approach and workflow when it comes to creating highpoly models. To summarize, here are the individual steps that I went through.

Creating the basic highpoly modules

In this first step, I focused on creating the main modules for my asset. I tried to create an interesting silhouette for each module and focused on simple, readable shapes. Try to keep the control surfaces simple by using OpenSubdiv, the 2x TurboSmooth modifier technique, or similar workflows. Make use of symmetry to save time if the design of the asset allows it.


Placing the basic modules

In this phase, I duplicated my modules and tried arranging them in a visually interesting way. I was trying to get a feel for how they look in different locations and in different amounts until I had something that I thought looked good. I did not worry about small details in this phase at all but just focused on making use of my modules as best as I could. This step is essential for the final asset as it defines the proportions and the general look and feel. Make sure to iterate in this phase as much as you need to. It is always a good idea to create a bunch of variations and choose the best one.

Using the detail library to add detail to the mesh

In this stage of the modeling process, I added all the medium-sized details using the detail piece library. I used the Path Deform modifier, the Bend modifier, FFD modifiers, as well as the UV technique. This stage is super fun, as you can really iterate with different detail pieces in different locations. I tried to add interesting detail pieces to locations where I thought it made sense, while still trying to keep a good ratio of visually busy and visually calm areas. Up to this point, I am able to change shapes and detail easily, as I am using mostly modifiers and modules to create the mesh.


Using ZBrush to add final detail

This is the final stage of the modeling process. I brought all pieces into ZBrush and dynameshed them to add final detail using alphas, brushes, and the described carving technique. Before adding final detail, I made sure to get a sign-off on the asset, as all steps involved in this phase use destructive workflows, meaning I can't change any of the underlying shapes without a lot of rework. In this stage, I focused on the final, small-scale details like panel lines and indents to give the object a sense of scale.


About Me

I am a self-taught artist from Germany who is passionate about games, learning new techniques, and teaching. I started out as a hobbyist working on various Counter-Strike mods, making maps and models, until I discovered that I enjoyed working on games so much that I wanted to do it professionally. After working for a few smaller games companies in Germany and finishing an apprenticeship at 4head Studios / Cranberry Productions, I was able to get a job at Crytek working on the Crysis franchise, which was super fun and very rewarding as I could learn a lot from all the talent there. I am currently a Senior Environment Artist at Blizzard Entertainment working on StarCraft II: Legacy of the Void. I have been in the industry for around 10 years and am still very passionate about my job. If you have any questions about this tutorial, feel free to contact me through my blog, where you can find video tutorials on most of the topics covered here, or through my website. I hope you have learned something from this article and I can't wait to see what you come up with using the described methods and techniques.

Simon Fuchs

www.simonfuchs.net


Sam Burton

www.artstation.com/artist/sleepfight



Alexey Egorov

www.artstation.com/artist/air-66



Art Direction

Becoming your own Art Director by: Josh Herman

Professional & Personal Work

What separates personal work from professional work? A lot of people would answer that question by saying something along the lines of "professionals get paid, personal work is a hobby", and to an extent, that's true. But my question is, why do they have to be different? I've noticed that people always seem to specify the type of work they do: student work, professional work, personal work, freelance work, etc. Why can't it just be "work"? I don't really have a problem with this. I just find it interesting that we break each one into its own category depending on the location, the time in our lives, and whether we get paid or not. My answer to the question is that you can define them differently by when you did them, but there shouldn't be anything different about the work itself. Personal work should be of the same quality and demand the same focus that your professional work does. It can take longer, be in a different style, or even be in a different medium, but both should be treated with the same amount of respect. Here are the things that I do to try to maintain that equality. Throughout my career, I've always enjoyed doing personal artwork while I'm at home. I have seen plenty of other artists do this, and an equal amount have little interest in it. Personally, I need it. It helps keep me sane, refreshes me on techniques that I may be rusty on, lets me try new techniques before I bring them into the professional arena, lets me take home things I have learned at work to try on a project of my own, but most of all is very fun and satisfying.


But it isn't always easy. For every one project I've finished, I probably have another 10 sitting unfinished on my hard drive. Hundreds of sketches and ideas just sitting around that have barely scratched the surface, or were scratched just deep enough to let me get a taste and know that I didn't want to go any further. In my professional work, I always finish on time and usually finish pretty strong. Yet, when I first started doing personal work, I sometimes found myself falling into the trap of feeling unfocused or just unsure of what the next step should be. Other times, I just gave up. It gets depressing after a while when you don't produce anything worth the time investment. But why was that? I was doing the same things in my professional work as I did in my personal work, yet there was a disconnect that made one more difficult than the other.

Personal Work Examples: TMNT

After some time of being frustrated that I couldn't get the same quality, I decided to take a look at why. I tried to figure out what it was that made me less productive at home. What I found, for me, was that it came down to focus. Not just being unable to focus at home due to distractions, but a lack of an artistic focus. When you are in a work setting, there is a direction that drives most of the decisions. There are directors, art directors, production designers, game designers, CEOs, and many others who are all crafting the overall vision. You will have your chance to do your own thing as well, but typically within a set of guidelines they have established. Rarely are you told to just do whatever you want. What I needed to do was to create a focus. I needed to bring the same mindsets and tools that I, or other people, were using on the job side, and bring those home. I decided that when I was at home doing work, I would treat it like an art director does.

Professional Work Example: IronMan


What does an Art Director do?

Well they do a few things, but the main thing they do is they create/craft a vision for a product and then they guide it to see that vision completed. Now, I know what some of you may be thinking, “But Josh, I already have an idea in my head and I’m going to make that.” That may be well and good, but the thing about ideas is that they change. And, changing means losing time and focus. Every TV show, film, and game started as an idea, but then they wrote it down into something tangible, something they could evaluate and edit. So let’s talk about some ways that we can define and make our ideas clear and easy to follow for ourselves. Let’s be our own Art Directors. First, we are going to define our project in a few simple terms. These can be more specific if you are just doing a character or prop or level, but if you are trying to do a whole world, then go broad. For example, when I started my Silver Surfer piece, I chose the words: Elegant, Anatomical, Sleek, Art Deco. Some made it into the final product more than others, but defining these terms made it easier for me to start trimming down what I liked and didn’t like. This made it easier to look for reference and brainstorm ideas because I had an idea of what I was looking for, which leads me to my next point.

Professional Work Example: Groot


Alright, now we know what we're looking for, and we need to actually find it. We're going to make reference and mood boards for every project we make from now on. And we're going to look at them. A lot. Make sure that the images you put in them reflect the terms you chose above. Reference and mood boards are very common and really simple to make. Find images that fit what you want your final product to look like, or that inspire you to get there, and put them on a giant Photoshop document. Lots of artists make these, but a lot of artists who do forget they have them after the first few days of working on a project. You made it for a reason! Look at it! If you have to print it out and put it on your wall next to your monitors, do that, but have them up; otherwise they aren't worth making.

Reference Board


Lastly, and while we're on the topic of having things to look at for our projects, let's talk about putting things in your office space. You should strive to make your office space a place you like to be in. I know this sounds silly, but it can really affect your ability to focus. Think of your desk at work or school. Now think of your desk at home. Do they look the same? How different are they? Why? I have a cloned machine that matches exactly what I have at work. This way, I know that whatever quality I'm working at in the office can be done at home. I do, however, have one difference. I've found that distractions can really affect me at home, so I only work on one monitor. It's a 30" monitor, so it has some extra real estate, and I can have both Photoshop and ZBrush or Maya next to each other and it's really not that bad. At work, there are people all around me working and it makes it easy to get into the zone. At home, it's just me. It might not work for you, but it works better for me when I don't have Facebook sitting on the second monitor half the time. As for the workspace itself, I have some things that I like to have handy at all times. As a character artist, I find that I always need to be reminded of human proportions and anatomy. What I like to do is have some of those fundamentals around: pinned, framed, or taped to my walls, as well as having statues and some quality reference books nearby. But don't make it too sterile; I also have fun statues and posters from movies and games. This is your space and you should enjoy it. Next, invest in a nice chair. Trust me, you'll be sitting in it a lot more if you like it. Have some sketchpads and pens around. Yes, I know you can type words into Notepad, or in Photoshop, but nothing matches the real thing. And you can take it with you to the bathroom or to the couch when you're tired of sitting at the desk.

Home Setup


Alright, we've got ourselves set up with our terms, our boards, and a desk and workplace that are exactly how we want them. NOW we're ready to get to work! Now that we've got everything set up, I wanted to bring up something that is a pet peeve of mine. Personal work is not all about speed. Yes, being a fast artist is an asset. But it's not the only asset. I've seen a lot of people post images and say "this is xx minutes", and that's great for a speed exercise. But when we are at our real jobs, that's not how it works. Since we are trying to emulate a work environment, we should probably not care as much about how long it took to make something and just focus on making quality work. With most of my personal projects, I try not to keep a timeline. Other times, I will participate in a contest or give myself a deadline. Deadline or no deadline, we should still be able to pull back and look critically at our work. Here are some easy things I do to help keep a fresh eye on work at home:

Take a Break

The easiest thing you can do to keep your eyes fresh is to take a break and come back later. If you have something else you can work on, do that. If not, get up and walk around for 5-10 minutes (don't doctors tell you to do that anyway?). Additionally, something else I do is take a screenshot of my model from multiple angles before I take my break and put it up on my screen. That way, when I come back, the first thing I see is my screenshot. Then I take a minute and just stare at it, evaluating how it's looking with a critical eye. If it's looking bad, acknowledge it and work to fix it.

Paint Overs

Professional Work Example: WarMachine

Like I mentioned above, we are aiming to be our own art directors, and we should do the same things an art director would do for us. So what is one typical job of an art director? Paint-overs. It's pretty simple: take a screenshot of your model and paint on top of it in Photoshop. You can go to the extreme of what you think the final should be and do a full-on final painting, or you can just give yourself callouts and notes. Once you do this, save it and put it in the file with the reference boards. Paint-overs are a way to jot down notes to yourself as well. You don't have to fix all the problems there; it's just a way to think about the work from a different angle.


Don’t work in a Vacuum

Just because you are working at home and don't have co-workers or bosses to give you notes doesn't mean you can't ask other people! Get out there on a forum or Facebook and post your work. Ask for feedback. Prepare for some ruthlessly brutal and honest feedback. The internet does strange things to people and they will tell you exactly what they think. It's usually not meant in a malicious way; most are truly trying to help. Don't get offended. If you're not at the point where you want to show it publicly, send it to friends for feedback. This surprisingly took me a long time to do. I don't like people seeing my work when it isn't finished. I'm afraid they will think I'm a fraud, because when I'm showing it to them, it sucks, and I know it sucks, and I know they will think it sucks. But find some friends who can give you a velvet punch and some solid feedback. It will help you a LOT!


Professional Work Example: HulkBuster

Challenge Yourself

At work, every project comes with its own challenges. Work at home should, too. Sure, sometimes you just want to do a study or sculpt your favorite thing, but even within that you should be able to find a way to make it more interesting and to push yourself. With Silver Surfer, I wanted to try to redesign a character like we do at work. I specifically chose a character that hadn't ever been redesigned and was a challenge. Along with that, I wanted to put into practice some things about hard-surface character design that I felt I had gotten from my work on The Avengers and Iron Man 3. When I look at it now, I still see things I would change, but overall I'm pleased with it because I know how much thought I put into it and it was a big challenge.

Silver Surfer ColorBreakup


Silver Surfer Detail Paint


Silver Surfer Final BG

Do Something New

Go outside, go travel, go see something. Go read an amazing book or see a film you know nothing about. Go experience something new. I went on a walking tour of LA and learned about Art Deco buildings and how and why they were made. I thought I would hate it, but it was amazing! I went to Europe on my honeymoon and saw the beauty of Paris, the incredible history and strife of the Irish, and the Dutch just being really cool to everyone in Amsterdam. Each place made me think of 50 new thoughts or ideas to add to my list of things I find interesting (in my sketchpad). I will choose a random highly rated audiobook and listen to it on a long car ride or even while working (The Night Circus, Ready Player One, and Boneshaker, to name a few). In the age of the internet and Pinterest, where everyone is re-pinning pins of pins and looking at the same artwork, go find something new. Keep it to yourself and make those ideas yours. That's it. It's that simple. Okay, maybe not. You still have to do the actual work! But, hopefully, this can put you on a path that's better and more focused than the one you are on right now. Some of these options might not work for you, and that's okay! They are just suggestions that work for me. At the end of the day, it's more important that you find your own ways to create a focus, be comfortable, and push yourself to the same limits that you do at work.

About Me

My name is Josh Herman and I'm a character sculptor at Marvel Studios. I've been working in the entertainment industry for 5 years and have worked in all types of media, from film to games to collectibles, on titles like The Avengers, Guardians of the Galaxy, Uncharted 3: Drake's Deception, and many others. I'm a massive comic book and movie nerd, but enjoy games even more. If it's considered nerdy, I probably like it. I also love teaching and have taught a lot of classes and workshops for CGworkshops, CDA, Stan Winston School, and Gnomon, and I plan to continue that as long as I can.

Josh Herman

www.PolyGroupAcademy.com


Ali Zafati

www.artstation.com/artist/zaliti



Andrey Tkachenko

www.artstation.com/artist/atdesign



Post Marvelous Designer

Techniques for adding details after Simulation by: Seth Nash

Intro

In this article, I'm going to write a little about the post-Marvelous Designer workflow and a few techniques that can be useful in getting your cloth sim into ZBrush for additional work. Beyond that, I will give a brief overview of pattern layout in MD with UVs in mind, and finally what can be achieved with a few displacement maps taking advantage of those UVs in ZBrush. Other artists have written some top information on MD in this edition of VERTEX, so I will avoid the basics. I will mention just one or two things that I usually do while creating simulations, before we get on to the main body of the article, just to give you more options with your setup. I often work in real-world centimeter measurements for characters. I find that scaling avatars up 130% before import gives me better crease definition at reasonable particle distances; MD's scale is fixed, so it will essentially treat a larger avatar as a bigger object. I also adjust the simulation thickness of my fabric to 1mm from 3mm unless I am creating something particularly heavy, such as thick leather. I adjust the avatar's skin offset to 1mm from 3mm as well (Avatar > Avatar Properties > Skin Offset), as this will help when morphing in extra garment layers or anything else that interacts with the cloth surface. I always smooth anything that I import into MD to interact with the cloth; at the lowest particle distances, the cloth can pick up on any lower-poly geo and mimic the facets as it drapes over the surface. I think that the real trick to MD is knowing when to cut and run, getting a feel for how far you can go with the program before it's better to move on. I have lost more time than I can count trying to keep complex multi-layered garments under control, and now prefer to avoid complex stitching in favor of doing any hemming effects and extra detail in ZBrush, after laying down a few internal lines as a guide and altering the pattern as though there were a stitch there. Anyway, onwards.
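A quick way to see why the 130% trick helps (simple arithmetic added for illustration, not an MD feature): because MD's particle distance is an absolute measurement, enlarging the avatar makes any given particle distance proportionally finer relative to the garment.

# Why scaling the avatar up 130% helps: MD's particle distance is absolute,
# so the same setting is relatively finer on a larger avatar.
# (Plain arithmetic illustrating the text above, not an MD API.)

def effective_particle_distance(particle_distance_mm, avatar_scale=1.3):
    """Particle distance expressed relative to the original, unscaled avatar."""
    return particle_distance_mm / avatar_scale

print(effective_particle_distance(10))   # 10 mm behaves like ~7.7 mm on the original
print(effective_particle_distance(5))    # 5 mm behaves like ~3.8 mm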


Body

Final garment with displacement applied. Extra equipment modelled in Max and Zbrush

I'm going to start by assuming that you have already created a garment in MD to a standard that you are happy with, or have selected a pre-made pattern from an in-house library of base templates. Most concepts will have something interacting with the garment, such as straps, webbing, additional garments, or protective clothing. MD's morph target feature allows you to simulate in objects that may trap or alter the creasing of your garment, and offers a cleaner workflow for additional garments than creating, layering, and simulating everything in one scene, which can become very sluggish very quickly.


I'm going to use a drop-leg harness as an example of bringing in straps and webbing to interact with your garment. I model out the basic webbing pattern in Max over the basemesh that I am using as an avatar. Because the cloth will need room to simulate under this geo, I tend to keep an offset of about 10mm from the avatar surface. The modeling can be as rough or as polished as you like at this stage; it's really just there to create creasing. Once the webbing geo is created, it's attached to a copy of the avatar. This, in turn, is cloned and the webbing geo is pushed out. I tend to bring in an OBJ of the basic garment at this point. It's important that the pushed geo doesn't intersect with the garment it's going to be used with. This should leave you with two meshes to export as avatars: one with the webbing in the correct position, and one altered to make a good starting base for the morph.

From basemesh, to basic webbing model, to pushed and scaled webbing. Making sure that these parts are smoothed will ensure no faceting artifacts in the simulation.

To set up the morph, first import the pushed mesh into MD and load it as an avatar. Adjust the skin offset as mentioned previously in the property editor (this adjustment will carry through to the morph target when loaded). Next, import the avatar with the webbing in the intended final position, but load this as a morph target. Double-check that your scale settings are using the same units for both avatars (I always use centimeters) and hit OK. This will begin the morph process; depending on particle distance, this could take some time. Better get a coffee.


It may take a few goes to get your fabric not to intersect with the avatars it constricts. This is just a case of going back into your 3D package, adjusting the target mesh a little, exporting it back out, and repeating the morph process. Personally, I keep the particle distance low for morphing and won't step back up to anything high after the morph is complete, as this tends to give the cloth a chance to intersect with the avatar and become quite a mess. Major garment intersections with the avatar can be fixed by adjusting the target mesh in your favorite 3D package and repeating the morph process until you achieve the desired result. Finally, re-import the morph target as an avatar. MD will remember the pre-morph avatar if you save at this point; leaving a morph target in place can cause export crashes later on.

A note on the belt loops: after the belt was morphed into place, the loop patterns were selected and reset to 3D arrangement (right-click on the selected pattern in the 3D view for the menu, or Ctrl-F). These were then positioned close to the garment before re-simulation so that the fabric didn't distort too far in the stitching phase of the sim. Once completed, the pins holding up the trousers were removed and the garment simulated again to give the look that it hangs off the belt. More often than not, I find this morphing technique useful for adding additional garments to your scene; it just requires a little pre-planning. As an example, I will add a kneepad to the trousers from a separately created MD project file. Rather than creating the kneepad in the same scene as the trousers and hoping to simulate over them, or creating the kneepad in a separate scene and then bringing the trousers in to simulate, I prefer to bring in the garment as a morph target, as this gives cleaner, more precise cloth trapping and keeps your main scene uncomplicated. When generating the kneepad, I use the same base human avatar as used in the trouser scene. However, I give the mesh a 10mm push, knowing that this garment will sit on top of the other one. That gives space for the underlying fabric to sit without intersections. I then generate the kneepad as standard.

Kneepad – additional internal lines are added at points of stitching and a little pressure is used to simulate padding.


It's worth noting that as this garment is final, I have rearranged the pattern into a reasonable UV layout for later use. I tend to draw out a square pattern and fit the garment pieces as tightly into it as possible. MD will take the outermost points and square them off to create your UV boundaries. MD's free, perfectly relaxed highpoly UVs are a great little bonus. The kneepad is exported, decimated in ZBrush, and then used in the same morph process as mentioned before, i.e. attached to the avatar mesh, cloned, and pushed appropriately before importing both generated avatars into MD for morphing.

The completed kneepad morph. As before, make sure that it isn't simulating erratically after the morph and then load the target basemesh in as an avatar.

With the additional MD work completed, I begin the export process. First, I quadrangulate all patterns if I intend to use UVs with the highpoly. Next, I arrange the patterns into a more UV-friendly layout, again using a square pattern as a guide. Scale is unimportant, as MD will square off the UVs based on the greatest distance between two extremities, either horizontal or vertical; I just want something that I can work with. If I don't intend to work with the highpoly UVs, I skip these steps and go straight to export. I export selected patterns based on layers, starting with the largest mass, then pockets and such, and finally flaps and belt loops. The image below gives a good indication of my thought process here. I never export thickness, as the edges are often very scrappy; panel looping in ZBrush gives cleaner, more controlled results.


UV layout/export sets for the trousers. I will also grab a snapshot of the layout to help use as a guide later for additional details in Photoshop.

To get the parts into ZBrush, I make sure the weld tolerance is set all the way up to 0.01 (wow) before import. Once I have all the meshes in, there are two options for cleanup ready to sculpt, depending on whether you have quadrangulated the mesh and want the highpoly UVs intact.

The imported garment in ZBrush, showing the layer separation and the all-important weld tolerance.

The first method assumes that you don't want the UVs. Duplicate the mesh, ZRemesh with Freeze Borders checked, subdivide, and Project against the original subtool that you duplicated; repeat until you have captured the detail of the simulation, delete the lower subdivisions, then panel loop. I generally use/alter the following settings:
- Polish: 0
- Loops: 5
- Bevel: 20
- And most importantly, Elevation: -100
Elevation decides which way along the normal direction to push the thickness: 100 is out, -100 is in. I find that pushing in helps a lot to get a clean look to your edges. Thickness is obviously subjective, depending on the fabric that you want to replicate and whether you have ZBrush's scale correct. Once panel looped, if the interior of the garment is not visible, I tend to delete the internal polygroup. This reduces point count and avoids complications in the mesh from pulling through back faces while sculpting.


Panel looping the imported garment pieces to achieve crisp edges and a readable thickness to the cloth.

I will avoid going over the basics of cloth sculpting here, as the major creases have already been provided through MD. Generally, with an MD over-sculpt, you are adding wear, memory creases, and stitching to the garment, giving it a bit of life. Depending on the look you are going for, you may even just sharpen up what's already there for a clean, graphic style. I tend to do a first pass getting in some of the interaction at seams, which can be tricky to generate in MD using the elastic feature, and adding or adjusting creasing that was lacking in the original simulation. I prefer to use a Standard brush with alpha 37 at a mid-teens intensity and a Clay brush at a very low intensity, building up some edges and deepening other depressions. I often increase the volumes in trapped areas with the Move brush, too. Places such as between the kneepad straps really benefit from a bit of exaggeration, just to give a more interesting silhouette. Once that's completed, I cheat a bit, creating memory creasing with an adjusted crumpled-paper alpha, which I then smooth out or work into, depending on the area and the look that I want. Finally, I add stitching using a stitch IMM brush that I picked up on the internet a while back. Using an IMM allows you to keep point counts sensible on the garment being stitched, and it can be polypainted to create a stitch mask for later texturing.

Before and after of the extra detail sculpted into the garment


The second method assumes that you do require the UV sets that MD provides, opening up possibilities to create details via displacement maps. Simply import the quadrangulated mesh as before and panel loop as above. Subdivide as necessary to capture the detail in any applied displacement. This method isn't as sculpt-friendly as completely remeshing, but I would assume that if you are making use of the UVs, then sculpting will at most be used to add detail rather than to make drastic form changes. This method is excellent where you have the target engine map resolution to show accurate stitching and fabric surface patterns.

Displacement used in ZBrush to capture stitch and weave details.

ZBrush flips the UVs of the imported mesh vertically, so be sure to do the same with any displacement maps you use. To generate a template to work with in Photoshop, I grab a UV snapshot of the MD mesh from Max, showing only the border edges. This gives me something to line up with the screen grab previously obtained from the MD 2D pattern view. Alternatively, you could subdivide the mesh a few times in ZBrush, drop back down the subdivisions, and bake out a displacement in ZBrush. This will give you a decent enough greyscale image to use as a guide, but it is a little less precise than compositing the UV and pattern view snapshots together. I use a combination of custom pattern fill layers for weave and fabric detail, and vector paths for stitching, which I stroke with one of a selection of stitch brushes that I have acquired here and there. Once completed, I apply the displacement maps to the relevant meshes in ZBrush and sculpt additional details into the cloth, such as depressions and gather points caused by the stitching, and memory creases. One of the main advantages of using displacement for stitching and weave detail is that it's non-destructive. Until applied to the mesh, it can be sculpted into or under without distorting or damaging the detail, allowing you to bed the stitching into the fabric. If the stitch isn't reading well, you can easily adjust the size and spacing in Photoshop. It does negate the possibility of large sculptural changes to your garment, and your workflow may not require an extreme level of micro detail in the highpoly. Once I am happy that the detail is reading well and I have sculpted everything that I want, the final step is to bake the displacement information down, creating additional subdivisions if necessary to hold the details. To make sure that I have a mask for things such as stitches, I create a colored version of the displacement map, apply it as a texture, and use Create Polypaint from Texture in the Polypaint dropdown of the Tool palette. This allows me to bake out vertex color for selection masks during texturing.
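If you would rather flip the maps in a batch than by hand in Photoshop, a few lines of Python with the Pillow library will do it (a small convenience sketch, not part of the workflow described above; the file paths are placeholders).

# Flip displacement maps top-to-bottom to match ZBrush's vertically flipped UVs.
# Convenience sketch only; the file names below are placeholders.
from PIL import Image, ImageOps

def flip_vertically(src_path, dst_path):
    image = Image.open(src_path)
    ImageOps.flip(image).save(dst_path)     # ImageOps.flip mirrors top to bottom

flip_vertically("trousers_disp.tif", "trousers_disp_flipped.tif")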


1) The mesh imported from MD, panel looped and subdivided. 2) Displacement map placed. 3) Displacement and additional sculpt work. 4) Sculpt work alone, displacement turned off. 5) Displacement applied to mesh, colored map applied and baked to polypaint.

VERTEX


Post Marvelous Designer

243

Displacement maps aren’t just useful for realistic characters. The trousers and pendant on this collectable piece were made in MD, then sculpted over in Zbrush with the detail added via displacement.

Conclusion

I hope this gives you a good overview of the techniques that you can apply to your MD exports as they transition to finalized high poly models. Let your art style be the guide as to how far you take the MD stage, and at what point you pull it into Zbrush for finishing. Thanks for reading.

About Me

My name is Seth. I have been a character artist in the games industry since 2011. Before that, I made toy soldiers for several years with clay, putty, and glue. I like digital because there is less chance of supergluing my fingers together or dropping scalpels on my feet. I have a very understanding family and a cat called Keith. Huge thanks to the giants whose shoulders I have stood on in this particular facet of character production: Madina Chionidi, Sven Juhlin, and Andy Matthews.

Seth Nash

w w w . s e t h n a s h . c o m

VERTEX


244

Georgi Simeonov

www.artstation.com/artist/calader


245

Efflam Mercier

www.artstation.com/artist/efflam


Storytelling

246

Storytelling

Storytelling with vehicles by: Matthias Develtere

Introduction

Personally, I see vehicles as storytellers. Just think about it: the main character fills up 20 percent of the screen, and the rest is environment, no matter what sort of game you play. There will always be some sort of vehicle in there. What comes up a lot is that most games copy the same vehicles all over the game and just give them different color textures, so it doesn't feel like you're passing the same vehicle over and over again. Vehicles should tell stories as well, just like characters, weapons, and environments do. I won't explain modeling tips, because basically everybody can model; it just depends how much you practice and the time you want to put into it.

VERTEX


Storytelling

247

Should you just collapse the symmetry?

If you want a vehicle that you can only use once, then yeah, go ahead. Imagine the awkward double gas tanks you end up with. That's not what you want in the long run. A car has two sides, and you should use this to your advantage. This could sound strange, but let's take two easy props as examples:

VERTEX


Storytelling

248

It's like making a furniture asset and deleting the backside - that sucks because then you're never able to use the model in different situations. So, let's take the ammo boxes: by making both sides unique, you make one model and get two models in return just by rotating it 180 degrees. Super handy for these caliber boxes, knowing you can bake them almost completely flat.

Adding attachments to these models makes it even harder to spot the fact that it's one and the same model. Make sure that both sides get the same amount of attention when it comes to details. So if this works for assets, why wouldn't it work for vehicles? Let's take this TATA truck as a quick example.

VERTEX


Storytelling

249

We all know that most cars get used for the same function in games: (explosive) cover, and that's okay, but come on, let's be honest. Shooting random places on vehicles shouldn't make the car explode. "Yeah, let's shoot the tires, that will make the car explode!" That's why I added jerry cans - they break the symmetry of the vehicle and add an extra function to it. By placing the jerry cans on only one side of the vehicle, we create a different shape and function for each side. It suddenly becomes something players have to pay attention to before they start firing wildly. That's something I found awkward in games: placing a gas truck in an environment where suddenly the tank is the only thing that can catch fire. Well, if that's your gameplay design, you should have it for all your vehicles. A mistake I made with this design is that the gas tank is on the same side as the jerry cans. It would have been interesting to have fire opportunities on both sides, but in completely different places and at different heights. Another quick example is the bumper at the back. One side is bent and some tires are added as a shock bumper - again, all to break up symmetry patterns. For the front, I just placed something on the middle of the symmetry line. That way, the symmetry at the front is a bit less noticeable - same for the two staircases at the front of the vehicle. Next to this, it's good to keep in mind that you don't put your details in one straight horizontal line. You add details to break up patterns, so you should try to exaggerate this. We can take the TATA truck again as an example: the splines attached to the flags are slightly bent.

Let's take this tank as another example. The tires are shaped and placed so that they feel interesting - the ones in the middle and at the back are both rotated 35 degrees on the X and Y axes, so they break up the pattern and, even more importantly, add something to the silhouette instead of just being a local extrusion of the body. To go even further with this idea, I started adding sandbags in the tires that were completely horizontal, so those felt more like an extension of the vehicle, while the rotated ones are empty, to really give them some depth. Something else you will notice is that one of the antennas is bent - again, to break up patterns.

Personality

If you make a vehicle, you always ask yourself: "Is it going to be animated? First person or third person? Is it just a prop to take cover behind?" I like to get these questions answered right away. What's so good about vehicles is that they are completely symmetrical and have a lot of blank spaces. All the time you save with this, you can spend coming up with an original design and filling it up with details: jerry cans, boxes, backpacks, sacks, warning triangles, water barrels, ammo boxes, cables, ropes, spare tires, reserve pieces, communications gear, broken car glass, etc. Okay, maybe it's not always easy to find unique ideas, but a good idea is to search for vehicle dioramas. Hobbyists put a lot of work into making their pieces memorable. Everybody can go to a store and buy a vehicle assembly kit, but making it stand out is something different, so keep an eye out for such pieces.

VERTEX


Storytelling

250

Pay close attention to the wide range of props they use to give vehicles some story, and the color palette they use for props in contrast with the desaturated models. A mistake a lot of people make is going for a fully "organic detailed" model (backpacks, sacks, cloths, etc.) or a totally hard-surface model (radios, hand tools, crates, etc.). That's not something you want. You want to break up your hard-surface model with some organic details. Just don't overdo it - aim for roughly a 70/30 split. The nice thing about organic pieces, like cloths and bags, is the wide range of colors you can go for. This breaks up the pattern.

Different Functions

If you make iterations of vehicles, you should ask yourself if you want to change the function of the vehicles. If I put these three next to each other, you will get the idea that they all have a different function.

VERTEX


Storytelling

251

VERTEX


Storytelling

252

The first one looks like it has an anti-riot function, the second one a bomb-squad function, and the third one a supply function, but it's not difficult to see that it's still the same base tank. You can see the silhouettes are completely different, which is good. For example, let's go with the third version. If, for gameplay reasons, the tank with the supplies gives you the option to refill your ammo stock, you want to make the player familiar with this silhouette. The bomb-squad one has a gas refill station at the back, so it would make sense that if you shoot that, it would explode - so there's a fire function there. What stands out is the extra shape that was added to it. Generally, it's somewhere at the back or the front, so that it's easy to spot. That way, if you look at the vehicle from a far distance, it looks and feels like a specific silhouette. A good way to test this is to preview with consistent colors - without lights or shadows, just like a black thumbnail. Afterwards, you can go in and add some extra small details all over the model, as long as they are not silhouette breakers. Next to that, you can try to find pieces that can be animated/rotated/closed, etc. For the tank, I just played around with the gun barrel holder. In the first picture, the holder isn't used, but in the second and the third it is. Later in this article, you will see a version where the turret holder stands open. All of these things don't cost extra time as long as you model them correctly, so they can actually work. In the next model you see a more extreme version of different iterations:

VERTEX


Storytelling

253

Again, it should be clear that all three of them have a different function. Making different vehicles is really awesome. For example, I could have made another LMV with half of the truck covered underneath a cloth and an anti-aircraft gun on the other half - that way I would have gotten another function for the vehicle. Afterwards, we would have been able to take off the anti-aircraft gun and use it as an asset in the game. The limit is just your own imagination. Let's go back to the renders now. Picture one has a gigantic fuel tank, so it could be used as a potential fire target. Picture two could serve as a gigantic cover piece or progression blocker, while picture three could have a refill function. The difference between the first and second picture feels small, but tells a whole different story. The last version maybe feels like a lot of extra work and feels distant from the other two versions, but that's not true. I just made one complete LMTV truck and sliced the hood in the middle. I emptied the truck and added a combination of hard-surface and organic props. I finished it off with different height levels, so it looks completely different from the first two. And that's it. Again, I added something to the front or back - in this case both (there are four extra reserve tires at the back) - to change the complete look. Just to make sure these things work, I always make everything from the base vehicle: interior, bottom, etc. Otherwise, trying these things would have been a big risk. For the front part of the body, you have to make a new lowpoly mesh, of course, but that's it. For the rest, you're able to get away with the same mesh. We can take another example to generalize this idea. A city can have hundreds of different car types, but it's a wise decision to build a few of them that are unique/typical for that region. Just by retexturing or adding some extra assets, you can recreate different vehicles with different functions.

Vehicle Iterations

But what if we want to make iterations of these models, just to make the world feel more alive and less repetitive? Does this take a lot of time? NO, NOT AT ALL! It's all about removing and adding details to the former silhouette. Let's take the supply tank as an example again. You still want to make sure that the player recognizes that model and links it with restocking ammo. What gives it that specific silhouette is the cloth shape, so that's something we don't want to ruin. In the next picture, you see a good and a bad example.

VERTEX


Storytelling

254

VERTEX


Storytelling

255

Number two is a good example - we remove the cloth, but because of the frame underneath it, we still get the same silhouette. It doesn't take a lot of time to switch between these iterations: you just remove/add pieces. Example number three is not so good. With this one, the wrapped-up version, you lose a big chunk of recognition, but at the same time it's just a different interpretation. You can solve this by using a consistent color for the cloths. Knowing that military vehicles always use a grayish/solid color palette, you can use a bright color for the cloth. That way players will still recognize this tank and the functions that come with it. Next, you can break up patterns by moving small props to different positions. As long as you don't break up the silhouette, it's all okay.

It gets Easier

The first time you do this, it will take some time. The main reason is the fact that you have to make everything from scratch, but once you're done making your first iterations, it can be handy to save the props/assets you used as details as a kitbash kit. To make sure I have enough props with different sizes and silhouettes, I make at least four iterations of every prop. For example: (4) backpacks, (4) pieces of cloth, (4) ammo crates, (4) boxes, (4) radios, (4) weapons, (4) cans, (4) work tools, (4) pieces of beverages and food, etc. To make sure this works well, I always make sure that all these assets have unique sides. That way, using the same prop and rotating it 90 or 180 degrees makes it look a bit different. Next to that, I add some silhouette breakers (small props) that just have some unique shapes that break up patterns - these are, of course, unique for every vehicle.

VERTEX


Storytelling

256

On this Beetle, you can see how four organic pieces break up the roof. I just bent and squeezed some of the rolled-up cloth to make it look interesting - and that's only with backpacks. In the next example, you see a wider use of different props. I always make sure every asset feels flat so that it's easy to stack different props on top of each other. Baking gets really easy afterwards. Sometimes, I even add extra props to fill up gaps, so I can bake it more easily and with a smaller tricount budget. It's not because it's "next gen" that you should model every bolt on a crate you're going to add to your vehicle; instead, you should add more crates with the same quality as you would have done before. (Check the Tor Frick article in VERTEX 1 to get more information about this.) Don't forget that all these props you make are not a waste of time after you've finished your vehicle. They can be used as environmental props, so don't go into tunnel-vision mode and search for really specific vehicle props. You want to find/design props that can be used over the complete game world afterwards. The bulletproof vest hanging on the car seat is a bad example of this. It would have been better if I had put a suitcase on the seat instead. For the rest, I am pretty sure the props can be reused all over the map. The key is just making a wide variation of props so that dressing up the vehicle is not a lot of work and you can bring the idea over to the team. That way it's also easy for them to see what props they could use for the environment.

VERTEX


Storytelling

257

Civilian/Common Vehicles?

We have taken some military vehicles as examples, but let's go for some civilian cars now. When I get a random/normal car assigned, I always go for a combination of clichés and ask myself the same questions over and over again:

- In which environment can I find this vehicle?
- Who would be using it?
- What are the characteristics of this vehicle?

Let's make some examples here. Starting with SUVs, I ask myself: who will use this vehicle? FBI/SWAT, etc. This is followed by combining it with a cliché: black SUVs with tinted black glass. Great, we have a design, but what if we reverse this idea? What SUVs don't have tinted glass, aren't black, or don't get used by the FBI? Then, you can go for an embassy car and add a flag on the hood. That way you have a different iteration of the same car. To give the iteration a different function, just go for a different silhouette again. Open the trunk and add some ammo crates, for example. And voila, you have another vehicle iteration and another function for it. Let's go for another example to make it clearer. A BMW wagon: ask the question, what are the characteristics of this vehicle? A super shiny vehicle. Follow that by combining it with a cliché: used by rich people. Again, let's reverse this idea: a BMW that doesn't get used by rich people and can be all dirty. Well, in Eastern countries, you have a lot of beige BMW taxis. Knowing that Mexico is one of BMW's overseas subsidiaries, we can start working with this idea in mind. Knowing the drug stories, you could convert your vehicle into a family escape car, filled with precious goods. To make it feel like a Mexican car, you can add some red/green/yellow festival flags and make the overall look dirty. And voila, we have a unique car design.

VERTEX


Storytelling

258

Let’s Recap

Let's take one more example where we can combine a lot of the techniques we used over the previous steps. The first thing people always link a pickup truck with a "heavy nose" shape to is America. So, again, go the opposite way with it: get away from the patriotic idea and turn it into a rebel car by making the nose as light as possible. That way you can still add a truck front bumper to it afterwards and use it as a modular piece again. Another characteristic of a pickup truck is, of course, the tailgate, so I opened it completely. I used the extension it gave the model to add some extra assets. Here, I broke the symmetry by adding some jerry cans (those gave an extra explosive/fire opportunity to the vehicle). I added some more props to the trunk, where one side had completely vertical or horizontal props, while the other side had some rotated wooden pallets to break up the pattern. I finished the trunk by mounting a machine gun on top of it, which adds a lot to the silhouette by letting the ammunition swing a little bit around the car shape. To add some more story and depth to the front of the model, I made a gigantic crack in the window. That way players would be interested to look inside the vehicle. The inside was finished by cutting out a piece of the dashboard and steering wheel so that I could add some airbags. By making these separate objects, you would still be able to close those shapes.

Just to make it even clearer, I made another very quick version of the same vehicle, which shows how quickly you are able to turn a vehicle into another storytelling product. Take a closer look at what happened in the change of the silhouette. I could have gone way further with making a "new" vehicle out of it. Adding different wheel rims can make a big difference. Most of the time, you make a separate UV map for these, so you don't lose time adjusting the full texture of the vehicle.

VERTEX


Storytelling

259

Useful Tools & Speedup Tips

Trainmaker: Making correct tank tracks can take some time. A good script for this is Trainmaker 1.5, which can be found here: http://www.scriptspot.com/3ds-max/scripts/trainmaker. Working with this script is super easy. Let's go over some basic steps. You start by making your track piece and applying an XForm modifier. Then, you make a spline that will form the shape the track will move along. Model one piece of the track and put the pivot at the rotation center. Finish it off with an XForm, just to be sure. After that, run the Trainmaker script; from that point on, you may have to tweak some values. It's a very clear script to use, so enjoy it.

No Blockouts: I am too lazy to make blockouts for professional and free-time projects. Is this a bad thing? Yes, but it just doesn't work for me to block my meshes out. I see concept art as a sort of blockout; you have to make that design better. To comfort myself, I always start working on a really interesting area of a vehicle or a detailed part. That way you're pumped to keep working on it and you can figure out the style you're going for. So, I start on one specific area of a vehicle and make it as close to Alpha quality as possible. I can bring my design over to the team and show them the vision of the vehicle, instead of having to say, "This is just a blockout. Imagine details being in there." Instead of "wasting" my time on a blockout, I put it into getting the correct shape.

VERTEX


Storytelling

260

I search for some big images to see how the light reacts on them. Overpaints are super handy to do before you start modeling, as they can help you get the flow correct. This works for low poly and SubD, but is SUPER important for highpoly. You just work out how the flow of the vehicle goes, and that way you know how your loops are supposed to go. These days I don't bother doing those on paper or in Photoshop, but I still do it in my head, without even noticing it. It's the best way to start on your vehicle. You just don't want to notice, in the middle of your project, that the flow of the vehicle doesn't match your reference and you have to restart. Or, you could draw them over real vehicles, but that's less safe. Note, this one is not as clean/correct as it should be, but it was too cool not to include.

Starter Tip: If you want to start getting into vehicles, don't rush. Start with understanding flow and elegance. You don't even have to start with SubD models - not at all. Just make some fast studies every day of just car bodies. Aim for 2,500 triangles and try to catch the characteristics of that vehicle, because that's something a lot of people miss. Start with this and it will get you further than working on a highpoly car for more than four months. Here's an example of some of my old studies from three years ago.

VERTEX


Storytelling

261

Pro Optimizer: Sometimes you spend a lot of time sculpting something really detailed and you would like to keep it for presenting your SubD model to the industry, but working with a non-decimated sculpt is not always possible. This is what I do to fix it: I export a detailed version of my sculpt, import it into Max, and the second it's in there, I apply a ProOptimizer modifier and set it to 10%. That way, my scene still runs smoothly. At the end, I am able to turn the modifier off and render my presentation shot in full glory.

Patterns: If you don't want to waste too much time on patterns, you can always rely on Photoshop and ZBrush. You just search for a "non-perspective" texture, convert it to a bump texture, and apply it as an alpha with a drag-rect brush in ZBrush. Some quick examples you can use this for are tire tracks, cracked glass, etc.
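The grayscale conversion for the Patterns tip can also be roughed out in a few lines before you open Photoshop. Here is a minimal sketch, assuming Pillow; the file names are placeholders, and the autocontrast step is just one way to push the values toward a usable ZBrush alpha:

```python
# Minimal sketch: turn a flat, non-perspective photo into a grayscale image
# that can be loaded into ZBrush as an alpha for a drag-rect brush.
# Assumes Pillow is installed; file names are placeholders.
from PIL import Image, ImageOps

src = Image.open("cracked_glass_flat.jpg")    # flat reference photo
alpha = ImageOps.grayscale(src)               # single-channel bump/height guess
alpha = ImageOps.autocontrast(alpha)          # stretch values so the alpha has punch
alpha.save("cracked_glass_alpha.png")         # load this as an alpha in ZBrush
```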

VERTEX


Storytelling

262

Conclusion

Hopefully, you now have a slightly different vision of vehicles. They can be just as interesting as characters or environments. It's "really easy" to make a vehicle, render it, and put it in your portfolio, but most people will go: "oooh, another vehicle." You don't want that - especially for a job as specific as this. It's all about the level of personality you give them. Every vehicle I make is like a child to me. I try to give it personality, a face, so people see it as something more than an empty chassis with wheels. In the next shot, you even see a funny example of this: I just combined two old vehicles and got a new one.

VERTEX


Storytelling

263

About Me

My name is Matthias Develtere. I am currently living in Sweden and working at MachineGames as a Junior 3D Artist. I'm originally from a very small town in Belgium that nobody has heard of, so as a result I went to a very small digital/entertainment/art school that didn't have any idea what the game industry was or how it worked. There just wasn't the kind of passion you need to make it in this hard world. The only option for me was going to the library and making vehicle studies in every break I found. I didn't make vehicles because I liked them or anything, but because I was told they were the hardest thing to make in the hard-surface branch, so I just went for it. When I told people I was working as a 3D artist, they were always super enthusiastic, until I told them that I was making vehicles. I always got the same reply: "oooh, just vehicles." For me, my goal became clear: I would make vehicles that got seen as "more than just a vehicle." After a while, I started loving this. To me, making vehicles in this breathtaking industry is the best job in the world. I always start with the same thought in my mind: this needs to be more than JUST a vehicle. Every flow/line has to be elegant and done with a lot of care. I probably sound silly, but I could be tweaking a couple of polygons for hours just to make sure they catch light in the correct way. Every line and every surface is designed with a purpose; nothing is done without a reason. If an artist takes a minute to check out my vehicle, then I have accomplished my goal. If a gamer takes a minute for it, or takes a bullet for it, then I am completely sure I did a good job. I would like to end with thanks to Laurens Corijn, Tor Frick, Jobye Karmaker, and my parents for being a big inspiration for me.

Matthias Develtere

w w w. d e v e l t e r e m a t t h i a s .w o r d p r e s s . c o m

VERTEX


264

David Lojaya

www.artstation.com/artist/david_lojaya


265

Christine Gourvest

www.artstation.com/artist/christinegourvest


Challenge

266

Challenge

Designing to Increase Challenge (Not Punishment) by: Drew Rechner

Introduction

Games, amongst other things, are incredible teachers. When making a game, we are inherently asking the player to step into an alternate universe, learn its rules, practice conquering challenges, and master the challenges presented. This process, even if completely isolated from any other extraneous reward systems, can be an extremely rewarding experience for the player. Almost any design accomplishes exactly this to some extent without much effort; however, the best designs are able to continually introduce new and sometimes more complex challenges to the player without adding too much frustration to the player's experience. Complexity can play a major role in determining the duration between the introduction of a new mechanic or challenge and the mastery of it. This generally comes at the expense of accessibility, so most game designers separate and slow the introduction of new challenges until they believe the player has been given adequate time to practice and even master them. Introducing a new mechanic too soon may lead to frustration, as many games tend to build challenges based upon the past experiences of the player.

VERTEX


Challenge

267

Each new challenge builds upon the player's past experiences. Of particular personal interest to me is the topic of enemy AI, which is sadly often left out of this methodology. Quite often, the player will encounter all the enemies the game has to offer within the first several hours of a game. The problem with this is that it allows for stagnation later in the game, as the player has few challenges left to master. Level designers (and occasionally players) are then often burdened with "exotic" content in an attempt to keep the player on their toes and to prevent them from becoming too bored and losing interest in the game. As a cost-effective alternate solution, many games simply increase the damage and health of the enemies. While this solution does provide somewhat of a new challenge for players--successfully hit the target more without getting hit back--it severely deprives the player of the learning challenge they desire. Most importantly, however, this solution increases the punishment, not the challenge, for the player.

Challenge vs. Punishment

It's important to understand the difference between something that is challenging and something that is punishing. Generally speaking, a punishing game is frustrating and not very fun, whereas a challenging game can be a lot of fun; punishment feels unfair, while challenges are almost always fair. In punishing games, the player is often left with the feeling that they are unequipped with the knowledge of how to deal with the situations the game offers. On the other hand, challenging gameplay can test the mental and/or physical response of the player, but is fair and obvious regarding its expectations of the player. Most importantly, the line between a challenging experience and a punishing experience is often drawn at whether the player was provided with enough information to act appropriately in response. This highlights the importance of clearly showing, or signaling, to the player the challenge that awaits them. For example, if a visibly heavily armored enemy approaches the player, the player can expect that the enemy will take a much greater beating than their previous foes. Note that it's very important that any signals provided to the player be as unambiguous as possible, so the player can easily predict a cause-and-effect relationship between their actions and the response of the challenge. Following unambiguous signaling, the player requires clear feedback when they have attempted to interact with a challenge: success or failure. With clear feedback, the player can understand whether they should continue with their course of action (which they performed based off the signaling presented to them) or attempt to change their methods in order to succeed.

VERTEX


Challenge

268

Toolbox

Unambiguous signaling and clear feedback are always incredibly important, but their importance becomes even greater when introducing a new challenge to the player, since they will not have prior experience with the exact challenge being presented. The good news is that the toolbox for signaling and feedback is vast, with nearly limitless options. In the limited examples below, I'll focus on enemy AI in a first-person shooter, though this can be applied to any challenge. In the first example for signaling, assume the intention is to introduce a new enemy AI to the player:

Signaling Example

VERTEX


Challenge

269

A clear silhouette can easily signal a new type of challenge.

In the example for feedback, assume the challenge is shooting and successfully damaging an enemy AI:

Feedback Example

VERTEX


Challenge

270

Building an Example

Now that we've covered the multitude of ways to provide signaling and feedback to the player, let's take a pretty basic enemy seen in a lot of first-person shooters and demonstrate the ways in which we can introduce new challenges to the player. In this example, our enemy mainly fights from cover, occasionally popping out to shoot at the player with a mid-range weapon. The base challenge we're asking the player to master is one of both dexterity and intellect: shoot the enemy as he is emerging from cover and avoid getting hit once he starts firing. It is this base challenge that will remain and act as the foundation upon which all other new challenges are added. This base challenge should keep the player occupied and happy for a short while, but after a few iterations of practice the player will get close to mastering it. Now that the player is primed and waiting for something new, we can introduce it: the enemy will now start throwing grenades at the player in an effort to flush them out from their cover. This new challenge adds on to the basic cover-shooting challenge and requires the player to practice their mobility and combat-positioning tactics. After the player has had a chance to practice against this upgraded enemy, we can slowly add additional elements to increase the challenge, such as quicker reactions to the player's actions, the ability to throw back player grenades, and the ability to flank the player.
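To make the layering idea concrete, here is a toy sketch, not taken from any shipped system, of how an enemy's abilities might be expressed as a base set plus additions. The tier names and abilities are invented purely for illustration:

```python
# Toy sketch of the layering idea: the base cover-shooter behaviour never
# changes, and later encounters only add abilities on top of it.
# All names and tiers below are made up for illustration.
BASE_ABILITIES = ["use_cover", "peek_and_shoot"]

ENEMY_TIERS = {
    "grunt":     BASE_ABILITIES,
    "grenadier": BASE_ABILITIES + ["throw_grenade"],
    "veteran":   BASE_ABILITIES + ["throw_grenade", "throw_back_grenade", "flank"],
}

def spawn_enemy(tier):
    """Return the ability list for this encounter's enemy."""
    return list(ENEMY_TIERS[tier])

print(spawn_enemy("grenadier"))  # ['use_cover', 'peek_and_shoot', 'throw_grenade']
```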

VERTEX


Challenge

271

With all of these new elements added to the fight, we can ensure that the player will be constantly learning and being challenged, which will undoubtedly lead to the player having fun! It is incredibly important to note, however, that unambiguous signaling and clear feedback must be utilized for each new layer that is added to avoid frustration.

Importance of Rewards

It would be irresponsible to not at least briefly mention the importance of adequately rewarding the player for overcoming each new challenge. While conquering challenges can be an incredibly rewarding experience on its own, it should not be left in isolation under most circumstances. Coupling the successful completion of an encounter with an adequate reward reinforces the concept of the player succeeding. Ideally, rewards can scale with the player’s performance and challenge level, though this may not always be possible.

Conclusion

Games need to continually challenge the player in new ways to hold their interest for long; however, it’s important to identify a good pace of introduction, unambiguous signals, and clear feedback when presenting new challenges to the player to avoid the slippery slope leading to a frustrating experience. While the process of creating and tuning these elements can be difficult and time-consuming, it’s almost certainly worth the effort.

About Me

My name is Drew Rechner and I'm a game designer currently working at Ubisoft Massive. Previously, I worked on Section 8, Section 8: Prejudice, and Aliens: Colonial Marines while at TimeGate Studios. I have a strong passion for designing and implementing combat and AI systems for games. In addition to my professional career, I remade Baldur's Gate as a mod for Neverwinter Nights 2 and am currently in the process of remaking Baldur's Gate II: Shadows of Amn for Neverwinter Nights 2. Outside of games, I love playing football (soccer), lifting weights, traveling, hiking, cooking, eating new and interesting food, drinking good beers, making cocktails, and generally spending time with my amazing wife.

Drew Rechner

w w w. l i n ke d i n . c o m / i n / d r e w r e c h n e r

VERTEX


272

Nestor Carpintero

www.artstation.com/artist/nestorcarpintero


273

Karakter Design Studio

www.karakter.de


Set Dressing

274

Set Dressing

Using Set Dressing to Sell the Believability by: Devon Fay

Introduction

(fig.01)

Successfully set dressing an environment should be more than just filling a room with great-looking props. When you set dress, your props should be used to emphasize the story of your environment. When done well, these stories will help sell your scene and add to its realism and believability. There are many considerations to take into account when trying to sell believable environments. Thought, preparation, planning, and constant adjustment will all have to be used. In this article, we will take a top-down look at the steps I take when approaching the task of set dressing an environment. I will use the above image (fig.01) as an example for many of the sections, but these steps and thought processes are used when I approach any new environment.

Prop Creation

Once your environment is decided on and blocked out, it's ready for set dressing. The next step is creating the props that will be used. It's hard to place props and complete a scene if you don't have anything to place! For me, starting with an initial "asset list" to help decide what needs to be made is a good first step. I tend to use lots of photography, movie, TV, and other media reference to help inform the asset list, keeping in mind my initial concept and theming. Set dressing works more smoothly when I can add more than one prop to the scene at a time. Spending time to create multiple props before placing them will lead to a clearer idea of where things should go and should lead to less wasted time adjusting large amounts of props in the later stages of set dressing. For our example (fig. 01), I used movies such as Alien, 2001, and other classic sci-fi films as reference and inspiration for building out my asset list.

VERTEX


Set Dressing

275

(fig .02)

Prop Diminishing Returns

When actually creating a prop, it's important to understand where and how it'll be used. Things like how much detail it will need, how much time you should spend on it, roughly how far it is from the character/camera, and the angle it is seen from should all be noted. Once you have this information, you can make a better decision on how to approach its creation. It's common to find yourself over-engineering a prop, whether it be for a game, a movie, or a personal piece.

(fig. 03)

Doing this not only adds up to wasted time during the creation process, but can very well lead to massive amounts of wasted time during rendering (for prerendered environments) or optimization (for game environments).

VERTEX


Set Dressing

276

In my example scene (fig. 01), and seen closer above (fig .03), you can see that most of these props wouldn’t stand up under close scrutiny and could not be used as “hero props”. Most have “dirty” geo, stretched, missing, or repeating textures, and even floater textures for logos and details. However, I knew how they would be used in the scene, so I was able to model them quickly and efficiently. To clarify, I’m not saying to make bad looking props. I’m saying you should know how they will be used and seen, and to then use your best judgment to efficiently make them look great. Understanding the distinction of when something is good enough comes with experience, planning, and practice. This concept is a great way to speed up your professional and personal workflows.

Initial Set Dressing and Telling a Story

Now that we have some of our props created, it's time to decide where and how to place them. This tends to be one of the most time-consuming processes when creating an environment, but it's also one of the most important steps. Once again, reference from everyday life, movies, and photography is useful when deciding where to place your objects. Observing how clutter naturally tends to build up in offices, living rooms, or restaurants is helpful for deciding how to dress your scene. You should avoid too much randomness and repetition. Instead, focus on placing props where they would naturally end up: papers, notes, and books on and around desks; cups, plates, and trash near eating areas. Keeping a real-world logic to the placement of props will help keep the viewer invested in your environment. We don't want to just naturally dress our scene, however. We want to try to dress it in an ideal way.

What that means is, we want it to be as interesting and pleasing to view as possible. When done well, we can use this step to really help sell the realism and interest of our environments. Using well-made photography and movie reference can be a great way to get a sense of how this can be done right. Remember, we have spent a lot of effort composing our shot, using the ideal lighting, camera angle, and setting to create a compelling image. Applying that same effort while placing our props can be just as important. Tell a story with your props. Avoid just haphazardly slapping props all over your environment. Take some time to think about why this prop would be in that area. How did it get there? Who left it there? Is it out of place or is it put where it should be? Thinking about these things will lead to a more natural-feeling scene and also add that subtle feeling of realism. If done well, it will help the viewer accept the space as believable, because on some level they will understand how the space was being used.

(fig. 04)

VERTEX


Set Dressing

277

In my example image (fig. 01), care was taken to tell little mini-stories with my prop placement: having a chess game be in mid-play, having a deck of cards be knocked over (fig. 04), maintenance being half done and then abandoned in the hallway. These all give a small amount of interest and believability to the area and tell small stories. I also took care to be sure that things like empty beer cans and food containers are in areas that make sense. They are mostly found near the kitchen and booth, but not in bookshelves or on the floor. Effort was made to tell interesting stories with the placement, but to also keep it feeling natural. Combining those two aspects is something that can take an interesting scene to the next level.

(fig. 05)

This example shows how duplicating a lot of the same props can cause issues in your scene. In this case: it was easy to duplicate the beer cans around to fill the scene. The side-effect is that now it looks like a slobbish alcoholic lives here, which is not the intent of the image. Remember, your props tell a story.

Overall Composition

Let's recap: we have the general theme of our area, we've spent some time researching props and filling out a bit of a prop library to start using, and we've started to place them throughout the scene, keeping in mind the most natural and interesting places for them to be. What now? Never forget the importance of the overall scene composition. It should always be a high consideration when building, lighting, and set dressing your scene, and every placement decision should be a little piece working towards that end. It's fairly easy to start filling up every surface with props and detail, but this can quickly get out of control. Even if the structure (the walls, windows, doors, and other initial layout) of the scene is planned and laid out well, over-cluttering can easily lead to a flatter-looking image or a confusing composition. It's important to have unused areas and areas for the viewer's eye to rest. This means having some surfaces with less or no props on them at all, or some areas with props that have less "visual weight" to them.

Visual Weight

(fig. 06)

VERTEX


Set Dressing

278


(Compare our can to the interesting shapes and color contrasts to common beer cans today)

What is “visual weight”? One way to think of a prop’s “visual weight” is by how interesting it is for the viewer. The more interest, the higher visual weight. The weight of a prop can depend on its complexity, its texture details, and its overall contrast. As with composition, in general, props with high detail and high contrast tend to draw the attention of the viewer more. Understanding and utilizing those differences is an important process to creating an interesting, believable image. The props in the previous example (fig. 06) have very different “visual weight”. The bonsai tree, for instance, was meant to draw attention to itself; each leaf is individually modeled, the trunk has a complex sweeping shape, there is a bright white light providing high contrast; it’s also one of the only green colors in the scene, and it’s the only living thing in the room.

(fig. 07)

Basically, this prop demands attention. Because it demands attention, we need to be careful where we place it and be sure that it's working towards our overall compositional goal. Let's compare that to a prop with less "visual weight": the cans of beer. The shape is just a cylinder; it has very little complexity to it. The colors are very similar to many already used in the rest of the scene: light shades of grey and tan. Even the texture is fairly low contrast overall. With these cans, I can group many of them together and they still don't have the "visual weight" that the single bonsai has. I wanted them to be less interesting because I knew I would be using many of them throughout the scene. If I wanted the cans to have more visual weight and impact, I could have chosen something more interesting, like some of the beer cans found today (fig. 07). Training your eye to see and use the discrepancy in prop weight will become invaluable when you start making decisions on your overall composition.

Tertiary World Building Details

The last thing I want to talk about is a more subtle thing to consider while selling believability and storytelling with your props. I call these the "tertiary world-building details," mostly because I don't know what else to call them. These don't normally add to the overall read of your environment or even help with the composition. They are much more of a polish step and are best used once you've already put in the effort to nail the other aspects of creating a scene. Some examples of these: have period/genre-relevant book titles on the shelves. Have a chess board or other kind of game? Look at famous games played and recreate an actual turn as opposed to randomly placing pieces. Use labels and business names that people recognize or that give a nod to the overall story you are telling. Have paperwork, receipts, computer screens, etc., all be related and connected. Many small details like this were used to an extreme in the example image (fig. 01), partly because it was an homage to classic sci-fi, but also because these small details add recognition and believability to the overall feeling of the scene.

VERTEX


Set Dressing

279

Conclusion

(fig. 08)

As with any guideline applied to art or creativity, these will not always be correct. There are always exceptions to the rule. It is, however, still very important to take time to stop and analyze these options. You want to be sure that you are making the best possible choices for whatever the goal of your environment is and avoid falling into easy mistakes. So, before you place that next beer can prop, be sure to ask yourself: "Who left this here?"

About Me

After graduating from the Gnomon School of VFX, I started my career in Blizzard Entertainment's film department. I began working on matte-painted environments for projects such as StarCraft II: Wings of Liberty and World of Warcraft: Cataclysm. I then continued my work at Blizzard as a matte painting lead for Diablo III, World of Warcraft: Mists of Pandaria, and StarCraft II: Heart of the Swarm. While constantly seeking new challenges and techniques, I decided to transfer my career to real-time environments for games. Currently, I am working at Infinity Ward as a Senior Environment Artist. Coming full circle, I have returned to the Gnomon School of VFX as an online instructor, bringing my unique perspective to environment creation, drawing from my experience in both pre-rendered and in-game environments.

Devon Fay

w w w. a r t s t a t i o n . c o m / a r t i s t / m a l i b u b o b

VERTEX


280

Mohammed Mukhtar

www.artstation.com/artist/mohzart


281

Stephen Todd

www.artstation.com/artist/stephentodd


Service Industry

282

The Service Industry

Life as a Technical Animator by: Matthew Lee

Introduction

Technical Animation's goal is to advance animation, animation systems, and animation pipelines. We are also tasked with making animators' work shine through our own knowledge and expertise in development practices, systems, and software. Inspired by Ali Mayyasi's great article back in issue 1 of Vertex (The Glue, Life as a Technical Artist), I will lay out, in a similar fashion, the day-to-day of Technical Animation within the games industry.

Character Pipeline

Technical Animation plays a huge role in the quality of animation that a project is able to achieve. Since we develop a lot of the systems that allow an animator to give a character "The Illusion of Life", it is important that we are involved in all aspects of the character development process. Our goal is to achieve the best possible deformation with the tools and systems available to us. Model Topology: Understanding how a collection of vertices will deform when skinned is not often considered by 3D artists. However, as the delivered asset will form the foundation of what our animators are able to achieve, it is very important. Giving feedback to artists when an asset is delivered is vital before continuing down the character pipeline. Usual things to check for are: good edge loops, no isolated polys, a watertight mesh (if needed), polycount, a clean mesh (i.e. it contains no history/deformers/modifiers), and scene cleanliness. Another potential problem can be textured clothing seams; if these are in a place that will deform a lot (the shoulder/clavicle are prime examples), it's a good idea to place edge loops on either side to allow skinning to keep the desired deformation without stretching the texture. Recommended reading: Brian Tindall's The Art of Moving Points.
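Some of these delivery checks are easy to script so they run the moment an asset lands. Below is a rough, assumption-heavy sketch using maya.cmds (it only runs inside Maya); the triangle budget and mesh name are placeholders, not studio standards:

```python
# Rough delivery-check sketch using maya.cmds; only runs inside Maya.
# The triangle budget and mesh name below are placeholders.
import maya.cmds as cmds

def check_delivery(mesh, max_tris=80000):
    problems = []

    # Clean mesh: no leftover construction history, deformers, or modifiers
    if cmds.listHistory(mesh, pruneDagObjects=True):
        problems.append("mesh still carries construction history")

    # Polycount against the agreed budget
    tris = cmds.polyEvaluate(mesh, triangle=True)
    if tris > max_tris:
        problems.append("triangle count %d is over the %d budget" % (tris, max_tris))

    # Scene cleanliness: transforms should be zeroed out on delivery
    if any(abs(v) > 1e-4 for v in cmds.xform(mesh, query=True, translation=True)):
        problems.append("transform is not zeroed out")

    return problems

print(check_delivery("bodyMesh_geo"))  # hypothetical mesh name
```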

VERTEX


Service Industry

283

Anatomy: A strong understanding of the human form and the systems within the body is vital to creating the core of our character rig. Knowing why a joint sits in the place it does and the effect it has on the surrounding muscle groups, connecting joints, and skin will greatly help us achieve realistic and stable deformation. This understanding of anatomy can elevate a good Technical Animator to a great one, and will greatly raise the quality of animation possible on a production. Life drawing classes are a great way to improve your understanding of anatomy. Recommended reading: George B. Bridgman's Bridgman's Life Drawing.

Skinning: Skinning is the process of binding vertices to joints to create smooth deformation. In the games industry, we usually allow up to eight joint influences to affect any one vertex. Switching between the skinning process and the joint placement process can be a tiresome task; however, as with getting the topology correct, laying down a good foundation here will save a huge amount of time later when trying to fix deformation problems through the rig. It is good practice to gradually refine your skinning over multiple iterations, starting extremely rough and each time improving the level of polish. If you find yourself unable to achieve a desired deformation at this stage, try going back a step and repositioning the joint. If you still can't achieve the result that you want, there are still a couple of options available later in the pipeline. Deformers: A deformer is a way to drive/sculpt components of a mesh using mathematical calculations. As mentioned previously, when we are unable to achieve a desired deformation through skinning alone, we can use or write a deformer. Deformers are usually Digital Content Creation (DCC) specific, expensive, and incompatible with most game engines - as such, they are usually avoided. However, with point cache formats gaining popularity, it is possible to bake out results and import them into some engines. A recent example of this can be seen in the great VFX work done on Crytek's Ryse. It is important to note, however, that presently this is not the industry standard, and usually we will create a rig to give the desired deformation through joints and skinning.
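As a small, engine-agnostic illustration of that eight-influence limit, here is a sketch of the pruning and renormalizing step that skinning tools typically perform. The joint names and weights are made-up example data, not taken from any particular rig:

```python
# Keep the eight strongest joint weights on a vertex and renormalize so they
# sum to 1. The weight dictionary below is invented example data.
def prune_influences(weights, max_influences=8):
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:max_influences])          # strongest influences only
    total = sum(kept.values()) or 1.0
    return {joint: w / total for joint, w in kept.items()}

vertex_weights = {
    "spine_03": 0.30, "clavicle_l": 0.25, "upperarm_l": 0.20, "neck_01": 0.10,
    "spine_02": 0.05, "upperarm_twist_l": 0.04, "head": 0.03, "spine_01": 0.02,
    "pec_l": 0.01,
}
print(prune_influences(vertex_weights))  # drops the weakest influence, renormalizes the rest
```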

VERTEX


Service Industry

284

Rigging: Regardless of which DCC application is used, the same truths hold for all rigs. As the title of the great tutorial series suggests, Animator Friendly Rigging is our focus. We strive to deliver user-friendly, well-deforming, and stable rigs. If the previous steps in the process have gone well, this part of the job is usually a lot of fun. We want to give animators both flexibility and choice in how they animate, and we have a wealth of options open to us. It's useful (as suggested in the aforementioned tutorial series) to break the body up into separate parts and design a specific solution for each part. Usually, I will separate a rig into: torso, arm, hand, leg, foot, neck, and head. We now also have a chance to revisit some of the areas we were not happy with in the previous steps and apply a rigging solution. Typically, these are twist joints, roll extractions, and scapular deformation, among many others. Mocap: Since the design of the character's rig and skeleton can have a big impact on how motion re-targets, making sure Animation Tech and the mocap group are on the same page is vital. Having a plan for marker placement (body/face/prop) well before the shoot will ease the process and is usually much appreciated by the technicians at the studio. Simply having a picture available of the character with joints or markers visible is usually enough. I like to think our job here is to let the Animation Director focus completely on directing, while we take care of any technical requirements important to our own pipeline. Game Engine Animation Systems: The product of all the above steps leads us to the final and most exciting stage in our pipeline: seeing our character in game and animated. While most animation systems will be set up by the animation department, having a fundamental knowledge of animation trees, animation blending, and animation compression techniques allows Tech Animators to offer continued support at this stage of development.

VERTEX


Service Industry

285

Tools

Most animators want to focus entirely on their craft, rather than become involved in the technical side of game development. This is important to bear in mind when developing any type of tool or pipeline that an animator will use frequently - the technical knowledge of the end user cannot be assumed. It's also important that our tools and code are understandable to any other user that may want to extend, modify, or reuse parts of them. Think intuitive and robust when designing any tool and you can't go far wrong. Animation Tools: While most DCCs contain a fair amount of animation tools, it is essential to be able to offer animators bespoke tools to help with their workflow or pipeline needs. These can range from animation exporters, character loaders, and asset browsers to anything that can improve the animator's workflow. While the tools may vary in scope and use, it is important to keep a unified look between tools for both readability and ease of use. It is also good practice to put all reusable code into libraries, meaning that other Technical Animators/Artists can easily write new tools without having to rewrite code. Python packages are perfect for this, as shown in the small sketch after this paragraph. Recommended reading: Mark Lutz's Learning Python. Deformers: As mentioned previously, deformers can be a great way of achieving deformation not possible with only joints and skinning. If a deformer does not exist in a DCC, we can usually write one as a plugin. Advances are currently underway regarding DCC-specific deformers. Fabric Engine is a standalone development framework that allows Tech Animators/Artists to develop cross-DCC solutions using their Splice API - this includes deformers. A combination of this and the previously mentioned point cache practices will significantly increase the quality of deformations and simulations we can achieve within a game engine.
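As a tiny example of the shared-library idea, here is a sketch of the kind of helper that might live in a studio Python package so every tool resolves names the same way. The naming convention and function names are invented for illustration, not an existing pipeline:

```python
# naming.py - hypothetical module inside a shared studio package.
# The "part_side_index_type" convention below is made up for this example.
SIDE_TOKENS = {"l": "left", "r": "right", "c": "center"}

def split_name(node_name):
    """'arm_l_01_jnt' -> ('arm', 'left', 1, 'jnt')."""
    part, side, index, node_type = node_name.split("_")
    return part, SIDE_TOKENS[side], int(index), node_type

def mirror_name(node_name):
    """Swap the side token so a tool written for the left side finds the right-side node."""
    part, side, index, node_type = node_name.split("_")
    flipped = {"l": "r", "r": "l"}.get(side, side)
    return "_".join([part, flipped, index, node_type])

print(split_name("arm_l_01_jnt"))   # ('arm', 'left', 1, 'jnt')
print(mirror_name("arm_l_01_jnt"))  # arm_r_01_jnt
```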


Tool Deployment: Tool deployment is something that most people working day to day in a DCC application will encounter. It is important for Technical Animation/Art to be able to quickly and easily provide solutions to artists while minimizing the amount of support required for deployment, setup, and use. This is even more valuable when working between multiple studios that can be separated by distance, time zones, and language. With the adoption of Python into most current DCCs, it is not only possible but relatively easy to have a shared tool distribution system between all DCCs. A simple combination of Python packages, environment variables, and the NSIS installer can make an incredibly simple but effective tool distribution system.
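A minimal sketch of the environment-variable half of that idea: an installer (NSIS or otherwise) sets a variable on each machine, and a small startup script adds the shared package location to sys.path so every DCC picks up the same tools. The variable name and fallback path below are assumptions for illustration only:

```python
# Startup-script sketch: add the shared tools location to sys.path.
# STUDIO_TOOLS_PATH and the fallback directory are hypothetical.
import os
import sys

tools_root = os.environ.get("STUDIO_TOOLS_PATH", r"C:\studio\tools")
if os.path.isdir(tools_root) and tools_root not in sys.path:
    sys.path.insert(0, tools_root)          # shared packages become importable in any DCC
    print("Shared tools loaded from %s" % tools_root)
else:
    print("Shared tools not found; check STUDIO_TOOLS_PATH")
```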

Conclusion

While this article in no way covers everything that comes our way, I feel that anyone interested in getting into Technical Animation can use it as an insight into what we do. Some people specialize completely in one of these areas, while others choose to be more generalized. I hope that my article complements Ali Mayyasi's in Vertex 1, and also highlights the differences between Technical Animation and Technical Art.

About Me

Originally from Leeds, UK, I trained at Animation Mentor before starting my first job as an animator at Crytek in Frankfurt, Germany. After getting more into the technical aspects of animation, I then trained at Rigging Dojo. Now working at Ubisoft Massive on Tom Clancy’s The Division, I have been working in the games industry for around 7 years. In that time, I’ve been fortunate to work with and learn from many extremely talented people. I love working in this industry because of the amazing projects we get to work on, the teamwork involved in bringing those projects to life, and the sense of community we have within our industry. GGWP!

Matthew Lee

w w w. m a t t h e w l e e a n i m a t o r. b l o g s p o t . c o m

VERTEX


286

Mark Van Haitsma

www.artstation.com/artist/mvhaitsma


287

Simon Barle

www.artstation.com/artist/simonbarle


Iron Bull

288

Iron Bull

Dragon Age: Inquisition character art by: Patrik Karlsson

Introduction

Here, I will be going through some of my methods and techniques for tackling a highly customizable character while on a tight time frame. I wrote this with the intermediate artist in mind, so having some knowledge of modeling, texturing, and general art terminology is highly recommended. I will be taking you through my process of working in Maya, ZBrush, Photoshop, etc., but you can certainly tailor it towards whichever software you prefer. When working in the gaming industry, the workflow changes a lot depending on the project, team structure, the limitations of technology, the time frame, and a whole array of other things. The way to be successful is to learn and adapt to your environment, while maintaining focus on your final objective.

Set Up

For this particular character, I worked very closely with the concept artist and it was extremely rewarding. The general idea was to make a heavier armor set. As this was another outfit for an already established character, that made things easier. Designing the character while building it at the same time, however, made it a bit trickier.


After getting a few rough concepts, I started getting the bigger shapes done in Maya. I was very curious about what kind of functionality it would be possible to get out of it. Making armor work on a proper human is one thing, but having the limitations of the joints/rig/animation for in-game use is an entirely different matter, especially if there is a lot of armor that needs to go on there. With that in mind, the character needed to have a lot of geometry added for each progression to give the player that big payoff.

As you can see here, the leg armor plates look great in the concept, but when implemented on the game mesh they do break quite a bit. This is with the plates both stacking on top of each other and sitting behind each other. As there was no possibility of adding extra joints there, we had to come up with a solution/design that would work better. What we decided to do instead was to have the top part be all leather with a smaller metal plate, and a larger metal part on the lower end where there wouldn't be much deformation, as it would be skinned to the femur bone/thigh. Also, just moving it slightly lower helped a lot. These changes ended up working extremely well without the need to add any extra joints or animations, yet maintained the original idea. I come from an animation/skinning background and that is extremely helpful, but even with that in mind, it can be quite tricky to predict how some of the shapes will behave in certain poses/animations. Making sure the functionality of the armor and its progressions works at an early stage is extremely important, because changes at a later stage will most likely be very costly, both in design and time. Make sure to have a nice Tech Artist handy for those early quick tests!

Modeling Process


When doing this piece, to minimize downtime, it was important to separate the workload into parts. I would rough in, for instance, the legs and check with a TA that they were working, then pass them off to a concept artist for detail work and tweaks while I worked on the torso and shoulders. Once that part was done, I would get back to it and pass off another one. It's a bit of a juggling act, but it keeps everyone in the loop and ensures a consistent result. Plus, it's a lot of fun and makes you feel like you are truly working in a team environment with good ideas flowing.

Whenever I rough things in, I often do it in Maya first (depending on the character). On this character, a lot of the armor is mirrored to save texture space. The parts that are mirrored I did in an exploded state, with groups that were instanced (a good tip here is to always keep them at full values if possible). So at keyframe 0 everything was in the exploded state, and at keyframe 1 everything was brought back into its proper position. That way I can, with minimum effort, have modifications on one part carry over to the other side. Plus, when making the high-fidelity detail later on, I could make sure that some of the scratches and bumps were not too obviously mirrored. By doing this, I could always keep it perfectly centered, as the group's original position would be dead on center in both scale and rotation. This is something I came up with on the fly while making this specific character, and I quickly realized later on, when making the low res, that it would be extremely useful.
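A rough Maya Python sketch of that exploded/assembled instancing setup is below. It is only an illustration of the idea, not the author's actual scene script; the object names and offsets are placeholders, and in practice the groups and geometry are built by hand.

    import maya.cmds as cmds

    # A group holding the authored armor part on one side.
    src_grp = cmds.group(empty=True, name="armorL_plate_grp")
    plate = cmds.polyCube(name="armorL_plate_geo")[0]
    cmds.parent(plate, src_grp)

    # An instanced copy for the other side: edits to the shape carry over,
    # but the group transform can be mirrored and keyed independently.
    mir_grp = cmds.instance(src_grp, name="armorR_plate_grp")[0]
    cmds.setAttr(mir_grp + ".scaleX", -1)

    for grp, assembled_tx in ((src_grp, 10.0), (mir_grp, -10.0)):
        # Keyframe 0: exploded working state, group sitting dead on center.
        cmds.setKeyframe(grp, attribute="translateX", time=0, value=0.0)
        # Keyframe 1: everything snapped back into its assembled position.
        cmds.setKeyframe(grp, attribute="translateX", time=1, value=assembled_tx)

Scrubbing between frame 0 and frame 1 then toggles the whole set between the centered working layout and the assembled character.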


As I am actually key-framing groups and not the geometry itself, I can build my lowpoly in the center/mirrored. Then, once it's done, I move it over to the keyed group for that part, go to keyframe 1, and it is positioned in the correct place. This made my in-game mesh building process extremely fast and precise. I always recommend keeping your layers nice and neat to keep track of all this. In this case, I had a layer for each progression and named them accordingly.

Having everything visible is good for seeing how things work in relation to each other, but could make you miss some detail behind or underneath. Hiding parts manually each time is too time consuming and definitely has a higher chance of error attached to it, rather than just switching a layer on/off.
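As a small illustrative sketch of those per-progression display layers in Maya Python (layer and object names are placeholders, with stand-in geometry so the snippet runs on its own):

    import maya.cmds as cmds

    progressions = {
        "tier1_LYR": ["tier1_chest_geo", "tier1_legs_geo"],
        "tier2_LYR": ["tier2_chest_geo", "tier2_legs_geo"],
    }
    layers = {}
    for layer_name, members in progressions.items():
        for m in members:
            if not cmds.objExists(m):
                cmds.polyCube(name=m)  # stand-in geometry for the sketch
        layers[layer_name] = cmds.createDisplayLayer(name=layer_name, empty=True)
        cmds.editDisplayLayerMembers(layers[layer_name], members)

    # Hide an entire progression with one toggle instead of hiding parts by hand.
    cmds.setAttr(layers["tier2_LYR"] + ".visibility", 0)

One switch per progression keeps the checks fast and removes the chance of forgetting to unhide a stray part.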


This is roughly how the mesh would look in Maya once I am done, with "display smoothness" on. I am very keen on keeping geometry nice, simple, and clean. All the parts are there and have the proper shape/feel to them. At this stage, I make sure that the topology is solid for high-surface-detail work in ZBrush. I do this while modeling too, of course, but it's always good to have a second look. Once everything is in place, I merge the objects together into parts that make sense and are easy to work with, removing all the instances and temp objects in the scene (alternatively, you can make parts that let you easily display the various progression states). Working with a complex character like this one, I don't want to be dealing with more subtools than necessary in ZBrush. Naming is very important here also, to keep the structure strong for the people who might need to work with your files later on, or even for yourself once you come back to it at a later time. This feels like an obvious thing, but quite often it gets overlooked, even at senior level.

This is how the model looked right after I imported all the subtools into ZBrush. As you can see, it has a lot of detail already. All the pieces are there and now I just have to tessellate things up and make them feel representative of whatever material and level of wear is required. I do assign various shaders to different parts so I can get a better sense of the materials. The more accurate a representation you can get, the better it is for your final product. When it comes to the metal parts, I wanted the metal to feel like it had been hammered to give it its shape/wear. I actually exaggerated this aspect somewhat for a fine armor like this one, knowing that quite a bit of detail work like this gets lost on the way into the game. On top of this, I added scratches and tears on the edges. This is a fairly new armor that you craft, so I kept it fairly clean.


Now, let's talk about treatment in ZBrush. Oftentimes, to achieve the best result, you don't need lots of complex operations. Starting with figuring out what kind of wear/tear you are going for, and then breaking it down into the tools needed to achieve the job, is a good way to go. In this case, I do want the hammered feel on the metal along with scratches/punctures and beat-up edges. On top of this, I add a bit of dirt/distress/high-frequency detail on some broader surfaces. To do that, I only need a handful of brushes with proper settings and a couple of handy alphas. What's important here is to get the feel of the object. Figure out where most blows might have landed and how the materials would have been altered over the course of time on the character at hand. Then carry that information over to other parts and make things consistent.

I start off with the bigger shapes. Using the ClayBuildup brush on ZSub, with a fairly large draw size and the Spray stroke, you can achieve the beat-up metal feel on a large surface in no time. After that, I beat it up some more with the ClayBuildup brush, a square alpha, and the FreeHand stroke. Later, I calm the surface down a bit with hPolish to remove some of the less wanted artifacts from the last operation; I prefer this over Smooth, especially for the metal surfaces. Once that is done, I go on with Dam_Standard for scratches and punctures. Finally, I do some small pushes with the regular Move brush to make the armor silhouettes more interesting and remove some of the straight lines of the borders. For the last step, using DragRect, I drop some alphas on to give it some extra fidelity and structure.

Once you find your flow and start feeling comfortable with your processes, I do recommend a bit of experimentation as well. Quite often, I have picked up something new from doing a procedure in a different manner. It's all about finding that nice balance between doing what you know will work and trying a few new things to keep it fresh. This is the final result in ZBrush after all the surface treatment is completed.


Once the high res is done, it's time to move on to the in-game model. Here, I will go into somewhat more detail about the process, as there seem to be quite a few people out there who make high-res work in ZBrush but find it hard to know what to do next in order to get that mesh into an in-game environment. In order to build my low-poly mesh, I need to get those ZBrush subtools into Maya as reference. For that, I usually don't decimate my meshes, but rather go down to an appropriate subdivision level and export that out. This depends a bit on how your original mesh is made: if you capture the shapes you need with a low amount of geometry, yet keep the polygon sizes somewhat consistent, then that should be no problem. Another tip here is to hide the parts you don't need and export only the visible areas for that subtool (if, say, an object is mirrored). Avoiding the pre-calculation and then decimation for a complex character (all those subtools) is not a huge timesaver, but a timesaver nevertheless. Still, if you are fairly new to this process, then Decimation Master is there to help you out. Once you have all your reference subtools imported into Maya, you can either start your lowpoly from scratch or from your base subdivision level from ZBrush. I usually make three folders when exporting things out of ZBrush: one for the high-res trace, one for reference - that is usually subdivision level 3-4 - and the other for base meshes (because they can change quite a bit from the original you imported into ZBrush). First off, you need to make your ref object Live.
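A tiny Maya Python sketch of that last step, purely as an illustration: a stand-in mesh substitutes for the imported ZBrush reference (the commented import path is hypothetical), and making it live means new low-poly points snap onto its surface.

    import maya.cmds as cmds

    # cmds.file(r"D:/export/reference/torso_ref.obj", i=True)  # hypothetical path to the exported subtool
    ref_mesh = cmds.polySphere(name="torso_ref_geo")[0]         # stand-in for the imported reference mesh

    cmds.makeLive(ref_mesh)    # new vertices and moved points now stick to this surface
    # ...block in or adjust the low poly here (Quad Draw, tweak mode, etc.)...
    cmds.makeLive(none=True)   # turn the live surface off when finished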


Using your base subdivision level as a start for the low-poly model has various advantages: keeping UVs if you have already made them in advance (for polypaint purposes, say) and an easy bake-over. You can also already see how the geometry follows the ref shape. For some, that makes things easier, so it becomes not so much creating as correcting. I would recommend using tweak mode for some of this process.


If you are creating your lowpoly from scratch, then Quad Draw Tool is your friend.

It's a fun, easy, and fast way to build your geometry around your reference. At times, I assign different shaders to the reference object versus the lowpoly to get better visualization, or I have two viewports active, one showing just the lowpoly I am creating and the other with both in view. It usually takes quite a bit of time to make a good low res that follows the high res well and keeps good edge flow. Keep in mind that this will be your final model from this point on, so take your time and make sure you get those shapes in. This process doesn't have a whole lot of shortcuts. You just have to power through it all, and the more you build, the more you learn about how to make things more efficient and clean. Hopefully, this is where you notice that minor things, like two belts being closer together or an armor piece attached a bit differently, would have saved you a lot of extra work. Those things make practically no difference to the final model, but make quite a bit of difference budget/time wise. It's still important to make those choices of asymmetry and randomness, but it's good to know what kind of cost is attached to them and where you would benefit most from having them.

Once you have all of your high-res geometry covered with low res to support shape and functionality, it's time to do UVs. It's a good idea to start off by getting everything UV mapped at the same ratio. However, after completion, I often go through the model, check where I might need extra detail, and upscale that UV part somewhat; then I reduce other parts in less need of UV space. The scale changes I make are still fairly minor, because you still want things to feel like they belong, but it does help. I then scale everything to roughly the area where my UVs will go. After that, I move all the large parts in first, followed by the medium, and lastly the small UV islands.

Tracing (baking) can be quite a tricky process. We usually use xNormal for most of this, but on occasion I did use Maya. Almost no trace is perfect right away. You might need to play with how deep the tracing values are, flip normals, or alter your low-res mesh to produce a better result.

When it came to the texture process, we found ourselves sort of midway into PBR on this project, it being released on both current and last gen. This made us do some compromising. Moving on to full PBR, things will become a lot more straightforward for a lot of studios. I am not going to go over this too much, as in this case the textures were made quite specifically for our shaders and Frostbite. I did, however, want to point out that a lot of textures were made for each character, and quite a few of them had various masks for specific materials (metal/cloth/leather), tintable areas, etc. Those were later packed into the RGBA layers of the TGAs and eventually picked out by shaders in the engine. This was mainly done to cut costs. Also, having very heavy customization on the texture side, we had to keep a lot of areas gray in the diffuse to achieve the maximum tinting options. This went hand in hand with some of those masks I mentioned earlier.
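Purely to illustrate the idea of packing per-material masks into the channels of one texture (this is not the studio's actual tooling), here is a minimal Pillow sketch; the mask contents and file names are placeholders.

    from PIL import Image

    size = (4096, 4096)
    # Stand-in masks; in practice these would come from Image.open("metal_mask.tga").convert("L"), etc.
    metal   = Image.new("L", size, 255)
    cloth   = Image.new("L", size, 0)
    leather = Image.new("L", size, 0)
    tint    = Image.new("L", size, 128)

    # One grayscale mask per channel: the shader reads R/G/B/A to isolate each material.
    bitpack = Image.merge("RGBA", (metal, cloth, leather, tint))
    bitpack.save("character_bitpack.tga")

Packing four masks into one TGA keeps the texture count and memory cost down, which is the point of the bitpack described above.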


As always, I found Quixel's dDo (http://quixel.se) to be very helpful in my texture process. Let's get back to this model and some numbers. The most basic base for this model was around 10.5k tris, and combined with all the progressions, minus the back-faces, it was around 26.4k, consisting of three parts (body, arms, legs). Textures used were a 4096x4096 for the base and another 4096x4096 for the bitpack, which got compressed later on for in-game use. UVs are stacked neatly to make the most of it. These are some of the textures and various masks just for the bitpack.

Then, there is the second bitpack, both bases and nude textures along with texture for the helm and misc items. Here is a shot with a couple of examples. Once all of that is done, it’s time to start a new character!


Conclusion

I hope that this tutorial had some useful points for anyone who is interested in character art creation. I started it with the idea of keeping it small and just giving some pointers, but ended up touching on quite a broad array of things. My main advice, however, is to always be aware of how you can improve your workflow. Ask yourself if there is something you could have done better/faster and try to do it on your next asset. I feel like there is still tons more to cover, but this will have to do for now. I must also mention that it was a blast making this asset and contributing to the Dragon Age universe as a whole. I managed to refine a lot of my own processes in art creation, as well as having the privilege of learning a bunch of new things in a variety of fields. The whole team did an awesome job and I am super thankful for sharing that experience.

About Me

I'm currently a Senior Character Artist at BioWare. I have been working here for roughly three years now, most of which was on Dragon Age Inquisition. Now, I am working on another exciting IP here. I have always had a great passion for movies and games. I never actually thought I would end up working in this field. I was shooting for some sort of safe and technical job, but things worked out quite differently. From a very young age, I've been into all sorts of art – drawing, sculpting, woodwork, etc. But after I found 3D, I was totally hooked and have been doing that nonstop pretty much from that point on. I love both modeling and animation. I started out as an animator in the industry, but later on my career path changed over to modeling. I have been in the game industry for about 13 years now and I'm still loving it. I hope you guys found this an interesting read.

Patrik Karlsson

www.patrikmadkarlsson.com


Joshua Lynch

www.artstation.com/artist/joshlynch



Lap Pun Cheung

www.artstation.com/artist/c780162



Concept Illustration

Introduction to a Concept Illustration Workflow by: Benjamin Last

Introduction

The following workflow will highlight, albeit briefly, my design process when creating realistic hero images for spacecraft concepts I created at Karakter Design Studio for Edge Case Games' 'Fractured Space', using both Adobe Photoshop and Modo. This is essentially the same workflow I use every day, altered to the day's working requirements, whether it's for designing vehicles, environments, characters, or props. This walkthrough will detail the latter stages of my typical workflow, which usually starts with reference gathering and thumbnail sketching and gradually works its way through to the final hero image.

Research, References and Sketches

Before I get into the sketching phase, I need to understand and immerse myself in the brief. Whether I have set it for a personal project or a client has given me a proposal, script, or gameplay idea, my first port of call is to familiarize myself with the related material, delve into the depths, and surround myself with reference. Collecting images online, along with field trips to photograph relevant subject matter, goes a long way in helping me familiarize myself with the subject and, therefore, generate more believable designs. It also provides much-needed images that I later use as textures in Photoshop to provide a realistic touch and save time.


Every time I approach a brief to design something – be it vehicles, environments, characters, or props; I always begin with a sketch. Whether it’s in a sketchbook, on loose paper, on the back of a napkin, or digitally; I always like to quickly translate my thoughts into images. These initial sketches are here to establish a variety of forms, silhouettes and design themes. I was looking for a strong contrast between positive and negative forms, wanting to subset details into the negative spaces between large panels to establish a sense of scale for this larger ship. For this design, I moved quickly (due to project time constraints) into 3D after establishing a unique silhouette in the sketching phase.

From 2D To 3D and Back Again...

I begin building a block-out in Modo, roughly establishing design dimensions and any other criteria that have been communicated through the brief for this particular ship. This provides a rough structure to either start painting or to continue modeling. In this case, the project dictated a more refined 3D model, so I worked the model (along with Mike Hill) into a more finished design. Establishing a harmony between the positive and negative shapes was important, allowing for details to be subset behind the larger exterior bodies. Utilizing Modo’s Instance generators, large amounts of detail can be generated relatively quickly, giving a sense of scale to an object almost a kilometer in length. Once the design is realised, I render it out with a simple grey shader, along with a simple lighting setup. This provides me with a clean base which I can paint and manipulate in Photoshop without having to strictly adhere to a fully polished 3D render.


I return to Modo briefly before continuing in Photoshop. Having saved off my render camera, I decided to do a basic color breakup and re-render another pass with a stronger spotlight to emphasize the front of the spacecraft. This could be done in Photoshop, but with a larger rendering, I decided to complete this step in Modo to save time. An Alpha pass was also created to allow easy removal of the spacecraft from the background. Once again, this can be done with a selection tool in Photoshop. With this layered in Photoshop, I continue to use the path tool to create detailed selections with a mixture of the ‘overlay’ and ‘multiply’ layers to paint further material breakups.

Establishing a Light Source And Focus

I continue to manipulate the current layers through level adjustments, brightness, and contrast, as well as painting in shadows and shadow cores to establish a stronger lighting setup. These changes are performed on another set of adjustment layers so that I can alter their transparency to vary the lighting effects at a later stage. I wanted the eye to be drawn to the lower front of the ship, gradually working its way towards the back of the ship and the darkened-off areas. I place a theoretical bounce light in the bottom left of the image, possibly suggesting reflected light from a nearby moon. The rim light on the rear of the craft helps to push the silhouette away from the background.


Painting, Photo-Textures And Details

I continue in this image to paint a rim light in areas to pop the silhouette further from the background. I paint a simple gradient, which I will later fill with star field. I want this image to have a lot of contrast and feel cinematic rather than a design rendering, making all features highly visible. I start using reference images of oil rigs at night and overlaying these over the inner parts of the ship to suggest internal lighting. With the ship’s large drivers at the front being a unique design element, I want to put a sense of emphasis on this to develop a visual hierarchy within the image. I also paint in smaller locator lights in red to not only further emphasize a sense of scale but also to provide more subtle information on the shapes that are in darkness.


I continue working slowly across the image, placing various photo textures to give quick details to the image. As there is a sufficient amount of detail in the subset areas of the model, I can focus on placing and painting details in the foreground. Simple photographs of crates and industrial wall fittings work well to provide the look of large panels and surface details. Going back and looking into my reference of aircraft and container ships, I try my best to match similar means of construction before switching between 'overlay' and 'darken' modes to see which gives me the best results, and finally painting in any remaining details on top of the photo-texture. Using the 'median' adjustment helps to remove some of the JPEG artifacts and pixelation that can occur when using photographs for textures, helping to preserve a more painterly effect. For a ship this large, using repeated elements helps to maintain its sense of scale.

Once I have established and painted the majority of the details for the ship, I move onto atmospheric lighting, decals and the background to help finish the image. To draw further focus to the front of the ship, I add an adjustment layer and paint in a simple blue haze emanating from the ‘hyper-drive unit’. Small spotlights are painted in to help in establishing the size of this ship. As a human would be too small to view on the ship in this image, I needed to use other elements to draw real world comparisons, such as the spotlights, which we would generally see illuminating billboards or the sides of buildings. The lettering on the side of the ship is reminiscent of the name and country markings on the sides of large shipping oil tankers, once again helping to establish a sense of scale with real world references. I found that adding a simple star field background generated unwanted noise to the image. I painted a simple cloudy nebula to add some color variation to the image and also to wash out the point lights of the stars, generating a simpler background that didn’t compete with the ship in the foreground.


Finishing Touches

I found the need to add smaller spacecraft to the image to break up the stillness of a single ship in frame. These ships once again assist in further cementing the scale of the larger spacecraft. Smaller engine flares are placed to add a sense of direction to the smaller ships, as if they were a fighter escort for this larger lumbering vessel. I find these elements necessary to add an underlying sense of story to an image. Finally, with almost any finished image, I add a final adjustment layer to harmonize the colors in the image. I apply the ‘unsharp’ filter to regain some harder edges lost in the painting. I find the filter adds a nice grain to the image, although I recommend using it sparingly. The final step involves adding some chromatic aberration to give the image a film quality, some minor tweaks with the levels and voila!

Conclusion

The methods shown here can be adapted to your own personal workflow. Whether you are working from an extremely detailed 3D model or a simple block out, the layers of overpaint and the use of photo textures will help you achieve a realistic hero image to place your design in the world it inhabits.

About Me

Benjamin Last is a Concept Artist/Vehicle Designer who specializes in designing and visualizing for film and games and is currently freelancing alongside working with Emmy award-winning Karakter Design Studio, whose clients include HBO’s ‘Game of Thrones’ and Guerrilla Games. After obtaining his degree in Industrial Design at Monash University, Benjamin enjoyed eight years of professional experience as a car designer for both General Motors and Volkswagen, gaining industry recognition for his role as Lead Designer for the GM Colorado concept, as well as contributing to the design of the C7 Corvette and vehicles featured in the ‘Transformers’ motion pictures. He also worked designing concept vehicles for leading brands such as Volkswagen, Cadillac, Chevrolet, GMC, Opel, Saab and Holden.

Benjamin Last

www.benjaminlast.com


Sung Choi

www.artstation.com/artist/sungchoi



Dorje Bellbrook

www.artstation.com/artist/dorje



Content is King

A game designer's take on content by: Tom Mayo

“Blizzard has been trying to generate enough content to keep their huge player base busy for over ten years - some argue the strain is showing.”

Introduction

Nothing beats content, right? You can have clumsy controls, muddled menus, a nonsensical narrative; but as long as the content is amazing – players can forgive almost anything. We are fast approaching the twentieth anniversary of Bill Gates’ article ‘Content is King’. It proved seminal and contentious (if not a wholly original sentiment, as Sumner Redstone fans might haughtily point out). It has been bunked, debunked, and rebunked. Gates’ focus was not primarily on gaming or virtual worlds, but the principle translates extremely well. Once your world’s infrastructure is in place, you need content. Lots of content. No, even more content than that. Lots more. I know you just patched it in, but your loudest players just finished it and consider it terribly passe. It can take weeks to create something that takes minutes to complete. This awkward, inescapably lopsided equation is undoubtedly one of the reasons that there has been so much recent focus on replayable content and player generated content/experiences in sandbox worlds. It’s a familiar challenge, but I wanted to talk about content from a slightly different angle.

“Minecraft is an Anecdote Generation Engine par excellence - players create their own experiences and content”


The Nitty Gritty

I have a modest three years of design experience in the MMO space. Very early on, I encountered something odd. I started forgetting what it was like to be a player. Like all too many things in life, this was entirely unthinkable right up until the moment it happened. After more than ten years of exploring virtual worlds, with an almost unseemly fervor, I was so confident that I appreciated the nuances of player psychology that I didn't even bother to codify them for the sake of professional reference. With my designer hat on, giving the players choice felt important, if not obligatory. There are so many ways in which you can slice an audience – by age, play style, region, favorite faction, class, Taylor Swift song, whatever. Serious efforts are made to ensure that as many slices as possible find each piece of new content interesting and valuable. Here's a deadly sentence. I heard it uttered, and uttered it myself. "Well, it's optional. They don't HAVE to do it." This is disingenuous at best. As designers, we are careful to assign various values to content, along various axes. This gem drops 4% of the time. This sword costs a hundred tokens. This buff lasts for thirty seconds.

“Bungie painstakingly handcrafted a ton of content, then cleverly asked players to visit and revisit it over and over, in slightly different ways.”

‘Optional’ is in the eye of the beholder. Tons of options feel great as a designer, but sometimes overwhelming as a player. Pick one of these three paths? Okay. If I do one, are the others locked out? No? Then I'm absolutely going to do all three. Not because I want to. Not because it's fun, per se. Because it is rewarded in some way and I like to feel as though I have wrung every last gold piece (or whatever) from this shiny new content. Let's not even get started on alts. Content has considerable and easily underestimated value just by existing. A lot of the time the players experiencing it are not just invested, they are inhabitants. They greatly value the status they have gained within this world. A chunk of this status depends upon consuming as much content as possible, and falling behind the curve can have a hugely detrimental effect on efficiency.


“The upcoming Fallout 4 promises to have a staggering variety of optional content.”

Efficiency. It sounds so stark and mercenary, but the majority of players will naturally flow through the world in the most efficient way possible. If it helps to add a bit of poetry, think of them as a lovely stream trickling down a mountainside. (Some of them are more like a natural disaster and will fling themselves at every tiny part of your game, smashing away with hammers, but this is also invaluable in its own, painful way.) This is a natural consequence of an XP- and level-based ecosystem, which is the case more often than not. Level 2 is superior to level 1 in every way. You have a little more health, perhaps a new spell. You hit a little harder. Level 3 is better still. Level 100? You'll be carving your name into the sides of mountains using your laser vision while riding on the back of a screaming flame giant. Mountains on the moon. That's right. You can fly in space now.

MMOs will constantly urge you to follow this progression, in both explicit and implicit ways. Yes, it is absolutely an option to stop and smell the roses along the way. Of course it is. Those who immerse themselves in a world and invest in the narrative often wring considerably more pleasure from proceedings and, not to be grubby about it or anything, stay in the game longer and are more likely to spend money. However. They are still on the same ride as everyone else, and every time they do anything at all, it will nudge them a little closer to that coveted fire giant mount. Stay on the ride. Keep shuffling forward. Keep consuming content.

Sometimes we intend content to be fun, frivolous, and fast forgotten. Silly, cheap, randomly dropped Halloween potions that make you fart ghosts for sixty seconds. People will use them all up, chuckle, done and done. Right? But they won't. Of course they won't. If this is the first time this item has ever existed in the game, that alone gives it immediate value, regardless of any other qualities. Novelty. If it's consumable, it's inherently rare. Anyone who uses it has lost it forever, and there's no guarantee they'll ever, ever be able to get another one again. These items end up collecting virtual dust in banks for years. Literally years. Not because they are so incredibly exciting that players can't bear the thought of losing them, but just because… They're content. They are a little tiny sliver of status. A memory. I was there that Halloween. Everyone was farting, for some reason. It was weird.


“Terraria can add tons of content without incurring huge production costs.”

New content – particularly anything that requires multiple people – is white hot at launch, but cools very rapidly. There's a very human desire to be part of the conversation, be an early adopter. We're all phenomenally good at normalizing new information and experiences. As I mentioned earlier, it's unthinkable until it's not. It's another way in which the traditional subscription MMO model (and, yes, this basically means World of Warcraft) feels like it's breaking all the rules. Handmade content is unsustainable. It's like planning an elaborate feast for months and months, sourcing mouthwatering ingredients from all over the world, sweating over each dish, throwing out anything that isn't perfect, and serving it up only to have the diners picking their teeth and asking what's for pudding before you've made it back to the kitchen. It's a puzzle without a neat solution so far. World of Warcraft has momentum and tremendous resources to play with, but leaves players twiddling their thumbs for months at a time. Destiny asks players to experience and re-experience the same static, handmade world over and over in slightly different ways. Indie titles like Terraria can churn out a phenomenal amount of new content, partly because their (beautiful) game is 2D and relatively lo-fi. The dream of UGC (user-generated content) draws nearer and nearer, with Minecraft mods a notable success in that area, but remains elusive for most games.

Conclusion

Content is king. Optional content isn’t. I love MMOs dearly - they are absolutely the most satisfying genre I have ever experienced - but I don’t think they have reached a thousandth of their potential yet. I don’t know what the future holds, whether we’ll ever be able to get players involved in content creation in a genuine, meaningful, and lasting way, but I’m excited to find out.

About Me

Tom Mayo worked as a magazine journalist for six years, then went on to serve at the SyFy Channel, EA, Realtime Worlds, Jagex, Activision, then Jagex again. He has a dead man's eye, fought Buffy's stunt double, and once tried to make Sigourney Weaver laugh. It didn't work.

Tom Mayo

www.drakelazarus.wordpress.com


Leonid Kolyagin

www.artstation.com/artist/leonidkolyagin



Darek Zabrocki

www.artstation.com/artist/zabrocki



Photogrammetry

Helping you speed up your workflow by: Guilherme Rambelli

Introduction

Photogrammetry is a simple concept: it uses photography to auto-generate 3D geometry and a base texture map, from which you can create PBR materials for either real-time engines or offline renderers such as V-Ray. I'm happy to share some of my experiences with photogrammetry, hopefully help people get started with it, and help enlarge this community and improve the existing workflows using these techniques. Time has always been a valuable resource in any production. Photogrammetry can help you speed up your workflow while maintaining the quality of your assets. An individual mid-sized asset takes an average of 4 hours of work, and another 3 to 4 hours of computer processing. In this breakdown, we will go over the process of photoscanning a section of a naturally lit exterior environment, and how to isolate and process an individual asset using Agisoft, Maya, ZBrush, and Photoshop.

The Process

The first step of the process is to go out and properly take pictures. The basic concept behind a good photoscanned asset is understanding that Agisoft PhotoScan will generate 3D geometry based on the angles from which you have taken your pictures, so the capture process is extremely important - maybe the most important step of all.
● Try to shoot with overcast lighting when outdoors, or diffused lighting indoors/in a studio, to avoid directional lighting.
● Use a DSLR or mirrorless camera with a 50mm lens, shoot in manual, and acquire as much data as possible - try not to under- or over-expose any of the pictures, and capture to a high-bit-depth (32-bit) file format so you can tweak it later and recover some of the data obtained.
● Use a polarizing filter to reduce reflections from glossy/reflective surfaces.
● Try to keep your ISO as low as possible to avoid noise.
● Use a small aperture/high f-number.
● Use slow shutter speeds to balance the aperture and make sure the picture is not over- or under-exposed; for shutter speeds slower than 1/40, a tripod is recommended so the photos don't get blurred.

● Lock exposure by shooting in manual mode or by using AEL (auto exposure lock) on the camera.
● If shooting in JPEG, make sure white balance is locked in as well, otherwise auto WB may give you differently colored photos in the set. With RAW, this doesn't matter and can easily be adjusted when processing.
● Use a gray card to make it easy to calibrate white balance, or a Macbeth/X-Rite color checker to calibrate both WB and exposure - take one shot with the gray card/color chart next to the object at the start of the set, then remove it. There is more to say about calibration, but that is beyond the scope of this breakdown.

Images from the Agisoft user manual help illustrate camera placement through your scene, and what to do and not to do.


*3 images covering 15-25° each. Make sure to overlap your photos by at least 30% and to have (if possible) at least one 360° pass of pictures with the whole asset in frame. Once you have captured all the photos, use Photoshop (or any other image editing software) to batch all the RAW files to JPEGs with a basic adjustment of highlights and shadows, to get rid of the light and AO in your pictures. The adjustment may vary from location to location that has been photo-captured.
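The author does this batch flattening with a Photoshop action; purely as an alternative illustration, a rough Pillow sketch below applies the same kind of pass to a folder of images that have already been converted out of RAW. The folder names and contrast factor are assumptions to tune per location.

    import os
    from PIL import Image, ImageEnhance

    src_dir, dst_dir = "converted_from_raw", "flattened_jpegs"   # assumed folder names
    os.makedirs(dst_dir, exist_ok=True)

    for name in os.listdir(src_dir):
        if not name.lower().endswith((".tif", ".tiff", ".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        # Pull highlights and shadows toward the middle so lighting is as flat as possible.
        flat = ImageEnhance.Contrast(img).enhance(0.85)
        flat.save(os.path.join(dst_dir, os.path.splitext(name)[0] + ".jpg"), quality=95)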

Keep in mind that the flatter the image looks, the better the results will be from Agisoft. It will also help you get a better texture map a couple of steps ahead. Now that we have all the JPEGs processed, drag and drop all the images into Agisoft to create a new "Chunk" in the Workspace box on your left. Right click on your chunk and go to Process>Align Photos, set its Accuracy to High for better results, and under "Advanced" you can use a Point Limit from 40,000 to 60,000. It will vary depending on how large the scanned environment is.

Right click on the "Chunk" and go to Process>Build Dense Cloud… and set the Quality to High or Ultra High. The difference between these two options doesn't look significant now, but it does make some difference later in the process. I recommend trying Low or Lowest quality first to make sure the dense cloud looks good and shows fidelity with the asset originally scanned. Once you know it looks good, you can process in High or Ultra High.


Once you have a good dense cloud solve, you can build your mesh under Workflow>Build Mesh, setting the dense cloud as your Source Data and choosing the density of the mesh you want to create (Medium is sometimes enough).

Now that you have your mesh built, you're able to export it as an FBX file under File>Export Model. You now have an FBX with your mesh, built from your captured pictures, ready to be imported into Maya or any other 3D package. Export an OBJ from your original FBX to ZBrush for cleanup purposes (remember to keep the mesh in its original translation, rotation, and scale). You can import it or use GoZ from your 3D package to bring the mesh into ZBrush. Once you have it there, duplicate your subtool and convert the duplicated subtool to Dynamesh with a low subdiv to unwrap its UVs. Now that you have a mesh with no holes, you can increase its subdivisions until you have enough geometry, project this subtool on top of the original mesh, and still carry all its details. You should now have a "clean" geo with proper UVs that still contains all the details from the original mesh. The holes that were filled by your Dynamesh are still flat; use your ZBrush techniques to bring some details to those areas, but don't displace the geometry too much in or out, otherwise Agisoft will have some errors during the texture process.

Export your new mesh from ZBrush into Maya to make sure it's in the same place as the original mesh. (If you didn't move it around in ZBrush, or in Maya when you first brought it in, it should still be in the same place.) Relax its UVs and make sure you cover most of your 0-1 UV space with the least amount of distortion possible. Now export the new mesh again and let's go back into Agisoft for the final step.

In Agisoft, with your earlier project loaded up, go to Tools>Import>Import Mesh and leave the Shift values at zero. To create the texture, go to Workflow>Build Texture and set the Mapping mode to "Keep uv", the Blending mode to "Mosaic", and the resolution of your map to whatever you want. If your pictures have some color variation due to the light shifting when they were captured, enable "Color correction" under "Advanced".
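All of the steps above are menu clicks, but PhotoScan/Metashape also exposes them through a Python API, which can be handy once the settings are dialed in. The sketch below is only approximate: module, enum, and argument names change between PhotoScan 1.x and newer Metashape releases, so treat the exact calls as assumptions and check the API reference for your version; the paths are placeholders.

    import glob
    import PhotoScan   # the module is called "Metashape" in newer releases

    doc = PhotoScan.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(glob.glob(r"D:/scans/rock_section/*.jpg"))       # the flattened JPEGs

    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)                # "Align Photos", Accuracy: High
    chunk.alignCameras()
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality)              # test on Low first, then re-run higher
    chunk.buildModel(source=PhotoScan.DenseCloudData)                 # "Build Mesh" from the dense cloud
    chunk.exportModel(r"D:/scans/rock_section/raw_scan.fbx")          # off to Maya/ZBrush for cleanup

    # ...after the ZBrush/Maya cleanup and UV pass described above...
    chunk.importModel(r"D:/scans/rock_section/clean_uv_mesh.obj")     # same translation/rotation/scale
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)  # mapping mode per your version's API
    doc.save(r"D:/scans/rock_section/rock_section.psx")

Scripting the heavy steps lets the long dense-cloud and texture computations run overnight without anyone babysitting the GUI.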

To export your texture, go to Tools>Export>Export Texture. Now you're ready to use that texture to generate your Albedo, Gloss/Rough, and Specular maps in Photoshop, Bitmap2Material, or any other software you want.

To generate the Albedo, you can use Photoshop: convert the texture to a Smart Object and use a "Shadows/Highlights" adjustment to get rid of the shadow and light residue in the texture map. For the Gloss/Rough map, I desaturate my albedo and use maps generated in dDo as reference to make sure I'm in the correct grayscale range for a physically based material. The same concept applies to the specular map. You can use xNormal with the mesh you created in ZBrush as a high poly, and the same mesh decimated (converted to a low poly using ZBrush, Maya, or whatever software you want), to generate your Height, Normal, and Ambient Occlusion maps. Now that you have all the maps needed for a seamless PBR material, and a nice low-poly geo generated using photogrammetry as a source mesh, give it a try in Marmoset to see the results. Here are the results of the tree and terrain texture using photogrammetry, combined with SpeedTree for the leaves and Megascans for the grass on the ground.
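As a small, purely illustrative Pillow sketch of that gloss/rough starting point (the author does this in Photoshop; file names here are placeholders): desaturate the albedo, do a rough level pass, and then hand-tune it against PBR reference values.

    from PIL import Image, ImageOps

    albedo = Image.open("rock_albedo.jpg").convert("RGB")   # placeholder file name
    gloss = ImageOps.grayscale(albedo)                       # desaturated albedo as the base
    gloss = ImageOps.autocontrast(gloss, cutoff=2)           # rough level pass; compare with reference values
    gloss.save("rock_gloss_start.png")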

Conclusion

Photogrammetry, as we could see in this breakdown, is a simple process: photo-capture an environment or asset, and use 3D software to stitch all the pictures together, giving us an already-textured 3D geometry that we can use in either 3D rendering engines, such as V-Ray, or real-time engines, with almost no difference between the workflows. I hope you like it and start to experiment with photogrammetry, so that in the near future we have more to share with one another and can push photogrammetry further and further into our industry.

About Me

I'm Guilherme Rambelli and I've been working as a 3D artist at There Studios (Santa Monica, CA) for the last 3 years, and as a Senior 3D Artist for the last year. My main responsibilities as an artist consist of creating photorealistic 3D environments for movies, commercials, and promos for clients such as Marvel, Warner Bros., Hasbro, Bravo, Dell, Toyota, and others.

Guilherme Rambelli

www.artstation.com/artist/grambelli


Efflam Mercier

www.artstation.com/artist/efflam



Glauco Longhi

www.artstation.com/artist/glaucolonghi



Making of “Atom-Eater” 3D Concept Design by: Vitaly Bulgarov

Introduction

It was a great pleasure to contribute an article to Ryan Hawkins' Vertex 3 this year, and one of the goals I had in mind was to maximize its educational impact by trying to cover topics that are usually left behind the screen, hopefully going beyond describing the pipeline and toolset. I was particularly interested in sharing my thinking about the concept design process that starts in your head way before you turn on your computer. That's why I chose to write an article about creating a personal design piece for which I also recently released a time-lapse video. Since the video (https://youtu.be/Yi6Rg4RaIR4) visually covers the actual steps taken to create this piece, I can focus the attention of this article on the things that were left behind the screen, and describe more specifically the steps that aren't visually self-explanatory. So let's get started!


About “Black Phoenix Project”

“Black Phoenix Project” is an ongoing concept design exploration, centered around a fictional robotics corporation that develops its products in a not so distant future. It's a playground both for experimental entertainment design ideas and for more grounded-in-reality concepts. With the “Atom-Eater”, I wanted to expand its line of robotic designs with more agile prototypes that use synthetic muscle actuators and overall feel more organic than the blocky early “Black Phoenix” robots.

The Idea

It all starts with an idea. I know it's a cliché thing to say, but realizing that you need a clear idea first will help you avoid the trap of creating something flashy that has no depth or meaning beyond its flashy look. This is one of the reasons why it can be a bit easier to work for a client with a clear goal/direction in mind than to do personal work, when you can do anything you want but don't really know what exactly it is you want to do. Every idea worth pursuing should have a "hook". A hook is, at the same time, an attribute of an idea and a story-telling instrument; it should be easy to explain, and it is something that will make a concept either fresh or fun or visionary, or all of these together.


A good habit to develop is asking yourself "what's the hook?" and "why is this worth exploring?". It all goes back to the good old "form follows function" or "good design needs to solve a problem", but in the entertainment design field it doesn't have to be so black and white. If I see something in nature that I find fascinating, either visually or functionally, that alone can be enough to inspire my daily tasks as a concept designer and help unfold design possibilities as I move through the process.

The same thing happened with the "Atom-Eater" design. I was fascinated by the anteater and how its seemingly gimmicky anatomy actually makes perfect sense when you find out about the environment it deals with and the goals it needs to achieve to survive. Inspired by that, I decided to create a robotic design heavily inspired by the anteater's anatomy. Deciding on that was one of the first important steps in the concept design process, and it already helped me move forward. At the same time, I needed to decide what this thing would actually do and why it would need this kind of body structure to perform its tasks. That's when I thought about another idea I had, which was to explore a futuristic, quadruped robotic platform that could be used for applications such as cleaning up radioactive or toxic waste. Also, after a few years of following, as well as doing work in, the robotics field, I realized that a modular robot with an "open platform" (meaning it can be equipped with tools depending on the tasks it needs to perform) would be a much more desirable product in the future, which is an appealing idea even for a sci-fi theme.

That's how the basic foundation for the "Atom-Eater" concept was formed. It would be an "open platform", four-legged robot with anatomy similar to an anteater's, and it could be a radioactive-waste cleanup robot that uses its elongated head and sensors to reach hard-to-reach locations - hence "Atom-Eater". I didn't know exactly how it would look in the end, but this established the boundaries within which the final design could unfold, and that was enough for me to know that I could work on it in a way that would minimize the potential waste along the way. By "waste", I mean any 3D part that is modeled and then deleted, something that is usually an expensive thing for a 3D concept artist to do.

The reason I wanted to spend some time talking about all this front-end thinking is that any concept worth exploring is usually a product of not just the "pulling the vertices" time, but the time spent beforehand on the idea itself. That's where taking notes is a critical tool. You can use any media/format for it that you want. I constantly write down notes for ideas on current projects, future projects, or non-existing projects in a Word document, or on my phone if I'm not in front of my computer. I would sincerely recommend this habit to everyone, as it will save you time when you actually get to the modeling stage.

Staying Creative and Focused

I find it very difficult to maintain a proper balance between staying creative and staying focused at the same time; the two intuitively feel contradictory. That's why there are a few tricks I use when trying to stay on target and get things done quickly, yet stay open to design opportunities that arise as I move through the modeling process. Most of these tools come from basic time management techniques. Once the overall foundation for the Atom-Eater idea was established, I spent some time writing a list of items I wanted the design to include. This is when the real concepting is happening, but it's still just in my head. Writing it down (whether on paper or in a document) makes it tangible and gives it energy. I would write things like:

- Cybernetic muscles
- Hard mechanical feet
- Head-side flexible robotic arms with mechanical claws/hands
- Partially exposed hydraulics in legs
- Multiple mechanical "eye" openings looking in different directions
- Soft pad lower leg protection
- Soft "skin" material that protects internal mechanics

Since I already knew its body would be based on an anteater, there were no questions about the overall proportions or the flow of the head, body, and tail. Giving each part a challenging but realistic deadline for its completion was the key to moving ahead through each part and stage without getting stuck trying to get that one part perfect. Usually my philosophy is captured by this law of forced efficiency: "There is never enough time to do everything, but there is always time to do the most important thing". This leads to a critical step, which is defining what the most important elements of the design will be and how much time you want to spend (or have to spend) on the whole concept. After that, it's a matter of committing to getting it done within that time frame. I found that it is easier to stay focused, and at the same time be creative, when I divide the chunk of work I have to do into smaller portions, all the way down to small actionable steps, and then for a given time frame focus only on that area, avoiding the temptation to jump around the design trying to refine it evenly everywhere. This also means that before I start working, I budget technical steps such as how much time I want to spend on the retopology of a part (ZRemeshing), etc.

That way, by the time I finish writing the design item list, it turns into a detailed plan with each item having a deadline. Even if it ends up taking more time than I planned, it is much better to work in defined passes, using the plan as a road map and going from part to part making consistent progress, than to take the risk of getting stuck on one part trying to get it "right" and ending up spending all the time just on that. I believe that any work tends to fill up the time available for its completion. To take advantage of that, I try to spend the first 25-30% of all the time I have for the entire concept on the three most important areas/elements of the design. That way I still have time to make consistent progress on the rest of the design, but knowing that the most important parts are already done makes you more relaxed and less pressured by the passing time and upcoming deadline, therefore letting you be more creative. It's an interesting balance that can at first feel awkward, tricky, and mechanical, but once it becomes a habit it can help you take on big projects with a challenging deadline without feeling stressed out, which is important for getting the creative juices going.

Pipeline and Overall Process

After writing down the initial idea and describing the visual targets of the design, it's time to actually create the thing in 3D. Here are the further stages I went through:

Building a 3D blockout from a primitive in ZBrush using the Dynamesh feature

Subdividing the blockout 1-2 times for an initial refinement pass of the big shapes in ZBrush, with no attention to details yet. Basically, it's just quickly indicating where the future details are going to be placed and what direction the flow will have.

Adding initial details using pre-modeled kitbash parts assigned to IMM brushes


What’s really great about this approach is that you can use ZBrush Transpose tool to bend the parts to make sure they flow along the organic surface.

Retopo-ing the model using Zremesher feature

Exporting the models from ZBrush into a polygonal modeling software (in this case Softimage XSI), finishing off the surfaces, and finalizing the integration between the parts using mostly SUBD-based poly-modeling tools. This was the longest stage, consisting of these substeps:

Taking advantage of the ZRemeshed geometry and cutting the body surfaces into separate parts


Adding details, applying thickness


Integrating details added in ZBrush with the layers of body surfaces

Adding a few Non-SUBD parts (Feet and a few hydraulic parts)

Separating meshes per material and assigning material groups

Look-Dev in Keyshot


Final Rendering in Keyshot

Quick Post-work in Photoshop

The pipeline and the software used vary from project to project. For example, I have started using a CAD modeling package, Moi3D, more and more, not just for industrial design projects but also for entertainment designs when I need hard-edge mechanical parts with a very realistic machined-style look. But for this concept (mostly because of its organic style), it was enough for me to rely on older tools that I have been using for years: ZBrush and Softimage XSI. In terms of modeling techniques, these are the approaches I used for creating Atom-Eater:

- Digital Sculpting
- SUBD Poly modeling
- Non-SUBD Poly modeling

Sculpting was done in ZBrush using its awesome Dynamesh feature. The power of it is that you can get a nice organic shape that establishes the overall flow of the future design in minutes, just by using the Move brush and Standard brush, starting from a sphere (or in this case from a cylinder) and using Shift to smooth the form along the way. For the SUBD (quad-based topology with supporting edges) and non-SUBD poly modeling techniques, I used Softimage XSI, but you can use any other 3D modeling software you're comfortable with, like 3ds Max, Modo, Maya, etc. Most of the work was done using SUBD-based topology to allow subdividing the mesh before rendering to get smoother surfaces and micro-fillets from the supporting edges. For concept work, you don't need to do that unless you need the specific look that SUBD modeling can give you. With this design, I needed that organic flow with tight transitions and surface tension that is difficult for me to achieve using any other approach. Non-SUBD poly modeling was also done in XSI, as it is a very fast way to "fake" the machined look that is hard to get with a SUBD-based approach. Some of the mechanical parts, and also Atom-Eater's feet, were modeled that way. The biggest advantage of this approach is that you can disregard the topology and basically "Boolean" your way out. What I like about XSI's Boolean operations is that you can apply them to objects that have n-gons and open edges (holes) in them and still get a Boolean result while keeping the history live, so you can change the position of the operands to change the output of the final Boolean.

ZBrush Insert Multi Mesh brushes

ZBrush IMM brushes were very helpful in terms of accelerating the design/modeling process. Right after I finished a rough 3D blockout, I started applying Insert brushes with pre-modeled mechanical parts assigned to them. That helped me populate the model with mechanical details very quickly while placing the parts correctly along the normal of the surface, which is something ZBrush IMM brushes are very good at. You can find out more about the mechanical parts I created here: www.3dkitbashstore.com
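Purely as an illustration of that "place along the surface normal" behavior (this is not ZBrush's actual implementation, and the data layout is an assumption), here is a minimal NumPy sketch of the underlying math: build a rotation that maps a part's +Z axis onto the surface normal at the stamp point.

```python
# A conceptual sketch of orienting an inserted kitbash part to a surface
# normal, roughly what an IMM brush stamp does.
import numpy as np

def align_z_to_normal(normal):
    """Return a 3x3 rotation matrix that maps the +Z axis onto `normal`."""
    z = np.array([0.0, 0.0, 1.0])
    n = normal / np.linalg.norm(normal)
    v = np.cross(z, n)            # rotation axis (unnormalized)
    c = np.dot(z, n)              # cosine of the rotation angle
    if np.isclose(c, -1.0):       # normal points straight down: 180-degree flip
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues-style formula: R = I + [v]x + [v]x^2 / (1 + c)
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

# Applying R to every vertex of the part orients it to the surface.
R = align_z_to_normal(np.array([0.3, 0.9, 0.3]))
part_vertices = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # hypothetical
oriented = part_vertices @ R.T
```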


Small Tips & Tricks That Make the Difference

ZBrush: Another super-helpful ZBrush feature I used for this project was ZRemesher. The ZRemeshed blockout of the Atom-Eater's body became a great quad-based base for further SUBD-based modeling in Softimage XSI. I would ZRemesh a section of the Atom-Eater, for example the head, then bring it into XSI, relax the topology, and re-arrange the flow of the edge loops in some areas using XSI's "slide along surface" with Proportional Modeling on (similar to Soft Selection). That patch of geometry would become a foundation which I would refine using polygonal modeling tools, adding thickness to it, as well as seams and other details.

Non-SUBD modeling: 30-degree auto-smooth. When modeling with Booleans and n-gons in polygonal software, one thing you'll start noticing is artifacts that result from shading continuity across coarse topology. One way to fix that is to apply a lower angle threshold to the normal-angle auto-smooth in order to "break" that continuity on a curved or non-straight surface. In XSI, it's in the Geometry Approximation panel, under Discontinuity Angle of Polygon Mesh. In 3ds Max, the modifier is called Smooth, and in Maya the command is called Set Normal Angle. Usually, when I use a non-SUBD approach, I just set a 30-degree angle value from the start.
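As a scripted illustration of that 30-degree auto-smooth, here is a minimal sketch in Maya's Python commands (Maya is used here as a stand-in for XSI's Discontinuity Angle setting; treat it as an assumption rather than the author's setup).

```python
# A minimal sketch of a normal-angle auto-smooth in Maya.
import maya.cmds as cmds

def apply_autosmooth(mesh, angle=30.0):
    """Soften/harden edges by a normal-angle threshold, breaking shading
    continuity across hard corners on coarse, n-gon-heavy topology."""
    cmds.polySoftEdge(mesh, angle=angle, constructionHistory=True)

# Usage on whatever non-SUBD parts are currently selected.
for obj in cmds.ls(selection=True, type="transform"):
    apply_autosmooth(obj, angle=30.0)
```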

Split edges or separate polygons for non-SUBD geometry. If a lower auto-smooth angle doesn't fix the problem, you can always just disconnect the surface or the edges to break the surface continuity and get rid of the artifacts.

SUBD modeling: Relax tool. I use the Relax tool quite a bit when building smooth surfaces with a lot of polygons in them. The thing is, you don't want to spend too much time tweaking a lot of vertices by hand, because of how time-consuming it is to make nice transitions manually when dealing with heavy geo. The general rule is: if you have to pull more than four vertices into a specific configuration, there is usually already a redundant step somewhere. The beauty of SUBD modeling is in what happens when you subdivide the geo, so let the computer do the math, try to keep the mesh as simple as possible, and only add geo when and where it's needed. You can subdivide the geometry and add more details, but before you do, make sure you're happy with the overall form and how the surface flows, because as soon as you subdivide it and add a few details on top, it will be difficult to change the topology on the fly because of how many new polygons were created in the subdivision.
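To make the Relax idea concrete, here is a conceptual NumPy sketch of what such a tool does under the hood. It is not XSI's implementation, and the mesh representation is an assumption: each vertex is simply blended toward the average of its neighbors over a few iterations.

```python
# A conceptual sketch of a relax/smooth pass over mesh vertices.
import numpy as np

def relax(vertices, neighbors, iterations=10, strength=0.5):
    """vertices: (N, 3) array of positions.
    neighbors: list of neighbor-index lists, one (non-empty) list per vertex."""
    verts = vertices.copy()
    for _ in range(iterations):
        averaged = np.array([verts[nbrs].mean(axis=0) for nbrs in neighbors])
        verts += strength * (averaged - verts)   # blend toward the local average
    return verts
```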

Adjacent selection. Another tool I use a lot is adjacent selection, especially adjacent selection from polygons to edges. This helps me quickly select the edges without actually clicking on them, so I recommend assigning the adjacent-selection commands to easily accessible hotkeys. I use it every time I need to add supporting edges to newly extruded polygons, since the edges adjacent to those polygons are the ones that need to be tessellated with parallel edge loops.
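The same polygons-to-edges conversion can be scripted. A minimal sketch in Maya's Python commands (an assumption, since the article uses XSI's adjacent-selection commands) looks like this:

```python
# Convert freshly extruded faces to their surrounding edges so supporting
# edge loops can be added without clicking each edge by hand.
import maya.cmds as cmds

def faces_to_adjacent_edges(face_selection):
    """Return the edges touching the given faces."""
    edges = cmds.polyListComponentConversion(face_selection,
                                             fromFace=True, toEdge=True)
    return cmds.ls(edges, flatten=True)

# Usage: run with the new faces still selected right after an extrude.
edges = faces_to_adjacent_edges(cmds.ls(selection=True, flatten=True))
cmds.select(edges)
```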

Render prep work: I usually do all the material-separation work before I import the model into Keyshot. So with the Atom-Eater, I created a few materials with different colors in XSI and applied them to the model parts accordingly. The main goal here is not to get the colors right, but just to get a proper separation of materials for the future work in Keyshot. That way, when I import the model into Keyshot, all the parts are already linked to the material they belong to, which makes it easy to start working on the shaders.

Keyshot: One way to use Keyshot efficiently at the look-dev stage is to import the model at a lower resolution first. When I was done with the Atom-Eater model, I first imported it into Keyshot without subdividing it. That way the initial test renders were much faster, so I could iterate on materials and lighting settings much more quickly. Once I was happy with the shaders and lighting, I subdivided the model two times in XSI and imported a final version of the mesh for the final renders. Using Keyshot's Rounded Edges feature allows you to get micro-fillets without the need to build them into the geo. This is a very quick and efficient way to give your mechanical models a more realistic look, and it works especially well on non-SUBD parts with a lot of hard edges.
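For the render-prep step above, here is a minimal sketch of the equivalent material separation in Maya's Python commands (an assumption; the author does this in XSI, and the part names and colors are hypothetical). The point is only that each group of parts gets its own distinctly colored placeholder shader before export, so they arrive in Keyshot as separate material groups.

```python
# Assign flat-colored placeholder shaders per part group before export.
import maya.cmds as cmds

def assign_placeholder_material(objects, name, color):
    """Create a flat-colored lambert and assign it to the given parts."""
    shader = cmds.shadingNode("lambert", asShader=True, name=name)
    cmds.setAttr(shader + ".color", *color, type="double3")
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=name + "SG")
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)
    cmds.sets(objects, edit=True, forceElement=sg)

# Hypothetical part groups; the colors only need to be distinct.
assign_placeholder_material(["body_geo"], "mat_body", (0.8, 0.1, 0.1))
assign_placeholder_material(["hydraulics_geo"], "mat_hydraulics", (0.1, 0.1, 0.8))
```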


Post-work: I almost always tweak the Keyshot renders in Photoshop, even if it's just a quick brightness/contrast adjustment. I also like tweaking levels and color balance to create a more specific mood that fits the image.
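As a rough scripted equivalent of that quick post pass (purely an illustration using Pillow; the author works in Photoshop, and the file names are hypothetical):

```python
# A light, batchable brightness/contrast pass over a render.
from PIL import Image, ImageEnhance

def quick_grade(path_in, path_out, brightness=1.05, contrast=1.1):
    """Apply a small brightness/contrast adjustment to a render."""
    img = Image.open(path_in)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img.save(path_out)

quick_grade("atom_eater_render.png", "atom_eater_final.png")
```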

A Few Words on XSI…

I get asked quite often about which 3D modeling software I am going to switch to now that Softimage XSI has been discontinued, so I thought I'd take the opportunity to respond here. My answer is: I still use XSI and am going to continue using it until I find a better alternative, which is not the case at the moment. I've been using it for a long time, which has taught me a lot of subtleties about how to make the best use of it and be quite efficient when it comes to polygonal modeling. I've tried Max, Maya, and Modo, and I wasn't 100% happy with them at the time I tried them, which was a few years ago. I am sure these programs will keep getting better in the years to come, and I wouldn't be surprised if a whole new modeling package appeared that was so good I'd switch to it right away. At the moment, I'm using a combination of these modeling tools:

- Softimage XSI for polygonal modeling
- ZBrush for sculpting
- Moi3D for NURBS modeling

I am very happy with the combination of these three programs and still feel there is a TON for me to learn about each of them, including the already-discontinued Softimage XSI. That being said, I'm keeping an eye on anything new that comes out, including the latest versions of 3ds Max and Modo, which keep getting more powerful, so I wouldn't be surprised if I tried another polygonal modeling package very soon.


Conclusion

If there were one last tip that could also serve as a conclusion, I'd suggest trying to really master not just the software and toolset you're using, but also the entire process, from the very vague initial idea in your head to how you follow it through, until the process becomes fully transparent to you. Asking the right questions and being honest with yourself will help you define what to focus on. I believe every artist is unique in how he or she thinks, so setting up the design/modeling/rendering process to your own preference, until you feel fully comfortable with it, is the key to unlocking your creative potential. I hope you enjoyed reading this article and found a few helpful ideas in it.

About Me

Vitaly Bulgarov is a California-based artist who specializes in mechanical design and currently works as a Concept Designer at Intel's New Devices Group, where he is involved in developing next-generation wearable technology. Before joining Intel, Vitaly worked for more than a year at Intuitive Surgical, developing concept designs for next-generation products in the surgical robotics field. Over the course of 12 years, Vitaly has worked with companies such as Blizzard, Lucasfilm, Paramount Pictures, DreamWorks, Intuitive Surgical, and Oakley. Some of his notable robotics design work for film includes Robocop (2014) and Transformers 4, as well as the upcoming Terminator: Genisys and Ghost in the Shell (2017). His video game projects include StarCraft 2, World of Warcraft, Diablo 3, and others.

Vitaly Bulgarov

www.vitalybulgarov.com



The End


Well, folks, we have done it again. We finished yet another addition to the library. Thanks to all of our supporters and those who contributed their time to create content for the book. I would also like to thank friends and family. All of this would not be possible if it were not for you, the entertainment communities, letting us know that there is a real need for something like this. By sharing our workflows, tips, and tricks with one another, we can only make our industries stronger and the content we create that much better.

It is also important to know that all the content created by the artists was done entirely in their own spare time, set aside from their jobs and social lives, to contribute to VERTEX. None of the artists and contributors asked for any financial compensation for the time and effort they spent putting this together. I really hope you enjoy all the content in the book; we have done the best we can with limited time and limited resources. However, if you do spot an error, please share it with us via email on our website. VERTEX 3 took a little longer than we wanted to finish due to our artists getting busy with life. We appreciate your patience and thank you for allowing us to take our time on the books.

To all of our readers, I would like to thank you for downloading this and/or spreading it to friends and co-workers. We are still a new thing in the entertainment industries, and a great number of people have yet to learn of our existence. Please continue to help us spread the word by either sharing the books with people or sending them to our site.

PLEASE VISIT AND LIKE US!!
Facebook Page: https://www.facebook.com/artbypapercut
Main Website: http://www.artbypapercut.com

CONTENT CONTRIBUTION!!
If you would like to contribute an article or would like a pimp page of your work, we are always looking for new content for current and future books. Please visit our website and send us your information, and we will review your material and see about putting you in the next VERTEX.

Thanks for yet another awesome book. VERTEX 4 TO BE CONTINUED...

Ryan Hawkins Editor ryan@artbypapercut.com
