3D World

Mar 16 2012
 

Transformers week draws to a close with this video training from Dan DeEntremont. Master key animation skills in this Transformer-style tutorial as you rig a robot to animate its transformation from train to humanoid

To celebrate Digital-Tutors’ new Transformation training, we thought we’d make an event of it and post online all things Transformery!

We’ve already posted up two making-of Transformers articles:
The making of Transformers
The making of Transformers 2

And you can also read a making-of article on the cult Transformers-style advert for Citroën – The Embassy: The art of Robotics

To complete the Transformers roundup, here’s the train-transforming video tutorial from issue 119 of 3D World magazine.

Rig your own Transformer in LightWave

This article demonstrates the workflow for creating a transforming robot, similar to the designs in the live-action Transformers movies.

The films’ director, Michael Bay, is well known for taking the level of special and visual effects over the top in his films. These Transformers are not the simple ones everyone has seen in the cartoon show, but a gigantic mass of moving metal that somehow manages to form a human-like (or sometimes animal-like) robot. The transforming train that you’ll rig and animate has about 300 moving parts.

There are three videos available to show in detail the techniques described here. The first two focus on setting up the robot's upper arms: you can take the principles you learn there and apply them to the remaining parts of the figure. Video 1 covers rigging the upper arms. In this section, you will also set up the rig in a way that cuts the animation time nearly in half, using the Follower modifier. Bones will be used to move the parts instead of separate layers.

In Video 2, we will continue with the animation of the rigged upper arms. Here, I will demonstrate how to bring each piece to its destination position without causing geometry to intersect. I will also demonstrate a number of key techniques to help make the movement of the different sections more realistic.

Discover how you can rig this robot to animate its transformation from train to humanoid

Video 3 has a completely rigged transforming train. This video focuses on animating the transformation from a train to a robot, ending in a cool pose. A proxy version of the train will be used in this video. IK/FK Blending will also be implemented to allow smoothness in the transformation. If you get stuck at any point, the CD includes completed scene files you can study.

Click Next to begin the train-transforming video tutorial



Mar 16 2012
 

Disney's John Carter movie still

Cinesite has completed 831 VFX shots and converted 87 minutes of film into stereoscopic 3D for Disney’s John Carter, which hit cinemas last week

The 3D work in John Carter – Andrew Stanton’s first live-action feature film, based on Edgar Rice Burroughs’ ‘Mars’ series of novels – was split between three leading London FX houses: Cinesite, Double Negative and The Moving Picture Company.

Cinesite, renowned for its photoreal environment work, was responsible for creating and populating the majority of environments for John Carter. The 310-strong team completed 831 visual effects shots, and also converted 87 minutes of the film into stereo 3D.

Cinesite’s VFX supervisor Sue Rowe spent several months on set in the UK and Utah. Due to the scale of the project, Rowe divided the work between four other VFX supervisors.

John Carter VFX shots

Helium is shown from different angles throughout the film, and is used as the backdrop for the final battle sequence

Christian Irles supervised work on Princess Dejah’s city, Helium. The city presented a challenge as it had to match the art department’s concept stills. While this was easy enough to do in matte painting, getting actual full 3D renders was very time-consuming and render-heavy. Projections were created for the terrain and these were worked up in matte painting to achieve the level of detail required.
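To give a rough idea of what camera projection involves – this is an illustrative sketch, not Cinesite’s pipeline, and the camera values and function names are assumptions – each terrain vertex can be pushed through the plate camera to find which pixel of the painted still should be mapped onto it:

# Illustrative sketch of camera projection mapping: project each terrain
# vertex through a pinhole camera to get (u, v) texture coordinates into the
# matte-painted still. Camera transform and film-back values are assumed.

import numpy as np

def project_uv(points_world, cam_to_world, focal_mm=35.0, aperture_mm=36.0, image_aspect=1.78):
    """Return (u, v) coordinates in [0, 1] for each world-space point,
    as seen through a simple pinhole camera looking down its -Z axis."""
    world_to_cam = np.linalg.inv(cam_to_world)
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam_pts = (world_to_cam @ pts.T).T[:, :3]
    # Perspective divide.
    x = cam_pts[:, 0] / -cam_pts[:, 2]
    y = cam_pts[:, 1] / -cam_pts[:, 2]
    # Map to normalised texture space using the film back.
    u = x * (focal_mm / (0.5 * aperture_mm)) * 0.5 + 0.5
    v = y * (focal_mm / (0.5 * aperture_mm / image_aspect)) * 0.5 + 0.5
    return np.stack([u, v], axis=1)

# A camera 100 units above the terrain, looking straight down (world Y is up).
cam_to_world = np.array([[1, 0, 0,   0],
                         [0, 0, 1, 100],
                         [0, -1, 0,  0],
                         [0, 0, 0,   1]], dtype=float)
terrain = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 5.0]])
print(project_uv(terrain, cam_to_world))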

John Carter Helium city

Cinesite created a matte painting of the outside of the city of Helium and, using projections, built up the terrain using high-res stills taken on location in Utah

The shots presented the city as a whole, with both Helium Major and Helium Minor visible, resulting in a huge number of texture maps and shaders. Render times were very high for these shots, and all layers, such as crowds and terrain, were rendered separately.

Helium stats:

  • 346 models in the city structure
  • 74 individual props created

The mobile city of Zodanga crawls like a myriapod across the surface of Mars: giving the city a sense of scale and animating the digital legs was challenging

Jonathan Neill supervised Cinesite’s work on the mobile city of Zodanga, a mile-long rusty metal tanker that crawls like a myriapod across the surface of Mars. The city was heavily textured using a combination of Photoshop, Mari and Mudbox in tandem with bespoke shaders and lighting development, to give an industrial look and feel.

A handful of sets were built which were locations within the city, but these needed considerable extension work to make the depth and scale of the city believable. Cinesite modelled thousands of pieces of geometry for the city buildings, and created hundreds of CG props to dress the sets.

John Carter VFX. Interior of city

Cinesite filled the city with warships and troops, before dressing it with hundreds of CG props

With 674 legs, the mobile city was technically challenging to animate: timed animation caches were used to ensure the digital legs moved in a random fashion. “Variations in movement and secondary animation such as cogs and cabling were used to create interest in the leg movement,” says Cinesite.
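Cinesite doesn’t detail its tools, but the principle of retiming a baked cache per leg – so that hundreds of legs don’t march in lockstep – can be sketched in a few lines. This is a hypothetical illustration; the offset and speed ranges are assumptions.

# Hypothetical sketch of per-leg cache retiming: every leg plays back the same
# baked cycle, but with a random time offset and speed multiplier so the
# motion reads as varied rather than mechanical lockstep.

import random

def sample_cycle(cache, frame):
    """Linearly interpolate a baked cycle (one value per frame) at a float frame."""
    f = frame % len(cache)
    i = int(f)
    t = f - i
    a, b = cache[i], cache[(i + 1) % len(cache)]
    return a + (b - a) * t

def build_leg_retimes(num_legs, seed=674):
    rng = random.Random(seed)
    return [
        {"offset": rng.uniform(0, 48),     # frames of phase offset (assumed range)
         "speed": rng.uniform(0.9, 1.1)}   # +/-10% playback speed (assumed range)
        for _ in range(num_legs)
    ]

# A toy one-value-per-frame "cache" standing in for a full leg pose cache.
leg_cycle = [float(i) for i in range(48)]
retimes = build_leg_retimes(674)

def leg_pose(leg_index, shot_frame):
    r = retimes[leg_index]
    return sample_cycle(leg_cycle, shot_frame * r["speed"] + r["offset"])

print(leg_pose(0, 101), leg_pose(1, 101))  # two legs at the same shot frame differ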

Zodanga City Model stats:

  • 291 structural element models
  • Up to 20,000 objects in a single shot
  • 1-2 billion polygons, dependent on camera position and detail required
  • 242 CG props created to populate the city

Zodanga City Legs stats:

  • 674 legs
  • 44 claws
John Carter

The texturing and detailing of the giant airships had to be spot on since they feature in many close-up shots

Ben Shepherd oversaw the huge aerial battle between Zodanga and Helium. His team created each side’s airships, which use solar wings to travel on light, as well as explosions, fire, people and set extensions.

The giant airships needed to be finely detailed for close-up shots. A challenge for look development was that they were required to be more like a 19th-century sailing ship than the type of spaceship a modern-day audience might expect.

For Sab’s flagship corsair, a partial set was created for the bridge/cockpit and one deck of a single ship. This was scanned and photographed for reference and recreated. The remaining areas were created as full CG models.

Dejah’s ship and the flagship Helium ship, the Xavarian, were also created in 3D. Each ship had a full set of wings which were sized and laid out specifically for that ship. These were controlled by pulleys and ratchet-type controls to give a sailing look. Each of the wings was covered in hundreds of individual solar tiles that needed to be controllable in animation.

John Carter movie

The entire Thern effect system was designed and built from scratch using a combination of Maya, Houdini and custom software developed in house

Simon Stanley-Clamp directed work on the Thern sanctuary, a huge underground cave that forms around Carter and Dejah as self-illuminating blue branches while the characters walk through it.

The entire Thern effect system was designed and built from scratch using a combination of Maya, Houdini and custom software developed in house. Based on the principles of nanotechnology, the system provided a semi-automated way to ‘grow’ Thern into any environment and geometry. It took a full year of development time to evolve and bring to the big screen.

These ‘growing Thern’ shots were some of the most complex VFX shots Cinesite undertook, and can be seen to great effect in 3D

In the sequence, as the tunnel itself ends, the main Thern Sanctuary room is seen to build itself, opening out within the Thern matrix of the pyramid interior. This shot required extensive Thern simulation and growing effects, blending multiple elements together in Nuke to build the shot up.

John Carter is in cinemas now. We’ve not seen the film yet, and reviews so far seem to be fairly mixed, so if you do go, let us know what you think of it via the comments box below, or on Facebook or Twitter

The making of John Carter

This article focuses on Cinesite’s contribution to the film, but the 3D work was split between three leading London FX houses: Cinesite, Double Negative and The Moving Picture Company.

Read the making-of John Carter article in issue 155 of 3D World magazine, where Renee Dunlop takes us behind the scenes of all three VFX facilities.

Issue 155 of 3D World goes on sale on 27th March



Mar 15 2012
 

Weta Digital embarked on a new quest with The Adventures of Tintin, complete with crashing waves, pirate battles and an extremely stylish wardrobe. Renee Dunlop takes us behind the scenes

As the Blu-ray of The Adventures of Tintin goes on sale, we thought we’d share this article from issue 152 of 3D World magazine.

If you haven’t watched the film already, we suggest you do – The Adventures of Tintin looks like a mix of live-action and CG, which adds up to something unique on screen. It’s possible that The Adventures of Tintin missed out at the Oscars this year because of this very thing, which is a real shame as we think the film has some of the best CG we’ve ever seen.

Weta’s Adventures of Tintin

Weta Digital is delving into a new world – that of the journalist. Enter Tintin, a popular post-World War One comic strip hero who travels about with his dog, Snowy, cracking cases with a little help from his friends. Created in 1929 by the artist and writer best known as Hergé, the character has now been brought to 3D animated life on the big screen by the stellar artists of Weta, led by director Steven Spielberg and producer Peter Jackson.

It took some of Weta’s best to tackle the wide array of arduous effects required to complete the film. Keith Miller, one of five VFX supervisors, was among those appointed to the task. He was in charge of roughly 340 shots.

An epic sea battle required Weta Digital’s team to simulate stormy ocean waves

For Miller, the big challenge was the pirate battle. “It’s such a dynamic sequence,” he says. “There are nearly 60 pirates running about, two ships that are sailing in 60-metre seas complete with lightning storms, rain, hurricane winds, fire, explosions – you name it, it’s all there.” The most difficult challenge was the water, with 60-metre waves interacting with the ships that needed to compositionally match the representations provided by the pre-viz team.

Miller’s team approached the work from a few different angles. “First, we updated our FFT [fast Fourier transform] library, a system of generating waves using measurements collected in oceanic research,” says Miller. They also completely rewrote the library using a more up-to-date spectrum that could incorporate the depth of the ocean and the fetch – the distance over which the wind blows at a constant velocity. “We added those new variables into the system and we were able to generate much more realistic wave scenarios for the high wind systems,” he adds.
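The exact spectrum Weta used isn’t published, but as an illustration of how wind speed, fetch and depth feed into such a system, here is a minimal sketch of the standard JONSWAP spectrum with the Kitaigorodskii depth correction (the TMA form); the input values in the example are assumptions, not figures from the film.

# Illustrative only: a fetch- and depth-aware wave spectrum of the kind an
# FFT ocean system samples to build its wave heights.

import math

def jonswap(omega, wind_speed, fetch, g=9.81, gamma=3.3):
    """Wave energy at angular frequency omega (rad/s) for a given wind
    speed (m/s) and fetch (m), using the JONSWAP spectrum."""
    alpha = 0.076 * (wind_speed ** 2 / (fetch * g)) ** 0.22
    omega_p = 22.0 * (g ** 2 / (wind_speed * fetch)) ** (1.0 / 3.0)  # peak frequency
    sigma = 0.07 if omega <= omega_p else 0.09
    r = math.exp(-((omega - omega_p) ** 2) / (2.0 * sigma ** 2 * omega_p ** 2))
    return (alpha * g ** 2 / omega ** 5) * math.exp(-1.25 * (omega_p / omega) ** 4) * gamma ** r

def depth_attenuation(omega, depth, g=9.81):
    """Kitaigorodskii depth factor used by the TMA spectrum: deep water passes
    through unchanged, shallow water damps the longer waves."""
    omega_h = omega * math.sqrt(depth / g)
    if omega_h <= 1.0:
        return 0.5 * omega_h ** 2
    if omega_h < 2.0:
        return 1.0 - 0.5 * (2.0 - omega_h) ** 2
    return 1.0

def storm_spectrum(omega, wind_speed=30.0, fetch=100e3, depth=200.0):
    return jonswap(omega, wind_speed, fetch) * depth_attenuation(omega, depth)

# Example: energy at a 0.5 rad/s swell for hurricane-force wind over 100 km of fetch.
print(storm_spectrum(0.5))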

Weta’s FX team did quite a bit of work approximating the surface velocity of the newly generated ocean surfaces and applying it to Smoothed Particle Hydrodynamics (SPH) simulations, much of which was used for the white-water simulation: the breaking waves that rode on top of the ocean surface. These were pushed through Weta’s in-house 3D effects solution, Synapse, a node-based system that acts as a container for solvers. In some cases, Naiad data was also incorporated into Synapse for the initial bounded simulation elements.

The battle sequence combines water, fire, wind and lightning, and featured as many as 60 pirates in combat

In addition to reworking the FFT system, senior water TD Chris Horvath updated Weta’s shading model for raytraced water, using an improved participating-media model for underwater light extinction and scattering. He also made improvements to the procedural texture foam system.
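The article doesn’t give the shading model itself, but extinction in a participating medium such as water is commonly modelled with the Beer-Lambert law, applied per colour channel. The sketch below uses assumed extinction coefficients and an assumed water colour purely for illustration.

# Illustrative sketch, not Weta's shader: light travelling a distance d through
# water loses energy to absorption and out-scattering, T = exp(-sigma_t * d).
# Red is absorbed fastest, which is why deep water shifts towards blue-green.

import math

# Hypothetical extinction coefficients per metre for R, G, B (assumed values).
SIGMA_T = (0.45, 0.08, 0.04)

def transmittance(distance_m):
    """Fraction of each colour channel surviving a path of the given length."""
    return tuple(math.exp(-s * distance_m) for s in SIGMA_T)

def attenuate(colour, distance_m, water_colour=(0.0, 0.15, 0.25)):
    """Attenuate a surface colour seen through `distance_m` of water and add a
    simple in-scattering term that fades towards the water colour."""
    t = transmittance(distance_m)
    return tuple(c * ti + w * (1.0 - ti)
                 for c, ti, w in zip(colour, t, water_colour))

print(attenuate((1.0, 0.8, 0.6), 5.0))   # a sandy colour seen through 5 m of water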

Creating the hands

While Miller and his team battled with the pirate ships, Weta’s digital creature supervisor Simon Clutterbuck focused on some of the smallest of details through his modelling department. “We build the animation puppets, the deformation rigs, we do all the cloth and hair simulations, muscle dynamics, flesh dynamics – anything that has to do with the monster or character,” he says. “We interact with all the departments in the studio to produce stuff for them to use, like the puppets or the baked light, and we work closely with shots and animation.”

The creature department work includes providing all the puppets for the animators. “Our animation puppet isn’t the thing that gets cached and ends up in the shot,” says Clutterbuck. “The animation puppets are kind of an interactive, almost real-time version of the character. They don’t have to see amazing hand deformations to pose the hand correctly, so they’re just posing [and] animating this thing that’s much lower resolution.” Clutterbuck’s Creature Department provides the animators with approximations of clothes and low-res hands and bodies that allow for faster animation. “Then the animation data is cached off of that puppet and plugged into a high-resolution creature rig, which gets cached and given to lighting,” he says. “This way there’s no requirement for interactivity in our actual deformation models.”

A single complex rig was used as the basis for all characters’ hands

It’s hardly all low-res work, though. “There’s a big focus on faces and hands in the show, so a good deal of time was focused on building a detailed hand rig,” says Clutterbuck. “We had all these incredibly close shots of Tintin’s hands. It’s a treasure hunt, so there are all these clues that lead to the treasure, and there are lots of shots where he’s inspecting things. The shots are incredibly long, so you’ll have minutes focused on their face or hands. The stability of the cloth solve, the fidelity of the hands [and] the deformation all had to be very high. It was pretty unforgiving.”

Weta Digital’s workflow uses a generic model called Gen Man as a baseline for building humanoid characters. This starting point is used for reference, scanning and motion capturing, tailoring clothes, and even cross-referencing MRI data. Clutterbuck explains: “We produced a whole bunch of life casts in all different poses that were used to build support moulds, 36 in all, that went into the MRI machine, so the character could put his hand into a similar pose and hold it there. Then we could derive the meshes of his joints from the MRIs.” The result was a series of high-resolution joint meshes of his actual skeleton in the selected poses.

The story requires characters to grip and manipulate objects

The story is a treasure hunt, so there are lots of shots where the characters have to pick things up and be able to manipulate them

“The metacarpals in the wrist do all these crazy rolling motions – it’s really complex,” Clutterbuck says. “We couldn’t build that complexity into the animation puppets because it would have been prohibitive to animate with, but we also needed the correct degrees of freedom in the wrist and joints to give us the right deformations of the hand.” It took nearly five months to get the hands working the way they wanted.

“The hand rig looks pretty amazing,” says Clutterbuck. “The hand model propagates out into the show, procedurally warped into new shapes, so we built one hand rig and it was fitted to all the characters’ hands. We have a process that was developed on Avatar to transfer the rig and deformation data onto other models.”

Weta Digital’s model supervisor Marco Revelant was responsible for all the assets created in the model department and was involved with grooming and developing the fur system from the user side for the dog, Snowy. However, it was the clothing that both Clutterbuck and Revelant found the most challenging. The multiple layers and the way the different fabrics fell and moved presented a daunting task.

Folding the clothes

Weta Digital set up a Tintin-specific costume department that helped define the design of the clothing, offering insight into how the fabric would drape and move over a character. “The problem is,” says Revelant, “when you do digital clothing and give it to a modeller, the modeller will try to put in features like wrinkles and folds, but won’t necessarily take into account the quantity of fabric.”

Care was taken with getting clothing folds to animate correctly

To manage this issue, the Creature Department worked closely with modelling, providing tools that let the modellers drape the character as they worked, so they could see how the fabric was behaving rather than waiting for the Creature Department to run its simulations. Weta used nCloth in Maya, but spent a huge amount of time up-front tuning parameters and getting the topology and construction of the models correct, especially in cross-sections such as sleeves.

There are eight principal characters, and several have multiple costumes. In all, there were 551 individual costumes to build for the film

Several characters had multiple layers of clothing, requiring layers of geometry to simulate friction. There were eight principal characters, and several – including Tintin, Captain Haddock and Sakharine – have multiple costumes. In all, there were 551 individual costumes to build for the film. “Take the Captain,” Clutterbuck says. “He had a big woollen jacket, a woollen jumper, trousers, and socks and shoes.” Again, proper reference was key. Weta filmed a man running on a treadmill wearing a tailored suit they provided, and gathered reference on how cloth breaks across the seams, collecting data on details such as the effects of double versus single stitching.

Weta first tried solving just the visible clothing, but found that it didn’t quite look correct. “We ended up going for full coupled solutions where everything was solved,” says Clutterbuck. “Tintin might enter with his trench coat on, then take it off and toss it onto the back of a chair, and continue the scene wearing the rest of his costume. We had to handle this level of complexity where we had all these variations of costume elements and they had to solve coupled. We hadn’t really done anything that complicated before in terms of clothes.”

Coupling affects even supporting characters such as Silk, who dresses in a formal jacket, a waistcoat and a shirt. “We didn’t solve the shirt, then put the waistcoat on, then the jacket,” says Clutterbuck. “We solved everything at the same time, so the solutions were all fully coupled. All the costume elements are plugged into one solver. Since they’re all plugged in, they all interact.”
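As a purely conceptual sketch of what ‘fully coupled’ means – this is not Weta’s solver, and a real cloth system would add stretch, bend and proper cloth-cloth collision handling – every garment registers with one solver and collisions are resolved across all of them within the same time step:

# Conceptual sketch: the structural difference between solving garments one
# after another and solving them fully coupled. Each "garment" is just a set
# of particles here; all garments share ONE solver, so every layer can push
# against every other layer in the same step.

import itertools

class Garment:
    def __init__(self, name, particles):
        self.name = name
        self.particles = particles   # list of [x, y, z] positions
        self.velocities = [[0.0, 0.0, 0.0] for _ in particles]

class CoupledSolver:
    def __init__(self, thickness=0.01):
        self.garments = []
        self.thickness = thickness

    def add(self, garment):
        self.garments.append(garment)

    def step(self, dt, gravity=-9.8):
        # 1. Integrate every garment together.
        for g in self.garments:
            for p, v in zip(g.particles, g.velocities):
                v[1] += gravity * dt
                for i in range(3):
                    p[i] += v[i] * dt
        # 2. Resolve collisions across ALL garment pairs (the coupling).
        for ga, gb in itertools.combinations(self.garments, 2):
            self._push_apart(ga, gb)

    def _push_apart(self, ga, gb):
        # Naive particle-particle separation, standing in for real collisions.
        for pa in ga.particles:
            for pb in gb.particles:
                d = [pb[i] - pa[i] for i in range(3)]
                dist = max(1e-9, sum(c * c for c in d) ** 0.5)
                if dist < self.thickness:
                    push = 0.5 * (self.thickness - dist) / dist
                    for i in range(3):
                        pa[i] -= d[i] * push
                        pb[i] += d[i] * push

# Shirt, waistcoat and jacket all plug into the same solver, so they interact,
# rather than being solved one layer at a time.
solver = CoupledSolver()
solver.add(Garment("shirt",     [[0.0, 1.000, 0.0]]))
solver.add(Garment("waistcoat", [[0.0, 1.005, 0.0]]))
solver.add(Garment("jacket",    [[0.0, 1.010, 0.0]]))
for _ in range(10):
    solver.step(1 / 24)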

Weta defers everything to its render wall. The costumes were assembled as a master file that contained a costume description. During the baked simulation step that file would assemble the costume, plug it into all the solvers, bring it in, attach it to the character, then run the simulation. The result was a final sim, plus a series of generated files – what Weta calls pre-files – representing the pre-simulation state. The individual costume assets are iterated in parallel as an ensemble of costume elements.

“There’s a big focus on faces and hands in the show, so a good deal of time was focused on building a detailed hand rig,” says Clutterbuck

The costumes took several minutes a frame to simulate, but there was no interactivity requirement because that’s all happening on the render wall and animation was working with real-time puppet versions. “So we have these two parts of every character, with the puppet which goes to animation and the creature deformation model that’s the thing the animation curves get plugged into that simulates on the wall,” says Clutterbuck.

Weta’s flexible pipeline paid off, according to Miller. “I know a lot of facilities tend to lock down their technology, branch it off and continue developing it outside of current shows, but that’s very different from how Weta works,” he says. “It’s got pros and cons for sure, but it’s one of those things that helps us to stay at the leading edge of technology. We’re constantly throwing in new technology and updating and developing new aspects, and trying to get it pushed into production all the way through the course of the show.”

Setting the scene

The entire Tintin project was done in-house at Weta Digital, including the artwork for the environment and character studies. The translation of the environments from 2D to 3D was left to Weta’s modelling department under Marco Revelant’s guidance. An internal art department was assembled to research the period in which the film is set.

“Every element that was drawn in the book, we tried to find the respective real element from that period that could have been the inspiration for the Hergé drawing," explains Revelant. "Everything was checked against real period data.”

“One important thing is [creator] Hergé was very careful in depicting a kind of reality that was around the 1940s,” explains Revelant. “Every element that was drawn in the book, we tried to find the respective real element from that period that could have been the inspiration for the Hergé drawing. Everything was checked against real period data.”

Creating the hair

Weta was working on Rise of the Planet of the Apes and Tintin at the same time. While the requirements for hair on Tintin weren’t anything near what they were for Apes, some of the aspects translated over. Tintin required wind effects, wet hair and a lot of development to get the hair to work coupled with the clothes.

Character hair in The Adventures of Tintin has to interact with objects and the environment

With the hat on, the Captain has a groom styled so that his hair doesn’t stick through the hat. When the hat is off, the hair is groomed appropriately. Sometimes the Captain put his hat on or took it off, so transitional shots with appropriate grooms were needed. The Captain’s hair ended up with a very dense particle set and collision objects for the hat, and his hair would spring up a bit during the transition.

Buy issue 152 of 3D World magazine to read the full article

Buy the Blu-ray of The Adventures of Tintin via Amazon



Mar 13 2012
 

Do you remember the original Transformers-style Citroën C4 spot? Five years ago it became a worldwide cult hit and we asked The Embassy’s CG team to reveal some of the ad’s technical secrets. Catch up as Transformers week continues…

To celebrate Digital-Tutors’ new Transformation training, we thought we’d make an event of it and post online all things Transformery!

We’ve already posted up two making-of Transformers articles:
The making of Transformers
The making of Transformers 2

We plan to post up a train-transforming walkthrough tutorial this week too, so remember to check back.

Here’s the Embassy’s making of the Citroën ‘Runner’ spot

ABOUT THE ORIGINAL AD

Created for the launch of Citroën’s C4 range, The Embassy Visual Effects’ original 2004 ad, ‘Alive with Technology’, opens with a hand-held camera shot of a car that transforms into a robot, performs an impromptu series of dance moves and then reverts back to vehicular form.

The Embassy Visual Effects’ original 2004 ad, ‘Alive with Technology’, opens with a hand-held camera shot of a car that transforms into a robot, performs an impromptu series of dance moves and then reverts back to vehicular form

As well as making other VFX teams green with envy, the spot proved to have unexpected longevity. A full two years on, it was regularly appearing on TV, picking up fresh awards, and inspiring numerous spoofs and tributes, including a memorable parody replacing the C4 with a rather less glamorous Citroën 2CV and a viral for Danish bacon. The Mill even got a shot at producing a follow-up, before The Embassy itself jumped back on board for a third in the series.

“It’s hard to say what it was about that original ad that hit people,” says studio president Winston Helgason. “Technically we did a good job, but something else struck a chord with them. While the ad has that geek factor, it’s just really fun to watch.”

Television audiences got their first taste of vehicular dancefloor magic back in 2004. A relative newcomer to the field of CG, Vancouver-based VFX studio The Embassy Visual Effects had already turned heads with its viral short film Tetra Vaal and some impressively photoreal ads for the likes of Nike.

But it was the Citroën ‘Alive with Technology’ ad that really put the studio on the map – and a spring in the step of CG‑based car ads. Fusing perfectly believable virtual visuals, directorial flair, and some seriously cool dance moves, The Embassy created what is now regarded as a genuine classic.

Watch the Citroën ‘Alive with Technology’ spot

Now the studio is back on board for the third spot in what is becoming an increasingly long-running campaign, and has been working hard to push the concept of a car that transforms into a robot to even greater heights.

In contrast to the original spot, for which director Neill Blomkamp utilised a virtual camera and 3D environment constructed from photographs, the new ad’s director, Trevor Cawood, chose to undertake a live shoot in South Africa – a location chosen principally for its favourable lighting conditions. A new transforming CG vehicle was then integrated into the plates with the help of elements rebuilt in 3D to aid the creation of shadows and reflections.

“The brief was pretty open,” says The Embassy president Winston Helgason. “The idea was to have the robot running, but other than that, it was simply ‘make it look cool’. The client did come back and ask if we could find something else for the robot to do, though, so we came up with the rail slide [which the bot performs along the restraining barrier by the side of the road].”

Here, the studio’s 3D and compositing staff reveal just how their cybernetic star was rigged and animated to perform such a stunt. They also explore some of the shading and lighting techniques used to generate the photorealistic renders of the modified car necessary to composite it seamlessly into the background plate.

Helgason reveals that the studio’s preferred tool for this kind of work is LightWave 3D’s own renderer, its raytracing proving particularly well suited to hard surface lighting. Dropping HDRI set data into the program and adding additional lights, the studio is able to get a scene fully lit in a matter of minutes. But ultimately, he says that the real secret of photorealism in the Citroën ads is simply attention to detail.

“The most important thing is to understand how lighting really works, and then learn to match the way it reacts to metallic surfaces,” he says. “That, and then adding loads of extra model detail is what makes the results so effective.”

Watch the Citroën ‘Runner’ spot

Click Next to read about how ILM had to rip apart the original robot design



Mar 12 2012
 

iClone's new toon shader

Real-time 3D animation tool iClone has everything you need to set up your directorial debut. But is it too limited, asks Paul Champion?

PRICE: $200. Upgrade from $120. Other editions: Standard, $80


PLATFORM: Windows


MAIN FEATURES:

  • Real-time animation
  • In-screen motion editing and puppeteering
  • Advanced timeline editing with transition curves
  • Animate in real time with a motion-capture device

DEVELOPER: Reallusion

Converting your finished story idea into a pre-viz or polished animation often presents some daunting challenges, and selecting the right software applications to use can be a key factor in the time (and cost) spent completing it.

iClone5 Pro offers a happy medium between high-end applications that have seemingly endless options to tweak, and frustratingly feeble, user-unfriendly low-end software. The latest version of iClone has new animation tools, is still a breeze to use and remains competitively priced.


iClone5

Now you can create your own version of Pixar’s The Incredibles, in a park and on a merry-go-round!

If you’re unfamiliar with iClone, it’s primarily a template-based hassle-free solution for real-time animation with plenty of bells and whistles. In terms of workflow, you’re limited to working with the rudimentary content supplied with the application, unless you’re prepared to buy additional assets via Reallusion’s Content Marketplace (which always seems to have some sort of deal on offer).

Getting your own assets into iClone5 Pro is quick and easy, but it requires Reallusion’s 3DXchange4, which converts files from applications such as ZBrush, Photoshop, Blender, Poser, Daz Studio, Vue and Maya, and costs $80 for the Standard version. You’ll need 3DXchange4 Pro ($120) to use assets in FBX, 3DS, OBJ and SKP formats. 


Pre-viz users or anyone presenting a concept pitch to clients should find that the content provided is more than adequate for demonstration, where the actual look of assets is less relevant. End users, who will no doubt grow tired of the limited content provided, will be disappointed that they have to shell out for 3DXchange to import more material.

With the assets in place, it’s time to animate, and there are many new tools to help you with this.



iClone5

Whether it’s sexy girls, gym kits or vampires you’re after, the marketplace has plenty of assets for you to buy

New features



Direct Puppet lets you record your actor’s animation in real time, and if necessary lock body parts to locations. MixMoves enables seamless blending between motions. Body Motion Puppeteering enables the user to control the animation speed and direction. 


Simple floor contact is taken care of with Human IK Motion Editing for Actors, and allows props (which can now be animated in real time) to be held onto realistically. The Timeline has been updated so that animation curves can be varied in playback by adding curve adjustments such as Ease In and Ease Out. For physics animation there are Rigid and Soft Body options for simulation, and other uses such as game prototyping.


iClone5

Effects can help to enhance your movies, but they are limited to a maximum of five within a project

The premium new animation tool being touted for use with iClone5 Pro is the Mocap Device plug-in. With this you can act out your animations in real-time – the recorded mocap data is then applied to actors.

At $140, this is a lot cheaper than buying your own professional mocap studio, although it requires you to have an Xbox 360 with Kinect. It’s also only compatible with the Pro edition.

The plug-in is a significant add-on that falls outside the remit of this review. Judging by forum responses, however, it’s a successful product and great for anyone who wants to physically generate their own movement.


Other notable tools and settings, with which Reallusion is catching up with market competitors rather than introducing groundbreaking innovations, include Ambient Occlusion, which improves the quality of visual output with barely any impact on render times; post-FX tools for colour and blur, which are easy to apply; and cartoon rendering, which can be achieved with just a few clicks and some minor texture corrections. 


iClone5

Rigid body simulations can pep up high-speed chases. In iClone5 Pro they are easy to deploy

There’s still plenty of room for improvement in the renderer. The options are minimal and simplistic – which is part of the general charm of iClone, but it doesn’t always do justice to the end result. Multiple cameras and Picture-in-Picture features offer greater control between shots. Much-requested duplication settings enable you to instance objects with ease, and adjustable pivots, snapping and aligning tools are now available for objects.


More resource-hungry improvements include higher poly counts for actors, with notable increases to head meshes, which enable more natural deformations. In practice this works far better than before, and since faces are areas that most viewers’ eyes are naturally drawn to, it’s a clearly visible improvement. However, it can still be difficult and time-consuming to tweak.

iClone's new toon shader

The new Toon Shader is found in the Atmosphere section of the Stage tab, and can be adjusted for your project needs

Cartoon character facial controls have been advanced to include exaggeration. Height Map Terrains now allow bigger landscapes, but they are limited to just five. Smart iProps have been updated for game-like interaction.


During testing, these new tools all worked admirably, yet iClone crashed a number of times for no apparent reason. When pushed to reasonable extremes for any shot – such as 20 actors set up with different parameters and animations applied – iClone responded well. But other times it would crash with, for example, a fairly empty scene during terrain set-up. Ordinarily, this would only be a minor annoyance, but since there’s no autosave option in the program, it becomes more of a frustration.


Hardware-wise, iClone doesn’t require an overly demanding system. Rather misleadingly, it’s listed as being 32-bit and 64-bit Windows compatible, but it’s not actually a native 64-bit release, so it won’t take advantage of any memory installed beyond the 32-bit limit. It’s rumoured that a 64-bit update will be released, although this was unconfirmed as we went to press.


Overall, iClone5 Pro remains an easy-to-use application, and it can be a real time-saver for pre-viz work and presentations. The learning curve isn’t too steep, and setting up shots is intuitive. For existing iClone users, it should be a no-brainer to upgrade because content from previous versions is compatible, the upgrade price is good, and the new tools (and mocap plug-in, if you choose to buy it) will enhance its usability. New users will need to assess whether they have the funds for additional content and a copy of 3DXchange.

VERDICT

PROS

  • Simple for novices without previous animation experience
  • Easily modifiable preset models
  • Ready-made animation categories
  • Intuitive editing
  • Options for advanced animators

CONS

  • Facial profiles are difficult to tweak
  • Additional content incurs extra costs
  • Rendering options still limited
  • Not a true 64-bit application



A speedy solution for pre-viz but hampered by limited content options, basic render settings, and lack of true 64-bit support

About the author


Paul Champion is the demonstrator for undergraduate and postgraduate 3D and VFX courses at the National Centre for Computer Animation, Bournemouth

Win a copy of Reallusion’s iClone5 Pro

Enter our iClone5 Pro prize draw for your chance to win one of four packages featuring Reallusion’s real-time animation suite, worth $1,352 in total



Mar 09 2012
 

Mass Effect 3 trailer

Remember the phenomenal CG trailer that Bioware released a few weeks ago? Don’t worry if you missed it: as Mass Effect 3 goes on sale in the UK, we take a look at Bioware’s cinematic for the final instalment of the massively popular RPG

We’re so lucky that big-budget games can’t be released without an accompanying cinematic teaser to go alongside the standard gameplay trailer, as the animations produced are truly stunning.

Bioware’s Mass Effect 3 trailer is a fine example – the teaser features children, aliens and devastating lasers. The cinematic action really gets your adrenaline pumping.

One YouTube user commented that if the video keeps going like it does, [the character you play] Shepard has only got about 30-35 minutes to take back Earth, tops, before it’s completely annihilated!

Watch Bioware’s Mass Effect 3 trailer online

Want to learn how Bioware created the cinematic?

So do we, that’s why we’ve asked Bioware to contribute a ‘making of’ article for 3D World magazine. So look out for that in the next issue!

Want more like this?

Watch Platige Image’s Witcher 2 trailer and making of video



Mar 09 2012
 

Witcher 2

Watch the jaw-dropping animation in the trailer for The Witcher 2: Assassins of Kings. Then catch Platige Image’s ‘making of’ video too…

In January, Warner Bros. Interactive Entertainment and CD Projekt RED released the epic and impactful CG intro trailer for The Witcher 2: Assassins of Kings, produced by the award-winning animation studio, Platige Image.

We were simply blown away when we watched this four-minute cinematic online at the start of the year. Directed by Tomek Bagiński and produced by Platige Image, this trailer is packed with stunning animation and effects. Watch it below.

Now Platige Image has produced a behind-the-scenes look at the production of the trailer, from the early renders and mocap sessions to the final clip. You can also watch this three-minute video below.

Behind the scenes of Witcher 2

“The idea for the script came from CD Projekt a couple of years ago, right after the premiere of ‘The Witcher’. It was a so-called ‘soft’ version and we used it as the basis for the work. In spite of the vast changes we made, the project went on hold for almost two years until it was brought back to life for the Xbox 360 version.

“As it turned out, these two years gave both parties the necessary perspective. Once we started working on the script again we were able to create a new, better and richer version very fast.

“What was left from the original is the ship and the main characters. All the rest has been changed. For example, in the first version a hornet’s nest was used in the attack. The ship was thrown into chaos – the whole crew started running around. They looked like a group of crazy or electrocuted people. Well… we got rid of this ‘dance’ but kept the chaos and added a lot of steroids. It helped,” says director Tomek Bagiński.

“It was one of the most demanding projects in Platige Image’s history. The script set very high standards. For such a short movie there were a lot of main, detailed characters, difficult face close-ups, and very dynamic action full of special effects: cloth and particle simulation and hard slow-motion shots.

“The ship also became one of the main characters. First our graphic artists created fantastic scenery and, most of all, a great, very detailed sailing ship. Then the particle simulation team went rough with it. They created a vast interaction system covering the whole construction with millions of ice crystals and they smashed the whole thing,” adds CG supervisor Maciek Jackiewicz.

A team of 40 graphic designers and animators was involved in the project for a couple of months.

The Witcher 2: Assassins of Kings is due on Xbox 360 on 17 April. The Windows version is shipping now.

Watch the Witcher 2: Assassins of Kings trailer:

Watch the making-of video for the cinematic:

Read a longer interview with CG supervisor Maciej Jackiewicz on CGSociety

If you liked this, look out for our ‘making of’ Mass Effect 3 cinematic, due in the next issue of 3D World



Mar 08 2012
 

Transformers 2

Transformers 2: Revenge of the Fallen blasts your eyes with thousands of hard metal parts, spinning and sliding gears, wheels, crankshafts, pistons and headlamps as nearly 60 jaw-dropping robots fight to the bitter end. Read how Industrial Light & Magic rose to the challenge of animating the Transformers sequel

A couple of days ago, Digital-Tutors unveiled its new Transformation training, and we thought we’d make an event of it and have a Transformers week.

Over the week we plan to bring you a train-transforming walkthrough tutorial, a step-by-step tutorial by The Embassy on the famous Transformers-style advert for the Citroën C4, and two making-of Transformers articles.

We’ve already posted the first making-of Transformers article, and here’s the second, published in the August 2009 issue of 3D World.

The making of Transformers 2: Revenge of the Fallen

For 2007’s Transformers, which raked in more than $700 million at the box office, lead VFX studio Industrial Light & Magic turned the cool little 1980s Hasbro toys into 14 bone-crushing giants that crashed through the streets of Los Angeles, smashing buildings and each other in over-the-top, rock-hard action sequences. But that was just a warm-up for Revenge of the Fallen…

transformers 2

Many of the digital stars of the original film return in Transformers: Revenge of the Fallen, including Optimus Prime and Bumblebee

The sequel blasts your eyes with thousands of hard metal parts, spinning and sliding gears, wheels, crankshafts, pistons and headlamps as nearly 60 jaw-dropping robots take their battle to seven states, three countries, and one alien planet. Add in another 40 CG vehicles, and the total count is more than 100 major 3D assets.

Massing the troops

As before, ILM took the lead in the effects work, creating both the lion’s share of the robots – 45 in total – and the largest and most complex machines. The studio created all of the heroic Autobots, including their returning leader, Optimus Prime; and many of the evil Decepticons, including both Megatron – seemingly destroyed at the end of the first movie – and the megamonster Devastator, who is five times the size of Optimus Prime.

“The locations, the effects, the style are all huge,” says ILM’s Scott Farrar, who reprises his role of VFX supervisor from the original movie. Rendering shots for Transformers took between 16 and 20 terabytes of disk space in the studio’s renderfarm. Transformers: Revenge of the Fallen took 140 terabytes. “The amount of render time is colossal,” Farrar says. “The whole movie is that way.” Bay’s own studio, Digital Domain, created 13 Decepticons, ranging in size from little ball-bearing ‘microcons’ that transform into a razor-bladed creature called Reedman to the enormous Soundwave, which links up to communications satellites. Digital Domain also created Wheelie, a rascally little bot transformed from a remote control vehicle, and Alice, the ‘pretender’ who transforms from a human seductress.

transformers

In Transformers 2, both Autobots and Decepticons take on more human characteristics. Bumblebee ‘cries’ tears of windscreen-washer fluid in key scenes, for example

Such human qualities are one of the most significant ways in which the robots in Revenge of the Fallen differ from those of their predecessors. “People will be amazed at the interaction of the robots with the environment,” Farrar says. “They kick up dirt. They interact with trees. And they sweat, spit, drip, and squirt fluids like blood.” The fluids, which help give the robots a more organic feel, were added almost as an afterthought.

For a shot in which Starscream, one of the returning Decepticons, has to react to Sam, ILM was tasked with finding ways to make the interaction more dramatic. The solution was simple, but crucial: “We decided to have Starscream spit at Sam,” says associate visual effects supervisor Jeff White. The effect worked so well that the animators began looking for more opportunities to have the robots dispense water, sparks, gas and smoke.

This more organic approach carried over into the design of the Transformers themselves. As the Anatomy of an Autobot page reveals, the robots’ faces have been given significantly more human characteristics, enabling them to act, emote and talk. In total, 40 of the robots have at least one line of dialogue.

With little more than a year to complete the effects, ILM significantly increased the size of the animation team to cope with the increased scope of the project. “We had fewer than 20 animators on the first film,” says animation director Scott Benza. “This time, we had over 50 because we had so many more robots, and three times the complexity of the work.”

With animators often facing complicated 500-frame shots with three robots, Benza cast his staff according to their skill sets. “Some animators were interested in animating particular scenes, so I’d shift things around to give them a chance to do those shots,” he says. “Specialists in animal behaviour would get the Ravage shots because he was a cat-based Decepticon. Others were specialists in dialogue and acting performances.”

To create these performances, team members were aided by a rigging system developed by ILM for the previous movie, through which they could choose what parts of the model to connect. “An animator can animate any individual part or any groups of parts,” Benza says. In addition, a new system provided the animators with a little procedural help on the more complex shots. “The creature development team gave us the flexibility to put the robots into any pose and not have interpenetrations in adjoining areas,” Benza says. “With so many moving parts, it’s quite a task to make sure every individual part doesn’t collide with its neighbours.”

But even with the rigging system to call upon, the first pass at preventing penetrations was carried out by hand – and with Benza, normally in charge of procedural animation, busy in a supervisory role – there were no shortcuts for final-quality animation, either. For example, the Decepticon Scorponok, which was procedurally animated on the original movie, is now entirely keyframed.

A question of scale

The scale of the task ILM’s animation team faced on Revenge of the Fallen becomes even more apparent when you consider the movie’s showpiece scenes: the transformations of the robots from one form to another. The biggest transformation is unlike any seen in the previous film, and it’s for the biggest Decepticon ever. Standing 100 feet tall, the Devastator has 13 million polygons and 52,632 parts. Moreover, he’s formed out of six separate robots.

Devastator, Transformers 2

Devastator is made up of six robots, each with a ‘hero build’ as complex as that of Optimus Prime. Despite this, on seeing the first version of this shot, director Michael Bay felt that it was boring, causing ILM to add yet more detail to both the foreground and background of the composition – for even higher render times

“The idea we came up with was that the Devastator forms in a violent fashion,” Benza says. “We rooted his transformation in the biggest Constructicon.” The ‘Constructicon’ in question, Scavenger – a big red mining excavator – transforms first, then smashes itself into one construction vehicle after another, each transforming and connecting to the previous bots to form the giant monster. His head is all mouth: a cement mixer. It’s a tremendously dramatic scene, and the effects are awe-inspiring.

Devastator kills ILM’s computers!

To make matters more complex, Devastator is so big and has so many parts that the animation crew couldn’t treat him as a single asset. “When we tried to load the entire model in high res, it would grind the machines to a halt,” Benza says. “We had two machines fail trying to work with him. One literally smoked. We don’t know for sure if it was a direct result of working with this character, but it certainly did get overloaded – and fried.”

The scene of Devastator forming was completed in 4K resolution for IMAX screens. The scale of the work forced the crew to develop a set of tools that enabled the animators to work in layers of complexity. “We had seven choices for resolution,” says digital production supervisor Jason Smith. Options for the level of detail at which each part of the model was displayed ranged from proxy geometry and 25K resolution at the low end to 1,300K resolution at the high end. “To control the system, animators had what we called ‘Mr Potatohead buttons’,” says Smith. “They could select any part. For example, they could set Devastator’s arm at 25K resolution, and his head at 31K.”
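ILM’s actual tool isn’t documented here, but the per-part selection idea can be sketched as a simple mapping from body part to display resolution; the part names, file naming and mid-range levels below are illustrative assumptions, not ILM’s data.

# Hypothetical sketch of per-part level-of-detail selection, loosely modelled
# on the "Mr Potatohead buttons" described above.

LOD_LEVELS = ["proxy", "25K", "31K", "60K", "150K", "500K", "1300K"]  # seven choices

class PartLODSelector:
    def __init__(self, parts, default="proxy"):
        self.levels = {part: default for part in parts}

    def set_level(self, part, level):
        if level not in LOD_LEVELS:
            raise ValueError(f"unknown LOD level: {level}")
        self.levels[part] = level

    def resolve(self):
        """Return which geometry file each part should load for the viewport."""
        return {part: f"{part}_{level}.geo" for part, level in self.levels.items()}

# An animator blocking a Devastator shot might keep most of the rig as proxy
# geometry and raise only the parts in frame, as in the example quoted above:
selector = PartLODSelector(["head", "left_arm", "right_arm", "torso", "legs"])
selector.set_level("left_arm", "25K")
selector.set_level("head", "31K")
print(selector.resolve())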

Transformers 2

The scope of the work was also increased by the need to render for IMAX. In certain key scenes, Optimus Prime appears life-size on an IMAX screen

While the level of detail system speeded up the animation work, when it came to final output, truly massive rendering power was still required. In one of the film’s biggest scenes, Devastator climbs a pyramid in Egypt and once at the top, begins ripping the massive structure apart. For this shot, which was also at 4K resolution, ILM used a fluid simulation to move the sand and a rigid body simulation to break the pyramid into millions of blocks.

transformers 2

With ILM’s effects occupying 51 minutes of screen time, rendering the shots took up a mammoth 140 terabytes of renderfarm space

“Some frames would have taken 72 hours to run on a single processor,” Smith says. “We used multiprocessors – 26 processors – to chew through the work faster.” But for all of the new technology ILM developed for Revenge of the Fallen, not to mention the complexities of wrestling with IMAX resolution for several shots, the studio describes the key innovations of the movie as creative, not technical.

First, the VFX supervisor and animation director shot plates on location and provided guides for the film’s editors. For a three-minute fight scene set in a forest between Optimus Prime and several Decepticons, Farrar and Benza ended up supervising the plate photography, and took a first cut at editing the footage. “We don’t typically get involved with editorial decisions,” Benza says. “But because we shot the material and knew what the plates were intended for, we took a first pass at putting the forest back together as a template for the editors.”

Key desert sequences were shot on location in Alamogordo, New Mexico, on a set close to that used for the Scorponok sequences in the first film

Second, the animators contributed dialogue and choreographed two key fight scenes in the movie: both the forest battle and a sequence in which Bumblebee literally tears Rampage apart, giving a little satisfied nod of the head after he has decapitated the Decepticon.

“We designed the first version of the fight not knowing how it would be used in the movie, with Bumblebee and the cop car robot from the first movie, Barricade,” says Benza. “Michael Bay liked how tight the edit was and the brutality when Bumblebee tears the limbs off, so he found a place for the fight in the movie and substituted a different robot.” And finally, the director ‘shot’ scenes directly on ILM’s motion-capture stage, working in collaboration with the animators. “We blocked out the scenes, loaded them up on our stage, and put a virtual camera into Michael’s hands,” Benza says. “On the set, you often see Michael behind the camera, so we wanted to give him a hands-on experience for the digital scenes.”

The resulting movie is an enormous human achievement, not only considering the vast scale of the work undertaken, but also for the way in which ILM’s staff became involved in tasks traditionally thought to lie outside the control of visual effects artists.

The crew considers Transformers: Revenge of the Fallen to be one of the most collaborative films they have worked on. As visual effects steadily become a part of the production as well as the post-production process, it only remains to be seen where Industrial Light & Magic’s increasing level of creative control will take the studio – and the movies it works on – next.

VITAL STATISTICS

Title: Transformers: Revenge of the Fallen
Lead Studio: Industrial Light & Magic
Other Studios: Asylum, Digital Domain
Budget: $200 million (estimated)
Project Duration: 16 months
Team Size: 350
Software used: Maya, Zeno, Nuke, Photoshop

Read the making of Transformers (2007)

Elsewhere, you’ll find VFX breakdown videos as ILM reveal the VFX of Transformers: Dark of the Moon

For the Anatomy of an Autobot click Next



Mar 08 2012
 

If you’ve decided working in 3D is for you then check out this article. Our 10-point guide will help you to craft job-landing showreels

You’ve probably seen movies, games and adverts full of amazing CG elements and thought, “I wonder how they do that” and “I’d like to give that a go”. So now that you’ve decided working in 3D is for you, how do you go about applying for 3D jobs?

In this article, you’ll discover how to create a job-winning showreel with a ten-point guide created by the people who hire.

And don’t forget to follow the links at the bottom of this page for more guidance on getting started in 3D.

What will you need for a job application?

Before you apply for a job in the CG industry, you’ll need three things: a covering letter, a CV and a showreel.

Writing a CV, or resume, and a covering letter are relatively easy tasks, and there’s loads of help online that’s applicable no matter what job you’re going for.

But there’s an art to creating a winning showreel, and since it’s one of the most important pieces in getting a job in the 3D industry, it’s important you get it spot on.

Physical reels are quickly going out of favour, at least for unsolicited applications. In our experience, all studios accept, and many prefer, online reels, although some still require a physical reel at the interview stage. If possible, host the reel on your own website. If not, put it on a video streaming site. The quality on Vimeo is acceptable for most studios; YouTube less so.

TEN GOLDEN RULES

What should you put on your showreel?

Visualisation artists: don’t switch off just yet – the same principles apply to print portfolios.

Keep it short

In large studios, recruiters may have to watch over a hundred reels in a day. Don’t make this more painful than it needs to be. Less is most definitely more.

Put your best work up front

For the same reason, many studios say that you have only around 30 seconds to make your mark. If they haven’t seen anything they like by then, it’s in the bin.

Only include your best work

Anything else raises doubts in the studio’s mind as to whether the good stuff at the start was a fluke, or, just as bad, that you can’t see the difference.

Include contact details

Give your current phone number and email address, and put them on the title screen of the reel: packaging often gets lost or binned when reels are stored.

Include a CV

A single side of A4 or Letter paper, tops. Put any commercial experience up front. And if you have a degree, no-one needs to hear about your GCSEs.

Don’t plagiarise anyone

It may seem like a big industry, but it isn’t. Try to pass off anyone else’s work as your own, and you will be found out. This kills careers.

Provide a physical shot list

Most people still expect one. But you should also put the information on your reel itself, as captions at the bottom or side of the screen.

Say exactly what you did on each shot

Studios see the same graduation short on reels from every team member. If you only did the lighting, put ‘Lighting only’ at the foot of the screen.

Keep the music discreet

Music is the norm, but loud dance tracks or anything with lyrics distracts from your images. Many studios will turn the sound off, anyway.

Don’t come over all Orson Welles

Do not begin with ‘Written, produced and animated by…’. Who else’s work could it be? A title screen with ‘John Smith, Animator, john@animator.com’ suffices.

Want more help in finding a job in 3D?

Follow these links to help kick-start your CG career:

10 simple steps to getting a job in CG

How to get hired in 3D
Main image created by Graham Linfield. Graham’s showreel was so good it helped him to win a job at Taylor James! Here you can watch showreels and see portfolios that helped graduates to land their dream jobs.


3D World Jobs brings together CG job-seekers and studios

Tips for getting that 3D job

Find a 3D Course

Savannah College of Art and Design: Animation courses



Mar 07 2012
 

Watch the stunning animation in the movie trailer for Ice Age: Continental Drift and check out ‘The Artist’ parody trailer too

Ice Age: Continental Drift, co-directed by Steve Martino and Mike Thurmeier at Blue Sky Studios, is due this summer and we can’t wait. Will Sid, Diego and Manny evade the seafaring pirates preventing them from going home?

The trailer shows some truly stunning animation and the storyline doesn’t look too bad either! Make sure you watch ‘The Artist’ parody trailer below too.

The story continues

Scrat’s nutty pursuit of the cursed acorn, which he’s been after since the dawn of time, has world-changing consequences – a continental cataclysm that triggers the greatest adventure of all for Manny, Diego and Sid. In the wake of these upheavals, Sid reunites with his cantankerous Granny, and the herd encounters a ragtag menagerie of seafaring pirates determined to stop them from returning home.

Take a peek, and let us know if you are excited to see it!

Watch Ice Age: Continental Drift – International Trailer

Watch the new Ice Age: Continental Drift promo that parodies The Artist

20th Century Fox has just released a new promo which parodies this year’s Oscar frontrunner ‘The Artist’ to attract attention.

Ice Age: Continental Drift is scheduled to be released in 3D in July