Mar 16 2012
 

Disney's John Carter movie still

Cinesite has completed 831 VFX shots and converted 87 minutes of film into stereoscopic 3D for Disney’s John Carter, which hit cinemas last week

The 3D work in John Carter – Andrew Stanton’s first live-action feature film, based on Edgar Rice Burroughs’ ‘Mars’ series of novels – was split between three leading London FX houses: Cinesite, Double Negative and The Moving Picture Company.

Cinesite, renowned for its photoreal environment work, was responsible for creating and populating the majority of environments for John Carter. The 310-strong team completed 831 visual effects shots, and also converted 87 minutes of the film into stereo 3D.

Cinesite’s VFX supervisor Sue Rowe spent several months on set in the UK and Utah. Due to the scale of the project, Rowe divided the work between four other VFX supervisors.

John Carter VFX shots

Helium is shown from different angles throughout the film, and is used as the backdrop for the final battle sequence

Christian Irles supervised work on Princess Dejah’s city, Helium. The city presented a challenge as it had to match the art department’s concept stills. While this was easy enough to do in matte painting, achieving actual full 3D renders was time-consuming and render-heavy. Projections were created for the terrain, and these were worked up in matte painting to achieve the level of detail required.

John Carter Helium city

Cinesite created a matte painting of the outside of the city of Helium and, using projections, built up the terrain using high-res stills taken on location in Utah

The shots presented the city as a whole, with both Helium Major and Helium Minor visible, resulting in a huge number of texture maps and shaders. Render time was very high for these shots, and all layers, such as crowds and terrain, were rendered separately.

Helium stats:

  • 346 models in the city structure
  • 74 individual props created

The mobile city of Zodanga crawls like a myriapod across the surface of Mars: giving the city a sense of scale and animating the digital legs was challenging

Jonathan Neill supervised Cinesite’s work on the mobile city of Zodanga, a mile-long rusty metal tanker that crawls like a myriapod across the surface of Mars. The city was heavily textured using a combination of Photoshop, Mari and Mudbox in tandem with bespoke shaders and lighting development, to give an industrial look and feel.

A handful of sets representing locations within the city were built, but these needed considerable extension work to make the depth and scale of the city believable. Cinesite modelled thousands of pieces of geometry for the city buildings, and created hundreds of CG props to dress the sets.

John Carter VFX. Interior of city

Cinesite filled the city with warships and troops, before dressing it with hundreds of CG props

With 674 legs, the mobile city was technically challenging to animate: timed animation caches were used to ensure the digital legs moved in a random fashion. “Variations in movement and secondary animation such as cogs and cabling were used to create interest in the leg movement,” says Cinesite.
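Cinesite doesn’t detail how its timed animation caches were implemented, but the core idea – de-synchronising hundreds of cached leg cycles so they never move in lockstep – can be sketched in a few lines of Python. Every name below is hypothetical:

```python
import math
import random

def make_leg_offsets(num_legs, cycle_len, seed=42):
    """Assign each leg a random phase offset and slight speed variation,
    so the same cached cycle reads differently on every leg (sketch)."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, cycle_len), rng.uniform(0.9, 1.1))
            for _ in range(num_legs)]

def sample_leg(cycle, t, offset, speed):
    """Evaluate one leg's cached cycle at a retimed, phase-offset moment,
    with linear interpolation between cached samples."""
    cycle_len = len(cycle)
    u = (t * speed + offset) % cycle_len
    i = int(u)
    frac = u - i
    a, b = cycle[i], cycle[(i + 1) % cycle_len]
    return a + (b - a) * frac

# A single cached leg cycle: e.g. vertical foot height over 24 frames.
cycle = [math.sin(2 * math.pi * f / 24) for f in range(24)]
offsets = make_leg_offsets(674, 24.0)

# At any one frame, the 674 legs sample different points of the cycle.
poses = [sample_leg(cycle, 10.0, off, spd) for off, spd in offsets]
```

One cached cycle plus per-leg timing variation is far cheaper than simulating 674 legs individually, which is presumably the appeal of the cache-based approach the studio mentions.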

Zodanga City Model stats:

  • 291 structural element models
  • Up to 20,000 objects in a single shot
  • 1-2 billion polygons, dependent on camera position and detail required
  • 242 CG props created to populate the city

Zodanga City Legs stats:

  • 674 legs
  • 44 claws

John Carter

The texturing and detailing of the giant airships had to be spot on since they feature in many close-up shots

Ben Shepherd oversaw the huge aerial battle between Zodanga and Helium. His team created each side’s airships, which use solar wings to travel on light, as well as explosions, fire, people and set extensions.

The giant airships needed to be finely detailed for close-up shots. A challenge for look development was that they were required to be more like a 19th-century sailing ship than the type of spaceship a modern-day audience might expect.

For Sab’s flagship corsair, a partial set was created for the bridge/cockpit and one deck of a single ship. This was scanned and photographed for reference and recreated. The remaining areas were created as full CG models.

Dejah’s ship and the Helium flagship, the Xavarian, were also created in 3D. Each ship had a full set of wings, sized and laid out specifically for that vessel and controlled by pulleys and ratchet-type mechanisms to give a sailing look. Each wing was covered in hundreds of individual solar tiles, which could be controlled individually in animation.

John Carter movie

The entire Thern effect system was designed and built from scratch using a combination of Maya, Houdini and custom software developed in house

Simon Stanley-Clamp directed work on the Thern sanctuary, a huge underground cave in which self-illuminating blue branches form around Carter and Dejah as they walk through it.

The entire Thern effect system was designed and built from scratch using a combination of Maya, Houdini and custom software developed in house. Based on the principles of nanotechnology, the system provided a semi-automated way to ‘grow’ Thern into any environment and geometry. It took a full year of development time to evolve and bring to the big screen.
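Cinesite hasn’t published the internals of its Thern system, but the general idea it describes – semi-automated, stepwise growth of branching structures into a scene – can be illustrated with a toy sketch. This is purely hypothetical and bears no relation to the studio’s actual Maya/Houdini tools:

```python
import random

def grow_thern(steps, branch_chance=0.15, seed=7):
    """Toy branching growth, loosely inspired by the 'growing Thern'
    idea: active tips advance each step and occasionally split.
    (Illustrative sketch only; not Cinesite's system.)"""
    rng = random.Random(seed)
    tips = [(0.0, 0.0)]          # active growth tips as (x, y) points
    segments = []                # completed branch segments
    for _ in range(steps):
        new_tips = []
        for (x, y) in tips:
            # Advance the tip in a jittered 'upward' direction.
            nx = x + rng.uniform(-0.3, 0.3)
            ny = y + rng.uniform(0.5, 1.0)
            segments.append(((x, y), (nx, ny)))
            new_tips.append((nx, ny))
            if rng.random() < branch_chance:
                # Split: spawn a second tip to one side.
                new_tips.append((nx + rng.uniform(-0.5, 0.5), ny))
        tips = new_tips
    return segments

segments = grow_thern(12)
```

A production system would additionally constrain growth to the surrounding geometry and drive shading from the growth age, but the stepwise advance-and-branch loop is the heart of any such effect.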

These ‘growing Thern’ shots were some of the most complex VFX shots Cinesite undertook, and can be seen to great effect in 3D

In the sequence, as the tunnel itself ends, the main Thern Sanctuary room is seen to build itself, opening out within the Thern matrix of the pyramid interior. This shot required extensive Thern simulation and growing effects, blending multiple elements together in Nuke to build the shot up.

John Carter is in cinemas now. We’ve not seen the film yet, and reviews so far seem to be fairly mixed, so if you do go, let us know what you think of it via the comments box below, or on Facebook or Twitter

The making of John Carter

This article focuses on Cinesite’s contribution to the film, but the 3D work was split between three leading London FX houses: Cinesite, Double Negative and The Moving Picture Company.

Read the making-of John Carter article in issue 155 of 3D World magazine, where Renee Dunlop takes us behind the scenes of all three VFX facilities.

Issue 155 of 3D World goes on sale on 27th March



Mar 15 2012
 

Weta Digital embarked on a new quest with The Adventures of Tintin, complete with crashing waves, pirate battles and an extremely stylish wardrobe. Renee Dunlop takes us behind the scenes

As the Blu-ray of The Adventures of Tintin goes on sale, we thought we’d share this article from issue 152 of 3D World magazine.

If you haven’t watched the film already, we suggest you do – The Adventures of Tintin looks like a mix of live-action and CG, which adds up to something unique on screen. It’s possible the film missed out at the Oscars this year because of this very thing, which is a real shame, as we think it has some of the best CG we’ve ever seen.

Weta’s Adventures of Tintin

Weta Digital is delving into a new world – that of the journalist. Enter Tintin, a popular post-World War One comic strip hero who travels about with his dog, Snowy, cracking cases with a little help from his friends. Created in 1929 by the artist and writer best known as Hergé, the character has now been brought to 3D animated life on the big screen by the stellar artists of Weta, led by director Steven Spielberg and producer Peter Jackson.

It took some of Weta’s best to tackle the wide array of arduous effects required to complete the film. Keith Miller, one of five VFX supervisors, was among those appointed to the task. He was in charge of roughly 340 shots.

An epic sea battle required Weta Digital’s team to simulate stormy ocean waves

For Miller, the big challenge was the pirate battle. “It’s such a dynamic sequence,” he says. “There are nearly 60 pirates running about, two ships that are sailing in 60-metre seas complete with lightning storms, rain, hurricane winds, fire, explosions – you name it, it’s all there.” The most difficult challenge was the water, with 60-metre waves interacting with the ships that needed to compositionally match the representations provided by the pre-viz team.

Miller’s team approached the work from a few different angles. “First, we updated our FFT [fast Fourier transform] library, a system of generating waves using measurements collected in oceanic research,” says Miller. They also completely rewrote their library using a more up-to-date spectrum that could incorporate the depth of the ocean and the fetch – the distance over which the wind blows at a constant velocity. “We added those new variables into the system and we were able to generate much more realistic wave scenarios for the high wind systems,” he adds.
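Miller doesn’t name the spectrum Weta adopted, but the standard fetch-limited model built from exactly these variables is JONSWAP. A minimal Python version of the textbook formulation (omitting the finite-depth TMA correction a production system would add for shallow water) might look like this:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jonswap(omega, wind_speed, fetch, gamma=3.3):
    """JONSWAP wave spectrum S(omega) for a given wind speed (m/s) and
    fetch (m). Textbook form only -- Weta's exact spectrum is unpublished."""
    # Peak frequency and Phillips constant derived from wind and fetch.
    omega_p = 22.0 * (G**2 / (wind_speed * fetch)) ** (1.0 / 3.0)
    alpha = 0.076 * (wind_speed**2 / (fetch * G)) ** 0.22
    # Peak-enhancement exponent: narrower below the peak than above it.
    sigma = 0.07 if omega <= omega_p else 0.09
    r = math.exp(-((omega - omega_p) ** 2) / (2 * sigma**2 * omega_p**2))
    return (alpha * G**2 / omega**5
            * math.exp(-1.25 * (omega_p / omega) ** 4)
            * gamma**r)

# Longer fetch shifts energy to lower frequencies: bigger, slower swells.
omegas = [0.05 * i for i in range(1, 200)]
short = max(omegas, key=lambda w: jonswap(w, 15.0, 10_000.0))
long_ = max(omegas, key=lambda w: jonswap(w, 15.0, 500_000.0))
```

In an FFT ocean pipeline, a spectrum like this weights the random Fourier amplitudes before the inverse transform, so changing wind speed or fetch directly reshapes the resulting sea state.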

Weta’s FX team did quite a bit of work approximating the surface velocity from the newly generated ocean surfaces, applying it to Smoothed Particle Hydrodynamics (SPH) simulations, much of which drove the whitewater: breaking waves that rode on top of the ocean surface. These were pushed through Weta’s in-house 3D effects solution, Synapse, a node-based system that acts as a container for solvers. In some cases, Naiad data was also incorporated into Synapse for the initial bounded simulation elements.

The battle sequence combines water, fire, wind and lightning, and featured as many as 60 pirates in combat

In addition to reworking the FFT system, senior water TD Chris Horvath updated Weta’s shading model for raytraced water, using an improved model for participating media for underwater light extinction and scattering. He also made improvements to the procedural texture foam system.

Creating the hands

While Miller and his team battled with the pirate ships, Weta’s digital creature supervisor Simon Clutterbuck focused on some of the smallest of details through his modelling department. “We build the animation puppets, the deformation rigs, we do all the cloth and hair simulations, muscle dynamics, flesh dynamics – anything that has to do with the monster or character,” he says. “We interact with all the departments in the studio to produce stuff for them to use, like the puppets or the baked light, and we work closely with shots and animation.”

The creature department work includes providing all the puppets for the animators. “Our animation puppet isn’t the thing that gets cached and ends up in the shot,” says Clutterbuck. “The animation puppets are kind of an interactive, almost real-time version of the character. They don’t have to see amazing hand deformations to pose the hand correctly, so they’re just posing [and] animating this thing that’s much lower resolution.” Clutterbuck’s Creature Department provides the animators with approximations of clothes and low-res hands and bodies that allow for faster animation. “Then the animation data is cached off of that puppet and plugged into a high-resolution creature rig, which gets cached and given to lighting,” he says. “This way there’s no requirement for interactivity in our actual deformation models.”
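As a rough illustration of the two-stage setup Clutterbuck describes – a cheap interactive puppet whose cached curves later drive a heavy offline deformation rig – here is a toy Python sketch. The data layout and names are invented for illustration only:

```python
def cache_curves(puppet_anim, frames):
    """Stage 1: evaluate the cheap interactive puppet per frame and
    cache the resulting animation data (hypothetical layout)."""
    return {f: puppet_anim(f) for f in frames}

def deform_hires(cached, enrich):
    """Stage 2 (offline): drive the heavy deformation rig from the
    cache -- no interactivity needed, so it can be arbitrarily slow."""
    return {f: enrich(pose) for f, pose in cached.items()}

# Toy example: the puppet outputs a wrist angle; the hi-res rig adds
# the expensive extras (muscle/flesh terms) the animator never sees.
puppet = lambda f: {"wrist_deg": 2.0 * f}
hires = lambda pose: {**pose, "muscle_bulge": 0.01 * pose["wrist_deg"]}

cache = cache_curves(puppet, range(5))
final = deform_hires(cache, hires)
```

The design point is the decoupling: because the hi-res rig only ever consumes cached data, its cost never touches the animator’s interactive session.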

A single complex rig was used as the basis for all characters’ hands

It’s hardly all low-res work, though. “There’s a big focus on faces and hands in the show, so a good deal of time was focused on building a detailed hand rig,” says Clutterbuck. “We had all these incredibly close shots of Tintin’s hands. It’s a treasure hunt, so there are all these clues that lead to the treasure, and there are lots of shots where he’s inspecting things. The shots are incredibly long, so you’ll have minutes focused on their face or hands. The stability of the cloth solve, the fidelity of the hands [and] the deformation all had to be very high. It was pretty unforgiving.”

Weta Digital’s workflow uses a generic model called Gen Man as a baseline for building humanoid characters. This starting point is used for reference, scanning and motion capturing, tailoring clothes, and even cross-referencing MRI data. Clutterbuck explains: “We produced a whole bunch of life casts in all different poses that were used to build support moulds, 36 in all, that went into the MRI machine, so the character could put his hand into a similar pose and hold it there. Then we could derive the meshes of his joints from the MRIs.” The result was a series of high-resolution joint meshes of his actual skeleton in the selected poses.

The story requires characters to grip and manipulate objects

The story is a treasure hunt, so there are lots of shots where the characters have to pick things up and be able to manipulate them

“The metacarpals in the wrist do all these crazy rolling motions – it’s really complex,” Clutterbuck says. “We couldn’t build that complexity into the animation puppets because it would have been prohibitive to animate with, but we also needed the correct degrees of freedom in the wrist and joints to give us the right deformations of the hand.” It took nearly five months to get the hands working the way they wanted.

“The hand rig looks pretty amazing,” says Clutterbuck. “The hand model propagates out into the show, procedurally warped into new shapes, so we built one hand rig and it was fitted to all the characters’ hands. We have a process that was developed on Avatar to transfer the rig and deformation data onto other models.”

Weta Digital’s model supervisor Marco Revelant was responsible for all the assets created in the model department and was involved with grooming and developing the fur system from the user side for the dog, Snowy. However, it was the clothing that both Clutterbuck and Revelant found the most challenging. The multiple layers and the way the different fabrics fell and moved presented a daunting task.

Folding the clothes

Weta Digital set up a Tintin-specific costume department that helped define the design of the clothing, offering insight into how the fabric would drape and move over a character. “The problem is,” says Revelant, “when you do digital clothing and give it to a modeller, the modeller will try to put in features like wrinkles and folds, but won’t necessarily take into account the quantity of fabric.”

Care was taken with getting clothing folds to animate correctly

To manage this issue, the Creature Department worked closely with modelling, providing tools that helped drape the character as they were modelling so that they could see how the fabric was behaving, rather than waiting until the Creature Department ran their simulations. Weta used NCloth in Maya, but spent a huge amount of time up-front shooting parameters and getting the topology in the models and construction correct, especially in cross-sections such as sleeves.

There are eight principal characters, and several have multiple costumes. In all, there were 551 individual costumes to build for the film

Several characters had multiple layers of clothing, requiring layers of geometry to simulate friction. There were eight principal characters, and several – including Tintin, Captain Haddock and Sakharine – have multiple costumes. In all, there were 551 individual costumes to build for the film. “Take the Captain,” Clutterbuck says. “He had a big woollen jacket, a woollen jumper, trousers, and socks and shoes.” Again, proper reference was key. Weta filmed a man running on a treadmill wearing a tailored suit they provided, and gathered reference on how cloth breaks across the seams, collecting data on details such as the effects of double versus single stitching.

Weta first tried solving just the visible clothing, but found that it didn’t quite look correct. “We ended up going for full coupled solutions where everything was solved,” says Clutterbuck. “Tintin might enter with his trench coat on, then take it off and toss it onto the back of a chair, and continue the scene wearing the rest of his costume. We had to handle this level of complexity where we had all these variations of costume elements and they had to solve coupled. We hadn’t really done anything that complicated before in terms of clothes.”

Coupling affects even supporting characters such as Silk, who dresses in a formal jacket, a waistcoat and a shirt. “We didn’t solve the shirt, then put the waistcoat on, then the jacket,” says Clutterbuck. “We solved everything at the same time, so the solutions were all fully coupled. All the costume elements are plugged into one solver. Since they’re all plugged in, they all interact.”

Weta defers everything to its render wall. The costumes were assembled from a master file containing a costume description. During the baked simulation step, that file would assemble the costume, plug it into all the solvers, attach it to the character, then run the simulation. The result was a final sim, along with a series of generated files that Weta calls pre-files, which are pre-simulation. The individual costume assets are iterated in parallel as an ensemble of costume elements.

“There’s a big focus on faces and hands in the show, so a good deal of time was focused on building a detailed hand rig,” says Clutterbuck

The costumes took several minutes a frame to simulate, but there was no interactivity requirement because that’s all happening on the render wall and animation was working with real-time puppet versions. “So we have these two parts of every character, with the puppet which goes to animation and the creature deformation model that’s the thing the animation curves get plugged into that simulates on the wall,” says Clutterbuck.

Weta’s flexible pipeline paid off, according to Miller. “I know a lot of facilities tend to lock down their technology, branch it off and continue developing it outside of current shows, but that’s very different from how Weta works,” he says. “It’s got pros and cons for sure, but it’s one of those things that helps us to stay at the leading edge of technology. We’re constantly throwing in new technology and updating and developing new aspects, and trying to get it pushed into production all the way through the course of the show.”

Setting the scene

The entire Tintin project was done in-house at Weta Digital, including the artwork for the environment and character studies. The translation of the environments from 2D to 3D was left to Weta’s modelling department under modelling supervisor Marco Revelant’s guidance. An internal art department was assembled to research information about the time when the film takes place.

“Every element that was drawn in the book, we tried to find the respective real element from that period that could have been the inspiration for the Hergé drawing,” explains Revelant. “Everything was checked against real period data.”

“One important thing is [creator] Hergé was very careful in depicting a kind of reality that was around the 1940s,” explains Revelant. “Every element that was drawn in the book, we tried to find the respective real element from that period that could have been the inspiration for the Hergé drawing. Everything was checked against real period data.”

Creating the hair

Weta was working on Rise of the Planet of the Apes and Tintin at the same time. While the requirements for hair on Tintin weren’t anything near what they were for Apes, some of the aspects translated over. Tintin required wind effects, wet hair and a lot of development to get the hair to work coupled with the clothes.

Character hair in The Adventures of Tintin has to interact with objects and the environment

With the hat on, the Captain has a groom, styled so his hair doesn’t stick through the hat. When the hat is off, the hair is groomed appropriately. Sometimes the Captain put his hat on or took it off, so transitional shots with appropriate grooms were needed. The Captain’s hair ended up having a very dense particle set on the hair and collision objects with the hat, and his hair would spring up a bit during the transition.

Buy issue 152 of 3D World magazine to read the full article

Buy the Blu-ray of The Adventures of Tintin via Amazon



Mar 13 2012
 

Do you remember the original Transformers-style Citroën C4 spot? Five years ago it became a worldwide cult hit and we asked The Embassy’s CG team to reveal some of the ad’s technical secrets. Catch up as Transformers week continues…

To celebrate Digital-Tutors’ new Transformation training, we thought we’d make an event of it and post online all things Transformery!

We’ve already posted up two making-of Transformers articles:
The making of Transformers
The making of Transformers 2

We plan to post up a train-transforming walkthrough tutorial this week too, so remember to check back.

Here’s the Embassy’s making of the Citroën ‘Runner’ spot

ABOUT THE ORIGINAL AD

Created for the launch of Citroën’s C4 range, The Embassy Visual Effects’ original 2004 ad, ‘Alive with Technology’, opens with a hand-held camera shot of a car that transforms into a robot, performs an impromptu series of dance moves and then reverts back to vehicular form.

The Embassy Visual Effects’ original 2004 ad, ‘Alive with Technology’, opens with a hand-held camera shot of a car that transforms into a robot, performs an impromptu series of dance moves and then reverts back to vehicular form

As well as making other VFX teams green with envy, the spot proved to have unexpected longevity. A full two years on, it was regularly appearing on TV, picking up fresh awards, and inspiring numerous spoofs and tributes, including a memorable parody replacing the C4 with a rather less glamorous Citroën 2CV and a viral for Danish bacon. The Mill even got a shot at producing a follow-up, before The Embassy itself jumped back on board for a third in the series.

“It’s hard to say what it was about that original ad that hit people,” says studio president Winston Helgason. “Technically we did a good job, but something else struck a chord with them. While the ad has that geek factor, it’s just really fun to watch.”

Television audiences got their first taste of vehicular dancefloor magic back in 2004. A relative newcomer to the field of CG, Vancouver-based VFX studio The Embassy Visual Effects had already turned heads with its viral short film Tetra Vaal and some impressively photoreal ads for the likes of Nike.

But it was the Citroën ‘Alive with Technology’ ad that really put the studio on the map – and a spring in the step of CG‑based car ads. Fusing perfectly believable virtual visuals, directorial flair, and some seriously cool dance moves, The Embassy created what is now regarded as a genuine classic.

Watch the Citroën ‘Alive with Technology’ spot

Now the studio is back on board for the third spot in what is becoming an increasingly long-running campaign, and has been working hard to push the concept of a car that transforms into a robot to even greater heights.

In contrast to the original spot, for which director Neill Blomkamp utilised a virtual camera and 3D environment constructed from photographs, the new ad’s director, Trevor Cawood, chose to undertake a live shoot in South Africa – a location chosen principally for its favourable lighting conditions. A new transforming CG vehicle was then integrated into the plates with the help of elements rebuilt in 3D to aid the creation of shadows and reflections.

“The brief was pretty open,” says The Embassy president Winston Helgason. “The idea was to have the robot running, but other than that, it was simply ‘make it look cool’. The client did come back and ask if we could find something else for the robot to do, though, so we came up with the rail slide [which the bot performs along the restraining barrier by the side of the road].”

Here, the studio’s 3D and compositing staff reveal just how their cybernetic star was rigged and animated to perform such a stunt. They also explore some of the shading and lighting techniques used to generate the photorealistic renders of the modified car necessary to composite it seamlessly into the background plate.

Helgason reveals that the studio’s preferred tool for this kind of work is LightWave 3D’s own renderer, its raytracing proving particularly well suited to hard surface lighting. Dropping HDRI set data into the program and adding additional lights, the studio is able to get a scene fully lit in a matter of minutes. But ultimately, he says that the real secret of photorealism in the Citroën ads is simply attention to detail.

“The most important thing is to understand how lighting really works, and then learn to match the way it reacts to metallic surfaces,” he says. “That, and then adding loads of extra model detail is what makes the results so effective.”

Watch the Citroën ‘Runner’ spot




Mar 7 2012
 

Transformers Optimus Prime

They’re 30-foot-high shapeshifters. They’re made out of thousands of moving parts. And they’re out to beat the living cogs out of one another. Transformers might have posed something of a problem for an average VFX studio. Fortunately, ILM is anything but average…

As Digital-Tutors unveils its new Transformation training, we thought we’d make an event of it and have a Transformers week.

Over the week we plan to bring you a train-transforming walkthrough tutorial, a step-by-step tutorial by The Embassy on its famous Transformers-style advert for the Citroën C4, and two making-of Transformers articles. (This is the first one.)

All of these will be coming to a computer screen near you over the course of the week, so if you like the sound of all that, why not bookmark this page so you can revisit easily?

Elsewhere, you’ll find VFX breakdown videos as ILM reveal the VFX of Transformers: Dark of the Moon

Here’s the first making-of article, published in the September 2007 issue of 3D World:

See pages two and three of this post for ILM’s step-by-step guide on how rigs, matchmove and live-action footage were used to make the Transformers come alive, and to find out some jaw-dropping statistics on individual Transformers.

ROBOT WARS

When Industrial Light & Magic began working on Michael Bay’s Transformers, the VFX crew thought they would be modelling three or four hero robots that might do 14 transformations. One year later, the team had assembled 60,217 vehicle parts and over 12.5 million polygons into 14 awesome automatons that smash each other, flip cars in the air, crash into buildings and generally cause enough mayhem to make even the most jaded moviegoer feel like a 10-year-old again.

To add tyre treads, dirt, scratches, colour and other textures, painters applied 34,215 texture maps to the parts. Animators and character developers transformed the robots 48 times, moving digital headlights, bumpers, engines, tailpipes, doors, gaskets, bolts, tyres, and other pieces to and from CG jets, cars, helicopters, trucks and other vehicles.

Transformers

The animators and character technical directors crafted each transformation by hand, manipulating the machines by using 144,341 rigging nodes, and sent them into battle. If you haven’t already guessed, these aren’t the lovingly remembered TV cartoon robots. They’re 21st-century, giant, badass, big-screen fighting machines.

VFX supervisor Scott Farrar has a toy Optimus Prime on his desk at ILM, one of the original Autobots. It has 51 parts; he can hold it in his hand. The Optimus Prime that ILM created for Transformers has 10,108 parts and stands 28 feet tall.

Things have changed since Hasbro and Takara introduced the first Transformers toys in 1984. Since then, the robot aliens from the planet Cybertron that can disguise themselves by transforming into different types of vehicles have starred in comic books, video games, a television series, and an animated film.

But, until now, the huge robots have never fought their war on Earth in a live-action film.

The movie stars Shia LaBeouf and Megan Fox as Sam and Mikaela, the two kids the Autobots protect, and who become caught up in the action when Sam buys a secondhand Chevy Camaro, which turns out to be the Autobot Bumblebee in disguise. (In a nod to the original cartoons, Sam skips past a Volkswagen Beetle – Bumblebee’s original form – when picking out the car.)

All told, ILM created around 450 shots for the film, with Digital Domain supplying another 95 – including a transforming ‘Nokiabot’, a digital Mountain Dew machine and an Xbox – and contributing to the flashback and desert sequences.

Digital transformations

However, this is very much Industrial Light & Magic’s show. ILM built and transformed all the hero robots, and built CG cars and military vehicles for the transformations.

The Autobots include Ratchet, Jazz, Bumblebee – who appears as both a 1974 and 2007 Camaro – Ironhide, and Optimus Prime. The Decepticons are Bonecrusher, Starscream, Megatron, Brawl, Barricade, Blackout, Soundbyte and Scorponok. Although a real ‘puppet’ of Bumblebee appears standing fairly still in some close-ups, all the running, jumping, fighting and transforming robots are digital.

transformers_swimming

Before production began, pre-viz artists worked with Bay to develop the fighting sequences, and at ILM animation supervisor Scott Benza worked with the director to develop the robots’ characteristics. In part, this simply involved archive research. But Benza also asked Bay to pick reference characters from movies to help establish each robot’s personality, especially the Autobots.

For example, “[Bay] picked Michael J Fox in Back to the Future as Bumblebee,” Benza says, “and Liam Neeson as Optimus Prime, the leader, who is soft-spoken but has a big presence. One of the first things I was surprised to see is that Optimus Prime has a completely articulating face and a speaking role. In the cartoon series, he had a battle mask that he spoke through, but Michael felt it was important for this character to connect with Sam and with the audience. To care about him and have him deliver an emotional performance, we needed to see his entire face.”

To create facial expressions, the modelling team, led by Dave Fogler, created sliding pieces for the cheeks and jaws, multi-segmented parts for the lips, and a turbine system for the eyes that turned to simulate pupils dilating. Optimus Prime had around 200 facial parts, and the animators could move each one.

The Transformers’ physical performances were even harder to nail than their facial expressions. “We had to find a balance between selling the weight of these heavy characters and athleticism,” says Benza. “Michael never wanted to see these guys as lumbering robots. He wanted them to be agile, not limited by their weight. It was always a problem. In animation, you need to slow movement to get [the convincing impression of] weight.”

Athletic martial arts move

Working from Bay’s animatics, Benza motion-captured fight scenes and used those along with footage of a variety of stunts as reference to create some shots of robots fighting. The results were promising, but Bay wanted more of a martial arts feel.

“He wanted the action to be fast, so he shot reference of stunt guys doing the spins, kicks and the martial arts actions he wanted,” says Paul Kavanagh, one of five animation leads on the show. “We got that footage and knew exactly what he required. But the action Michael wanted was performed by a 160-pound martial arts guy, and we were animating 6,000-pound robots!”

transformers_police

To bring the animation back into the realms of plausibility, the animators added extra frames to the reference footage of the stunt actors to slow their movements. “We could animate one-to-one to the reference, then go into the animation curve and stretch it out until it looked right to get a little more weight,” says Kavanagh.
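The curve-stretching trick Kavanagh describes is easy to picture as code. A hedged sketch follows – real animation packages operate on rich keyframe curves with tangents, but the principle is just rescaling key times about a pivot while leaving values alone:

```python
def stretch_curve(keys, factor, pivot=0.0):
    """Stretch keyframe times about a pivot frame, as Kavanagh
    describes: animate 1:1 to the reference, then slow the motion
    until the weight reads. Only timing changes, not values."""
    return [(pivot + (t - pivot) * factor, v) for t, v in keys]

def sample(keys, t):
    """Linear evaluation of a keyframed curve at time t."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A fast 160-pound martial-arts kick, retimed for a 6,000-pound robot:
kick = [(0, 0.0), (6, 90.0), (12, 0.0)]     # (frame, joint angle)
heavy = stretch_curve(kick, 1.5)            # 50% slower
```

Stretching by 1.5 turns a 12-frame kick into an 18-frame one; the motion path is identical, but the slower timing sells the extra mass.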

On other occasions, the team kept the same frame rate but added an extra action in the middle of a shot.

The animators also discovered that the closer the robots were to the camera, the faster the movements they could get away with. When their entire bodies were visible, the robots had to slow down. For emphasis, the animators even moved the robots from real time to slow motion within a shot.

“Michael had an amazing eye,” says Kavanagh. “He was always pushing us to get more and more action into each shot; to be more and more creative. We animated things over and over again until he’d say, ‘That’s how I saw it.’ I can’t think of any shots he didn’t turn up to 11 on the cool factor scale. We were psyched. We knew we were doing something special.”

Fitting in

When the modellers were building the robots and the vehicles, they did so without regard to the transformation. The CG vehicles had to match real vehicles and the robots had to match the approved concept art. “There’s not a lot of logic in how the parts function in the robots,” says Fogler. “Trying to reconcile the robot artwork to the car was pretty much impossible because the robots were so abstract in their shapes. We left it to the animators to work out the transformations. It was a leap of faith, but it worked.”


As a result, the animators worked closely with TDs to do the transformations, each one hand-animated from scratch for the camera view. Kavanagh collaborated with character TD Keiji Yamaguchi on one of the first transformations, where Barricade changed from his robot form into a police cruiser.

“We had only the model of the car and the model of the robot,” says Kavanagh. “We didn’t have an in-between model. But it turned out not to be as complicated as we thought it would be.”

To make it possible for the animators and TDs to control creatures made from thousands of parts, ILM developed a dynamic rig. “We could select any piece of hi-res geometry or any group of pieces, create an animation controller, and choose where to put the pivots,” says Kelly.

When he animated Bonecrusher’s face smashing to pieces, he did so by selecting various parts and creating transformation controls. “We could connect parts from anywhere on the body and give them a unified pivot point,” he says. “We could move anything on these guys anywhere at any time. It was the most liberating experience.”
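The idea Kelly describes — select any group of parts, give them an ad-hoc controller, and choose where its pivot sits — can be sketched as a tiny transform grouping. These classes are hypothetical (ILM's rig lived inside its proprietary Zeno system); the sketch only shows the unified-pivot concept:

```python
# Minimal sketch of a "dynamic rig": arbitrarily selected parts are put
# under an on-the-fly controller with a chosen pivot, so one rotation
# drives the whole group. Hypothetical classes, not ILM's Zeno tools.
import math

class Part:
    def __init__(self, name, position):
        self.name = name
        self.position = list(position)  # world-space (x, y) for brevity

class Controller:
    """Ad-hoc animation controller over a selected group of parts."""
    def __init__(self, parts, pivot):
        self.parts = parts
        self.pivot = pivot  # the unified pivot point (x, y)

    def rotate(self, degrees):
        """Rotate every selected part about the shared pivot."""
        rad = math.radians(degrees)
        cos_a, sin_a = math.cos(rad), math.sin(rad)
        px, py = self.pivot
        for part in self.parts:
            x, y = part.position[0] - px, part.position[1] - py
            part.position = [px + x * cos_a - y * sin_a,
                             py + x * sin_a + y * cos_a]

# Select a couple of face plates and swing them open about one hinge.
plates = [Part("cheek_L", (1.0, 0.0)), Part("jaw", (2.0, 0.0))]
hinge = Controller(plates, pivot=(0.0, 0.0))
hinge.rotate(90)
# Both plates now lie along the y-axis, having pivoted as one unit.
```

Because controllers are created on demand over any selection, the same parts can belong to different groups from shot to shot — which is what made it possible to "move anything on these guys anywhere at any time".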

To create the transformations, the animators would start by animating the Transformers in one of their extreme forms: usually the robot, but sometimes the vehicle. Next, they’d fold the robot into the vehicle, doing whatever they needed to do to get it to fit. “Sometimes we had to break legs or shoulders and push arms through the chest to get the robot in the car,” says Kavanagh.

The last stage of the whole process was to animate the transformed robot standing up and moving toward the camera.

“Say you have Optimus Prime moving down the highway in his truck form and he needs to transform,” says Kelly. “We’d animate the truck as it slams on its brakes and starts skidding, and animate the robot all folded up in a position similar to the truck, and then animate the robot standing up and running away.” Once the animators received a sign-off on the animation, they then gave the pieces to Yamaguchi, or another creature TD.

Transformation super-ninja

All of the TDs’ work took place during the seconds when the robot is getting up out of the hunched position. “Keiji was the transformation super-ninja,” says Kelly. “The stuff he did was crazily complex and really intense. He used our timing and motion, but he’s the one who cut the robot into pieces and figured out how to get the pieces into the pose.”

Although he had animatics to start from, he always drew the transformation before he began animating. “The rhythm was very important for the transformations,” he says. “Also the silhouette. I wanted something very stylish, like Japanese animation or Hong Kong fighting. I also thought of gymnastics, when the gymnasts flip in the air and come down perfectly on the balance bar.”

To ease the process, Yamaguchi looked for familiar, recognisable parts in the designs – a window in Optimus Prime’s chest, for example – and moved them from the truck to the robot. He hid the small parts in the back. “I’d sketch each moving part and arrange them like orchestration for music,” he says. Once he had animated the main ‘performance’, he might use procedural animation for some of the hoses and other dangling parts, but this was a trick he used sparingly. “Simulation doesn’t have a rhythm,” he says. “This was an action movie. It needed to be strong and vigorous.”


All in all, Yamaguchi’s longest transformation was 300 frames; the fastest was of Megatron transforming into a jet. He also transformed Bonecrusher from an army truck, Starscream from a fighter jet in flight, Optimus Prime, Jazz and Blackout.

Kavanagh also animated transformations, including Bumblebee transforming from a ’74 Camaro, Blackout transforming from a helicopter, and the first Barricade transformation. “At first, the TDs had to put controllers on the geometry, but once we started using the dynamic rigging tools, the workflow got easier,” he says.

The animators also used the dynamic rigs to add secondary motion, to fix intersections which had occurred between the thousands of parts, and to move pieces that blocked the camera. “It was a key to getting this movie to work,” says Kavanagh.

During the past few years, like many other digital animation studios, ILM has been perfecting its character and creature tools; hard-surface models, on the other hand, have received much less attention. “The hard-surface shows were seen as being not as difficult as creature shows, with their issues of skin and hair, but Transformers proved to be extremely challenging,” says Russell Earl, associate visual effects supervisor. The pain was partly self-inflicted, however. “One of the early shots we finished was of the helicopter transforming into a robot. We saw that and said, ‘We have to do more of this.’ I don’t want to say we made it difficult for ourselves – but we did set the bar high early on.”

As a result, ILM had to solve problems ranging from rigging characters with thousands of parts, to lighting shots with thousands of reflecting surfaces, to managing the level of detail sufficiently to make rendering the shots feasible. “We’ve never done a hard-body show at this level before,” says Farrar.

Fact file

Lead VFX Studio: Industrial Light & Magic
Estimated budget: $150 million
Project duration: One year
Team size: Peaked at 350
Software used: Maya, Zeno, mental ray, RenderMan, Photoshop, Shake, in-house tools

With Transformers, Industrial Light & Magic pushed the limits of what can be achieved in computer animation. Find out how the team took the second film to another level in our second Transformers article: The making of Transformers: Revenge of the Fallen.

Now check out ILM’s step-by-step guide showing how rigs, match move and live-action footage were used to make the Transformers come alive.