
A Look at the 85th Academy Awards for Best VFX Nominated Films

by
Gift


This article is a brief look at how the most outstanding effects in each film nominated for Best Visual Effects at the 85th Academy Awards were accomplished. It covers the making of Prometheus, The Avengers, Life of Pi, Snow White and the Huntsman, and The Hobbit: An Unexpected Journey.


Prometheus (2012)

VFX Team


Development of the Visual Effects

Planet LV-223

LV-223 is the planet where the Prometheus spaceship arrives. The artists created many concept images to previsualize how LV-223 would look. To create the planet's atmosphere, the team shot aerial footage in Iceland, which was then combined with cloud backgrounds created by the VFX team. After reviewing the concept images, the team found a landscape that fit their vision: Wadi Rum, in Jordan.

The team analyzed the area using satellite images from Google Earth and, together with digital elevation files, reconstructed the landscape in Maya. For the textures, the team photographed high-resolution 360° HDRIs of Wadi Rum's valleys, creating the digital environment. For ground-level shots of the planet, the team also went to the volcanic terrain of Hekla, in Iceland.

The Engineer

The Engineer is presented as a creature resembling an idealized human, who has come to Earth to seed it with life. Pulling off a photoreal digital humanoid this close to camera was a great challenge. The first task was to improve the sub-surface scattering (SSS) algorithms to produce believable close-up character work, so the team moved to a translucency quantized-diffusion (QD) model, which allowed them to reproduce skin's complex combination of scattered and surface properties accurately. The team also had to precisely match the movements of a physical actor, shot on plate in a silicone prosthetic suit, with those of the digital character, a task made harder by the fact that the whole film was shot in stereo. The team needed to tone down and augment how they would normally handle the muscles, but also add more life than the prosthetic alone would convey.
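The full quantized-diffusion model is mathematically heavy, but the core idea behind any SSS profile, that light entering the skin at one point exits at another, with the exiting fraction falling off with distance, can be sketched with a simple sum-of-Gaussians diffusion profile. The kernel weights and variances below are purely illustrative, not the film's actual values:

```python
import math

def gaussian(v, r):
    # 2D radially symmetric Gaussian with variance v, evaluated at radius r
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def skin_diffusion_profile(r, kernel):
    # Diffusion profile R(r): how much light entering the surface at one
    # point exits at distance r. QD-style models build such a profile
    # from many Gaussians; this three-term kernel is a toy stand-in.
    return sum(w * gaussian(v, r) for w, v in kernel)

# Hypothetical (weight, variance in mm^2) pairs, broad to narrow scattering
KERNEL = [(0.4, 0.05), (0.35, 0.6), (0.25, 3.0)]

near = skin_diffusion_profile(0.1, KERNEL)  # light exiting close to entry
far = skin_diffusion_profile(3.0, KERNEL)   # light that traveled deeper

# Scattered light falls off smoothly with distance, which is what gives
# close-up skin its soft, translucent look.
assert near > far > 0.0
```

In production the profile is evaluated per shading point and combined with the surface reflection, which is what the text means by skin's mix of scattered and surface properties.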

Source: Image taken from the original Blu-ray copy

The Orrery

The Orrery is an interactive 3D map of the known universe, represented with between 80 and 100 million polygons. The scene in which the Orrery appears was filmed on the James Bond 007 Stage, with the actor interacting only with a sphere, which was then digitally replaced to simulate one of the planets. A complete shot could take weeks to render, so the team adopted Nuke's deep compositing tools: instead of rendering flat frames, this toolset renders 3D data that is then combined in 3D space, reducing rendering time. The team spent approximately seven months producing the environments, effects, and characters before integrating them into the live 3D stereoscopic footage.
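As a rough illustration of why deep data helps: each deep pixel stores multiple samples with depth and alpha, so elements rendered separately can be merged correctly by depth at composite time instead of being re-rendered together whenever one changes. A minimal sketch, with hypothetical sample values:

```python
def composite_deep_pixel(samples):
    # Each sample is (depth, color, alpha) with premultiplied color.
    # Sorting by depth and compositing front to back reproduces the
    # correct ordering no matter which render each sample came from.
    color, alpha = 0.0, 0.0
    for depth, c, a in sorted(samples):
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return color, alpha

# Hypothetical samples for one pixel, from two separate renders:
background = [(10.0, 0.5, 1.0)]                 # opaque element, far away
hologram = [(2.0, 0.2, 0.4), (4.0, 0.1, 0.3)]   # translucent, nearer

color, alpha = composite_deep_pixel(background + hologram)
# color ≈ 0.47, alpha = 1.0: the translucent samples correctly sit
# in front of the opaque one without a fixed layer order.
```

This is only the merging idea; Nuke's actual deep toolset handles volumetric sample intervals and far more besides.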


VFX Breakdown Video

This video breaks down the film's most striking VFX shots, how they were made, and some of the compositing work.


The Avengers (2012)

VFX Team


Development of the Visual Effects

CG Set Design

Many of the scenes in this movie required CG set extensions, which Luma Pictures, Fuel VFX, and Trixter were in charge of. For the New York battle sequence, the team built a set in New Mexico surrounded by 40-foot green screens on each side, with some damaged cars, dust, and debris.

The team built the city around the set by photographing nearly 2,000 tiled spherical panoramas, creating a sort of high-resolution Google Street View. They also added CG dust, debris, smoke, and embers, and every window was replaced with a CG version with improved reflections so the environment would not appear too static.

The team built a library of digital assets, which they used to populate the New York streets after painting out all the people, cars, and objects to get clean textures.

The Hulk

To create this character, the team began with digital doubles of Mark Ruffalo, starting with a live scan of the actor, which was later modified in ZBrush to become The Hulk while still retaining the look of the original actor. This was achieved by studying Ruffalo down to the pore level: they made a cast of his teeth, shot the corners of his eyeballs, and photographed his gums, the spaces between his fingers, and every hair, mole, and scar; his actual skin was scanned and cast onto The Hulk.

One of the great difficulties was managing his skin color: after the team had established the initial skin look in different environments (interior, under a blue sky, and so on), his color changed so much from shot to shot that they had to work hard to maintain consistency. Both the face and body movement were a seamless integration of motion capture and keyframe animation. A great amount of work was needed to get the actor's performance to read correctly and the weight of the character's movement right.

The Helicarrier

The S.H.I.E.L.D. Helicarrier is certainly the largest visual effects asset of the film. The trick to creating such a believable vehicle was incorporating aspects and features present in reality, which is why its design was based on an aircraft carrier, giving it a foundation the audience would be familiar with. The addition of launch strips, blast doors, arrestor cables, moving vehicles, and a digital crew on deck sells the scale of the vehicle. All the models were created in a combination of Maya and 3ds Max.

Using Flowline, the team completed several types of water simulations, from a flat ocean to heavy bubbling and the dribbling water when the craft lifts from the sea, with a rainbow effect composited in Nuke. For the ship's interior screens, several empty glass panels were shot, with both 2D and 3D elements generated in After Effects and Cinema 4D later added by Cantina.


VFX Breakdown Video

This video breaks down the film's most striking VFX shots, how they were made, and some of the compositing work.


Life of Pi (2012)

VFX Team


Development of the Visual Effects

Richard Parker

Richard Parker is the Bengal tiger who accompanies the main character throughout his shipwreck. Rhythm & Hues was in charge of creating this realistic CG tiger. The team began by studying hundreds of hours of reference footage of real tigers, so that every shot had some reference to base it on. This avoided making the tiger react like a human, which would have gone against the whole point of the movie.

Almost 90% of Richard Parker is CG. Real tigers were used only for a few shots, including one in which the tiger swims in the ocean to eat a fish.

The team started with the skeleton, to control basic movements, and then added a muscle system. The face was one of the most important aspects of Richard Parker; its facial rigging combined a muscle-driven underlayer with blend shapes on top, which were partially created procedurally. Another key aspect is the skin and fur: the team used subsurface scattering, which softened the fur just a little so light penetrated deeper into the tiger and his coat. Rendering was mostly raytraced, and the team used as much HDRI as possible to make Richard Parker fully reflect his environment. All of this makes Richard Parker one of the most photorealistic CG animals ever created.

The Ocean


For more than 50 minutes of the movie, the character is alone in the ocean with only a Bengal tiger named Richard Parker, and, as VFX supervisor Bill Westenhofer said, the challenge for the team was to make the ocean as much of a character as possible. The ocean scenes were filmed in a 230-by-100-foot wave-generating tank in Taiwan that held 1.7 million gallons of water; the set was entirely surrounded by bluescreens and a painted grid to help with tracking.

The team worked with Scanline on the implementation of a new surface simulation method for Flowline called "Refined Sheet", which used animated geometry, created inside Maya with Tessendorf and Gerstner wave deformers, as a base layer for the simulation, then emitted a thin sheet of voxels over it to solve for water flow and interactions. This allowed them to increase the amount of detail on the water surface, finishing with the simulation of what they call "the elements": the spray on the crests of the waves that becomes mist when caught by the wind, and the bubbles that become foam as they rise to the surface. Getting those elements in balance proved very important to achieving the look and the detail required by those long shots, which simulated hundreds of millions of particles each.
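Gerstner waves, one of the deformer types mentioned above, displace surface points horizontally as well as vertically, so crests sharpen and troughs flatten in a way a plain sine wave cannot. A minimal 1D sketch, with illustrative parameters:

```python
import math

def gerstner_wave(x, t, amplitude, wavelength, speed, steepness):
    # Each surface point travels in a circle: vertical motion from the
    # cosine term, horizontal motion from the sine term. The steepness
    # factor (0..1) controls how much the crests pinch together.
    k = 2.0 * math.pi / wavelength                 # wavenumber
    phase = k * (x - speed * t)
    dx = steepness * amplitude * math.sin(phase)   # horizontal displacement
    dy = amplitude * math.cos(phase)               # vertical displacement
    return x - dx, dy

# Sample a strip of ocean surface at t = 0 (made-up parameter values)
surface = [gerstner_wave(x * 0.5, 0.0, amplitude=1.0,
                         wavelength=8.0, speed=2.0, steepness=0.6)
           for x in range(32)]
```

A deformer like this only shapes the base layer; the voxel sheet simulated on top is what produces the flow, spray, and foam detail the text describes.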

Stereo 3D

There are tons of moments in this movie that take your breath away with their 3D, putting all kinds of animals and objects right in front of you, but also letting you explore the depth of the ocean and other incredible environments. One of the challenges for the team was shooting live stereo on water: water is very reflective, and depending on your viewpoint you get different reflections. To solve this, automatic tools processed disparity maps, and a stereo quality-control studio manually checked every scene and fixed problems.
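A disparity map records how far each feature shifts horizontally between the left and right eyes, which maps directly to depth; mismatched reflections between the eyes corrupt that shift, which is why every map needed a manual check. The standard pinhole-stereo relation, with made-up camera numbers:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    # Classic stereo triangulation: depth = f * B / d.
    # A feature that shifts less between the eyes is farther away.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for points in front of the rig")
    return focal_length_px * baseline_m / disparity_px

# Illustrative values: 2000 px focal length, 6.5 cm interaxial distance
near = depth_from_disparity(100.0, 2000.0, 0.065)  # ~1.3 m away
far = depth_from_disparity(10.0, 2000.0, 0.065)    # ~13 m away
```

When a water reflection differs between the eyes, the measured disparity (and therefore the perceived depth) is wrong for that region, producing exactly the artifacts the QC pass had to catch.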

Another aspect that makes you feel you are really there is the skies. R&H used its custom tool Rampage, a 3D projection-mapping package that allowed the team to easily replace the skies with their own matte paintings in every shot; the team generated over 110 full HDRI skies in stereo 3D. With Rampage, the team could project custom matte paintings onto basic 3D geometry and review them in real time, generating lighting reference images for the other teams.


VFX Breakdown Video

This video breaks down the film's most striking VFX shots, how they were made, and some of the compositing work.


Snow White and the Huntsman (2012)

VFX Team


Development of the Visual Effects

The Mirror Man

One of the most remarkable effects of the film is the scene in which the Queen asks her famous question: "Who is the fairest of them all?" to a liquid and metallic Mirror Man that arises from her wall.

The idea for the man came when the director saw a sculpture by Kevin Francis Gray called Face-Off, and asked him to redesign it for the movie.

The team organized a one-day studio shoot to record many liquids in slow motion and explore how different kinds of materials react in the real world. The Mill used Maya nCloth to simulate the liquid form of the Mirror Man, with many custom forces controlling the cloth shape until it rises into the solid figure. On set, Charlize Theron performed in front of a reflective prop with a Red Epic camera inside to capture her performance. This was then used for the reflections on the Mirror Man, with UV normals and point positions in world space controlling the look; a physically accurate reflection would have made her almost invisible, so the team cheated it and developed a shader with many different blurred reflections, oxidation, and little scratches.

Shattering Soldiers

Pixomondo was responsible for bringing to life the amazing battles in the movie between the Dark Army and the knights. The team used 3ds Max, V-Ray, and Thinking Particles to achieve the look of the Dark Army, whose soldiers shatter into black shards when struck by a knight. Thinking Particles allowed the team to set up a volume-break system that matched the movement of the sword and the angle and velocity of its impact on the surface. The soldiers would dynamically break while the knight continued to animate, with the fragmentation running all the way through the body for a more fluid effect; over 130,000 fragments contributed to the dynamic simulation of a single scene.

The team used Massive along with hand-targeted animation for the overall army of over 1,500 soldiers. For the surrounding knights, the team used a script that randomly selected animation clips from a source of about 1,000 frames and then placed them in the field at a random scale and rotation to get a unique effect.

Creatures

Rhythm & Hues was in charge of creating many of the creatures and plant life for the film, especially those that populate the woods: a mystical stag, a fox, a badger, rabbits, hedgehogs, birds, snakes, mushrooms, fluorescent scarabs, fairies, and others, including the film's signature troll sequence, which used motion capture for the troll's body motion and hand animation for its face. The rest of its body is made up of rocks and roots acting like muscles, with a coarse skin texture.

On set, production filmed with eyeline poles and air cannons firing water splashes to give the actress a cue for how to react. Double Negative, meanwhile, created the creatures of the dark forest: twisted trees and branches, oozing muscles, a bat creature, and the dark fairies conjured by the queen, made up of about 30,000 obsidian-like shards. They used Houdini and an FX toolset for the animation that let the artists choreograph the movement and allowed the shards to move and detach in specific ways.


VFX Breakdown Video

This video breaks down the film's most striking VFX shots, how they were made, and some of the compositing work.


The Hobbit: An Unexpected Journey (2012)

VFX Team


Development of the Visual Effects

Shooting at 3D HFR

One of the most outstanding aspects of this movie is that Peter Jackson decided to shoot it in the High Frame Rate (HFR) format. HFR productions record and play visuals at 48 fps, twice the standard rate of 24 fps. The movie was filmed with 48 Red Epic cameras at 48 fps in 5K resolution. To capture 3D images, the team hired 3ality to build a rig in which one camera shoots directly and the other is pointed at a mirror; this was necessary because the two cameras need to be mounted about as far apart as your eyes, which is not possible given the size of the Red Epic bodies side by side. In addition, the team had to push the color of everything, because the Epic cameras tend to desaturate the image.
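The arithmetic behind HFR is simple but has real pipeline cost: twice the frames per eye to record, store, and render, and stereo doubles that again. A quick sketch (the one-minute duration is just an example):

```python
def frames_required(duration_s, fps):
    # Total frames a camera records over a given duration.
    return int(round(duration_s * fps))

one_minute_hfr = frames_required(60, 48)   # 2880 frames per eye
one_minute_std = frames_required(60, 24)   # 1440 frames per eye

# HFR doubles the per-eye frame count...
per_eye_ratio = one_minute_hfr / one_minute_std  # 2.0

# ...and stereo doubles it again: two eyes at 48 fps means 4x the
# frames of a conventional mono 24 fps shoot.
stereo_hfr = 2 * one_minute_hfr            # 5760 frames per minute
```

Every VFX element, like the digitally added planets, creatures, and set extensions, had to be rendered at this multiplied frame count, in both eyes.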

Creation of the Trolls

When Bilbo and the Dwarves walk into Trollshaw Forest, they stumble upon three hungry trolls. This scene was brought to life by Weta Digital. Before any shooting started, the team previsualized the scene to work out some beats and help the filming. The creatures were modeled and skinned with the Tissue system, which allowed the simulation of muscles, fat, and skin, while the animators captured the performances of actors in motion capture suits, later retargeted to the four-meter Trolls.

At HFR, they had to pay much more attention to details like lip-sync and blinks. Meanwhile, Martin Freeman and the Dwarf actors filmed their parts on a forest set, with spheres on poles providing eye-lines for the Trolls, which were digitally added to the scene later. Finally, the compositors combined the CG Trolls, the live-action actors, the forest set, digital fire, and interactive lighting, among other effects, to complete the sequence.

Gollum 2.0


In this movie we get to see Gollum again, ten years after The Lord of the Rings, with Andy Serkis reprising his role as the tortured creature. It was a challenge for him to perform a slightly younger Gollum, because he practically had to unlearn everything that had happened in LOTR.

Due to the HFR and stereo nature of the film, far more detail was needed, so the character was completely remodeled, now with around ten times more polygons and three times more facial shapes than before. The artists had to ensure that the character kept the original appeal and feel we saw ten years ago while upgrading his look and the way he moved; they paid close attention to his facial puppet controls, following the Facial Action Coding System (FACS). A significant shift was Serkis's ability to perform on location in a motion capture suit that also incorporated a facial capture camera.


VFX Breakdown Video

This video breaks down the film's most striking VFX shots, how they were made, and some of the compositing work.


Which Do You Think Will Win?


Video Sources for all the videos can be downloaded here.
