With the 2020 Academy Award nominations now announced, IBC365 looks at who is leading the way for a gong in the VFX category.

The Lion King: Rendering VFX

The race for the VFX Oscar is probably a shoot-out between the groundbreaking virtual production of Disney’s live-action animation hybrid The Lion King and ILM’s extensive work in de-aging actors while minimising the impact on their on-set performances in The Irishman.

Avengers: Endgame spun an epic story with extensive VFX and came away as the highest-grossing film of all time, toppling Avatar this year with $2.8 billion worldwide. But Star Wars, another ILM gig, may attract Academy sentiment, since the franchise hasn’t won an Oscar in over three decades. The surprise package is 1917, an avowedly non-VFX-heavy treatment where the skill lay in planning and in blending post-produced shots invisibly into the background.

The Lion King

Entirely animated and featuring singing, talking animals, Disney’s feature is designed to look as if it were shot on location in Africa. Its hybrid DNA has confused the industry, with the Hollywood Foreign Press Association nominating it for Best Animated Feature at the Golden Globes, ignoring Disney’s marketing of the film as a drama. It’s also symbolic of the astonishing photoreal quality of CGI and the blurred boundary between shots made in-camera (nominally real) and those created in post (supposedly fake).

On The Lion King, Unity was the game engine of choice, into which CG landscapes (digitised from real Kenyan safari vistas) and pre-built character animation were loaded and rendered. Viewing this world in VR, the filmmakers did everything they would normally do, from location scouting to blocking scenes and shooting multiple takes for coverage.

Director Jon Favreau believed physical simulations of traditional film equipment like a Steadicam would help key crew feel as if the film were being photographed, rather than made with a computer.

“Even though the sensor is the size of a hockey puck, we built it onto a real dolly and a real dolly track,” Favreau explained. “We have a real dolly grip pushing it that is then interacting with [cinematographer Caleb Deschanel] who is working real wheels that encode that data and move the camera in virtual space. [When you do this] there are a lot of little idiosyncrasies that occur that you would never have the wherewithal to include in a digital shot.”
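In principle the data path here is simple: encoders on the dolly and the camera wheels stream tick counts every frame, and those deltas drive the virtual camera’s transform inside the engine. The Python sketch below is purely illustrative (the production’s actual pipeline ran inside Unity and is proprietary), and the tick resolutions and frame values are assumed numbers.

```python
from dataclasses import dataclass

# Assumed calibration constants, for illustration only;
# a real rig calibrates these per encoder.
TICKS_PER_METRE = 2000    # dolly track encoder resolution
TICKS_PER_DEGREE = 100    # pan/tilt wheel encoder resolution

@dataclass
class VirtualCamera:
    x: float = 0.0        # position along the track, metres
    pan: float = 0.0      # degrees
    tilt: float = 0.0     # degrees

def apply_encoder_frame(cam, dolly_ticks, pan_ticks, tilt_ticks):
    """Convert raw encoder deltas from the physical rig into
    virtual camera motion for the current frame."""
    cam.x += dolly_ticks / TICKS_PER_METRE
    cam.pan += pan_ticks / TICKS_PER_DEGREE
    cam.tilt += tilt_ticks / TICKS_PER_DEGREE

# One frame of human-driven input: the grip nudges the dolly while
# the operator pans; the wobble is recorded, not smoothed away.
cam = VirtualCamera()
apply_encoder_frame(cam, dolly_ticks=140, pan_ticks=-55, tilt_ticks=12)
print(cam)  # VirtualCamera(x=0.07, pan=-0.55, tilt=0.12)
```

The hands-on imperfections Favreau describes survive this conversion into data, which is what makes the virtual shot feel photographed rather than computed.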

Rob Legato, the film’s overall VFX supervisor, explained that the whole idea of virtual production is to let new multi-user workflows meet old-school cinematography and filmmaking.

“The closer the technology gets to imitating real life, the closer it comes to real life. It’s kind of magical when you see it go from one state to another and it just leaps off the screen.”

The Vancouver division of MPC, which helmed much of the VFX on this project as well as on Detective Pikachu and Sonic the Hedgehog, was closed by parent company Technicolor before Christmas, citing competitive pressures.

The Irishman

Much of the talk ahead of release was whether the de-aging of actors Robert De Niro, Joe Pesci and Al Pacino would stand up to the scrutiny of a dialogue-heavy drama. Far from getting in the way of the audience’s engagement, the VFX blended with the performances, justifying director Martin Scorsese’s decision to make the story his way with the backing of Netflix.

“This was a movie about conversations and we needed a new system because the actors wanted to be themselves,” ILM’s VFX supervisor Pablo Helman told Indiewire.

The Irishman’s achievement is to create 1,750 shots using unobtrusive, markerless performance capture under normal theatrical lighting. Critical elements of the technique included a three-headed camera rig designed by ILM with cinematographer Rodrigo Prieto and ARRI. The rig was made 30 inches wide so it would fit through a standard 32-inch US doorframe.

The Irishman VFX: De-aging actors Robert De Niro, Joe Pesci and Al Pacino

“We wanted to be sure Scorsese was not limited in any way in the design of shots,” Prieto explained to IBC365. “We failed the first time. After some trial and error, new motors and more lightweight materials we got there.”

ILM spent two years developing software called FLUX to read the infrared lighting and texture information captured on set by the two witness cameras. It then converted that data, alongside the primary image photographed on the RED Helium, into 3D geometry and retargeted each actor’s performance onto a version of their younger self.
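FLUX itself is proprietary and its internals haven’t been published, but the retargeting step described (solving for the expression on the captured face, then replaying it on a younger build of the same character) can be sketched with a standard blendshape solve. In the toy Python below, every array is a random stand-in for data that would really come from facial scans; only the structure of the solve is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 500, 20                            # toy vertex and blendshape counts

# Stand-ins for rigs that would really come from facial scans: a neutral
# mesh plus a basis of blendshape deltas for each age of the character.
old_neutral   = rng.normal(size=(N, 3))
young_neutral = old_neutral * 0.97        # pretend "younger" geometry
old_basis     = rng.normal(size=(K, N, 3))
young_basis   = old_basis * 1.1           # same expressions, younger shapes

def solve_weights(captured):
    """Least-squares fit of blendshape weights to the captured frame."""
    A = old_basis.reshape(K, -1).T        # (3N, K)
    b = (captured - old_neutral).ravel()  # (3N,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def retarget(captured):
    """Replay the solved weights on the younger rig's blendshapes."""
    return young_neutral + np.tensordot(solve_weights(captured),
                                        young_basis, axes=1)

# Simulate one captured frame: the old neutral plus a known expression.
frame = old_neutral + np.tensordot(rng.uniform(0, 1, K), old_basis, axes=1)
young_frame = retarget(frame)             # same performance, younger face
```

Because only the weights cross over, the performance itself is untouched; the younger geometry simply wears it.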

The characterisation of the actors in their relative youth wasn’t shortcut by working from any of the films in their extensive back catalogues (Goodfellas, Home Alone or Carlito’s Way). For example, Frank Sheeran has a wider face than De Niro had at 40; Pesci is depicted as thinner than he was at 53.

“It was really important that the characters looked as natural as possible, so we didn’t use any keyframe animation to change any of the performances,” Helman says. “What you end up with is the raw performance of the actors, and then a reinterpretation of that performance that is designed to look like a younger version of those characters.”

1917

MPC Film lands a second nomination this year for its work on Sam Mendes’ extraordinary war drama. Universal has embargoed any specific detail of MPC’s work until January 24, which is not unusual with the film only just out on cinema release, but with Oscar voting closing less than a week later it’s not easy to market this aspect of the production.

VFX supervisor Guillaume Rocheron led MPC Film’s work on the drama, which included digital environments, extensions and effects in collaboration with the movie’s special effects and editorial teams. One such collaboration is a scene depicting an aerial dogfight far in the distance which turns dramatically into close-quarters action as one of the planes crash-lands. The VFX team also had to blend shots filmed at Govan Docks in Glasgow and on Salisbury Plain so that everything appears to have been shot on the Western Front.

Avengers: Endgame

A mammoth hit, earning more than $2.8 billion at the box office, Disney’s MCU opus drew to a climax with a final battle featuring hundreds of heroes and villains. Weta built digital doubles of all the characters and all the creatures: essentially a kit they used to create the battle.

“There’s a shot when the two armies clash with each other at the start of the battle [which] is one of the longest shots we produced,” Matt Aitken, VFX supervisor at Weta explained to Post. “There are plate elements, but essentially it’s a CG shot that easily could have become muddled. Instead, our animation team produced something spectacular and very easy to watch.”

Of the 2,700 shots in the entire movie, 2,500 contain VFX, with overall VFX supervisor Dan DeLeeuw enlisting the skills of 13 facilities including ILM, Digital Domain, Framestore, Cinesite and DNEG.

The performance-captured animation of Thanos (Josh Brolin) and Hulk (Mark Ruffalo) also advanced on that of previous Marvel movies.

Scans of the actors’ faces formed the basis for underlying facial shapes onto which ILM, Digital Domain and Weta applied new software for more subtle interpretations of performance. ILM also used two head-mounted cameras positioned in front of Ruffalo’s face, enabling facial animation to be captured on set.


Star Wars: The Rise of Skywalker

ILM has been nominated for each film in this latest trilogy of franchise reboots, as well as for the spin-offs Rogue One and Solo, but has secured no wins. In fact, the saga hasn’t won any Academy gongs since Return of the Jedi landed a visual effects achievement award in 1983. Could this be its year?

The finale features the familiar but no less accomplished blend of special (practical) effects, creature effects and 1,900 VFX shots.

One of the challenges was how to include Princess Leia in the story, given that Carrie Fisher died in 2016. “It was a gigantic puzzle,” VFX supervisor Roger Guyett told Business Insider. “When you see Leia in ‘Episode IX,’ basically it’s a live-action element of her face with a completely digital character.”

Far from being a cut-and-paste of Fisher’s face onto a digital body, the team tracked her posture and body movements from The Force Awakens and had to work back from actual lines of dialogue they had in the archive, like ‘Never underestimate a droid’, which writer-director JJ Abrams then wove into the script for Skywalker.

The Battle of Exegol alone required more than 1,000 Star Destroyers and 16,000 Galaxy ships, while ILM’s creature department designed the largest range of practical characters for any Star Wars film, including Ewoks, and sister teams devised digital creations like the horse-like Orbak.

Near misses

Alita: Battle Angel

Having passed under the radar on release, Alita: Battle Angel is well worth a detour, as its depiction of a manga-infused universe is inventive and a lot of fun. Adapted by James Cameron and directed by Robert Rodriguez, Alita combines performance capture with live-action filming on set. The star asset is a female cyborg, reactivated after being junked, who spends most of the movie rediscovering her superhuman powers and her artificially intelligent identity. It’s a marvel of design from Weta Digital, which counts Alita as its first fully CG humanoid.

Performance capture was achieved by placing two cameras on actress Rosa Salazar in addition to her full-body and hand capture suit. The two-camera helmet rig provided depth information for input to Weta’s facial pipeline, as sketched below.
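The reason for two cameras rather than one is triangulation: the same facial feature lands at slightly different pixel positions in each lens, and that disparity maps directly to distance. Here is a minimal sketch of rectified-stereo depth, with the focal length, baseline and pixel positions all assumed, illustrative numbers:

```python
FOCAL_PX = 900.0    # focal length in pixels (assumed for illustration)
BASELINE_M = 0.06   # spacing between the two helmet lenses, metres (assumed)

def depth_from_disparity(x_left, x_right):
    """Classic rectified-stereo depth: Z = f * B / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must sit further left in the left image")
    return FOCAL_PX * BASELINE_M / disparity

# A tracked feature on the cheek seen at x=512.0 (left) and x=242.0 (right):
print(depth_from_disparity(512.0, 242.0))  # 0.2 metres from the rig
```

Repeated for every tracked feature, those per-point depths give the facial pipeline a moving 3D surface to solve against rather than flat video.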

Weta built a fully textured digital copy of the actress for re-targeting motion onto the character’s body with 8,000 different pieces of geometry.

DNEG and Framestore also clocked up work on this show. DNEG’s 250 shots included cyborg animation for the ‘Factory Gang’ in the crazy bar fight scene, as well as the cyborg, ‘Amok’, in the Ido flashback scene.

In addition, DNEG completed environment work for the climactic Motorball game, as well as all the Motorball skates within that same sequence.

The techniques employed on Alita have informed the production of Avatar 2-4, the long-awaited sequels designed by Cameron with Weta. He told the NZ Herald, “[In the upcoming Avatar films] there are several live-action characters that go through a lot of the CG scenes so it’s about integrating the live characters into the CG scenes. Kind of the opposite to Alita where we had a few CG characters mostly integrated into live-action settings.”

Gemini Man: Will Smith wearing facial recomposition gear

Gemini Man

In Gemini Man, Will Smith’s government agent confronts Will Smith, his younger clone. Smith’s Junior was ‘de-aged’ by 27 years by Weta Digital, but this required far more than just a facial recomposition.

For some scenes, Smith played Junior himself, with only his head and face requiring digital de-ageing using performance capture data. In scenes in which both ‘Smiths’ appear together, Junior was played by a stand-in actor; later, on a mocap stage, the scene was replicated with Smith wearing a full bodysuit.

“We’d record that with a dozen cameras to get the references from all angles from which I’d pick the best angle,” explains editor Tim Squyres. “For a long time, our rough cut would feature shots of our stand-in actor below a picture-in-picture of Will Smith’s motion-capture filmed at approximately the same angle.”

Weta’s animation was based on the morphology of ageing, including more realistic skin simulation, detailed eye work and the development of procedural software for skin pores, along with robust modelling.
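Weta hasn’t published that pore tool, so the sketch below is only a toy illustration of what ‘procedural’ means in this context: pores generated algorithmically from a handful of parameters rather than sculpted by hand. All of the values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def pore_displacement_map(size=256, n_pores=1500,
                          pore_radius_px=1.5, depth=1.0):
    """Scatter small Gaussian dents across a height map; the result
    can be applied as a displacement map over the base skin mesh."""
    height = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for cy, cx in rng.uniform(0, size, size=(n_pores, 2)):
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        height -= depth * np.exp(-d2 / (2 * pore_radius_px ** 2))
    return height

pores = pore_displacement_map()  # 256x256 grid of pore depressions
```

The appeal of the procedural route is control: radius, density and depth can be dialled per region of the face as the character ages.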
