New Pixar animation Elemental is the Walt Disney Company’s most technically complex feature film to date and required a new data storage pipeline that lays the foundation for the use of AI, reports Adrian Pennington.

“We are not actively using AI yet, but we have laid the foundation,” began Eric Bermender, Head of Data Center and IT Infrastructure at Pixar Animation Studios.

Eric Bermender, Pixar Animation Studios

“One thing we have done is taken our entire library of finished shots and takes for every single feature and short - everything we’ve ever done, from even before 1995’s Toy Story - and put it all online, available and sitting on the VAST cluster.”

He continued, “As you can imagine, all that data could be used in the future as training data. We’re talking not just final images but all the setup files used to generate those images as well. The library is valuable as training data, but the actual applications themselves don’t exist at the moment.”

The data-intensive animation technology used to make Elemental would not have been possible without deploying a data storage platform from VAST Data.

AI-Powered Volumetric Animation

“Traditional animation uses geometry and texture maps, with the geometry deformed by a skeletal system,” Bermender explained to IBC365. “For example, if you saw a shot of Buzz Lightyear walking, the geometry and texture maps would be the same from frame to frame, albeit deformed in some particular way.

“Those assets might be large but they don’t change from frame to frame, so we can cache them. However, volumetric characters don’t have that. Every single frame is a new simulation. We lost the ability to cache because everything is unique per frame, and the IOPS (input/output operations per second) went up significantly.”

In Elemental, directed by Peter Sohn, characters representing the four elements (air, fire, water and earth) live in proximity (though, of course, elements can’t mix…) in and around a society known as Element City. These characters don’t have traditional geometry and texture maps but are volumetric, or simulated.

“This means that every time the animation team iterates on a frame it creates a new simulation, and that meant our compute and storage capacity needs started to accelerate quickly. Instead of one geometry file and one set of character maps, now every single frame is a unique simulation of that character.”
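To make the caching point concrete, here is a minimal Python sketch - illustrative only, with hypothetical asset names, formats and frame counts, not Pixar’s pipeline code - of why a rigged character is read once per shot while a volumetric character triggers a fresh read every frame:

```python
# Illustrative sketch (not Pixar's pipeline): contrasting a cacheable
# rigged asset with per-frame volumetric simulation reads.
from functools import lru_cache

reads = {"static": 0, "volumetric": 0}

@lru_cache(maxsize=None)
def load_static_asset(path: str) -> str:
    # Geometry and texture maps are identical across frames (only the
    # skeletal deformation changes), so one read serves the whole shot.
    reads["static"] += 1
    return f"<geometry+textures from {path}>"

def load_volumetric_frame(character: str, frame: int) -> str:
    # A simulated character has no reusable geometry: every frame is a
    # unique simulation, so every frame is a fresh read.
    reads["volumetric"] += 1
    return f"<simulation {character}.{frame:04d}>"

for frame in range(1, 241):                    # a 10-second shot at 24fps
    load_static_asset("buzz_geometry.usd")     # cache hit after frame 1
    load_volumetric_frame("ember", frame)      # unique read every frame

print(reads)  # {'static': 1, 'volumetric': 240}
```

One read for the rigged asset versus one read per frame for the simulated character: that multiplication is exactly the IOPS growth Bermender describes.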

Pixar’s first experiment with volumetric animation was in creating the ethereal, ghost-like characters in the ‘Great Before’ of Soul (2020). This was also the first project on which Pixar worked with VAST.

“With Elemental the characters are much more animated [than in Soul] and every single character is a volumetric character. Even some of the background set pieces and buildings are volumetric animations. Soul was our practice run; Elemental is the full deal.”

Faster Storage for an AI Future

VAST uses all-flash storage as a replacement for the 20-to-30-year-old storage paradigm based on hard disk drives (HDDs), tape and data tiering. Its architecture allows Pixar to keep information in memory, available for rapid access.

Pixar Animation Studios’ Elemental

For context, Toy Story (1995) utilised just under 300 computers and Monsters, Inc. (2001) took nearly 700. In 2003, Finding Nemo used about 1,000.

With Elemental, the core render farm on Pixar’s California campus boasts more than 150,000 computers to render nearly 150,000 volumetric frames, with 10,000 points of articulation for each of the main characters, Wade and Ember. By contrast, typical Pixar character models have only about 4,000 points.

Elemental generated six times the data footprint and computational demand of Soul. By consolidating 7.3 petabytes of data onto a single datastore cluster, VAST provides real-time access that keeps Pixar’s render farm constantly busy.

“In the past, we would have to segment separate [Pixar film projects] onto separate network interfaces,” explained Bermender. “We did that because a show that’s in active production has historically generated the most IOPS and capacity growth as we render out.”

However, the new IT system allows shows that are in development to trial new methods of animation with an efficiency not previously possible.

“Maybe we are working on a new environment or a new character that’s never been done before, and we hit go for render and it overwhelms the cluster with IOPS. Now, with VAST, we can segment different projects with different paths to the storage and data resource, and it doesn’t slow the whole pipeline down.”
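What that segmentation might look like in principle: a minimal sketch - with a hypothetical path layout and show names, not VAST’s actual interface - in which each show writes to its own storage export, so one project’s render burst stays isolated from the rest:

```python
# Illustrative sketch only: segmenting shows onto separate storage
# paths so a render burst from one project cannot starve the others.
# The path layout and show names are hypothetical.
from pathlib import PurePosixPath

SHOW_EXPORTS = {
    "show_in_production": PurePosixPath("/vast/prod"),
    "show_in_development": PurePosixPath("/vast/dev"),
}

def storage_path_for(show: str, shot: str) -> PurePosixPath:
    # Each show writes through its own export, so its IOPS can be
    # accounted for (and throttled) independently of every other show.
    return SHOW_EXPORTS[show] / shot

print(storage_path_for("show_in_development", "seq010_shot020"))
# /vast/dev/seq010_shot020
```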

He revealed that during production somebody accidentally set the whole system to regenerate every single character and shot overnight.

“We didn’t notice until the next morning that the system had written out as much data as all of Toy Story 3 in a 12-hour period. The system itself was performant and able to do it. It was pretty amazing to me that we literally rendered out the entire footprint of a movie we only made in 2010 in just a few hours.”

Given this boost in rendering speed, you would think that the typically lengthy multi-year process of creating an animated feature could be reduced.

Bermender disagreed, saying that even as compute and storage tech advances, the animators will take advantage of that capacity to create more complex images.

“As we create the ability to iterate faster it frees the creative process for artists to create more complex scenes resulting in the same amount of time needed to render an image. Animators will work on a scene during the day and send a job to render overnight. That job has to be done by the next morning so by the time the animators come in they can begin work on dailies.”
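The overnight constraint reduces to back-of-the-envelope arithmetic. A small sketch, with entirely hypothetical numbers, of the check a render scheduler would make:

```python
# Back-of-the-envelope sketch of the overnight constraint Bermender
# describes: a job submitted at night must finish before dailies.
# All figures here are hypothetical.
def fits_overnight(frames: int, minutes_per_frame: float,
                   farm_slots: int, window_hours: float = 12.0) -> bool:
    # Total render minutes, spread across available farm slots,
    # must fit inside the overnight window.
    wall_clock_hours = frames * minutes_per_frame / farm_slots / 60
    return wall_clock_hours <= window_hours

# e.g. 2,000 frames at 90 farm-minutes each on 500 slots -> 6 hours: fits
print(fits_overnight(frames=2000, minutes_per_frame=90, farm_slots=500))
```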

He added, “Artificial Intelligence has the potential to enable more creative and complex images than perhaps we see now, but I don’t think it will actually reduce the time taken to render them.”

Greater Processing Capacity Adds Flexibility

The ability to deliver large volumes of data at render time will help Pixar as it prepares to leverage AI for future films.

For instance, RenderMan, the Pixar-developed rendering software that paints the final images, recently released its ML-based ‘Denoiser’ to the market.

“We’ve been using Denoiser for a long time. We take old shots and curate them, and RenderMan uses these curated copies of the images as training data so it knows how to smooth out noise during path tracing. To do that successfully, the denoiser has to be ‘aware’ of what is in the scene.”
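A conceptual sketch of what ‘scene-aware’ denoising typically means for ML denoisers in general - this is not RenderMan’s Denoiser API, and the model and buffer names are hypothetical: the noisy beauty pass is stacked with auxiliary render buffers such as albedo and normals, so the network can tell scene detail apart from path-tracing noise.

```python
# Conceptual sketch only -- not RenderMan's Denoiser API.
# ML denoisers of this kind are "scene aware" because, alongside the
# noisy beauty pass, they see auxiliary buffers from the same frame.
import numpy as np

def denoise(beauty: np.ndarray, albedo: np.ndarray,
            normals: np.ndarray, model) -> np.ndarray:
    # Stack the noisy image with feature buffers so the model can
    # distinguish scene detail (edges, texture) from sampling noise.
    features = np.concatenate([beauty, albedo, normals], axis=-1)
    # 'model' stands in for a network trained on curated past shots.
    return model(features)
```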

He said the type of AI image manipulation that solves a practical problem is more useful than the more generic type of image generator.

“It’s one thing to generate an image using something like Midjourney, quite another to do it for animation storytelling where you need to have control.”
