‘If you want a painterly bush, paint the bush’ - befores & afters


How the painterly style of ‘The Wild Robot’ was realized. An excerpt from issue #25 of befores & afters magazine.

For DreamWorks Animation’s The Wild Robot, based on the book by Peter Brown, writer/director Chris Sanders sought to bring a painterly aspect to the 3D animated film. The studio capitalized on stylized workflows it had developed previously on Puss in Boots: The Last Wish and The Bad Guys to take things even further for The Wild Robot, which follows the service robot Roz (Lupita Nyong’o) who is shipwrecked on a wildlife-filled island, and who eventually becomes the adoptive mother of an orphaned goose, Brightbill (Kit Connor).

In this excerpt, visual effects supervisor Jeff Budsberg gets into the weeds with befores & afters about exactly how the painterly aspects of the film were made, the tools developed at DreamWorks Animation, and, importantly, what this process added to the storytelling.

b&a: Jeff, you really elevated things again with the stylization here, but it’s actually even a different approach to what Puss in Boots: The Last Wish and The Bad Guys did. What was your brief?

Jeff Budsberg: I had come off of The Bad Guys as Head of Look and I was talking to Chris Sanders and Jeff Hermann and, right off the bat, Chris was interested in this space. Computer graphics has been in this pursuit of realism, which has been amazing and there’s been so much innovation. But there’s something that we’ve lost along the way, specifically in animation. If you go back to those 40s and 50s animations like Snow White, Bambi, Sleeping Beauty, there’s something endearing about feeling the artist’s hand. I was just watching 101 Dalmatians with my kids this weekend and just feeling the stroke of the drawn line, the imperfections there. There’s something that’s magical or just endearing about that. You feel the craft in there. And it’s not just that. It’s also being deliberate about where there is information and where you’re guiding the eye. There’s something about that that we really were interested in.

issue #25 – animation conversations

I think the other part, when I talk to directors and producers about stylized CG films, is that you don't just want to 'put' Spider-Verse or The Bad Guys onto another film. It doesn't work. It doesn't make sense. You have to find the style that makes sense for that film, right? And that's one thing that Chris and I talked about a lot at the beginning of the movie: we wanted to deposit Roz on this island, like a fish out of water. She doesn't belong there. On the surface, obviously, she doesn't belong there, but we also didn't want her to belong there aesthetically. She's this precise, futuristic machine in a very loose, painterly, deconstructed world, and immediately there's a juxtaposition there that is a conflict. You don't just know she shouldn't belong there, you feel that she doesn't belong there. And through the course of the film, she gets beat up and dirty and banged up and all that. So there's a progression of her wear, but we wanted there to be a very subtle progression of her aesthetic as well.

And that’s what was really exciting because it comes back to serving the story, in that she slowly starts to make an impact on the island with this relentless pursuit of kindness trying to help everyone. And, at the end of the day, the island is actually impacting her as well. So her aesthetic is changing sequence by sequence ever so slightly. And if we did it successfully, you’re not paying attention to that. But then at the end of the movie, when Roz comes into contact with Vontra and the other robots, you’re like, ‘Holy crap, she now fits in the world and they do not.’ Now it’s jarring for them, because they don’t belong there.

That's what was really exciting, because we were able to weave in this aesthetic that supported the storytelling in a really novel way. We evolve other things as well. Roz's locomotion changes from something that is more rigid or efficient, robotic-y, for lack of better nomenclature, into something that's more fluid or animalistic, like S-curves with her arms. She does less peacocking; she uses less of her futuristic tools as she moves through the film. She's doing all these crazy light shows and using all these things at the beginning of the movie. But then as she progresses, she's more restrained, a little bit more subtle in her mannerisms. So there's all these subtle cues that the audience might not pick up on. Each one is almost imperceptible, but through the amalgamation of them, you feel her progression through the film. I think that's what was really rewarding aesthetically: you could use the style to drive the story.

b&a: What were the tools that DreamWorks already had in the bag to do this, but then where did you take things further? Doodle is a tool, for example, that enables you to do a lot of 2D animated elements as part of the 3D. Where do you take it here?

Jeff Budsberg: We were using Doodle to some extent in visual effects. We had this idea, coming out of The Bad Guys and Puss in Boots: The Last Wish, that you wouldn't ever make a realistic bush and then filter it to make it look painterly. It doesn't make sense, it's not efficient and it doesn't give you a very pleasing result. So if you want a painterly bush, paint the bush, right? The takeaway there is to find the best place to solve the problem, which meant every department along the way had to adjust their workflows to make the final image. So, on the film, modeling is not meticulously constructing a network of branches and leaves; they are drawing the bush in 3D with Doodle. It lets them think about shape language in strokes, splatters, and brushstrokes. The leaves don't need to connect; they could be splatters of flowers and color.

Similarly for look dev and surfacing, think about how an artist would paint volumetric shading. What are the non-physical shaders you need so that you're adding high-frequency texture in the key light, but removing a lot of that superfluous information on the shadow side? Take bark on a tree, for example: we would actually paint the lit side of the tree with a different texture than the shadow side of the tree. How do you build that into the renderer so you can swap textures and detail on the fly based on the lighting conditions? That's what we had to solve.
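The idea of swapping textures based on lighting conditions can be sketched very roughly like this. This is a toy illustration, not DreamWorks' renderer code; the function name, parameters, and the scalar "textures" are all invented for the sketch (real shaders would do full texture lookups):

```python
def shade_bark(n_dot_l, key_texture, shadow_texture, softness=0.25):
    """Blend between a detailed 'lit side' texture and a simplified
    'shadow side' texture based on how much the surface faces the key light.

    n_dot_l: dot product of surface normal and key-light direction (-1..1).
    key_texture / shadow_texture: sampled detail values (plain floats here
    for simplicity; in a real shader these would be texture lookups).
    """
    # Ease across the terminator so the texture swap has no hard seam.
    t = min(max((n_dot_l + softness) / (2.0 * softness), 0.0), 1.0)
    t = t * t * (3.0 - 2.0 * t)  # smoothstep
    return shadow_texture + (key_texture - shadow_texture) * t

# Fully lit: the high-frequency bark detail dominates.
lit = shade_bark(1.0, key_texture=0.9, shadow_texture=0.3)
# Fully in shadow: the simplified, low-detail texture dominates.
dark = shade_bark(-1.0, key_texture=0.9, shadow_texture=0.3)
```

The `softness` band hides the swap inside the terminator region, which is where a hard switch between two textures would otherwise read as a seam.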

Then there's feathers and fur. You want the richness and sophistication of real fur and feathers because you want to be able to see the fur moving in a sophisticated way. You want to be able to feel characters running their hands through fur, fluffing their feathers, or having wind in there. You want to have these micro details that are amazing with fur and feathers. But we don't want to see every fiber of the hair. That's not how one would paint it. So how do you reveal detail very surgically in specific areas? And how do you do that in a way that might not correspond to the geometry at all? Perhaps you want to add splatter or spongy texture in the key light of the fur, and you're like, 'It doesn't even make sense.' So your geometry might be fur, but we're using brushstrokes to inherit the render properties that drive the derivatives, the normals, the opacity. So, we use all sorts of different geometry to manipulate the light response driving non-physical shaders. Similar to a painter's approach, our shaders start with a really rough underlayer of loose detail, and then you add textural details on top of that as different accents, and those are only revealed through specific lights.

b&a: I was talking to Chris Sanders about how actually doing it that way, it’s more stylized looking, but in some ways, it’s more believable.

Jeff Budsberg: I think it comes back to what I mentioned before: a handcrafted quality gives the audience a way to enter the film and fill it in with their imagination. It gives you a more immersive, imaginative experience compared to something where every leaf was detailed out, or every piece of grass. It's almost like your brain takes a step back. You're like, 'Oh, this is so much visual information, I need to take a step back.' But I think that's why we all go to the gallery to view paintings; you get to feel and experience a world through the lens of the artist. You get to see the vision through their eyes and experience it with them. And I think it brings you something that's a little bit more of a visceral experience. And maybe that's what Chris is getting at when he says it feels more real, because I think it feels more inviting, a little bit more endearing. It feels handcrafted, it feels well-thought-out.

b&a: How were you dressing the set, especially the island, on this film?

Jeff Budsberg: We used a tool called Sprinkles, which is an art-directed set dressing tool. I think one of the key developments for dressing the world is this integration between set dressing and modeling, being able to design bespoke plants on the fly and draw them. We're living in this interesting space between 2D and 3D. So, you could be in 3D dressing plants, but then in 2D, just drawing the plants and placing them, and you're living in this world where you bridge between illustrator and 3D artist. With Doodle, you're building these animation rigs on the fly as well. So all these plants can be articulated; they can all interact with the character and blow in the wind.

b&a: I think that's what maybe people who are not familiar with the 3D process probably don't realize. They're like, 'Well, if it's so 2D, why not just draw it?'

Jeff Budsberg: Exactly.

b&a: But the characters brush past it, the camera moves.

Jeff Budsberg: You step on the plants, you brush past them. This is one thing we talked about a lot is that we can give the audience something that you cannot do in 2D. You can do those dynamic camera moves. You can move through the space. You can do wind or deformation of the plants. You can do art directed depth of field, like rack focuses. There are things that you can do in the 3D world that you cannot do in the 2D world. And I think that allows you to move through the space in a way that is immersive and really inviting. But on the 2D side, there’s something endearing about the handcrafted-ness. So, if we can live in this space between the two, I think that gives you something novel and really exciting. And I think that’s what people grab onto like, ‘It looks like a painting, but it’s moving. What is going on? This is crazy.’

b&a: When you mentioned a moving painting there, that is how I felt sometimes watching the film.

Jeff Budsberg: Yes. The other key development we made was adding brushed partial transparency everywhere. The other very typical thing of a CG render is that every edge is very hard and crisp. In a ray tracer, using transparency is very expensive, or at least onerous, because we would want to do some processing in compositing, but a lot of compositing operations require AOVs like depth information, positional information, normals. The problem with traditional renders is you only have one sample of the position at that pixel. But what if you have transparency? There are actually multiple objects at that pixel. You might rely on deep images, but deep images have significant problems: they're expensive to render and they make your compositing very slow. You also lose pixel filtering and a lot of other features that you have in your traditional AOVs.

So, we actually created an extension to the Cryptomatte data format: a layered approach to your data channels. Every pixel in a Cryptomatte has an ID of what asset it is and the coverage information of that pixel, and by storing multiple layers, you have all of the assets at that pixel. Well, we decided, why don't you do that for position? Why don't you do that for the normal? Why don't you do that for other data channels? What that allows us to do is use the really novel smart filtering operations from The Bad Guys or Puss in Boots on The Wild Robot with transparent assets.
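The layered-data idea can be illustrated with a small sketch. This is only a conceptual model, assuming nothing about DreamWorks' actual file format; the class and field names are invented. The point is that a pixel stores a ranked list of (ID, coverage, data) layers rather than a single sample, so compositing can look up the position or normal belonging to a specific asset even behind transparency:

```python
from dataclasses import dataclass, field

@dataclass
class LayerSample:
    """One transparent layer contributing to a pixel."""
    asset_id: int     # Cryptomatte-style object ID
    coverage: float   # fraction of the pixel this layer covers
    position: tuple   # world-space position sample for that layer
    normal: tuple     # normal sample for that layer

@dataclass
class LayeredPixel:
    """A pixel holding several ranked layers instead of one sample, so
    filters can fetch the right position/normal per asset at this pixel."""
    layers: list = field(default_factory=list)

    def add(self, sample):
        self.layers.append(sample)
        # Keep layers sorted by coverage, most-visible first,
        # mirroring Cryptomatte's rank-by-coverage convention.
        self.layers.sort(key=lambda s: s.coverage, reverse=True)

    def position_for(self, asset_id):
        """Look up the position sample for a given asset at this pixel."""
        for s in self.layers:
            if s.asset_id == asset_id:
                return s.position
        return None

px = LayeredPixel()
px.add(LayerSample(asset_id=7, coverage=0.6, position=(1.0, 2.0, 3.0), normal=(0, 1, 0)))
px.add(LayerSample(asset_id=3, coverage=0.4, position=(4.0, 5.0, 6.0), normal=(0, 0, 1)))
```

Compared to full deep images, a small fixed number of ranked layers keeps the data cheap enough to filter interactively in compositing.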

We were able to use really sophisticated filtering operations, but with layered transparency, which would be normally very difficult to do. We weren’t really using a whole lot of transparent assets on Bad Guys and Puss in Boots. Having feather transparency and broken up edges really added to that believability that it felt like a brush had been applied and that texture was running over the page.

b&a: What other things were you doing to make it feel more painterly?

Jeff Budsberg: We had a scene sprites tool, where the lighters could decide that they wanted to perturb the image even more. Everyone's reaching across disciplines in novel ways. The modelers are thinking about drawing plants. Surfacers are set dressing, they could also draw the plants, and they're thinking about lighting. Character effects, they usually do the groom, but the grooms influence the final aesthetics. Everyone's thinking about the final image. The lighters are authoring new assets with these sprites that add textural detail into the scene. But then they could use those in compositing to manipulate the image in a way that's still coherent spatially and temporally. And so they could add splatters of paint in the environment to help break up edges. We call that badger brushing, where you smear edges almost with a bristle brush.

They could take those assets which are already pretty stylized, but they can push it even further and be like, ‘Okay, well this shot needs some sort of painterly depth. I’m going to layer in a couple of different textures of these scene sprites in the scene.’ They exist in 3D space so that the camera can move through the space and the characters could interact with the space and we could use those deep Cryptomatte-based filtering operations to push and smear the frame around like a painter would by painting wet-on-wet.

b&a: There’s a lot of birds in the film, what did you have to do in terms of feather development or even crowds here?

Jeff Budsberg: There was a lot of novel feather development in the rig. We actually developed new approaches for the scapular feathers in the back. There's a really fascinating thing that happens when birds fold their wings, where the scapulars, you can think of them almost like a cape, fold over the wings and you're like, 'I don't even know where the wing went.' It folds in a really novel way, and we wanted to build that into the rigs. Same thing with the pocket: trying to put the wings in the pocket, which is really fascinating, how they nuzzle them in and the wing disappears.

In terms of other novel developments in the rigging for birds, there’s this flap of skin called the propatagium. We were trying to simulate how that deforms and stretches as the birds are extending their wings. Those are some of just a handful of new developments for the bird rigs. And then for crowds, obviously it’s scaling that to thousands and thousands of birds. So, you have your fully fleshed out rig, then you have multiple simplified rigs along the way.
But that wasn't necessarily the largest challenge. The largest challenge is, 'How do you simplify the geometry but still make it feel painterly?' You want to start to remove detail the same way you would in a painting: the birds that are further away almost look like blobs of paint, right? So you're trying to remove a lot of that high-frequency information. It's the same thing where we're using a lot of those tools I talked about, those scribbles and smears, and processing the geometry in a way that's temporally stable and also respects how far these things are from camera.
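The idea of distant birds melting into blobs of paint can be sketched as a distance-driven detail factor feeding a silhouette smoother. This is a minimal toy, not the studio's crowd pipeline; the function names and the `near`/`far` thresholds are invented for illustration:

```python
def painterly_detail(distance, near=10.0, far=200.0):
    """Return a 0..1 detail factor: 1 = fully resolved bird,
    0 = a blob of paint. near/far are made-up camera-distance thresholds."""
    t = (distance - near) / (far - near)
    return 1.0 - min(max(t, 0.0), 1.0)

def smooth_silhouette(points, detail):
    """Low-pass filter a closed 2D silhouette: the less detail we keep,
    the more each point is pulled toward its neighbours' average,
    melting the shape toward a blob."""
    blend = 1.0 - detail  # how strongly to smooth
    out = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        px, py = points[(i - 1) % n]
        nx, ny = points[(i + 1) % n]
        ax, ay = (px + nx) / 2.0, (py + ny) / 2.0  # neighbour average
        out.append((x + (ax - x) * blend, y + (ay - y) * blend))
    return out

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
nearby = smooth_silhouette(square, painterly_detail(10.0))   # unchanged
faraway = smooth_silhouette(square, painterly_detail(200.0)) # collapses
```

A production version would also have to be temporally stable, as the interview notes, so the smoothing would be applied in a way that does not flicker frame to frame.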

That required a lot of investigation: how do we achieve what we love about the Studio Ghibli films and Miyazaki's world, while living in this world of 2D and 3D? You wouldn't just use a 2D tool and paint everything, because that doesn't allow you the sophistication of a 3D world. So, Doodle allows us to take the 2D stroke data and project it into 3D space, allowing you to spawn new simulations off of those. You can draw an explosion, which could be an emitter into a smoke simulation. And then that smoke simulation could affect brushstrokes through it, and then you could smear the brushstrokes in compositing. So you have this really interesting blend of 2D, 3D, simulation, compositing, and you're going back and forth almost like a painter would. You're building up the image in a way that's trying to use the best of 2D and 3D.

If we need something like an ocean or a lake surface, we could use some sort of FFT or procedural way to deform the ocean or the water, but you don’t want to see every micro ripple. So, we thought, we could process the geometry so you don’t see all those high-frequency ripples, but then draw in some splashes. It’s really interesting as an effects artist, to try to push the image in a way that does not need to be physically accurate, but it is physically believable. It is all the same motif of editing the image through the artist’s lens and trying to make it feel like it was handcrafted.
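The water treatment he describes, keeping the broad swell while suppressing the micro ripples, amounts to low-pass filtering a heightfield. A minimal 1D sketch, assuming a toy procedural surface (the sine frequencies, the box-blur radius, and both function names are invented; a real FFT ocean would be 2D and far richer):

```python
import math

def heightfield(n, t=0.0):
    """Toy procedural water: one broad swell plus high-frequency ripples."""
    return [math.sin(0.2 * x + t) + 0.15 * math.sin(3.1 * x) for x in range(n)]

def remove_ripples(heights, radius=4):
    """Box-blur the heightfield so the broad swell survives but the micro
    ripples are averaged away, the way a painter would simplify them."""
    n = len(heights)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(heights[lo:hi]) / (hi - lo))
    return out

calm = remove_ripples(heightfield(256))
```

Splashes and foam would then be drawn back in on top as deliberate strokes, rather than letting the simulation supply every detail.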

Read the full coverage of The Wild Robot, and several other animated features, in issue #25 of befores & afters magazine.