Part 2 of our interview with Luc Delamare and Kevin Stewart, two DPs from LA who made the jump into virtual production, where we discuss the similarities and differences between traditional cinematography and real-time filmmaking. This second and final section goes into more detail, in particular regarding in-engine lighting and camera movement with Unreal Engine.
Read Part 1 here: Virtual production for cinematographers: Part I, how to get started?
If you’re interested in getting into virtual production or Unreal Engine yourself, make sure to read our own introduction to real-time filmmaking which covers the basics of in-engine lighting, camera work and much more.
Table of Contents
- How easily can you replicate live action shots with in-engine lighting?
- The importance of high quality assets in VP
- A virtual dilemma: is the freedom of in-engine lighting always a good thing?
- Camera work: live action vs in-engine
- Virtual cameras and in-engine lighting: keeping it real(istic)
- What is the future of VP? Is real-time filmmaking a new medium?
- Future projects
How easily can you replicate live action shots with in-engine lighting?
Welcome back! Speaking of lighting specifically, and because your mood board is made of live action shots and high end CGI shots, how difficult was it to replicate these shots in Unreal, using in-engine lighting in real-time?
Yeah, it was difficult. I worked on some of the daylight exterior shots and those were tricky ones. It took some time to get that dialed in because you’re not necessarily dealing with artificial light. You’re trying to recreate natural light, and that’s not something, as cinematographers, that we control too much. Sure, we can add flags and bounce and all that stuff, but you can’t control the sky or the sun in real life. In the engine, you have to create all of those things, and they have to be good. The shadows have to be realistic, the sky has to be a certain brightness, the fog and how it falls off, the color of the fill… you’re trying to make it look natural, but it’s all part of the engine.
And then on top of that, once we got that, we brought in some lights and some flags, some negative bounce, some negative fill to shape and create the shot as best we could. So it took some time to get that to look right. And I know we had some challenges with some daylight interior stuff, because that’s where the GI comes into play.
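The fog falloff mentioned above follows a simple idea: with exponential height fog, density thins out exponentially as you go up, and visibility along a path drops off with accumulated density. Here is a minimal, purely illustrative sketch of that behavior; the parameter names and values are our own assumptions, not Unreal Engine’s actual defaults or API.

```python
import math

def fog_density(height, base_density=0.02, falloff=0.2, base_height=0.0):
    """Exponential height fog: density decays exponentially with altitude.
    Parameter values are illustrative, not Unreal Engine defaults."""
    return base_density * math.exp(-falloff * (height - base_height))

def transmittance(density, distance):
    """Beer-Lambert attenuation: how much light survives a path of
    roughly constant fog density over a given distance."""
    return math.exp(-density * distance)
```

Raising the camera, or increasing the falloff, thins the fog; lengthening the path through it dims the subject. Dialing in those two curves is essentially what “how the fog falls off” comes down to.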
It was challenging because we kind of had to learn Unreal Engine and get the in-engine lighting to a baseline of what light would normally do in a physically-accurate, physically-based renderer. Unreal Engine doesn’t do that off the bat: just clicking “ray trace on” did not solve all our problems. We had to go a lot deeper than just clicking a few checkboxes. So we had to learn to get to this baseline of just understanding how the ray tracing worked and how it could work efficiently. It wasn’t just about what types of lights we used. Some of the lights actually do different things with global illumination, it’s not a perfect thing yet. Because it is all cheated, it’s still a game engine at the end of the day, it’s still a lot of smoke and mirrors. But as cinematographers, we know the right cheat to get that look, to get it to look good.
Once we learned to get this baseline, then we could kind of elaborate. For Kevin, with the forest, it was about getting the ambiance right. And then we went from there for some of the interiors. It was like, “how do we get this practical lamp to look like a real lamp?”. Because we weren’t doing baked in-engine lighting in Nemosyne, we wanted the light quality to be accurate, but as cinematographers, we also wanted it to be pleasing. There’s a fine line with that as well. Two thirds to three quarters of the time spent learning Unreal Engine was just the lighting and the ray tracing, getting it to the point that, as cinematographers, we were happy with the results. And once we learned to use those tools, we had this moment where both of us were like, “oh, I think we get it now!”. I think we’ve unlocked the lighting, and we’d be comfortable lighting any type of scenario. There were restrictions, though: I struggled with a daylight interior we ended up cutting completely because the global illumination didn’t look good enough.
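Why daylight interiors lean so heavily on global illumination can be shown with a toy single-bounce estimate: a shaded wall point that sees no sun directly is lit almost entirely by sunlight bouncing off the floor. This is a back-of-the-envelope Lambertian sketch with made-up illustrative numbers, not how Unreal Engine actually computes GI.

```python
def one_bounce_irradiance(sun_irradiance, floor_albedo, form_factor):
    """Toy single-bounce estimate: sunlight hits a floor patch, which
    re-emits diffusely toward a point in shadow. Lambertian assumptions;
    all numbers are purely illustrative."""
    # Light re-emitted by the sunlit floor patch (albedo * incident light)
    floor_radiosity = floor_albedo * sun_irradiance
    # Fraction of that re-emitted light reaching the shaded point
    return floor_radiosity * form_factor

direct = 0.0  # the shaded wall point sees no sun directly
indirect = one_bounce_irradiance(
    sun_irradiance=100_000.0,  # lux, rough order of magnitude for midday sun
    floor_albedo=0.3,
    form_factor=0.05,
)
# The shaded point is lit entirely by the bounce, not the sun itself.
```

With zero direct contribution, everything the camera sees on that wall comes from the bounce, which is why a renderer that gets GI slightly wrong makes a daylight interior fall apart while an exterior can still hold up.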
The importance of high quality assets in VP
So, your original idea was to have some macro close ups, and the short does begin with a close up of Grace that looks really good. Do you think that it all relies on having good assets? I’ve found that asset quality, even on the marketplace, varies a lot. How much control do you feel you have as cinematographers over how good a closeup can look using in-engine lighting, or is it really dependent on the quality of the asset?
That’s a great question and it’s funny, we were just literally talking about this earlier. Yeah, digital humans, I mean… There’s just not great stuff out there right now, and the Grace model looked fantastic. That was the first thing Luc and I said when we opened the project: oh wow, this looks so good! The skin, the pores, all of the little facial features… we changed the eyes because the eyes were these “robotic” eyes so we added some Paragon eyes that we had from Unreal. And those look fantastic.
But, yeah, I think we do rely a lot on the digital humans, for them to look good because a certain type of skin texture is going to receive light a different way, kind of like in real life. Somebody might have facial features that are very pock-marked and they just don’t receive light very well. Sometimes you have to kind of dance around it. If you find an actor that just doesn’t look well, maybe they look “unhealthy” because their skin’s a little bit pale, you have to kind of work around that as a DP.
But that’s a separate thing, to answer your question. I think we do rely on the assets themselves big time, and if they’re good, then our skills come into play to make them look as good as possible with in-engine lighting and composition.
I agree with Kevin here, but I do think at the same time that there is an understanding for how to shoot. You started this with asking about close ups: how do you shoot this model and have it look good close up? Uncanny valley is a thing, “video gamey-ness” is a thing… All issues that Unreal Engine has struggled with, that lots of current projects are struggling with, and that we struggled with. We’d point the camera and we’d say, “this looks like a video game”. Even with our lighting knowledge, it still didn’t work.
While the asset does have to be a certain quality for it to look good, you combine that with our knowledge of how to actually light a close up, how to light a human face from our live action careers and where to place the camera so that it looks the best. You add that to the mix and that ups the production value by a huge amount. That opening shot, it was a very rough in-engine lighting setup that we did and we were just previewing that camera move. It was suddenly like, oh wow, this looks great! It wasn’t really planned. That was just an experiment like every step of our process. But I do think it’s a blend of the quality of the asset and also understanding how to light with the Unreal Engine.
A virtual dilemma: is the freedom of in-engine lighting always a good thing?
Imagine you’re shooting live action, you’re going outside. It’s a cloudy day. You already have your lights, right? In contrast to in-engine lighting, you have something to work with. You’re not creating the sun, creating the skies and deciding how intense the sun should be. What are your thoughts about this? Would you rather have an artist create a scene, separating the cinematography from the environment design? Or would you rather do everything yourselves and control everything, from the intensity of the sunlight to the settings of the skylight?
I’m not sure I know the answer to that. Naturally, it’s appealing to cinematographers to have a scene that we can kind of go in and start playing around in. So do we want to always be the ones who really do set everything up and decide all those things? I mean, we cinematographers are control freaks, so we would lean towards yes. Now, do we want to actually do all the nit-picky stuff? No. Learning how to get the atmosphere right in Unreal Engine was a huge challenge. And so at a certain point, we’d hope that things progress so that it’s easier to do or that there are production pipelines in place. It shouldn’t just be a sole person’s job to do all this.
But I think that’s part of what we both were really drawn to, that we just had all of the control and the creative freedom to kind of paint with the Unreal Engine, paint from a blank canvas. As cinematographers, we knew what we wanted to see. We had references. We didn’t have to learn how to do every single step for that to happen.
Yeah, it’s a great question because you’re talking with DPs, who are notorious for wanting to control everything. If you’re on set on a daylight exterior and a cloud cover comes over… you just can’t do anything about that. It sucks. But in the end, as much as I would like to control everything all the time, there’s a charm to coming and trying to shape the life that’s there and to think about where to place the camera in relation to the sun, or the set, or the actor, and to really think about all those variables that are not controllable. That’s kind of what makes it exciting to a certain degree. So I guess it’s a little bit of both. Yeah, it’s great to control a set and every aspect of it like we do in Unreal, but also very exciting to just have to deal with those variables on set as well, because that’s what makes it spontaneous.
Camera work: live action vs in-engine
How different was dealing with camera movement in the engine compared to how you work as DPs? Did you notice a difference in terms of the quality of the movements that you could achieve? Generally speaking, how would you compare camera work in live action to working in Unreal Engine?
So as you may already know, there’s a way to use an HTC Vive headset and controller and parent it to a virtual camera in Unreal, having those two kind of sync together so that when you’re moving your Vive controller in real life, in this office, for example, you’re also moving the virtual camera.
And so our camera movement in real life became the camera movement in the engine, which is great. We had to tweak a bunch of little things, like the smoothness of the movements and some Blueprint work that we followed tutorials for. But that was probably one of the most exciting parts of this whole process. It was really like blending the two worlds: ok, now I’m shooting a movie because I am moving a camera. We knew we wanted mostly that kind of floaty hand-held feel for this short, so this approach was appropriate. We also had it in slow motion the entire time.
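The smoothing tweak described above is, at its core, a low-pass filter over the tracked controller positions. Here is a minimal sketch of one common approach, an exponential moving average; this is an illustration of the general idea in Python, not the actual Blueprint setup the filmmakers used.

```python
def smooth_positions(samples, alpha=0.2):
    """One-pole low-pass (exponential moving average) over tracked
    camera positions. alpha in (0, 1]: smaller values are smoother
    but lag more behind the real controller. Illustrative only; the
    short's actual smoothing was done with Blueprints in-engine."""
    smoothed = []
    state = None
    for p in samples:
        if state is None:
            state = list(p)  # initialize on the first sample
        else:
            # Move a fraction alpha of the way toward the new sample
            state = [s + alpha * (x - s) for s, x in zip(state, p)]
        smoothed.append(tuple(state))
    return smoothed
```

Run over a jittery stream of (x, y, z) samples, the filter damps frame-to-frame jumps while preserving the overall hand-held drift, which is exactly the trade-off between "floaty" and "shaky" the DPs describe tuning.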
Being able to pick up a camera and frame up a shot like we would on a shoot, with the knowledge and experience that we’ve had for ten plus years shooting movies, that was probably the most exciting part because you could frame up, find compositions, find shots, record them, and then watch them or do a bunch of takes. Eventually we kind of got to that process where we figured out how to just record these shots, do six or seven takes, tilting down from the sky, panning on, close up, pushing into the face. So you just do another take and another take and then we pick and choose the best ones.
Just to add on to that, it allowed us to do the hand-held movement we wanted and to give it an organic feel that we couldn’t have keyframed and that we couldn’t have done with the mouse. We couldn’t have done it with a gamepad. You can fake it to a certain degree, but that wasn’t the point here. The other thing is it allowed us to just experiment in an environment just like we would in a real life environment.
And so I think this almost goes back to what we were talking about earlier. Is it a good thing to start with a blank slate? Is it a good thing to start with a black empty void? And so we actually purchased some assets, we kind of kitbashed some stuff. And when we had these little levels built, then we would walk around them with our VR cameras and start thinking about shot placement.
And that’s not something that is as easy to do with a mouse and keyboard. It has been done to a certain extent, animations have been made like this, but the bedroom set, the forest, even the opening shot: they were all experiments. We had no idea that was how the short was going to open. We just had a level where she was laying on this surface and we’re just walking around our living rooms and finding the shot just like we would on a live action set. And that’s something we both found incredibly exciting: being able to bring that experience and that type of filmmaking into a virtual world – during the pandemic too. Both of us were at our houses not having worked in a long time, and this was a way for us to continue filmmaking in a safe way.
Virtual cameras and in-engine lighting: keeping it real(istic)
Still on the topic of cameras and camera movement, do you feel that the Unreal Engine should be approached differently? Dealing with cameras in live action, you’re holding an actual piece of hardware which has its limitations. With the Unreal Engine, you could actually just go five hundred meters up in the sky and shoot from there, or get really close, even put the camera inside an object or spin it around someone. There’s complete freedom to what you can do with a camera. With that in mind, do you think that camera work should be different in real-time production, in that it should take advantage of that freedom? Or do you think that it should replicate a live action experience, with its limitations?
Yeah, we talked about this early on, not specifically for cameras, but that things should feel as real as possible. We wanted to make sure that we weren’t doing impossible camera moves that you’re kind of describing, because that’s when gravity goes out the window, and I feel like audiences perceive that impossibility. Suddenly it feels unreal or “uncanny valley”. That can even be applied to camera movement, I would say. And so we try to keep that as a rule: let’s not try any impossible movements in this case, though we kind of broke or bent the rules sometimes.
Same idea: let’s not put lights where we wouldn’t be able to put them in real life, because we’re going for an approximation of how we would have lit, and how it would have looked, in real life. That said, a light placed in frame in Unreal doesn’t have to show up in the shot, so you actually can put it there, and we admittedly broke that rule sometimes. But for the most part we tried to adhere to it because, again, uncanny valley is a thing, even with in-engine lighting, even with cameras. That being said, yeah, it’s great that you have the freedom to do anything you want. Absolutely. In our case, coming from a real life experience, we wanted to keep it as physically accurate as possible.
I think too that it could also come down to a matter of taste. When I think of the uncanny valley in terms of camera movement, I think of The Hobbit, because they were just flying cameras as if it were a Lord of the Rings video game. And it really stands out to live action filmmakers. And honestly, I would think that it stands out to everyone, but that’s a project-dependent factor.
Since we knew we wanted Nemosyne to be grounded and the real goal was photorealism, with the constraints of shooting in a bedroom, shooting in a forest, we knew it had to be live action camera moves.
I think it really depends on the project because you can also put the camera way back and make it look real, if it makes sense that the camera is shooting through a window up in a building. To answer the question, would someone be able to still make it a grounded thing in the sense that it feels real? You could. There are crane moves in real life that are huge. I think it also just enables camera moves that would normally cost a huge sum of money. And you can just really quickly preview it and decide if it works. It really depends on the project.
What is the future of VP? Is real-time filmmaking a new medium?
If I can rephrase what you’re saying, there’s a cinematographic language and what you are doing is you’re applying that language to real-time filmmaking. And maybe this is something that’s sometimes lacking from real-time content, because people who are creating it are not people who have learned and worked with this cinematographic language, which means working with light, camera movements, things that the audience also has learned, a vocabulary that we all share.
Going back to 1895 – the year that is considered in France to be the birth of cinema because it was in France, of course – and up until, let’s say, 1910, there was a lot of filmed theater. Because the first logical idea for a lot of people coming from the art world was that with a camera, you can just film a play. That was before we realized what the cinematographic language could be, before realizing that cinematography brought more to the table than just filming something that already existed.
With that in mind, what would you say are the specifics of working in real-time filmmaking? Do you foresee any elements that are specific to real-time filmmaking, that are the essence of it, elements that people working in virtual production should be focusing on? Things that are the essence and strength of that particular medium?
It’s an interesting question, the way theater has its own language. And when cinema started, they didn’t know that cinema was supposed to have its own language as well. As cinematographers coming from the live action cinema world, we’re naturally trying to bring some of its aspects. So that’s what we’ve been focusing on. But in terms of what virtual production and real-time filmmaking offer that is unique to themselves…
I think the catch here is that virtual production is a fluid term. It’s so new that it barely has a definition. But in terms of real-time filmmaking, for me, it’s still about storytelling. Then again, with the parallels that you’re drawing, I can see how it makes sense that it should develop its own thing. Theater is still storytelling and cinema is also storytelling. Naturally, we’re always trying to bring in part of what we know from cinema and cinematography and their principles. It’s like animation is its own unique thing. What does animation have that live action filmmaking doesn’t? That’s a huge answer, right, but at its core, animation is still about storytelling. So to me, real-time filmmaking is still going to be about storytelling and those principles should still be followed. If anything is new and unique to virtual production and real-time filmmaking, I think it will be a stylistic aspect and not a fundamental shift.
I would agree. And ultimately, I feel like real-time filmmaking, specifically in this context, is trying to be as close to reality as possible. There is so much more that you can do with it, you’re not necessarily constrained by physics. So you can do what you want, like an animated movie has its own language because it can kind of have its own world completely. You don’t have to use physical locations. You don’t have to use sets. It can be magical – and usually animated movies are going to be pretty magical or fantastical experiences for the most part, at least the ones I’m thinking of, like Pixar movies.
I’d have to think more about that, but I feel like virtual production is trying to approximate reality. I think that’s what it’s going to try and do for the next few years, to try and support reality, to try and support the storytelling, of course, but it will have its own style eventually, for sure.
What about your next projects? Do you have any idea what’s next? How does the future look for both of you?
We’ve recently completed a project in which we served as virtual cinematographers for the first ever virtual fashion show. After seeing Nemosyne, the producers approached us to design all the camera and lighting work for the show, fully in-engine. It was a really fun way to put into practice all the techniques we had developed on our short. While that project’s aesthetic was dictated by a client, and was thus a completely different look and style from Nemosyne (we ended up leaning heavily towards a “video-game” and surreal aesthetic), it was still a great way to continue our education in Unreal Engine.
In the near future, we look forward to every sort of virtual production, from fully in-engine feature films to video games that are looking to lean into that cinematic photorealism side of the spectrum. I also think there’s a lot of content coming our way collectively, as audience members, that is going to need filmmakers that know how to work in both worlds, live action and virtual.
On that note of understanding the two worlds: besides fully in-engine lighting work like with Nemosyne and the virtual fashion show, we both have aspirations for mixed reality, specifically with LED volumes. I was fortunate enough to shoot a project on a volume in early October and really get my hands dirty with the new workflow. At this point I’m still ingesting as much knowledge about the process as possible, but what I did find out is that it proved immensely useful to be able to edit the UE levels myself as opposed to having an environment or lighting artist do the work.
Being a DP that not only understands the in-engine lighting, but one that is able to navigate UE, kit-bash environments, and understand both the limitations and strengths of UE, meant that I was the sole artist on the project that could take the cinematic vision of the director from start to finish, as well as be in full control of both the live action foreground and virtual background. It was a wonderful position to be in, and while I will always welcome collaboration with other departments and specialized UE artists, there’s something uniquely efficient and powerful about understanding both the game engine and the on-set cinematic aesthetic.
Kevin Stewart
Born in Paris, France and raised in Portugal until the age of 17, Kevin’s early fascination with movies was so strong his parents moved the family to Portland, Oregon to help facilitate his dream. After graduating from the School of Film and Television at Loyola Marymount, he spent several years as one of the main shooters for Funny or Die, working on projects with Will Ferrell, Don Cheadle, and Steve Carell. His commercial work includes brands such as Starbucks, Adult Swim and Walgreens. Kevin recently shot the Rosa Parks biopic Behind the Movement for TV One, Blumhouse’s Unfriended: Dark Web and the award-winning indie success The Head Hunter, which he co-wrote and produced.
Luc Delamare
Luc Delamare is a Los Angeles based Director of Photography whose initial passion for film developed as a self-taught visual effects compositor. Born in Silicon Valley, Luc grew up splitting his time between his hometown of Los Altos and Southern France. Following his formal education at the School of Film and Television at Loyola Marymount, Luc has built an extensive portfolio that encompasses a diverse array of formats and genres, including feature film, documentary, web series, music videos, and now virtual production. Luc’s commercial DP work has featured personalities such as Olivia Munn, Alex Morgan and Canelo Alvarez. As a VFX supervisor he has worked with various clients including Apple, NASA, Western Union, Vivo, and 20th Century Fox.