Sebastian Deredas on Creating Film Quality Surfaces and Shaders

Article by Filip Radivojevic
Sebastian Deredas, Senior and Lead Surfacing Artist, has contributed to major productions spanning high-end film, episodic work, and game cinematics, including titles like The Witcher IV, Love, Death and Robots, Dying Light 2, and Star Wars: Outlaws. In this interview, he discusses the dual nature of his role, from creating hero-level textures and shaders to leading teams across complex pipelines, and shares practical insights into references, optimization, material construction, and the evolving tools shaping modern surfacing work.
Introduction and Professional Scope
Sebastian, for our audience at RenderHub, how would you describe your core responsibilities today as a Senior/Lead Surfacing Artist, and what defines the standard you aim for on major productions?
Depending on the project and the studio, my role may be strictly artistic or strictly leadership-focused - sometimes both at once. Generally speaking, as a Surfacing Artist I am responsible for creating textures and shading. After years in the industry, this typically means working on hero characters and challenging assets with a lot of close-ups.
Good surfacing is built on several factors: I start by ensuring the asset stays as close as possible to the concept art, then I make sure it behaves correctly under different lighting conditions, and finally I analyze which areas will appear in close-up shots and polish those to the highest possible standard. Unfortunately, whether it's episodic work or cinematics, deadlines are always hanging over us and our time is limited. This means we rarely get the opportunity to refine an entire character so it looks flawless from every possible angle. Those situations do happen, but across an artist's entire career they're exceptions - though we all wish it were otherwise. Understanding where a character will appear on screen helps determine early on where to invest more care and time.

As a Lead Surfacing Artist, my work shifts away from specialized tools toward Slack, ShotGrid, and Excel. My day often involves writing ongoing feedback for artists, solving problems, presenting team renders to Art Directors or Supervisors, and coordinating with production - distributing tasks, estimating time for characters, or even helping select freelancers. There's a lot to manage, especially on large productions where the team grows quickly. Each project is different and brings its own challenges. Leadership roles are demanding and often stressful, but extremely rewarding in the end. On smaller projects, the artist and lead roles often blend, so in addition to leadership tasks I also handle surfacing myself, just with a more stretched timeline.
Major Production Experience
You've contributed surfacing work to titles such as Love, Death & Robots, Dying Light 2, The Witcher IV, Star Wars: Outlaws, and many others. What are the biggest differences you notice between working for game cinematics versus episodic-film productions in terms of surfacing demands, pipeline, and team interaction?
I think the differences are smaller than people might assume. Let's first consider pre-rendered cinematics versus VFX for film or episodic productions. For example, I worked on the first two seasons of The Witcher for Netflix. From a surfacing perspective, the work, workflow, and quality expectations are essentially the same as in cinematics. The differences appear later - mainly in rendering and compositing, because we are integrating our work with live-action footage. Cinematics have evolved significantly over the years. Client expectations keep rising, and their quality is often close to what you'd expect from big-screen productions.

I feel the bigger differences arise between pre-rendered cinematics and real-time cinematics, especially on a technical level. Even here, though, the gap has narrowed tremendously over the years. Real-time engines are far more capable today, offering many more features that streamline the workflow. The texturing process is often similar in both cases, but real-time productions require much stronger attention to optimization.
In pre-rendered cinematics - let's say using Arnold - it's normal for a hero character to have 50+ UDIMs, and sometimes textures are exported in 8K just to squeeze out a few more percentage points of quality. Even with the progress in real-time engines and GPUs, that workflow simply doesn't translate, or only works with major difficulty. Technically you can do it, but it's expensive in terms of rendering performance. Think of the village shot from The Witcher IV trailer, where Mioni walks among many characters - you can imagine how heavy that becomes in real time. That's where optimization techniques come in: tileable textures, shared texture sets, fewer texture sets, lower resolutions, and so on.
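To make the scale of that difference concrete, here is a rough back-of-the-envelope memory calculation. The figures (50 UDIMs at 8K versus a handful of shared 2K sets) are illustrative assumptions based on the numbers mentioned above, not production data:

```python
# Rough, illustrative texture-memory math: an uncompressed 8K RGBA
# texture at 8 bits per channel is 8192 * 8192 * 4 bytes = 256 MiB
# per UDIM, before mipmaps. All scenario figures are assumptions.

def texture_mib(resolution, channels=4, bytes_per_channel=1):
    """Uncompressed size of one square texture, in MiB."""
    return resolution * resolution * channels * bytes_per_channel / 2**20

hero_film = 50 * texture_mib(8192)     # 50 UDIMs at 8K, as in heavy film work
hero_realtime = 4 * texture_mib(2048)  # e.g. 4 shared 2K sets after optimization

print(f"film hero:      {hero_film:.0f} MiB")      # 12800 MiB
print(f"real-time hero: {hero_realtime:.0f} MiB")  # 64 MiB
```

Even before compression and streaming are considered, the gap is two orders of magnitude, which is why the film workflow does not carry over to a crowded real-time shot.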


From a shading standpoint, the workflows differ significantly. While the theoretical foundations are similar across Unreal Engine, Arnold, Redshift, Karma, and others, real-time rendering often requires more "tricks" to approach the quality of pre-rendered work. Tools like MetaHuman have definitely made it easier to achieve good skin quality quickly, without heavily impacting performance. I believe that eventually these two worlds will merge even further. We already see incredibly high-quality real-time productions, and Unreal Engine is used directly on film sets for major productions like Star Wars.
Regarding team interaction, I haven't noticed major differences. The production-side workflow is quite similar. The main unique element in real-time projects is submitting changes to source control, which requires understanding how to handle conflicts correctly.
Creating Hero Assets for Close-Up Shots
Many of your projects feature extreme close-up fidelity, from human skin to alien surfaces to stylized characters. When developing a hero asset or character intended for tight framing, what technical and artistic decisions do you consider essential for achieving believable detail and material richness?
The most important part is reference analysis and trying to replicate it. It's not an easy task, because references often depict the same element in completely different ways, and on top of that you need to fit everything within the concept art. So the process becomes a balancing act - many small decisions and iterations.


Let's say we're working on a character whose skin transforms into a zombie-like state, with lots of veins appearing. Once you gather references (or receive a brief from the Art Director), you immediately see that there are countless types of veins - small, large, more or less engorged, greener, bluer, yellowish, and so on. Ideally you would pick one or two references that perfectly match the concept and recreate them as closely as possible. In reality, that's rarely possible. Art Directors often ask for a blend of different references - "a bit from this, a bit from that" - which is not ideal for the artist and definitely not easy. This often leads to inventing details that don't actually exist in the references. I think this is simply how our brains work, and it takes effort to unlearn it.
I see this as one of the two biggest early-career mistakes for surfacing artists. You start painting scratches or damage that wouldn't appear on that type of surface. The second mistake is adding too much repetitive detail from grunge maps or generators. Such repetitive noise is rarely found in real life. Tools like Substance Painter tempt us with quick, "detailed" results, but they should be used carefully. Every scratch, stain, and color shift should have a story - think about how and why it would appear.

On the technical side, it's hard to give universal rules. But one thing I see people overlook - yet it contributes greatly to realism - is shader layering: dust, dirt, blood, and smudges all on separate layers with separate control. When these elements are baked into the base textures, they often behave worse under lighting than when controlled independently. Of course, this increases render times, so it's not always feasible under tight deadlines or optimization constraints.
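The layering idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of the principle (each overlay keeps its own mask and strength so it stays adjustable at render time), not any renderer's actual shading API; the layer names and values are invented:

```python
# Each overlay (dirt, dust, blood...) carries its own mask and strength,
# so it can be dialed independently instead of being baked into base color.

def lerp(a, b, t):
    return a + (b - a) * t

def layer_stack(base, layers):
    """base: RGB tuple; layers: list of (rgb, mask, strength)."""
    r, g, b = base
    for (lr, lg, lb), mask, strength in layers:
        t = mask * strength  # per-layer control stays live
        r, g, b = lerp(r, lr, t), lerp(g, lg, t), lerp(b, lb, t)
    return (r, g, b)

skin = (0.8, 0.6, 0.5)
result = layer_stack(skin, [
    ((0.4, 0.35, 0.3), 0.5, 1.0),   # dirt layer: rgb, mask, strength
    ((0.7, 0.7, 0.65), 0.2, 0.5),   # dust layer, half strength
])
print(result)
```

Turning a layer off or rebalancing it is then a parameter change rather than a texture re-export, which is exactly what makes the separate-layer approach respond better to lighting notes.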
Building Materials and Shaders
When creating materials, how do you decide which elements should be procedural, which should be hand-painted, and how do you structure layering, breakup, and micro-detail so the shader responds naturally to lighting while maintaining artistic intent?
Honestly, I rarely use procedural materials when building hero assets. I mostly use them in two scenarios: first, for optimization in real-time projects; second, when using XYZ maps for fabrics - they produce much better shading results than painting them directly onto UVs in Substance Painter.

My process is generally always the same: I work from broad to specific. I break the material into layers so it's easier to manage - painting everything at once is nearly impossible.
Let's say I'm creating a metal surface. I begin by establishing the base behavior so that it already responds correctly to light: defining the base color, specular/roughness, IOR. Then I add basic breakup: color variation (real materials almost never have a perfectly uniform color), subtle specular/roughness variation using additional maps. I avoid extreme values because they rarely exist in reality. I consider whether the material needs patterns on the normal map (like flakes or galvanization) or displacement/height variation (like hammering or waves). Then I add damage: scratches, chips, wear. I usually use imperfection maps but always hand-paint adjustments so the details appear in meaningful areas (or at least multiply them with other masks so they're not everywhere). After that I add all hand-painted micro-details, which is time-consuming and requires careful reference analysis. Then I add dirt or dust if applicable. Finally, I generate RGB masks for the shader - useful for layering, damage, dirt, anisotropy, rotation, coat, and any other elements that need individual control.
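The RGB-mask step at the end of that process can be illustrated with a small sketch: three independent grayscale masks are packed into the R, G, and B channels of one texture, and the shader reads each channel back out to drive a separate effect. The channel assignments (damage, dirt, coat) are a made-up example of the convention, not a fixed standard:

```python
# Hypothetical sketch: pack three grayscale control masks into the
# R, G, B channels of a single texture so the shader can address
# each effect (here: damage, dirt, coat) independently.

def pack_rgb_masks(damage, dirt, coat):
    """Each input is a flat list of 0-1 floats of equal length."""
    assert len(damage) == len(dirt) == len(coat)
    return [(d, g, c) for d, g, c in zip(damage, dirt, coat)]

def unpack_channel(packed, channel):
    """channel: 0 = R (damage), 1 = G (dirt), 2 = B (coat)."""
    return [px[channel] for px in packed]

# Two-pixel toy image: pixel 0 is damaged, pixel 1 is half dirty.
packed = pack_rgb_masks([1.0, 0.0], [0.0, 0.5], [0.2, 0.2])
print(unpack_channel(packed, 1))  # dirt mask back out: [0.0, 0.5]
```

Packing three masks per texture keeps the file count down while preserving the individual control the shader needs.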


Workflow and Tool Decisions
What tools do you rely on in your workflow, and what guides your decisions when selecting the right method or toolset for a specific asset or production task?
Workflow and the choice of tools are often dictated by the studio or the project itself. What we use every day doesn't always match what we'll find in-house. It's important to stay open to new software or approaches, because we're just one part of a larger production pipeline. Many studios rely on their own scripts or plugins that streamline specific tasks.
If the asset is fairly standard, we can usually achieve comparable results regardless of the software used. Of course, there are differences. If I'm texturing a dragon across hundreds of UDIMs, Mari is a better choice than Substance Painter because of performance. For skin shading, I personally believe Arnold outperforms Redshift or Karma - but that doesn't mean you can't achieve convincing results in those renderers.

The real challenges come with non-standard tasks that fall outside typical workflows and overlap with multiple departments. Tool decisions at this stage matter and have long-term consequences. Often the decision requires cross-department discussion, supervisor input, and quick tests of different approaches. Time, available manpower, and complexity all factor in.

For example, I recently handled a healing-wounds effect on a character, with a magical component around the closing wounds. The first step was deciding how to execute it: blendshapes from modeling? Animated displacement or masks? A Houdini setup? A simple emission shader combined with compositing effects? Quick testing showed that doing everything in shading was fastest - generating animated masks in After Effects, and adding secondary effects in Nuke. With more time, we might have chosen a slightly different approach with even better results, but you need to adapt to the situation.
Pipeline Collaboration
In large-scale productions, surfacing sits at a crossroads between multiple teams. What pipeline practices do you rely on to ensure your assets integrate smoothly into lighting, animation, and final rendering?
The availability of tools and scripts varies greatly depending on how advanced a studio's pipeline is. I've worked in places where everything was scripted, even the smallest tasks, and in places with only basic functionality.

The most important cross-department tools for surfacing, in my opinion, are: loading geometry with proper versioning and receiving notifications when a new version is available; easily importing groom components into the lookdev scene; publishing and versioning shading setups so other departments (lighting, rendering) can access the newest version. Some pipelines also allow publishing textures, copying them to dedicated locations so no one can accidentally overwrite them. With Arnold, automatic generation of .tx files on a separate machine is extremely useful so you don't lock up your own machine. Checkers are also very helpful: model checkers (UVs, NaNs, UDIM count, geometry issues), texture checkers (naming, format), shading checkers (unused components, duplicate shaders, link errors), and more.
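A texture checker of the kind described can be very small. The sketch below validates filenames against an assumed `asset_channel.UDIM.ext` convention; the naming pattern and the 1001-1999 UDIM range are invented examples, not any specific studio's standard:

```python
import re

# Toy texture-name checker in the spirit of the pipeline checkers above.
# The convention (asset_channel.UDIM.ext) is a made-up example.
PATTERN = re.compile(r"^[a-zA-Z0-9]+_[a-zA-Z]+\.(\d{4})\.(exr|tif|png)$")

def check_texture_names(filenames):
    """Return a list of (filename, problem) for files that fail the checks."""
    problems = []
    for name in filenames:
        m = PATTERN.match(name)
        if not m:
            problems.append((name, "does not match asset_channel.UDIM.ext"))
        elif not 1001 <= int(m.group(1)) <= 1999:
            problems.append((name, "UDIM outside expected 1001-1999 range"))
    return problems

files = ["hero_baseColor.1001.exr", "hero_roughness.0999.exr", "hero-normal.exr"]
for name, problem in check_texture_names(files):
    print(f"{name}: {problem}")
```

Catching a bad UDIM number or a misnamed map at publish time is far cheaper than discovering it when lighting picks up the asset.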
Beyond pipeline-level tools, daily-use scripts that automate repetitive tasks are invaluable. I rely on tools that set up lookdev scenes - HDRIs, studio lighting, character rotation, cameras, as well as scripts for setting subdivision, reassigning textures to updated versions, or creating base shaders and connecting maps automatically. All of this frees time to focus on artistic work.

It is also worth noting that tools or scripts developed for specific projects are sometimes needed, especially in cases involving stylized work that requires a particular approach or workflow.
Realism and Story Clarity
How do you decide how much aging, wear, or dirt to include so the asset feels real but still reads clearly in the shot?
These decisions are usually established during pre-production by the Art Director and defined in the brief. Of course, as the character evolves through iterations, the brief often changes.
As I mentioned earlier, artists tend to overuse dirt and grunge. It's an easy, "cheap" detail - generators can produce complex-looking materials very quickly. The problem is that such surfaces rarely occur in the real world, so the eye immediately recognizes them as artificial.


Across all the projects and worlds I've worked on, I can count on one hand the cases where excessive dirt helped. One was the Metro-style world - multiple layers of dirt matched the underground aesthetic. In most cases, dirt must be thoughtful and selective, not spread uniformly across the entire asset.

I often reference Mason from one Love, Death & Robots episode I worked on. He's a farmer who wears the same work clothes every day. You'd expect wear, stains, discoloration. But you can break those into different categories - old dried stains, fresh muddy splashes, dust from walking through a barn. The same goes for fabric wear - tears from a year ago will look different from those appearing yesterday. Maybe he fell the day before and created a new dirt pattern. You can build a story for every stain or scratch. This immediately increases the realism. The worst outcome is covering the entire asset with unrelated dirt layers that don't tell any story.

New Techniques and Industry Changes
Production workflows are evolving quickly. Which new tools or methods do you see influencing surfacing work in the near future, and how do you stay current with these changes?
Looking at the last 10-15 years, I think even more positive changes are coming. Many of us started by painting individual channels in Photoshop, and today we build extremely complex assets in Mari or Substance Painter. Substance Painter was a milestone - much lower entry barrier than Mari and a workflow familiar to anyone who used Photoshop.
Shading has also evolved massively. Renderers now include features that previously required elaborate hacks. Default settings often produce decent results instantly. Look at how far SSS has come - from diffusion-based models to Randomwalk v2, which gives physically accurate results. GPU renderers and real-time engines have also made huge leaps, allowing us to create near-pre-render quality at dramatically lower time and cost.

I don't watch many tutorials, so I don't rely on them for new knowledge. Instead, I track updates, changelogs, and new features for the tools I use. These usually bring improvements to my daily workflow, so I try to integrate them as soon as possible. For example, the latest Substance Painter update introduced an improved Path Tool - perfect for ribbons, so it will immediately become part of my daily workflow, speeding up stitching on fabrics.
AI is also a major topic. It will undoubtedly influence surfacing. While I'm not a fan of fully AI-generated films that lack artistic soul, I'm curious where this will go and happy to test new tools. I already use Photoshop's AI-powered Content-Aware Fill. Creating seamless textures is dramatically easier now. I'm also curious about auto-generation of 3D models. It's not great yet, but major companies like Meta are releasing tools, so I expect rapid progress. I'd also like to see more AI-assisted tools integrated directly into DCCs like Houdini or Maya - for generating complex noises, patterns, animated texture sequences via prompts. I see similar potential in texturing tools like Substance Painter - AI-generated dirt patterns that mimic hand-painted logic, or imperfections tailored to a specific material type, all that controlled by simple prompts.
These changes are inevitable, and we shouldn't be afraid of them. Instead, we should learn to use them creatively. There will be many global discussions - ethics, copyright, but that's a topic for another conversation.
Guidance for Artists Working Toward Production Quality
Which habits or technical skills have the biggest impact on helping artists raise their surfacing quality for professional film and cinematic work?
The most important habit is observing the world around you. Materials and surfaces are everywhere, and you can analyze them endlessly. Break them down into layers. Think about how objects were made, how surfaces were treated, how scratches, dirt, and damage could have formed. Then translate that understanding into your 3D work. Don't be afraid to experiment, break habits, and play with what you're creating - great results often appear by accident. Use references as much as possible and rely on them. Every brushstroke should carry intention and tell a story. Do things for a reason.

Follow Sebastian Deredas on ArtStation, Instagram and LinkedIn to see more of his incredible work.
