3D Asset Optimization for Game Engines: Common Best Practices

Article by Yuri Ilyin
Modern game engines (such as Unreal Engine, Unity, or CryEngine) can handle far more complex assets than one could have imagined 10-15 years ago. Models in modern games look so detailed that they appear to be ultra high-poly.
In fact, they are not, even if they are far more complex than those used in the days of yore. Still, a good game asset is one that leaves as light a footprint as possible. This means keeping geometry to the absolutely necessary minimum, keeping textures optimized and compressed, and ensuring that at any given moment the whole scene visible within the screen space taxes system resources as little as possible.
How can this be achieved?
First things first: prepare to serve a very strict and demanding overlord, his name being Draw Calls. Draconian as it sounds, this is the key concept on which game performance hinges.
A draw call is a command the CPU issues to the GPU, telling it to render an object (usually a mesh with textures and shaders) on screen. Each call carries CPU overhead: the extra processing time the CPU spends on preparation, state changes, and driver work before the GPU can draw. So even on very high-end systems, such as latest-gen gaming consoles or top-of-the-line PCs, it is best practice to keep draw call counts tame: the fewer, the better. In fact, the fewer draw calls are issued, the smoother the gaming experience. There are a number of ways to achieve this without noticeably compromising scene quality, and we will try to cover them.
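As a back-of-the-envelope sketch (the cost model, object names, and slot counts below are purely illustrative, not taken from any real engine), each visible mesh/material-slot pair typically costs at least one draw call:

```python
# Hypothetical draw-call estimate: every visible (mesh, material slot)
# pair is assumed to cost one call. Real engines batch and cull, so
# treat this only as an intuition pump.

def estimate_draw_calls(visible_objects):
    """visible_objects: list of (name, material_slot_count) pairs."""
    return sum(slots for _name, slots in visible_objects)

scene = [
    ("stone_wall", 2),   # base stone + trim material = 2 calls
    ("column", 1),
    ("column", 1),       # a plain copy still costs its own call
    ("floor", 1),
]
print(estimate_draw_calls(scene))  # 5
```

Even this toy model makes the point: two material slots on one mesh already double its cost, which is why merging materials and instancing (covered below) matter so much.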
In the past, it was very important to observe the polygon budget, that is, the maximum number of mesh polygons per model. While today this is less of a problem, it is still important to have a clean, uniform mesh without structural errors (such as interior polygons, edges shared by more than two polygons, or floating or hanging vertices or edges), with proper texel values (see below), and without an excess of polygons.
What you want most of the time is a solid, watertight model with as sparse a mesh as possible. This means that if you have a highly detailed, high-poly model, you need to create its lower-poly counterpart with variable mesh density.
This process, called retopology, is one of the most tedious and unloved aspects of 3D graphics. Fortunately, every 3D suite these days has at least some life-saving addons or plugins for this. The process may eventually be offloaded to AI someday, but for now it takes a crafty brain and a trained hand to do it properly, that is, to make the mesh as sparse as possible while retaining higher density only where it is absolutely necessary.
What you would want to retain uncompromised, however, is the silhouette, the outline of the object.
There are a few scenarios, however, where it is acceptable to add extra geometry to ensure that the outline looks right.
For example, a stone wall in the scene may have a visibly jagged edge that affects the silhouette. One popular technique is adding a prosthetic - an object with more complex geometry that blends seamlessly with the rest of the wall via its material. This allows the main part of the wall to remain firmly in the low-poly class.
The most recent engines support material displacement maps, that is, using texture maps on a low-poly mesh to make it look very high-poly. Unreal Engine 5 and above offers this courtesy of the Nanite system.
This is apparently going to become a standard one day, but on less performant systems, prosthetics may still be the way to go. This is especially true if the asset is being prepared for a mobile platform.
The next step is UV mapping of the lower-poly asset. UV mapping can be either very quick and easy or tedious and tricky, depending on the complexity of the model and whether a UDIM workflow is used.
UV mapping is essentially the translation of a three-dimensional surface into a two-dimensional coordinate system, enabling the use of flat bitmap images as textures.
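A toy illustration of that translation, assuming the simplest possible case - a planar projection that drops one axis and normalizes the result into the 0..1 UV square (real unwrapping handles seams and curvature, which this sketch ignores):

```python
def planar_unwrap(vertices, axis=2):
    """Project 3D vertices to 2D UVs by dropping one axis (default: Z),
    then normalizing the remaining coordinates into the 0..1 UV square.
    A toy planar projection, not a full unwrapping algorithm."""
    coords = [[v[i] for i in range(3) if i != axis] for v in vertices]
    us = [c[0] for c in coords]
    vs = [c[1] for c in coords]
    u0, v0 = min(us), min(vs)
    su = (max(us) - u0) or 1.0   # avoid division by zero on flat spans
    sv = (max(vs) - v0) or 1.0
    return [((u - u0) / su, (v - v0) / sv) for u, v in coords]

# A 2x1 quad lying in the XY plane maps onto the full UV square:
quad = [(0, 0, 5), (2, 0, 5), (2, 1, 5), (0, 1, 5)]
print(planar_unwrap(quad))
# [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Note that normalizing a 2x1 face into a square stretches the texture along one axis - exactly the kind of distortion the checker maps discussed below are designed to reveal.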
UDIM, in turn, is an enhancement to the UV mapping and texturing workflow that enables the use of multiple texture map sets for different parts of the same object. This allows the simulation of diverse and detailed surfaces without using exorbitantly large textures, and without ending up with blurred results.
This explanation of what UDIM is and how to use it is highly recommended.
An ideal UV map covers the entire texture space, meaning the whole grid is covered with a flat image, with little to no empty area and, generally, with all faces or polygons laid out as clean rectangles.
In truth, this is impossible to achieve most of the time, especially if the model is organically shaped. Still, it is best practice to get as close to this goal as possible.
Even more important, however, is ensuring that the texel density is uniform across the entire texture map, or across every UDIM set.
Texel is an abbreviation of texture element, also known as a texture pixel: the fundamental unit of a texture map. Texel density, in turn, refers to the number of texels per unit of surface area, such as a square inch or square centimeter.
Proper texel density ensures that textures on an asset appear smooth and detailed when viewed up close, without significant and ugly distortion. It is not exactly pleasant to admire a cool asset only to notice that some screw or nut is distorted beyond recognition, simply because of uneven texel density and errors in map-baking settings (see below).
Proper texel density also helps maintain good performance in real-time applications, such as game engines.
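The relationship is simple enough to compute by hand. A minimal sketch, assuming a square texture and measuring density in texels per linear world unit (the numbers are illustrative):

```python
import math

def texel_density(texture_px, uv_area, world_area):
    """Texels per linear world unit for a set of faces.
    texture_px: side length of a square texture (e.g. 2048).
    uv_area:    the faces' share of the 0..1 UV space (0.25 = a quarter).
    world_area: the same faces' surface area in scene units (e.g. m^2)."""
    texels = (texture_px ** 2) * uv_area    # pixels allotted to the faces
    return math.sqrt(texels / world_area)   # px per linear unit

# A 2048px texture, where an island occupying 25% of UV space
# covers a 1x1 m face:
print(round(texel_density(2048, 0.25, 1.0)))  # 1024 px/m
```

Keeping this value consistent across all islands (and all UDIM tiles) is what prevents one part of the model from looking noticeably blurrier than its neighbors.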
To verify that texel density is correct, a special test texture (a checker or UV grid map) is used in 3D suites. Blender, for example, can simply generate such a map:

And that is how it looks when the UV unwrapping is completed (the mesh could be optimized further, though):

Far from ideal, of course. There is too much empty space, but in a case like this, using non-square (but still power-of-two) textures is justifiable.
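Whether a resolution qualifies as power-of-two is a one-line bit trick, engine-agnostic:

```python
def is_power_of_two(n):
    """True when n is a positive power of two. n & (n - 1) clears the
    lowest set bit, and a power of two has exactly one bit set."""
    return n > 0 and (n & (n - 1)) == 0

# A non-square 2048x1024 texture still satisfies the power-of-two rule:
print(is_power_of_two(2048), is_power_of_two(1024), is_power_of_two(1000))
# True True False
```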
The primary goal, however, is more or less achieved: figures and letters help ensure proper texel density, while straight lines help identify areas with an elevated risk of distortion.
One obviously problematic area is highlighted, though:

One way to tackle it is to straighten this island, like this:

But as one can see, the distortion becomes even more egregious, which is almost always a problem with curvy surfaces. Smoothing subdivision helps, but at the expense of increased poly count:

Another approach is to simply leave it as an uncut circle: no distortion, but slightly more unused texture space in the UV map:

So it all comes down to context. Basically, it is experience alone that determines how to do UV unwrapping the right way.
Very slight distortions may not be much of a problem if you are baking various texture maps from a high-poly model to a low-poly one. Usually these include normal or bump maps, ambient occlusion, and displacement. However, the lower the polycount of the optimized asset and the more complex the high-poly model, the higher the risk of various unpleasant distortions.
This tutorial shows some very useful tricks regarding proper texture baking in Marmoset Toolbag:
Alright, the asset is ready. There is no unnecessary geometry, it is nicely textured, normal maps and ambient occlusion are in place, and optionally displacement covers the finer details. Finally, we can drop it into the main pool, that is, a 3D scene. Is that all? It depends.
If you are working not just on a single model, but on an entire scene, there are some extremely important steps required to optimize it. This brings us back to draw calls: the fewer, the better.
This is where instancing comes into play. In very simple terms, it means using the same object multiple times in a scene. For example, if you have a room with four supporting columns, you only need one model. The rest will be instances of the same object, most likely rotated around the Z-axis by 90, 180, and 270 degrees.
If you have repeating architectural elements, such as a fully symmetric gothic arch, you would likely model just one quarter of the arch and instance it multiple times. The expensive draw call references only the progenitor model, while the instances add little to no extra cost. The positive impact on overall performance can be significant.
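The payoff can be sketched with a deliberately simplified cost model (hypothetical: one call per placed object without instancing, one instanced call per unique mesh with it - real engines are more nuanced):

```python
# Toy comparison of draw-call counts with and without instancing.

def draw_calls(placed_meshes, use_instancing):
    """placed_meshes: names of meshes placed in the scene, repeats allowed."""
    if use_instancing:
        return len(set(placed_meshes))   # one instanced call per unique mesh
    return len(placed_meshes)            # one call per placed object

# Four columns plus four quarters of a symmetric arch:
scene = ["column"] * 4 + ["arch_quarter"] * 4
print(draw_calls(scene, use_instancing=False))  # 8
print(draw_calls(scene, use_instancing=True))   # 2
```

Eight placed objects collapse to two calls, which is exactly why the coffee-bean scene below gets away with thousands of visible beans.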

Coffee beans are instances/particles in the scene, generated from three or four unique objects using the same texture.
The same principle applies to materials. It is a very good idea to join all objects that share the same type of material into a single entity and assign just one material to it, especially in a static environment.
Unreal Engine supports a game-changing, pun intended, technique called Material Instances. It quite literally allows you to change the appearance of a material or simulate several materials on the same object, while the engine still reads them as a single material.
For additional context, see the Epic Games documentation on Creating and Using Material Instances and Material Instances.
It is also necessary to mention texture atlases and trim sheets.
An atlas is a fairly large image that comprises many textures within a single file. Each portion is sampled according to the UV map of the object referencing the file.
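Pointing an object at its portion of the atlas is just a linear remap of its UVs. A sketch assuming a hypothetical grid-based atlas layout (the cell convention is made up for illustration):

```python
def remap_to_atlas(uvs, cell, grid=4):
    """Squeeze an object's 0..1 UVs into one cell of a grid-based atlas.
    cell: (column, row) of the target cell; grid: cells per side.
    A hypothetical helper, not tied to any engine API."""
    col, row = cell
    scale = 1.0 / grid
    return [((u + col) * scale, (v + row) * scale) for u, v in uvs]

# Full-square UVs squeezed into cell (1, 2) of a 4x4 atlas:
print(remap_to_atlas([(0.0, 0.0), (1.0, 1.0)], (1, 2)))
# [(0.25, 0.5), (0.5, 0.75)]
```

Once every object's UVs live inside the same atlas, they can all share one material, which feeds directly back into the draw-call budget discussed earlier.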
A subset of a texture atlas may be a so-called sprite sheet, which is essentially a table of images loaded according to a coded sequence. For example, enemies in classic boomer shooters like Doom (1993) or Duke Nukem are just sequences of flat, static sprites:

Reference: Baron of Hell
Another very specific type of texture atlas is the trim sheet, generally used to texture flat, more or less uniform surfaces.
Typically, a trim sheet is a set of textures that tile along a single axis, meaning they can be repeated seamlessly in that direction. Metal panels and pipes, smaller wooden surfaces, concrete slabs, and similar assets are all good candidates for trim sheets.
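In UV terms, tiling along one axis simply means letting U wrap while pinning V to one horizontal strip of the sheet. A sketch assuming a hypothetical trim sheet laid out as equal stacked strips:

```python
def trim_uv(u, band, bands=8):
    """Map a tiling coordinate onto one strip of a trim sheet.
    u wraps (u % 1.0 repeats the strip seamlessly along U), while V is
    pinned to the center of strip number `band`. The stacked-strips
    layout is an assumption for illustration."""
    strip = 1.0 / bands
    return (u % 1.0, band * strip + 0.5 * strip)

# Three and three-quarter repetitions into strip 2 of an 8-strip sheet:
print(trim_uv(3.75, band=2, bands=8))  # (0.75, 0.3125)
```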
Indeed, this is an optimization approach that is directly opposite to using UDIMs with multiple textures. Depending on context, either method may be appropriate. You may want to use UDIMs for objects that are meant to be viewed up close, and trim sheets for elements that are more distant and monotonous.
A fascinating example of extreme optimization is the work Marc-Antoine Hamelin did for Mass Effect 3 environments:

According to his old post at Polycount.com, 90% of the meshes in the location shown above were textured using only one 2048×1024 texture to optimize memory usage and free up more memory for light maps.
Top mastery.
Next come LODs, also known as Levels of Detail. This is an asset optimization technique that involves using multiple versions of a 3D model, ranging from highly detailed to very simple, all the way down to flat planes with alpha textures. The closer the camera or character gets to the object, the more detailed the version of the model that is loaded. Ideally, the swap is unnoticeable, but in reality it is often quite obvious.
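At its core the swap is just a distance threshold lookup. A minimal sketch with made-up thresholds (real engines often use screen-space size rather than raw distance):

```python
def pick_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return the LOD index for a given camera distance.
    Thresholds are illustrative: LOD0 up close, down to LOD3
    (e.g. two crossed alpha planes) beyond the last threshold."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

print([pick_lod(d) for d in (5, 20, 50, 200)])  # [0, 1, 2, 3]
```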
There are some tricks to make the swap less pronounced, such as ensuring that the transition happens while the objects are occluded or hidden from the player. This, however, pertains more to proper level design. Faraway or unreachable objects can be kept extremely simple.
These screenshots are from the old game Day of Defeat: Source. Some of the objects are only reachable via console commands, and during normal gameplay they are visible only from a distance.

As one can see, one of the broken trees in the background is just two planes crossing at 90 degrees, with alpha textures outlining the tree:

The game may be 20 years old, but the technique is clearly still usable today.
In essence, optimizing assets and environments for game engines is all about getting rid of anything that is not necessary.
Given that the nasty worldwide shortage of consumer-grade RAM has led to skyrocketing prices (AI be un-praised), it is entirely possible that for the foreseeable future newer software and games will require Herculean optimization efforts to squeeze game assets into limited RAM budgets.
In many ways, these constraints simply reinforce what good real-time asset creation has always been about: discipline, restraint, and informed decision-making at every stage of the pipeline. From draw calls and instancing, through topology, UVs, texel density, materials, atlases, LODs, and scene composition, performance is never the result of a single trick, but of hundreds of small, deliberate optimizations working together. The more complex engines become, the more valuable this foundational knowledge grows.
If you want to keep exploring production-proven workflows, real-world optimization techniques, and detailed breakdowns like this one, make sure to subscribe to the RenderHub blog, where we regularly publish in-depth articles on real-time graphics and professional 3D workflows.