What Is 3D Gaussian Splatting and Why Industries Are Rapidly Adopting It

Article by Yuri Ilyin
In simple terms, Gaussian splatting is a way to turn ordinary photos or video into highly realistic, explorable 3D scenes.
Gaussian splatting, or more formally 3D Gaussian Splatting for Real-Time Radiance Field Rendering, is a volume rendering technique that renders volumetric scene representations directly, without first converting them into surface or line primitives. The scene is stored as a radiance field built from a sparse cloud of 3D Gaussians, which can be rendered in real time at a very high level of realism.
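At render time, each 3D Gaussian is projected to a 2D splat on screen, the splats covering a pixel are sorted by depth, and their colors are blended front to back. A minimal sketch of that compositing step (illustrative Python only, not the reference CUDA implementation; the early-exit threshold here is an assumption):

```python
import numpy as np

def composite_splats(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted Gaussians at one pixel.

    colors: (N, 3) array of per-Gaussian RGB values for this pixel
    alphas: (N,) array of per-Gaussian opacities after 2D projection
    """
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed by closer splats
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # stop once the pixel is effectively opaque
            break
    return pixel

# A mostly opaque red splat in front of a green one: the red dominates
result = composite_splats(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                          np.array([0.8, 0.5]))
```

The real renderer performs this blending for every pixel in parallel on the GPU, which is what makes the above-real-time frame rates possible.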
The technique was originally introduced as splatting by Lee Westover in the early 1990s. In 2002, EWA Splatting further refined point-based rendering through elliptical weighted average filtering. In its present form, however, 3D Gaussian Splatting originates from a paper published in mid-2023 by the French research group Inria, which was also presented at SIGGRAPH the same year.
To understand why this technology is spreading so fast, RenderHub sat down with Michael Rubloff, the Founder and Managing Editor of RadianceFields.com, to discuss how Gaussian splatting is moving from research into real-world production.
Michael Rubloff runs RadianceFields.com, a platform dedicated to covering the evolution of radiance field technologies. The platform has been operating in its current form since 2022, when Rubloff identified a gap in media coverage around these emerging technologies.
"I have been writing almost daily about lifelike 3D for a little over three years. My role is to stay on top of research and real-world deployment, covering hardware, software, and applied projects using radiance fields," Rubloff told us.
What Are Radiance Fields
At the core of Gaussian splatting lies the concept of a radiance field. While the term may sound poetic, a radiance field is a mathematical representation that assigns a radiance, or color intensity, value to every point in a given space and every viewing direction through it. In this context, radiance describes the amount of light that passes through or is emitted from a specific area and falls within a given solid angle, effectively quantifying how bright a surface appears when viewed from different directions.
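In code terms, a radiance field is simply a function from a 3D position and a viewing direction to an RGB radiance value. The toy field below is entirely hypothetical, included only to make that signature concrete:

```python
import numpy as np

def toy_radiance_field(position, direction):
    """A radiance field maps (3D point, viewing direction) -> RGB radiance.

    This is a made-up stand-in: a reddish glow whose brightness falls off
    with distance from the origin and dims when viewed edge-on.
    """
    direction = direction / np.linalg.norm(direction)
    brightness = np.exp(-np.dot(position, position))  # fades with distance
    view_factor = abs(direction[2])                   # view-dependent term
    return brightness * view_factor * np.array([1.0, 0.2, 0.2])

# Same point, two viewing directions: the returned radiance differs,
# which is exactly what makes the representation view-dependent.
head_on = toy_radiance_field(np.zeros(3), np.array([0.0, 0.0, 1.0]))
grazing = toy_radiance_field(np.zeros(3), np.array([1.0, 0.0, 0.1]))
```

Reconstruction methods like NeRF and Gaussian splatting differ mainly in how they represent and fit such a function from input photos.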
In practical terms, this radiance field representation forms the basis for a fundamentally different approach to 3D reconstruction.
"What Gaussian splatting enables is taking a series of ordinary images or videos as input and reconstructing very lifelike 3D from it," Rubloff said. "You can reconstruct static 3D, which is more akin to photography, but it is also very possible to reconstruct dynamic 3D, which is more like video."
Advantages of Gaussian Splatting
One of the key advantages of Gaussian splatting is that, in addition to high reconstruction quality, it supports view-dependent effects.
As we move around in the real world and look at objects from different angles, our perception of light shifts accordingly. We expect light to behave in a very specific way, and Gaussian splatting captures this behavior accurately. Even more significant, it can reconstruct highly reflective objects such as mirrors, glass, and water surfaces, which more traditional methods like photogrammetry handle poorly.
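In 3D Gaussian Splatting, this view dependence is stored per Gaussian as spherical harmonics (SH) coefficients, so a Gaussian's color changes with the viewing direction. A simplified sketch of evaluating just the degree-0 and degree-1 terms (the constants follow the standard real SH basis; the original implementation goes up to degree 3):

```python
import numpy as np

# Real spherical harmonics basis constants for degrees 0 and 1
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_color(coeffs, direction):
    """Evaluate degree-0/1 spherical harmonics into a view-dependent RGB color.

    coeffs: (4, 3) array -- one constant term plus three degree-1 terms
            per color channel, stored per Gaussian.
    direction: viewing direction from the camera toward the Gaussian.
    """
    x, y, z = direction / np.linalg.norm(direction)
    color = SH_C0 * coeffs[0]
    color += -SH_C1 * y * coeffs[1] + SH_C1 * z * coeffs[2] - SH_C1 * x * coeffs[3]
    return np.clip(color + 0.5, 0.0, 1.0)  # shift and clamp into [0, 1]
```

Because the degree-1 (and higher) terms depend on the viewing direction, highlights and reflections shift naturally as the camera moves, which is what the paragraph above describes.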
Real-time rendering is critical for applications such as virtual reality, location scouting, robotics, and simulation, where users need to move freely through a scene without delays. Another major advantage of Gaussian splatting is that it can render at "well above real-time rates," as Rubloff puts it.
This gave Gaussian splatting a significant early advantage over its nearest competitor, NeRF, or Neural Radiance Fields. Around 2023, NeRF pipelines were notably slower and, as a neural network-based approach, required training for each new scene.
While NeRF-based methods have since become much faster, the early performance advantages of Gaussian splatting played an important role in its rapid adoption.
"Gaussian splatting checked enough boxes that people across industries began to realize their work could now be done in very lifelike 3D," according to Rubloff.
How Industries Are Adopting Gaussian Splatting
Starting in late 2023 and continuing through 2024 and 2025, several industries began adopting Gaussian splatting for practical use.
Among the earliest adopters were architecture, engineering, and construction, as well as the geospatial industry.
These industries had previously relied on photogrammetry, which struggles with thin structures and, in particular, with reflective materials, foliage, and water. By contrast, Gaussian splatting is able to reconstruct these elements accurately out of the box.
In the geospatial industry, the Khronos Group, the standards consortium behind the glTF format, recently announced a baseline glTF extension with support for Gaussian splatting. This development further accelerates the technology's adoption within the geospatial ecosystem.
Media and entertainment were also quick to adopt Gaussian splatting.
One major use case is location scouting. Instead of flying multiple team members to a location on the other side of the world, productions can now send a single operator equipped with an iPhone, a 360-degree camera, or a SLAM-based camera designed specifically for Gaussian splatting capture.
These SLAM-based cameras represent a rapidly evolving class of hardware. One example is the XGRIDS PortalCam, which features four cameras and a LiDAR unit and is designed to capture highly lifelike 3D environments in a very short amount of time.
Another important use case in media and entertainment is post-scouting pre-visualization. Once a location is selected, the captured dataset can be quickly shared with directors and production teams. Loading this data into tools such as Unreal Engine enables detailed planning of camera placement, blocking, and shooting logistics well before the crew arrives on site.
A more advanced example of this workflow can be seen in last year's Superman film. In the film, Superman's parents appear as hologram-like projections delivering a message.
To achieve this, the actors were brought to Infinite Realities in Ipswich, UK, where they performed their lines inside a rig of roughly 200 cameras. The performance was recorded over the course of about an hour, capturing the actors from every possible angle.
The captured data was then reconstructed into complete dynamic 3D without the intermediate processing steps typically required by traditional motion capture or photogrammetry workflows. Because the scene existed fully in 3D, the production team was able to adjust elements such as camera movement, angles, and focal lengths during post-production with minimal friction.
"If you look closely, when the hologram in Superman breaks apart, you can actually see some of the Gaussians at the breaking points. The filmmakers left this intentionally because it felt like a natural way for a dynamic reconstruction to dissolve," Rubloff said.
The data was processed in SideFX Houdini using the GSOPs plugin from CG Nomads and then brought into Nuke. At the time, Gaussian splatting support in Nuke was provided through a third-party plugin developed by Irrealix. As of Foundry's Nuke 17.0 beta, however, Gaussian splatting is now supported natively, although the Irrealix plugin remains available.
Gaussian Splatting in Robotics and Simulation
Another industry benefiting significantly from Gaussian splatting is robotics and simulation.
Scene reconstructions have become so lifelike that robotic systems can treat them as functional stand-ins for physical environments. For example, a developer of autonomous vehicles can capture an entire city, reconstruct it in 3D, introduce synthetic traffic, and simulate complex driving scenarios without deploying self-driving cars in the real world. This approach is already in use and helps reduce both testing costs and operational risk.
NVIDIA recently announced a platform called Alpamayo at CES, described as an open portfolio of AI models, simulation frameworks, and physical AI datasets designed to accelerate the development of autonomous systems. The technology is intended to support Level 4 autonomous vehicles, which operate with minimal human intervention.
Alpamayo incorporates a variant of Gaussian splatting known as the 3D Gaussian Unscented Transform.
Another example comes from Third Dimension AI, whose product SuperSim ingests data such as LiDAR and RGB imagery to rapidly generate high-fidelity environments for robotic simulation.
Gaussian Splatting vs Mesh-Based 3D
Radiance field technologies do not produce traditional mesh-based 3D models in the way photogrammetry does. Gaussian splatting represents scenes as radiance fields rather than explicit surfaces, making it fundamentally different from mesh reconstruction techniques. In practice, photogrammetry continues to struggle with highly reflective objects and thin structures, areas where Gaussian splatting performs significantly better.
Despite these differences, the two approaches are not mutually exclusive. Photogrammetry and radiance fields are increasingly viewed as complementary rather than competing technologies, with each suited to different use cases. Rather than fully replacing mesh-based workflows, Gaussian splatting is beginning to integrate with them.
Early examples of this hybrid approach are already emerging. Tools such as Houdini and Blender now support attaching skeletons to Gaussian splats, enabling basic rigging and animation, although native support remains in its early stages.
In 2024, a research paper titled Gaussian Frosting introduced a method for combining traditional mesh-based surfaces with an additional layer of 3D Gaussians. This approach allows flat geometry and complex volumetric detail, such as hair or vegetation, to coexist within a single representation. A dedicated Blender plugin based on this research was also released in 2024, further demonstrating early attempts to bridge mesh-based and radiance field workflows. More recently, the same author introduced MiLo (Mesh-In-the-Loop Gaussian Splatting), a follow-up paper that proposes a differentiable mesh extraction framework for 3D Gaussian Splatting, enabling the optimization of meshes directly during training and improving surface quality for animation and simulation applications.
Looking ahead, Michael Rubloff emphasizes that the field remains open-ended rather than settled. "I expect native Gaussian splatting support in Blender relatively soon, and that will be a major milestone," he said. "We do not yet know which representation will ultimately be best. It is very possible that NeRF becomes more popular, that Gaussian splatting keeps dominating, or that a completely new representation emerges. Personally, I am agnostic. I think NeRF can look more lifelike due to ray marching and color accuracy, but I am mainly excited that lifelike 3D now exists."
Gaussian Splatting for VR Applications
Gaussian splatting and virtual reality form a natural pairing. On a practical level, this is already visible in consumer hardware. Meta has integrated a beta feature called Hyperscape Capture into its Quest 3 and 3S headsets, allowing scenes to be captured directly using the headset's onboard cameras. The captured data is processed on Meta's backend and rendered in real time with high visual fidelity, producing reconstructions that are significantly more detailed than one would expect given the limitations of the capture hardware.
This development signals a broader shift. Gaussian splatting is rapidly moving away from reliance on expensive, specialized capture systems and toward more accessible, consumer-level solutions, lowering the barrier to entry and enabling wider adoption across industries.
An increasingly mature software and hardware ecosystem has emerged alongside this shift. Both local and cloud-based platforms now support Gaussian splatting workflows, including tools such as Jawset Postshot, Lichtfeld Studio, Brush, Teleport by Varjo, and KIRI Engine.
The Future of Gaussian Splatting
While Gaussian splatting is already seeing rapid adoption, its broader impact is likely to extend well beyond current 3D workflows. At a fundamental level, the technology points toward a shift away from flat, two-dimensional imaging and toward spatially accurate, interactive representations of reality.
Michael Rubloff believes this transition will affect far more than traditional visualization industries. "Looking forward, I believe that any industry currently using 2D imaging will benefit from this technology and eventually shift toward 3D," he says, pointing to fields such as education, e-commerce, insurance, archaeology, and architecture as natural candidates.
Another important direction lies in the convergence of radiance fields with artificial intelligence. As Gaussian splatting produces increasingly rich spatial data, it will become possible to interact with reconstructed scenes in more intuitive ways. "Being able to talk to a computer about the information contained inside a radiance field feels overwhelmingly likely," Rubloff notes, opening the door to interactive learning environments, guided exploration, and AI-assisted interpretation of 3D spaces.
At the same time, the long-term landscape remains open. Gaussian splatting is advancing alongside other radiance field techniques such as NeRF, and it is still unclear which representations will ultimately dominate or how they will coexist.
Final Thoughts
Gaussian splatting signals a broader shift in how reality may be captured and experienced, extending beyond incremental improvements in 3D workflows toward a new class of imaging altogether. As Michael Rubloff notes, "Photography and video have existed for roughly 200 years. There is no reason they should be the final form of how we capture reality. This technology is only at its early rise right now. We have not seen a plateau yet."
How do you see Gaussian splatting fitting into your own workflow or industry? Join the conversation in the comments.