NVIDIA's Steps Towards Photorealism

The gaming industry has seen graphical revolutions since its very dawn, as game companies attempt to one-up each other to deliver the most realistic and eye-pleasing experience possible to their customers. Ray tracing is widely expected to be the industry's next leap towards photorealism. The problem with ray tracing is that it is very computationally expensive, so few games can do global ray tracing. An odd outlier is Minecraft, whose stationary, blocky world allows shortcuts that make ray tracing possible. NVIDIA also took a shortcut, but instead of cutting out steps for blocky worlds, it used AI, in a process called DLSS (Deep Learning Super Sampling), to speed things up. But before we get to how that works, it is useful to know why ray tracing is needed at all and why we can't simply continue with our current methods.

Scene in Minecraft raytraced with Sonic Ether’s Unbelievable Shaders
Image Credit: https://www.sonicether.com/seus/

The very earliest games didn't even have graphics and simply manipulated text in a terminal. Since games gained graphical interfaces of their own, however, graphics quality has been extremely important. The earliest graphical games simply slid 2D sprites across the screen, like the famous Pong, released by Atari in 1972. The industry later moved on to animated sprites, as seen in the Mortal Kombat series and its numerous clones. All of this is simple enough and certainly doesn't require anything as computationally expensive as ray tracing. Graphics engines didn't reach the complexity they have today until they entered the third dimension.

3D rendering relies on polygonal three-dimensional models whose vertices are drawn at specific points on the screen based on their position and viewing angle. A very early example is the tank game Battlezone, released in 1980. Its basic wireframe tanks move around the screen, and the lines drawn depend on the position and angle of each tank's 3D model, all worked out through maths the computer runs constantly while you play. This process of going from positioned 3D models to their appearance on the screen is called rasterization. Drawing models using rasterization is simple enough; once you start to shade them, however, the complexity rises rapidly. The problem becomes apparent if you look closely at how typical rasterization engines work: they start simple, but they pile trick upon trick to simulate light and shading. An artist may create a 3D model only to find that the engine doesn't handle it properly and it isn't lit the way it would be in real life. This is where ray tracing becomes much more useful.
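The core of the rasterization step described above is perspective projection: mapping a 3D vertex to a 2D pixel. Here is a minimal sketch in Python, assuming a camera at the origin looking down the negative z-axis; the function name and parameters are illustrative, not any engine's actual API.

```python
import math

def project(point, fov_deg=90.0, width=640, height=480):
    """Project a 3D camera-space point to 2D screen coordinates.

    A simplified sketch of the rasterization projection step:
    the camera sits at the origin looking down -z.
    """
    x, y, z = point
    if z >= 0:
        return None  # point is behind the camera; skip it
    # Perspective divide: farther points move toward the screen centre.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    ndc_x = (f * x) / -z  # normalised device coordinates in [-1, 1]
    ndc_y = (f * y) / -z
    # Map to pixel coordinates (screen y grows downward).
    sx = (ndc_x + 1) / 2 * width
    sy = (1 - ndc_y) / 2 * height
    return sx, sy

# Two vertices of a hypothetical tank model: the farther vertex lands
# closer to the screen centre, which is what creates the 3D illusion.
near = project((1.0, 1.0, -2.0))
far = project((1.0, 1.0, -10.0))
```

Repeating this for every vertex of every model, every frame, is exactly the "maths that is constantly run" while playing a game like Battlezone.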
Ray tracing renders a model by essentially bouncing light off it, so what you see at the end is as natural as it gets. For this reason, similar methods are used in the movie industry to make artists' lives easier. This matters for AAA games too: a large share of their budgets goes to artists, and making their work easier leads to better-quality graphics.
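The geometric heart of "bouncing light" is testing where a ray hits an object. The classic case is a ray against a sphere, solved as a quadratic. This is a minimal sketch of that test, not production renderer code; the function name is hypothetical.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    which is a quadratic in t.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# A ray fired from the camera straight at a unit sphere 5 units away
# hits its front surface at distance 4.
hit = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
```

A real ray tracer runs tests like this for every ray against every object, then spawns new rays at each hit point for reflections and shadows, which is where the computational cost explodes.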

A raytraced image - the light in this scene works exactly as it would in real life.
Image Credit: https://blogs.nvidia.com

Now that you know why ray tracing is needed, we can move on to how DLSS works. In ray tracing, many light rays are fired through each pixel on the screen to give a clear image. With too few, the image comes out noisy, similar to how the dark parts of a photo taken on a camera have noise. Often hundreds of rays per pixel must be fired and simulated for a clean result, which is extremely computationally heavy. NVIDIA cuts the computation down a lot by employing AI: far fewer rays are fired, producing a rough, noisy image, which is then passed through a deep learning model that reconstructs it into the final product displayed on the screen. This does wonders for performance and makes games with ray tracing far more feasible.
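The reason few rays give a noisy image is that each ray is effectively one random sample of the light reaching a pixel, and averaging more samples reduces the noise. The toy simulation below illustrates that relationship; it is a stand-in for Monte Carlo sampling in a path tracer, not NVIDIA's actual pipeline, and the names are illustrative.

```python
import random

def estimate_pixel(true_brightness, samples):
    """Monte Carlo estimate of a pixel's brightness.

    Each 'ray' is modelled as a noisy measurement of the true value;
    averaging more rays makes the estimate converge.
    """
    total = 0.0
    for _ in range(samples):
        total += true_brightness + random.uniform(-0.5, 0.5)  # one noisy ray
    return total / samples

random.seed(0)
true_value = 0.7
noisy = estimate_pixel(true_value, 4)    # few rays per pixel: grainy
clean = estimate_pixel(true_value, 500)  # many rays: converges to the truth
```

DLSS's trick, in these terms, is to stop at the cheap, grainy estimate and let a trained neural network fill in what the expensive, converged image would have looked like.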

An example of the dramatic performance improvements with DLSS turned on.
Image Credit: https://www.nvidia.com/

As part of its bid to make ray tracing more feasible, NVIDIA recently announced DLSS 2.0, which aims to make the images DLSS outputs more visually true to life, with fewer artifacts, while still maintaining a claimed 70% increase in performance.

The rendered image in DLSS 2.0 is much clearer and less grainy. It is almost as good as the original.
Image Credit: https://www.nvidia.com/

There is little doubt that ray tracing is the future of gaming, given how realistically it can render light and shading. The question now is when it will be used in the majority of games. That day is likely still years away, given how much computation is required even with DLSS, but with NVIDIA's innovations, the day when ray-traced engines run in the average household, and live-rendered photorealism is the norm, draws ever closer.