NVIDIA DLSS technology in games: what it is and why it is needed

The DLSS image enhancement algorithm is a hallmark of NVIDIA, and its operation is always of particular interest to users. With DLSS, games run faster: FPS grows and higher resolutions become playable, which directly affects the “playability” of the content and contributes to deeper immersion in it. In this article, we will look at the essence of DLSS technology and its areas of application in the gaming industry.

Ray tracing in games, one might say, falls short in performance and frame rate. This is especially noticeable in the RTX 20 family of video cards: ray tracing is advertised and physically present, but it does not deliver a real breakthrough in game performance. A new product from NVIDIA is meant to fix the situation: the completely new Ampere graphics processor and the RTX 30 generation of graphics cards built on it. The architecture of the new processor is described in great detail in the DNS Club blog article.

DLSS uncut
An experienced gamer, especially a connoisseur of technical solutions from the “green” camp, does not need the essence of DLSS supersampling explained. For those who are just taking their first steps in the world of computer games and are still looking for the optimal settings for their hardware, it will be useful to get acquainted with how the DLSS algorithm works.

DLSS (Deep Learning Super Sampling) literally means “super sampling based on deep learning.” At the time of this writing, two versions of the algorithm exist.

The difference between the versions of the algorithm lies not in the logic of its operation, but in its physical implementation.

In the case of DLSS 1.0, NVIDIA invited game developers to “run” the graphics scenes of their games through its “supercomputer” endowed with artificial intelligence. This approach was time-consuming and, as they say, “did not take off,” since most game developers simply ignored it.

The second version of the algorithm, DLSS 2.0, became more “client-oriented”: NVIDIA believed in the success of the technology and built tensor cores into its video cards, effectively endowing its graphics adapters with artificial intelligence of their own.

The essence of the algorithm is to obtain a high-quality, high-resolution image (frame) from its reduced counterpart. Without going into the jungle of tensor calculations and the rather cumbersome matrix mathematics, the DLSS algorithm can be described in simplified form as follows.

When rendering simple geometric shapes (the example uses a triangle) from small source frames, the determining factor in the quality of the final result is the subpixel mask. For example, when a 4×4 mask is used to draw a triangle, the end result bears little resemblance to the original shape. When the number of samples is merely quadrupled, to an 8×8 grid, the final image already looks much more like the original.
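To make the subpixel-mask idea tangible, here is a minimal sketch (plain Python/NumPy, with an arbitrarily chosen triangle) that rasterizes the same triangle onto a 4×4 and an 8×8 sample grid and prints the resulting coverage masks. It only illustrates the “more samples, more faithful shape” point, not DLSS itself.

```python
# Rasterize one fixed triangle onto a 4x4 and an 8x8 sample grid and compare
# how well the coverage masks approximate the original shape. Purely
# illustrative -- this is not DLSS, only the sampling-density idea.
import numpy as np

def coverage_mask(grid_size: int) -> np.ndarray:
    """Mark the cells whose centers lie inside a fixed triangle with
    vertices (0.1, 0.1), (0.9, 0.2), (0.5, 0.9) in normalized coordinates."""
    a, b, c = (0.1, 0.1), (0.9, 0.2), (0.5, 0.9)

    def edge(p, q, r):
        # Signed area of triangle (p, q, r); its sign tells which side of pq r is on.
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    mask = np.zeros((grid_size, grid_size), dtype=int)
    for i in range(grid_size):
        for j in range(grid_size):
            # Sample at the center of cell (i, j), normalized to [0, 1].
            pt = ((j + 0.5) / grid_size, (i + 0.5) / grid_size)
            d1, d2, d3 = edge(a, b, pt), edge(b, c, pt), edge(c, a, pt)
            inside = (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)
            mask[i, j] = int(inside)
    return mask

for n in (4, 8):
    m = coverage_mask(n)
    print(f"{n}x{n} grid, {m.sum()} of {n * n} samples inside the triangle:")
    print("\n".join("".join("#" if v else "." for v in row) for row in m))
```

Running it shows that the 8×8 mask traces the slanted edges of the triangle noticeably better than the blocky 4×4 one, which is exactly the effect the article describes.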

This, in a nutshell, is the basic “mechanics” of the anti-aliasing algorithm.

The basic principle of DLSS is to convert low-resolution images into higher-resolution frames, up to 4K, without losing the visual quality of the game world.
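As a purely conceptual sketch of that input/output relationship, the snippet below (NumPy only, with random data standing in for a real frame) blows a low-resolution frame up to a higher resolution by simple pixel repetition. DLSS replaces this naive step with a neural network that actually reconstructs detail, but the shape of the operation is the same: a small frame in, a large frame out.

```python
# A small frame goes in, a larger frame comes out. Here the "reconstruction"
# is just nearest-neighbor pixel repetition; DLSS replaces this step with a
# neural network that restores real detail. Frame data is random, for illustration.
import numpy as np

def naive_upscale(frame: np.ndarray, factor: int) -> np.ndarray:
    """Upscale an (H, W, 3) frame by an integer factor via pixel repetition."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

low_res = np.random.rand(1080, 1920, 3).astype(np.float32)   # e.g. a 1080p internal render
high_res = naive_upscale(low_res, 2)                          # -> a 2160p output frame

print(low_res.shape, "->", high_res.shape)                    # (1080, 1920, 3) -> (2160, 3840, 3)
```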

This approach to rendering game scenes provides the end user with several very important advantages:

reduced load on the graphics processor, since it has to process frames that are smaller in size and volume. At the same time, the video card heats up noticeably less, which is the key to long, failure-free operation;
increased FPS, since it is much easier for the hardware to process several small frames per unit of time. Tests show that enabling DLSS raises the frame rate by a factor of 1.5-2 (a rough pixel-count estimate is sketched just below this list).
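A rough back-of-the-envelope calculation illustrates where the saving comes from; the internal resolution below is an illustrative choice, not a fixed DLSS setting.

```python
# Back-of-the-envelope estimate of the shading savings (illustrative only):
# rendering internally at 1440p and upscaling the result to 4K means the GPU
# shades about 2.25x fewer pixels per frame. The upscaling pass itself is not
# free, which is why the observed FPS gain is closer to 1.5-2x than to 2.25x.
native_4k = 3840 * 2160      # pixels shaded per frame at native 4K
internal  = 2560 * 1440      # pixels shaded at an assumed internal resolution
print(f"{native_4k / internal:.2f}x fewer pixels to shade per frame")   # 2.25x
```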

Artificial intelligence in the service of creating graphic scenes
The DLSS anti-aliasing algorithm is impossible without the artificial intelligence built into every new NVIDIA product. It is the neural network that works out how to reconstruct particular game scenes, based on millions of training runs over reference images and polygons. In the first version of DLSS, frame preprocessing was carried out on NVIDIA’s own computing hardware for specific projects, such as Metro: Exodus and Battlefield V.


The final “recommendations” for improving the scenes were then shipped in updated driver versions for specific video card models.

In the second generation, DLSS 2.0, the lion’s share of this work is handed over to the tensor cores of the graphics card itself. This is the fundamental difference between the first and second generations of the technology, and it opens up a practically limitless field for game developers, who no longer need to train a unique neural network on NVIDIA’s servers and “run” the polygons of their games through it. It is enough to adapt their code for tensor calculations and let a “universal” neural network reconstruct the scenes. This greatly simplifies the developer’s life and speeds up the release of new titles.
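To picture where such a pass sits in an engine, here is a hypothetical sketch of a render loop with an upscaling step. The Upscaler class, its evaluate method and the render_low_res helper are invented for illustration; the inputs (low-resolution color, motion vectors, depth) mirror what DLSS 2.0 is publicly known to consume, but real integrations go through NVIDIA’s own libraries, and the “network” here is stubbed out with pixel repetition.

```python
# Hypothetical per-frame upscaling pass in a render loop. Not NVIDIA's SDK:
# class and function names are invented for illustration only.
import numpy as np

class Upscaler:
    """Stand-in for a tensor-core upscaling pass."""
    def __init__(self, scale: int = 2):
        self.scale = scale

    def evaluate(self, color: np.ndarray, motion: np.ndarray, depth: np.ndarray) -> np.ndarray:
        # A real implementation would feed color, motion vectors, depth and
        # frame history into a neural network; here we just repeat pixels.
        return np.repeat(np.repeat(color, self.scale, axis=0), self.scale, axis=1)

def render_low_res(frame_index: int, size=(1080, 1920)):
    """Placeholder for the engine's low-resolution render pass."""
    h, w = size
    color  = np.random.rand(h, w, 3).astype(np.float32)
    motion = np.zeros((h, w, 2), dtype=np.float32)   # per-pixel motion vectors
    depth  = np.ones((h, w), dtype=np.float32)       # depth buffer
    return color, motion, depth

upscaler = Upscaler(scale=2)
for frame in range(3):                               # tiny demo loop
    color, motion, depth = render_low_res(frame)
    final = upscaler.evaluate(color, motion, depth)
    print(f"frame {frame}: {color.shape} -> {final.shape}")
```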

Ampere topology
The new line of video cards does not rest on what previous generations of adapters achieved. Each new NVIDIA card is based on the Ampere processor, manufactured on an 8 nm process, which allows more semiconductor components to be placed on the same die area. For the end user, this means higher graphics-chip performance in the same physical dimensions.

If we compare the technical specifications of the new adapters with the previous generation, we can see that the new models actually have fewer tensor cores. A nagging doubt may creep in: “Is everything really that good? Where does the performance increase come from?”


The answer is quite simple. The new line uses third-generation tensor cores, whose computing power is several times higher than that of their predecessors.
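A quick calculation shows why fewer cores do not automatically mean less throughput. The core counts below match the published figures for two representative cards, while the per-core factor is an illustrative assumption (roughly the marketed sparsity-accelerated gain), not an official specification.

```python
# Aggregate tensor throughput is core count x per-core rate, so fewer but
# faster cores can still come out ahead. Per-core factor is an illustrative
# assumption, not an official figure.
def total_throughput(num_cores: int, per_core_rate: float) -> float:
    """Aggregate tensor throughput = number of cores x per-core rate."""
    return num_cores * per_core_rate

old = total_throughput(544, 1.0)   # RTX 2080 Ti, 2nd-gen tensor cores (baseline rate)
new = total_throughput(272, 4.0)   # RTX 3080, 3rd-gen cores, assumed 4x per-core rate
print(f"{new / old:.1f}x the aggregate throughput despite half the cores")   # 2.0x
```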


If you compare the specifications of the RTX 20 and RTX 30 adapters head-on, you will notice that the new line has fewer tensor cores. But thanks to their per-core performance, coupled with the updated calculation algorithm, the rendering of each scene has sped up significantly. As a result, the player gets high and, most importantly, stable FPS, can play at high resolutions with maximum graphics settings, and content producers can seriously think about releasing content in 8K resolution.

The following illustration clearly shows the performance gain in current games (at the time of this writing).


The diagrams show that a top-end video card with DLSS enabled delivers a two- or even three-fold performance increase in graphically demanding games.
While there is no official information yet, it is reasonable to assume that the new adapters will sooner or later receive an updated DLSS algorithm, version 3.0, capable of intelligently upscaling game scenes to 8K in real time. But it is too early to talk about that: for the DLSS 3.0 era to arrive, at least every second or third gamer would need an 8K monitor on their desk.