How to choose a video card

A computer’s gaming potential is determined primarily by the video card installed in it. Its choice therefore deserves no less care than buying a central processor or motherboard.

In this guide, we will once again go over what you really should pay attention to when choosing a video card, and which common beliefs are in fact useless or even harmful myths.

What to choose – Nvidia or AMD
Nvidia and AMD are not graphics cards. They are, in fact, the names of companies.

When you choose a video card, you are choosing a specific product at a specific price. Focus on its real characteristics: cost, FPS in games, overclocking headroom and the performance gain you get from it.
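If you want to make that comparison concrete, a rough “price per frame” calculation is a handy starting point. Below is a minimal sketch; the prices and FPS figures are placeholders, so substitute current local prices and benchmark results for the games you actually play:

```python
# Rough "price per frame" comparison of two cards.
# Prices and FPS figures below are placeholders for illustration only.
cards = {
    "Radeon RX 5500 XT 4GB": {"price": 15000, "avg_fps": 60},
    "GeForce GTX 1650 Super": {"price": 15000, "avg_fps": 61},
}

for name, data in cards.items():
    rub_per_fps = data["price"] / data["avg_fps"]
    print(f"{name}: {rub_per_fps:.0f} rubles per frame")
```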

For clarity, let’s look at a specific example:

The Radeon RX 5500 XT with 4GB of onboard memory costs the same as the GeForce GTX 1650 Super. Their power consumption is the same, performance is the same.

Obviously, in this case, you can choose one or the other card. And whatever you choose, you have absolutely nothing to lose.

And here is the Radeon RX 5500 XT in the 8 GB version, which costs the same as the GeForce GTX 1660:

At the same price, the AMD card is already a little slower. True, the difference can hardly be called significant, and it certainly will not let the GeForce play at higher settings or resolutions.

But what is the point of paying the same money for a product that is, however marginally, weaker? Now, if there were a price difference of 2-3 thousand rubles that could be “invested” in other components, that would be another matter.

Relying on mythical criteria like “brand reputation”, “software stability” or an “especially bright, juicy picture” will eventually lead you to buy a video card that either cannot cope with your tasks or turns out worse than the competing solution.

Which company to buy a card from – Gigabyte, Asus or MSI
Each vendor has several product lines aimed at the budget, mid-range and higher price segments. And this also applies to cards based on the same graphics chip.

There are cards that differ from the reference design only by a sticker and the bundle, there are products aimed at simplifying and cheapening the original design, and there are also deeply reworked devices built for enthusiasts and overclockers.

Accordingly, cards of each of these three types will differ in their characteristics. For example, a hypothetical Inno3D card from the iChill line, with a massive cooler developed by Arctic on special order, will always run cooler and quieter and reach higher frequencies than a hypothetical Asus TURBO card that uses a reference blower for cooling. And no amount of “brand reputation” will ever change that.

However, do not think that this works in only one direction: an Asus ROG STRIX card will always be quieter, cooler and better suited for overclocking than, for example, a Palit StormX on the same graphics chip. And a Palit JetStream or Super JetStream, in turn, may be in no way inferior to the Asus ROG STRIX while beating it on price.

You need a gaming video card. Will a 4 GB card work?
The question can be rephrased as follows: “I want to buy a car that accelerates to 100 km/h in 7 seconds. Is a trunk volume of 520 liters enough for that?” Which, you see, is completely absurd: trunk volume is obviously not a factor influencing a car’s acceleration.

One of the common misconceptions about graphics cards is that their performance is directly related to the amount of memory installed.

This is partly true, but only partly. The memory installed on the video card is a kind of “trunk” that stores all the data generated and used by the GPU: game textures, geometry vertices and buffers. If this capacity is enough for a particular game, great: everything will work as intended.

If the amount of memory is too small, problems may arise. On modern platforms, part of the system RAM will be allocated for graphics needs – however, DDR4 is slower than GDDR5, let alone GDDR6, and falling back on it can lead to temporary drops in performance, which is not very comfortable.
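You can watch this happen on your own system. Here is a minimal sketch for checking current VRAM usage, assuming an Nvidia card with the standard nvidia-smi utility available in PATH (on AMD cards, similar readings are available through Radeon Software or MSI Afterburner):

```python
import subprocess

# Query current and total VRAM via the standard nvidia-smi CLI.
# (Assumes a single GPU; with several, one CSV line is printed per GPU.)
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)
used_mb, total_mb = map(int, result.stdout.strip().split(", "))
print(f"VRAM in use: {used_mb} / {total_mb} MB "
      f"({100 * used_mb / total_mb:.0f}%)")
```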

Again, a specific example:

Horizon Zero Dawn, screen resolution – FullHD, graphics settings – high:

As you can see, the 8 GB version of the Radeon RX 5500 XT gets by with its own memory, while the 4 GB RX 5500 XT and the GTX 1650 Super are forced to use some of the system RAM.

What this leads to can be seen in the FPS counter, and in particular in the 1% lows. Both cards with 4 gigabytes of memory on board sag dramatically, which puts the gameplay far from anything resembling comfort. On the other hand, this example covers one screen resolution and one set of settings, at which none of the three cards delivers a stable 60 frames per second in average FPS. So the settings will have to be lowered one way or another.
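A quick aside on the metric itself: “1% lows” are computed from per-frame render times, not from the average. A minimal sketch of one common way to derive both numbers from a recorded list of frame times (the sample data is made up):

```python
# Average FPS and "1% low" FPS from a list of frame times.
# frame_times_ms is what tools like CapFrameX or OCAT record;
# the values here are invented: mostly smooth frames plus a few spikes.
frame_times_ms = [16.7] * 950 + [40.0] * 50

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# One common definition: average FPS over the slowest 1% of frames.
slowest = sorted(frame_times_ms, reverse=True)
slowest_1pct = slowest[: max(1, len(slowest) // 100)]
low_1pct_fps = 1000 * len(slowest_1pct) / sum(slowest_1pct)

print(f"Average FPS: {avg_fps:.1f}")       # ~56.0
print(f"1% low FPS:  {low_1pct_fps:.1f}")  # 25.0 - the stutter you actually feel
```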

And what will happen on medium and low graphics settings in the same scene?

As you can see, as the graphics settings are lowered, the video memory consumption decreases. Therefore, even in this demanding game, compromise parameters can be found.

So how much onboard memory should there be in the end? Neither too much nor too little: there should be enough memory to play at the settings at which the GPU itself is capable of delivering a comfortable FPS level.

Let’s look at this with a specific example: a scene from the built-in benchmark of the game Assassin’s Creed: Valhalla running on a Radeon RX 6500 XT with 4 gigabytes of memory on board:

The example on the left uses the default High settings profile. Although the average FPS holds at around 34 frames, constant dips in the 1% and 0.1% lows and an unstable frametime graph make gaming at these settings simply impossible. And the point is not only the lack of memory, but also the fact that an entry-level video card is not meant for these graphics settings in the first place.

On the right is the same game and the same graphics card, but with the settings lowered to medium. Video memory consumption drops by a full gigabyte, and on top of that the load on the GPU itself is also significantly reduced: 37 and 49 frames in the 0.1% and 1% lows, 54 and 63 frames for minimum and average FPS. Absolutely comfortable gameplay.

The amount of memory is an important characteristic of a graphics card, but it cannot be considered in isolation from GPU performance, from the games you play and from the settings you expect to use.

And you certainly should not assume that a GeForce GT 730 and a GeForce GTX 1650 Super, each with the same 4 gigabytes of memory, will let you use the same settings and get the same performance.

So how much video memory do you really need?

If we are talking about modern gaming video cards that are REALLY capable of running new releases at high and maximum settings at least in FullHD, 6 or 8 gigabytes of memory on board will be required. For QuadHD and 4K, that is the bare minimum unless you drop the settings to medium. But if you do not chase maximum settings, or prefer good old games from before the mid-2010s to new releases, then 4 gigabytes will be enough for you.

For example, here is how much video memory The Witcher 3 (released in 2015) consumes at maximum graphics settings in FullHD and QuadHD:

As for entry-level video cards, anything beyond 2 gigabytes is largely redundant for them: that is enough for media playback, browser hardware acceleration and OS interface effects, while their GPUs are not designed for AAA-class games in the first place.

Should I choose cards with a bus of 256 bits instead of 128 bits
Comparing video cards by memory bus “width” is just another attempt to reduce an extensive and confusing set of parameters to one simple number. Yes, a 256-bit bus can provide more bandwidth than a 128-bit one, but you can only compare them directly if all other parameters are equal. Different memory types – for example, GDDR5 and GDDR6 – have completely different transfer rates and even different operating frequency ranges, which already makes a head-to-head comparison meaningless.
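The arithmetic behind this is simple: peak memory bandwidth is the bus width in bytes multiplied by the effective data rate per pin. A small sketch with typical (by no means universal) data rates shows how a 128-bit GDDR6 configuration can nearly match a 256-bit GDDR5 one:

```python
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth = bus width in bytes x effective data rate."""
    return bus_bits / 8 * gbps_per_pin

# Typical effective data rates; real cards vary by model.
print(bandwidth_gb_s(256, 8.0))   # 256-bit GDDR5 @ 8 Gbps  -> 256.0 GB/s
print(bandwidth_gb_s(128, 14.0))  # 128-bit GDDR6 @ 14 Gbps -> 224.0 GB/s
```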

But even here everything is not so simple! The point is not only in the memory itself, but also in the characteristics of the GPU: how much data it actually operates on, and most importantly, how efficiently it compresses this data.

Metro: Exodus, screen resolution – FullHD, graphics settings – high, tessellation enabled, PhysX and Hairworks effects disabled:

On the left is the Radeon RX 6500 XT with only a 64-bit bus. On the right is a GeForce GTX 1650 with a 128-bit bus. Both cards use GDDR6 memory.

As the FPS counter and the rare-event metrics show, in a real game the card that looked “better” on paper turns out to be no faster at all.

But let’s look at another example:

Star Wars Jedi: Fallen Order, screen resolution – FullHD, graphics settings – maximum:

On the left is the Radeon RX 580 with 4 gigabytes of GDDR5 memory on a 256-bit bus. On the right is the Radeon RX 5500 XT with the same 4 gigabytes, but of GDDR6 on a 128-bit bus.

In this case, the opposite holds: the difference in frames is small, even though the cards belong to two different generations and use different memory. Yes, the more modern model is faster, but it wins by only 2-3 frames.

The temptation to reduce everything to a simple rule of “higher numbers mean a better card” is understandable. But in practice such a rule brings no benefit: more likely the opposite, and you will end up buying a video card that is actually worse than another you could have bought for the same budget.

If you really want to simplify everything and compare video cards by only one simple parameter, let this parameter be the number of frames in games.

Do I need to choose video cards with GDDR6 memory?

And again – no.

By itself, GDDR6 memory provides a higher data rate at lower power consumption than GDDR5 – that is an absolute fact. But, as mentioned above, one parameter cannot determine all the characteristics of a video card.

Again, a specific example: the built-in benchmark of Red Dead Redemption II. Screen resolution – FullHD, graphics settings – medium:

On the left is a GeForce GTX 1660 with 6 gigabytes of GDDR5 memory. On the right is a GeForce GTX 1650 Super with 4 gigabytes of GDDR6 memory.

The first card turns out to be faster: it has a more powerful GPU and a larger amount of on-board memory on its side. Although the latter is not so important here: the benchmark uses only a little over 3.5 gigabytes. The type of onboard memory can be decisive only if all other parameters of the video card are the same. And this happens very rarely.

Perhaps the only clean example is the GeForce GTX 1650, which exists in two versions at once: one with GDDR5 memory and one with GDDR6:

In this case, of course, the second version is noticeably faster in games. But apart from the memory these cards are otherwise identical, so the result is only natural.
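The bandwidth gap between those two versions is easy to quantify with the same formula as above; the 8 and 12 Gbps figures below are the commonly quoted data rates of the two GTX 1650 variants:

```python
# Both GTX 1650 variants use a 128-bit bus; only the memory type differs.
gddr5 = 128 / 8 * 8.0    # 128.0 GB/s
gddr6 = 128 / 8 * 12.0   # 192.0 GB/s
print(f"GDDR6 version: {gddr6 / gddr5:.0%} of the GDDR5 bandwidth")  # 150%
```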

If we compare cards of different models, and even more so – of different generations, then the type of on-board memory again turns into a single value taken out of context, from which no conclusions can be drawn.

Is it worth changing the motherboard if it has PCI-e 3.0, and the card has PCI-e 4.0
Different versions of the PCI-e interface are compatible with each other: a video card developed in the days of PCI-e 2.0 can be installed in a board with a 3.0 or 4.0 slot without any problems. Likewise, nothing prevents a video card with a 4.0 interface from working in a board with an “old” version of the interface.

As an illustrative example, see the detailed analyses of how the Radeon RX 5700 XT and GeForce RTX 2080 Ti video cards behave in different interface modes.

Nor can the PCI-e version directly affect the performance of a video card, even for high-performance models like the GeForce RTX 3080:

However, the word “directly” in the paragraph above is there for a reason: indirect influence does occur. A high-speed interface matters when video memory consumption exceeds the capacity of the card’s own buffer and it is forced to use part of the system RAM as a reserve.

Red Dead Redemption II, FullHD and high graphics settings:

At these settings, the benchmark uses up to 6 gigabytes of video memory, and the Radeon RX 5500 XT with 8 gigabytes on board does not care what version of the interface it works on. By the way, PCI-e 4.0 in this example is on the left, and PCI-e 3.0 is on the right.

The junior version with 4 gigabytes on board naturally does lose performance – but if you compare its results with those of the 8 GB RX 5500 XT, the losses occur in BOTH cases. PCI-e 4.0 only smooths the process out a little.

Smooths out, mind you, not corrects or compensates: at first the PCI-e 4.0 run seems to deliver higher FPS, but by the middle of the test scene the PCI-e 3.0 and PCI-e 4.0 results even out.

However, there are rare exceptions where the bandwidth of the PCI-e interface really does matter:

The Radeon RX 6500 XT is so far the only gaming-class video card (albeit an entry-level one) that uses just 4 PCI-e lanes. In that configuration, tests have shown that the throughput difference between PCI-e 4.0 and PCI-e 3.0 becomes genuinely meaningful.
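The raw numbers make it clear why. Per-lane throughput roughly doubles with each PCI-e generation, so once the lane count is cut to 4, the generation suddenly matters. A quick sketch using the approximate per-lane rates:

```python
# Approximate usable throughput per PCI-e lane, GB/s per direction.
per_lane = {"3.0": 0.985, "4.0": 1.969}

for gen, rate in per_lane.items():
    print(f"PCI-e {gen}: x16 = {16 * rate:.1f} GB/s, x4 = {4 * rate:.1f} GB/s")

# PCI-e 3.0: x16 = 15.8 GB/s, x4 = 3.9 GB/s
# PCI-e 4.0: x16 = 31.5 GB/s, x4 = 7.9 GB/s
# On PCI-e 3.0 the RX 6500 XT's x4 link has less bandwidth than
# an ordinary x16 slot offered back in the PCI-e 2.0 era (~8 GB/s).
```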

True, in real scenarios the difference is nowhere near 50 percent, or even 20. In some cases it amounts to just 2-3 frames per second.

So if you hear advice that buying a new video card also requires replacing the processor and motherboard, because otherwise nothing will work or the old interface will “cut off” up to 15% (20, 25, 30, and so on) of the card’s performance, know that this “adviser’s” goal is to talk you into another expensive purchase.

And what about PCI-e 5.0 then?
The fifth version of the interface is implemented on the Intel LGA 1700 platform, with 12th-generation Core processors and motherboards based on 600-series chipsets.

Despite the new specifications, the PCI-e x16 slots have not changed externally in any way. Moreover, absolutely nothing forbids installing current PCI-e 4.0 video cards in them! That being so, the question of backward compatibility of PCI-e 5.0 slots with video cards built for earlier versions of the interface should not even arise.

But no video cards designed specifically for the 5.0 interface are on sale (as of the beginning of 2022), so there is no point discussing their real characteristics: everything is still at the stage of rumors and speculation.

Naturally, PCI-e 5.0 provides a huge increase in bandwidth, but how much of it will be used in practice is still a mystery. Even today’s flagship video cards, the Radeon RX 6900 XT and GeForce RTX 3090, are not limited by the bandwidth of 16 PCI-e 4.0 lanes. Will the next generation handle orders of magnitude more data? Without access to next-generation cards, it is hard to say.

A more realistic cause for concern is the transition of PCI-e 5.0 video cards to a new auxiliary power connector:

The new standard allows up to 600 watts of power to be delivered. But again, without knowing the real power consumption of next-generation video cards, it is hard to say how much of that headroom will be needed. It may well be that only flagship models get the new connector, while lower-end solutions keep the usual 8- and 6-pin headers.

One way or another, you should start worrying about next-generation video cards only when they are introduced to the market in the form of real products.

Ray tracing: important or not?
The latest trend is real-time lighting computation in games, which makes it possible to abandon the manual pre-placement of many “artificial” light sources across a game level in favor of global illumination, and also to introduce more realistic effects of light reflecting off and refracting through different kinds of surfaces.

There is nothing wrong with the technology itself. Moreover, it is hard to expect a revolution in graphics from simply increasing the number of pixels on screen and megabytes in textures: qualitative changes are what is needed, and ray tracing fits that definition quite well.

The problem is somewhat different.

Ray tracing is only good if the graphics card you choose is REALLY capable of delivering comfortable FPS in games with ray tracing enabled.

If we are talking about video cards from the Radeon RX 6000 and GeForce RTX 3000 families, then no reservations are needed: you will get above 60 FPS with ray tracing enabled in any current game at FullHD and QuadHD resolutions. In 4K – not always, but even there compromise graphics settings can be found.

What happens if you use ray tracing on more budget models of previous generation cards?

Literally:

Control, high graphics settings, screen resolution – FullHD.

Enabling ray tracing at “full” screen resolution drops performance below 60 frames on all of these video cards – only the RTX 2080 approaches the coveted mark (more precisely, it delivers 59 frames in average FPS).

The use of DLSS upscaling technology corrects the situation, but the image in this case is generated at a lower resolution and then stretched to FullHD, so the final image quality may be worse than “full frame”. Plus, Control is an example of the most complete implementation of DLSS compared to other games that support ray tracing.

So should you choose graphics cards that support ray tracing?

If we are talking about the current generation, the question is moot, since both the Radeon RX 6000 and the GeForce RTX 3000 support it anyway. There is no guarantee that the card you choose will deliver comfortable FPS with ray tracing enabled at the same settings as without it, but the fundamental capability will be there.

And what is this DLSS that increases FPS?
An image upscaling and enhancement algorithm supported by GeForce cards with the RTX prefix – that is, the 2000 and 3000 series. The algorithm requires the video card to have tensor cores, so the GTX 16-series and earlier generations, alas, miss out.

The essence of the algorithm is to obtain a better picture from a lower-resolution image. In other words, upscaling.

Yes, in reality everything is much more complicated, and the algorithm is directly tied to AI and neural networks, but from the user’s point of view DLSS works exactly like this: performance roughly corresponds to 720p, while image quality roughly corresponds to FullHD. Or the original FullHD resolution can be turned into something like 4K.
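The performance gain comes straight from the pixel count: the GPU shades the smaller internal image, and the neural network reconstructs the rest. A quick back-of-the-envelope calculation:

```python
# Pixels the GPU actually has to shade at each resolution.
render = 1280 * 720    # internal render resolution:    921,600 px
target = 1920 * 1080   # displayed output resolution: 2,073,600 px

print(f"{target / render:.2f}x fewer pixels to shade")  # 2.25x
# Roughly speaking, this is the headroom DLSS trades against the
# much cheaper cost of running the upscaling network per frame.
```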

Of course, the main goal is achieved, and the frame rate roughly matches that of the lower render resolution:

But with quality, things are less clear-cut: the final result varies from game to game and from object to object. For example, small high-contrast objects can be riddled with artifacts:

On the other hand, the faces and hair of the characters, especially if they are in a static position and do not have a motion blur effect applied to them, can look even better with DLSS than in native resolution with TAA anti-aliasing:

Speaking of anti-aliasing, in some cases DLSS can even replace it, for example by restoring the edges of objects that, at native quality, look heavily pixelated or disappear altogether:

On the other hand, while DLSS really does redraw the gaps between the boards that vanish or crumble into squares at native resolution, the textures in this example become less distinct, and in places the wooden flooring starts to look like a drawing on a flat surface.

By the way, this is not the only example:

If you look at the branch in the center, you can see that its edges in the DLSS version are softer, without the characteristic stair-stepping. But at the same time, the flower textures noticeably lose detail compared to native 4K.

Of course, this does not mean that the technology itself is bad: no universal tool can be equally good at every task. On the contrary, its appearance should be welcomed: the mass transition to gaming at native 4K has apparently been postponed at least until the next generation of video cards, and if you want realistic lighting as well as high resolution, even longer. Meanwhile, the ability to raise performance in new games without a more powerful video card, or the money for one, is simply great!

DLSS has exactly one drawback: it is rigidly tied to Nvidia’s technology and requires dedicated hardware cores in the GPU. If you have at least an RTX 2060, or better yet an RTX 3060, you can enable DLSS yourself and judge the picture quality in games that support it. But with a GTX 1650 or GTX 1050, alas, nothing will come of it.

Well, unless…

Alternative to DLSS
AMD’s FidelityFX Super Resolution technology does not use AI to reconstruct the missing parts of the image; on the other hand, it requires no specialized hardware and works on a much wider range of video cards – even on Nvidia products.

Like DLSS, FSR is designed to improve performance at high resolutions by rendering internally at a lower resolution and upscaling the result.
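For reference, FSR 1.0 exposes fixed quality modes, each mapping to a per-axis scale factor, so the internal render resolution is easy to work out. The factors below are the ones published in AMD’s FSR 1.0 documentation:

```python
# FSR 1.0 per-axis scale factors for each quality mode.
modes = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

target_w, target_h = 3840, 2160  # 4K output
for name, factor in modes.items():
    w, h = round(target_w / factor), round(target_h / factor)
    print(f"{name:13s}: renders at {w}x{h}, upscales to {target_w}x{target_h}")
```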

The technology copes with its core task, and copes no worse than DLSS: