
“I thought I was wrong once, but I was mistaken.” I’m not that type. It’s good practice to hold past statements up against new information and then admit that things turned out differently. In my everyday work, the combination of a large number of samples and experience usually saves me from overly bold statements that don’t hold up. That experience, which allows the facts to be put into perspective, needs to be dosed carefully as a multiplier. When I looked at the “graphics card vs. games” topic, I anticipated a different development curve from the one that actually materialized.

At the end of 2020, the signs pointed to a new beginning: brand-new GPUs from AMD and Nvidia as well as a fresh generation of consoles promised a wonderful next-gen future. What we got instead has been rather unspectacular to this day. Sars-CoV-2 is still raging, while component shortages, crypto nonsense and now a completely unnecessary, senseless war are slowing the spread of new hardware. In its absence, games continue to be optimized for old components that have been installed millions of times over. That has an upside: graphics cards that I considered “cut close to the edge” at the end of 2020 are still properly usable (8 GiB) to very usable (10 GiB) today.


When writing this text, I am of course looking at Nvidia’s Geforce RTX 3080, which was undoubtedly fast when it was released, but had less memory than the previous top model, the RTX 2080 Ti. No, Nvidia’s official justification that the RTX 3080 is the successor to the RTX 2080 (rather than the 2080 Ti) is still no excuse, in my eyes, for a lack of progress. The years that followed played into the hands of the RTX 3080 10GB, and Nvidia really knows how to delay deficiency symptoms with the help of aggressive memory management. But postponed is not abandoned.

From my most recent tests I can roughly deduce that 10 “Geforce GiBytes” behave about the same as 12 “Radeon GiBytes”. If the memory is actually filled by a high load that cannot be streamed around, driver tricks no longer help either. The RTX 3080 10GB, RTX 3070 (Ti) and the like simply have too little capacity for their performance. There are good reasons why most new graphics cards come with 12 GiBytes. Nvidia knows that too: the RTX 2060 12GB, RTX 3060, RTX 3080 12GB and RTX 3080 Ti are clear proof and are all “safe” on the memory side. That AMD has understood the benefit of real memory should have been clear since the Fiji disaster at the latest. The Radeon makers occasionally forget it (see the RX 5600 XT and RX 6500 XT), but those are not high-end cards; they are official cost-cutting models.

The statement that the next generation of games will blow away many a card cut close to the edge is still valid. Really now, you could say, because things are finally starting to roll: the first games are cutting their old-gen ties (no more Xbox One and PS4 support), Unreal Engine 5 is ramping up, and DirectStorage is a major innovation in data streaming. But if the past two years have taught us anything, it’s not to be hasty. DirectStorage in particular will not be implemented overnight; we are talking about many years until widespread adoption. So was I premature? Yes. Was this development foreseeable? No. Is a graphics card with more memory better, more durable, more future-proof and more valuable when resold than one with less memory? That’s a rhetorical question.

The post Too little video memory for games: I was probably a bit hasty there appeared first on Gamingsym.