
AMD Plays the VRAM Card Against NVIDIA


Techspot just did a new review of VRAM, posted about 50 minutes ago.

AMD is such good value. Man, I would never buy an 8GB VRAM card in today's gaming world. Wild that Nvidia still does this.
Their numbers for Hogwarts Legacy or RE4 don't match TPU's. Probably some others, too, can't check them all atm.
 
Their numbers for Hogwarts Legacy or RE4 don't match TPU's. Probably some others, too, can't check them all atm.

Obviously, games are big and different reviewers use different scenes. Expecting the numbers to match between sites that test different parts of a game is folly.
 
Their numbers for Hogwarts Legacy or RE4 don't match TPU's. Probably some others, too, can't check them all atm.
From https://www.techpowerup.com/review/resident-evil-4-benchmark-test-performance-analysis/4.html

For RE4
Techpowerup's 1080p Ultra RT
6800 XT = 105.8 fps
3070 = 88

Techspot's 1080p "Max" with RT
6800 = 91 fps
3070 = crashed

Techpowerup's 1440p Ultra RT
6800 XT = 85 fps
3070 = crashed

Techspot's 1440p "Max" with RT
6800 = 77 fps
3070 = crashed

Techpowerup used the faster RX 6800 XT while Techspot used the slower, non-XT RX 6800. Even so, Techspot's RX 6800 numbers are close to Techpowerup's RX 6800 XT numbers.
 
From https://www.techpowerup.com/review/resident-evil-4-benchmark-test-performance-analysis/4.html

For RE4
Techpowerup's 1080p Ultra RT
6800 XT = 105.8 fps
3070 = 88

Techspot's 1080p "Max" with RT
6800 = 91 fps
3070 = crashed

Techpowerup's 1440p Ultra RT
6800 XT = 85 fps
3070 = crashed

Techspot's 1440p "Max" with RT
6800 = 77 fps
3070 = crashed

Techpowerup used the faster RX 6800 XT while Techspot used the slower, non-XT RX 6800. Even so, Techspot's RX 6800 numbers are close to Techpowerup's RX 6800 XT numbers.
I was talking about tests that passed. They're pretty close together on TPU and wide apart on Techspot. And that's despite Techspot using a slower card, as you noted.
 
Honestly, VRAM usage really depends on the resolution. There are gaming cards with 20 and 24GB of VRAM. That is such a waste and only makes the cards more expensive. Typically, the top-end cards are not that much faster than the next tier down, so they load them up with unneeded stuff to justify the higher price and the 90, Ti, and XTX designations. All that additional VRAM just sits idle doing nothing for the life of a gaming card. Money well spent.

My recommendation is, if you are buying a new card in 2023 and play at resolutions higher than 1080p, get a card with 12GB-16GB of VRAM. For 99% of games on the market, 12GB is enough. Game developers don't want high system requirements, because fewer people will be able to buy and play their games.

There will always be some poorly optimized console ports that run badly and use unreasonable system resources. And there will always be a game or two that pushes the envelope and makes us ask, "Can it run Crysis?"
 
That is an opinion, not fact. Besides, FSR 3.0 is just around the corner; I'd wait for it before passing judgement.


Whether you like FG or not depends on how sensitive you are to input latency. The technology is not without issues at the moment, especially when you generate extra frames from an already low base frame rate.

Personally, I consider FG frame rates irrelevant for comparison.


Not really.


Fair enough - I mostly shop at Scan, that's why I was comparing prices from there. A 5% difference might actually be worth it.
It is not an opinion; in many cases the quality mode of FSR looks worse than the performance mode of DLSS.


Also, FG goes along with Reflex to compensate for latency, and you don't use FG to hit 60 fps; you use it to hit 100+ fps, and the latency in that situation is very good. I was skeptical at first, but after trying FG in several games I can say it is a game changer, and every demanding game needs to have it.

This 16% better RT includes games that barely use RT. In games that use RT heavily, with meaningful visual impact, the RT cores on the 7900 XTX get overwhelmed; that is why the 4080 is 25-45% faster under heavy RT use.
 
It is not an opinion; in many cases the quality mode of FSR looks worse than the performance mode of DLSS.
I haven't tried the latest versions of DLSS since my 2070 died about 6 months ago, so I'll take your word for it. The picture you posted may be an isolated case, but DLSS does look better there, I'll give you that.

Also, FG goes along with Reflex to compensate for latency, and you don't use FG to hit 60 fps; you use it to hit 100+ fps, and the latency in that situation is very good. I was skeptical at first, but after trying FG in several games I can say it is a game changer, and every demanding game needs to have it.
That's the thing... I don't need 100+ FPS. I need 60, or at least 40-45 minimum.

This 16% better RT includes games that barely use RT. In games that use RT heavily, with meaningful visual impact, the RT cores on the 7900 XTX get overwhelmed; that is why the 4080 is 25-45% faster under heavy RT use.
I hope both AMD and Nvidia focus the development of their next architectures on RT. Maybe more RT cores with the same number of traditional shader cores. Raster performance is already at a maximum, imo.
 
The keyword being "yet".
In just 3 months, we've had 4 large games where 8GB started being a serious problem. I predict that the trend isn't going to stop at all during the next two years.
We'll see just how long the 10GB 3080 lasts, and I think the 12GB 4070/Ti will be worse. At least with the 3080 you had 2 good years of PS4-era holdovers until the requirements climbed hard. The 4070s feel sufficient "for now", I'm sure, but will that even last 2 years? I highly doubt it.
No man, you're just talking about broken console ports, which work fine even with 8GB VRAM cards. It's not that all of a sudden they just started adding 8K-resolution textures.
Plus, some game engines out there will fill all available VRAM with cached data, even if you have 24 or 32GB.
 
This is so much BS that it is ridiculous. I have yet to play a game that requires more than 10GB of VRAM. Even the crappiest port ever released, "The Last of Us Part I", is smooth as butter on Ultra with G-Sync on, even though the so-called in-game VRAM usage is around 12.8GB.
This kind of post just reads "I don't get it".

We're not talking frames per second; we're talking about texture quality, textures going *poof*, and games crashing. Also, some games stutter due to excessive asset swapping (caused by low VRAM).

Nowadays many games have dynamic engines which adjust to available VRAM on the fly, so the effect of low VRAM is not as obvious as it could be.

Some of us are not OK with PS2/PS3-quality textures in 2023.

It's not important to you personally, that's fine, but that doesn't mean it's BS.
 
No. All RTX 4000 graphics cards already use 2GB chips. And there definitely won't be any 3GB chips in the next year or two. If 3GB chips do appear, it will only be with the release of the RTX 5000.

How does this one manage 20GB on a 160-bit bus then?
 
2 reasons to not wait:
  1. Low performance/€ improvement (so far)
  2. Growing tensions between China and Taiwan
Bonus:
7700/7700 XT will likely have similar performance to the 6800 XT, but with (only) 12GB of VRAM
Unlikely. The Angstronomics leak, which was very complete, mentioned a 256-bit bus for Navi 32. Navi 31 has 384-bit.

Logically, if the 384-bit bus was for 24GB, 256-bit should be for 16GB.
No dice on Navi 33: still 128-bit, and unless they double the RAM for certain models, it's still going to be 8GB.
 
How does this one manage 20GB on a 160-bit bus then?
There is a bilateral placement of chips here: two 2GB chips are placed on each memory controller, one on the front side of the PCB and the other on the back. 160-bit / 32-bit = 5 controllers; 5 controllers × 2 sides = 10 places for 2GB GDDR6 chips = 20GB of VRAM. But since it is very expensive, we definitely won't see this layout on mid-range cards. The only gaming card with such a chip placement was the expensive 3090, with 24 GDDR6X chips of 1GB each, 12 on each side (at the time of the 3090's release, there were no 2GB GDDR6X chips available).
For example, here is a photo of the Asus Strix 3090.
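To put that arithmetic in code, here's a quick back-of-the-envelope sketch (a toy Python calculation, not any vendor's tool; the only inputs are the bus widths and chip sizes already mentioned above):

Code:
# Toy VRAM capacity calculator based on the arithmetic above.
# Each GDDR6/GDDR6X memory controller is 32 bits wide; bilateral ("clamshell")
# placement puts one chip on the front of the PCB and one on the back.

def vram_capacity_gb(bus_width_bits: int, chip_size_gb: int, clamshell: bool) -> int:
    controllers = bus_width_bits // 32              # one chip position per 32-bit controller
    chips = controllers * (2 if clamshell else 1)   # double the chips when both sides are populated
    return chips * chip_size_gb

# The 20GB-on-160-bit example: 5 controllers x 2 sides x 2GB chips
print(vram_capacity_gb(160, 2, clamshell=True))   # -> 20
# The 3090's layout: 12 controllers (384-bit) x 2 sides x 1GB chips
print(vram_capacity_gb(384, 1, clamshell=True))   # -> 24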
 

There is a bilateral placement of chips here: two 2GB chips are placed on each memory controller, one on the front side of the PCB and the other on the back. 160-bit / 32-bit = 5 controllers; 5 controllers × 2 sides = 10 places for 2GB GDDR6 chips = 20GB of VRAM. But since it is very expensive, we definitely won't see this layout on mid-range cards. The only gaming card with such a chip placement was the expensive 3090, with 24 GDDR6X chips of 1GB each, 12 on each side (at the time of the 3090's release, there were no 2GB GDDR6X chips available).
For example, here is a photo of the Asus Strix 3090.

Does it have to be done for all the chips? I.e., in the case of the 4070, would we have a 24GB version, or could they just install enough chips on the back to get 16GB?
 
Does it have to be done for all the chips? I.e., in the case of the 4070, would we have a 24GB version, or could they just install enough chips on the back to get 16GB?
As far as I know, dual chip placement must be done for all controllers at once. In other words, the 4070 can have either 12GB or 24GB of VRAM.
In my opinion, there is only one way to make the 4070 with 16GB using dual chip placement, which is to use 4 memory controllers. But in this case, the memory bus will be 128-bit, which will negatively impact the performance.
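A quick way to see why those are the only options is to enumerate them under the same assumptions (2GB chips, 32-bit controllers, clamshell either applied to every controller or to none); this is just a rough sketch, not anything official:

Code:
# Possible configurations with 2GB chips when clamshell is all-or-nothing.
CHIP_GB = 2
for controllers in (4, 5, 6):                  # 128-, 160- and 192-bit buses
    bus = controllers * 32
    single = controllers * CHIP_GB             # one chip per controller
    dual = controllers * 2 * CHIP_GB           # clamshell: chips on both sides
    print(f"{bus}-bit bus: {single}GB or {dual}GB")
# 128-bit bus: 8GB or 16GB
# 160-bit bus: 10GB or 20GB
# 192-bit bus: 12GB or 24GB   <- the 4070's options without cutting the bus down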
 
Does it have to be done for all the chips? I.e., in the case of the 4070, would we have a 24GB version, or could they just install enough chips on the back to get 16GB?
Like he said, the arrangement is expensive, so probably not for mid-range cards. Even if the difference wasn't big, who would pay more for a 4070?
 
As far as I know, dual chip placement must be done for all controllers at once. In other words, the 4070 can have either 12GB or 24GB of VRAM.
In my opinion, there is only one way to make the 4070 with 16GB using dual chip placement, which is to use 4 memory controllers. But in this case, the memory bus will be 128-bit, which will negatively impact the performance.
Actually, it can be done, but it will result in uneven bandwidths for different parts of the memory, which is as bad an idea as it sounds.
So we won't be seeing anything like that this time.
 
I'd rather have Nvidia than a headache, thank you very much.
 
Even if the difference wasn't big, who would pay more for a 4070?
I wouldn't. But maybe they would keep it at $600 and lower the standard 12GB to $500.
 
Like he said, the arrangement is expensive, so probably not for mid-range cards. Even if the difference wasn't big, who would pay more for a 4070?
Well, if they weren't profiteering, it might be an extra $50-100 to bump a 4070 to 16 gigs. I'd probably be more likely to buy a $700 16-gig 4070 than a $600 12-gig 4070.

But knowing Nvidia, if they released a 16-gig model, it would cost an extra $300.
 
This kind of post just reads "I don't get it".

We're not talking frames per second; we're talking about texture quality, textures going *poof*, and games crashing. Also, some games stutter due to excessive asset swapping (caused by low VRAM).

Nowadays many games have dynamic engines which adjust to available VRAM on the fly, so the effect of low VRAM is not as obvious as it could be.

Some of us are not OK with PS2/PS3-quality textures in 2023.

It's not important to you personally, that's fine, but that doesn't mean it's BS.
You understood nothing from what I wrote.
I said I was using ultra settings in that game, INCLUDING textures, and didn't have any crashes, stuttering, sudden pop-in effects, or buffering in that game.
 
You understood nothing from what I wrote.
I said I was using ultra settings in that game, INCLUDING textures, and didn't have any crashes, stuttering, sudden pop-in effects, or buffering in that game.
This is so much BS that it is ridiculous. I have yet to play a game that requires more than 10GB of VRAM. Even the crappiest port ever released, "The Last of Us Part I", is smooth as butter on Ultra with G-Sync on, even though the so-called in-game VRAM usage is around 12.8GB.

Ok boss.
 
He's talking about how games preload stuff if there's VRAM available. It's done to minimize IO, but it doesn't result in a meaningful performance impact.
 
He's talking about how games preload stuff if there's VRAM available. It's done to minimize IO, but it doesn't result in a meaningful performance impact.
Preloading can likely prevent stutters. It's a good thing; I'd rather have my assets preloaded than loaded on the fly, causing stutters.

I miss the days of loading everything into RAM before you play, with no live loading of stuff in the background. I wonder what prevents them from doing that now? Hmm.
 
Preloading can likely prevent stutters. It's a good thing; I'd rather have my assets preloaded than loaded on the fly, causing stutters.
Well, we're kind of drifting from "must have 16GB VRAM". Unless you're playing something like Rage, textures only change during level transitions or when moving from one area to another. Not exactly the part of the game that would be ruined by a few stutters.
I miss the days of loading everything into RAM before you play, with no live loading of stuff in the background. I wonder what prevents them from doing that now? Hmm.
There is no way to do that. Try as you may to fit everything into VRAM, there's a kid somewhere with Photoshop and time on their hands who will take your textures, apply 2x scaling on both axes, call it an HD texture pack, and boom! You're out of VRAM again. Also, 16GB of VRAM is the same as the typical amount of RAM in a PC; that's pretty imbalanced.
I think what the developers/engines do right now is pretty well thought out: set a baseline, and if they find more VRAM than that, try to load some more. Preloading is guesswork though, because you never know which area the player will visit next. Or you can try to predict the next area based on which "exit" the player approaches, but if the player changes their mind, you're just preloading things that won't be used next, and you initiate IO that may lead to other performance drops.
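Roughly the kind of logic I mean, as a rough sketch (the budget numbers and names below are made up for illustration, not taken from any real engine):

Code:
# Sketch of a "baseline plus opportunistic preload" streaming policy.
# BASELINE_BUDGET_MB and the asset sizes are illustrative only.
BASELINE_BUDGET_MB = 6144          # assets every supported card is expected to hold

def plan_preload(total_vram_mb, nearby_asset_sizes_mb):
    """Pick extra assets to preload until the VRAM above the baseline is used up."""
    spare = max(0, total_vram_mb - BASELINE_BUDGET_MB)
    preload = []
    for size in nearby_asset_sizes_mb:          # e.g. textures for areas adjacent to the player
        if size <= spare:
            preload.append(size)
            spare -= size
    return preload

# A 16GB card can speculatively cache far more than an 8GB one, which is why
# reported "VRAM usage" often scales with the card rather than with actual need.
print(plan_preload(16384, [2048, 1024, 4096, 512]))   # everything fits
print(plan_preload(8192,  [2048, 1024, 4096, 512]))   # only the first asset fits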
 
Well, we're kind of drifting from "must have 16GB VRAM". Unless you're playing something like Rage, textures only change during level transitions or when moving from one area to another. Not exactly the part of the game that would be ruined by a few stutters.

There is no way to do that. Try as you may to fit everything into VRAM, there's a kid somewhere with Photoshop and time on their hands who will take your textures, apply 2x scaling on both axes, call it an HD texture pack, and boom! You're out of VRAM again. Also, 16GB of VRAM is the same as the typical amount of RAM in a PC; that's pretty imbalanced.
I think what the developers/engines do right now is pretty well thought out: set a baseline, and if they find more VRAM than that, try to load some more. Preloading is guesswork though, because you never know which area the player will visit next. Or you can try to predict the next area based on which "exit" the player approaches, but if the player changes their mind, you're just preloading things that won't be used next, and you initiate IO that may lead to other performance drops.
Ah, so you now acknowledge that VRAM capacity can be an issue then?
 