
NVIDIA Explains GeForce RTX 40 Series VRAM Functionality

yannus1

New Member
Joined
Jan 30, 2023
Messages
23 (0.05/day)
I feel like they have been bashing NVIDIA recently with not one, not two, but at least three videos criticizing the 8 GB RTX 3070. I think they have been advocating for more VRAM instead of being satisfied with the status quo and frame smearing.
They always fake-bash. I'll always remember when they said that Nvidia didn't send them a sample in order to censor them, all while displaying a looping RTX advertisement. Same thing here: they say "oh no, it doesn't have enough VRAM," but they always end with a conclusion like "but they have wonderful DLSS and RTX." It's a common technique: appear to oppose someone when in reality you're promoting their interests.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
They always fake-bash. I'll always remember when they said that Nvidia didn't send them a sample in order to censor them, all while displaying a looping RTX advertisement. Same thing here: they say "oh no, it doesn't have enough VRAM," but they always end with a conclusion like "but they have wonderful DLSS and RTX." It's a common technique: appear to oppose someone when in reality you're promoting their interests.

...which, they do. NVIDIA offers wonderful, concise, well-supported features, and AMD often does not, or its alternatives aren't good or popular enough to set the industry standard every time. There's no grand conspiracy here. In my opinion, Hardware Unboxed is a trustworthy source and they are generally unbiased, willing to point out strengths and weaknesses regardless of the brand or product they are reviewing. Like they said in their 8 vs. 16 GB comparison video, AMD adding more VRAM to their cards isn't done out of the kindness of their hearts, but because they had to pitch something to offer gamers.

It is true that their workstation segment is practically moribund (Radeon Pro is, and has always been, a bit of a mess; their support for most production applications is poor to non-existent, especially if an app is designed with CUDA in mind - OpenCL sucks), and their high-VRAM models offer 32 GB to those who need to work with extra-large data sets. So giving the RX 6800/XT 16 GB isn't as big of a deal to them as it is to Nvidia, who wants to ensure that their overpriced enterprise RTX A-series cards sell. This ensures that "hobbyist-level" creative professionals purchase at minimum an RTX 3090/3090 Ti or 4090, or a supported professional model such as the RTX A4000, instead of buying a 3070/4070 and calling it a day.
 
Joined
Sep 27, 2008
Messages
1,048 (0.18/day)
...which, they do. NVIDIA offers wonderful, concise, well-supported features, and AMD often does not, or its alternatives aren't good or popular enough to set the industry standard every time. There's no grand conspiracy here. In my opinion, Hardware Unboxed is a trustworthy source and they are generally unbiased, willing to point out strengths and weaknesses regardless of the brand or product they are reviewing. Like they said in their 8 vs. 16 GB comparison video, AMD adding more VRAM to their cards isn't done out of the kindness of their hearts, but because they had to pitch something to offer gamers.

It is true that their workstation segment is practically moribund (Radeon Pro is, and has always been, a bit of a mess; their support for most production applications is poor to non-existent, especially if an app is designed with CUDA in mind - OpenCL sucks), and their high-VRAM models offer 32 GB to those who need to work with extra-large data sets. So giving the RX 6800/XT 16 GB isn't as big of a deal to them as it is to Nvidia, who wants to ensure that their overpriced enterprise RTX A-series cards sell. This ensures that "hobbyist-level" creative professionals purchase at minimum an RTX 3090/3090 Ti or 4090, or a supported professional model such as the RTX A4000, instead of buying a 3070/4070 and calling it a day.


Nvidia's big success is CUDA, but they've had some failed standards too. G-Sync (in its original form, with the monitor modules) isn't nearly as prevalent as FreeSync. I haven't heard a peep about GPU-accelerated PhysX in years, either.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Nvidia's big success is CUDA, but they've had some failed standards too. G-Sync (in its original form, with the monitor modules) isn't nearly as prevalent as FreeSync. I haven't heard a peep about GPU-accelerated PhysX in years, either.

The only reason FreeSync took off was cost. Hardware G-Sync is still technically the best, but between the added cost and the fact that monitors have been steadily improving, with panels themselves handling VRR ranges better, it's just a very unattractive proposition. This is true even today; see the Alienware AW3423DW (G-Sync Ultimate model) vs. the AW3423DWF (same panel without the G-Sync Ultimate module).
 
Joined
Oct 18, 2017
Messages
149 (0.06/day)
System Name Battlestation
Processor intel i9 9900K
Motherboard EVGA Z370 FTW
Cooling Noctua NH-D15
Memory 2X8 GB DDR4 3200 Mhz Corsair
Video Card(s) Nvidia RTX 4070 Founders Edition
Storage Western Digital SN850 1 TB NVME
Display(s) Asus PG248Q
Case Phanteks P600S
Audio Device(s) Steelseries Arctis pro
Power Supply EVGA 1200 P2
Mouse Logitech G PRO
Keyboard Logitech G710+
Benchmark Scores https://www.3dmark.com/spy/38948601 https://i.ibb.co/1MDLrVz/apres.jpg
The Amount of VRAM Is Dependent On GPU Architecture
Gamers often wonder why a graphics card has a certain amount of VRAM. Current-generation GDDR6X and GDDR6 memory is supplied in densities of 8 Gb (1 GB of data) and 16 Gb (2 GB of data) per chip. Each chip uses two separate 16-bit channels to connect to a single 32-bit Ada memory controller. So a 128-bit GPU can support 4 memory chips, and a 384-bit GPU can support 12 chips (calculated as bus width divided by 32). Higher-capacity chips cost more to make, so a balance is required to optimize prices.

On our new 128-bit memory bus GeForce RTX 4060 Ti GPUs, the 8 GB model uses four 16Gb GDDR6 memory chips, and the 16 GB model uses eight 16Gb chips. Mixing densities isn't possible, preventing the creation of a 12 GB model, for example. That's also why the GeForce RTX 4060 Ti has an option with more memory (16 GB) than the GeForce RTX 4070 Ti and 4070, which have 192-bit memory interfaces and therefore 12 GB of VRAM.
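
Running their stated math through a quick Python sketch (my own illustration, using only what the page says - one chip per 32-bit controller, 8 Gb or 16 Gb densities):

```python
# Sketch of the math NVIDIA describes above: one chip per 32-bit controller,
# chips available in 8 Gb (1 GB) or 16 Gb (2 GB) densities. Illustration only.
CHIP_SIZES_GB = (1, 2)

def vram_options(bus_width_bits):
    chips = bus_width_bits // 32  # "bus width divided by 32"
    return [chips * size for size in CHIP_SIZES_GB]

for bus in (128, 192, 256, 384):
    print(f"{bus}-bit: {bus // 32} chips -> {vram_options(bus)} GB")
# 128-bit: 4 chips -> [4, 8] GB ... so where does a 16 GB 4060 Ti come from?
```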

Is it me, or are they contradicting themselves here? They say a 128-bit bus can support 4 memory chips, then say the 4060 Ti uses 8 chips.

4060 Ti = 128-bit => 128/32 = 4 chips => the 8 GB 4060 Ti uses 4 x 2 GB memory chips, and the 16 GB version uses 4 x 4 GB memory chips.

4070 = 192-bit => 192/32 = 6 chips => 6 x 2 GB memory chips. Which means they could launch a 6 x 4 GB = 24 GB 4070 if they wanted.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Is it me, or are they contradicting themselves here?

4060 Ti = 128-bit => 128/32 = 4 chips => the 8 GB 4060 Ti uses 4 x 2 GB memory chips, and the 16 GB version uses 4 x 4 GB memory chips.

4070 = 192-bit => 192/32 = 6 chips => 6 x 2 GB memory chips. Which means they could launch a 6 x 4 GB = 24 GB 4070 if they wanted.

No, they cannot: there are no 32 Gbit GDDR6X modules available on the market. 24 GB would be achievable by using twelve 16 Gbit chips in a clamshell configuration, and that's just too expensive for a gaming card at this price.
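
As a rough sketch of what's actually reachable (my simplification: each 32-bit channel drives one chip, or two in clamshell; only 8 Gb and 16 Gb densities exist):

```python
# Rough model: VRAM capacities reachable on a given bus width, with and
# without clamshell (two chips per 32-bit channel, one on each side of the PCB).
DENSITIES_GB = (1, 2)  # 8 Gb and 16 Gb chips; 32 Gbit (4 GB) doesn't exist

def reachable(bus_width_bits):
    channels = bus_width_bits // 32
    return sorted({channels * per_channel * d
                   for per_channel in (1, 2)  # 1 = normal, 2 = clamshell
                   for d in DENSITIES_GB})

print(reachable(192))  # [6, 12, 24] -> a 24 GB 4070 means 12 chips, clamshelled
```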
 
Joined
Sep 27, 2008
Messages
1,048 (0.18/day)
The only reason FreeSync took off was cost. Hardware G-Sync is still technically the best, but between the added cost and the fact that monitors have been steadily improving, with panels themselves handling VRR ranges better, it's just a very unattractive proposition. This is true even today; see the Alienware AW3423DW (G-Sync Ultimate model) vs. the AW3423DWF (same panel without the G-Sync Ultimate module).
Cost is a big consideration when looking to set widely adopted standards :laugh:
 
Joined
Oct 18, 2017
Messages
149 (0.06/day)
System Name Battlestation
Processor intel i9 9900K
Motherboard EVGA Z370 FTW
Cooling Noctua NH-D15
Memory 2X8 GB DDR4 3200 Mhz Corsair
Video Card(s) Nvidia RTX 4070 Founders Edition
Storage Western Digital SN850 1 TB NVME
Display(s) Asus PG248Q
Case Phanteks P600S
Audio Device(s) Steelseries Arctis pro
Power Supply EVGA 1200 P2
Mouse Logitech G PRO
Keyboard Logitech G710+
Benchmark Scores https://www.3dmark.com/spy/38948601 https://i.ibb.co/1MDLrVz/apres.jpg
No, they cannot: there are no 32 Gbit GDDR6X modules available on the market. 24 GB would be achievable by using twelve 16 Gbit chips in a clamshell configuration, and that's just too expensive for a gaming card at this price.

How can they reach 16 GB of VRAM on a 4060 Ti with a max of 4 chips on a 128-bit bus and only 16 Gb memory chips, then? I don't get it.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
How can they reach 16 GB of VRAM on a 4060 Ti with a max of 4 chips on a 128-bit bus and only 16 Gb memory chips, then? I don't get it.

By using 8 chips in a clamshell configuration and raising the cost accordingly - that is why the 16 GB version both consumes more power (higher TDP) and costs $100 more.

For example, the RTX 3080 Ti and the RTX 3090 are both 384-bit cards, but the 3080 Ti has 12 chips installed and the 3090 has 24 (with two attached to each channel and installed on both sides of the PCB).
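
In numbers (same simplified model as my sketch above, both cards using 8 Gbit / 1 GB GDDR6X chips):

```python
channels = 384 // 32                 # 12 channels on a 384-bit bus
print(channels * 1 * 1, "GB")        # 3080 Ti: 12 chips x 1 GB = 12 GB
print(channels * 2 * 1, "GB")        # 3090: clamshell, 24 chips x 1 GB = 24 GB
```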
 
Joined
Oct 18, 2017
Messages
149 (0.06/day)
System Name Battlestation
Processor intel i9 9900K
Motherboard EVGA Z370 FTW
Cooling Noctua NH-D15
Memory 2X8 GB DDR4 3200 Mhz Corsair
Video Card(s) Nvidia RTX 4070 Founders Edition
Storage Western Digital SN850 1 TB NVME
Display(s) Asus PG248Q
Case Phanteks P600S
Audio Device(s) Steelseries Arctis pro
Power Supply EVGA 1200 P2
Mouse Logitech G PRO
Keyboard Logitech G710+
Benchmark Scores https://www.3dmark.com/spy/38948601 https://i.ibb.co/1MDLrVz/apres.jpg
By using 8 chips in a clamshell configuration and raising the cost accordingly - that is why the 16 GB version both consumes more power (higher TDP) and costs $100 more.

For example, the RTX 3080 Ti and the RTX 3090 are both 384-bit cards, but the 3080 Ti has 12 chips installed and the 3090 has 24 (with two attached to each channel and installed on both sides of the PCB).

I see, thank you professor! :D

But by this logic, a "clamshelled" 4060 Ti at a $100 increase means clamshelling each original chip cost around $25, right?

So a "clamshelled" 4070 would see a 6 x $25 increase by the same process, which would mean $750 for a 24 GB 4070 vs. $600 for the 12 GB 4070, and $950 for a 24 GB 4070 Ti vs. $800 for the 12 GB 4070 Ti.

Seeing that the 16 GB 4080 is $1200, I don't see how those prices would be unreasonable.

In other words: if they did it for the 4060 Ti, why not for the 4070 and 4070 Ti? I probably would have bought a 24 GB 4070 for $750 instead of my $600 12 GB 4070, because the card would hold its value better over time as games' VRAM requirements increase. Same reason a 6 GB GTX 1060 is worth much more now than a 4 GB one.
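
That back-of-envelope as a quick Python sketch (the per-chip figure is just inferred from the $100 delta, not an actual BOM number):

```python
# Back-of-envelope only: infer a per-chip cost from the 4060 Ti's $100
# clamshell premium, then extrapolate to 192-bit cards. Nothing official.
premium = 100
extra_chips_128bit = 4                    # clamshell adds 4 chips on a 128-bit card
per_chip = premium / extra_chips_128bit   # ~$25, inferred, not a real price

extra_chips_192bit = 6                    # a 192-bit card would add 6 chips
print(600 + extra_chips_192bit * per_chip)  # hypothetical 24 GB 4070: 750.0
print(800 + extra_chips_192bit * per_chip)  # hypothetical 24 GB 4070 Ti: 950.0
```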
 
Joined
Aug 23, 2013
Messages
557 (0.14/day)
My biggest takeaway from this is that Nvidia is actually aware of, and concerned about, the constant talk about VRAM. I'm actually shocked; I thought they were above it all. Going to the trouble of making a new 16 GB variant of the 4060 Ti, plus a whole web page about how "it's not them, it's you," is way more of a reaction than I expected. Perhaps next gen they'll temporarily boost memory up to merely "acceptable" levels, unlike recent generations, where every market segment got "intolerably low".

The best part was when they said the 4070 Ti had to have 12 GB, but the 4060 Ti gets 8 or 16 GB. But- but- but, Nvidia: 24 GB exists, you know. They could have doubled memory at every level. They could have. They just didn't want to. Given the markup and the record profits they're reporting, they could have easily absorbed the cost.

But they didn't want to do that. Now that the jig is up, they're in full-on damage control. Jensen took a minor pay cut on the smallest part of his compensation package, and he's mad that people are angry and aren't buying everything they shove out the door. That's why he's mad. He had those millions earmarked for his latest yacht. He and Bobby Kotick are grumpy because they're having to delay their competition to be the first to get a Bezos-level superyacht.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
I see, thank you professor! :D

But by this logic, a "clamshelled" 4060 Ti at a $100 increase means clamshelling each original chip cost around $25, right?

So a "clamshelled" 4070 would see a 6 x $25 increase by the same process, which would mean $750 for a 24 GB 4070 vs. $600 for the 12 GB 4070, and $950 for a 24 GB 4070 Ti vs. $800 for the 12 GB 4070 Ti.

Seeing that the 16 GB 4080 is $1200, I don't see how those prices would be unreasonable.

In other words: if they did it for the 4060 Ti, why not for the 4070 and 4070 Ti? I probably would have bought a 24 GB 4070 for $750 instead of my $600 12 GB 4070, because the card would hold its value better over time as games' VRAM requirements increase. Same reason a 6 GB GTX 1060 is worth much more now than a 4 GB one.

It is hard to estimate the exact cost because Nvidia signs tailored supply contracts ahead of time, so the price they pay per unit may be lower or higher than the average bulk-market price. But rest assured, it's less than $25 per chip - significantly less. $25 or so was the rumored cost of a single GDDR6X chip when they were brand new three years ago, and that's why there was such a large price jump from the 3080 to the 3090.

The reason they don't add more memory to consumer-grade graphics cards is a business one: they don't want businesses buying them; they want to sell their enterprise products instead. Nvidia's MSRP for the RTX 4090 is $1600, but its equivalent professional card (the RTX 6000 Ada Generation) with 48 GB costs a cool $6800.

There's also another concern that's undesirable for gaming cards: this memory isn't free in terms of power consumption. The original non-Ti RTX 3090 is the biggest example of that; I'll link the thread where I was discussing it a few weeks ago:

 
Joined
Feb 3, 2017
Messages
3,543 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
That sounds like a lot of PR/marketing copium and damage control. The article completely sidesteps the issue of the VRAM having to hold the texture assets. Sure, a larger cache means shader functions have fewer cache misses, but that's like 2% of what VRAM is actually used for by games and game developers.
Half or most of that is explaining the relevance of cache and why memory bus sizes have been going down. The VRAM-size problems tie into this through the available memory chip sizes.

Btw, this is not an Nvidia thing. AMD did the large-cache thing with RDNA2 and reduced cache sizes in RDNA3 - looking at RDNA2, RDNA3 and Ada, they are trying to home in on the sweet spot of cache size and performance benefit. AMD will have the same choices in front of them, and there will be some cool marketing to accompany it.
 
Joined
Jun 14, 2020
Messages
2,678 (1.85/day)
System Name Mean machine
Processor 13900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Well, I have to give Nvidia some credit for their honesty. I mean, this is like a coming-out: even if they don't realize it, they're confirming more suspicions than they're dispelling. They do that especially in the lines where they say even a 4060 Ti will benefit from 16 GB at higher IQ settings. They know they can't fight facts.

Imagine buying an 8 GB card with all this information. You'd be pretty damn stupid. Especially if you know RT also demands added VRAM - the very feature Nvidia itself pushes.
I've been very vocal against the latest "8 GB is not enough" clickbait drama caused by youtubers, but I have to admit I was talking about the now almost three-year-old 3070 and 3060 Ti. For those cards, 8 GB was fine. Yes, you have to drop textures to high in a couple of games - but again - we are talking about three-year-old midrange cards; that's normal.

But a brand new 4060 having 8 GB is just... uhm... let's just say... suboptimal. It might be fine today even for ultra textures thanks to the extra cache (not sure - waiting for reviews), but man, three years down the road these will age much, much worse than the Ampere 8 GB cards did. The 4060 and the 4060 Ti should each have had a single 12 GB model and called it a day. I don't know wth the leatherman is doing; releasing 8 GB really doesn't make sense, and I can't explain it with just "greediness".
 
Joined
Aug 26, 2021
Messages
293 (0.29/day)
Or, you know, just give us more VRAM instead, and you wouldn't have to spend money on marketing.
Nvidia decided that they and their shareholders didn't want the COVID/mining gravy train to end.

Let's not mince words: this is the biggest rip-off of a series launch Nvidia has ever done. They moved cards down a silicon tier again and doubled the prices.
 
Joined
Jun 14, 2020
Messages
2,678 (1.85/day)
System Name Mean machine
Processor 13900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Yeah, stretching another couple hundred bucks for the 7900 XT, if possible, seems generally sensible to me. I'm personally not sold on DLSS 3; I might be more lenient with it if Nvidia hadn't willingly withheld it from us 30-series owners, but I already tend to keep traditional DLSS off whenever possible, so frame generation couldn't possibly sway me either way.



RT is of questionable value, but frame generation is going to make or break these lower-end cards. Nvidia is fully counting its frame-generation technology toward the general performance uplift, and they strongly encourage you to enable it regardless of the impact on image quality. In Ada's lowest segments (such as the 4050 mobile), you are essentially expected to use DLSS 3 FG to achieve playable frame rates. Sucks to be you if the game you want to play doesn't support it; mail your dev requesting it, or just don't be poor, I guess.
FG is great - not when you are GPU-bound, but when you are CPU-bound. Hogwarts is almost unplayable without it, no matter what CPU and GPU you have. FG really shines in that game.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
FG is great - not when you are GPU-bound, but when you are CPU-bound. Hogwarts is almost unplayable without it, no matter what CPU and GPU you have. FG really shines in that game.

Shortcut to performance regardless... I'm giving it a hard pass :laugh:
 
Joined
Jun 14, 2020
Messages
2,678 (1.85/day)
System Name Mean machine
Processor 13900k
Motherboard MSI Unify X
Cooling Noctua U12A
Memory 7600c34
Video Card(s) 4090 Gamerock oc
Storage 980 pro 2tb
Display(s) Samsung crg90
Case Fractal Torent
Audio Device(s) Hifiman Arya / a30 - d30 pro stack
Power Supply Be quiet dark power pro 1200
Mouse Viper ultimate
Keyboard Blackwidow 65%
Shortcut to performance regardless... I'm giving it a hard pass :laugh:
Well, there is no other option: you either don't play the game or you activate FG. I tried overclocking my 12900K to 5.6 GHz all-core at 1.64 V; it was melting at 114°C, but Hogwarts wasn't budging - certain areas still dropped me below 60. That was on a fully tuned 12900K with manually tuned RAM. It's just one of those games...
 
Joined
Apr 14, 2022
Messages
672 (0.86/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Asus XG35VQ
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Software Windows 11 64bit
The reason they don't add more memory to consumer-grade graphics cards is a business one: they don't want businesses buying them; they want to sell their enterprise products instead. Nvidia's MSRP for the RTX 4090 is $1600, but its equivalent professional card (the RTX 6000 Ada Generation) with 48 GB costs a cool $6800.

Let's write it again.
nVidia CANNOT put large amounts of VRAM on their consumer cards, because those cards have millions of uses beyond gaming.
That's why more VRAM means a different model - A-series, Quadro - that costs 3 to 6 times more.

nVidia is extremely careful about where they put additional VRAM. For example, the 1080 Ti and 2080 Ti with 11 GB of VRAM hold up far better today than the identically performing 3070, etc. Yes, but the consumer paid $1000+ for the 2080 Ti... and you remember the gold Titan RTX, with 24 GB of VRAM at nearly 2.5 times the price of the 2080 Ti.

Yes, they put 12 GB in the 3060. The card is quite slow for the prosumer/professional, so the added VRAM does not make it attractive.
Yes, they put 16 GB in the 4060 Ti, which seems to perform 10-15% faster than the 3060 Ti, so the same issue again.

VRAM is extremely valuable on nVidia cards, so they will do everything not to give you enough unless you pay for a bigger model.
To them, it's worth more to develop a software trick that compensates for the need for loads of VRAM than to add more VRAM to their cards.
And that's what is coming next.
 
Joined
Feb 3, 2017
Messages
3,543 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
You jest, right?
There's a difference between adding extra cache in an attempt to give your cards an edge versus adding extra cache in an effort to save some money by skimping on the VRAM/bus width.
When AMD does it, it is to give your cards an edge.
When Nvidia does it, it is to skimp on VRAM/bus width.
Got it.

The xx60 class has always represented "the sweet spot," and for gamers, the sweet spot moved on from 1080p60 a long time ago.
IMO, the sweet spot has been 1440p high refresh with VRR for years now. You don't need to always get >144 fps, but an average of ~90 fps with 1% lows over 60 is a good place to be.
I think you are wrong on this one. 1080p is still the most mainstream monitor resolution, and 1440p high-refresh is still a very, very heavy use case.
Games are still getting heavier on the GPU - outside the VRAM thing - and resolutions and refresh rates are not moving on as much.
For enthusiasts, yes, 2160p@120 and even above has become a thing, but it also basically requires cards at price points that didn't exist a few generations back.
 
Joined
Sep 17, 2014
Messages
21,214 (5.98/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Half or most of that is explaining the relevance of cache and why memory bus sizes have been going down. The VRAM-size problems tie into this through the available memory chip sizes.

Btw, this is not an Nvidia thing. AMD did the large-cache thing with RDNA2 and reduced cache sizes in RDNA3 - looking at RDNA2, RDNA3 and Ada, they are trying to home in on the sweet spot of cache size and performance benefit. AMD will have the same choices in front of them, and there will be some cool marketing to accompany it.
And yet they offer an RDNA3 lineup with 20-24 GB starting at the level of Nvidia's 12 GB.

It matters a lot how these choices are timed... Nvidia has been cutting VRAM since Turing already. AMD hasn't homed in on anything just yet, but they DO use larger caches. I don't entirely believe Nvidia needs to cut it down the way it does 'for gaming', nor that AMD feels it faces the same issues wrt profitability.

Keep in mind the consoles are a major factor where RDNA finds alignment. AMD isn't going to paint itself into a corner, and we already see how they don't have those typical Nvidia struggles wrt game stability, especially when VRAM is involved.

Pricing, then. I think the pricing (of RDNA2-3, not of Ada) is fine IF you get a piece of hardware that can happily run shit for >5 years. But if the expiry date has been pulled forward to 3 years, like we see on Ampere midrange today... that's a big box of nope to me.
 
Joined
Dec 25, 2020
Messages
4,908 (3.91/day)
Location
São Paulo, Brazil
System Name Project Kairi Mk. IV "Eternal Thunder"
Processor 13th Gen Intel Core i9-13900KS Special Edition
Motherboard MSI MEG Z690 ACE (MS-7D27) BIOS 1G
Cooling Noctua NH-D15S + NF-F12 industrialPPC-3000 w/ Thermalright BCF and NT-H1
Memory G.SKILL Trident Z5 RGB 32GB DDR5-6800 F5-6800J3445G16GX2-TZ5RK @ 6400 MT/s 30-38-38-38-70-2
Video Card(s) ASUS ROG Strix GeForce RTX™ 4080 16GB GDDR6X White OC Edition
Storage 1x WD Black SN750 500 GB NVMe + 4x WD VelociRaptor HLFS 300 GB HDDs
Display(s) 55-inch LG G3 OLED
Case Cooler Master MasterFrame 700
Audio Device(s) EVGA Nu Audio (classic) + Sony MDR-V7 cans
Power Supply EVGA 1300 G2 1.3kW 80+ Gold
Mouse Microsoft Ocean Plastic Mouse
Keyboard Galax Stealth
Software Windows 10 Enterprise 22H2
Benchmark Scores "Speed isn't life, it just makes it go faster."
Well, there is no other option: you either don't play the game or you activate FG. I tried overclocking my 12900K to 5.6 GHz all-core at 1.64 V; it was melting at 114°C, but Hogwarts wasn't budging - certain areas still dropped me below 60. That was on a fully tuned 12900K with manually tuned RAM. It's just one of those games...

In that case it's the game itself, mate. A 12900K is a monster of a CPU; it should be murdering every game out there for the next 5 years at a minimum. In those cases I just lower settings or, better yet, don't play the game at all until it's either fixed or 75% off :laugh:

And yet they offer an RDNA3 lineup with 20-24 GB starting at the level of Nvidia's 12 GB.

Primarily because their workstation cards are confined to an extremely specific niche, and the market share for them is very small. Nvidia owns the visualization and creative-professional market. AMD doesn't stand to lose much by releasing Radeon cards with high VRAM, but even then, the gaming variants usually have only half the VRAM of the Radeon Pro GPUs. Nvidia just usually shaves off a little extra.

I think you are wrong on this one. 1080p is still the most mainstream monitor resolution, and 1440p high-refresh is still a very, very heavy use case.
Games are still getting heavier on the GPU - outside the VRAM thing - and resolutions and refresh rates are not moving on as much.
For enthusiasts, yes, 2160p@120 and even above has become a thing, but it also basically requires cards at price points that didn't exist a few generations back.

I've been using 1080p 60 Hz myself. A few months ago I grabbed one of Samsung's quantum-dot Frame TVs when my Sony X900F kicked it. I believe I told you guys the story - it was infested with ants. It looks fantastic and is quite comfortable to look at (despite being smaller than I'd like), and ultimately I think that's what people value most in a monitor, rather than raw Hz, resolution or whatever. Something that's enjoyable to look at.

I've been wanting to purchase a high-end LG OLED, and that is indeed going to be my next big tech purchase (I'm just skipping this generation of GPUs entirely), but honestly, there's no rush. As long as ultra settings are achievable, 4K 60 is fine, btw. No need for more.
 
Joined
Jan 3, 2021
Messages
2,819 (2.26/day)
Location
Slovenia
Processor i5-6600K
Motherboard Asus Z170A
Cooling some cheap Cooler Master Hyper 103 or similar
Memory 16GB DDR4-2400
Video Card(s) IGP
Storage Samsung 850 EVO 250GB
Display(s) 2x Oldell 24" 1920x1200
Case Bitfenix Nova white windowless non-mesh
Audio Device(s) E-mu 1212m PCI
Power Supply Seasonic G-360
Mouse Logitech Marble trackball, never had a mouse
Keyboard Key Tronic KT2000, no Win key because 1994
Software Oldwin
Nvidia, and others too, should at least start experimenting with two-bits-per-cell RAM. Am I joking? No.
 
Joined
Feb 20, 2019
Messages
7,485 (3.88/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Let's write it again.
nVidia CANNOT put large amounts of VRAM on their consumer cards, because those cards have millions of uses beyond gaming.
That's why more VRAM means a different model - A-series, Quadro - that costs 3 to 6 times more.

nVidia is extremely careful about where they put additional VRAM. For example, the 1080 Ti and 2080 Ti with 11 GB of VRAM hold up far better today than the identically performing 3070, etc. Yes, but the consumer paid $1000+ for the 2080 Ti... and you remember the gold Titan RTX, with 24 GB of VRAM at nearly 2.5 times the price of the 2080 Ti.

Yes, they put 12 GB in the 3060. The card is quite slow for the prosumer/professional, so the added VRAM does not make it attractive.
Yes, they put 16 GB in the 4060 Ti, which seems to perform 10-15% faster than the 3060 Ti, so the same issue again.

VRAM is extremely valuable on nVidia cards, so they will do everything not to give you enough unless you pay for a bigger model.
To them, it's worth more to develop a software trick that compensates for the need for loads of VRAM than to add more VRAM to their cards.
And that's what is coming next.
As someone who buys Quadro cards for the VRAM: the VRAM issue is present on Quadros too.

What Nvidia needs to do is double the RAM across the whole product range, GeForce and Quadro. If that means developing dual-rank GDDR6 controllers, then that's what they need to do, but their stagnation in VRAM capacity over the last 5 years is hurting enterprise and consumer markets alike.
 
Joined
Jan 28, 2019
Messages
10 (0.01/day)
Let me explain my spending functionality to Nvidia:
increase VRAM and lower RTX 40 series prices, or stay on the shelf.
 