What's the difference between GDDR6 and GDDR6X?
- One letter more.
System Name | Tiny the White Yeti |
---|---|
Processor | 7800X3D |
Motherboard | MSI MAG Mortar b650m wifi |
Cooling | CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3 |
Memory | 32GB Corsair Vengeance 30CL6000 |
Video Card(s) | ASRock RX7900XT Phantom Gaming |
Storage | Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB |
Display(s) | Gigabyte G34QWC (3440x1440) |
Case | Lian Li A3 mATX White |
Audio Device(s) | Harman Kardon AVR137 + 2.1 |
Power Supply | EVGA Supernova G2 750W |
Mouse | Steelseries Aerox 5 |
Keyboard | Lenovo Thinkpad Trackpoint II |
VR HMD | HD 420 - Green Edition ;) |
Software | W11 IoT Enterprise LTSC |
Benchmark Scores | Over 9000 |
It's not just different; the point I was making is that Nvidia gambled on NOT fixing the increased demand on their memory system with expensive HBM, which was also hard to source and made GPUs built on it harder to produce. Basic GDDR5 simply wasn't an option to cover the performance offered by anything as fast as or faster than Vega. Both AMD and Nvidia saw this. Big Cache apparently wasn't an option either. That's basically what I said.
The X variant is something different that entices people to want the product. If they just used basic GDDR, there would be less of a draw and, as such, lower prices.
The failed HBM experiment was a bad move on AMD's part trying to do the same, but luckily they reverted to basic GDDR afterwards.
Processor | Ryzen 7800X3D |
---|---|
Motherboard | ROG STRIX B650E-F GAMING WIFI |
Memory | 2x16GB G.Skill Flare X5 DDR5-6000 CL36 (F5-6000J3636F16GX2-FX5) |
Video Card(s) | INNO3D GeForce RTX™ 4070 Ti SUPER TWIN X2 |
Storage | 2TB Samsung 980 PRO, 4TB WD Black SN850X |
Display(s) | 42" LG C2 OLED, 27" ASUS PG279Q |
Case | Thermaltake Core P5 |
Power Supply | Fractal Design Ion+ Platinum 760W |
Mouse | Corsair Dark Core RGB Pro SE |
Keyboard | Corsair K100 RGB |
VR HMD | HTC Vive Cosmos |
What's the difference between GDDR6 and GDDR6X?
- One letter more.

PAM4 signaling.
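For context on why that one letter matters per pin, here is a rough sketch (my own illustration, not from this thread) of NRZ vs. PAM4 data rates:

```python
from math import log2

def data_rate_gbps(symbol_rate_gbaud, levels):
    # Data rate = symbol rate * bits per symbol, where bits/symbol = log2(levels).
    return symbol_rate_gbaud * log2(levels)

# GDDR6 signals with NRZ (2 voltage levels, 1 bit per symbol);
# GDDR6X signals with PAM4 (4 levels, 2 bits per symbol), doubling
# the per-pin data rate at the same symbol rate.
print(data_rate_gbps(10.5, 2))  # NRZ:  10.5 Gbps
print(data_rate_gbps(10.5, 4))  # PAM4: 21.0 Gbps
```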
You are right and you are making a really good point. For some reason I did not really look at the bandwidth similarity in this case.

Let's compare the 7800 XT and the 4070 Ti. The former uses a 256-bit bus with 19.5 Gbps GDDR6, resulting in 624.1 GB/s of total bandwidth. The latter uses a 192-bit bus with 21 Gbps GDDR6X, resulting in 504.2 GB/s. To achieve the same results with GDDR6 (non-X), Nvidia would have to use a 256-bit bus, which means a larger GPU die and more VRAM chips on a more complex PCB. GDDR6X saves costs on these fronts.
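Those bandwidth figures follow directly from bus width and per-pin rate; a quick sketch (my own, using the numbers from this thread):

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # Total bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 19.5))  # 7800 XT: 624.0 GB/s
print(bandwidth_gb_s(192, 21.0))  # 4070 Ti: 504.0 GB/s
```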
Processor | i5-6600K |
---|---|
Motherboard | Asus Z170A |
Cooling | some cheap Cooler Master Hyper 103 or similar |
Memory | 16GB DDR4-2400 |
Video Card(s) | IGP |
Storage | Samsung 850 EVO 250GB |
Display(s) | 2x Oldell 24" 1920x1200 |
Case | Bitfenix Nova white windowless non-mesh |
Audio Device(s) | E-mu 1212m PCI |
Power Supply | Seasonic G-360 |
Mouse | Logitech Marble trackball, never had a mouse |
Keyboard | Key Tronic KT2000, no Win key because 1994 |
Software | Oldwin |
Processor | 13th Gen Intel Core i9-13900KS |
---|---|
Motherboard | ASUS ROG Maximus Z790 Apex Encore |
Cooling | Pichau Lunara ARGB 360 + Honeywell PTM7950 |
Memory | 32 GB G.Skill Trident Z5 RGB @ 7600 MT/s |
Video Card(s) | Palit GameRock OC GeForce RTX 5090 32 GB |
Storage | 500 GB WD Black SN750 + 4x 300 GB WD VelociRaptor WD3000HLFS HDDs |
Display(s) | 55-inch LG G3 OLED |
Case | Cooler Master MasterFrame 700 benchtable |
Power Supply | EVGA 1300 G2 1.3kW 80+ Gold |
Mouse | Microsoft Classic IntelliMouse |
Keyboard | IBM Model M type 1391405 |
Software | Windows 10 Pro 22H2 |
Benchmark Scores | I pulled a Qiqi~ |
Thanks, Raja, for that stunt and his professional leadership of the development process. He had all the resources to foresee a bad product. He was hired precisely for the ability to foresee it just by looking at the design drawings. It isn't the memory at fault here.
GDDR6, X or not... are GPUs really that memory-speed starved? Definitely not.
HBM was made in collaboration with AMD and was first used on Fiji. So NV simply couldn't get it before Fiji launched (IF they REALLY wanted it). Aside from that, in 2015, HBM 1.0 was probably too much hassle for NV to make work, on top of being very capacity-limited AND not really needed at that point (since HBM didn't help AMD beat Maxwell 2.0).
Vega 10 was made with HBM2 in mind, and you can't simply switch memory tech midway because "it's too expensive to implement". Also, Vega 10 needed HBM2 to not blow past its power budget too fast (as every MHz was needed to counter Pascal's price/performance).
Titan V was released in December 2017, and it was at least 50% cheaper than the previous HBM2 GPU (Quadro GP100). (/s)
And it is faster than Vega 20, which launched a bit over a year later (even with one HBM2 stack disabled vs. Vega 20).
System Name | My second and third PCs are Intel + Nvidia |
---|---|
Processor | AMD Ryzen 7 7800X3D @ 45 W TDP Eco Mode |
Motherboard | MSi Pro B650M-A Wifi |
Cooling | Noctua NH-D9L chromax.black |
Memory | 2x 24 GB Corsair Vengeance DDR5-6000 CL36 |
Video Card(s) | PowerColor Reaper Radeon RX 9070 XT |
Storage | 2 TB Corsair MP600 GS, 4 TB Seagate Barracuda |
Display(s) | Dell S3422DWG 34" 1440 UW 144 Hz |
Case | Corsair Crystal 280X |
Audio Device(s) | Logitech Z333 2.1 speakers, AKG Y50 headphones |
Power Supply | 750 W Seasonic Prime GX |
Mouse | Logitech MX Master 2S |
Keyboard | Logitech G413 SE |
Software | Bazzite (Fedora Linux) KDE Plasma |
It's not exactly the same bandwidth, 624 vs. 504 GB/s (even with GDDR6X), but I get your point.

PAM4 signaling.

You are right and you are making a really good point. For some reason I did not really look at the bandwidth similarity in this case.

As a side note, this only makes me want to see someone do a real head-to-head with the 4070 Ti and 7800 XT. 60 CUs/SMs and similar memory bandwidth; this is as close a comparison of RDNA3 vs. Ada as we can get. Forcing both to run at the same/similar clock speed would give a cool comparison point. I bet RDNA3 and Ada are pretty even in performance architecturally.
System Name | Good enough |
---|---|
Processor | AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge |
Motherboard | ASRock B650 Pro RS |
Cooling | 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30 |
Memory | 32GB - FURY Beast RGB 5600 Mhz |
Video Card(s) | Sapphire RX 7900 XT - Alphacool Eisblock Aurora |
Storage | 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB |
Display(s) | LG UltraGear 32GN650-B + 4K Samsung TV |
Case | Phanteks NV7 |
Power Supply | GPS-750C |
I mean, he was the guy in charge. GCN was clearly compute-oriented and stayed that way throughout its lifespan, as it was used as the basis for CDNA, an architecture literally designed to be compute-only. They could have made Vega much faster at raster by simply adding more ROPs and restructuring the CUs, for example; they chose not to. It's only natural to assume that at the very least most of those choices were his doing.

I don't agree with blaming Raja Koduri like a boogeyman.
I mean, he was the guy in charge. GCN was clearly compute-oriented and stayed that way throughout its lifespan, as it was used as the basis for CDNA, an architecture literally designed to be compute-only. They could have made Vega much faster at raster by simply adding more ROPs and restructuring the CUs, for example; they chose not to. It's only natural to assume that at the very least most of those choices were his doing.
System Name | HELLSTAR |
---|---|
Processor | AMD RYZEN 9 5950X |
Motherboard | ASUS Strix X570-E |
Cooling | 2x 360 + 280 rads. 3x Gentle Typhoons, 3x Phanteks T30, 2x TT T140 . EK-Quantum Momentum Monoblock. |
Memory | 4x8GB G.SKILL Trident Z RGB F4-4133C19D-16GTZR 14-16-12-30-44 |
Video Card(s) | Sapphire Pulse RX 7900XTX. Water block. Crossflashed. |
Storage | Optane 900P[Fedora] + WD BLACK SN850X 4TB + 750 EVO 500GB + 1TB 980PRO+SN560 1TB(W11) |
Display(s) | Philips PHL BDM3270 + Acer XV242Y |
Case | Lian Li O11 Dynamic EVO |
Audio Device(s) | SMSL RAW-MDA1 DAC |
Power Supply | Fractal Design Newton R3 1000W |
Mouse | Razer Basilisk |
Keyboard | Razer BlackWidow V3 - Yellow Switch |
Software | FEDORA 41 |
I don't think it's that simple; you can't just add more ROPs and expect that to fix the other issues of the architecture.
Vega clearly had the shading power; something else was the bottleneck in the pipeline, and that was likely the number of ROPs. Both the 1080 and Vega 64 had 64 ROPs, so it was clear Vega was never going to be faster than that.

I don't think it's that simple; you can't just add more ROPs and expect that to fix the other issues of the architecture.
Vega clearly had the shading power; something else was the bottleneck in the pipeline, and that was likely the number of ROPs. Both the 1080 and Vega 64 had 64 ROPs, so it was clear Vega was never going to be faster than that.
RDNA1 also had 64 ROPs and, what do you know, it was still not much faster than Vega. Then RDNA2 came along with a CU not much different from RDNA1's, but now with double the ROPs, and boom, up to 2.5x faster than Vega.
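The ROP argument boils down to peak pixel fillrate, which scales linearly with ROP count and clock. A rough sketch (approximate boost clocks, my own illustration, not figures from this thread):

```python
def pixel_fillrate_gpix_s(rops, clock_ghz):
    # Peak pixel fillrate = ROP count * clock (one pixel per ROP per cycle).
    return rops * clock_ghz

# Approximate boost clocks, for illustration only: doubling ROPs
# (plus higher clocks) is what lifts the raster ceiling.
print(pixel_fillrate_gpix_s(64, 1.55))   # Vega 64-class:  ~99 Gpixel/s
print(pixel_fillrate_gpix_s(128, 2.25))  # RDNA2-class:   ~288 Gpixel/s
```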
It actually is. We can see it in how cut-down GPUs sold as different SKUs perform. The tough part is turning that into silicon, plus the R&D cost of making such a specific design; i.e., it is planning. But the math remains simple.
After the Vega launch and the directors' meeting, he packed his stuff and went on sabbatical. You get the idea? He basically admitted it.
I don't agree that memory speed actually matters that much. There has always been an architectural sweet spot where further gains are minimal... basically, if you design with that in mind, the penalty of using slower memory is counterweighed by it being cheaper and more available. It is all about balancing.
What you don't do is make a product without the faster memory it needs and claim it can cope anyway because of caching etc... that was a pretty low bar...