
NVIDIA GeForce RTX 50-series "Blackwell" to use 28 Gbps GDDR7 Memory Speed

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
46,476 (7.66/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
The first round of NVIDIA GeForce RTX 50-series "Blackwell" graphics cards implementing GDDR7 memory is rumored to come with a memory speed of 28 Gbps, according to kopite7kimi, a reliable source for NVIDIA leaks. This is despite the fact that the first GDDR7 memory chips will be capable of 32 Gbps speeds. NVIDIA will also stick with 16 Gbit densities for its GDDR7 memory chips, which means memory sizes could remain largely unchanged for the next generation. The 28 Gbps GDDR7 chips provide 55% higher bandwidth than 18 Gbps GDDR6 and 33% higher bandwidth than 21 Gbps GDDR6X. It remains to be seen what memory bus widths NVIDIA chooses for its individual SKUs.

NVIDIA's decision to use 28 Gbps as its memory speed has some precedent in recent history. The company's first GPUs to implement GDDR6, the RTX 20-series "Turing," opted for 14 Gbps speeds despite 16 Gbps GDDR6 chips being available; 28 Gbps is exactly double that speed. Future generations of GeForce RTX GPUs, or even refreshes within the RTX 50-series, could see NVIDIA opt for higher memory speeds such as 32 Gbps. Samsung even plans to offer chips as fast as 36 Gbps when the standard debuts. Besides a generational doubling in speed, GDDR7 is more energy-efficient, as it operates at lower voltages than GDDR6. It also uses more advanced PAM3 physical-layer signaling, compared to NRZ for JEDEC-standard GDDR6.
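As a quick illustration of the bandwidth figures above: total memory bandwidth is per-pin speed times bus width. The 256-bit bus in this sketch is purely illustrative, since actual RTX 50-series bus widths are not yet known.

```python
# Memory bandwidth (GB/s) = per-pin speed (Gbps) x bus width (bits) / 8.
# The 256-bit bus below is illustrative; actual RTX 50-series
# bus widths are not yet known.
def memory_bandwidth_gbs(speed_gbps: float, bus_width_bits: int) -> float:
    return speed_gbps * bus_width_bits / 8

for label, speed in [("18 Gbps GDDR6", 18), ("21 Gbps GDDR6X", 21), ("28 Gbps GDDR7", 28)]:
    print(f"{label} @ 256-bit: {memory_bandwidth_gbs(speed, 256):.0f} GB/s")

# 28/18 = ~1.556 (55% more) and 28/21 = ~1.333 (33% more),
# matching the generational uplift quoted above.
```

The ratios are independent of bus width, which is why the 55%/33% uplift holds for any SKU that keeps the same bus as its predecessor.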



View at TechPowerUp Main Site | Source
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,453 (4.70/day)
Location
Kepler-186f
Processor Ryzen 7800X3D -30 uv
Motherboard AsRock Steel Legend B650
Cooling MSI C360 AIO
Memory 32gb 6000 CL 30-36-36-76
Video Card(s) MERC310 7900 XT -60 uv +150 core
Display(s) NZXT Canvas IPS 1440p 165hz 27"
Case NZXT H710 (Red/Black)
Audio Device(s) HD58X, Asgard 2, Modi 3
Power Supply Corsair RM850W
It will be interesting to see benchmarks. My guess is GDDR7 will help 4K and 3440x1440 gamers, but any resolution below that may not benefit from the extra speed of GDDR7. Nonetheless, it's nice to see progress still. We are lucky this industry even exists and they didn't just force cloud gaming down our throats; that day will come, but thankfully it's not today.
 

ARF

Joined
Jan 28, 2020
Messages
4,044 (2.57/day)
Location
Ex-usa
If AMD doesn't return with a large monolithic GPU, you will see how nvidia will charge $2900 for a GB103-based RTX 5080 that is 10-20% faster than the RTX 4090, and won't even bother to release the much larger GB102 in an RTX 5090.

Let's hope AMD and nvidia don't intentionally align their lineups once again, so that the Radeon RX 8700 XT is 10% faster than the RX 7800 XT, and the RTX 5060 is 5% faster than the RTX 4060.
That would be a disaster, but at the same time gamers will save some money, because those lineups will make the negative buying decisions much easier.
 
Joined
Jan 19, 2023
Messages
230 (0.47/day)
They should not return to monolithic design. They have the advantage of having already released chiplet-based GPUs for both the consumer and server markets. Sooner or later nvidia will also switch to a chiplet design, but design and actual release are two different things. It remains to be seen whether AMD will use that advantage.
 
Joined
Jul 15, 2020
Messages
981 (0.70/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.025mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F26 with "Instant 6 GHz" on
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 85%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
So one can expect the same amount of GDDR per GPU tier as the current gen.
Lovely.
We can all keep enjoying the endless ‘not enough memory vs it just allocating’ and ‘you want to be future proof vs by then your fps will tank anyway’ debates for another episode.

Also, NV must be keeping the 32 Gbps variant for the upcoming ‘super’ 5xxx series.
 
Joined
May 11, 2018
Messages
998 (0.45/day)
If AMD doesn't return with a large monolithic GPU, you will see how nvidia will charge $2900 for a GB103-based RTX 5080 that is 10-20% faster than the RTX 4090, and won't even bother to release the much larger GB102 in an RTX 5090.

Let's hope AMD and nvidia don't intentionally align their lineups once again, so that the Radeon RX 8700 XT is 10% faster than the RX 7800 XT, and the RTX 5060 is 5% faster than the RTX 4060.
That would be a disaster, but at the same time gamers will save some money, because those lineups will make the negative buying decisions much easier.

I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without public outrage, and one option is of course:

- an approx. $2000 RTX 5080, which will be faster than the RTX 4090 ("so you're getting your money's worth", reviewers will be paid to say)

- a tier above that, a new "Titan" class card, aimed at "Home AI acceleration". Price? Sky is the limit.
 
Joined
Apr 9, 2013
Messages
231 (0.06/day)
Location
Chippenham, UK
System Name Hulk
Processor 7800X3D
Motherboard Asus ROG Strix X670E-F Gaming Wi-Fi
Cooling Custom water
Memory 32GB 3600 CL18
Video Card(s) 4090
Display(s) LG 42C2 + Gigabyte Aorus FI32U 32" 4k 120Hz IPS
Case Corsair 750D
Power Supply beQuiet Dark Power Pro 1200W
Mouse SteelSeries Rival 700
Keyboard Logitech G815 GL-Tactile
VR HMD Quest 2
My hopes are not high for the 5000 series, given the attention being paid to making AI HW atm & AMD not competing at the top end this time round :( On the plus side, my 4090 will remain relevant for longer, I guess.

I was just wondering how Nvidia will pull off a +50% performance / +50% price increase this generation without public outrage, and one option is of course:

- an approx. $2000 RTX 5080, which will be faster than the RTX 4090 ("so you're getting your money's worth", reviewers will be paid to say)

- a tier above that, a new "Titan" class card, aimed at "Home AI acceleration". Price? Sky is the limit.
I don't think even Nvidia would be so bold as to jump to $2k for the 5080. I agree with your "just faster than the 4090" call though; my guess is just under 4090 MSRP with 10% more performance, so Jensen can pretend he's our friend.
 
Low quality post by Bwaze
Joined
May 11, 2018
Messages
998 (0.45/day)
I don't think even Nvidia would be so bold as to jump to $2k for the 5080. I agree with your "just faster than the 4090" call though; my guess is just under 4090 MSRP with 10% more performance, so Jensen can pretend he's our friend.

But it's the logical conclusion of Jensen's "Moore's Law Is Dead" paradigm.

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, RTX 1080 - $28965
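The list above compounds the roughly +70% jump from the $700 RTX 3080 to the $1200 RTX 4080 forward each generation. A quick sketch approximately reproduces it (later entries drift by a dollar or two depending on where the rounding happens):

```python
# Compound the ~70% per-generation price jump ($700 -> $1200 was ~1.71x,
# rounded here to 1.7) forward from the RTX 4080's $1200 MSRP.
# Later entries differ from the list above by a dollar or two of rounding.
price = 1200
for year in range(2024, 2036, 2):
    price = round(price * 1.7)
    print(year, price)
```

Of course this is satire, not a forecast: nothing forces MSRP to follow a fixed geometric progression.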
 
Low quality post by HOkay
Joined
Apr 9, 2013
Messages
231 (0.06/day)
Location
Chippenham, UK
System Name Hulk
Processor 7800X3D
Motherboard Asus ROG Strix X670E-F Gaming Wi-Fi
Cooling Custom water
Memory 32GB 3600 CL18
Video Card(s) 4090
Display(s) LG 42C2 + Gigabyte Aorus FI32U 32" 4k 120Hz IPS
Case Corsair 750D
Power Supply beQuiet Dark Power Pro 1200W
Mouse SteelSeries Rival 700
Keyboard Logitech G815 GL-Tactile
VR HMD Quest 2
But it's the logical conclusion of Jensen's "Moore's Law Is Dead" paradigm.

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, RTX 1080 - $28965
That's fine, as long as my income scales in the same way :D
 

ARF

Joined
Jan 28, 2020
Messages
4,044 (2.57/day)
Location
Ex-usa
They should not return to monolithic design.

Monolithic is better.
 

3x0

Joined
Oct 6, 2022
Messages
898 (1.52/day)
Processor AMD Ryzen 7 5800X3D
Motherboard MSI MPG B550I Gaming Edge Wi-Fi ITX
Cooling Scythe Fuma 2 rev. B Noctua NF-A12x25 Edition
Memory 2x16GiB G.Skill TridentZ DDR4 3200Mb/s CL14 F4-3200C14D-32GTZKW
Video Card(s) PowerColor Radeon RX7800 XT Hellhound 16GiB
Storage Western Digital Black SN850 WDS100T1X0E-00AFY0 1TiB, Western Digital Blue 3D WDS200T2B0A 2TiB
Display(s) Dell G2724D 27" IPS 1440P 165Hz, ASUS VG259QM 25” IPS 1080P 240Hz
Case Cooler Master NR200P ITX
Audio Device(s) Altec Lansing 220, HyperX Cloud II
Power Supply Corsair SF750 Platinum 750W SFX
Mouse Lamzu Atlantis Mini Wireless
Keyboard HyperX Alloy Origins Aqua
Joined
Nov 27, 2023
Messages
1,166 (6.70/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Theoretically, if we are talking in a vacuum, sure. It's also unsustainable in terms of yields as designs get denser and more complex. We've already seen it with the AD102. There is a good reason AMD is on chiplets, Intel is moving to chiplets, and for NV it's a question of when, not if.
 
Joined
Sep 2, 2014
Messages
650 (0.18/day)
Location
Scotland
Processor 5800x
Motherboard b550-e
Cooling full - custom liquid loop
Memory cl16 - 32gb
Video Card(s) 6800xt
Storage nvme 1TB + ssd 750gb
Display(s) xg32vc
Case hyte y60
Power Supply 1000W - gold
Software 10
In the last 30 days I've seen 4 or 5 posts saying the same thing over and over and over again.

Now every time nvidia farts we get a new post..
 
Joined
Jul 13, 2016
Messages
2,890 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10

No, chiplet is 100% better. For starters, there are papers demonstrating that a chiplet-based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point-to-point communication layer, which will be superior to a monolithic design that has to route wires around CPU features, particularly as complexity scales up.

A monolithic chip is individually smaller, but that's essentially irrelevant given that chiplet designs only grow horizontally. It does not make the chip extend beyond the height of the caps, for example, and does not increase size requirements for devices. The larger chiplet-based design would be easier to cool as well. Of course, both of the above depend on the exact chiplet design; Intel's chiplets, for example, sit much closer together, so the size difference between them and a monolithic design is going to be smaller.

Of course, chiplets also allow modularity, superior binning (each individual chiplet on a package can be binned), chips that exceed the reticle limit, cheaper production, and higher yields compared to the same chip built as a monolithic design.
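The yield claim can be illustrated with a simple Poisson defect model; the defect density and die areas below are made-up illustrative numbers, not real foundry data:

```python
import math

# Simple Poisson defect model: yield = exp(-defect_density * die_area).
# Defect density and die areas are illustrative, not real foundry figures.
def die_yield(area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    return math.exp(-defects_per_mm2 * area_mm2)

monolith = die_yield(600)   # one large 600 mm^2 die
chiplet = die_yield(150)    # one of four 150 mm^2 chiplets covering the same area

# The chance that all four chiplets are good equals the monolith's yield
# (exp(-0.15)^4 == exp(-0.6)), but a single defect scraps only 150 mm^2
# of silicon instead of the whole 600 mm^2 die: bad chiplets are binned
# out individually.
print(f"monolith: {monolith:.2%}, single chiplet: {chiplet:.2%}")
```

The exponential means the advantage grows as dies approach the reticle limit, which is the scaling argument made above.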

AMD has 3 chiplet designs for its entire CPU lineup: the IO die, the Zen 4 core die, and the Zen 4c core die. Meanwhile, Intel needs dozens of designs to address the same markets.

This is why Intel is switching to chiplets; it's just better.
 
Joined
Dec 31, 2020
Messages
785 (0.64/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFROCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
That was the case with 7 Gbps GDDR5 as well, for low cost. Expect the 60-series with 28-32 Gbps and the 70-series with 42 Gbps,
unless they come up with a GDDR7X again, like G5X and G6X before it, just for a buzzword and otherwise a nothingburger.
 

ARF

Joined
Jan 28, 2020
Messages
4,044 (2.57/day)
Location
Ex-usa
We already have seen it with the AD102.

Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.
Please, don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work, and they leave large portions of performance on the table.

Meanwhile Intel needs dozens of designs to address the same markets.

That's because Intel is a large corporation and can afford it. I, for one, also support the idea that it should cut at least 50% of its projects, because they are not necessary and instead waste time and money.
 
Joined
Jun 11, 2019
Messages
492 (0.27/day)
Location
Moscow, Russia
Processor Intel 12600K
Motherboard Gigabyte Z690 Gaming X
Cooling CPU: Noctua NH-D15S; Case: 2xNoctua NF-A14, 1xNF-S12A.
Memory Ballistix Sport LT DDR4 @3600CL16 2*16GB
Video Card(s) Palit RTX 4080
Storage Samsung 970 Pro 512GB + Crucial MX500 500gb + WD Red 6TB
Display(s) Dell S2721qs
Case Phanteks P300A Mesh
Audio Device(s) Behringer UMC204HD
Power Supply Fractal Design Ion+ 560W
Mouse Glorious Model D-
- a tier above that, a new "Titan" class card, aimed at "Home AI acceleration". Price? Sky is the limit.
As much as I personally want a card like that to happen, there are two points that make it highly unlikely:
1) They already have the sky-high-priced RTX 6000 Ada, which is an almost fully unlocked AD102 with 48GB - exactly what one would want for "home AI acceleration." They're charging $7,000 for it and it's consistently out of stock.
2) They can charge even more by allocating their production towards dedicated AI chips! Those go for 15k+ while the actual die size is around the same. Sure, packaging costs, even more memory and all that but, you know, that's where the margins are.

At the end of the day, I wouldn't be too surprised if they stopped bothering with gaming altogether. IMHO this might end up even worse for the home GPU market than mining: that stuff went in cycles, miners weren't paying 20k for a card even at the worst moments, and they wanted the same product we do. AI buyers want a different product, and they can pay the price for it, hogging all the supply they can get. If they keep doing that, where's the incentive to sell us cards or make them better? Even the high end will become like the mid- and low-end already have, with measly +5-10% boosts every couple of years.
 
Joined
Dec 5, 2020
Messages
169 (0.13/day)
No, chiplet is 100% better. For starters there are papers demonstrating that a chiplet based architecture with an active interposer can achieve lower latency than a monolithic design (like the one done by the University of Toronto). An active interposer gives the chip a dedicated point to point communication layer, which will be superior to a monolithic design that has to route wires around CPU features particularly as complexity scales up.
And HBM is better than GDDR6X. And graphene is better than silicon. For regular consumers, chiplets are purely a cost-saving measure with performance/efficiency downsides. Although I'm not even sure they're cheaper to make GPU-wise, as I'm sure the 7900 XTX costs more in chips than the 4080. We don't know if chiplets are to blame for RDNA3's shortcomings, though.

Of course, at some point chiplets will become a necessity because of rising costs, but I don't think of that as a positive for consumers. It's going to be all about increasing margins.

AMD has 3 chiplet designs for it's entire CPU lineup: The IO die, Zen 4 core die, and the Zen 4c core die.
That's not really true, is it? AMD also uses monolithic CPUs.
 
Joined
Feb 23, 2019
Messages
5,665 (2.96/day)
Location
Poland
Processor Ryzen 7 5800X3D
Motherboard Gigabyte X570 Aorus Elite
Cooling Thermalright Phantom Spirit 120 SE
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E @ 3800 CL16
Video Card(s) RTX3080 Ti FE
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) LG 34GN850P-B
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse Endgame Gear XM1R
Keyboard Wooting Two HE
Joined
Jul 13, 2016
Messages
2,890 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
Meanwhile, Navi 31 has a hard time keeping up with AD103, which is much smaller and less expensive to make.

Is it? Where are you getting this info?

The GCD of Navi 31 is smaller and the MCDs are tiny; for all we know, it could very well be cheaper even if the total die area used is higher.

Please, don't tell me fairy tales. Why did AMD abandon the failed Navi 41? Because the chiplet approach doesn't work and they leave large portions of performance on the table.

I'd like to point out that AMD is beating Intel in the server and enterprise space and has the fastest consumer desktop processors as well. To me it seems you are purposefully ignoring things that don't favor your point.

AMD pointed out why they could not yet scale up their GPUs: bandwidth.

That's because Intel is a large corporation and it can afford it. I, for one, also support the idea that it must cut at least 50% of the projects because they are not necessary and instead waste time and money.

You are implying that a company wastes money for the fun of it. I'm sure their CEO and shareholders would strongly disagree.

And HBM is better than GDDR6x. And Graphene is better than Silicium. For regular consumers chiplets are purely a cost saving measure with performance/efficiency downsides.


The X3D CPUs absolutely prove this false. The binning of the 5950X does as well.

Although I'm not even sure if they're cheaper to make GPU-wise as I'm sure the 7900XTX is more expensive in chip cost than the 4080. We don't know if chiplets are to blame though for RDNA3's shortcomings.

The 7900 XTX has a GCD of 304 mm² and each MCD is 34 mm². The cost is similar to that of a mid-range GPU. RDNA3 doesn't reach 4090-level performance because AMD was unable to get a 2nd GCD working.

I bet they failed to Google this at Nvidia. You should mail it to them.

Nvidia wrote a paper in 2017 about how chiplets are better, FYI.
 
Joined
Nov 27, 2023
Messages
1,166 (6.70/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Meanwhile Navi 31 has a hard time to keep up with AD103. That is much smaller and less expensive to make.
So are we legitimately comparing the first consumer GPU to use chiplets against a technology at its peak (the monolithic GPU) and immediately concluding that chiplets are worthless? That's cool. Should I remind you that the OG Zen also had a "hard time" keeping up with the 7700K in many tasks? Guess that chiplet approach was worthless too.
 
Joined
Feb 23, 2019
Messages
5,665 (2.96/day)
Location
Poland
Processor Ryzen 7 5800X3D
Motherboard Gigabyte X570 Aorus Elite
Cooling Thermalright Phantom Spirit 120 SE
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E @ 3800 CL16
Video Card(s) RTX3080 Ti FE
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) LG 34GN850P-B
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse Endgame Gear XM1R
Keyboard Wooting Two HE
I bet they failed to Google this at Nvidia. You should mail it to them.
I guess that's why Nvidia GPUs are becoming cheaper with every generation, right? Right?

 

bug

Joined
May 22, 2015
Messages
13,279 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
I bet they failed to Google this at Nvidia. You should mail it to them.
Monolithic is better for efficient designs; multi-chip is better for scaling. There is a scaling threshold beyond which you need to go multi-chip. The thing is, GPUs right now are right on the edge: monolithic is becoming expensive to build, but multi-chip isn't justified just yet.

Nvidia can afford to build expensive monoliths, because AI will gobble up anything (and likes efficiency). AMD... just tries to play catch-up.
 