I've been doing some brainstorming around AMD's Navi architecture, particularly the "scalability" part they mention, which suggests multi-GPU setups on a single PCB. So I've expanded on that a bit...
We know creating huge GPUs is very costly, mostly because of the defective dies you get from the wafers.
More small GPUs fit on a single wafer, meaning you get more of them and fewer defective ones -> cheaper.
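Just to put rough numbers on that yield argument, here's a quick back-of-the-envelope sketch in Python. The defect density, die sizes and the dies-per-wafer formula are assumptions on my part, purely for illustration, not real process data:

```python
import math

# Rough dies-per-wafer + yield sketch (all numbers are assumed, for illustration only).
# Poisson yield model: yield = exp(-defect_density * die_area)

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed defect density (~0.1 defects per cm^2)

def dies_per_wafer(die_area_mm2):
    """Crude estimate: wafer area divided by die area, minus an edge-loss term."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2):
    """Fraction of dies expected to have zero defects."""
    return math.exp(-DEFECT_DENSITY_PER_MM2 * die_area_mm2)

for die_area in (600.0, 150.0):  # one big GPU vs. a small chiplet-sized die
    total = dies_per_wafer(die_area)
    good = total * poisson_yield(die_area)
    print(f"{die_area:.0f} mm^2 die: {total} candidates, ~{good:.0f} good per wafer")
```

With these made-up numbers you end up with a lot more good small dies per wafer than good big ones, even after you account for needing four small dies to match one big GPU. That's the whole point of the "small chips, stack many" idea.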
Fitting more GPUs the way we do it now requires SLI/CrossFireX profiles, which are always a pain in the ass, and it just doesn't scale well.
Now, this is pure brainstorming with very little actual electronics knowledge...
Would it be possible to design smaller GPUs in such a way that you could stack 4 or 6 of them on a single PCB, but they would present themselves to the system and behave as a single GPU, without the use of ANY software or driver feature for that? I mean, having multiple separate GPU chips that work together as a single physical GPU, not as a multi-GPU setup tied together with SLI/Crossfire software.
Imagine its internal compute units being connected to each other, but spread across multiple actual chips.
Or, to go even further, separating the compute units from the memory controller and decoders and having them as individual chips on the card. Would it even be theoretically possible to make such a radical change to the way graphics cards are designed?
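To make the "behaves as a single GPU" part a bit more concrete, here's a purely conceptual Python sketch. None of this is real hardware or driver code, and every class and number in it is made up; it just shows the idea that the system only ever talks to one logical device, and the split across dies happens below that interface:

```python
from dataclasses import dataclass

# Purely conceptual sketch: a "logical GPU" that hides several physical dies
# behind a single interface, so nothing above it needs SLI/CrossFire profiles.
# All classes and numbers here are hypothetical, just to illustrate the idea.

@dataclass
class Die:
    die_id: int
    compute_units: int

    def run_tiles(self, tiles):
        # Pretend each die renders its share of screen tiles independently.
        return [f"tile {t} rendered on die {self.die_id}" for t in tiles]

class LogicalGPU:
    """What the rest of the system sees: one device, one pool of compute units."""

    def __init__(self, dies):
        self.dies = dies

    @property
    def compute_units(self):
        # The system just sees the sum of all CUs, as if it were one chip.
        return sum(d.compute_units for d in self.dies)

    def render_frame(self, tile_count):
        # Split the frame's tiles round-robin across dies -- internally,
        # not via any application-visible multi-GPU mode.
        results = []
        for i, die in enumerate(self.dies):
            share = [t for t in range(tile_count) if t % len(self.dies) == i]
            results.extend(die.run_tiles(share))
        return results

gpu = LogicalGPU([Die(i, 16) for i in range(4)])  # 4 small dies, 64 CUs total
print(gpu.compute_units)          # system sees 64 CUs, one GPU
print(len(gpu.render_frame(32)))  # 32 tiles, spread across the dies transparently
```

The hard part, of course, is that in real silicon this "splitting" has to happen in the hardware scheduler and the interconnect between dies, not in a neat little Python class.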
This would bring:
- Cheaper high-end graphics due to simple stacking of smaller, slower cores
- No scaling issues
- Decentralized heat output (multiple moderate hot spots as opposed to the current single, highly concentrated one)
What I suspect the limitations are:
- Latency between GPUs once the signal leaves the actual chip
- Memory connections & cache design
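On the latency point, here's a rough back-of-the-envelope comparison. The hop latencies, access counts and the "fraction that has to cross dies" are pure guesses on my part, just to show why off-chip traffic is the scary part of splitting one GPU across several chips:

```python
# Back-of-the-envelope latency comparison, on-die vs. off-die hops.
# All numbers are assumed/guessed, purely to illustrate the order-of-magnitude gap.

ON_DIE_HOP_NS = 2.0             # assumed on-die crossbar/cache hop
OFF_DIE_HOP_NS = 40.0           # assumed hop over a package/PCB interconnect
ACCESSES_PER_FRAME = 1_000_000  # assumed cross-unit accesses per frame
CROSS_DIE_FRACTION = 0.25       # assumed share of accesses that must leave the die

single_die_ms = ACCESSES_PER_FRAME * ON_DIE_HOP_NS / 1e6
split_ms = (ACCESSES_PER_FRAME * (1 - CROSS_DIE_FRACTION) * ON_DIE_HOP_NS
            + ACCESSES_PER_FRAME * CROSS_DIE_FRACTION * OFF_DIE_HOP_NS) / 1e6

print(f"single die:  ~{single_die_ms:.1f} ms of accumulated hop latency per frame")
print(f"4-die split: ~{split_ms:.1f} ms of accumulated hop latency per frame")
```

In reality a lot of that latency would be hidden by having tons of threads in flight, so it's not a straight serial sum like this, but it shows why the interconnect between dies (and how the memory and caches hang off it) would make or break the whole concept.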