I've been brainstorming around AMD's Navi architecture, particularly the "scalability" part they mention, which suggests multi-GPU setups on a single PCB. So, I've expanded on that a bit...
We know that creating huge GPUs is very costly, mostly because of the defective dies you get from the wafers.
More small GPUs fit on a single wafer, so you get more of them and a smaller fraction are defective -> cheaper.
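That yield argument can be sketched with the classic Poisson yield model, Y = exp(-D * A), where D is defect density and A is die area. The defect density and die sizes below are made-up example numbers purely for illustration, not real process data:

```python
import math

# Assumed example numbers, not real process data.
WAFER_AREA_CM2 = math.pi * (30 / 2) ** 2  # 300 mm wafer, ignoring edge loss
DEFECT_DENSITY = 0.2                       # defects per cm^2 (assumed)

def good_dies(die_area_cm2):
    """Approximate count of defect-free dies per wafer (Poisson yield model)."""
    candidates = WAFER_AREA_CM2 / die_area_cm2      # ignores edge/scribe losses
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_cm2)
    return candidates * yield_fraction

big = good_dies(6.0)    # one 600 mm^2 monolithic GPU
small = good_dies(1.5)  # 150 mm^2 small die; four would match the big one

print(f"good big dies per wafer:   {big:.0f}")
print(f"good small dies per wafer: {small:.0f}")
```

Even counting four small dies as one "GPU's worth" of silicon, the small dies come out well ahead, because yield falls off exponentially with die area while the candidate count only grows linearly as dies shrink.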
Combining more GPUs the way we do it now requires SLI/CrossFireX profiles, which are always a pain in the ass, and it just doesn't scale well.
Now, this is pure brainstorming with very little actual electronics knowledge...
Would it be possible to design smaller GPUs in such a way that you could put 4 or 6 of them on a single PCB, but have them present themselves to the system and behave as a single GPU, without ANY software or driver feature for that? I mean, having separate GPU chips that work together as a single physical GPU, not as a multi-GPU setup tied together with SLI/CrossFire software.
Imagine its internal compute units being connected to each other, but spread across multiple physical chips.
Or, to go even further, separating the compute units from the memory controller and decoders and having them as individual chips on the card. Would such a radical change to the way graphics cards are designed even be theoretically possible?
This would bring:
- Cheaper high-end graphics, thanks to simply stacking slower, smaller cores
- No scaling issues
- Decentralized heat output (multiple moderate hot spots, as opposed to the current single, highly concentrated one)
What I suspect the limitations would be:
- Latency between GPUs once the signal leaves the actual chip
- Memory connections & cache design