Isn't that what they're doing with their Radeon Instinct accelerator cards these days? The MI250X, I'm pretty sure, has two dies that the software sees as one, and I think they've had some pretty good success with it.
It's a very valid strategy for compute, which has a wholly different set of requirements to real-time graphics.
To massively oversimplify things, GPU compute prioritises completing the task as fast as possible (rendering 60 frames of a 3D simulation, for example) with no real regard for when the frames are rendered, or what order they're rendered in. As long as all 60 frames are rendered, the result is good, even if they're out of sequence and the GPU spent most of the time producing no frames at all, and then spat out 60 frames all at once.
Real-time graphics require a GPU to produce those 60 frames in strict sequential order, and to do so as evenly as possible. So while there is a lot of parallelism in the actual calculations per frame, a gaming GPU cannot take as much advantage of parallelism between different frames.

Massively oversimplifying again: if you had a coastal scene with land, sea, and sky, a compute card could process all 60 frames of sky first, batch-processing similar functions for each part of the sky, maximising cache efficiency and reusing anything that doesn't change from frame to frame. Once it has done all the work it needs on the sky for all 60 frames, it can free up the cache, wipe the slate clean, and do the same sort of thing for the sea in all 60 frames. Rinse and repeat for the land, and after looking like it had hung for 0.3 seconds doing nothing, you'd get all 60 frames at once, during a single monitor refresh. With vsync on, you'd get one frame displayed and 59 frames wasted. With vsync off, you'd get 59 thin slices of animation, displayed in effectively random order depending on which frame's final operations finished last. The net result of 60 frames in 0.3 seconds is an impressive 200 frames per second average, but gaming on that would obviously be unplayable.
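If you want to see how that plays out in numbers, here's a rough toy model in Python. To be clear, all the per-layer timings and the batching discount are invented purely for illustration; the point is just that batching everything wins on average throughput while delivering every frame at the same instant, whereas frame-by-frame hands one over every few milliseconds:

```python
# Toy model: 60 frames, each built from three layers (sky, sea, land).
# Batching one layer across all frames is cheaper per frame (cache reuse),
# but no frame is *finished* until the very last layer pass completes.
# All timings are made up for illustration.

FRAMES = 60
LAYER_COST_MS = {"sky": 2.0, "sea": 3.0, "land": 4.0}  # per frame, frame-by-frame
BATCH_DISCOUNT = 0.6  # assume batching makes each layer ~40% cheaper

# Compute-style: process each layer for all 60 frames before moving on.
batch_total = sum(LAYER_COST_MS.values()) * BATCH_DISCOUNT * FRAMES
batch_done_at = [batch_total] * FRAMES  # every frame completes at the same instant

# Real-time-style: finish frame 1 completely, then frame 2, and so on.
per_frame = sum(LAYER_COST_MS.values())
realtime_done_at = [per_frame * (i + 1) for i in range(FRAMES)]

print(f"batch:     all 60 frames land at {batch_total:.0f} ms "
      f"(~{1000 * FRAMES / batch_total:.0f} fps 'average', one giant hitch)")
print(f"real-time: one frame every {per_frame:.0f} ms, "
      f"last one at {realtime_done_at[-1]:.0f} ms (~{1000 / per_frame:.0f} fps, smooth)")
```

With these made-up numbers the batch approach posts a higher average framerate, yet every single frame arrives in the same instant, which is exactly the "hung, then spat out everything at once" behaviour described above.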
Real-time graphics might be way less efficient working on the scene one frame at a time, but they aren't concerned with getting all 60 frames rendered as fast as possible. Despite our obsession with high framerates, what we actually want isn't the lowest possible time interval between each and every frame. We say we want at least 60fps, but what we really mean is that we never want a single frame to take more than 16.6ms. If even one of those frames takes an extra ~8ms to render, it's a jarring stutter that breaks the illusion of fluid motion, and just about everyone can spot the problem.
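A quick back-of-the-envelope (again in Python, with illustrative numbers) shows why the average hides the problem:

```python
# 60 frames: 59 hit the 16.6 ms budget exactly, one takes an extra ~8 ms.
# Numbers are illustrative.
frame_times_ms = [16.6] * 59 + [16.6 + 8.0]

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst_ms = max(frame_times_ms)

print(f"average:     {avg_fps:.1f} fps")   # ~59.8 fps -- looks fine on paper
print(f"worst frame: {worst_ms:.1f} ms")   # 24.6 ms -- a missed refresh you can feel
```

The average barely moves, but that one long frame blows past the 16.6ms budget and misses a whole refresh, which is precisely the stutter you notice.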