Cool it would be if Vega 10 (~400mm² chip; the 1080 is around 300mm², the 480 around 200mm²) arrived this fall.
But to me, just stepping back and looking at both pictures, the right one simply looks much better.
Anyhow, from AotS site itself, C7 in CF:
http://www.ashesofthesingularity.co...-details/ac88258f-4541-408e-8234-f9e96febe303
1080:
http://www.ashesofthesingularity.co...-details/a957db0f-59b3-4394-84cc-2ba0170ab699
The game versions differ, but both runs are 1440p on the "Crazy" preset.
Yeah, they always try to compete on FPS and power consumption, but the two just never look the same.
Dude.
I'll assume you are neither trolling nor on an NV payroll ("chizow" and company).
If you checked sites that go deeper, such as anandtech, you'd realize AMD is the best of the three major GPU manufacturers in that regard (things such as anisotropic filtering quality), with nVidia somewhat worse and Intel simply terrible. And it's been that way for quite a while.
Makes me realize that $200 isn't so great overseas, when the currency conversion isn't handled fairly.
That's not how AMD handled things, at least in the past.
Heck, even nGreedia didn't do 699$ => 789€ last gen.
But my point is: if two cards only gain 50% because of terrible scaling, surely a single card would get near 100% of its potential, since CF would be taken out of the picture. They didn't show that, which makes me assume the worst, and that this is a game engine they've put a lot of time into.
Come on.
There could be other bottlenecks.
Like CPU, for instance.
CF vs 1080 comparison is just PR for lols anyhow.
The 480 landing between the 970 and 980, or even between the 980 and Fury, at $199/$229 is the real news.
Why don't AMD just slam 2x RX 480 on a single card, give it dual 6pin and call it a day?
I think they could use a single 8pin. (225W total: 150W from the 8pin plus 75W from the slot)
And yeah, why not, provided there are enough chips to satisfy demand.
Timing is crucial: they must pump out as many as they can before the 1060 (which, given nVidia's arrogance, won't be priced too competitively) arrives, so if they simply can't produce enough, that window closes.
PS
A case for multi-GPU (for dudes who are into VR):
Moving on, we have AMD’s compelling content goal, which is backed by their Affinity Multi-GPU technology. Short and to the point, Affinity Multi-GPU allows for each eye in a VR headset to be rendered in parallel by a GPU, as opposed to taking the traditional PC route of alternate frame rendering (AFR), which has the GPUs alternate on frames and in the process can introduce quite a bit of lag. Though multi-GPU setups are not absolutely necessary for VR, the performance requirements for high quality VR combined with the simplicity of this solution make it an easy way to improve performance (reduce latency) just by adding in a second GPU.
At a lower level, Affinity Multi-GPU also implements some rendering pipeline optimizations to get rid of some of the CPU overhead that would come from dispatching two jobs to render two frames.
With each eye being nearly identical, it’s possible to cut down on some of this work by dispatching a single job and then using masking to hide from each eye what it can’t actually see.
That last part (dispatching a single job and masking per eye) is likely what nVidia's "simultaneous multi-projection" is about.
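For the dudes who want the gist in code: here's a rough toy sketch of that idea in plain Python. This is not a real GPU API (all the function names here are made up for illustration); it just models recording the command list once on the CPU, then submitting it to both "GPUs" in parallel with a per-eye mask, instead of AFR alternating whole frames between the cards.

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(scene):
    # CPU-side work done ONCE for both eyes, instead of once per eye
    return [("draw", obj) for obj in scene]

def submit(gpu_id, commands, eye_mask):
    # each "GPU" executes the same command list, masked to its own eye
    return [f"gpu{gpu_id}:{eye_mask}:{obj}" for _, obj in commands]

scene = ["skybox", "hands", "terrain"]
cmds = record_command_list(scene)  # single dispatch job

# both eyes rendered in parallel, one per GPU (the Affinity idea)
with ThreadPoolExecutor(max_workers=2) as pool:
    left = pool.submit(submit, 0, cmds, "L")
    right = pool.submit(submit, 1, cmds, "R")
    frames = {"L": left.result(), "R": right.result()}

print(frames["L"])  # left eye's frame, rendered by GPU 0
print(frames["R"])  # right eye's frame, rendered by GPU 1
```

The point of the single `record_command_list` call is the CPU-overhead saving the quote mentions: the two eyes are nearly identical, so you build one job and let the mask hide what each eye can't see.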