Wednesday, May 11th 2016

AMD Pulls Radeon "Vega" Launch to October

In the wake of NVIDIA's GeForce GTX 1080 and GTX 1070 graphics cards, which, if they live up to their launch marketing, could leave AMD's high-end lineup woefully outperformed, AMD has reportedly decided to pull the launch of its next big silicon, Vega10, from its scheduled early-2017 window to October 2016. Vega10 is a successor to "Grenada," and will be built on the 5th generation Graphics CoreNext architecture (codenamed "Vega").

Vega10 will be a multi-chip module, and will feature HBM2 memory. Built on the 14 nm process, the architecture will offer even higher performance per Watt than the upcoming "Polaris" architecture. "Vega10" isn't a successor to "Fiji," though; that honor is reserved for "Vega11." Vega10 is speculated to feature 4,096 stream processors, and to power graphics cards that compete with the GTX 1080 and GTX 1070. Vega11, on the other hand, is expected to feature 6,144 stream processors, and could take on the bigger GP100-based SKUs. Both Vega10 and Vega11 will feature 4096-bit HBM2 memory interfaces, but could differ in standard memory sizes (think 8 GB vs. 16 GB).
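For context, the arithmetic behind these memory figures is simple: peak bandwidth is interface width times per-pin data rate. A minimal sketch in Python, assuming HBM2's specified 2 Gbps-per-pin ceiling (actual Vega memory clocks are unannounced):

def peak_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    # Peak bandwidth in GB/s: bus width x per-pin data rate, over 8 bits per byte.
    return bus_width_bits * pin_rate_gbps / 8

# Speculated Vega10/Vega11: 4096-bit HBM2, specified at up to 2 Gbps per pin.
print(peak_bandwidth_gbs(4096, 2.0))  # 1024.0 GB/s, a theoretical 1 TB/s ceiling

# For comparison, the GTX 1080's 256-bit GDDR5X at 10 Gbps per pin:
print(peak_bandwidth_gbs(256, 10.0))  # 320.0 GB/s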
Source: 3DCenter.org

116 Comments on AMD Pulls Radeon "Vega" Launch to October

#101
HumanSmoke
Kanan: It's about chip size (reduced price per GPU in manufacturing) + efficiency. A 256-bit bus is more efficient than a 384-bit bus, and GDDR5X is a good marketing tool: "see, we have the newest RAM technology on this graphics card!" I think 320 GB/s is a good bandwidth for the GTX 1080, else Nvidia wouldn't have used it. And I guess it can be easily overclocked to 10.5 GHz or 11 GHz.
Also, the new Pascal compression algorithm supposedly gives a 20% increase over the one used by Maxwell.
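For reference, the arithmetic behind those figures, treating the quoted overclocks as per-pin data rates (a quick sketch, not a measurement):

def gbs(width_bits, gbps_per_pin):
    # Bandwidth in GB/s = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte.
    return width_bits * gbps_per_pin / 8

print(gbs(256, 10.0))  # 320.0 GB/s - GTX 1080 at stock
print(gbs(256, 10.5))  # 336.0 GB/s - exactly the 980 Ti's 384-bit x 7 Gbps figure
print(gbs(256, 11.0))  # 352.0 GB/s

# Per-pin rate a 256-bit bus needs to match the 980 Ti's 336 GB/s:
print(336 * 8 / 256)   # 10.5 Gbps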
#102
RejZoR
Kanan: It's about chip size (reduced price per GPU in manufacturing) + efficiency. A 256-bit bus is more efficient than a 384-bit bus, and GDDR5X is a good marketing tool: "see, we have the newest RAM technology on this graphics card!" I think 320 GB/s is a good bandwidth for the GTX 1080, else Nvidia wouldn't have used it. And I guess it can be easily overclocked to 10.5 GHz or 11 GHz.
They salvage large GPUs into lower-end ones. Not all, but still the majority.
#103
bug
RejZoR: I don't quite understand the point of GDDR5X, considering it delivers LESS bandwidth on the GTX 1080 than the GTX 980 Ti already has right now. Sure, the bus can be narrower now, but does that really affect the price all that much, considering they are offsetting the cheaper (narrower) bus with faster memory that is hard to come by and thus more expensive?
It does affect the cost a lot. A wider bus means more wires on the PCB, which complicates routing. PCBs accommodating wider buses usually use more layers to get the job done.
#104
RejZoR
But they are using a more exclusive, rarer VRAM, which most likely negates all of that in terms of cost. Or it just benefits NVIDIA and not so much consumers.
#105
Prima.Vera
RejZoR: it just benefits NVIDIA and not so much consumers.
this
#106
bug
RejZoR: But they are using a more exclusive, rarer VRAM, which most likely negates all of that in terms of cost. Or it just benefits NVIDIA and not so much consumers.
GDDR5X is very, very similar to GDDR5. See here: www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps
Basically, the only new thing is the doubled prefetch (twice the data fetched per read/write access), and even that has been done before. The rest is just refinements on top of GDDR5 (improved efficiency).

It does benefit Nvidia, because fewer traces on the PCB are not only cheaper to manufacture, but cheaper to test as well. You can expect part of those savings to be passed on to customers as well.
While I don't have numbers on how much a narrower bus saves, just look at how long mid-range cards have lived without 256-bit buses. I was expecting them to have transitioned five years ago, and it still hasn't happened: I had a 256-bit bus on my GTX 460, and the GTX 760 was also on one, but the 660 (Ti) and 960 took a more conservative approach. This feature seems to get the axe as often as possible...

And to address your original dilemma ("I don't quite understand the point of GDDR5X"): it's not about cost. If you want 10 Gbps per pin today, only GDDR5X can deliver (HBM does better on total bandwidth, but is limited to 4 GB).
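A rough illustration of that doubling (the 16n-vs-8n prefetch change the AnandTech piece covers): per-pin data rate scales with prefetch depth, so GDDR5X reaches higher pin speeds from a slower memory array. The clocks below are illustrative, not any specific card's spec:

def per_pin_rate_gbps(array_clock_mhz, prefetch_bits):
    # Each array access fetches `prefetch_bits` per pin, so the per-pin data
    # rate is the array clock times the prefetch depth.
    return array_clock_mhz * prefetch_bits / 1000

print(per_pin_rate_gbps(875, 8))   # 7.0 Gbps  (GDDR5-class speeds at 8n prefetch)
print(per_pin_rate_gbps(625, 16))  # 10.0 Gbps (GDDR5X's 16n prefetch: faster pins
                                   #            from a slower memory array)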
#107
RejZoR
But 10 Gbps would make sense if you put it to proper use, not stuff it on a narrower bus, making it even less effective than old GDDR5.
#108
bug
RejZoR: But 10 Gbps would make sense if you put it to proper use, not stuff it on a narrower bus, making it even less effective than old GDDR5.
10 Gbps is 10 Gbps whether you get it over a 1024-bit wide bus or over a 1-bit serial connection.
What you're saying is like the old joke: what's heavier, 1 kg of lead or 1 kg of feathers?
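Put as arithmetic: total throughput is just width times per-pin rate, so very different bus shapes can land on the same figure. A toy illustration:

# Throughput (GB/s) = width (bits) x per-pin rate (Gbps) / 8; only the product
# matters for peak bandwidth, whatever the bus shape.
for width_bits, gbps in [(256, 10.0), (512, 5.0), (1024, 2.5)]:
    print(f"{width_bits:>4}-bit x {gbps:>4} Gbps = {width_bits * gbps / 8:.0f} GB/s")
# All three combinations print 320 GB/s.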
#109
RejZoR
1kg of lead of course! :D
#110
Fluffmeister
Interesting to see if AMD have been caught with their pants down again.
#111
bug
Fluffmeister: Interesting to see if AMD have been caught with their pants down again.
AMD's video cards have been pretty much OK. What they have consistently lacked is a lead across the board over Nvidia. Their mid-range is mostly fine, but either they're lacking a high-end card, or the 980 Ti steals some of the Fury X's thunder, or Nvidia smacks them on performance per Watt, or they're caught up in the GCN recycling/rebranding debate. Nvidia always seems to be one step ahead, never letting AMD cash in on anything. And AMD desperately needs margins; the GPU division might be their only profitable one. What's more, Nvidia's market capitalization is almost 9x that of AMD (CPU division included), so you can imagine how the research budgets compare...
For a bit of perspective: AMD bought ATI for $5.4 bn, and now the whole company is worth $3 bn (and bleeding non-stop).
#112
Mello
buggalugs: The reference 1070 is $449 and, with limited availability and no competition for a while, expect to pay over $500, probably around $549 for a 1070, at least until October. Same with the 1080: the reference card is $699, but it will be more like $749 or more for months.

I really don't think AMD has been caught out. Everyone in the industry has expected big performance gains with 16 nm and 14 nm; die shrinks usually at least double performance over the previous gen. Both companies have been slow getting to 16/14 nm, and AMD is dealing with a new fab, plus new technology like HBM/HBM2. Actually, the 1080/1070 are slower than I expected; in reality they are only slightly faster than last gen's high end, like the 980 Ti and Titan X.

Both companies would be planning for a refresh too. You can be sure the 1080 and 1070 aren't maxed-out 16 nm GPUs; they always leave a little in the tank for Ti versions, then a refresh or two to milk the market and get their money's worth.
I agree with you, but I would like to know whether the extensive optimization of the 28 nm platform has influenced the typical performance gains expected from die shrinks. For example, if we compare the 1080 to the high-end 600 series (the first GeForce series on the 28 nm fabrication process), it looks really good. Unfortunately, we were on 28 nm for a really long time.
#113
bug
Mello: I agree with you, but I would like to know whether the extensive optimization of the 28 nm platform has influenced the typical performance gains expected from die shrinks. For example, if we compare the 1080 to the high-end 600 series (the first GeForce series on the 28 nm fabrication process), it looks really good. Unfortunately, we were on 28 nm for a really long time.
Maybe for AMD, but not likely for Nvidia. To survive at 28 nm, Nvidia just cut some of the OpenCL logic from desktop parts. I'm not aware of other optimizations on their part.
#114
VladVidya
LightningJR: 4 years ago I bought my 670 at the EXACT same price it launched for.
Not to burst your bubble, but that's probably because exactly four years ago the 670 came out.
#115
LightningJR
VladVidya: Not to burst your bubble, but that's probably because exactly four years ago the 670 came out.
That was my point... :shadedshu:

You obviously didn't understand what was said... :slap:
#116
D007
If AMD ever wants to get ahead of Nvidia, they need to think up their next version and double its potential... lol