
AMD E3 Next Horizon Event: Live Blog

Congratulations AMD, you finally caught up, with the help of a die shrink, on both fronts. Enjoy it, because half a year later, when the others get their die shrink, the gap will be even bigger than it was!
 
Stop trolling. You do know Zen 3 and better processes are also coming to AMD next year, right?
 
Obviously you have no idea what trolling is. The numbers show they are on equal terms now. The conclusion is they have just caught up to yesterday's technology using today's technology. They have played their die-shrink card; from now on they can only improve by single-digit percentages, whereas the others still have a huge improvement gap left. Sad but true.
 
Congratulations AMD, you finally caught up, with the help of a die shrink, on both fronts. Enjoy it, because half a year later, when the others get their die shrink, the gap will be even bigger than it was!
Wow, so much salt.

For starters, they "caught up" with half a chip size (5700XT is 250mm^2, 2070 is 450mm^2).
On top of it, Huang complained that 7nm "is expensive", so hardly anything coming within next 6 month.
What is likely to come within 6 month is ngreedia's "super" bump and AMD's bigger Navi that could step well into 2080 area.
 
For starters, they "caught up" with half a chip size (5700XT is 250mm^2, 2070 is 450mm^2).
On top of it, Huang complained that 7nm "is expensive", so hardly anything coming within next 6 month.
10.3 billion transistors vs 10.8 billion transistors.
What 7nm gives over 12nm is roughly 70-80% better transistor density, so a 450mm^2 chip at 12nm would become about 250mm^2 at 7nm. In December 2017 AMD stated 7nm was twice as expensive as 14/12nm, negating the effect of the smaller die size.
The other big win from a smaller manufacturing process is better power efficiency: lower voltage and with it lower power consumption.
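To spell out the arithmetic behind that density figure, here is a quick back-of-the-envelope sketch in Python; the 1.8x factor is just the midpoint of the "70-80%" quoted above and the die areas are rounded, so treat it as illustrative rather than measured data:

```python
# Rough die-area scaling, assuming ~1.8x density gain (midpoint of the
# "70-80% better transistor density" figure above). Illustrative only.
density_gain = 1.8        # assumed 7nm-vs-12nm density improvement
area_12nm = 450           # mm^2, roughly a TU106 (RTX 2070) class die
area_7nm = area_12nm / density_gain
print(f"{area_12nm} mm^2 at 12nm -> ~{area_7nm:.0f} mm^2 at 7nm")  # ~250 mm^2
```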
 
There is much more to it than just the node shrink though. Both Zen 2 and Navi are very different architecturally.

There is also a lot of room to grow performance for Zen3/Navi 2.0.

I think people will be surprised by how much more of a gain they can produce, and I think it is a bit "cheeky" (I'm purposely not using the other word to avoid "reactions") to be making absolute statements that there is nothing more here (now or in the future) other than node gains, that they will be behind in 6 months, and other salty comments.

On top of that, there is actually a good pipeline of process improvements coming too.
 
Congratulations AMD, you finally caught up with 1/10 or 1/50 of the R&D budget, only a small team of engineers, no room for mistakes, all while also trying to hold your market share against Nvidia in the graphics market. Enjoy it as long as it lasts, you and your customers, while I keep waiting and hoping for Intel to make 10nm a viable manufacturing process.
Fixed that for you.
 
In December 2017 AMD stated 7nm was twice as expensive as 14/12nm, negating the effect of smaller die size.
That was comparing a 250mm^2 14nm chip against a 250mm^2 7nm chip.
But a 450mm^2 14nm chip must cost more than twice as much as a 250mm^2 14nm chip.

AMD should have no problem beating 2070/2060 on price.
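To illustrate why a 450mm^2 die should cost well over twice as much as a 250mm^2 die on the same node, here is a rough Python sketch using a simple Poisson yield model; the wafer cost and defect density are made-up placeholders (only the scaling matters), not published foundry figures:

```python
import math

# Cost per good die under a simple Poisson yield model:
#   yield = exp(-defect_density * area)
# Wafer cost and defect density below are assumptions for illustration.
def cost_per_good_die(area_mm2, wafer_cost, defects_per_mm2, wafer_area_mm2=70000):
    dies_per_wafer = wafer_area_mm2 / area_mm2             # ignores edge losses
    yield_rate = math.exp(-defects_per_mm2 * area_mm2)
    return wafer_cost / (dies_per_wafer * yield_rate)

small = cost_per_good_die(250, wafer_cost=6000, defects_per_mm2=0.001)
large = cost_per_good_die(450, wafer_cost=6000, defects_per_mm2=0.001)
print(f"A 450 mm^2 die costs ~{large / small:.1f}x a 250 mm^2 die on the same node")
# With these placeholder numbers the ratio comes out around 2.2x.
```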
 
Congratulations AMD, you finally caught up with 1/10 or 1/50 of the R&D budget, only a small team of engineers, no room for mistakes, all while also trying to hold your market share against Nvidia in the graphics market. Enjoy it as long as it lasts, you and your customers, while I keep waiting and hoping for Intel to make 10nm a viable manufacturing process.
Fixed that for you.
What are you comparing AMD's R&D budget with?
The exact division of R&D costs for all involved companies is unknown.

Nvidia outspends AMD by about 50% overall; for the GPU division alone it could be estimated at 4-5 times, but whether that is true is debatable.

Intel outspends AMD about 10-fold as a whole company, but keep in mind that Intel does a lot more than AMD and some of those things are really expensive. Intel spends billions on foundries; TSMC is their main competitor in that field and TSMC should be a larger manufacturer than Intel is (and if I remember correctly, Intel books new fab costs under R&D while TSMC does not). They work together with Micron on NAND/Optane/SSDs, and Micron spends a couple of billion a year on R&D. There are more fields Intel competes in with further R&D costs, and while they definitely have more to spend on CPU R&D than AMD, the difference there is definitely not 10-fold but much less.

AMD spent about $1.5B in R&D in 2018.
Intel spent $13.5B, Nvidia spent $2.3B, TSMC spent $2.6B, Micron spent $2.3B.

That was comparing a 250mm^2 14nm chip against a 250mm^2 7nm chip.
But a 450mm^2 14nm chip must cost more than twice as much as a 250mm^2 14nm chip.
The problem is that we do not know how things stand right now.
I would expect 7nm cost to have gone down, but by how much exactly we do not know. Pretty sure it is not even close to 12nm yet.
12nm is an old, tried-and-true manufacturing process while 7nm is a new one. The yield dropoff with increasing die size is definitely much lower at 12nm and can be negligible at 450mm^2.

Related to this, one has to wonder why Nvidia stays at 12nm. Their GPUs are power limited, the same as AMD's. Clocks for Nvidia GPUs are stuck at 2-2.1GHz on 12nm because we know that past that point power efficiency goes straight to hell. AMD is showing that 2.0-2.1GHz is achievable on 7nm, so that cannot be the problem. Yields and cost remain the main suspects here.
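To put a number on that yield argument, here is the same simple Poisson yield model as above, comparing a mature node against an early one at two die sizes; the defect densities are assumptions for illustration, not foundry data:

```python
import math

# Assumed defect densities (defects per mm^2), illustrative only.
mature_12nm = 0.0005   # mature, well-tuned process
early_7nm = 0.002      # newer process, higher defect density

for area_mm2 in (250, 450):
    y12 = math.exp(-mature_12nm * area_mm2) * 100
    y7 = math.exp(-early_7nm * area_mm2) * 100
    print(f"{area_mm2} mm^2 die: ~{y12:.0f}% yield at mature 12nm vs ~{y7:.0f}% at early 7nm")
# With these assumptions a 450 mm^2 die yields ~80% on the mature node
# but only ~40% on the early one, which is why big dies tend to wait.
```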
 