Sunday, November 12th 2017

NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018
NVIDIA has reportedly codenamed the GPU architecture that succeeds its upcoming "Volta" architecture after André-Marie Ampère, the French physicist and pioneer of electromagnetism after whom the unit of electric current is named. The new NVIDIA "Ampere" GPU architecture, which succeeds "Volta," will make its debut at the 2018 GPU Technology Conference (GTC), hosted by NVIDIA. As with the company's recent architecture launches, one can expect an unveiling of the architecture followed by preliminary technical presentations from NVIDIA engineers, with actual products launching a little later, and consumer-grade GeForce products launching much later still.
NVIDIA has yet to launch GeForce products based on its upcoming "Volta" architecture, even as its current "Pascal" architecture turns 18 months old in the consumer graphics space. Should NVIDIA continue the four-digit model number scheme of its GeForce 10-series "Pascal" family, one can expect "Volta"-based products to form the GeForce 20-series, and "Ampere" the GeForce 30-series. NVIDIA has yet to disclose the defining features of the "Ampere" architecture. We'll probably have to wait until March 2018 to find out.
Source: Heise.de
www.anandtech.com/show/12022/nvidia-announces-earnings-of-26-billion-for-q3-2018
I'm sure it will be pretty fast...:D
If I'm not mistaken, TSMC will ramp up production of the next node shrink in H2 2018, so a small-volume Tesla product in Q3/Q4 is possible. Is it really?
Titan Xp's 384-bit 11.4 Gb/s GDDR5X crushes Vega 64's two stacks of HBM2, 547.7 GB/s vs. 483.8 GB/s. Heck, even the GTX 1080's 256-bit bus can almost keep up with Vega 56's two stacks. So it really depends on what you are measuring.
GDDR6 will scale in the range of 12-16 Gb/s per pin, so it doesn't look like HBM will be dominant in the consumer market anytime soon. HBM is really an option where a product needs more bandwidth than a 384-bit GDDR controller can supply. HBM or successors might become more advantageous eventually…
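For rough context, peak bandwidth for either memory type is just bus width times per-pin data rate divided by eight. A quick back-of-the-envelope sketch in Python; the bus widths and per-pin rates below are the commonly quoted specs, assumed here for illustration rather than taken from this thread:

# Peak bandwidth (GB/s) = bus width (bits) * per-pin data rate (Gb/s) / 8
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

# Commonly quoted configurations (assumed for illustration)
configs = [
    ("Titan Xp, 384-bit GDDR5X @ 11.4 Gb/s", 384, 11.4),    # ~547 GB/s
    ("Vega 64, 2x1024-bit HBM2 @ 1.89 Gb/s", 2048, 1.89),   # ~484 GB/s
    ("GTX 1080, 256-bit GDDR5X @ 10 Gb/s", 256, 10.0),      # ~320 GB/s
    ("384-bit GDDR6 @ 16 Gb/s (hypothetical)", 384, 16.0),  # ~768 GB/s
]
for name, width, rate in configs:
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.0f} GB/s")

The hypothetical 384-bit GDDR6 figure is the point: if GDDR6 really does reach 16 Gb/s per pin, a conventional bus already exceeds two stacks of current HBM2, which is why HBM only pays off where even a 384-bit controller can't supply enough bandwidth.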
Edit: if I was NVIDIA, I wouldn't release it to gamers until 2018 either... why would I, without real competition in many segments? Smart business to me. :)
Well, I won't go on forever about this; we'll see how things play out in the next few years.
Only time will tell... and I can tell you I'm not worried about Volta for the consumer market next year, especially with what the competition has to play against it. ;)
'Denial' is a river many people can travel on. ;)
But I'm a little worried about Nvidia's "AI" focus; it seems like they think this will grow forever. The AI market will eventually flatten out, and within a few years it will be dominated by specialized hardware. Adding general math capabilities is fine, but adding more hardware for "neural nets" etc. will just make a large, expensive processor that will be beaten by ASICs. If Nvidia wants to continue down the AI route, they should make a separate architecture for that. But Nvidia risks losing everything by relying too much on the AI market.
HBM2 does have lower latency, which makes it great for compute loads (and that's where nVidia has used it with Volta).
I must have missed why people are assuming Ampere will be a 7 nm part. First mass-volume 7 nm production at GF is H2 2018, and TSMC... has it even announced any concrete dates? Source for that?
The limiting factor will be clocks; if NVIDIA can't clock them any higher than Pascal (it's the same, just slightly tweaked, manufacturing process after all), they have to make bigger GPUs to gain performance in the different TDP slots. Or clock them near the limit, and OC potential will be very low (much lower than Pascal). We were talking about the tape-out of GV102 and GV104...
BUT STILL, Nvidia GPUs are expensive af! Lately it's very annoying that they still sell $650-800 graphics cards!
I'm very pissed off!! I can't afford a $700-800 flagship enthusiast card!! Not anymore!
The last one was an EVGA 780 Ti that I bought for $600 brand new, 4 months after launch, and it was the best on the market.
Do you remember the days of the GeForce 8800?
Remember there is something called inflation as well.
In reality, prices are quite okay at the moment.
And people buy expensive iPhones that barely last one year, yet they constantly keep expecting PCs to get cheaper and cheaper.
Again, a rumour. A desktop gfx card to facilitate adults playing games, a toy.
Jeez.