Sunday, November 12th 2017

NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018

NVIDIA has reportedly codenamed the GPU architecture that succeeds its upcoming "Volta" architecture after André-Marie Ampère, the French physicist and pioneer of electromagnetism after whom the SI unit of electric current is named. The new NVIDIA "Ampere" GPU architecture, which succeeds "Volta," will make its debut at the 2018 GPU Technology Conference (GTC), hosted by NVIDIA. As with the company's recent GPU architecture launches, one can expect an unveiling of the architecture, followed by preliminary technical presentations by NVIDIA engineers, with actual products launching a little later, and consumer-grade GeForce products launching much later still.

NVIDIA has yet to launch GeForce products based on its upcoming "Volta" architecture, as its current "Pascal" architecture turns 18 months old in the consumer graphics space. Should NVIDIA continue the four-digit model number scheme of its GeForce 10-series "Pascal" family, one can expect "Volta"-based cards to form the GeForce 20 series, and "Ampere"-based cards the GeForce 30 series. NVIDIA has yet to disclose the defining features of the "Ampere" architecture. We'll probably have to wait until March 2018 to find out.
Source: Heise.de

97 Comments on NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018

#26
Vya Domus
jabbadapWell, they just got $1.56 billion from consumer products alone last quarter.
And it took them $3 billion to develop Volta, a product pretty much exclusive to datacenters. Keep living in denial.
Posted on Reply
#28
P4-630
I wouldn't mind having a card with "just" GDDR6 though....
I'm sure it will be pretty fast... :D
Posted on Reply
#29
Vya Domus
I suggest watching their GTC conferences from the last few years and judging for yourselves what they are really focusing on.
Posted on Reply
#30
efikkan
Nvidia has usually been bragging about coming generations, sometimes even 2-3 generations down the line. I think they have been uncomfortably silent for some time, so I'm looking forward to details about the successor of Volta.

If I'm not mistaken, TSMC will ramp up production of their next node shrink in H2 2018, so a small-volume Tesla product in Q3/Q4 is possible.
ImsochoboCurrent Volta is on HBM, so who knows :)
HBM is better than GDDR, so it's the natural evolution.
Is it really?
Titan Xp's 384-bit 11.4 Gb/s GDDR5X crushes Vega 64's two stacks of HBM2, 547.7 GB/s vs. 483.8 GB/s. Heck, even GTX 1080's 256-bit bus can almost keep up with Vega 56's two stacks. So it really depends on what you are measuring.

GDDR6 will scale in the range of 12-16 Gb/s per pin, so it doesn't look like HBM will be dominant in the consumer market anytime soon. HBM is really an option where a product needs more bandwidth than a 384-bit GDDR controller can supply. HBM or successors might become more advantageous eventually…
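
For reference, peak memory bandwidth is just bus width × per-pin data rate ÷ 8. A minimal sketch of the arithmetic (the bus widths and per-pin rates are the figures discussed above, and the 16 Gb/s GDDR6 line is hypothetical):

```python
# Peak memory bandwidth in GB/s = bus width (bits) x per-pin rate (Gb/s) / 8
def peak_bw(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print(peak_bw(384, 11.4))   # Titan Xp, GDDR5X:           547.2 GB/s
print(peak_bw(2048, 1.89))  # Vega 64, two HBM2 stacks:  ~483.8 GB/s
print(peak_bw(384, 16.0))   # hypothetical 16 Gb/s GDDR6: 768.0 GB/s
```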
Posted on Reply
#31
Noyand
Vya DomusIt's second grade from a manufacturing and technical point of view. It just simply is. What is laughable is people not realizing this.

The fact of the matter is AMD is giving you their most expensive silicon for less than half of what Nvidia charges for their equivalent. Nvidia is instead giving you their second-grade silicon; that's a fact independent of performance metrics. In this industry, profit and success aren't caused exclusively by having a better product; they are dictated largely by having a product that is less expensive to manufacture than what the competition has at that price point. Recall all the major releases from the last 10 years and you'll see this is in fact the case.

This leads to one and only one outcome: the party that is winning isn't giving you their best.

I don't care if what Nvidia offers is enough; it could have been more. It boggles my mind as to why, as a consumer, you would be fine with that.
I don't think the number of gamers who are deep into technical stuff is that large. I bet most of the consumer base just looks at performance and power consumption, and are more LinusTechTips or JayzTwoCents viewers than TPU or Hardware.fr readers. GPC, TPC, and SM are not things you will understand without a proper education in computer science, or a huge, huge curiosity, so most of them must not know they are buying a crippled GPU; all they know is that it's fast enough in games compared to the competition.
Posted on Reply
#32
EarthDog
Vya DomusI suggest watching their GTC conferences from the last few years and judging for yourselves what they are focusing on.
And, like the previous generations before it, the enterprise part and technology trickle down. IIRC, didn't they also show a Final Fantasy GAME on Volta at GTC (yes, they did).

Edit: If I was Nvidia, I wouldn't release it to gamers until 2018 either.. why would I, without real competition in many segments??? Smart business to me. :)
Posted on Reply
#33
Vya Domus
EarthDogdidn't they also show a Final Fantasy GAME on Volta at GTC (yes, they did).
And that must mean that's their number one priority. Just ignore the rest.

Well, I won't go on forever about this; we'll see how things play out in the next few years.
Posted on Reply
#34
rtwjunkie
PC Gaming Enthusiast
RehmanpaI wonder if it will run 4K at 120 Hz
The problem with this goal is that the goalpost always moves as more intensive games come out. There has not been a card yet that can do it consistently and across the board.
Posted on Reply
#35
EarthDog
I didn't say, nor allude to, it being their #1 priority. I'm simply saying it isn't forgotten, as was seemingly the point you were making earlier. ;)

Only time will tell... and I can tell you I'm not worried about Volta for the consumer next year... especially with what the competition has to play against it. ;)

'Denial' is a river many people can travel on. ;)
Posted on Reply
#36
efikkan
Regarding Nvidia "prioritizing" compute over gaming: so far this effort has only benefited gaming, since most features trickle down and the overall architecture has proven to be the best for gaming as well.

But I'm a little worried about Nvidia's "AI" focus; it seems like they think this will grow forever. The AI market will eventually flatten out, and will within a few years be dominated by specialized hardware. Adding general math capabilities is fine, but adding more hardware for "neural nets" etc. will just make a large, expensive processor which will be beaten by ASICs. If Nvidia wants to continue down the AI route, they should make a separate architecture for that. Otherwise Nvidia risks losing everything by relying too much on the AI market.
Posted on Reply
#37
bug
Vya DomusI suggest watching their GTC conferences from the last few years and judging for yourselves what they are really focusing on.
Conferences are always about the new kid on the block. Nobody wants to hear about the advantages of having DX feature level 12.2 over just 12.1 in a conference.
Posted on Reply
#38
medi01
HBM2 (two stacks) does NOT beat GDDR6 on throughput, and power consumption, even if it's in favor of HBM (not sure about that), is roughly on par.
HBM2 does have lower latency, which makes it great for compute loads (and that's where nVidia has used it with Volta).

I must have missed why people were assuming Ampere to be a 7 nm part. GF's first mass-volume production of 7 nm is H2 2018, and TSMC... has it even announced any concrete dates?
Vya DomusAnd it took them $3 billion to develop Volta, a product pretty much exclusive to datacenters. Keep living in denial.
Source for that?
Posted on Reply
#39
jabbadap
Vya DomusAnd it took them $3 billion to develop Volta, a product pretty much exclusive to datacenters. Keep living in denial.
Well, I can agree that the main focus in developing GV100 has been datacenter/AI/HPC; it's too expensive and big to be used in any consumer-grade product. But I don't think that is the only chip that will come with the Volta architecture. Thus what they spent developing Volta is not only for "datacenters"; e.g., advancements in shader power efficiency will benefit upcoming Volta-based consumer-grade products too.
bugConferences are always about the new kid on the block. Nobody wants to hear about the advantages of having DX feature level 12.2 over just 12.1 in a conference.
Well, that depends on the conference. GTC is not a consumer-oriented conference, while GDC is.
Posted on Reply
#40
efikkan
Just FYI, GV102 and GV104 were taped out several months ago, so more Volta is coming…
Posted on Reply
#41
jabbadap
efikkanJust FYI, GV102 and GV104 were taped out several months ago, so more Volta is coming…
Hmm, was there any confirmed news about that?
Posted on Reply
#42
Vya Domus
jabbadapWell, I can agree that the main focus in developing GV100 has been datacenter/AI/HPC; it's too expensive and big to be used in any consumer-grade product. But I don't think that is the only chip that will come with the Volta architecture. Thus what they spent developing Volta is not only for "datacenters"; e.g., advancements in shader power efficiency will benefit upcoming Volta-based consumer-grade products too.
They will continue to sell as many consumer products as they can, for as long as they can, to pay the R&D expenses for Tesla/Jetson etc. Volta doesn't seem to have anything that would benefit a gaming card; the power efficiency gains are pretty limited. This isn't the same as the jump from 28 nm to 16 nm.
medi01Source for that?
Jensen himself said it.
Posted on Reply
#43
efikkan
jabbadaphmm were there any confirmed news about that?
No, such info is never confirmed by any official source. :)
Posted on Reply
#44
medi01
Heise.de is the original source, not WemakeupCrapCrap
Posted on Reply
#45
jabbadap
Vya DomusThey will continue to sell as many consumer products as they can, for as long as they can, to pay the R&D expenses for Tesla/Jetson etc. Volta doesn't seem to have anything that would benefit a gaming card; the power efficiency gains are pretty limited. This isn't the same as the jump from 28 nm to 16 nm.
Pretty limited? By the numbers, GV100@300 W = 15.7 TFLOPS and GP100@300 W = 10.6 TFLOPS; that makes 1.48 times the FP32 performance. And at lower TDP the gap increases: GV100@250 W = 14 TFLOPS and GP100@250 W = 9.36 TFLOPS, which makes 1.5 times the FP32 performance.

The limiting factor will be clocks; if Nvidia can't clock them any higher than Pascal (it's the same, just slightly tweaked, manufacturing process after all), they have to make bigger GPUs to gain performance at the various TDP slots. Or clock them near the limit, and OC potential will be very low (much lower than Pascal).
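
Working those ratios through (a minimal sketch; the TFLOPS and TDP figures are the ones quoted above):

```python
# FP32 throughput per watt for the quoted Tesla parts (figures from above)
parts = {
    "GP100 @ 300 W": (10.6, 300),
    "GV100 @ 300 W": (15.7, 300),
    "GP100 @ 250 W": (9.36, 250),
    "GV100 @ 250 W": (14.0, 250),
}
for name, (tflops, watts) in parts.items():
    print(f"{name}: {1000 * tflops / watts:.1f} GFLOPS/W")

print(15.7 / 10.6)  # ~1.48x at 300 W
print(14.0 / 9.36)  # ~1.50x at 250 W
```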
medi01Heise.de is the original source, not WemakeupCrapCrap
We were talking about the tape-out of GV102 and GV104...
Posted on Reply
#46
Vya Domus
jabbadapPretty limited? By the numbers, GV100@300 W = 15.7 TFLOPS and GP100@300 W = 10.6 TFLOPS; that makes 1.48 times the FP32 performance.
And then look at the clocks and die space these GPUs have. Power consumption scales faster with clocks than it does with die space; in other words, they "cheated" by making a substantially bigger chip. You can bet your ass Nvidia won't give us that ~800 mm² chip even for their highest-end products, so we'll see much smaller chips, similar in size to Pascal (Nvidia will want to maintain the same profit margins), at which point they will have to increase clocks to provide any meaningful speed increases. Hence the power consumption advantage will end up being much smaller than you think. So yes, it will be pretty limited.
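
The intuition is the usual first-order dynamic power model, P ≈ C·V²·f: capacitance grows roughly linearly with active die area, while voltage has to rise with frequency near the top of the V/f curve, so frequency ends up costing roughly cubically. A minimal illustration (the cubic exponent and the 40% target are illustrative assumptions, not measured data):

```python
# First-order model: P ~ area * f^3 near the top of the V/f curve,
# since P ~ C * V^2 * f and V roughly tracks f there. Illustrative only.
def relative_power(area_scale: float, clock_scale: float) -> float:
    return area_scale * clock_scale ** 3

# Two routes to the same +40% performance target:
print(relative_power(1.4, 1.0))  # 1.40x power: 40% wider chip, same clocks
print(relative_power(1.0, 1.4))  # 2.74x power: same chip, 40% higher clocks
```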
Posted on Reply
#47
Animalpak
All I know is that they are comfortable and they don't have any competitors on the market...

BUT STILL, Nvidia GPUs are expensive af! Lately it's very annoying that they still sell $650-700-800 graphics cards!

I'm very pissed off!! I can't afford a $700-800 flagship enthusiast card!! Not anymore!

The last one was an EVGA 780 Ti that I bought for $600 brand new, four months after launch, when it was the best on the market.
Posted on Reply
#48
efikkan
The main gain of Volta over Pascal for consumers will be small architectural improvements and larger dies, while maintaining the same level of power consumption. GP102 is a relatively "small" chip for the high-end consumer product, at only 471 mm²; its predecessor, GM200, was 601 mm².
AnimalpakI'm very pissed off!! I can't afford a $700-800 flagship enthusiast card!! Not anymore!
Really? Then I challenge you to do the math!
Do you remember the days of the GeForce 8800?
Remember there is something called inflation as well.
In reality, prices are quite okay at the moment.
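
For what it's worth, a minimal sketch of that math (the $599 launch MSRP of the 8800 GTX and the ~1.21 cumulative US CPI factor for 2006-2017 are my assumptions, not figures from the thread):

```python
# Rough inflation adjustment: price_then * cumulative CPI factor
MSRP_8800_GTX_2006 = 599    # USD at launch, Nov 2006 (assumed)
CPI_2006_TO_2017 = 1.21     # assumed cumulative US CPI factor

print(MSRP_8800_GTX_2006 * CPI_2006_TO_2017)  # ~725 USD in 2017 dollars
```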

And people buy expensive iPhones which barely last one year, yet they constantly expect PCs to get cheaper and cheaper.
Posted on Reply
#49
the54thvoid
Super Intoxicated Moderator
So much bickering over a rumour, starting a verbal conflict over what is effectively a toy. Because some people are really upset that desktop GPUs might not be the focus of Nvidia. Wow, really.

Again, a rumour. A desktop gfx card to facilitate adults playing games, a toy.

Jeez.
Posted on Reply
#50
bug
the54thvoidSo much bickering over a rumour, starting a verbal conflict over what is effectively a toy. Because some people are really upset that desktop GPUs might not be the focus of Nvidia. Wow, really.

Again, a rumour. A desktop gfx card to facilitate adults playing games, a toy.

Jeez.
On top of that, it's not even a rumour about the capabilities of the new chip, just its name.
Posted on Reply