
NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018

Why is it second grade?

Does this say more about NVIDIA or more about AMD? On one hand, company A isn't able to put out a high-end product for gaming but does better in compute.. company B, with its 'second grade' silicon, easily takes the performance crown and the performance-per-watt metric, but falls behind in compute.

I mean, use the right tool for the job, right???
I do just that. :)
Good for you.. a self-described 1%. Clearly your choice is made for you. Again, using the right tool for the job.

Each one excels in different ways. ;)
 
Well, they just got $1.56 billion from consumer products alone last quarter.

And it took them $3 billion to develop Volta, a product pretty much exclusively for datacenters. Keep living in denial.
 
I wouldn't mind having a card with "just" GDDR6 though....
I'm sure it will be pretty fast...:D
 
I suggest watching their GTC conferences from the last few years and judge for yourselves what they are really focusing on.
 
Nvidia has usually been bragging about coming generations, sometimes even 2-3 generations down the line. I think they have been uncomfortably silent for some time, so I'm looking forward to details about the successor of Volta.

If I'm not mistaken, TSMC will ramp up production of their next node shrink in H2 2018, so a small-volume Tesla product in Q3/Q4 is possible.

Current Volta is on HBM, so who knows :)
HBM is better than GDDR, so it's the natural evolution.
Is it really?
Titan Xp's 384-bit 11.4 Gb/s GDDR5X crushes Vega 64's two stacks of HBM2, 547.7 GB/s vs. 483 GB/s. Heck, even GTX 1080's 256-bit can almost keep up with Vega 56's two stacks. So it really depends on what you are measuring.

GDDR6 will scale in the range of 12-16 Gb/s per pin, so it doesn't look like HBM will be dominant in the consumer market anytime soon. HBM is really an option where a product needs more bandwidth than a 384-bit GDDR controller can supply. HBM or successors might become more advantageous eventually…
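For reference, here's how those headline bandwidth figures fall out of bus width and per-pin rate. A minimal sketch; the per-pin rates and bus widths below are the commonly quoted spec values, assumed for illustration rather than measured:

```python
# Theoretical peak bandwidth = (bus width in bits / 8) * per-pin data rate in Gb/s.
# Bus widths and per-pin rates are the commonly cited spec values for these cards
# (assumptions for illustration, not measurements).
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

cards = [
    ("Titan Xp (384-bit GDDR5X @ 11.4 Gb/s)", 384, 11.4),    # ~547 GB/s
    ("Vega 64  (2x1024-bit HBM2 @ 1.89 Gb/s)", 2048, 1.89),  # ~484 GB/s
    ("GTX 1080 (256-bit GDDR5X @ 10 Gb/s)", 256, 10.0),      # ~320 GB/s
    ("Vega 56  (2x1024-bit HBM2 @ 1.6 Gb/s)", 2048, 1.6),    # ~410 GB/s
]
for name, bits, rate in cards:
    print(f"{name}: {bandwidth_gbs(bits, rate):.1f} GB/s")
```

Which is why a 12-16 Gb/s GDDR6 controller on a wide bus can keep pace with two HBM2 stacks on paper.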
 
It's second grade from a manufacturing and technical point of view. It simply is. What is laughable is people not realizing this.

Matter of fact is, AMD is giving you their most expensive silicon for less than half of what Nvidia charges for their equivalent. Nvidia, instead, is giving you their second-grade silicon; that's a fact independent of performance metrics. In this industry, profit and success aren't caused exclusively by having a better product; they are dictated largely by having a product that is less expensive to manufacture than what the competition has at that price point. Recall all the major releases from the last 10 years and you'll see this is in fact the case.

This leads to one and only one outcome: the party that is winning isn't giving you their best.

I don't care if what Nvidia has is enough; it could have been more. It boggles my mind as to why, as a consumer, you would be fine with that.
I don't think the number of gamers that are deep into technical stuff is that huge. I bet most of the consumer base is just watching the performance and the power consumption, and are more LinusTechTips or JayzTwoCents readers than TPU or Hardware.fr readers. GPC, TPC, SM are not things you will understand without a proper education in computer science, or a huge, huge curiosity, so most of them must not know that they are buying a crippled GPU; all they know is that it's fast enough in games compared to the competition.
 
I suggest watching their GTC conferences from the last few years and judge for yourselves what they are focusing on.
and, like the previous generations before it, the enterprise part and technology trickle down. IIRC, didn't they also show a Final Fantasy GAME on Volta at GTC (yes, they did).

Edit: if I was Nvidia, I wouldn't release it to gamers until 2018 either.. why would I without real competition in many segments??? Smart business to me. :)
 
didn't they also show a Final Fantasy GAME on Volta at GTC (yes, they did).

And that must mean that's their number one priority. Just ignore the rest.

Well, I won't go on forever about this; we'll see how things play out in the next few years.
 
I wonder if it will run 120 Hz 4K
The problem with this goal is that the goalpost always moves as more intensive games come out. There has not been a card yet that can do it consistently and across the board.
 
I didn't say, nor allude to, it being their #1 priority. Simply saying it isn't forgotten, as was seemingly the point you were making earlier. ;)

Only time will tell... and I can tell you I'm not worried about Volta for the consumer next year... especially with what the competition has to play against it. ;)

'Denial' is a river many people can travel on. ;)
 
Regarding Nvidia "prioritizing" compute over gaming, so far this effort has only benefited gaming, since most features trickles down and the overall architecture has proven to be the best for gaming as well.

But I'm a little worried about Nvidia's "AI" focus; it seems like they think this will grow forever. The AI market will eventually flatten out, and will within a few years be dominated by specialized hardware. Adding general math capabilities is fine, but adding more hardware for "neural nets" etc. will just make a large, expensive processor which will be beaten by ASICs. If Nvidia wants to continue down the AI route, they should make a separate architecture for that. Nvidia risks losing everything by relying too much on the AI market.
 
I suggest watching their GTC conferences from the last few years and judge for yourselves what they are really focusing on.
Conferences are always about the new kid on the block. Nobody wants to hear about the advantages of having DX feature level 12.2 over just 12.1 in a conference.
 
HBM2 (2-stack) does NOT beat GDDR6 on throughput, and the difference in power consumption, even if it favors HBM (not sure about that), is roughly a wash.
HBM2 does have lower latency, which makes it great for compute loads (and that's where Nvidia has used it with Volta).

I must have missed why people were assuming Ampere to be a 7 nm piece. First mass-volume production of 7 nm by GF is H2 2018, and TSMC... has it even announced any concrete dates?

And it took them $3 billion to develop Volta, a product pretty much exclusively for datacenters. Keep living in denial.
Source for that?
 
And it took them $3 billion to develop Volta, a product pretty much exclusively for datacenters. Keep living in denial.

Well, I can agree that the main focus in developing GV100 has been datacenter/AI/HPC; it's too expensive and big to be used in any consumer-grade product. But I don't think that is the only chip that will come with the Volta architecture. Thus what they spent developing Volta is not only for "datacenters". E.g., advancements in shader power efficiency will benefit upcoming Volta-based consumer-grade products too.

Conferences are always about the new kid on the block. Nobody wants to hear about the advantages of having DX feature level 12.2 over just 12.1 in a conference.

Well, that depends on the conference. GTC is not a consumer-oriented conference, while GDC is.
 
Just FYI, GV102 and GV104 were taped out several months ago, so more Volta is coming…
 
Well, I can agree that the main focus in developing GV100 has been datacenter/AI/HPC; it's too expensive and big to be used in any consumer-grade product. But I don't think that is the only chip that will come with the Volta architecture. Thus what they spent developing Volta is not only for "datacenters". E.g., advancements in shader power efficiency will benefit upcoming Volta-based consumer-grade products too.

They will continue to sell as many consumer products as they can, for as long as they can, to pay the R&D expenses for Tesla/Jetson etc. Volta doesn't seem to have anything that would benefit a gaming card; the power efficiency gains are pretty limited. This isn't the same as the jump from 28 nm to 16 nm.

Source for that?

Jensen himself said it.
 
Heise.de is the original source, not WemakeupCrapCrap
 
They will continue to sell as many consumer products as they can, for as long as they can, to pay the R&D expenses for Tesla/Jetson etc. Volta doesn't seem to have anything that would benefit a gaming card; the power efficiency gains are pretty limited. This isn't the same as the jump from 28 nm to 16 nm.

Pretty limited? By the numbers, GV100 @ 300 W = 15.7 TFLOPS and GP100 @ 300 W = 10.6 TFLOPS; that makes 1.48 times more FP32 performance. And for lowered TDP it increases: GV100 @ 250 W = 14 TFLOPS and GP100 @ 250 W = 9.36 TFLOPS, which makes 1.5 times more FP32 performance.

The limiting factor will be clocks; if Nvidia can't clock them any higher than Pascal (it's the same, just slightly tweaked, manufacturing process after all), they have to make bigger GPUs to gain performance in the various TDP slots. Or clock them near the limit, and OC potential will be very low (much lower than Pascal).
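Quick sanity check of those ratios. A rough sketch using the spec-sheet TFLOPS and TDP figures quoted above (assumed, not measured):

```python
# Rough perf-per-watt comparison using the spec-sheet FP32 figures quoted above.
# TFLOPS and TDP values are the commonly cited spec numbers, not measurements.
configs = {
    "GV100 @ 300 W": (15.7, 300),
    "GP100 @ 300 W": (10.6, 300),
    "GV100 @ 250 W": (14.0, 250),
    "GP100 @ 250 W": (9.36, 250),
}

for name, (tflops, watts) in configs.items():
    print(f"{name}: {tflops:5.2f} TFLOPS, {tflops / watts * 1000:5.2f} GFLOPS/W")

print("300 W ratio:", round(15.7 / 10.6, 2))   # ~1.48x
print("250 W ratio:", round(14.0 / 9.36, 2))   # ~1.5x
```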

Heise.de is the original source, not WemakeupCrapCrap

We were talking about the tape-out of GV102 and GV104...
 
Pretty limited? By the numbers, GV100 @ 300 W = 15.7 TFLOPS and GP100 @ 300 W = 10.6 TFLOPS; that makes 1.48 times more FP32 performance.

And then look at the clocks and die space these GPUs have. Power consumption scales faster with clocks than it does with die space; in other words, they "cheated" by making a substantially bigger chip. You can bet your ass Nvidia won't give us that ~800 mm² chip even for their highest-end products, so we'll see much smaller chips, similar in size to Pascal's (Nvidia will want to maintain the same profit margins), at which point they will have to increase the clocks in order to provide any meaningful speed increases. Hence the power consumption advantage will end up being much smaller than you think. So yes, it will be pretty limited.
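To illustrate the "clocks vs. die area" point: a toy model, not Nvidia's actual curves. The voltage-vs-frequency relationship assumed below is purely illustrative:

```python
# Toy model of the clocks-vs-die-area trade-off described above.
# Dynamic power is roughly P ~ C * V^2 * f; capacitance C scales with active
# area, and reaching higher clocks usually requires raising voltage as well.
# Assuming voltage rises proportionally with frequency is an illustrative
# simplification, not a measured Pascal/Volta characteristic.

def relative_power(area_scale: float, freq_scale: float, volt_per_freq: float = 1.0) -> float:
    """Power relative to baseline when area and frequency are scaled."""
    volt_scale = freq_scale ** volt_per_freq  # assume V tracks f
    return area_scale * volt_scale ** 2 * freq_scale

# Two ways to chase ~20% more throughput:
print("20% wider chip, same clocks :", round(relative_power(1.2, 1.0), 2))  # ~1.20x power
print("Same size, 20% higher clocks:", round(relative_power(1.0, 1.2), 2))  # ~1.73x power
```

Under these assumptions, going wide is far cheaper in power than going fast, which is the "cheat" being described.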
 
All I know is that they are comfortable and they don't have any competitors on the market...

BUT STILL, Nvidia GPUs are expensive af! Lately it's very annoying that they still sell $650-700-800 graphics cards!

I'm very pissed off!! I can't afford a $700-800 flagship enthusiast card!! Not anymore!

The last one was an EVGA 780 Ti that I bought for $600 brand new, 4 months after launch, and it was the best on the market.
 
The main gain of Volta over Pascal for consumers will be small architectural improvements and larger dies, while maintaining the level of power consumption. GP102 is a relatively "small" chip to be the high-end consumer product, with only 471 mm². Its predecessor was 561 mm².
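Purely illustrative arithmetic on that die-size point, using only the two figures quoted above:

```python
# How much area headroom exists if a Volta consumer flagship simply grew back
# toward its predecessor's size (figures taken from the post above).
gp102_mm2 = 471
predecessor_mm2 = 561
print(f"Area headroom: {predecessor_mm2 / gp102_mm2 - 1:.0%}")  # ~19% larger die
```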

I'm very pissed off!! I can't afford a $700-800 flagship enthusiast card!! Not anymore!
Really? Then I challenge you to do the math!
Do you remember the days of the GeForce 8800?
Remember there is something called inflation as well.
In reality, prices are quite okay at the moment.
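Here's what that math looks like. A back-of-the-envelope sketch; the launch MSRP and the cumulative inflation factor are approximate, commonly cited figures, not official numbers:

```python
# Rough inflation check for the "do the math" challenge above.
# Launch MSRP and CPI factor are approximations used for illustration.
launch_price_2006 = 599          # GeForce 8800 GTX MSRP at launch (Nov 2006), approx.
cpi_factor_2006_to_2017 = 1.21   # cumulative US inflation 2006 -> 2017, approx.

adjusted = launch_price_2006 * cpi_factor_2006_to_2017
print(f"8800 GTX launch price in 2017 dollars: ~${adjusted:.0f}")  # ~$725
```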

And people buy expensive iPhones which barely last one year, yet they constantly keep expecting PCs to get cheaper and cheaper.
 
So much bickering over a rumour that starts a verbal conflict over what is effectively a toy. Because some people are really upset that desktop GPUs might not be the focus of Nvidia. Wow, really.

Again, a rumour. A desktop gfx card to facilitate adults playing games, a toy.

Jeez.
 