
NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018

btarunr

Editor & Senior Moderator
NVIDIA has reportedly codenamed the GPU architecture that succeeds its upcoming "Volta" architecture after André-Marie Ampère (1775-1836), the French physicist and pioneer of electromagnetism after whom the unit of electric current is named. The new NVIDIA "Ampere" GPU architecture, which succeeds "Volta," will make its debut at the 2018 GPU Technology Conference (GTC), hosted by NVIDIA. As with the company's recent architecture launches, one can expect an unveiling of the architecture, followed by preliminary technical presentations by NVIDIA engineers, with actual products launching a little later, and consumer-grade GeForce products much later still.

NVIDIA has yet to launch GeForce products based on its upcoming "Volta" architecture, even as its current "Pascal" architecture turns 18 months old in the consumer graphics space. Should NVIDIA continue the four-digit model numbering of its GeForce 10-series "Pascal" family, one can expect products based on "Volta" to form the GeForce 20-series, and "Ampere" the GeForce 30-series. NVIDIA has yet to disclose the defining features of the "Ampere" architecture; we'll probably have to wait until March 2018 to find out.



 
I wonder if the new GTX's double up as ammeters :p
 
So right around Navi's release date then? Not expecting AMD to make any sort of comeback. They are at least one generation behind in efficiency and one generation behind in performance. It's likely going to be another story of them beating the 1180 but not the Ti model, and with higher power consumption.
 
Can you guys stop with the clickbait news titles? Despite the fact that Heise is generally quite reliable, at this stage this is a completely unconfirmed rumour - FFS even Guru3D has the decency to put "could be" in the title of their article about this.
 
So right around Navi's release date then? Not expecting AMD to make any sort of comeback. They are at least one generation behind in efficiency and one generation behind in performance. It's likely going to be another story of them beating the 1180 but not the Ti model, and with higher power consumption.
Except this time around, assuming the rumors of Navi at 7nm TSMC are true, it will be apples vs. apples. For instance, we don't know how much the GF (14nm) vs. TSMC (16nm) difference mattered between Vega and Pascal.
Previously both were on TSMC 28nm, and the Fury Nano was close to Maxwell in efficiency, partly thanks to HBM of course.
 
AMD needs to pull another HD 5xxx series if they want to beat nVidia again in both price/performance...
 
Looks like NV will be playing the same game as Intel: consumer cards for "peasants" lagging behind their top of the line. By the time the "peasants" get Volta, the real top end will be Ampere.
 
I wonder if it will run 4K at 120 Hz.
 
Well, officially 10nm offers 2x the logic density of 16nm, and 7nm offers 1.6x that of 10nm.
In this case we are looking at 2 × 1.6 = 3.2x the density and 1.2 × 1.15 ≈ 1.4x the clock speed.

And I just don't think we will see 7nm just yet, as it represents too big a leap forward: it would mean the 484 mm² Titan Xp being shrunk to ~150 mm², running at 2.5 GHz at 75 W, and who the hell would release such a monstrosity? And a bigger chip of that type would run 4K at 120 Hz.
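The compounded scaling in that post can be sketched as back-of-the-envelope arithmetic. This uses the assumed foundry figures quoted above (2x and 1.6x density, 1.2x and 1.15x clocks), which are marketing-level claims, not measured results, and the Titan Xp's 484 mm² die as the example:

```python
# Rough sketch of the compounded node-scaling claims (assumed foundry
# figures from the post, not measured results).

DENSITY_16_TO_10 = 2.0   # assumed logic-density gain, 16nm -> 10nm
DENSITY_10_TO_7 = 1.6    # assumed logic-density gain, 10nm -> 7nm
CLOCK_16_TO_10 = 1.20    # assumed clock-speed gain, 16nm -> 10nm
CLOCK_10_TO_7 = 1.15     # assumed clock-speed gain, 10nm -> 7nm

# Gains multiply when compounding two node transitions
density_gain = DENSITY_16_TO_10 * DENSITY_10_TO_7   # ~3.2x
clock_gain = CLOCK_16_TO_10 * CLOCK_10_TO_7         # ~1.38x

# Hypothetical ideal shrink of a 484 mm^2 die (Titan Xp's GP102)
shrunk_area_mm2 = 484 / density_gain                # ~151 mm^2

print(f"density gain: {density_gain:.2f}x")
print(f"clock gain:   {clock_gain:.2f}x")
print(f"484 mm^2 die shrinks to ~{shrunk_area_mm2:.0f} mm^2")
```

As the reply below the quote notes, real designs never hit these theoretical numbers on a first node iteration, and the clock and power gains are an either/or trade, not both at once.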
 
Keep dreaming people , Nvidia is no longer focused on consumer products.
 
FFS even Guru3D has the decency to put "could be" in the title of their article about this.

Indeed, even WCCFTECH has "rumoured" in the title.
 
Well, officially 10nm offers 2x the logic density of 16nm, and 7nm offers 1.6x that of 10nm.
In this case we are looking at 2 × 1.6 = 3.2x the density and 1.2 × 1.15 ≈ 1.4x the clock speed.

And I just don't think we will see 7nm just yet, as it represents too big a leap forward: it would mean the 484 mm² Titan Xp being shrunk to ~150 mm², running at 2.5 GHz at 75 W, and who the hell would release such a monstrosity? And a bigger chip of that type would run 4K at 120 Hz.
Those are just theoretical numbers; in reality, a different GPU uarch on a new(er) process node makes it almost certain that such leaps are never achieved, at least on the first iteration of any node.
Also, it's higher clocks at the same power or lower power at the same clocks, not both at the same time.
 
Should be GDDR6 by then.
 
Should be GDDR6 by then.

Current Volta is on HBM, so who knows :)
HBM is better than GDDR, so it's the natural evolution.
 
Current volta is on HBM so who knows

Nvidia hasn't made a single card with HBM for us consumers yet.
I think GDDR6 will be used.

But ok we don't know for sure.
 
... and reduced compute in case you might need/want/like it.
 
Yup , second grade silicon.
If this is second grade..... (which is an absolutely laughable statement to me)

... and reduced compute in case you might need/want/like it.
yes, that is a good point for the less than 1% of home consumers using compute functionality on their cards. Regardless, buy what works best for your uses.
 
If this is second grade..... (which is an absolutely laughable statement to me)

It's second grade from a manufacturing and technical point of view. It just simply is. What is laughable is people not realizing this.

The fact of the matter is that AMD is giving you their most expensive silicon for less than half of what Nvidia charges for their equivalent. Nvidia is instead giving you their second-grade silicon; that's a fact independent of performance metrics. In this industry, profit and success aren't driven exclusively by having the better product; they are dictated largely by having a product that is cheaper to manufacture than what the competition offers at that price point. Recall all the major releases of the last 10 years and you'll see this is in fact the case.

This leads to one and only one outcome: the party that is winning isn't giving you their best.

I don't care if what Nvidia offers is enough; it could have been more. It boggles my mind why, as a consumer, you would be fine with that.
 
yes, that is a good point for the less than 1% of home consumers using compute functionality on their cards. Regardless, buy what works best for your uses.

I do just that. :)
 
Keep dreaming people , Nvidia is no longer focused on consumer products.

"Hey lets ignore what made 60% of our Q3 revenue..."
 
"Hey lets ignore what made 60% of our Q3 revenue..."

10 years ago that was pretty much 100%. Believe what you want but the industry is changing so is their business.
 
Keep dreaming people , Nvidia is no longer focused on consumer products.

Well, they just got $1.56 billion from consumer products alone last quarter. What kind of morons do you think they are, to stop focusing on consumer products, which have given them most of the company's revenue in the past? Sure, the AI/datacenter business is growing rapidly for them, but it's still just a third of what they get from consumer products.
 