Friday, March 18th 2016

NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series

With the GeForce GTX 900 series, NVIDIA has exhausted its GeForce GTX nomenclature, according to a sensational scoop from the rumor mill. Instead of going with the GTX 1000 series that has one digit too many, the company is turning the page on the GeForce GTX brand altogether. The company's next-generation high-end graphics card series will be the GeForce X80 series. Based on the performance-segment "GP104" and high-end "GP100" chips, the GeForce X80 series will consist of the performance-segment GeForce X80, the high-end GeForce X80 Ti, and the enthusiast-segment GeForce X80 TITAN.

Based on the "Pascal" architecture, the GP104 silicon is expected to feature as many as 4,096 CUDA cores. It will also feature 256 TMUs, 128 ROPs, and a GDDR5X memory interface, with 384 GB/s memory bandwidth. 6 GB could be the standard memory amount. Its texture- and pixel-fillrates are rated to be 33% higher than those of the GM200-based GeForce GTX TITAN X. The GP104 chip will be built on the 16 nm FinFET process. The TDP of this chip is rated at 175W.
Moving on, the GP100 is a whole different beast. It's built on the same 16 nm FinFET process as the GP104, and its TDP is rated at 225W. A unique feature of this silicon is its memory controllers, which are rumored to support both GDDR5X and HBM2 memory interfaces. There could be two packages for the GP100 silicon, depending on the memory type. The GDDR5X package will look simpler, with a large pin-count to wire out to the external memory chips; while the HBM2 package will be larger, to house the HBM stacks on the package, much like AMD "Fiji." The GeForce X80 Ti and the X80 TITAN will hence be two significantly different products, beyond their differing CUDA core counts and memory amounts.

The GP100 silicon physically features 6,144 CUDA cores, 384 TMUs, and 192 ROPs. On the X80 Ti, you'll get 5,120 CUDA cores, 320 TMUs, 160 ROPs, and a 512-bit wide GDDR5X memory interface holding 8 GB of memory, with a bandwidth of 512 GB/s. The X80 TITAN, on the other hand, features all the CUDA cores, TMUs, and ROPs present on the silicon, and a 4096-bit wide HBM2 memory interface holding 16 GB of memory, at a scorching 1 TB/s memory bandwidth. Both the X80 Ti and the X80 TITAN double the pixel and texture fill rates of the GTX 980 Ti and GTX TITAN X, respectively.
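For reference, the quoted bandwidth figures are self-consistent if you back out per-pin data rates. The sketch below assumes a 256-bit bus for GP104 (the rumor lists only the total), 8 Gbps GDDR5X on the X80 Ti's 512-bit bus, and 2 Gbps-per-pin HBM2 on the TITAN; these per-pin rates are assumptions chosen to match the quoted totals.

```python
# Bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8.
# The per-pin rates and GP104's 256-bit bus width are assumptions chosen to
# match the totals in the rumor, which lists only the GB/s figures.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(256, 12.0))   # GeForce X80 (GP104): 384 GB/s, assuming 256-bit GDDR5X @ 12 Gbps
print(bandwidth_gbs(512, 8.0))    # X80 Ti: 512 GB/s from a 512-bit bus @ an assumed 8 Gbps
print(bandwidth_gbs(4096, 2.0))   # X80 TITAN: 1024 GB/s from 4096-bit HBM2 @ 2 Gbps per pin
```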
Source: VideoCardz

180 Comments on NVIDIA's Next Flagship Graphics Cards will be the GeForce X80 Series

#26
okidna
Prima.VeraWhich Games, on what settings and resolution please? Otherwise I'm calling this a BS
Rise of The Tomb Raider @ 1600x900 with Very High preset.

Posted on Reply
#27
geon2k2
They forgot to mention the prices I think.

Let me correct that:

X80 - $1000
X80 Ti - $1500
X80 Titan - $3000
Posted on Reply
#28
buildzoid
okidnaRise of The Tomb Raider @ 1600x900 with Very High preset.

You do realize that many games will hog available VRAM just because they can, not because they have to?

After all, the Fury X manages to beat the TITAN X in Rise of the Tomb Raider at 4K even though it only has 4 GB of memory compared to the TITAN X's 12 GB.

So TR doesn't need 7.5 GB of VRAM; it will simply use it if it's available.
Posted on Reply
#29
Legacy-ZA
Prima.VeraWhich Games, on what settings and resolution please? Otherwise I'm calling this a BS
I fear some articles on this website go to waste on some people. Just do a search.

Rise of the Tomb Raider
Shadows of Mordor
GTA V

There might be other titles that use even more; however, these are the games I am interested in and have noticed use a lot of VRAM. :)

With DX12 games coming out, I am sure we will see even more VRAM used in the near future. Games like Deus Ex - Mankind Divided and so forth.

Then there is the modding community, which adds extra detail to textures, more objects, more landscapes. I am sure when Fallout 4's Creation Kit comes out, that will also push VRAM usage over 6 GB for some of us.

Things like anti-aliasing and resolution also come into play for people like me who don't like jaggies. Downside? More VRAM usage.

Then there are all the monitors with their refresh rates... 60 Hz, 75 Hz, 85 Hz, 100 Hz, 120 Hz, 144 Hz+. I prefer mine stable at 120 FPS; if you run out of VRAM, you can be sure to see those numbers dip more often than you'd like... loosely translated: in-game stuttering.

And last but not least, it's always good to have a little extra headroom when it comes to VRAM to make things a bit more future-proof, especially when you are someone like me who doesn't enjoy upgrading his GPU once a year and paying through the nose while doing it.

Many factors are involved, more than people would like to realize.
FrickSome actually do, but I have no idea if it impacts performance
The performance impact translates into what people refer to as "stuttering": it happens when new textures have to be loaded into VRAM and older textures have to be evicted. When there is enough VRAM headroom for all the textures, there is no problem, especially when you have to move back and forth to prior locations within the game world.

Note: other components also have to work in harmony with each other for everything to run smoothly. For example, a PC with an SSD won't struggle as much as one that still uses an HDD.

Often referred to as a bottleneck.
Posted on Reply
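As an aside, the eviction behaviour described in the post above can be modelled with a tiny LRU cache: with enough headroom, textures stay resident; with a tight budget, revisiting an area keeps forcing re-uploads, which is what shows up as stutter. This is a toy illustration only; the names, sizes, and budgets are made up and do not reflect any particular engine.

```python
# Toy model of the eviction behaviour described above: textures live in an
# LRU cache bounded by a VRAM budget. Revisiting an area whose textures were
# evicted forces a re-upload, which is where the "stutter" comes from.
from collections import OrderedDict

class TextureCache:
    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()   # texture name -> size in MB, oldest first
        self.uploads = 0                # each upload stands in for a potential hitch

    def request(self, name, size_mb):
        if name in self.resident:       # already in VRAM: cheap, just mark as recently used
            self.resident.move_to_end(name)
            return
        self.uploads += 1               # not resident: must stream it in
        while sum(self.resident.values()) + size_mb > self.budget_mb and self.resident:
            self.resident.popitem(last=False)   # evict the least recently used texture
        self.resident[name] = size_mb

# With plenty of headroom, walking back and forth never re-uploads anything;
# with a tight budget the same walk keeps evicting and re-loading textures.
for budget in (8192, 2048):
    cache = TextureCache(budget)
    for _ in range(5):                                   # player moves between two areas
        for tex in ("area_a_%d" % i for i in range(10)):
            cache.request(tex, 128)
        for tex in ("area_b_%d" % i for i in range(10)):
            cache.request(tex, 128)
    print(budget, "MB budget ->", cache.uploads, "uploads")
```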
#30
RejZoR
Games use such huge amounts of memory just because they can, not because it's necessary. Unreal Engine already has Texture Streaming: a system that dynamically loads textures as they are needed, based on field of vision, object range, scene radius around the player, and other heuristic parameters.

I've tested this quite a bit in Killing Floor 2 while making a tweaker for it. With Texture Streaming enabled on a 4 GB graphics card, memory usage goes up to about 2 GB with everything set to Ultra at 1080p. But if I disable Texture Streaming, memory usage goes to 3.9 GB. The game doesn't look any better. So, there's that. If I had 12 GB, the game could fit all of it in memory. Would there be any actual benefit? Not really. Maybe less stuttering on systems with a crappy storage subsystem (a slow HDD). With an SSD or SSHD like mine, I frankly don't notice any difference.
I don't know how clever other engines are, but Texture Streaming does wonders when you have tons of textures but are limited by VRAM capacity.
If you want to target as many users as possible while still delivering high image quality, this is an excellent way to do it.
Posted on Reply
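For readers curious what a streaming heuristic like the one RejZoR describes can look like, here is a rough sketch: pick a mip level per object from its distance and visibility, then fill a VRAM budget in priority order, degrading quality rather than blowing past the budget. This is illustrative only, not Unreal's actual streaming code; every name and threshold here is made up.

```python
# Rough sketch of a distance/visibility-based texture streaming heuristic.
import math

def desired_mip(distance, in_view):
    """0 = full resolution; higher numbers are smaller mips."""
    if not in_view:
        return 4                        # unseen objects keep only a tiny mip
    if distance <= 10:
        return 0                        # close-up objects get full resolution
    return min(4, int(math.log2(distance / 10.0)) + 1)   # drop a mip per distance doubling

def mip_size_mb(base_size_mb, mip):
    return base_size_mb / (4 ** mip)    # each mip level is a quarter of the previous one

def plan_streaming(objects, budget_mb):
    """objects: list of (name, base_size_mb, distance, in_view). Returns what to keep resident."""
    wanted = sorted(objects, key=lambda o: o[2])          # nearest objects get priority
    resident, used = {}, 0.0
    for name, base, dist, in_view in wanted:
        mip = desired_mip(dist, in_view)
        size = mip_size_mb(base, mip)
        while used + size > budget_mb and mip < 4:        # degrade quality rather than skip
            mip += 1
            size = mip_size_mb(base, mip)
        if used + size <= budget_mb:
            resident[name] = mip
            used += size
    return resident, used

scene = [("wall", 64, 5, True), ("tree", 64, 40, True), ("mountain", 64, 500, True), ("behind_you", 64, 3, False)]
print(plan_streaming(scene, budget_mb=80))
```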
#31
Vayra86
Legacy-ZAI fear some articles on this website go to waste on some people. Just do a search. ... Many factors are involved, more than people would like to realize.
Great story, but largely untrue in practice.

VRAM is filled because it is possible, not because it is required. When you increase IQ, you decrease the GPU's ability to push out 120/144 fps, which means the 'stutter' is masked by the lower framerate. Frametimes are never stable and neither is FPS. The real limitation is the GPU core, not the VRAM. Memory overclocking gains are always lower than core overclocking gains.

There are almost zero situations in which the VRAM starts to complain before the GPU is out of horsepower. High-end GPUs are balanced between core power and VRAM.

Now, the real issues with having more GPU core horsepower than the VRAM can keep up with can occur in SLI/CrossFire. But when you go into that realm, you are already saying goodbye to the lowest input lag numbers and to a fully stutter-free gaming experience. GPUs in SLI are never in perfect sync 100% of the time, and any deviation from that sync results in either a higher frametime or higher input lag.

You can't just say 'a game can use 8 GB, so I need to have 8 GB GPUs'.
Posted on Reply
#32
64K
VRAM used doesn't always mean VRAM needed for smooth play.

From the review here on the Titan X, W1zzard's conclusion after testing:

"Our results clearly show that 12 GB is overkill for the Titan X. Eight GB of memory would have been more than enough, and, in my opinion, future-proof. Even today, the 6 GB of the two-year-old Titan are too much as 4 GB is enough for all titles to date. Both the Xbox One and PS4 have 8 GB of VRAM of which 5 GB and 4.5 GB are, respectively, available to games, so I seriously doubt developers will exceed that amount any time soon.

Modern games use various memory allocation strategies, and these usually involve loading as much into VRAM as fits even if the texture might never or only rarely be used. Call of Duty: AW seems even worse as it keeps stuffing textures into the memory while you play and not as a level loads, without ever removing anything in the hope that it might need whatever it puts there at some point in the future, which it does not as the FPS on 3 GB cards would otherwise be seriously compromised."

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/33.html

Does COD AW use 7.3 GB of VRAM if available? Yes.
Does COD AW need 7.3 GB of VRAM? No.
Posted on Reply
#33
rtwjunkie
PC Gaming Enthusiast
medi01I missed what was the source of this information.

Is it "some TPU guy just made this up"? =/
Apparently you missed the source link in btarunr's OP, and you missed @the54thvoid's post about Guru3D being hesitant to post anything on it since it is not substantiated yet. FTR, bta's link is always at the bottom left of his OP.
Posted on Reply
#34
bogami
Priced high enough to give you a hernia!
Once again, the best gaming product will be a cut-down chip (the X80 Ti is a salvaged chip with 5,120 cores, and it gets the lesser GDDR5X memory too) :mad:! The TITAN is another class entirely (for DP and CUDA use). I hope Polaris will be better. 12.6 TFLOPS on the TITAN sounds good, but it's a little low relative to the predicted 300%. 10.5 TFLOPS on the X80 Ti doesn't promise a 100% jump either. We will see; keep waiting and saving money! I also hope they will be wise enough to quickly release a dual-GPU product.
Posted on Reply
#35
L.ccd
rtwjunkieApparently you missed the source link in btarunr's OP
Regarding this, the source article itself says:
the so called leakers just can’t live without speculating and creating their own charts, because I suppose that’s exactly what we are seeing on this graph.
And yet the same unsubstantiated crap is linked and talked about on a number of hardware sites. Why does TPU relay that made-up crap? I'd rather they not post anything on NVIDIA's next gen before they have something solid to talk about. I come here to read solid info, not the same bullshit so many sites publish.

Sorry for the rant.
Posted on Reply
#36
rtwjunkie
PC Gaming Enthusiast
L.ccdRegarding this, the source article itself says:

And yet the same unsubstantiated crap is linked and talked about on a number of hardware sites. Why does TPU relay that made-up crap? I'd rather they not post anything on NVIDIA's next gen before they have something solid to talk about. I come here to read solid info, not the same bullshit so many sites publish.

Sorry for the rant.
True enough! I agree completely.

Your position, however, is a lot different from saying bta or someone else on TPU made it up.
Posted on Reply
#37
MxPhenom 216
ASIC Engineer
FrickSome actually do, but I have no idea if it impacts performance
The only one I can think of off the top of my head is Shadow of Mordor.
Posted on Reply
#38
GhostRyder
Well, this is just a rumor, so I'll just treat it that way until we have substantial proof. However, if the series name is GeForce X80 (or similar), that's going to take some getting used to lol.
Posted on Reply
#39
EarthDog
I love how we have seen "GTX 1080" spoken of as if it were gospel when people have nooooooooooooooooooooooooooooooooo idea. Thanks!



LOL @ the VRAM discussion... SOME games will use more VRAM just for giggles and not show stuttering (until something else needs the space). BF4 did this... but make no mistake about it, there was stuttering with 2 GB cards at 1080p Ultra settings, regardless of whether it used 3.5 GB on a 4 GB card or 2.5 GB on a 2 GB card.
Posted on Reply
#40
nickbaldwin86
I see what they did there.

X80: X = 10, so X80 = 1080 ;) They just thought X sounded "cooler" because marketing!

Time for a set of X80 Tis... :)
Posted on Reply
#41
PP Mguire
Lol at the convo. This would actually make a lot of sense.

Leaks, PR, and Nvidia have all said Pascal is bringing back compute performance for deep learning, which could limit all but the Titan GPU to GDDR5X. The reasoning is that the compute cards have pretty much already been confirmed to have HBM2, and a minimum of 16 GB at that; if that's the case, it could also mean the Titan gets its double-duty classification back with HBM2, leaving the rest with GDDR5X. This really isn't an issue, but I suppose people will argue otherwise.
Posted on Reply
#42
TheDeeGee
geon2k2They forgot to mention the prices I think.

Let me correct that:

X80 - $600
X80 Ti - $800
X80 Titan - $1200
Fixed it for ya.
Posted on Reply
#43
trog100
can we assume there will be an X60 and an X70 series.. :)

the naming makes sense.. the rest of it sounds a tad on the optimistic side.. but who knows.. he he..

6 gigs of memory is a realistic "enough".. the fact that some amd cards have 8 is pure marketing hype.. no current game needs more than 4 gigs.. people who think otherwise are wrong..

i will be a bit pissed off if my current top end cards become mere mid range all in one go but such is life.. :)

trog
Posted on Reply
#44
nickbaldwin86
trog100can we assume there will be an X60 and an X70 series.. :)
Of course they will make an x6x and an x7x, but no one cares about those as far as rumors go. This is the stuff of dreams and hopes for the big cards. But again, rumors.
Posted on Reply
#45
AuDioFreaK39
I do not like the name "X80." It seems like a backwards step for such a powerful architecture, especially from "GTX 980," "GTX 780," "GTX 680," etc. The other issue is that many people may be inclined to believe that "GTX 980" is more powerful simply because it sounds as such.

It honestly looks like anyone could have typed this up in Excel and posted to Imgur, as the table is very generic and does not have indicators of Nvidia's more presentation-based "confidential" slides.

History of notable GPU naming schemes:
GeForce 3 Ti500 (October 2001)
GeForce FX 5800 (March 2003)
GeForce 6800 GT (June 2004)
GeForce GTX 680 (March 2012)

GeForce PX sounds more appropriate for Pascal, considering they already branded the automotive solution Drive PX 2. I would have no problem purchasing a GPU called "GeForce PX 4096," "GeForce PX 5120 Ti" or "GeForce PX Titan," where 4096 and 5120 indicate the number of shaders.

They could even do the same with Volta in 2018: GeForce VX, GeForce VX Ti, and GeForce VX Titan, etc.
Posted on Reply
#46
64K
trog100can we assume there will be an X60 and an X70 series.. :)

the naming makes sense.. the rest of it sounds a tad on the optimistic side.. but who knows.. he he..

i will be a bit pissed off if my current top end cards become mere mid range all in one go but such is life.. :)

trog
Yes, the 980 Ti will be relegated to mid-range in performance compared to the Pascal and Polaris lineups once the flagships are released. It's expensive to stay on the cutting edge if that's what you want.
Posted on Reply
#47
nickbaldwin86
AuDioFreaK39I do not like the name "X80." It seems like a backwards step for such a powerful architecture, especially from "GTX 980," "GTX 780," "GTX 680," etc. ...
You should apply for their "Lead of Naming Things in Marketing, Director of Name Stuff" position. I hear they have a new opening at the launch of every new card series. Bit of a long title, but with a small font you can fit it on a business card ;)

Sorry, I just love how people get hung up on the name!

You must remember that marketing names these, and it's based on feel-good rather than technical reasons. Whatever helps them sell more! An X in front of anything sells!!!!!
Posted on Reply
#48
rtwjunkie
PC Gaming Enthusiast
trog100i will be a bit pissed off if my current top end cards become mere mid range all in one go but such is life.. :)
This is always how it ends up.
Posted on Reply
#49
EarthDog
rtwjunkieThis is always how it ends up.
+1... That is just the way it works.

But look at the bright side... you neuter yourself (limit FPS) so when you need the horsepower, you have some headroom built in from your curious methods. :p
Posted on Reply
#50
nickbaldwin86
trog100i will be a bit pissed off if my current top end cards become mere mid range all in one go but such is life.. :)

trog
Yes we should just go back to hitting things with rocks!
rtwjunkieThis is always how it ends up.
course
Posted on Reply