Monday, July 25th 2016

NVIDIA Launches Maxed-out GP102 Based Quadro P6000

Late last week, NVIDIA announced the TITAN X Pascal, its fastest consumer graphics offering, targeted at gamers and PC enthusiasts. The TITAN X Pascal's reign as the fastest single-GPU graphics card could be short-lived, however, as NVIDIA has announced a Quadro product based on the same "GP102" silicon that maxes out its on-die resources. The new Quadro P6000, announced at SIGGRAPH alongside the GP104-based Quadro P5000, features all 3,840 CUDA cores physically present on the chip.

Besides its 3,840 CUDA cores, the P6000 offers peak FP32 (single-precision floating point) performance of up to 12 TFLOP/s. The card also features 24 GB of GDDR5X memory across the chip's 384-bit wide memory interface. The Quadro P5000, on the other hand, features 2,560 CUDA cores, up to 8.9 TFLOP/s FP32 performance, and 16 GB of GDDR5X memory across a 256-bit wide memory interface. It's interesting to note that neither card features full FP64 (double-precision) machinery; that is cleverly relegated to NVIDIA's HPC product line, the Tesla P-series.
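As a rough sanity check of the quoted FP32 figures, peak single-precision throughput is conventionally estimated as CUDA cores × 2 ops/clock (one fused multiply-add) × clock speed. The boost clocks below are assumptions chosen to reproduce the quoted numbers, not official specifications:

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak single-precision throughput in TFLOP/s.

    Assumes 2 FP32 operations per core per clock (one FMA).
    """
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# Assumed boost clocks (~1.56 GHz and ~1.73 GHz) back-solved from the
# quoted 12 and 8.9 TFLOP/s figures; not from an NVIDIA spec sheet.
p6000 = peak_fp32_tflops(3840, 1.56)  # ~12.0 TFLOP/s
p5000 = peak_fp32_tflops(2560, 1.73)  # ~8.9 TFLOP/s
print(f"P6000: {p6000:.1f} TFLOP/s, P5000: {p5000:.1f} TFLOP/s")
```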

22 Comments on NVIDIA Launches Maxed-out GP102 Based Quadro P6000

#1
Chaitanya
So no HBM unless one has super deep pockets to spend on Tesla.
Posted on Reply
#2
btarunr
Editor & Senior Moderator
ChaitanyaSo no HBM unless one has super deep pockets to spend on Tesla.
Deep pockets and Jen-Hsun Huang in your phonebook. You can't buy those cards from retailers at the moment.
Posted on Reply
#3
TheLostSwede
News Editor
btarunrDeep pockets and Jen-Hsun Huang in your phonebook. You can't buy those cards from retailers at the moment.
You mean to say that you don't have his personal number? :p
Posted on Reply
#4
btarunr
Editor & Senior Moderator
TheLostSwedeYou mean to say that you don't have his personal number? :p
All I know is that his number ends in "480".
Posted on Reply
#5
ZoneDymo
btarunrAll I know is that his number ends in "480".
....HE WORKS FOR AMD, ILLUMINATI, it all makes sense now!!
Posted on Reply
#6
bogami
This sounds like the predecessor for the 1080 Ti is already done. Waiting to see whether AMD responds, or we won't have competition any time soon. Pro cards are otherwise announced, but I do not see a successor in gaming.
Posted on Reply
#7
Breit
So that's basically the fully enabled Titan X (GP102)... Titan X Ti? :D

Does anyone here have any experience with the recent Quadro cards from Nvidia? Do they run with (modded) Geforce drivers or is this not possible anymore? If so, then is overclocking without hardmods an option?
Posted on Reply
#8
fynxer
The TITAN X is NOT marketed as a gaming card; this is the first time this has happened since the TITAN series was introduced.

Sure, you can play games on the TITAN X, but it is not marketed as a GTX GeForce gaming card anymore, and there has to be a reason that nVidia chose to go this way. Not very many gamers have a TITAN series card, and a lot of gamers have been pissed that nVidia set the TITAN series at such an inaccessible price that 99.9% of all gamers can't afford it or just won't buy it.

This could mean that the 1080Ti could have all 3840 cuda cores enabled for pure gaming and INT8 nerfed so it will not compete with TITAN X as a Deep Learning calculation card.

The sweet spot for an absolute top-performing gaming card to sell in volume seems to be a maximum of around $800-$900.

If nVidia were to give the 1080 Ti the full 3840 CUDA cores in the $800-$900 price range, it would make gamers delirious with excitement and make it the best-selling Ti gaming card of all time, making them a lot more money in volume sales than the TITAN X ever would or could as a gaming card; gaming is all about volume these days to make the big bucks.

This way gamers would feel that they got the absolute top gaming card, and it would make it a lot easier for more people to fork out around $800-$900 knowing this. It is much more fun to buy the best gaming card than the second best; it's all psychology to get us to buy more.

Sweclockers seems to sense that this could possibly happen:
www.sweclockers.com/nyhet/22431
Posted on Reply
#9
PP Mguire
fynxerThe TITAN X is NOT marketed as a gaming card, this is the first time this has happened since the TITAN series was presented. ...
Titan X isn't marketed as a GeForce card because A) they needed to diversify it from the previous Maxwell-based Titan X and B) they are the only ones selling it, hence "Nvidia" Titan X. I'd be willing to bet the Titan X doesn't have the full FP32 core count like the P6000, which essentially makes it a gamer card. The only other advantage the Titan X would have over the 1080 is Iray farming with the added VRAM.
Posted on Reply
#10
Breit
They could've named the TITAN X differently to diversify it from its predecessor, for starters. And by "differently" I don't mean dropping the GeForce moniker.
Posted on Reply
#11
PP Mguire
BreitThey could've named the TITAN X differently to diversify it from it's predecessor for starters. And with "differently" I don't mean dropping the GeForce moniker.
Sure, but they didn't.
Posted on Reply
#12
iO
fynxerThe TITAN X is NOT marketed as a gaming card, this is the first time this has happened since the TITAN series was presented. ...
Right, and that's why they gave it the exact same name as the card that they marketed as the ultimate gaming card.
But this time they justify the price tag with deep learning instead of DP performance, as with the first Titan...

They might go a similar route as with Kepler: Titan X > 1080 Ti with just higher clocks > Titan X Black fully enabled > 1080 Ti-something fully enabled with higher clocks.
Posted on Reply
#13
qubit
Overclocked quantum bit
Everyone wants the expensive HBM, but I wonder just how much extra performance it would bring to games. Will it be something like 30-50% extra framerate at the same clock speed, or more like the 5-10% we see on AMD cards? If it's the latter, then it would hardly be worth bothering with.
Posted on Reply
#15
jabbadap
iORight and thats why they give it the exact same name as the card that they marketed as the ultimate gaming card.
But this time they justify the price tag with deep learning instead of DP performance as with the first Titan..

They might go a similiar route as with Kepler: Titan X > 1080Ti with just higher clocks > Titan X Black fully enabled > 1080Ti something fully enabled with higher clocks
Well, yeah, the GeForce GTX Titan X was marketed as the ultimate gaming card; the Nvidia Titan X is marketed as the ultimate graphics card. Not that either is somehow ultimate, well, maybe the price is ultimate... But anyhow, you can play games even on a Quadro P6000 if you like, and have a problem of excessive cash piles.

Hmm, I don't remember the route being that with Kepler. It was the Titan as a castrated GK110, the GTX 780 as an even more castrated GK110/GK110B, the GTX 780 Ti as a full GK110B, the Titan Black as a full GK110B, to the Titan Z as 2x GK110B. If I had to guess, Nvidia will release a GTX 780-like castrated GP102 card only if Vega 10 is not powerful enough.
qubitEveryone wants the expensive HBM, but I wonder just how much extra performance it would bring to games. Will it be something like 30-50% extra framerate at the same clock speed, or more like 5-10% we see on AMD cards? If the latter, then it would hardly be worth bothering with.
Next to nothing. From a performance standpoint, HBM just takes one bottleneck out of the equation, but GDDR5X does much the same. Of course, if you use some high-end CF/SLI rig and multi-monitor Surround/Eyefinity with 4K displays, then you might need that bandwidth. But other than that, single-GPU configs are still more shader-power limited than memory-bandwidth limited.
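As a rough illustration of why GDDR5X closes much of the gap: peak memory bandwidth is bus width (in bytes) times the effective data rate. The data rates below are illustrative assumptions for GDDR5X and first-generation HBM, not measurements of any specific card:

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    bus_width_bits: interface width in bits
    data_rate_gtps: effective transfer rate in GT/s per pin
    """
    return bus_width_bits / 8 * data_rate_gtps

# Assumed rates: 10 GT/s GDDR5X on a 384-bit bus (P6000-style interface)
# vs. 1 GT/s HBM1 on a 4096-bit bus (Fiji-style interface).
gddr5x = bandwidth_gb_s(384, 10.0)   # 480 GB/s
hbm1   = bandwidth_gb_s(4096, 1.0)   # 512 GB/s
print(f"GDDR5X 384-bit: {gddr5x:.0f} GB/s, HBM1 4096-bit: {hbm1:.0f} GB/s")
```

The two figures land within a few percent of each other, which is the point being made above: the wide-but-slow and narrow-but-fast approaches arrive at similar peak bandwidth.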
Posted on Reply
#16
efikkan
ChaitanyaSo no HBM unless one has super deep pockets to spend on Tesla.
Back at GTC, all the HBM2/GP100 production capacity was already reserved throughout Q1 2017, so don't hold your breath.
fynxerThis could mean that the 1080Ti could have all 3840 cuda cores enabled for pure gaming and INT8 nerfed so it will not compete with TITAN X as a Deep Learning calculation card.
The reason why the Titan X (Pascal) only has 3584 cores while the Quadro P6000 has 3840 is yields and market volume. If Nvidia doesn't have enough fully enabled GP102s for the Titan X, any GeForce product on the horizon will certainly not have more cores enabled.
Posted on Reply
#17
BiggieShady
SteevoWith recent advancements in quantum computing I wonder how long of a life "compute" will have for super computing, when Google just showed that 10 hours of its quantum chip is equal to almost a weeks worth of work in a supercomputer.

www.rsc.org/chemistryworld/2016/07/quantum-computer-simulates-hydrogen-molecule-complex-calculations
Interesting article. Too bad a quantum computer is not programmable in the traditional computer programming sense. We should be fair and call this Google digital quantum computer what it really is: a controlled experiment with a number of parameters that drive a quantum system which exhibits algorithmic behavior (for a few types of algorithms), where collecting the result data saves them time on simulation. They are using an existing phenomenon of a quantum system that has algorithmic properties and tuning it for a closely correlated application. Their quantum computer (qomputer?) doesn't have a quantum processor or quantum memory or a quantum instruction set or quantum machine code that could be used for general-purpose applications ... the code is "built in" to the phenomenon, and the memory is the conduit itself :eek:
... and everything is a tad warmer than absolute zero
I'd like to see them implement a quicksort algorithm on their qomputer rather than calculate the waveform of a molecule :laugh:
Posted on Reply
#18
Steevo
BiggieShadyInteresting article. Too bad quantum computer is not programmable in a traditional computer programming sense, we should be fair and call this google digital quantum computer what it really is ...
"Classical" computing is something of a misnomer when compared to quantum. Quantum seeks the lowest state, so a change in thought process is required to "code" for it. en.wikipedia.org/wiki/Grover%27s_algorithm

Quantum has already proved to be faster at cracking huge datasets/tables/sorting than classical, but it's a waste of machine time when there is more important work that requires computational power beyond the limits of classical computing.

www.gizmag.com/d-wave-quantum-computer-supercomputer-ranking/27476/


A simple sort-and-test shows: "How did the D-Wave computer do on the tests? On the largest problem sizes tested, the V5 chip found optimal solutions in less than half a second, while the best classical software solver required 30 minutes to find those same solutions. This makes the D-Wave computer over 3,600 times faster than the classical computer in these tests."
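To put a number on the Grover speedup mentioned above: unstructured search classically needs on the order of N lookups (N/2 on average), while Grover's algorithm needs roughly (π/4)·√N oracle queries. The sketch below is the textbook asymptotic idealization, not a benchmark of D-Wave or any real machine:

```python
import math

def classical_queries(n: int) -> int:
    """Expected lookups to find one marked item among n by linear scan."""
    return n // 2

def grover_queries(n: int) -> int:
    """Approximate optimal number of Grover iterations for n items."""
    return round(math.pi / 4 * math.sqrt(n))

n = 1_000_000
print(classical_queries(n), "classical vs", grover_queries(n), "Grover queries")
# For a million items: ~500,000 classical lookups vs ~785 Grover iterations.
```

The quadratic (not exponential) nature of this speedup is worth keeping in mind when reading headline multipliers like "3,600 times faster".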
Posted on Reply
#19
the54thvoid
Intoxicated Moderator
SteevoClassical computing is a misnomer of comparison to quantum. Quantum seeks the lowest state, so a change in thought process is required to "code" for it. ...
That link is from 2013; this one is from 2014:

I'm sure I read somewhere that D-Wave is much more a proof of concept, and that's probably why Google bought into it. By all means though, it's a start on a very weird school of 'computing'.

www.cnet.com/uk/news/d-wave-quantum-computer-sluggishness-finally-confirmed/
Posted on Reply
#20
BiggieShady
SteevoClassical computing is a misnomer of comparison to quantum.
Sure you can compare them, especially in how different it is to "code" for them, as you say:
SteevoQuantum seeks the lowest state, so a change in thought process is required to "code" for it.
and that's part of my point: are all computable problems solvable the quantum way? I don't know enough, but I'd say we are limited to creative applications of Grover's algorithm and similar ... but this is all very off topic.
Posted on Reply
#21
Steevo
BiggieShadySure you can compare them, especially how different is to "code" for them, as you say :

and that's part of my point, are all computable problems solvable the quantum way? I don't know enough but I'd say we are are limited to creative applications of Grover's algorithm and similar ... but this is all so very off topic.
We just have to think about it the way we do when dealing with binary code, the absolutely simplest logic: yes or no, or in the quantum case, what the probability range for the answer is.
Posted on Reply