Wednesday, April 13th 2016

NVIDIA Readies Three GP104 "Pascal" Based SKUs for June 2016

NVIDIA is reportedly putting the final touches on no fewer than three SKUs based on the 16 nm GP104 silicon, to launch sometime this June. The ASIC markings for the chips that drive these SKUs are "GP104-400-A1," "GP104-200-A1" and "GP104-150-A1." If you recall, NVIDIA last reserved the "-400-A1" marking for the GeForce GTX 980 (GM204-400-A1), and "-200-A1" for the GTX 970 (GM204-200-A1).

The GP104-150-A1 is a mystery ASIC: either it will drive a more affordable third desktop SKU based on the GP104, or it could signify a mobile SKU. The company plans to launch the products based on the GP104-400-A1 and GP104-200-A1, logical successors to the GeForce GTX 980 and GTX 970, in early June. The GP104-150-A1, on the other hand, could see the light of day in mid-June.
Source: HardwareBattle

47 Comments on NVIDIA Readies Three GP104 "Pascal" Based SKUs for June 2016

#1
RCoon
I guess GP100 becomes the 980ti and Titan replacements then?
Posted on Reply
#2
the54thvoid
Intoxicated Moderator
RCoonI guess GP100 becomes the 980ti and Titan replacements then?
That's the consensus view I think. Like last time, the 980 matched the 780ti and we had to wait for 980ti for the performance part.
Tell you what though, if I get a whiff of Maxwell driver support slipping, I'm going red team.
Posted on Reply
#3
RCoon
the54thvoidThat's the consensus view I think. Like last time, the 980 matched the 780ti and we had to wait for 980ti for the performance part.
Tell you what though, if I get a whiff of Maxwell driver support slipping, I'm going red team.
I'll be waiting for GP100 so I can switch over from dual to single GPU. I've had my fun :P
Posted on Reply
#5
Legacy-ZA
Time to milk the populace, nVidia? Don't worry... nVidia will be sure to drag out the P100/104 chips' lifespan for as long as possible to maximize cash flow...

There is another technology launching this year that I have my eye on, and that is Intel's XPoint. I really hope that when it launches, we will see another major boost in performance at an affordable price, like their Sandy Bridge 2600K CPUs; apparently it will be cheaper to manufacture, so let's see... Even though SSDs have reached their bandwidth limit with SATA, we are still constrained by their ludicrously expensive prices, even though there is no shortage and the technology most certainly isn't new.

This technology will put any SSD to shame; gone are the days when a storage device is the cause of a bottleneck. Sure, SSDs help, but they don't eliminate it altogether... well, it depends what one does. ;-)

www.intelsalestraining.com/infographics/memory/3DXPointc.pdf
Posted on Reply
#6
vega22
980 and 970 replacements and a new mobile chip make the most sense imo too.

i thought these were expected to be gddr5x, the gp100 gets the hbm2 treatment and the low end chips get the fast gddr5?
Posted on Reply
#7
Legacy-ZA
vega22980 and 970 replacements and a new mobile chip make the most sense imo too.

i thought these were expected to be gddr5x, the gp100 gets the hbm2 treatment and the low end chips get the fast gddr5?
This is a question I too would like to see answered. From what I have read everywhere, no one knows for sure, just speculation. I think it will most likely be GDDR5X, which was slated for mass production by the middle of this year; if we are lucky, HBM2.
Posted on Reply
#8
the54thvoid
Intoxicated Moderator
Legacy-ZATime to milk the populace, nVidia? Don't worry... nVidia will be sure to drag out the P100/104 chips' lifespan for as long as possible to maximize cash flow...
Hmm, let's see - company with shareholders wishes to maximise profits..... Your ability to point out the obvious transcends belief.

Move along, nothing to see here.
Posted on Reply
#9
HumanSmoke
vega22i thought these were expected to be gddr5x, the gp100 gets the hbm2 treatment and the low end chips get the fast gddr5?
That is my understanding also. By the time the cards are ready to go retail, Micron's pilot GDDR5X line should have production ready for the 980 replacement.
Legacy-ZATime to milk the populace, nVidia? Don't worry... nVidia will be sure to drag out the P100/104 chips' lifespan for as long as possible to maximize cash flow...
:SMFH: I hope you get paid to make an idiot out of yourself - I'd hate to think you do it for free.

Current Nvidia GPUs........................

GM204 lifespan to date: 1 year, 7 months
GM200 lifespan to date: 1 year, 1 month
GM206 lifespan to date: 1 year, 4 months
GM107 lifespan to date: 2 years, 2 months

...and here are some other current GPUs that comfortably eclipse those timelines:

Pitcairn/Curacao/Trinidad lifespan to date: 4 years, 2 months
Bonaire lifespan to date: 3 years, 1 month
Hawaii lifespan to date: 2 years, 6 months
Oland lifespan to date: 3 years, 3 months.
Posted on Reply
#10
TheDeeGee
Legacy-ZATime to milk the populace, nVidia? Don't worry... nVidia will be sure to drag out the P100/104 chips' lifespan for as long as possible to maximize cash flow...
Sucks to be a successful company these days and not be allowed to make money : /

Really if you don't like NVIDIA go play with AMD and their Max 2 Generation Support.
Posted on Reply
#11
P4-630
Nice, waiting to buy a GP104-200-A1 :D
Posted on Reply
#12
Vayra86
GP104 will have to be a good performer or I'm skipping this one, even though I was really rooting for Pascal back when Maxwell's 104 released. It had better bring GDDR5X, for one, or I'm instantly turned off. That memory looks promising and the yields are reportedly very good, so there should be some good OC potential to be had, something GDDR5 can't really offer anymore.
Posted on Reply
#13
bug
At least one of these upcoming cards was spotted with GDDR5 (non-X) memory. My guess is that one will be the GTX 1060, because the mid-range cards have always been differentiated by memory bandwidth. Hopefully both the GTX 1080 and 1070 will use GDDR5X. And I mean really use it, because 4k needs the bandwidth.
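For reference, the headline bandwidth figure is just bus width times effective data rate; here is a back-of-the-envelope check using the reference Maxwell memory clocks (partner cards may vary):

$$\text{bandwidth [GB/s]} = \frac{\text{bus width [bits]} \times \text{data rate [Gbps]}}{8}$$
$$\text{GTX 960: } \frac{128 \times 7}{8} = 112\ \text{GB/s} \qquad \text{GTX 980: } \frac{256 \times 7}{8} = 224\ \text{GB/s}$$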
Posted on Reply
#14
medi01
Legacy-ZAdrag out the P100/104 chips' lifespan
That's fine (and up to them); level of evilness: near zero.

I wonder, though, whether we'll see the performance of Maxwell cards suddenly drop in the same benchmarks (recall the minus 10 fps on the 780Ti in reviews on this very site) once the new gen is available. That stinks quite a bit.

And whether we'll see more "Project (we didn't get a penny from nVidia) Cars" kind of games, with nVizillaCripplingCompetitorAndPreviousGenWithObscureCodeWorksWondersForOurSales. That's pure evil.
Posted on Reply
#15
Legacy-ZA
the54thvoidHmm, let's see - company with shareholders wishes to maximise profits..... Your ability to point out the obvious transcends belief.

Move along, nothing to see here.
There is no problem in making profits; however, there used to be limits. The guts get ripped out purposefully and then sold. It kind of reminds me of game DLC... I suppose if you have a lot of money, you don't have to care how you are being done in.

That is "Maximize" not "Maximise." (Don't hate me, it's not bait) :)
HumanSmokeThat is my understanding also. By the time the cards are ready to go retail, Micron's pilot GDDR5X line should have production ready for the 980 replacement.

:SMFH: I hope you get paid to make an idiot out of yourself - I'd hate to think you do it for free.

Current Nvidia GPUs........................

GM204 lifespan to date: 1 year, 7 months
GM200 lifespan to date: 1 year, 1 month
GM206 lifespan to date: 1 year, 4 months
GM107 lifespan to date: 2 years, 2 months

...and here are some other current GPUs that comfortably eclipse those timelines:

Pitcairn/Curacao/Trinidad lifespan to date: 4 years, 2 months
Bonaire lifespan to date: 3 years, 1 month
Hawaii lifespan to date: 2 years, 6 months
Oland lifespan to date: 3 years, 3 months.
Yes, it's funny how easy this is to understand, yet you don't seem to be able to grasp the concept behind it. How ironic; perhaps your sentence is better applied to yourself? The concept being: you should have had more, but didn't get it.
TheDeeGeeSucks to be a successful company these days and not be allowed to make money : /

Really if you don't like NVIDIA go play with AMD and their Max 2 Generation Support.
Like I said above, take note and observe: if the new Pascal cards don't have the major jump in performance they should have, you can be sure they clocked and gutted everything to perfection. I guess I am just starting to get tired of it all.
Posted on Reply
#16
the54thvoid
Intoxicated Moderator
Vayra86GP104 will have to be a good performer or I'm skipping this one, even though I was really rooting for Pascal back when Maxwell's 104 released. It had better bring GDDR5X, for one, or I'm instantly turned off. That memory looks promising and the yields are reportedly very good, so there should be some good OC potential to be had, something GDDR5 can't really offer anymore.
Given Fiji's bandwidth of 512GB/s over 980ti's at 336GB/s...
The GDDR5 memory bandwidth isn't really an issue. Architecture design is.
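For what it's worth, both of those figures fall out of the same simple bus-width-times-data-rate calculation (reference specs; HBM1 trades a low per-pin data rate for a very wide bus):

$$\text{Fury X (HBM1): } \frac{4096 \times 1}{8} = 512\ \text{GB/s} \qquad \text{980 Ti (GDDR5): } \frac{384 \times 7}{8} = 336\ \text{GB/s}$$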
Posted on Reply
#17
silentbogo
Hah.... hah-hah!
[...more sarcastic laugh...]

Once again, I take my hat off for citing "a reliable source of information".

With help from my buddy Google Translate I was able to decipher this:
More detailed information, but (to monitor the hardware battle Sauron lot this?) People, the process of verifying the information received, one will also need to disclose.

++ But additional international (chiphell.com) News GP 104-200 yen is a GTX 970, we believe habae [the author] sources.

Coming soon.

** This information is referred to as single-grade highly matboegi[reliable?] information as possible, but will be asked to share the link.
Which is roughly interpreted as "source not confirmed. just a rumor".

...and feedback from readers:
- Chuckles is expected
- Overhear start ㅎㅎ
- Habae[author] honestly informed of the fact that it comes almost better not move far enough to be almost inform the party raising the rumors rumor than fact, trust in the infinite I look up to --- You missed a rumor anyway not only climbed out of the mud fight doenikkayo white lead heh
... and from the author:
And real-time sharing them with the current overseas media and sources , especially Videocardz and WCCFtech
Posted on Reply
#18
rtwjunkie
PC Gaming Enthusiast
Legacy-ZATime to milk the populace, nVidia? Don't worry... nVidia will be sure to drag out the P100/104 chips' lifespan for as long as possible to maximize cash flow...
That IS the job of a company. It should be the goal of ANY company, whether privately or publicly held.
Legacy-ZA. I guess I am just starting to get tired of it all.
And most of us are starting to get tired of people who subscribe to non-capitalist views, yet never seem to have any problem benefiting from said capitalism. That's somewhat duplicitous.

Maximisation of Profits provides funding for R&D and associated advancement that you expect.
Posted on Reply
#19
Legacy-ZA
rtwjunkieThat IS the job of a company. It should be the goal of ANY company, whether privately or publicly held.



And most of us are starting to get tired of people who subscribe to non-capitalist views, yet never seem to have any problem benefiting from said capitalism. That's somewhat duplicitous.

Maximisation of Profits provides funding for R&D and associated advancement that you expect.
Perhaps instead of nitpicking bits and pieces to make yourself sound clever, you should read everything? No? Thought not. You also love to make assumptions. Have fun, duckling.
Posted on Reply
#20
HumanSmoke
Legacy-ZAYes, it's funny how easy this is to understand, yet you don't seem to be able to grasp the concept behind it. How ironic; perhaps your sentence is better applied to yourself? The concept being: you should have had more, but didn't get it.
No, I think you may have missed the point. Nvidia amortize R&D and increase profit by utilizing salvage parts and increasing overall performance as the fully enabled parts come online (AMD did exactly the same thing with Tonga). AMD, on the other hand, amortize R&D and maximize profit with a longer product cadence. The difference is that Nvidia's way brings performance increases, while AMD's way means that relative performance and pricing fall. Pitcairn was a second-tier GPU when it launched; it is currently a fourth-tier GPU. One of these strategies brings in record revenue and profits, and the other ensures that the GPUs become viewed as stale in the minds of consumers well before they are discontinued.
Legacy-ZALike I said above, take note and observe: if the new Pascal cards don't have the major jump in performance they should have, you can be sure they clocked and gutted everything to perfection. I guess I am just starting to get tired of it all.
Neither vendor will put its best foot forward on a new node. It has never happened before, and given that 14 nm/16 nm will see 2-3 product cycles before 10 nm becomes reality, it certainly won't happen now. If it were an out-and-out performance race, AMD's PR wouldn't have turned from "world's fastest..." to "2.5X better performance per watt" and begun talking up Navi, two generations out, ahead of this launch.
Posted on Reply
#21
rtwjunkie
PC Gaming Enthusiast
Legacy-ZAPerhaps instead of nitpicking bits and pieces to make yourself sound clever, you should read everything? No? Thought not. You also love to make assumptions. Have fun, duckling.
LOL
Posted on Reply
#22
Caring1
Legacy-ZAThat is "Maximize" not "Maximise." (Don't hate me, it's not bait) :)
Only in America. To the rest of the world, maximise is correct.
Posted on Reply
#23
bug
the54thvoidGiven Fiji's bandwidth of 512GB/s over 980ti's at 336GB/s...
<snip/>
The GDDR5 memory bandwidth isn't really an issue. Architecture design is.
You're reading that wrong.
The 980 Ti beats the Fury X at 1080p, but the Fury X scales better towards 4K. And that is using today's hardware; more powerful hardware will only make this difference more obvious.
Also, GDDR5X having more bandwidth means 384-bit memory interfaces are no longer a must-have. That's money saved on the PCB.
But where I agree with you is that if GDDR5 is enough to (mostly) keep the 980 Ti competitive, it's highly unlikely mid-range Pascal cards will need GDDR5X. Then again, using HBM2, GDDR5X and GDDR5 at the same time could turn into a supply/inventory nightmare.
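To put a rough number on the "no longer a must-have" point, assuming the ~10 Gbps initial data rate Micron has been quoting for GDDR5X (faster bins are promised later), a narrower bus lands in the same ballpark:

$$\text{256-bit GDDR5X: } \frac{256 \times 10}{8} = 320\ \text{GB/s} \quad\text{vs}\quad \text{384-bit GDDR5: } \frac{384 \times 7}{8} = 336\ \text{GB/s}$$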
Posted on Reply
#24
ensabrenoir
Legacy-ZAThere is no problem in making profits; however, there used to be limits. The guts get ripped out purposefully and then sold. It kind of reminds me of game DLC... I suppose if you have a lot of money, you don't have to care how you are being done in.
That is "Maximize" not "Maximise." (Don't hate me, it's not bait) :)
Yes, it's funny how easy this is to understand, yet you don't seem to be able to grasp the concept behind it. How ironic; perhaps your sentence is better applied to yourself? The concept being: you should have had more, but didn't get it.
Like I said above, take note and observe: if the new Pascal cards don't have the major jump in performance they should have, you can be sure they clocked and gutted everything to perfection. I guess I am just starting to get tired of it all.
Alright everybody!!!!!!!!!!!! You have absolutely no choice!!!!!!!!!!! Line up and give Nvidia your monies....there's new cards a coming......


.....i need some coffee...........come on guys new tech to play with....this is a happy place.....
Posted on Reply
#25
the54thvoid
Intoxicated Moderator
Caring1Only in America. To the rest of the world, maximise is correct.
Beat me to it. I am British.
bugYou're reading that wrong.
The 980 Ti beats the Fury X at 1080p, but the Fury X scales better towards 4K. And that is using today's hardware; more powerful hardware will only make this difference more obvious.
Also, GDDR5X having more bandwidth means 384-bit memory interfaces are no longer a must-have. That's money saved on the PCB.
But where I agree with you is that if GDDR5 is enough to (mostly) keep the 980 Ti competitive, it's highly unlikely mid-range Pascal cards will need GDDR5X. Then again, using HBM2, GDDR5X and GDDR5 at the same time could turn into a supply/inventory nightmare.
I didn't read it wrong, and although you are correct in the bulk of your post (and I do agree)... The reason Fiji is not that hot at 1080p has something to do with its ROP count (some sort of back-end architecture issue). As the fps drops, the difficulties faced by its ROP count are mitigated, so it appears to scale better at 4K, when in fact its performance 'retrogrades' as the resolution goes down and the back end is flooded with fps processing. I'm sure @HumanSmoke could explain it in technical terms.
Posted on Reply