
NVIDIA GeForce GTX Titan Graphics Card Pictured in Full

So even after nVidia published some benchmarks, most of you guys still expect some end-all, do-all graphics card? Really?! It's barely 30% faster than the GTX 680, or 25% faster than the 7970 GHz Edition, by nVidia's own dodgy charts, for crying out loud. ASUS Matrix Platinum 7970s that can clock to 1300-1325 MHz are probably gonna be faster than the Titan in a lot of scenarios (like Metro VH/DoF/MSAA)...
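For what it's worth, those two percentages can be cross-checked with a bit of arithmetic (assuming the 30% and 25% figures refer to the same workloads, which is a big assumption given these are marketing slides):

```python
# Sketch: cross-checking the claimed uplifts. The percentages are the
# rumored marketing numbers quoted above, not measured results.
gtx680 = 1.00                 # baseline performance
titan = gtx680 * 1.30         # "30% faster than GTX 680"
hd7970ge = titan / 1.25       # "25% faster than 7970 GHz Edition"

# Implied 7970 GE vs GTX 680 performance ratio
print(f"Implied 7970 GE vs GTX 680: {hd7970ge / gtx680:.2f}x")  # ~1.04x
```

In other words, the two claims together imply the 7970 GHz Edition is only about 4% ahead of the GTX 680, which is roughly consistent with contemporary reviews.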

 
Don't forget how Nvidia dropped the 8800 GT, offering 90% of the performance of the super-pricey 8800 GTX while costing less than half as much.

Or the fact that pricing at the top end of Nvidia's GPUs has been on the decline since then:

Nvidia Launch Prices:
GTX280 - $650
GTX480 - $500
GTX580 - $500
GTX680 - $499

Granted, not a huge drop towards the end, but I think $500 for the pinnacle of GPU performance is something of a sweet spot.

Are they going to cripple it again so it'll become useless for 3ds Max Design GPU-accelerated rendering? So that I'll be forced to buy their super uber premium stupendously expensive Quadro 6000 video card. :confused:

My 680 can't even use Quicksilver and mental ray; it's slow in OpenGL apps like Google SketchUp as well. I wonder if this one is too.

Nope. The GK110 should be a revision of the GK100, which was the successor to the GTX580. That means all that compute functionality that was stripped away from GTX680 is back and presumably better than ever.

It's always kinda dumb to compare an overclocked card to a stock one. The Titan card will overclock at least somewhat, and only then can you compare a manually overclocked 7970 to a (manually OC'd) Titan. While I agree that what Nvidia is bringing to the table isn't particularly impressive (according to the info we have now), the reason has more to do with price than anything else.

Not many 7970s, even Matrix ones, will hit 1300 MHz on air.

If I recall, didn't the HD7970 launch with clocks similar to what is reported for Titan? Around 800-850 MHz? Who's to say the Titan won't overclock at least as well? GK104 and Tahiti have had no problems being stretched upwards of 30% for retail cards...
 
Dude, you can't pair four of something. A pair is two.
 

The 7970 launched at 925 MHz and the average overclock is around 1200 MHz, which is crazy.

Average % OCs are a bit less for GTX 670/680 in comparison.

I am curious to see how this Titan card OCs. I'm glad they're supposedly restoring real voltage control to the card, but with the chip being so large (7.1B transistors!!!), I kind of doubt that it's going to OC much more than 10-15%.

The big average overclocks seen from the 7950 and 7970 are somewhat uncommon for high end GPUs.
 
OK, I'm done in this thread. Had enough of fanboys, or people with pockets deeper than their intellect/sanity well, for one day. Enjoy your crummy stillborn stopgap GPU.

And for the record, when I saw ocholic.ch's "review" of it, I was absolutely ecstatic about getting one during the summer, maybe... even though they smelled like complete bullshit... Thanks nVidia for confirming that... no GPUs to look forward to anytime soon then...
 
~850MHz? What'll it do under water @ 1150 and 400 watts ...
 
 

For professional graphics it's all about how the driver is set up; a desktop driver is not the same as a professional driver. At least back in the day, some GeForce and Radeon cards could be converted over to the Quadro and FireGL series via a BIOS flash and use of the proper drivers for those cards.
 

That's ironic, considering I was about to say earlier: chill out, AMD fanboy!
 
Looks like their Tesla series of GPUs
 

Need some reviews before I'd jump on a $1K price-tagged GPU. I wanna see how it performs in 3D rendering.


Never knew that, so I did a little experiment and bought a low-profile pro GPU: a $130 Quadro 410 that performs 10x better in SketchUp than my 680. :laugh:
 

I thought it had more to do with the silicon's error-testing process, which (I thought) was long and involved for professional workstation GPUs and rudimentary for gaming GPUs. The logic being that you really need error-free graphics computation in the pro market but not for gaming.

Actually, I'm fairly certain this is still the case. Converting a gaming GPU to a "workstation" GPU through a BIOS flash will make a gaming GPU appear to be a workstation GPU, but it doesn't mean it will have the same lack of computation errors...
 
Have you ever compared specs between certain cards? They are exactly the same; on the professional side it's all about those drivers and software.
 
I want four of these, and 2 of them I'll break up and smoke for 3 months straight.....
 
Not entirely. Both you and Horrux are correct.
Pro cards like the Tesla have a much more rigorous validation procedure which goes far beyond crafting UMDs (user-mode drivers) and in-place ongoing test evaluation. While the GPUs are binned for voltage and usable logic blocks, you'd find that a finer-grained binning is also used to test the integrity of the logic blocks that are functional. I don't think the binning of pro GPUs differs a great deal from pro CPUs like Xeon and Opteron in that respect.
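To illustrate why the pro market cares about computational integrity (a toy example only; this is not how GPUs actually validate silicon), even a simple parity bit shows how a single flipped bit that a gaming workload would silently shrug off can be caught:

```python
def parity(bits):
    """Even-parity bit: 1 if the data has an odd number of set bits."""
    return sum(bits) % 2

def check(word):
    """True if the word (data + parity bit) passes the parity check."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0, 1]
stored = data + [parity(data)]   # append parity bit on write

# Simulate a single-bit error, the kind ECC memory exists to catch
corrupted = stored.copy()
corrupted[3] ^= 1

print(check(stored))     # True  - clean read
print(check(corrupted))  # False - error detected
```

Single-bit parity only detects errors; real workstation/compute cards use full ECC, which can also correct them.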
 

I know my shiz bro. I come across as humble, because I am, but I still know tech. ;)
 

Yeah, I wouldn't doubt it. I did research on the 5870 and its FireGL/FireStream counterpart; the only things changed were the RAM capacity and clock speeds.


ok cool dude
 