
NVIDIA GeForce GTX TITAN 6 GB

Joined
Oct 29, 2012
Messages
1,926 (0.46/day)
Location
UK
System Name TITAN Slayer / CPUCannon / MassFX
Processor i7 5960X @ 4.6Ghz / i7 3960x @5.0Ghz / FX6350 @ 4.?Ghz
Motherboard Rampage V Extreme / Rampage IV Extreme / MSI 970 Gaming
Cooling Phanteks PHTC14PE 2.5K 145mm TRs / Custom waterloop / Phanteks PHTC14PE + 3K 140mm Noctuas
Memory Crucial 2666 11-13-13-25 1.45V / G.skill RipjawsX 2400 10-12-12-34 1.7V / Crucial 2133 9-9-9-27 1.7V
Video Card(s) 3 Fury X in CF / R9 Fury 3840 cores 1145/570 1.3V / Nothing ATM
Storage 500GB Crucial SSD and 3TB WD Black / WD 1TB Black(OS) + WD 3TB Green / WD 1TB Blue
Display(s) LG 29UM67 80Hz/Asus mx299q 2560x1080 @ 84Hz / Asus VX239 1920x1080 @60hz
Case Dismatech easy v3.0 / Xigmatek Alfar (Open side panel)
Audio Device(s) M-audio M-track / realtek ALC 1150
Power Supply EVGA G2 1600W / CoolerMaster V1000 / Seasonic 620 M12-II
Mouse Mouse in review process/Razer Naga Epic 2011/Razer Naga 2014
Keyboard Keyboard in review process / Razer Blackwidow Ultimate 2014/Razer Blackwidow Ultimate 2011
Software Windows 7 Ultimate / Windows 7 ultimate / Windows 7 ultimate
Benchmark Scores cinebench 15.41 3960x @ 5.3ghz Wprime32m 3.352 3960x @ 5.25ghz Super PI 32m: 6m 42s 472ms @5.25ghz
I'm pretty sure the +20% performance over the 7970 GHz ed. is caused by that low clock. The 680 runs at 1008 MHz and the Titan at 836 MHz; that's 83% of the 680's clock, so the effect of the 2688 shaders is reduced to 83%, meaning they actually perform like ~2231 shaders at 1008 MHz. Also, the 680 was bandwidth starved, and this is also bandwidth starved: true shader performance went up by 45% (over the 680) and bandwidth is up 50%. Add to that the fact that these are the first drivers for this card and you get why it's so slow.

However, no amount of driver optimization can make up for the low clock, so the most I can see this card pushing is 45% more performance over an equally optimized 680, while still carrying that stupidly high price tag.
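The effective-shader estimate above works out like this (a back-of-the-envelope proportionality sketch using the numbers from the post, not an official scaling model):

```python
# Rough scaling from the post above: shader throughput is roughly
# proportional to (shader count x clock), so the Titan's 2688 shaders
# at 836 MHz behave like fewer shaders running at the 680's clock.
gtx680_clock = 1008   # MHz, GTX 680 base clock
titan_clock = 836     # MHz, Titan base clock
titan_shaders = 2688

clock_ratio = titan_clock / gtx680_clock          # ~0.83
effective_shaders = titan_shaders * clock_ratio   # ~2230 "680-clock" shaders

print(round(clock_ratio, 2), round(effective_shaders))
```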
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
The problem with your argument is that the only way I could see someone justifying getting a Titan is someone who is planning on running multi-monitor with games
You think that is why Nvidia have broken their own convention of castrating FP64 performance on GeForce cards by allowing full 1:3-rate double precision on a consumer GTX Titan?

It would seem rather obvious (at least to me) that Nvidia is casting a wider net than just Surround gaming. GK110 was developed primarily for compute, yet aside from disabling MPI and ECC memory, the Titan retains not only the full spec of the $3200-4500 Tesla (including the 6GB memory component) but also allows for the option of a 35% clock increase if the user's workload is FP32 based.

Hey, but what the fuck do I know? Maybe Nvidia just forgot to disable double precision, and the selling of workstation card at consumer card prices is just the beginning of the end.
it's simply not worth 100% more price for 20% more power
Strange math. You work on a different numbering system where you live?
@ 2560, by W1ZZ's charts, the Titan shows a 31.57% increase over the 7970GE and a 42.86% increase over the GTX 680
It might be fast but it looks poorly optimized in comparison to the 680 and 7970 even if it is faster overall.
Depends how you look at it.
1. GK 110 wasn't developed as a gaming chip- the GK104 and Tahiti were.
2. The Titan uses equal power to the 7970GE yet offers 31.57% more gaming performance at 2560x1600
The only real argument is price- which nobody is disputing, and is largely irrelevant since the pricing is 1. Deliberately set high to ensure Nvidia need not keep the consumer channel supplied with GPUs that would return better margins as Quadro and Tesla, and 2. Not to undermine the professional cards above it in the product stack.

What's the point of Nvidia pricing Titan at $499? It means that Nvidia then have to sell the GTX 680 for around $299-329, with the rest of the product stack realigned. The same people that are going to buy Titan at $499 would then buy a 680 for $299...or a 7970 at whatever price AMD would need to be competitive....assuming people didn't use the same logic/performance-per-$ metric and buy a couple of bargain-basement-priced GTX 660 Tis or 7950s.
 
Joined
Aug 7, 2007
Messages
2,723 (0.45/day)
Processor i5-7600k
Motherboard ASRock Z170 Pro4
Cooling CM Hyper 212 EVO w/ AC MX-4
Memory 2x8GB DDR4 2400 Corsair LPX Vengeance 15-15-15-36
Video Card(s) MSI Twin Frozr 1070ti
Storage 240GB Corsair Force GT
Display(s) 23' Dell AW2310
Case Corsair 550D
Power Supply Seasonic SS-760XP2 Platinum
Software Windows 10 Pro 64-bit
I guess Nvidia had a good reason for naming it the "Titan" :eek:
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
Depends how you look at it.
1. GK 110 wasn't developed as a gaming chip- the GK104 and Tahiti were.
2. The Titan uses equal power to the 7970GE yet offers 31.57% more gaming performance at 2560x1600
The only real argument is price- which nobody is disputing, and is largely irrelevant since the pricing is 1. Deliberately set high to ensure Nvidia need not keep the consumer channel supplied with GPUs that would return better margins as Quadro and Tesla, and 2. Not to undermine the professional cards above it in the product stack.

How can it not be deliberately set high when there is no Quadro or Tesla option to compete?
If you buy a Tesla, you need to buy a Quadro for video out; that's $6K

Something that a $3.5K W10000 will do with full support. That's half the money and half the slots and 2/3rd the power saved right there, unless it's CUDA you're after.

That same premise can be made for every big chip Nvidia has released that went into a Tesla variant. Maybe you have more insight, but I haven't heard how selling those chips in GeForce variants hurt HPC sales in the past.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
How can it not be deliberately set high when there is no Quadro or Tesla option to compete?
If you buy a Tesla, you need to buy a Quadro for video out; that's $6K
Glad to see that we agree, although I'd hate to have you shopping for me:
Tesla K20 $3259 + Quadro NVS 510 $365 = $3624
Something that a $3.5K W10000 will do with full support
Cool. Where can I buy this imaginary card? You can't even buy the $3599 S10000 yet.
That's half the money
Well, it's actually 99.3%....or 74.7% with the same store's K20X. You use the same numerical system as Aquinas?
and half the slots
3 slots versus 2 slots = two thirds
and 2/3rd the power saved right there
Unlikely. The S10000 is board rated at 375 watts (for comparison, the W9000 is rated at 274 watts and reaches that consumption). The K20/K20X is rated at 225/235 W, and the Quadro NVS 510 is rated at 35 watts.
If you think that two Tahiti GPUs use less power than one GK110 + one GK107 then I'd suggest you do some more fact checking.
unless it's CUDA you're after.
Considering AMD's pro drivers are basically non-existent, I'd say that the Quadro drivers and apps also come into that equation.
That same premise can be made for every big chip Nvidia has released that went into a Tesla variant. Maybe you have more insight but I havent heard how selling those chips in GeForce variants hurt HPC sales in the past.
Maybe you couldn't understand my previous post. I will re-iterate:
Tesla and Quadro cards retain full compute ability. GeForce cards with the exception of the Titan have had their compute features artificially limited to protect the Quadro and Tesla brands.
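For what it's worth, the cost and power comparison being argued here works out like this, using only the prices and board ratings quoted in the thread (list prices vary by store, so treat them as approximate):

```python
# Prices and board power ratings as quoted in this thread (approximate).
k20_price, nvs510_price = 3259, 365   # Tesla K20 + Quadro NVS 510 for display out
s10000_price = 3599                   # AMD FirePro S10000
k20_power, nvs510_power = 225, 35     # rated board power, watts
s10000_power = 375

nv_total_price = k20_price + nvs510_price   # 3624
nv_total_power = k20_power + nvs510_power   # 260 W

# The S10000 costs ~99.3% of the Nvidia pairing, not "half the money",
# and is rated for MORE board power, not a third less.
print(round(s10000_price / nv_total_price, 3))
print(s10000_power, "W vs", nv_total_power, "W")
```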
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
I like how suddenly money is an issue for a professional setup.

What happened to the "it's not targeted at you" TITAN argument? :rolleyes:

Glad to see that we agree, although I'd hate to have you shopping for me:
Tesla K20 $3259 + Quadro NVS 510 $365 = $3624

If you're gonna try and correct someone, at least reference the proper card yourself ;)
K20X $4,450

Maybe you couldn't understand my previous post. I will re-iterate:
Tesla and Quadro cards retain full compute ability. GeForce cards with the exception of the Titan have had their compute features artificially limited to protect the Quadro and Tesla brands.

You can confirm this? Firmware is not a limiting factor anymore?
Links please.


You can't even buy the $3599 S10000 yet.
AMD FirePro S10000
TigerDirect
SabrePC - Same site you referenced :laugh:

:nutkick:
 
Last edited:
Joined
Dec 22, 2009
Messages
88 (0.02/day)
Location
AR, USA
Processor AMD 2700 @ 4.0GHz all-core
Motherboard Asrock X470 Taichi
Cooling EK Fluid Gaming A240R + Extra 240 Rad
Memory 2x8GB GSkill DDR4-3533 CL16
Video Card(s) Radeon Vega 64 @ 1672MHz
Storage HP EX920 1TB NVMe SSD
Display(s) MSI Optix G24C 1080p 144Hz 24", Dell P2214H 22" 1080p LCD
Case Fractal Design Focus G Red
Power Supply Rosewill Hive 750S 80+ Bronze
Software Windows 10 Pro 64bit
You know, I love that you guys do price/performance charts, especially broken down by resolution. However, I do have one suggestion that would make them absolutely perfect. You guys do the price/performance calculation for us and then order by best "value", and that could be useful for some, but a lot of us are looking more for maximum performance without crossing a harsh "diminishing returns" wall. (Kinda like Tom's "Best ___ for the Money" columns.) What I'd like to see is price on one axis (that way we could mentally adjust for price changes later) and performance on the other, ordered by performance, and broken down by resolution like it is now. Personally I'm thinking kind of like a line chart, or even the current bar chart rotated 90 degrees, but ordered by performance instead of value.

I guess at the end of the day, the question I really want that section to answer is, "At a given resolution, at what point do I hit 60fps average [overkill] or start getting ripped off [diminishing returns]?" It's like, I know a GeForce 660 is a great value, but it's not going to drive the FPS I want at 2560x1440 high details, you know?
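Something like the ordering I mean, sketched with made-up placeholder numbers (these cards and figures are illustrative, not W1zzard's actual data):

```python
# Sketch of the suggested view: order cards by raw performance, show
# price alongside, so the "diminishing returns" wall jumps out.
# (name, price in USD, relative performance %) -- placeholder values.
cards = [
    ("GTX 660 Ti", 300, 62),
    ("HD 7970 GHz", 450, 76),
    ("GTX 680", 460, 70),
    ("GTX Titan", 1000, 100),
]

# Sort by performance (descending) instead of by perf-per-dollar.
for name, price, perf in sorted(cards, key=lambda c: c[2], reverse=True):
    print(f"{name:<12} perf {perf:>3}%  ${price:>4}  {perf / price:.3f} perf/$")
```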
 

Attachments

  • perfdollar_2560.gif (143 KB)
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
I like how suddenly money is an issue for a professional setup.
Stop trolling. You know full well that I was correcting your faulty $6K figure.
If you're gonna try and correct someone, at least reference the proper card yourself ;)
K20X $4,450
You mean the card I already referenced? If you can't comprehend my posts, why bother trying to answer them?
Well, it's actually 99.3%....or 74.7% with the same store's K20X. You use the same numerical system as Aquinas?
You can confirm this? Firmware is not a limiting factor anymore?
the biggest factor is that for the first time on any consumer-level NVIDIA card, double precision (FP64) performance is uncapped. That means 1/3 FP32 performance, or roughly 1.3TFLOPS theoretical FP64 performance. NVIDIA has taken other liberties to keep from this being treated as a cheap Tesla K20, but for lighter workloads it should fit the bill.

As compared to the server and high-end workstation market that Tesla carves out, NVIDIA will be targeting the compute side of Titan towards researchers, engineers, developers, and others who need access to (relatively) cheap FP64 performance, and don’t need the scalability or reliability that Tesla brings.
[source 1], [source 2], [and source 3 on the first page of this thread]
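For reference, the theoretical figures in that quote can be reproduced from the published shader counts and the 837 MHz base clock, using the standard FMA-based peak-FLOPS formula (a back-of-the-envelope derivation, not an official spec sheet):

```python
# Peak FLOPS = unit count x 2 ops/cycle (fused multiply-add) x clock.
base_clock_hz = 837e6     # Titan base clock
fp32_shaders = 2688
fp64_units = 896          # 1:3 ratio of FP64 to FP32 units on GK110

fp32_peak = fp32_shaders * 2 * base_clock_hz   # ~4.5 TFLOPS single precision
fp64_peak = fp64_units * 2 * base_clock_hz     # ~1.5 TFLOPS at the base clock

print(round(fp32_peak / 1e12, 2), round(fp64_peak / 1e12, 2))
# Anandtech's "roughly 1.3 TFLOPS" is a bit lower because enabling
# full-rate FP64 forces clocks below the 837 MHz base.
```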
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
That doesn't establish that the TITAN is not firmware limited like previous GeForce *80s. :confused:
What part of "double precision (FP64) performance is uncapped. That means 1/3 FP32 performance, or roughly 1.3TFLOPS theoretical FP64 performance" don't you understand?
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.63/day)
Really, linking to more benchmarks?

That doesn't establish that the TITAN is not firmware limited like previous GeForce *80s.

:confused:

But, running it under LN2, does. Cards are VRM limited for LN2. Firmware doesn't even need to be thought about. Find K1ngP1n's rig pics, and your answer is there.
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
But, running it under LN2, does. Cards are VRM limited for LN2
I'm sure someone will get around to hard modding/adding a daughterboard to the card at some stage...probably about 5 minutes after the HWBot leaderboard becomes congested with unmodded Titans filling the single, 2, 3, and 4 card benchmarks.

/Looking forward to a succession of Titan OCers shattering the 3DM Fire Strike record...by 5 points...every few days :laugh:
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.63/day)
I'm sure someone will get around to hard modding/adding a daughterboard to the card at some stage.

K1ngP1n already did. Which to me says the cards are VRM limited already. 1 day after launch. :p

You just can't compete with these guys that work at the OEMs and have open access to parts. Anything anyone else would try has already been done. Now it's just a matter of binning cards for the best one, and @ $1000 a pop, that's not gonna happen too quickly. :laugh: 1750 MHz, more than double stock, already posted on HWBOT.
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
K1ngP1n already did. Which to me says the cards are VRM limited already. 1 day after launch. :p
Understandable from a vendor's point of view. The nature of the enthusiast is to keep pushing until something breaks...and components breaking, regardless of the circumstances, tends to reflect badly on the manufacturer. I'm pretty sure that if thermal throttling were removed from modern CPUs, or the average Joe Blow could switch off PowerTune, the blowback would more than negate any gain from HWBot competition.
As far as Nvidia are concerned, you could probably see the writing on the wall when GTX 590s started producing fireworks when overvolted. In the days when a YouTube video negates a whole marketing campaign, it's easy to see why they wouldn't take the chance.
You just can't compete with these guys that work at the OEMs and have open access to parts. Anything anyone else would try has already been done.
Pretty much. With competitive overclocking now being a valid PR and marketing tool, every vendor seems eager to jump on the bandwagon, which means that the traditional enthusiast-oriented powerhouses need to up the ante.
Now it's just a matter of binning cards for the best one, and @ $1000 a pop, that's not gonna happen too quickly. :laugh: 1750 MHz, more than double stock, already posted on HWBOT.
I'd be surprised if the top vendors weren't already binning for factory-OC'd "specials" like the Asus Matrix/DCII, MSI Lightning, EVGA SSC/HC, Gigabyte WF3 - in which case, they will certainly be putting aside any golden samples for the extreme crowd.
 
Joined
Oct 26, 2011
Messages
3,145 (0.69/day)
Processor 8700k Intel
Motherboard z370 MSI Godlike Gaming
Cooling Triple Aquacomputer AMS Copper 840 with D5
Memory TridentZ RGB G.Skill C16 3600MHz
Video Card(s) GTX 1080 Ti
Storage Crucial MX SSDs
Display(s) Dell U3011 2560x1600 + Dell 2408WFP 1200x1920 (Portrait)
Case Core P5 Thermaltake
Audio Device(s) Essence STX
Power Supply AX 1500i
Mouse Logitech
Keyboard Corsair
Software Win10
Meh, they don't cherry-pick chips; this has been proven at least for ASUS. I mean, look at the 7970 Platinum: some clock 100 MHz worse than reference GPUs...
 
Joined
Mar 23, 2012
Messages
568 (0.13/day)
Processor Intel i9-9900KS @ 5.2 GHz
Motherboard Gigabyte Z390 Aorus Master
Cooling Corsair H150i Elite
Memory 32GB Viper Steel Series DDR4-4000
Video Card(s) RTX 3090 Founders Edition
Storage 2TB Sabrent Rocket NVMe + 2TB Intel 960p NVMe + 512GB Samsung 970 Evo NVMe + 4TB WD Black HDD
Display(s) 65" LG C9 OLED
Case Lian Li O11D-XL
Audio Device(s) Audeze Mobius headset, Logitech Z906 speakers
Power Supply Corsair AX1000 Titanium
Meh, they don't cherry-pick chips; this has been proven at least for ASUS. I mean, look at the 7970 Platinum: some clock 100 MHz worse than reference GPUs...

A lot of times the cherry picking is for LN2 and not air on cards like that. I can tell from the ASIC on my chips (in addition to the results of others) that Lightning 7970s are absolutely binned for LN2.
 

johnspack

Here For Good!
Joined
Oct 6, 2007
Messages
5,980 (0.99/day)
Location
Nelson B.C. Canada
System Name System2 Blacknet , System1 Blacknet2
Processor System2 Threadripper 1920x, System1 2699 v3
Motherboard System2 Asrock Fatality x399 Professional Gaming, System1 Asus X99-A
Cooling System2 Noctua NH-U14 TR4-SP3 Dual 140mm fans, System1 AIO
Memory System2 64GBS DDR4 3000, System1 32gbs DDR4 2400
Video Card(s) System2 GTX 980Ti System1 GTX 970
Storage System2 4x SSDs + NVme= 2.250TB 2xStorage Drives=8TB System1 3x SSDs=2TB
Display(s) 2x 24" 1080 displays
Case System2 Some Nzxt case with soundproofing...
Audio Device(s) Asus Xonar U7 MKII
Power Supply System2 EVGA 750 Watt, System1 XFX XTR 750 Watt
Mouse Logitech G900 Chaos Spectrum
Keyboard Ducky
Software Manjaro, Windows 10, Kubuntu 23.10
Benchmark Scores It's linux baby!
Isn't this kind of like the 7950GT which was released like 3 months before the 8 series, just to pacify the enthusiasts? Just a quick market grab. Money well spent!
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
You think that is why Nvidia have broken their own convention of castrating FP64 performance on GeForce cards by allowing full 1:3-rate double precision on a consumer GTX Titan?

You make it sound like you can enable full-power DP math on non-Titan GeForce chips. Let's get something perfectly clear. How many shaders and how much performance did this card have to dedicate to get that 1:3 DP math? You also gimp SP math when you enable full-speed DP math. At least the 7970 just does compute well regardless of whether it's DP or SP.
It would seem rather obvious (at least to me) that Nvidia is casting a wider net than just Surround gaming. GK110 was developed primarily for compute, yet aside from disabling MPI and ECC memory, the Titan retains not only the full spec of the $3200-4500 Tesla (including the 6GB memory component) but also allows for the option of a 35% clock increase if the user's workload is FP32 based.

You know, ECC memory is pretty important when you're doing compute applications. If you start overclocking the GPU, your results can't be guaranteed. ECC at least eliminates the concern about corrupted memory, to a point. Most gamers won't ever need full DP math, and as far as professionals who use the Tesla cards go, I think they might be interested in spending the extra money knowing that they can have their compute scale using MPI and that memory is reliable (ECC).
Strange math. You work on a different numbering system where you live?
@ 2560, by W1ZZ's charts, the Titan shows a 31.57% increase over the 7970GE and a 42.86% increase over the GTX 680

I was considering all resolutions, which isn't the best gauge. 30% more performance at 100% more price is still a bit steep. A 1:3 ratio of performance gain relative to price against the 680 still isn't exactly great.
GK 110 wasn't developed as a gaming chip- the GK104 and Tahiti were.
You're going to have to prove that, I think. Just because it was in Tesla chips first does not mean that it was designed for compute. The truth of the matter is, we don't know why it came out late, and I doubt it was because it wasn't ready. I'm willing to bet that if the 7970 was vastly faster and they needed the GK110, they would have released it. They didn't feel that they had to, so they waited. The timing was pretty bad, though, IMHO, but I don't agree that the GK110 was developed strictly with compute in mind.

The only real argument is price- which nobody is disputing, and is largely irrelevant since the pricing is 1. Deliberately set high to ensure Nvidia need not keep the consumer channel supplied with GPUs that would return better margins as Quadro and Tesla, and 2. Not to undermine the professional cards above it in the product stack.
You know, people who actually invest in Tesla and use its features would think that not having MPI would suck, because now it's that much harder to get more than one of them to work together. If Titan is designed for compute, it's designed to do it on its own, because anything that would allow it to scale or be truly reliable for compute has been gimped. Also, once again, most data centers won't want a Titan to crunch; they will want something that's more reliable and has the features they need.

With all of that said, GK110 is a GPU that does DP math well when you enable it. I wouldn't go so far as to say that it was designed for compute. Tesla has the extra hardware to do that the right way.
What's the point of Nvidia pricing Titan at $499?
That is what everyone else is saying, not me. I've been saying $700-750 USD would have been the sweet spot. $500-550 USD is too low and $1000 USD is too high. $750 feels like an acceptable medium that would get more buyers.
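Putting rough numbers on that value argument (the 680 price here is an assumed ~$500 launch price; the performance figure is the 2560 number quoted earlier in the thread):

```python
# Perf-vs-price tradeoff being argued: relative performance from the
# 2560 charts quoted in this thread, prices approximate (assumed).
titan_perf, gtx680_perf = 142.86, 100.0   # relative performance, 680 = 100
titan_price, gtx680_price = 1000, 500     # USD, rough launch pricing

perf_gain = titan_perf / gtx680_perf - 1      # ~0.43 (+43%)
price_gain = titan_price / gtx680_price - 1   # 1.00 (+100%)

# Each unit of extra price buys well under half a unit of extra performance.
print(round(perf_gain / price_gain, 2))
```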
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
You make it sound like you can enable full-power DP math on non-Titan GeForce chips. Let's get something perfectly clear. How many shaders and how much performance did this card have to dedicate to get that 1:3 DP math?
Precisely? Zero. Of the 2688 shaders on the chip, 1792 are FP32 capable, 896 are FP32/64. There are no dedicated FP64 shaders on the chip.
You also gimp SP math when you enable full-speed DP math. At least the 7970 just does compute well regardless of whether it's DP or SP.
Yes...looks very gimped.
Single precision: [benchmark chart]

Double precision: [benchmark chart]
You know, ECC memory is pretty important when you're doing compute applications.
The case for ECC with GDDR5. ECC is generally the province of pro graphics/math co-processors where error detection is critical. Obviously, Titan is not being aimed at those markets- that is what Quadro and Tesla are for.
You know, people who actually invest in Tesla and use its features would think that not having MPI would suck, because now it's that much harder to get more than one of them to work together. If Titan is designed for compute, it's designed to do it on its own, because anything that would allow it to scale or be truly reliable for compute has been gimped....
What I’m trying to say is that for the last week I’ve been having to fend off our CS guys, who upon hearing I had a GK110 card wanted one of their own. If you’ve ever wanted proof of just how big a deal GK110 is – and by extension Titan – you really don’t have to look too much farther than that....Titan, its compute performance, and the possibilities it unlocks is a very big deal for researchers and other professionals that need every last drop of compute performance that they can get, for as cheap as they can get it. This is why on the compute front Titan stands alone; in NVIDIA’s consumer product lineup there’s nothing like it, and even AMD’s Tahiti based cards (7970, etc), while potent, are very different from GK110/Kepler in a number of ways. Titan essentially writes its own ticket here....As compared to the server and high-end workstation market that Tesla carves out, NVIDIA will be targeting the compute side of Titan towards researchers, engineers, developers, and others who need access to (relatively) cheap FP64 performance, and don’t need the scalability or reliability that Tesla brings - Anandtech
If you start overclocking the GPU, your results can't be guaranteed.
Titan doesn't allow overclocking when full rate FP64 is enabled for that precise reason:
The penalty for enabling full speed FP64 mode is that NVIDIA has to reduce clockspeeds to keep everything within spec. For our sample card this manifests itself as GPU Boost being disabled, forcing our card to run at 837MHz (or lower) at all times-Anandtech
FP64 calculation is obviously slower than FP32, and requires much more power to run. Just as well the laws of physics are in play- a 90% improvement over Tahiti using less power is probably not something AMD would like to see extended.
ECC at least eliminates the concern about corrupted memory, to a point...and as far as professionals who use the Tesla cards go, I think they might be interested in spending the extra money knowing that they can have their compute scale using MPI and that memory is reliable (ECC)
Obviously, there are enough people who just require EDC rather than full ECC (I'll take the word of the Anandtech guys over a random here, I think)...after all, EDC was good enough for every AMD GPU (ECC was implemented only with Southern Islands FirePro).
Also, once again, most data centers won't want a Titan to crunch; they will want something that's more reliable and has the features they need
I don't think anyone is suggesting Titan will be used in this manner.
With all of that said, GK110 is a GPU that does DP math well when you enable it. I wouldn't go so far as to say that it was designed for compute
If FP64 isn't compute (GPGPU), then what is it?
Could you please list some applications that require double precision but aren't considered compute?
I think you'll find that most commercial applications (compute-oriented Maya and AutoCAD, for instance) use a combination of single and double precision.

So, what you are trying to convey is that enthusiast gamers won't buy the card because it is too expensive, and GPGPU users won't buy the card because it lacks features...so no one will buy the card! (There aren't a whole lot of options left.) So your analysis differs from -and, you would have me believe, is superior to- that of Anandtech's staff and Nvidia's strategic marketing planners. Well, hopefully you're right and the price craters a few weeks from now.
 
Last edited:
Joined
Oct 26, 2011
Messages
3,145 (0.69/day)
If I can add to your debate: V-Ray (a renderer I use in 3D Studio Max instead of the default one) uses both DP and SP code, as far as I know.

I will use the heck out of my Titan's CUDA cores with V-Ray's CUDA acceleration; this GPU is a bloody good entry-level compute monster.

I'll give you more details as soon as my order arrives.
 

Cortex

New Member
Joined
Aug 18, 2012
Messages
44 (0.01/day)
Quote:
Originally Posted by Aquinus View Post
You make it sound like you can enable full-power DP math on non-Titan GeForce chips. Let's get something perfectly clear. How many shaders and how much performance did this card have to dedicate to get that 1:3 DP math?
Precisely? Zero. Of the 2688 shaders on the chip, 1792 are FP32 capable, 896 are FP32/64. There are no dedicated FP64 shaders on the chip.


No. 2688 FP32-only and 896 DP. (16×12 FP32 SPs and 16×4 FP64 SPs per SMX)

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last/3

 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
Quote:
Originally Posted by Aquinus View Post
You make it sound like you can enable full-power DP math on non-Titan GeForce chips. Let's get something perfectly clear. How many shaders and how much performance did this card have to dedicate to get that 1:3 DP math?
Precisely? Zero. Of the 2688 shaders on the chip, 1792 are FP32 capable, 896 are FP32/64. There are no dedicated FP64 shaders on the chip.

No. 2688 FP32-only and 896 DP. (16×12 FP32 SPs and 16×4 FP64 SPs per SMX)
The question, I believe, was about dedicated FP64 cores/shaders.
The 896 double precision units are linked to FP32 shaders. As far as my understanding goes, a conventional core/shader encompasses the whole graphics pipeline (Input Assembler > Vertex > Hull > Tessellation > Domain > Geometry > Raster > Pixel), while the FP64 unit is largely a separate entity - and that is why it's differentiated in the literature as a unit rather than a shader or core. Is this not correct?

I wouldn't argue that the units take up die real estate (as they do in any architecture), just that the units aren't shaders by definition- I have never heard the GK110 die described as a 3840-core GPU, for instance. The number is usually given as 2880.
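For what it's worth, the per-SMX figures quoted above roll up to both totals being thrown around (assuming the usual 14-of-15-SMX configuration for Titan):

```python
# Each GK110 SMX has 192 FP32 cores (16x12) and 64 FP64 units (16x4).
fp32_per_smx, fp64_per_smx = 192, 64
titan_smx = 14   # Titan ships with 14 of GK110's 15 SMXes enabled
full_smx = 15

print(titan_smx * fp32_per_smx, titan_smx * fp64_per_smx)  # 2688 896
print(full_smx * fp32_per_smx)                             # 2880, the full-die count
```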
 
Last edited:
Joined
Sep 15, 2011
Messages
6,457 (1.41/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
Where is the SLI/3-way SLI review? The link is not working...
 
Joined
Oct 26, 2011
Messages
3,145 (0.69/day)
Joined
Jul 5, 2008
Messages
272 (0.05/day)
System Name WorkStation
Processor Intel i7 3770k @ 4.4GHz
Motherboard ASRock Z77 Extreme6
Cooling Corsair H110 Water Cooler AIO
Memory Corsair Vengeance 8GB DDR3 1600MHz
Video Card(s) MSI GTX680 Twin Frozr III OC
Storage WD 1TB Sata III
Display(s) Samsung 22-inch LED 1080p
Case Corsair Carbide Air 540
Audio Device(s) Onboard Realtek 898 HD
Power Supply Corsair CS750M Gold
Software Windows 8.1 Pro x64
Just curious, could GPUs with Boost 1.0 be updated to Boost 2.0 with BIOS update in the future?
 