Thursday, May 1st 2014

New GTX TITAN-Z Launch Details Emerge

NVIDIA's GeForce GTX TITAN-Z missed its earlier launch date of 29th April, 2014, which several retailers had confirmed to the press, forcing some AIC partners to make do with paper launches of cards bearing their brand. It turns out the delay will be just over a week: the GeForce GTX TITAN-Z is now expected to be available on the 8th of May, 2014. That is when you'll be able to buy the US $3,000 graphics card off the shelf.

A dual-GPU graphics card based on a pair of 28 nm GK110 GPUs, the GTX TITAN-Z features a total of 5,760 CUDA cores (2,880 per GPU), 480 TMUs (240 per GPU), 96 ROPs (48 per GPU), and a total of 12 GB of GDDR5 memory, spread across two 384-bit wide memory interfaces. Although each of the two GPUs is configured identically to a GTX TITAN Black, the card features lower clock speeds. The core is clocked at 705 MHz (889 MHz on the GTX TITAN Black), with GPU Boost frequencies of up to 876 MHz (up to 980 MHz on the GTX TITAN Black), while the memory remains at 7.00 GHz. The card draws power from a pair of 8-pin PCIe power connectors, and its maximum power draw is rated at 375W. It will be interesting to see how it stacks up against AMD's Radeon R9 295X2, which costs half as much, at $1,500.
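Those specifications map straight onto the card's headline numbers. As a quick sanity check, here is a minimal sketch in C (the 2 ops/clock FMA convention and GK110's 1/3-rate FP64 are standard assumptions, not figures from NVIDIA's spec sheet):

```c
#include <stdio.h>

int main(void) {
    const double cores = 5760;   /* total CUDA cores across both GPUs */
    const double base  = 705e6;  /* base clock, Hz */
    const double boost = 876e6;  /* peak GPU Boost clock, Hz */

    /* FP32 rate = cores * clock * 2 ops/clock (fused multiply-add) */
    printf("FP32 @ base : %.2f TFLOPS\n", cores * base  * 2 / 1e12);
    printf("FP32 @ boost: %.2f TFLOPS\n", cores * boost * 2 / 1e12);

    /* GK110 runs double precision at 1/3 the FP32 rate when uncapped */
    printf("FP64 @ base : %.2f TFLOPS\n", cores * base * 2 / 3 / 1e12);

    /* 7.00 GHz effective GDDR5 on a 384-bit bus, per GPU */
    printf("Bandwidth   : %.0f GB/s per GPU\n", 7.00e9 * 384 / 8 / 1e9);

    /* power budget: two 8-pin plugs (150 W each) + PCIe slot (75 W) */
    printf("Power budget: %d W\n", 2 * 150 + 75);
    return 0;
}
```

That works out to roughly 8.1 TFLOPS single precision at base clock, about 2.7 TFLOPS double precision, 336 GB/s of memory bandwidth per GPU, and exactly the 375 W ceiling the connector configuration allows.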
Source: ComputerBase.de

105 Comments on New GTX TITAN-Z Launch Details Emerge

#26
radrok
Ferrum Master: Be more mature...

Nevertheless the design looks underpowered. Both the Titan "too many Zeroes" and the 295X2 "Celsius" are failures design-wise... they are utterly useless for the given price, R&D cost and everything else... it is just a check in the book, like the ones we had before...
Can't see any childish remark in my post; I think it's genuinely positive for a customer to ask for a decent power section on a graphics card of this caliber.

For 3K USD I expect NOTHING less than overkill.
MxPhenom 216: It's not entirely about the number of phases, but also the capacity each phase is rated for.

I expect that, having dropped one phase per GPU, the rest are rated a bit higher to compensate, but who knows.
I honestly think they won't be rated any higher than what Nvidia has been using on the reference 780/Titan/780 Ti; they are almost all the same.

I bet this card will blow when pushed against a 110%+ TDP limit, but hey, we shouldn't overclock our GPUs, right? :)
#27
Ferrum Master
radrok: For 3K USD I expect NOTHING less than overkill.
Well mate... it reminds me of this :D. And the second thing... it is just the way things work... They do it because they CAN.

#28
PLAfiller
radrok: Can't see any childish remark in my post; I think it's genuinely positive for a customer to ask for a decent power section on a graphics card of this caliber.
I do. I don't know if you have noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power-efficiency numbers in AMD's face. Nothing personal, just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc.). I think they know what they are doing with power. At least they have some pretty serious testimony for it. Of course, nobody is "bullet-proof" against error, but I personally trust these guys.
#29
radrok
lZKoce: I do. I don't know if you have noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power-efficiency numbers in AMD's face. Nothing personal, just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc.). I think they know what they are doing with power. At least they have some pretty serious testimony for it. Of course, nobody is "bullet-proof" against error, but I personally trust these guys.
Do you realize that we are talking about the power-delivery section, not power consumption? Those are two completely different things.
#30
HumanSmoke
lZKoce: I do. I don't know if you have noticed or pretend not to notice, but nVidia has been smacking some pretty impressive power-efficiency numbers in AMD's face. Nothing personal, just pick any Maxwell-based card (the 750 Ti, for example: 4 W idle, 5 W multi-monitor, etc.). I think they know what they are doing with power. At least they have some pretty serious testimony for it. Of course, nobody is "bullet-proof" against error, but I personally trust these guys.
Unfortunately this board (the Titan Z) isn't Maxwell... and people looking at the top of the performance hierarchy tend to be happy for efficiency to play second fiddle to outright performance.

The Titan Z seems to fall into the chasm between usability and outright performance. Nvidia obviously tried to squeeze as much into a conventional air-cooled card as possible, but it falls short against the competition. AMD have shown in the past that they have no qualms about ignoring the PCI-SIG (the HD 6990 and 7990), but it's unlikely that Nvidia expected AMD to put out the first 500-watt reference card, or the first water-cooled reference card for that matter. In this instance (the top of the model line) brute force trumps efficiency, and Nvidia will be pilloried for being too conservative even if they relaunch the card sans FP64 as a GTX 790. Having said that, I fully expect both cards to see the short, intermittent production runs and free-falling depreciation enjoyed by their dual-GPU predecessors.

The sad thing is that one camp has a $3000 card, and the other camp has a 500 watt card. I'm not entirely sure we're heading in the right direction. ;)
#31
Prima.Vera
Can I ask a stupid question? Isn't it just ten times better to buy two GTX 780 Ti cards for $1,500 and have 4 slots taken, instead of buying one card for $3,000 that takes 3 slots but has 20% LESS performance?!? I mean, seriously, what's the deal with this card? For professional use there are better cards for the same price. Is it only me, or does this card seem like an abomination!?
#32
radrok
To be fair, your question is not stupid at all; I feel the same about it.

Two Titan Blacks for DP make it obsolete; two 780 Tis for gaming make it obsolete.

It's just a hype/halo/whatever product.
#33
64K
Prima.Vera: Can I ask a stupid question? Isn't it just ten times better to buy two GTX 780 Ti cards for $1,500 and have 4 slots taken, instead of buying one card for $3,000 that takes 3 slots but has 20% LESS performance?!? I mean, seriously, what's the deal with this card? For professional use there are better cards for the same price. Is it only me, or does this card seem like an abomination!?
No, your question is not in any way stupid. Gamers are trying to figure out why Nvidia is calling this a GeForce card and aiming it at gamers. Though I have no experience with professional cards, I have to wonder: why wouldn't 2x Titan Black, for less money and more performance, be better? As a gamer, the Titan Z would never have come across my radar if Nvidia hadn't labeled it GeForce and aimed it at gamers.

www.geforce.com/whats-new/articles/announcing-the-geforce-gtx-titan-z

blogs.nvidia.com/blog/2014/03/25/titan-z/
#34
BiggieShady
Prima.Vera: I mean, seriously, what's the deal with this card? For professional use there are better cards for the same price. Is it only me, or does this card seem like an abomination!?
Having 2 GPUs on a single PCB is actually a good thing for compute professionals who need double-precision performance but want a cheaper alternative to an array of Tesla cards.
There is no requirement for SLI with compute work, so with PCIe risers one can build a massive GPU array while reducing the overall number of systems (fewer CPUs, motherboards and PSUs needed) on site.
This card is for companies that are building their own supercomputer.
As for additionally marketing it as a GeForce product because it runs games beautifully: Nvidia would be crazy not to. Promote synergy like a boss and all that.
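The no-SLI point is easy to see from the CUDA side: each GPU on a Titan Z enumerates as a separate device, and compute work is spread by selecting devices explicitly. A minimal sketch against the CUDA runtime API (error handling omitted; the loop body is a placeholder, not code from any product mentioned here):

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    cudaGetDeviceCount(&n);           /* a Titan Z shows up as two devices */
    printf("%d CUDA device(s)\n", n);

    for (int i = 0; i < n; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %.1f GB\n",
               i, prop.name, prop.totalGlobalMem / 1073741824.0);

        cudaSetDevice(i);             /* later allocations/kernels hit GPU i */
        /* ... launch this device's share of the workload here ... */
    }
    return 0;
}
```

No SLI bridge or driver profile is involved, which is why risers and dense chassis work fine for compute where they wouldn't for gaming.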
#35
Xzibit
BiggieShady: Having 2 GPUs on a single PCB is actually a good thing for compute professionals who need double-precision performance but want a cheaper alternative to an array of Tesla cards.
There is no requirement for SLI with compute work, so with PCIe risers one can build a massive GPU array while reducing the overall number of systems (fewer CPUs, motherboards and PSUs needed) on site.
This card is for companies that are building their own supercomputer.
As for additionally marketing it as a GeForce product because it runs games beautifully: Nvidia would be crazy not to. Promote synergy like a boss and all that.
Not a very smart company if they're building a supercomputer from a gaming product with no ECC and half the memory per GPU. Kind of takes the super out of it altogether.
#36
BiggieShady
Xzibit: Not a very smart company if they're building a supercomputer from a gaming product with no ECC and half the memory per GPU.
Well, you do get what you pay for... a single Tesla K40 is 5.5K USD... and that's one Kepler GPU.
#37
DrunkMonk74
Anyone think that when nVidia's partners get their hands on this card, they'll be able to squeeze some more juice out of it? Maybe along the lines of the ACX cooler that you find on EVGA's version of the 780 Ti?

If you look at EVGA's K|NGP|N edition of the 780 Ti, its base clock is 1072 MHz!! Rumour is that EVGA is also going to bring out a 6 GB version of that card. The current 3 GB version sells for 859.99 USD from EVGA themselves. A 6 GB version might be closer to 1000 USD. Buy two of those in SLI and you'll have all the power you need for quite some time, and you'll probably save yourself 1000 USD along the way.
#38
HumanSmoke
DrunkMonk74: Anyone think that when nVidia's partners get their hands on this card, they'll be able to squeeze some more juice out of it? Maybe along the lines of the ACX cooler that you find on EVGA's version of the 780 Ti?
If it gets any kind of treatment then it should be a HydroCopper Classified solution. A custom air-cooled card along the lines of the KPE would certainly be possible, but would the sales and PR justify the development expenditure?
Judging by the initial testing, the card has some headroom. A 1050 MHz boost on an unheated board isn't too bad, so either a waterblock or a reworked air cooler with larger fans, such as the ACX, would be a better bet for maintaining that kind of level without having to ramp the fan speed to max.
Xzibit: Not a very smart company if they're building a supercomputer from a gaming product with no ECC and half the memory per GPU.
ECC for GDDR5 isn't really needed unless the workload is of critical importance. GDDR5 already has EDC (Error Detection Code) built in, which detects errors across the bus. The only errors it can't catch are memory IC faults and GPU memory-controller errors, both of which (along with GPU runtime validation) are stringently binned for in pro cards. It's why a K40 (as BiggieShady mentioned) costs 4-5 times the price of a Titan Black, and a W9100 costs 6 times as much as a 290X.
#39
Xzibit
HumanSmoke: ECC for GDDR5 isn't really needed unless the workload is of critical importance. GDDR5 already has EDC (Error Detection Code) built in, which detects errors across the bus. The only errors it can't catch are memory IC faults and GPU memory-controller errors, both of which (along with GPU runtime validation) are stringently binned for in pro cards. It's why a K40 (as BiggieShady mentioned) costs 4-5 times the price of a Titan Black, and a W9100 costs 6 times as much as a 290X.
I believe this is what he said:
BiggieShady: Having 2 GPUs on a single PCB is actually a good thing for compute professionals who need double-precision performance but want a cheaper alternative to an array of Tesla cards.
There is no requirement for SLI with compute work, so with PCIe risers one can build a massive GPU array while reducing the overall number of systems (fewer CPUs, motherboards and PSUs needed) on site.
This card is for companies that are building their own supercomputer.
As for additionally marketing it as a GeForce product because it runs games beautifully: Nvidia would be crazy not to. Promote synergy like a boss and all that.
Nvidia Titan Z 12 GB
6 GB per GPU
single precision = 8.0 Tflops
2.6 Tflops per slot
double precision = 2.6 Tflops
0.86 Tflops per slot

375 W TDP
$2,999

I'll save you some money on Nvidia at the same site

Nvidia Tesla K40 12 GB
single precision = 4.29 Tflops
2.14 Tflops per slot
double precision = 1.43 Tflops
0.71 Tflops per slot

235 W TDP
$4,245

Nvidia Quadro K6000 12 GB
single precision = 5.2 Tflops
2.6 Tflops per slot
double precision = 1.7 Tflops
0.85 Tflops per slot

225 W TDP
$4,235

AMD FirePro W9100 16 GB
single precision = 5.24 Tflops
2.62 Tflops per slot
double precision = 2.62 Tflops
1.31 Tflops per slot

275 W TDP
$3,499

You don't build supercomputers with a card from a gaming stack. Unless uptime, errors and stability aren't a concern.
#40
HumanSmoke
Xzibit: I believe this is what he said:
I was answering your post, not anyone else's. The fact that I quoted your post should have been a dead giveaway. Why you answered a post concerning double precision with a supposed need for ECC, I have no idea - they aren't inextricably linked.
As for whatever vague point you're making, there are plenty of instances where FP64 could be useful to a prosumer (mixed single- and double-precision workloads such as 3D modelling).
Xzibit: Nvidia Titan Z 12 GB
single precision = 8.0 Tflops
double precision = 2.6 Tflops
$2,999

I'll save you some money on Nvidia at the same site

Nvidia Tesla K40 12 GB
single precision = 4.29 Tflops
double precision = 1.43 Tflops
$4,245

Nvidia Quadro K6000 12 GB
single precision = 5.2 Tflops
double precision = 1.7 Tflops
$4,235

AMD FirePro W9100 16 GB
single precision = 5.24 Tflops
double precision = 2.62 Tflops
$3,499
So, judging by the bolding and price inclusion, you're saying double precision:
Titan Z...0.87 GFlop/$
W9100..0.75 GFlop/$
K6000...0.45 GFlop/$ (the card is available for $3800)
K40.......0.34 GFlop/$

Not sure how the Tesla, Quadro, or FirePro is supposed to be "saving some money".
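For anyone who wants to re-run that arithmetic, a trivial sketch in C (the FP64 GFLOPS and prices are the ones quoted above, with the K6000 at its $3800 street price):

```c
#include <stdio.h>

int main(void) {
    const char  *name[]   = { "Titan Z", "W9100", "K6000", "K40" };
    const double gflops[] = { 2600, 2620, 1700, 1430 };  /* FP64 GFLOPS */
    const double price[]  = { 2999, 3499, 3800, 4245 };  /* USD */

    /* double-precision throughput per dollar, as listed above */
    for (int i = 0; i < 4; ++i)
        printf("%-8s %.2f GFLOPS/$\n", name[i], gflops[i] / price[i]);
    return 0;
}
```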

Of course, it's still an apples vs oranges scenario. Professional drivers, software (Nvidia's OptiX, SceniX, CompleX etc.), support, binning, and a larger frame buffer (the Titan Z isn't a 12GB card, it's a 2 x 6GB card) should all add value to the pro boards regardless of vendor.

A further point to note is that Nvidia's FLOPS numbers are calculated at the base clock (which is correct for double precision, since boost is disabled there), not at boost - either the guaranteed minimum or the maximum sustained - even for single precision. The FLOPS for AMD's cards are calculated at maximum boost, whether it is attainable/sustainable or not.
Case in point: the GTX 780 is quoted as having a 3977 GFlops FP32 rate (863 MHz base clock * 2304 cores * 2 ops/clock). But GPGPU apps can be as intensive as games. The GTX 780 I have here at the moment has a rated boost of 967 MHz - by the usual calculation that would be 967 * 2304 * 2 = 4456 GFlops. In reality the card sustains a boost of 1085 MHz at stock settings (no extra voltage, no OC above factory, no change in stock fan profile), so the actual FP32 rate would be 1085 * 2304 * 2 = 5000 GFlops.
A quick Heaven run shows how meaningless the base clock (and its associated numbers) is, and why it generally isn't worth the time to record.
Xzibit: You don't build supercomputers with a card from a gaming stack.
Jesus, how many times are you going to edit a post?

It probably depends upon your definition of a supercomputer. If it's an HPC cluster, then no, you wouldn't... but that's a very narrow association, used by people with little technical knowledge of the range of compute solutions.
Other examples:
The Fastra II is a desktop supercomputer designed for tomography.
Rackmount GPU servers also generally come under the same heading, since big iron generally tends to be made up of the same hardware... just add more racks to a cabinet, and more cabinets to a cluster, etc.
I'd also note that they aren't "one-offs" as you opined once before, as explained here: "We build and ship at least a few like this every month".
Go nuts, configure away.
#41
Suka
matar: I am an nVidia fan, but not this time. $3000 for 3 slots, so basically $1000 per slot.
How much performance per slot :)?
#42
radrok
The GTX Titan also shines at CUDA; untapped CUDA power is nothing to overlook.

I've been abusing my two graphics cards for rendering, work and fun, especially now that you can purchase multiple render platforms that support CUDA rendering.

I personally use Octane CUDA render plugins (3ds Max and Poser) for my own enjoyment when I'm free, and V-Ray CUDA for work.

You don't need a Tesla/Quadro card for CUDA rendering :)

I would still purchase two separate GTX Titans (not Black, because the difference is minimal) over this one. Save one slot for what?
#43
BiggieShady
Xzibit: You don't build supercomputers with a card from a gaming stack. Unless uptime, errors and stability aren't a concern.
Look at it from the standpoint of a testing environment versus a production environment. If you own a software company and want to offer a solution for CUDA-based supercomputers, it is economically more feasible to do development on Titans and deploy on the customer's Tesla array.
#44
sweet
HumanSmoke: I was answering your post, not anyone else's. The fact that I quoted your post should have been a dead giveaway. Why you answered a post concerning double precision with a supposed need for ECC, I have no idea - they aren't inextricably linked.
As for whatever vague point you're making, there are plenty of instances where FP64 could be useful to a prosumer (mixed single- and double-precision workloads such as 3D modelling).

So, judging by the bolding and price inclusion, you're saying double precision:
Titan Z...0.87 GFlop/$
W9100..0.75 GFlop/$
K6000...0.45 GFlop/$ (the card is available for $3800)
K40.......0.34 GFlop/$
Stop defending the Titan's price with DP. The 7970 is $200 on eBay nowadays, and it has 947 GFlops of double precision. Sooooo....
7970... 4.735 GFlop/$
It makes all your numbers look like a rip-off. Not to mention the 7970 can easily OC beyond its default 925 MHz.

Few people bought the first Titan for their work with CUDA. The rest simply bought it because it was the best of its time. They were f*cked really hard by nVidia with the release of the 780 and 780 Ti. Hopefully a smart gamer can learn from their demise and stay away from these stupidly overpriced cards.
#45
HumanSmoke
sweet: Stop defending the Titan's price with DP. The 7970 is $200 on eBay nowadays, and it has 947 GFlops of double precision. Sooooo....
7970... 4.735 GFlop/$
It makes all your numbers look like a rip-off. Not to mention the 7970 can easily OC beyond its default 925 MHz.
Newsflash, genius: I used the models and numbers provided by Xzibit in his comment to me... Do I care that you can get a 7970 on eBay for $200? Not really, since it isn't relevant - it wasn't part of the original data set provided by Xzibit; if he'd included a bunch of other cards I'd have extrapolated their numbers also. All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (a loss of 64% of its initial value in 28 months). Delve into the secondhand market and you can find comparable deals everywhere, since it seems to come as a shock to you that new card prices don't compare particularly well with pre-owned. How about a Quadro at 8.68 GFlops/$ with pro driver support thrown in? Or an HD 5970 at 16.87 GFlops/$, or a $20 desktop 8800 GTS that works out at 31.2 GFlops/$?
sweet: Few people bought the first Titan for their work with CUDA.
You got a link for that? Maybe some sales numbers?... even some anecdotal evidence would suffice... really.
#46
sweet
HumanSmoke: Newsflash, genius: I used the models and numbers provided by Xzibit in his comment to me... Do I care that you can get a 7970 on eBay for $200? Not really, since it isn't relevant - it wasn't part of the original data set provided by Xzibit; if he'd included a bunch of other cards I'd have extrapolated their numbers also. All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (a loss of 64% of its initial value in 28 months). Delve into the secondhand market and you can find comparable deals everywhere, since it seems to come as a shock to you that new card prices don't compare particularly well with pre-owned. How about a Quadro at 8.68 GFlops/$ with pro driver support thrown in? Or an HD 5970 at 16.87 GFlops/$, or a $20 desktop 8800 GTS that works out at 31.2 GFlops/$?

You got a link for that? Maybe some sales numbers?... even some anecdotal evidence would suffice... really.
I can remind you that the 7970's release price was $549. But that's not my main point here.

Most people bought the Titan for GAMES, and nVidia's official drivers for this card have always been optimized for GAMES.

And that price for a gaming card is stupidly high.

Sales numbers unfortunately can't determine the buying purpose. However, you can go to any tech forum and see how those Titan buyers bragged about their FPS, just like AMD buyers have recently been talking about the hash rate of their cards.
#47
BiggieShady
sweet: Most people bought the Titan for GAMES, and nVidia's official drivers for this card have always been optimized for GAMES.
The graphics driver should be optimized for all applications (including games), but this is about the CUDA libraries and the CUDA driver (they are part of the standard GeForce driver package but are also independent). Also, the whole point is that the TITAN is not marketed only as a gaming card: developer.nvidia.com/ultimate-cuda-development-gpu
#48
Xzibit
BiggieShady: The graphics driver should be optimized for all applications (including games), but this is about the CUDA libraries and the CUDA driver (they are part of the standard GeForce driver package but are also independent). Also, the whole point is that the TITAN is not marketed only as a gaming card: developer.nvidia.com/ultimate-cuda-development-gpu
Nvidia just let us know that the Titan supports Dynamic Parallelism and Hyper-Q for CUDA streams, and does not support ECC, the RDMA feature of GPUDirect, or Hyper-Q for MPI connections.
The Titan brand is there to suck you into the CUDA ecosystem. Once they've got you there, it's not like you can buy Intel or AMD hardware to use it.
#49
radrok
^ Kinda what my point has always been.

Crosses fingers for Maxwell's Titan to have 12 GB of VRAM. Fully GPU-rendered scenes can fill the 6 GB framebuffer easily, as you have to load all the textures into GPU memory.
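To put rough numbers on that (an illustration only; the 4096x4096 RGBA8 texture size is my assumption, not anything radrok specified):

```c
#include <stdio.h>

int main(void) {
    /* one uncompressed 4096 x 4096 RGBA8 texture, no mipmaps */
    const double tex  = 4096.0 * 4096.0 * 4.0;       /* bytes (64 MiB) */
    const double vram = 6.0 * 1024 * 1024 * 1024;    /* 6 GiB framebuffer */

    printf("one texture : %.0f MiB\n", tex / (1024.0 * 1024.0));
    printf("fit in 6 GiB: ~%.0f textures\n", vram / tex);
    /* mipmap chains add roughly a third on top, and geometry,
       acceleration structures and render targets need room too */
    return 0;
}
```

About 96 such textures exhaust 6 GiB before anything else is loaded, so a doubled framebuffer is not an extravagant ask for GPU rendering.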
#50
OneCool
I smell smoke. Thick green smoke.