Thursday, May 1st 2014

New GTX TITAN-Z Launch Details Emerge

NVIDIA's GeForce GTX TITAN-Z missed its earlier launch date of 29th April, 2014, which several retailers had confirmed to the press, forcing some AIC partners to make do with paper launches of cards bearing their brand. It turns out the delay will be just over a week: the GeForce GTX TITAN-Z is now expected to be available on the 8th of May, 2014. That is when you'll be able to buy the US $3,000 graphics card off the shelf.

A dual-GPU graphics card based on a pair of 28 nm GK110 GPUs, the GTX TITAN-Z features a total of 5,760 CUDA cores (2,880 per GPU), 480 TMUs (240 per GPU), 96 ROPs (48 per GPU), and a total of 12 GB of GDDR5 memory, spread across two 384-bit wide memory interfaces. Although each of the two GPUs is configured identically to a GTX TITAN Black, the card runs at lower clock speeds. The core is clocked at 705 MHz (versus 889 MHz on the GTX TITAN Black), with GPU Boost frequencies of up to 876 MHz (versus up to 980 MHz on the GTX TITAN Black), while the memory stays at 7.00 GHz. The card draws power from a pair of 8-pin PCIe power connectors, and its maximum power draw is rated at 375 W. It will be interesting to see how it stacks up against AMD's Radeon R9 295X2, which costs half as much, at $1,500.
Source: ComputerBase.de
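
To sanity-check those spec numbers, here is a back-of-envelope sketch (my own illustration, not from the article, using the standard two-FLOPs-per-core-per-clock FMA convention; the 1/3-rate FP64 figure applies to GK110 Titans with full-speed double precision enabled):

```cuda
// Theoretical throughput implied by the quoted specs. Plain host code;
// compiles with nvcc (or any C++ compiler).
#include <cstdio>

int main() {
    const int    cuda_cores = 5760;   // 2 x 2,880 (one GK110 each)
    const double base_ghz   = 0.705;
    const double boost_ghz  = 0.876;

    // An FMA counts as 2 FLOPs per core per clock on GK110.
    double sp_base  = cuda_cores * base_ghz  * 2.0 / 1000.0;  // TFLOPS
    double sp_boost = cuda_cores * boost_ghz * 2.0 / 1000.0;
    double dp_base  = sp_base / 3.0;  // GK110 Titans: FP64 at 1/3 the SP rate

    printf("FP32: %.2f TFLOPS base, %.2f TFLOPS boost\n", sp_base, sp_boost);
    printf("FP64: %.2f TFLOPS base\n", dp_base);
    return 0;
}
```

That works out to roughly 8.1 TFLOPS single precision at the base clock, which is where NVIDIA's round "8 TFLOPS" marketing figure comes from, and roughly 2.7 TFLOPS double precision.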

105 Comments on New GTX TITAN-Z Launch Details Emerge

#51
HumanSmoke
sweet: I can remind you that the 7970's release price was $549.
Why would I need reminding when you're the one replying to my post where I made a calculation based on the $549 price???? :confused:
HumanSmoke: All it proves is that the 7970 is a good secondhand deal (if it hasn't been half fried by a miner) and suffers from horrendous depreciation (a loss of 64% of its initial value in 28 months).
Based on your $200 current estimate...
sweet: The 7970 is $200 on eBay nowadays.
$200 / $549 = 0.36, i.e. a 64% decrease

/notrocketscience
#52
sweet
HumanSmoke: Why would I need reminding when you're the one replying to my post where I made a calculation based on the $549 price???? :confused:

Based on your $200 current estimate...


$200 / $549 = 0.36, i.e. a 64% decrease

/notrocketscience
Mate, you totally missed my point here. I just quoted the release price of the 7970 to show that even at first release, the 7970 was still a better choice for DP than any card of the Titan branch. Therefore, when considered as compute cards, Titan cards are still not a valid option.

So please find a better excuse to defend the stupid price of those GAMING cards.

As mentioned above, only people working with CUDA could find benefit in these cards. For others, just learn the lesson that the first Titan buyers took and stay away from these shiny money suckers.
#53
HumanSmoke
sweet: Mate, you totally missed my point here. I just quoted the release price of the 7970 to show that even at first release, the 7970 was still a better choice for DP than any card of the Titan branch.
No one is denying that the 7970 is a good deal at $200 (unless it's been thrashed by mining), so you're basically arguing with yourself.

You also seem to be another person hung up on numbers. AMD/ATI cards have had the edge over Nvidia boards in double precision (and single, for that matter) for some time, yet it still isn't reflected in the wider community. Why? Because AMD's architectures are totally reliant upon OpenCL for the most part, and OpenCL support is spotty at best.
Say what you will about CUDA, but the software ecosystem is in place and it works.
Blender, from their own FAQ:
Currently NVidia with CUDA is rendering faster. There is no fundamental reason why this should be so—we don't use any CUDA-specific features—but the compiler appears to be more mature, and can better support big kernels. OpenCL support is still being worked on and has not been optimized as much, because we haven't had the full kernel working yet.
AFAIK, OpenCL (working) support isn't overly prevalent. Even Lux, which is touted as a poster child for OpenCL, has ongoing issues, and where both CUDA and OpenCL are supported, it is the former that is generally more mature. The Blender sentiment isn't a lone voice (pdf):
In our tests, CUDA performed better when transferring data to and from the GPU. We did not see any considerable change in OpenCL's relative data transfer performance as more data were transferred. CUDA's kernel execution was also consistently faster than OpenCL's, despite the two implementations running nearly identical code.
CUDA seems to be a better choice for applications where achieving as high a performance as possible is important. Otherwise the choice between CUDA and OpenCL can be made by considering factors such as prior familiarity with either system, or available development tools for the target GPU hardware.
So, basically, hardware performs as well as the coding allows. AMD is tied to OpenCL, and OpenCL is tied to third parties for its advancement, which makes it very much a case of YMMV. What clouds the issue further is that mainstream (gaming) sites use OpenCL apps only to compare AMD and Nvidia cards, which distorts the overall picture, since using the CUDA path for Nvidia hardware invariably means a better result. Note this render test using Premiere, where the GTX 670 (using CUDA) and the R9 290X (using OpenCL) are basically equal in time to render. On raw numbers the 290X should have it all over the GTX 670; after all, the AMD card has 5.8 TFlops of processing power to the 670's 2.46 TFlops, well over double!
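
For readers who haven't touched either API, a minimal single-source CUDA program looks like this (a generic sketch of my own, not code from any of the applications discussed). Kernel and host code share one .cu file and a launch is a single line; the equivalent OpenCL host code needs explicit platform/device/context/queue/program-build boilerplate before the first kernel runs, which is part of why developers tend to describe the CUDA toolchain as the more mature path:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y, the classic one-kernel example.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the demo short (CUDA 6+)
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // one-line kernel launch
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```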

So the 7970 might represent great value with the right application, and it certainly is affordable at $200. On the other side of the ledger, the GTX 580, also because of its decreasing price (and its ability to run CUDA apps), makes a compelling buy for people who want to use CUDA-coded render/CAD apps. Is the Titan the be-all and end-all? Of course not, and I don't see anyone saying it is. What I see is people comparing two current top-tier GPUs because... well, because they are the flavour of the week.

Numbers on the page don't always translate that well to real-life scenarios. Harking back to the point regarding double precision: its use is governed by the same coding environment. I can't say I've seen many FP64-only benchmarks outside of HPC; most consumer apps tend to involve both single and double precision calculation rather than FP64 solely, and those that do find their way into benchmarks are, again, devoid of the CUDA path. HPC is the environment for widespread use of double precision, and the ratio of Nvidia to AMD GPUs there tells a pretty clear story.
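
For what it's worth, an FP64-only micro-benchmark of the kind rarely seen outside HPC is easy to write (my own sketch, assuming a register-resident, compute-bound FMA loop; on a GeForce Kepler card the double run should take roughly 24 times the float run, since FP64 units run at 1/24 the SP rate there, while a Titan with full-rate DP enabled should land closer to 3x):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A chain of dependent FMAs: ALU-bound, so it exposes the FP64:FP32 rate.
template <typename T>
__global__ void fma_loop(int iters, T* out) {
    T a = (T)threadIdx.x, b = (T)1.0000001, c = (T)1e-7;
    for (int i = 0; i < iters; ++i)
        a = a * b + c;                                   // one FMA per step
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;      // keep the result live
}

template <typename T>
float run_ms(int iters) {
    T* out;
    cudaMalloc(&out, 1024 * 256 * sizeof(T));
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    fma_loop<T><<<1024, 256>>>(iters, out);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaFree(out);
    return ms;
}

int main() {
    printf("float : %.2f ms\n", run_ms<float>(100000));
    printf("double: %.2f ms\n", run_ms<double>(100000));
    return 0;
}
```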

Now, you realise that these two statements you made are contradictory:
sweet: Therefore, when considered as compute cards, Titan cards are still not a valid option.
sweet: As mentioned above, only people working with CUDA could find benefit in these cards.
The majority of compute applications Nvidia cards are used for ARE CUDA code. These machines, this machine, this machine, these machines are all geared for content creation. You say it's a waste of money; these people beg to differ. Different requirements equal different usage patterns.
#54
radrok
^
If I may add, we are almost forced to use CUDA devices; as you posted, their development is more mature and offers way more than what OpenCL-based solutions currently offer in the market.

Nvidia has done its homework with CUDA and is reaping what it planted in the past.

Sadly CUDA is proprietary, but the fact that it is proprietary allowed it to thrive compared to the hot mess OpenCL is nowadays, at least for what I do.

I also challenge you to use Blender with an AMD-based graphics solution; you'll run away with nightmares. There are so many cursor bugs I can't even start to enumerate them.
#55
sweet
HumanSmoke: No one is denying that the 7970 is a good deal at $200 (unless it's been thrashed by mining), so you're basically arguing with yourself.

You also seem to be another person hung up on numbers. AMD/ATI cards have had the edge over Nvidia boards in double precision (and single, for that matter) for some time, yet it still isn't reflected in the wider community. Why? Because AMD's architectures are totally reliant upon OpenCL for the most part, and OpenCL support is spotty at best.
Say what you will about CUDA, but the software ecosystem is in place and it works.
Blender, from their own FAQ:

AFAIK, OpenCL (working) support isn't overly prevalent. Even Lux, which is touted as a poster child for OpenCL, has ongoing issues, and where both CUDA and OpenCL are supported, it is the former that is generally more mature. The Blender sentiment isn't a lone voice (pdf):


So, basically, hardware performs as well as the coding allows. AMD is tied to OpenCL, and OpenCL is tied to third parties for its advancement, which makes it very much a case of YMMV. What clouds the issue further is that mainstream (gaming) sites use OpenCL apps only to compare AMD and Nvidia cards, which distorts the overall picture, since using the CUDA path for Nvidia hardware invariably means a better result. Note this render test using Premiere, where the GTX 670 (using CUDA) and the R9 290X (using OpenCL) are basically equal in time to render. On raw numbers the 290X should have it all over the GTX 670; after all, the AMD card has 5.8 TFlops of processing power to the 670's 2.46 TFlops, well over double!

So the 7970 might represent great value with the right application, and it certainly is affordable at $200. On the other side of the ledger, the GTX 580, also because of its decreasing price (and its ability to run CUDA apps), makes a compelling buy for people who want to use CUDA-coded render/CAD apps. Is the Titan the be-all and end-all? Of course not, and I don't see anyone saying it is. What I see is people comparing two current top-tier GPUs because... well, because they are the flavour of the week.

Numbers on the page don't always translate that well to real-life scenarios. Harking back to the point regarding double precision: its use is governed by the same coding environment. I can't say I've seen many FP64-only benchmarks outside of HPC; most consumer apps tend to involve both single and double precision calculation rather than FP64 solely, and those that do find their way into benchmarks are, again, devoid of the CUDA path. HPC is the environment for widespread use of double precision, and the ratio of Nvidia to AMD GPUs there tells a pretty clear story.

Now, you realise that these two statements you made are contradictory:


The majority of compute applications Nvidia cards are used for ARE CUDA code. These machines, this machine, this machine, these machines are all geared for content creation. You say it's a waste of money; these people beg to differ. Different requirements equal different usage patterns.
CUDA applications dominate the content creation field, but "compute" is not only content creation, mate. GPGPU covers a lot of fields, such as GPU mining, folding, password cracking, and so on.

Leaving that aside, my statement is still valid: "Only people working with CUDA could find benefit in these cards."

Even considering only the Titan branch as compute cards, the Titan Z is still stupidly overpriced. A couple of Titan Blacks with blower coolers are clearly better than this 3-slot axial-fan card in terms of performance as well as compatibility with server racks. And they are even cheaper!!

Titan cards make little sense already, and the Titan Z makes absolutely no sense at all.
#56
radrok
Why do you insist on saying Titan makes no sense? Or little sense? I am curious :)
#57
GhostRyder
sweet: CUDA applications dominate the content creation field, but "compute" is not only content creation, mate. GPGPU covers a lot of fields, such as GPU mining, folding, password cracking, and so on.

Leaving that aside, my statement is still valid: "Only people working with CUDA could find benefit in these cards."

Even considering only the Titan branch as compute cards, the Titan Z is still stupidly overpriced. A couple of Titan Blacks with blower coolers are clearly better than this 3-slot axial-fan card in terms of performance as well as compatibility with server racks. And they are even cheaper!!

Titan cards make little sense already, and the Titan Z makes absolutely no sense at all.
He tried to give the same argument on another site under a different name. I'm glad to see someone else who understands why the Titan-Z will not work in a rack-mount environment because of its two-direction axial fan (something the Titan and Titan Black don't have; they use the usual blower). You're 100% correct: the card has no real purpose right now, because on the gaming front there are better, cheaper options, on the professional front there are better, cheaper options, and the lack of professional drivers, build quality, and support for 24/7 work really holds it back. At three times the price of a standard Titan Black, and being a 3-slot device, it's hard for it to be worthwhile.

I'm guessing they are either going to bump the boost clocks or lower the price to compensate. Right now I don't see it as a fruitful investment, since anyone who really needs CUDA dev is going to buy three Titan Blacks for the same price.
Hitman_Actual: The dumbest, most nonsensical move by Nvidia I've ever seen...

This card and its price tag make ZERO SENSE!

What people should really understand is that the "12GB" of VRAM is really 6GB;

they're doing what they did with the 690, advertising the total while each GPU gets only its own half.

So in the Z's case that's 6GB to each GPU, which means you really only get 6GB of usable VRAM.

I'm a staunch Nvidia/Intel user, but it saddens me to see such stupidity.
You're completely right, though in the small CUDA render area that 12GB might shine. However, as stated before, being a tri-slot, non-professionally designed card, together with the price, makes it a near-impossible sell. Its gaming aspects are abysmal and its professional aspects are limited, which brings many problems to the whole idea of the card.
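
The "6GB per GPU" point is easy to verify with the standard CUDA device-query calls (a small sketch of mine, not from the thread; a dual-GPU board like the Titan-Z enumerates as two separate devices, each reporting its own memory pool, and nothing is summed or shared):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);  // a Titan-Z shows up as two devices
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, %.1f GB\n", d, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```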
radrok: Why do you insist on saying Titan makes no sense? Or little sense? I am curious :)
It mostly comes down to branding and the whole idea/design of the Titan. The original Titan cost $1k, but it was the fastest single GPU, carried a hefty 6GB of RAM, and had some CUDA dev applications that made use of it. But even back then, the price, the fact that it's not rated for 24/7 use, the missing professional support, and being beaten pretty easily in gaming by its younger brothers made it a hard sell. Some people took advantage of it for the 6GB of RAM, but in reality the card comes down to the problem of where it actually fits. Its branding says gaming, Nvidia advertises it as a professional card, but its lack of support says it's not. It misses on all fronts, which is why I, along with many (if not most), find it a hard sell.

Titan makes sense on a small front, but it's a very limited front.
#58
HumanSmoke
GhostRyder: I'm glad to see someone else who understands why the Titan-Z will not work in a rack-mount environment because of its two-direction axial fan (something the Titan and Titan Black don't have; they use the usual blower).
Weirdly enough, the guy you're quoting mentioned nothing about rackmount. Pretty lame attempt at histrionics on your part, as per usual. I've never met any other poster who tries so hard to convince others of his immaturity.
BTW and OT: the GTX 690 also has the same fan/cooler arrangement; maybe you should take time out from composing poorly spelled and punctuated posts to write some poorly spelled and punctuated emails to the GPU server builders. They clearly don't have your technical expertise, since they seem happy to sell and warranty the GTX 690 in a 4U form factor:
#59
GhostRyder
HumanSmoke: Weirdly enough, the guy you're quoting mentioned nothing about rackmount. Pretty lame attempt at histrionics on your part, as per usual. I've never met any other poster who tries so hard to convince others of his immaturity.
BTW and OT: the GTX 690 also has the same fan/cooler arrangement; maybe you should take time out from composing poorly spelled and punctuated posts to write some poorly spelled and punctuated emails to the GPU server builders. They clearly don't have your technical expertise, since they seem happy to sell and warranty the GTX 690 in a 4U form factor:
First of all, apparently you can't read at all, and you have just made another lame excuse to try to look smart.
sweet: Even considering only the Titan branch as compute cards, the Titan Z is still stupidly overpriced. A couple of Titan Blacks with blower coolers are clearly better than this 3-slot axial-fan card in terms of performance as well as compatibility with server racks. And they are even cheaper!!
Just because a site will sell it to you does not mean it works well; that's something you yourself brought up in the past with HD 7990s in quad setups.

I have never seen someone try so hard to convince others of experience in fields he's never even touched in real life. Googling something and actually working on something are two totally different things. I like your name better on this site because it sums you up pretty well: just trying to blow smoke in people's faces.
#60
Xzibit
HumanSmoke: Jesus, how many times are you going to edit a post?

It probably depends upon your definition of a supercomputer. If it's an HPC cluster, then no, you wouldn't... but that's a very narrow association, used by people with little technical knowledge of the range of compute solutions.
Other examples:
The Fastra II is a desktop supercomputer designed for tomography.
Rackmount GPU servers also generally come under the same heading, since big iron generally tends to be made up of the same hardware... just add more racks to a cabinet, and more cabinets to a cluster, etc.
I'd also note that they aren't "one-offs" as you opined once before, as explained here: "We build and ship at least a few like this every month."
Go nuts, configure away.
The edit button is there. Therefore I use it. :p

K.I.S.S.

750W, 1500W & Max

Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375W
750W = 1 card / 2.6 TFlops
1500W = 3 cards / 7.8 TFlops
Max = 5 cards / 13 TFlops

Nvidia Quadro K6000 12 GB / 1.7 TFlops / 2-slot / 225W
750W = 3 cards / 5.1 TFlops
1500W = 6 cards / 10.2 TFlops
Max = 8 cards / 13.6 TFlops

AMD FirePro W9100 16 GB / 2.62 TFlops / 2-slot / 275W
750W = 2 cards / 5.24 TFlops
1500W = 5 cards / 13.1 TFlops
Max = 8 cards / 20.96 TFlops
#61
cadaveca
My name is Dave
GhostRyder: Googling something and actually working on something are two totally different things.
You're kidding, right? like... man... Google FTW!!!


Not really though, I just like that bit of your post. I might have to put it in my sig, if you don't mind. More people need to read that. :p
#62
GhostRyder
cadaveca: You're kidding, right? like... man... Google FTW!!!


Not really though, I just like that bit of your post. I might have to put it in my sig, if you don't mind. More people need to read that. :p
Well you can't believe everything you read on the internet :P
#63
cadaveca
My name is Dave
GhostRyder: Well you can't believe everything you read on the internet :p
That was a big part of why I wanted to do hardware reviews in the first place, and still do, although it's really something that costs me money rather than makes me money.
#64
GhostRyder
cadaveca: That was a big part of why I wanted to do hardware reviews in the first place, and still do, although it's really something that costs me money rather than makes me money.
Yeah, I know what you mean; I've always wanted to do reviews as well but never really bitten the bullet. Part of the problem for me is being on call 12 hours a day, 5 days a week, most of the time to fix servers. You can use it as part of your sig, I don't mind :p
#65
Xzibit
HumanSmoke: Yeah? Glad I don't listen to your system build advice.
Your advice isn't that great, even if you're just advocating for Nvidia & CUDA. The Titan Z makes no sense other than to feed someone's lack of e-Peen. I guess it's easier for some when it's not their money.

Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375W / $2,999
750W = 1 card / 2.6 TFlops / $2,999
1500W = 3 cards / 7.8 TFlops / $8,997
Max = 5 cards / 13 TFlops / $14,995

Nvidia Titan Black 6 GB / 1.7 TFlops / 2-slot / 250W / $1,299
750W = 2 cards / 3.4 TFlops / $2,598
1500W = 5 cards / 8.5 TFlops / $6,495
Max = 8 cards / 13.6 TFlops / $10,392
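
Xzibit's tallies above are reproducible with simple floor division. The sketch below does exactly that; note that the ~75 W allowance for the rest of the system and the 16-slot "Max" ceiling are my assumptions, reverse-engineered to match his counts, not numbers he stated:

```cuda
#include <cstdio>

struct Card { const char* name; double dp_tflops; int tdp_w, slots, price_usd; };

int main() {
    // Specs as quoted in the lists above; price 0 = not quoted.
    const Card cards[] = {
        {"Titan Z",        2.60, 375, 3, 2999},
        {"Titan Black",    1.70, 250, 2, 1299},
        {"Quadro K6000",   1.70, 225, 2, 0},
        {"FirePro W9100",  2.62, 275, 2, 0},
    };
    const int budgets[]  = {750, 1500};
    const int overhead_w = 75;   // assumed CPU/board/drive draw (my guess)
    const int max_slots  = 16;   // assumed chassis slot ceiling behind "Max"

    for (const Card& c : cards) {
        for (int psu : budgets) {
            int n = (psu - overhead_w) / c.tdp_w;   // cards the PSU can feed
            printf("%-14s %5dW: %d card(s) = %5.2f TFlops", c.name, psu, n,
                   n * c.dp_tflops);
            if (c.price_usd) printf(" / $%d", n * c.price_usd);
            printf("\n");
        }
        int n = max_slots / c.slots;                // cards the slots can hold
        printf("%-14s   Max: %d card(s) = %5.2f TFlops\n\n", c.name, n,
               n * c.dp_tflops);
    }
    return 0;
}
```

With those two assumptions, the output matches every count, TFlops total, and dollar figure in both of his lists.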
#66
HumanSmoke
Xzibit: Your advice isn't that great, even if you're just advocating for Nvidia & CUDA. The Titan Z makes no sense other than to feed someone's lack of e-Peen. I guess it's easier for some when it's not their money.

Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375W / $2,999
750W = 1 card / 2.6 TFlops / $2,999
1500W = 3 cards / 7.8 TFlops / $8,997
Max = 5 cards / 13 TFlops / $14,995

Nvidia Titan Black 6 GB / 1.7 TFlops / 2-slot / 250W / $1,299
750W = 2 cards / 3.4 TFlops / $2,598
1500W = 5 cards / 8.5 TFlops / $6,495
Max = 8 cards / 13.6 TFlops / $10,392
You're right... but then again, I don't believe I advocated the use of the Titan Z over the Titan Black. So your point is? (Apart from trolling and learning to use a calculator app, that is.)
A lot of CG render artists actually snap up GTX 580s (it's where my two cards ended up). They can utilise CUDA, are reasonably well suited for CG work, are relatively cheap for the performance, and are available with a 3GB framebuffer.
#68
HumanSmoke
tigger: I could buy a whole lot of drugs 'n' whores for $3000.
Probably a morning's budget for Keith Richards!
#69
Tatty_Two
Gone Fishing
This total thread hijack is becoming tiresome and embarrassing, at least for one or two of you. Let's stop the tit-for-tat childish behaviour; thread cleaned up.
#70
Am*
Sihastru: So much hate.

I understand the arguments.

CON #1: It creates a precedent. A really expensive card, like we've never had before. It creates a new (new-new) price bracket for very (very-very) high-end cards. We don't want that.

REPLY to CON #1: For every Koenigsegg, there are millions of affordable cars of all sizes and for all purposes. You're not required to buy the Koenigsegg. You can buy the half price Nissan GTR and still go as fast as the speed limit.

CON #2: It's stupid. It makes no sense. I can get INSERT-NAME-HERE card for half the price, or I can get two of the INSERT-ANOTHER-NAME-HERE for even less.

REPLY to CON #2: So what? If we won't push the limits, how will we ever get ahead? You're not required to buy into the crazy-bonkers ragged edge products.

CON #3: It's not as great for mining as the INSERT-NAME-HERE! AMD FTW!

REPLY to CON #3: The world is not just about mining. NVIDIA created an ecosystem in the professional market and they can now get a nice return on that investment. For the professional market, it's a bargain card. If AMD wants in, they must work at it. It's irrelevant that an AMD card is good at certain compute tasks if the companies that write the software for the professional market do not care. And there are good reasons for them not to care, one of the most important ones is that AMD always offloads almost everything to INSERT-COMPANY-NAME-HERE.

The most recent example to this is Mantle. AMD created a lot of buzz, but in reality it offloaded the actual work to the game developers. The same with 3D display technology. AMD offloaded work to some company and then it buzzed it up with the words "free" and "open source". And there are many other examples...

CON #4: Nobody will buy the card. If you buy the card, you're stupid! NVIDIA is stupid! Titan-Z is stupid!

REPLY to CON #4: I hear this argument a lot. People who can afford expensive things are for some reason all stupid. Well... maybe some of them are, but most of them are smarter than most people. And a lot of people will buy this card. They will.

Crazy and stupid are not the same thing.

CON #5: Why would NVIDIA create such a product? The same was asked when Titan came along.

REPLY to CON#5: Because people will buy it. This kind of purchase can be justified in many ways. If you have the money, 'because I can' is enough. NVIDIA thanks you.

CON #6: Whatever.

REPLY to CON #6: Whatever.
Comparing GPUs to cars is nonsense. Cars do not get replaced by ones 40-50% faster every 1-2 years. They also gain a lot of resale value over time, especially those with a discontinued but iconic design. For example, the late-90s VW Golf that I could've bought 10 years ago for about £500 now goes for about £3000 just because of its looks (and it's only a budget car).

It pushes the limits of nothing, besides the level of stupidity of people who have more money than sense.

As for the pro-market "bargain card" statement, sorry, but that is a bunch of bullshit that either misinformed people chant or blind Nvidia fanboys use as an excuse. Anyone who thinks spending $3,000 on a GeForce card is a good idea for development, compared to, say, a Quadro K6000 that can be had for just $2,000 more, is brain-dead, to say the least. This is still a GeForce card, meaning no ECC support, no compatibility certification in any non-gaming/pro-oriented program (Adobe, Autodesk, etc.), and none of the compute-oriented support of the Tesla cards either, meaning those will slaughter this card in their own markets even more than other cards will in the gamers' market as far as value for money goes. The last time GeForce cards were comparable with their pro counterparts was back in the Fermi days, when cards like the GTX 580 had Adobe certification. That is also why so many people who upgraded to Kepler were pissed off when they realised how much worse the GTX 680 was at the same tasks, due to Nvidia crippling GeForce cards at the hardware level, rather than with the vBIOS/software and driver limits Nvidia used to put in place, which clever people could bypass on old cards with a custom vBIOS and modded drivers. This card is a huge waste of time, money and resources for Nvidia, and I hope it loses them a ton of cash (though I very much doubt it).
#71
GhostRyder
Am*: Comparing GPUs to cars is nonsense. Cars do not get replaced by ones 40-50% faster every 1-2 years. They also gain a lot of resale value over time, especially those with a discontinued but iconic design. For example, the late-90s VW Golf that I could've bought 10 years ago for about £500 now goes for about £3000 just because of its looks (and it's only a budget car).

It pushes the limits of nothing, besides the level of stupidity of people who have more money than sense.

As for the pro-market "bargain card" statement, sorry, but that is a bunch of bullshit that either misinformed people chant or blind Nvidia fanboys use as an excuse. Anyone who thinks spending $3,000 on a GeForce card is a good idea for development, compared to, say, a Quadro K6000 that can be had for just $2,000 more, is brain-dead, to say the least. This is still a GeForce card, meaning no ECC support, no compatibility certification in any non-gaming/pro-oriented program (Adobe, Autodesk, etc.), and none of the compute-oriented support of the Tesla cards either, meaning those will slaughter this card in their own markets even more than other cards will in the gamers' market as far as value for money goes. The last time GeForce cards were comparable with their pro counterparts was back in the Fermi days, when cards like the GTX 580 had Adobe certification. That is also why so many people who upgraded to Kepler were pissed off when they realised how much worse the GTX 680 was at the same tasks, due to Nvidia crippling GeForce cards at the hardware level, rather than with the vBIOS/software and driver limits Nvidia used to put in place, which clever people could bypass on old cards with a custom vBIOS and modded drivers. This card is a huge waste of time, money and resources for Nvidia, and I hope it loses them a ton of cash (though I very much doubt it).
You have listed nearly every problem that makes this card a complete bust in its current configuration.

Nvidia decided with Kepler to separate the dev cards and the gaming cards completely, so people with budget concerns were stranded and forced either to stick with the old cards (Fermi 580s, for instance) or to spend the extra on the professional-level cards. But then Nvidia released the Titan, which basically brought that capability back to the gaming series, with its basic levels of CUDA dev and its high amount of RAM. Of course, that came at a price of nearly double what the highest-end GPU from Nvidia cost at the time of its release (though, to be fair, it was the fastest single GPU for gaming, so it was slightly more justified at the time than the recently released Titan Black). The Titan was designed with a nice blower to keep it cool, which would (as with the reference designs of the past) work well in any environment, including a crammed rack-mount environment, which is why people built CGI render houses with it because of its high-capacity RAM.

The Titan-Z was supposed to be a new dual-GPU card with 12GB of RAM that gave Titan devs a better buy or the ability to use it in a similar environment. That's not the case with the design of this card. Unlike previous dual-GPU cards, this one is a 3-slot form factor, keeps the central axial fan, and costs three times a single Titan Black. The $3k price is crazy, bringing it well up toward Quadro and Tesla costs. Of course people still advocate "well, it's still cheaper than them", but what they miss is the huge up-charge this card carries for what little it brings, and that the scenarios where the Titan once worked are much more limited with this card. If they had upgraded the Titan-Z with some more professional-level features, including some ECC RAM, we could be having a different debate right now, but facts are facts, and no amount of fanboys is going to justify the card's cost.

Whether or not you're a professional who needs something like this, there is already a significantly better option out there called the FirePro W9100. If you really need CUDA dev and want to use that in your purchase, buy three Titan Blacks and be happy with the fact that you will blow the Titan-Z to pieces, and they will work in ANY environment. For those who need the extreme power, the W9100 is 70 bucks more, has more RAM than the Titan-Z (and it's ECC-rated), comes with professional-level driver support, is rated for 24/7 use, has a blower that will work in most environments, and it's got all that on a SINGLE GPU, which means it isn't limited by dual-GPU support in certain compute areas.

The Titan-Z misses the ball completely. Now, if the price comes down to $2,200-2,500, maybe it will have slightly more purpose (though I doubt it's going to, but Nvidia might realize their foolishness), but until then this card is beyond redemption.
Xzibit: Your advice isn't that great, even if you're just advocating for Nvidia & CUDA. The Titan Z makes no sense other than to feed someone's lack of e-Peen. I guess it's easier for some when it's not their money.

Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375W / $2,999
750W = 1 card / 2.6 TFlops / $2,999
1500W = 3 cards / 7.8 TFlops / $8,997
Max = 5 cards / 13 TFlops / $14,995

Nvidia Titan Black 6 GB / 1.7 TFlops / 2-slot / 250W / $1,299
750W = 2 cards / 3.4 TFlops / $2,598
1500W = 5 cards / 8.5 TFlops / $6,495
Max = 8 cards / 13.6 TFlops / $10,392
This is pretty much the reason, summed up quite nicely: a huge price difference and a huge performance difference all at once (though the Titan Black is a bit cheaper at $1,099 for the EVGA Superclocked variant, so it's even better than the Titan-Z, lawlz).
#72
radrok
You obviously missed how much the Titan sold to prosumers.

There are many applications where DP on CUDA is welcome and certified drivers make no difference at all.

You are all trying to justify your own reasons without having your hands on anything that could remotely use this kind of graphics power.

I'm not justifying the Titan-Z, but there is a market for the Titan branding; you just can't grasp it.

I say most of you are gamers: buy a non-DP graphics card like the 780 Ti and call it a day.

If you don't have a use for a thing, it doesn't necessarily mean it is useless to everyone.
#73
GhostRyder
radrok: You obviously missed how much the Titan sold to prosumers.

There are many applications where DP on CUDA is welcome and certified drivers make no difference at all.

You are all trying to justify your own reasons without having your hands on anything that could remotely use this kind of graphics power.

I'm not justifying the Titan-Z, but there is a market for the Titan branding; you just can't grasp it.

I say most of you are gamers: buy a non-DP graphics card like the 780 Ti and call it a day.

If you don't have a use for a thing, it doesn't necessarily mean it is useless to everyone.
I think you missed the point of what I was saying: prosumers used to have a different choice, set by Nvidia, when Fermi was predominant. Even by today's standards, in CUDA Iray and Blender (which are two staples of the prosumers buying Titans), the 580 beats out the newer 680 and 780 (the 680 in many cases by nearly double), whereas the Titan even in some cases is not that far ahead. The problem is that Nvidia specifically limited and separated the professional world and the gaming world on the GK architecture to slow this down and push people toward the more expensive Tesla and Quadro cards. Then, when people were pretty upset and still buying up 580s (I remember 580 prices stayed high because of this), they decided to release the Titan class of cards, which were basically what the 580s used to be, at an extraordinarily high price. GM (Maxwell) actually sounds like it's going to fix this a bit by having more of a focus on the compute aspects (as seen on the 750 and 750 Ti) along with the gaming parts. The fact is that the Titan branding is very confusing and an oddity: it hasn't got the features that make professional cards, well, professional cards (drivers, ECC memory, a 24/7 rating), but it's advertised as such while keeping the gaming branding (GeForce GTX).

I know exactly why people buy the Titan (if you read what I was saying, I stated that many a time, and some of the posts even pointed out how much better a buy the Titan Black is than the Z), and the reasons are strong, especially with CGI being predominant and RAM being a high requirement in those fields. However, that does not change what I stated about how they were attempting to separate the cards for reasons that mostly involve making more money (for instance, I would feel safe saying the profit margin on the Titan Black is significantly higher than on the 780 Ti, which is the same chip except one comes with 6GB of RAM).

Titans work OK for what prosumers purchase them for, but they come at a price that, while cheaper than the pro cards, carries the same style and power as the gamer series, while having just enough extra to make them the best of a bad situation. I've met people who snatched up 3GB 580s to use in similar environments for cheap; they perform excellently where the expensive Titan does, and in a position the GTX 680 and 780/Ti cards can't even imagine. That's my problem with the Titan branding, and especially the upcoming Titan-Z.
#74
Unregistered
If I had the money for one of these, I would buy one in a second, as would the people these are aimed at, and I would not worry about the cost.

I'd have a nerdgasm every time I looked in my case.
#75
radrok
Well, we can say they basically became greedier; Nvidia just saw another segment between their gaming and professional lines of graphics cards.

That segment was happy and dandy using untapped Fermi cards until Nvidia decided to milk them (us).