Thursday, May 1st 2014
New GTX TITAN-Z Launch Details Emerge
NVIDIA's GeForce GTX TITAN-Z missed the bus on its earlier 29th April, 2014 launch date, a delay confirmed to the press by several retailers, which forced some AIC partners to make do with paper launches of cards bearing their brand. It turns out the delay amounts to just over a week: the GeForce GTX TITAN-Z is now expected to be available on the 8th of May, 2014. That is when you'll be able to buy the US $3,000 graphics card off the shelf.
A dual-GPU graphics card based on a pair of 28 nm GK110 GPUs, the GTX TITAN-Z features a total of 5,760 CUDA cores (2,880 per GPU), 480 TMUs (240 per GPU), 96 ROPs (48 per GPU), and 12 GB of GDDR5 memory, spread across two 384-bit wide memory interfaces. Although each of the two GPUs is configured identically to a GTX TITAN Black, the card runs at lower clock speeds. The core is clocked at 705 MHz (889 MHz on the GTX TITAN Black), with GPU Boost frequencies of up to 876 MHz (up to 980 MHz on the GTX TITAN Black), while the memory remains at 7.00 GHz. The card draws power from a pair of 8-pin PCIe power connectors, and its maximum power draw is rated at 375 W. It will be interesting to see how it stacks up against AMD's Radeon R9 295X2, which costs half as much, at $1,500.
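The dual-GPU totals above are just the per-GPU GK110 figures doubled; a quick sketch of that tally (counts from the article, not an official spec sheet):

```python
# Per-GPU GK110 figures as quoted in the article.
PER_GPU = {"cuda_cores": 2880, "tmus": 240, "rops": 48, "gddr5_gb": 6}
N_GPUS = 2  # the TITAN-Z carries two GPUs

# Double each figure to get the card-level totals.
totals = {name: value * N_GPUS for name, value in PER_GPU.items()}
print(totals)  # {'cuda_cores': 5760, 'tmus': 480, 'rops': 96, 'gddr5_gb': 12}
```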
Source:
ComputerBase.de
105 Comments on New GTX TITAN-Z Launch Details Emerge
/notrocketscience
So please find a better excuse to defend the stupid price of those GAMING cards.
As mentioned above, only people working with CUDA could find benefit in these cards. Everyone else should learn the lesson the first Titan buyers did and stay away from these shiny money suckers.
You also seem to be another person hung up on numbers. AMD/ATI cards have had the edge over Nvidia boards in double precision (and single, for that matter) for some time, yet it still isn't reflected in the wider community. Why? Because AMD's architectures are totally reliant upon OpenCL for the most part, and OpenCL support is spotty at best.
Say what you will about CUDA, but the software ecosystem is in place and it works.
Blender, from their own FAQ: AFAIK, working OpenCL support isn't overly prevalent. Even Lux, which is touted as a poster child for OpenCL, has ongoing issues, and where both CUDA and OpenCL are supported, it is generally the former that is more mature. The Blender sentiment isn't a lone voice (pdf). So, basically, hardware performs as well as the coding allows. AMD is tied to OpenCL, and OpenCL is tied to third parties for its advancement, which makes it very much a case of YMMV. What clouds the issue further is that mainstream (gaming) sites use only OpenCL apps to compare AMD and Nvidia cards, which distorts the overall picture, since using the CUDA path for Nvidia hardware invariably means a better result. Note this render test using Premiere, where the GTX 670 (using CUDA) and the R9 290X (using OpenCL) are basically equal in time to render. On raw numbers the 290X should have it all over the GTX 670; after all, the AMD card has 5.8 TFlops of processing power to the 670's 2.46 TFlops, well over double!
So the 7970 might represent great value with the right application, and it certainly is affordable at $200. On the other side of the ledger, the GTX 580, also because of its decreasing price (and its ability to run CUDA apps), makes a compelling buy for people who want to use CUDA-coded render/CAD apps. Is the Titan the be-all and end-all? Of course not, and I don't see anyone saying it is. What I see is people comparing two current top-tier GPUs because... well, because they are the flavour of the week.
Numbers on the page don't always translate that well to real-life scenarios. Harking back to the point regarding double precision: its use is governed by the same coding environment. I can't say I've seen many FP64-only benchmarks outside of HPC; most consumer apps tend to involve both single- and double-precision calculation rather than FP64 solely, and those that do find their way into benchmarks are, again, devoid of the CUDA path. HPC is the environment for widespread use of double precision, and the ratio of Nvidia to AMD GPUs there tells a pretty telling story.
Now, you realise that these two statements you made are contradictory: the majority of compute applications that Nvidia cards run ARE CUDA code. These machines, this machine, this machine, these machines are all geared for content creation. You say it's a waste of money; these people beg to differ. Different requirements equal different usage patterns.
If I may add, we are almost forced to use CUDA devices; as you posted, their development is more mature and offers far more than what OpenCL-based solutions currently have on the market.
Nvidia has done its homework with CUDA, and it is reaping what it planted in the past.
Sadly, CUDA is proprietary, but that very fact allowed it to thrive compared to the hot mess OpenCL is nowadays, at least for what I do.
I also challenge you to use Blender with an AMD-based graphics solution; you'll run away with nightmares. There are so many cursor bugs I can't even begin to enumerate them.
Leaving that aside, my statement still stands: "Only people working with CUDA could find benefit in these cards."
Even considering only the Titan line, as compute cards, the Titan Z is still stupidly overpriced. A couple of Titan Blacks with blower coolers are clearly better than this 3-slot, axial-fan card in terms of performance as well as compatibility with server racks. And they are even cheaper!
Titan cards make little sense already, and the Titan Z makes absolutely no sense at all.
I'm guessing they are either going to bump the boost clocks or lower the price to compensate. Right now, I don't see it as a fruitful investment, since anyone who really needs CUDA dev is going to buy three Titan Blacks for the same price. You're completely right, though in the small CUDA render area that 12 GB might shine. However, as stated before, its tri-slot, non-professional design and its price make it a near-impossible sell. Its gaming aspects are abysmal and its professional aspects are limited, which brings many problems to the whole idea of the card.

It mostly comes down to branding and the whole idea/design of Titan. The original Titan cost $1k, but it was the fastest single GPU, carried a hefty 6 GB of RAM, and had some CUDA dev applications that made use of it. But even back then, the price, the fact that it's not rated for 24/7 use, the lack of full professional support, and being beaten fairly easily in gaming by its younger brothers made it a hard sell. Some people took advantage of it for the 6 GB of RAM, but in reality the card comes down to the question of where it actually fits. Its branding says gaming, Nvidia advertises it as a professional card, but its lack of support says it's not. It misses on all fronts, which is why I, along with many (if not most), find it a hard sell.
Titan makes sense on a small front, but it's a very limited front.
BTW and OT: the GTX 690 also has the same fan/cooler arrangement; maybe you should take time out from composing poorly spelled and punctuated posts to write some poorly spelled and punctuated emails to the GPU server builders. They clearly don't have your technical expertise, since they seem happy to sell and warranty the GTX 690 in a 4U form factor:
I have never seen someone try so hard to convince others of experience in fields he's never even touched in real life. Googling something and actually working on something are two totally different things. I like your name better on this site because it sums you up pretty well: just trying to blow smoke in people's faces.
K.I.S.S.
750 W, 1500 W & Max

Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375 W
750 W = 1 / 2.6 TFlops
1500 W = 3 / 7.8 TFlops
Max = 5 / 13 TFlops

Nvidia Quadro K6000 12 GB / 1.7 TFlops / 2-slot / 225 W
750 W = 3 / 5.1 TFlops
1500 W = 6 / 10.2 TFlops
Max = 8 / 13.6 TFlops

AMD FirePro W9100 16 GB / 2.62 TFlops / 2-slot / 275 W
750 W = 2 / 5.24 TFlops
1500 W = 5 / 13.1 TFlops
Max = 8 / 20.96 TFlops
Not really though, I just like that bit of your post. I might have to put it in my sig, if you don't mind. More people need to read that. :p
Nvidia Titan Z 12 GB (6 GB per GPU) / 2.6 TFlops / 3-slot / 375 W / $2,999
750 W = 1 / 2.6 TFlops / $2,999
1500 W = 3 / 7.8 TFlops / $8,997
Max = 5 / 13 TFlops / $14,995

Nvidia Titan Black 6 GB / 1.7 TFlops / 2-slot / 250 W / $1,299
750 W = 2 / 3.4 TFlops / $2,598
1500 W = 5 / 8.5 TFlops / $6,495
Max = 8 / 13.6 TFlops / $10,392
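The totals in these tables are straightforward multiplication of per-card figures by the card count. A minimal sketch of that math, using the DP TFlops, TDP, and price figures from the posts (note the per-tier card counts don't follow one strict watts-divided-by-TDP rule, since the poster appears to keep some PSU headroom, so counts here are taken from the tables rather than derived):

```python
from dataclasses import dataclass

@dataclass
class Card:
    name: str
    dp_tflops: float  # double-precision throughput per card
    tdp_w: int        # rated board power, watts
    price_usd: int

def totals(card: Card, count: int):
    """Aggregate (TFlops, watts, dollars) for `count` identical cards."""
    return (count * card.dp_tflops, count * card.tdp_w, count * card.price_usd)

# Figures as quoted in the posts above.
titan_z = Card("GTX Titan Z", 2.6, 375, 2999)
titan_black = Card("GTX Titan Black", 1.7, 250, 1299)

# e.g. five Titan Blacks at the 1500 W tier, as in the table:
tflops, watts, cost = totals(titan_black, 5)
print(f"{tflops:.1f} TFlops, {watts} W, ${cost:,}")  # 8.5 TFlops, 1250 W, $6,495
```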
A lot of CG render artists actually snap up GTX 580s (it's where my two cards ended up). They can utilise CUDA, are reasonably well suited for CG work, relatively cheap for the performance, and available with a 3 GB framebuffer.
It pushes the limits of nothing, besides the level of stupidity of people who have more money than sense.
As for the pro-market "bargain card" statement, sorry, but that is a bunch of bullshit that either misinformed people chant or blind Nvidia fanboys use as an excuse. Anyone who thinks spending $3,000 on a GeForce card is a good idea for development, compared to, say, a Quadro K6000 that can be had for just $2,000 more, is brain-dead, to say the least. This is still a GeForce card, meaning no ECC support, no compatibility certification in any non-gaming, pro-oriented program (Adobe, Autodesk, etc.), and none of the compute-oriented support of Tesla cards either, meaning those cards will slaughter this one in their own markets even more than other cards will in the gamers' market as far as value for money goes. The last time GeForce cards were comparable with their pro counterparts was back in the Fermi days, when cards like the GTX 580 had Adobe certification. That is also why so many people who upgraded to Kepler were pissed off when they realised how much worse the GTX 680 was at the same tasks: Nvidia crippled GeForce cards at the hardware level, as opposed to the vBIOS/software and driver limits it used to put in place, which clever people could bypass on old cards with a custom vBIOS and modded drivers. This card is a huge waste of time, money and resources for Nvidia, and I hope it loses them a ton of cash (though I very much doubt it).
Nvidia decided with Kepler to separate the dev cards and the gaming cards completely, so people with budget concerns were stranded and forced to either stick with older cards (Fermi 580s, for instance) or spend extra on professional-level cards. But then Nvidia released the Titan, which basically brought that capability back to the gaming series with its basic level of CUDA dev support and high amount of RAM. Of course, that came at nearly double the price of Nvidia's most expensive gaming GPU at the time of its release (though, to be fair, it was the fastest single GPU for gaming, so it was slightly more justified then than the recently released Titan Black is now). The Titan was designed with a nice blower to keep it cool, which, as with reference designs of the past, works well in any environment, including a crammed rack-mount setup, and that led people to build CGI render houses with it because of its high-capacity RAM.
Titan-Z was supposed to be a new dual-GPU card with 12 GB of RAM that offered Titan devs a better buy, or at least the ability to use it in similar environments. That's not the case with this card's design. Unlike previous dual-GPU cards, this one is a 3-slot form factor, keeps the central axial fan, and costs three times as much as a single Titan Black. The $3k price is crazy to a degree that brings it well into Quadro and Tesla territory. Of course, people still try to advocate "well, it's still cheaper than them," but what they miss is the steep up-charge this card carries for how little it brings, and the scenarios where the Titan once worked are much more limited with this card. If they had upgraded the Titan-Z with some more professional-level features, including ECC RAM, we could be having a different debate right now, but facts are facts, and no amount of fanboys is going to justify the card's cost.
Whether or not you're a professional who needs something like this, there is already a significantly better option out there called the FirePro W9100. If you really need CUDA dev and want that in your purchase, buy three Titan Blacks and be happy knowing you will blow the Titan-Z to pieces, and they will work in ANY environment. But for those who need extreme power, the W9100 is 70 bucks more, has more RAM than the Titan-Z (and it's ECC-rated), comes with professional-level driver support, is rated for 24/7 use, has a blower that will work in most environments, and it's got all that on a SINGLE GPU, which means it's not held back by the dual-GPU support issues that crop up in certain compute areas.
Titan-Z misses the ball completely. If the price comes down to $2,200-2,500, maybe it will have slightly more purpose (though I doubt it will, but Nvidia might realize their foolishness); until then, this card is beyond redemption. This pretty much sums up the reason quite nicely: a huge price difference and a huge performance difference all at once (though the Titan Black is a bit cheaper at $1,099 for the EVGA Superclocked variant, so it's even better value than the Titan-Z, lawlz).
There are many applications where DP on CUDA is welcome and certified drivers make no difference at all.
You are all trying to justify your own reasoning without ever having had your hands on anything that could remotely use this kind of graphics power.
I'm not justifying Titan-Z but there is a market for Titan branding, you just can't grasp it.
I say most of you are gamers; buy a non-DP graphics card like the 780 Ti and call it a day.
If you don't have a use for a thing it doesn't necessarily mean it is useless to everyone.
I know exactly why people buy the Titan (if you read what I was saying, I stated that many times, and some of the posts even pointed out how much better a buy the Titan Black is than the Z), and the reasons are strong, especially with CGI being predominant and RAM being a high requirement in those fields. However, that doesn't change what I stated about how they were attempting to separate the cards for reasons that mostly involve making more money (for instance, I would feel safe saying the profit margin on the Titan Black is significantly higher than on the 780 Ti, since both use the same chip except one comes with 6 GB of RAM).
Titans work OK for what prosumers purchase them for, but they come at a price that, while cheaper than the pro cards, carries the same styling and power as the gamer series, all the while having just enough going for them to be the "best of a bad situation". I've met people who snatched up 3 GB 580s for cheap to use in similar environments, and they perform excellently where the expensive Titan does, in a role the GTX 680 and 780/Ti cards can't even imagine. That's my problem with the Titan branding, and especially the upcoming Titan-Z.
I'd have a nerdgasm every time I looked in my case.
That segment was fine and dandy using untapped Fermi cards until they decided to milk them (us).