
Titan V: did the performance ever get any better with later driver revisions?

The gaming performance was completely unimpressive when it was first released:

https://hothardware.com/reviews/nvidia-titan-v-volta-gv100-gpu-review?page=3

Maybe that's why Volta died off without ever being released to AIBs?

It was called Turing, and later Ampere; go check it out :)

The unimpressive part still stands to this date though, I agree. AMD proves that point by now creating more power-efficient GPUs at every performance tier.

It's clear the new hardware is still a datacenter/enterprise-first technology transplanted to have some purpose for gaming. Nvidia still hasn't quite figured out whether they need to push harder on DLSS or on RT, but it's clear they have to push hard for it to work, and content is still scarce wherever they won't keep pushing/dropping bags of money. They've dug themselves a hole, strategically, and the only path is deeper into it. Competitors haven't quite dug in like that just yet, for various reasons, most of them pretty good ones. Late to market, or wise? Time will tell; lots of variables in play...

What's striking on the gaming front is that RT tech isn't really pushed in the console space, and high-end GPUs are heavily overpriced in the PC space, so it's turning into haves/have-nots territory and isn't taking off quite so fast. The chicken/egg conundrum is still live and ongoing, as it has been since the launch of RTX. Not every dev is going to build around RT; in fact, most won't, as the majority of players won't get to enjoy it.

Where do we see industry-wide adoption? DLSS/FidelityFX/Intel's vaporware version of it with more X'es (XeSS). Again, for all sorts of good reasons.
 
Volta didn't "die off", it was extremely successful in its target market—the datacenter. It was never released to AIBs because it was never intended for the AIB market.
Do you have any proof for your latter assertion? Because Ngreedia is all about greed, I'd figure that if they COULD make money off AIBs, they sure as hell wouldn't be shy about doing so.

@Vayra86
Volta isn't the same architecture as Turing or Ampere. The Titan RTX outperforms the Titan V (according to TPU's GPU database) even though it has fewer shading units, tensor cores, and TMUs, and less L1 cache.
 
Do you have any proof for your latter assertion? Because Ngreedia is all about greed, I'd figure that if they COULD make money off AIBs, they sure as hell wouldn't be shy about doing so.
This is the biggest fanboy comment I have seen outside a news post. He provides publicly available info you can get from any generation slide regarding this architecture, but he has to provide proof to convince you, because your basis for disagreement is personal opinion?

Odd, that doesn't seem like how healthy discussion works.
 
This is the biggest fanboy comment I have seen outside a news post. He provides publicly available info you can get from any generation slide regarding this architecture, but he has to provide proof to convince you, because your basis for disagreement is personal opinion?

Odd, that doesn't seem like how healthy discussion works.
Um, I asked for proof, not vituperation, but maybe you don't know the difference, or figure insults are a substitute for proof.
 
Um, I asked for proof, not vituperation, but maybe you don't know the difference, or figure insults are a substitute for proof.

I mean, they are right. It seemed that you meant to dig at NVIDIA without actually having any meaningful interest in a conversation there.

Anyway, the GV100 was a GPU designed for heavy compute tasks first and graphics second, just like AMD's Vega 20 used on the Radeon VII. Both were extremely powerful processors in their own right, but the less-than-impressive gaming performance they exhibited next to significantly more affordable consumer-grade hardware was mostly down to pricing.

Take the AMD card as an example. Was the Radeon VII faster than the 5700 XT? Yes. At everything? No. By how much? Very little. Proportionally to its power? Overwhelmingly so; Vega 20 is far more capable hardware than Navi 10 could ever hope to be. That just didn't translate to performance in graphics applications. The same applied to the Titan V and its consumer-class brother, the Titan Xp with its full GP102 processor. The Titan Xp was excellent at graphics, but it truly looked like a previous-generation GPU in the productivity apps that the Titan V was designed for.

It was also the first tensor-capable hardware made available to end users. As a reminder, it came out in 2017, and many things we take for granted today (tensor hardware is present even on a lowly 3050) were pioneered with this exquisite-looking, expensive, and very, very large processor.

I hope to see Titan Vs "on the cheap" someday, but that day still hasn't come. I wouldn't mind buying one for $500 or so, even 5 years down the road. Oddly enough, they still seem to sell in excess of $2000, which is madness to me.
 
Maybe that's why Volta died off without ever being released to AIBs?

Titan V used a chip designated for compute; those never make their way to AIBs because they're not meant to.

Oddly enough, they still seem to sell in excess of $2000, which is madness to me.
Because outside of the A100, which is a datacenter card, this is the fastest FP64 card available to consumers.
 
Because outside of the A100, which is a datacenter card, this is the fastest FP64 card available to consumers.

Maybe. If FP64 is the exclusive concern, Titan variants of GK110 can be acquired for relatively little these days. The Titan Z went down in price significantly, and it's still one of the absolute strongest FP64 processing cards out there. There's also the Radeon Pro VII, which is cheaper and will crunch FP64 just fine... I think there's also a lot of mysticism around these cards; Titan was always the fancy version of GeForce, the gold shroud, etc., so it kind of kept a name for itself.
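For the curious, delivered FP64 throughput is easy enough to sanity-check yourself rather than trusting spec sheets: consumer GeForce dies are typically rate-limited to 1/32 of their FP32 rate, while GV100 runs FP64 at 1/2 rate. A rough CUDA probe along these lines works; this is a minimal sketch, and the grid sizing and iteration count are arbitrary knobs I picked, not anything from a real benchmark suite.

```cuda
// Rough FP64 throughput probe: times a chain of double-precision FMAs.
// Build: nvcc -O3 fp64probe.cu -o fp64probe
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_chain(double *out, int iters) {
    double a = 1.000001, b = 0.999999;
    double c = threadIdx.x * 1e-9;
    for (int i = 0; i < iters; ++i)
        c = fma(a, c, b);                                // one FMA = 2 FLOPs
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;      // defeat dead-code elimination
}

int main() {
    const int blocks = 4096, threads = 256, iters = 1 << 16;  // arbitrary sizing
    double *out;
    cudaMalloc(&out, sizeof(double) * blocks * threads);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventRecord(t0);
    fma_chain<<<blocks, threads>>>(out, iters);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    double flops = 2.0 * iters * (double)blocks * threads;    // 2 FLOPs per FMA
    printf("~%.0f GFLOPS FP64\n", flops / ms / 1e6);          // ms -> GFLOPS
    cudaFree(out);
    return 0;
}
```

Run it on a Titan V next to any GeForce card and the 1/2-rate vs 1/32-rate gap shows up immediately.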

But you do raise a valid point. The GV100 is the GA100's direct predecessor (a TU100 never existed). This was the point where NV split into a datacenter line and a graphics-and-visualization line: from Volta's graphics engine, Turing was born, and from its compute capabilities came the GA100, which is quite different from the other Ampere chips in its physical layout. GA100 then evolved into Hopper, and from consumer Ampere came Ada Lovelace, just like Vega 20 spawned CDNA while RDNA was built from scratch as a new architecture designed for graphics first and compute second. :)
 
HardOCP reviewed the Titan V nearly 5 months after launch. By that time, drivers had improved, and it was 40% faster than the 1080 Ti in some titles. Sadly, the review is no longer up, but the forum thread discussing the review is still available.
Interesting fact in those comments: there never was an 1180, 1170, 1180 Ti, or 1170 Ti; then again, there never was an 8xx series either, was there?

Nice jump in performance over the 1080 Ti.

Titan V was apparently a failure at computing as well (article sub-title: "Fine for gaming, not so much for modeling, it is claimed"):
https://www.theregister.com/2018/03/21/nvidia_titan_v_reproducibility/

Perhaps this is what killed Volta?

There was one AIB that manufactured a Volta Quadro card:
https://www.pny.com/professional/explore-our-products/nvidia-quadro-volta

So there was a grand total of four video card models based on the Volta architecture.

Did Nvidia take a bath on Volta?
 
Here's a render-based comparison of that era from good ole Puget.


In that second view, we can easily see that the Titan V is over 40% faster than the 1080 Ti - and about 25% faster than the Titan Xp.


From a performance standpoint, the new Titan V is a beast for rendering. It is not yet supported in OctaneRender, but in V-Ray we saw a 60% performance gain over a GTX 1080 Ti while FurryBall RT saw almost a 2x increase in performance.
 
Interesting fact in those comments: there never was an 1180, 1170, 1180 Ti, or 1170 Ti; then again, there never was an 8xx series either, was there?

Nice jump in performance over the 1080 Ti.

Titan V was apparently a failure at computing as well (article sub-title: "Fine for gaming, not so much for modeling, it is claimed"):
https://www.theregister.com/2018/03/21/nvidia_titan_v_reproducibility/

Perhaps this is what killed Volta?

There was one AIB that manufactured a Volta Quadro card:
https://www.pny.com/professional/explore-our-products/nvidia-quadro-volta

So there was a grand total of four video card models based on the Volta architecture.

Did Nvidia take a bath on Volta?

My earlier answer stands. Volta was another iteration of Nvidia GPUs on the road to further improvements. Volta was a unicorn for good reasons, I reckon. At the same time, there was no competitor doing something like it at the time. It enabled Nvidia's early RT move.

@purecain didn't you game on one?
 
My earlier answer stands. Volta was another iteration of Nvidia GPUs on the road to further improvements. Volta was a unicorn for good reasons, I reckon. At the same time, there was no competitor doing something like it at the time. It enabled Nvidia's early RT move.

@purecain didn't you game on one?

If Volta was aimed at the compute market, it was a failure, because after running a simulation for hours on end you can't afford to take the chance that the card is spitting out the wrong data.
 
If Volta was aimed at the compute market, it was a failure, because after running a simulation for hours on end you can't afford to take the chance that the card is spitting out the wrong data.
One unsubstantiated report from a tabloid rag does not constitute a problem.
 
My earlier answer stands. Volta was another iteration of Nvidia GPUs on the road to further improvements. Volta was a unicorn for good reasons, I reckon. At the same time, there was no competitor doing something like it at the time. It enabled Nvidia's early RT move.

@purecain didn't you game on one?
I hoped he would chime in; he truly would know about driver improvements. He used it a long time for sure, and might still even have it.

No coin in this debate; I would buy one now for the right money just to mess with, and I am intrigued by the thread title.

@OP, it might suit your thread better if you weren't such an Nvidia flamer, innit?!

Ngreedia doesn't bother me, but it does bother some, and it never ever helps the debate, like Amdoomed etc.
 
If Volta was aimed at the compute market, it was a failure, because after running a simulation for hours on end you can't afford to take the chance that the card is spitting out the wrong data.

Check the Top500 to see how many systems are still using Volta.

Also, now we have the culprit for why aliens still haven't been discovered.
 

[Attachment: Capture5.PNG]
Titan V was apparently a failure at computing as well (article sub-title: "Fine for gaming, not so much for modeling, it is claimed"):
https://www.theregister.com/2018/03/21/nvidia_titan_v_reproducibility/

Perhaps this is what killed Volta?
No it didn't, because it's a load of crap:
One engineer told The Register that when he tried to run identical simulations of an interaction between a protein and enzyme on Nvidia’s Titan V cards, the results varied. After repeated tests on four of the top-of-the-line GPUs, he found two gave numerical errors about 10 per cent of the time. These tests should produce the same output values each time again and again
anonymous source, unspecified testing err . . simulations . . no.

Plenty of anecdotal evidence in the OCN owners club to dispute all your assumptions:
Tested it with F@H for awhile. Uses only 40% TDP, not boosting above 1500 MHz with already applied overclock. On the screenshot the second card is actually one of the Titan V, but the third one is Titan X (Pascal). F@H swapped them for some reason. Performance wise, the V acts like the X but using half the power. The X is actually reaching thermal limit due to high heat in my case. The Vs are reporting "no load limit".
LAST PAGE
For DX gaming the 2080 Ti is somewhat better, slightly, and much, much better at RT for sure. For everything else (what the Volta was designed for), the Titan V is much better.
I have 2 2080 Tis running here and 3 Titan Vs also. Best you just wait a month and get a 3080 Ti.

where's the TV for sale. ;P

Had several 1080 Tis (6 I think). Blew all my scores off the leaderboards with the TV... just sayin'
Yeah, the Volta core as a Titan was an outlier in the product stack and does not have crippled FP64 performance as the Titan RTX does. So I returned the TRTXs to Nvidia and hunted down 2 more Titan Vs. If I can find a 4th, I'd reconfigure that rig and pull the Intel 900P for the slot. I was really hoping NV did the Turing core justice outside the Quadro stack - fail. I'm hoping the Ampere stack gives us a (near) full die in the consumer market. :)

Bottom line for the question asked: keep the 2080 Ti and see how the 3080 Ti looks for gaming. Non-NDA word is the 3080 Ti is worth the wait.

I still want to know where he found a TV for sale. :whistle:

;)
 
No it didn't, because it's a load of crap:

anonymous source, unspecified testing err . . simulations . . no.

Plenty of anecdotal evidence in the OCN owners club to dispute all your assumptions:

LAST PAGE

;)
Not worth an argument, but (and this is from a crazy folder of many years) F@H is not exactly the go-to standard for certifying output as consistently correct.

Unless a work unit actually fails to complete, you don't know anything about how it finished.

And you certainly don't see the output, plus you have no way of knowing whether the WUs you're processing are directly comparable, i.e. ending in the same results.

And F@H, being research, could hypothetically have varied results for the same WU accepted, which wouldn't really be possible if it were a true rerun simulation (same start data, same endpoint expected).

And F@H isn't that; it's usually a constantly evolving, generational simulation.

A Titan V, or any card for that matter, could turn in 90% garbage and it's possible it would get incorporated into the final truth.

F@H tries to get YOU to monitor simulations to check they're running correctly, for a reason.

That's why we have benchmark tests. And to be fair, when the Titan V was out I recall a few articles regarding its erroneous output in certain cases.
Not that I know it to be true, to be fair, just that he didn't pull it out of his ass; it was a situation documented in a few tech forums and news outlets.
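To be fair to the reproducibility angle in general: bit-identical reruns are something a simulation code has to engineer for, not something GPUs hand out for free. Any kernel that accumulates through floating-point atomics can legitimately return different low bits on identical inputs, because FP addition isn't associative and the commit order depends on thread scheduling. A minimal CUDA illustration of the effect (this has nothing to do with Amber's or F@H's actual kernels):

```cuda
// Sum the same fixed array several times via float atomics. The commit
// order varies run to run, and FP addition is not associative, so the
// low digits of the result may differ between otherwise identical runs.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void atomic_sum(const float *x, int n, float *sum) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(sum, x[i]);   // scheduling-dependent order
}

int main() {
    const int n = 1 << 22;
    float *x, *sum;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&sum, sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f / (1.0f + i % 977);  // fixed inputs

    for (int run = 0; run < 3; ++run) {
        *sum = 0.0f;
        atomic_sum<<<(n + 255) / 256, 256>>>(x, n, sum);
        cudaDeviceSynchronize();
        printf("run %d: sum = %.9g\n", run, *sum);   // low digits may wander
    }
    return 0;
}
```

So "ran it twice, got different numbers" only indicts the hardware if the code was written to be deterministic in the first place, which is exactly why anonymous one-site reports are hard to evaluate.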
 
Not worth an argument, but
No, it's not.
That's why we have benchmark tests. And to be fair, when the Titan V was out I recall a few articles regarding its erroneous output in certain cases.
Not that I know it to be true, to be fair, just that he didn't pull it out of his ass; it was a situation documented in a few tech forums and news outlets.
Any articles that I have found circularly reference The Register. No independent reporting or testing. :rolleyes::rolleyes:

I trust people I've known for 15 years more:
Yeah, I still have 3 Vs running 24/7 (but not for gaming). The 3090 FTW with an Optimus WB runs really well. I got the block about a week ago and it is a very nice design. GPU stays less than 10C above water temp on an open bench table for now. I only flashed the FTW up to a 380W BIOS while on air; will be flashing it to 500W or so soon. That's all I need at this point.
The guy holds patents in bio-engineering, but you don't need to believe me; only I do. :p
 
One unsubstantiated report from a tabloid rag does not constitute a problem.
You're right, but there was something to the rumor as Amber revealed:

'With regard to the warnings that were posted earlier on the website, we have revised them and tempered the comments as none of our other investigations have yielded any spurious results. The only cards to get abnormal, inconsistent results were all housed at one site, and no other investigator has yet reproduced the problem. I did get something odd myself when I tried to run cases that used 99% of the card's memory, namely a memory fault followed by a simulation crash, but this can probably be attributed to the card's own driver using a small amount of the memory just like your OS utilizes something like 10% of the total RAM (the GPU driver is probably using much less than 10%, but if any two applications ever request more than the device can allocate at any given time, there's no disk on the card for the rest to page into). So, if you want really high speed and you want it now, the Titan-V is about 50% faster than a GP100 for less than half the unit cost. If you want a massive number of simulations, you can get up to four GTX-1080Ti cards for the price of the one Titan-V. If you can wait another quarter or two, NVIDIA will probably come out with their new Volta-based GTX soon, which will probably come in cheaper than the Titan-V and beat the GTX-1080Ti in terms of FLOPS / dollar.'
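Incidentally, the memory fault at 99% usage described there is expected behavior rather than a defect: the CUDA driver keeps a slice of VRAM for itself, so a job sized against the full 12 GB will over-ask. A minimal sketch of querying the real headroom before a big allocation; the 95% margin is my own arbitrary choice, not anything Amber recommends.

```cuda
// Check actual free VRAM before allocating: the driver (and on a display
// GPU, the desktop) reserves part of the card's memory for itself.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);       // bytes currently free vs. physical
    printf("free: %zu MiB of %zu MiB\n", free_b >> 20, total_b >> 20);

    size_t want = (size_t)(free_b * 0.95);   // leave slack for the driver
    void *buf = nullptr;
    if (cudaMalloc(&buf, want) != cudaSuccess) {
        printf("allocation of %zu MiB failed\n", want >> 20);
        return 1;
    }
    printf("got %zu MiB\n", want >> 20);
    cudaFree(buf);
    return 0;
}
```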
 
You're right, but there was something to the rumor as Amber revealed:

'With regard to the warnings that were posted earlier on the website, we have revised them and tempered the comments as none of our other investigations have yielded any spurious results. ...'
yeah
The only cards to get abnormal, inconsistent results were all housed at one site, and no other investigator has yet reproduced the problem.
also
The Titan-V is impressive for its ability to deliver all of that performance in full double-precision, and while Amber's production code remains Single Precision / Fixed Precision, if you have other codes that need full double, it's quite a card.
/thread
 
Can the tensor cores of the Titan V be used for DLSS? The Titan V CEO Edition actually had slightly better specs than the plain-Jane Titan V, but maybe it should've been called the Titan V CIO.

Has Nvidia ever produced another GPU architecture that was never utilized in consumer products?
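On the first question: the silicon is there and programmable (Volta's tensor cores have been exposed through CUDA's WMMA API since CUDA 9), but DLSS itself is gated at the driver/NGX level to RTX-branded cards, so the Titan V never got it. A minimal sketch that just detects tensor-capable hardware, treating sm_70 or newer as the cutoff:

```cuda
// List CUDA devices and flag tensor-core-capable silicon.
// Volta (sm_70) was the first architecture to ship tensor cores.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int d = 0; d < n; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        bool tensor = p.major >= 7;   // Volta and newer
        printf("%s: sm_%d%d -> tensor cores %s\n",
               p.name, p.major, p.minor, tensor ? "present" : "absent");
    }
    return 0;
}
```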
 
No, it's not.

Any articles that I have found circularly reference The Register. No independent reporting or testing. :rolleyes::rolleyes:

I trust people I've known for 15 years more:

The guy holds patents in bio-engineering, but you don't need to believe me; only I do. :p
My main point is F@H. And you and he use F@H as a repeatable test?! You can look up the projects yourself on F@H and take it from the horse's mouth: they're generational tests.

Titan V may be fine; I said I would not bet on it not being, I don't know. But pulling out F@H didn't prove it to me.
 