
Battlefield V with GeForce RTX DirectX Raytracing

So, going by that logic and working backwards, the fastest single-core CPUs before dual cores arrived were around $63? 'Cause that's where you end up if you think the price should follow the number of cores, and divide the 9900K's ~$500 by 8 (disregarding IPC entirely, of course).

In other words: this is not how things work, this has never been how things work, and shouldn't be how things work.
LOL, no.. just no.

That isn't how I think it works; I am just throwing in a different perception. The fact IS that there are 2x the cores and threads on this CPU versus the other one. There are MANY factors that go into pricing it. But to disparage the pricing and ignore any actual differences isn't looking at the big picture.

I'm glad you brought that up, since it allows me to point out that we seem to have gone a bit off the topic of RTX, and to state that the doubling of Nvidia's pricing for the RTX 2080 Ti is completely unjustified. I thought Intel was greedy; Nvidia is the worst. The government says my wages have to go up by a minimum of 2.7% (the inflation rate for 2018), but these companies are charging me double (because there is no competition in the market): that should not be legal (it is in fact illegal to "run a monopoly", but they get away with it somehow).
It is legal.. NVIDIA isn't a monopoly. Not with RTG popping out frighteningly mediocre-performing GPUs. Here's to hoping Intel's discrete entry can shake things up. :)

I digress... RTX thread. My apologies.
 
Let's go big picture then: Intel gave us four cores (with HT most of the time) with minimal IPC improvements for a decade, and prices never dropped. They essentially flat out refused to increase core counts in the MSDT market, despite customers asking for it, instead taking every opportunity to sell ever-cheaper silicon (minimal increases in transistor count, per-transistor prices constantly dropping, relatively flat R&D costs) at the same price point. Then they're suddenly faced with competition, double the number of cores in a year, change nothing else, and they raise prices by 50%. Your justification is short-sighted and cuts Intel far too much slack. They've made it obvious over the last decade that they're only interested in padding margins.

And Nvidia is exactly the same. As are most corporations, really, but the ones with near-monopolistic market positions make it far more obvious.
 
Who asked for it? The general PC-using population didn't ask for, want, or need it (more cores and threads)... quads have been out for nearly a decade, and only now are people saying a quad with HT (for a gamer) would be the low end.

But this is what happens when there is little competition. AMD lulled Intel to sleep for the better part of a decade with their sub-par performing architectures until Ryzen. This caused Intel to put out merely incremental updates in IPC and clock speed. Since AMD went wide because they can't compete in clock speeds or overclocking headroom, this forced big blue to react and throw more cores/threads onto their incremental updates.

My justification and POV include more than just that talking point (but again, this isn't the time and place for a deep dive). What is myopic is seemingly ignoring the fact that it doubled the amount of c/t with faster boost speeds and overclocking headroom. AMD shines in situations where it can use more threads for the same price, but it falls to second place (of two) in situations outside of that. Both processors (and GPUs, LOL) have a place in the market. Just be sure the measuring stick is the same size for each thing that is measured.

Cheers. :)
 
This is getting too OT even for me, but I'll give one last reply: I can't answer for anyone else here, but I'm certainly not ignoring the doubling of cores/threads. My view on this is simple: it's about d**n time. Intel deserves zero credit for this, given that they've been dragging their feet on increasing this for a full decade. As for who has been asking for it, I'd say most of the enthusiast community for the past 3-4 years? People have been begging Intel to increase core/thread counts outside of HEDT for ages, as the increases in IPC/per-thread performance have been minimal, giving people no reason to upgrade and forcing game developers to hold back any CPU-demanding new features, as there wouldn't be an install base capable of running them. Heck, we still have people running overclocked Sandy Bridge chips and doing fine, as the de facto standard of 4c8t being the high end and 4c4t being common has caused utter stagnation in game CPU loads.

Where the core count increase does matter is in the doubling of silicon area required by the cores, which of course costs money - but CFL-R is still smaller than SB or IVB. Due to per-area cost increasing on denser nodes, the total die cost is likely higher than these, but nowhere near enough to justify a 50% price increase. Why? Because prices up until then had remained static, despite production becoming cheaper. In other words, Intel came into this with already padded margins, and decided to maintain these rather than do the sensible thing and bring price and production cost back into relation. That is quite explicitly screwing over end-users. Personally, I don't like that.
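Just to put rough numbers on that (back-of-the-envelope only: the die areas and launch prices are approximate public figures, and the per-mm² cost ratio for 14 nm vs. 32 nm is a pure assumption on my part), something like this:

```cpp
#include <cstdio>

// Back-of-the-envelope comparison of relative die cost vs. launch price.
// Die areas and prices are approximate public figures; the per-mm^2 cost
// ratio for 14 nm vs. 32 nm is an assumption made up purely for illustration.
int main() {
    const double sandy_bridge_mm2  = 216.0;   // i7-2600K (4C, 32 nm), approx.
    const double coffee_lake_r_mm2 = 174.0;   // i9-9900K (8C, 14 nm), approx.
    const double cost_ratio_14_vs_32 = 1.5;   // assumed per-mm^2 cost increase

    const double relative_die_cost =
        (coffee_lake_r_mm2 * cost_ratio_14_vs_32) / sandy_bridge_mm2;

    const double sandy_bridge_price  = 317.0;  // i7-2600K launch price, approx.
    const double coffee_lake_r_price = 488.0;  // i9-9900K launch price, approx.
    const double relative_price = coffee_lake_r_price / sandy_bridge_price;

    std::printf("Relative die cost (CFL-R vs. SNB): %.2fx\n", relative_die_cost);
    std::printf("Relative launch price:             %.2fx\n", relative_price);
    return 0;
}
```

Even with a generous assumed cost bump for the denser node, the die-cost ratio lands around 1.2x while the price ratio is around 1.5x, which is exactly the gap I'm complaining about.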

Also, what I find myopic is how you seemingly treat corporate greed as a law of nature rather than what it is: corporate greed. There is no necessity whatsoever in Intel ceasing innovation and padding margins when competition disappeared. Heck, you go one step further and blame Intel's greed on AMD, which is quite absurd. Here's a shocker: Intel could very well have kept innovating (i.e. maintained the status quo) or dropped prices as production got cheaper, no matter whether they had competition or not. That would sure have increased sales, at least. Instead, they chose to be greedy, and you're acting like that's not a choice. It is. And please don't come dragging the "fiduciary duty" crap, as there's nothing in that saying that you have to put your customers' wallets through a blender to fulfill that.

Nobody here is judging different products by different standards; quite the opposite, we're taking the whole context into account. AMD's recent rebound is more impressive due to how far behind they are - but in pure numbers, they're still slightly behind. However, they crush Intel on price/perf across loads, and roughly match them for price/perf in gaming. That's not bad. I judge Intel more harshly, as they're in a massively advantageous position, and yet have managed to completely screw this up. They're barely clinging to their lead despite having nearly a decade to cement it. That's disappointing, to say the least, but not surprising when they've prioritized squeezing money out of their customers rather than innovation. And again, that's their choice, and they'll have to live with it.

And I suppose that's how this all ties back into the RTX debacle - hiking up prices for barely-tangible performance increases and the (so far quite empty) promise of future gains, seemingly mostly to pad out corporate margins as much as possible.
 
LOL, there is so much to reply to...... but I said I was leaving it alone (in this thread), and will.

One thing I do appreciate is a mature conversation without personal barbs. That is hard to find on TPU these days. :toast:
 
Not for $1300+ they wouldn't.
Nvidia needed a way to raise prices (to over double those of the previous generation). Top-end laptop prices have doubled over the last 2 years, top-end phone prices have doubled too, and even CPUs have followed (the Intel 7700K cost $300 on release; the new 9900K costs $600). Nvidia felt they were missing out, so they used "real-time raytracing" to double their prices. Consumers be boned.

You may have missed a big part of my post, dude: I was referring to a potential add-on card with RT capabilities, and I said nothing about pricing.

NVIDIA could have made 2000 series cards with quite a bit more "horsepower" if they did NOT have the RT capabilities built in, and could quite possibly have sold them for current 2000 series prices: they could actually "get away" with it because the performance uplift vs. the 1000 series would justify it.

Instead, the "gave us" RT capabilities that are evidently insufficient for today's high end gaming, unless you find acceptable the requirement of going from 4K @ 50 - 80 FPS to 1080p @ 50 - 70 FPS in order to have partial RT (just reflections, for now).
 
Oh dear. That doesn't even look that good. The RTX 20 series is a joke. The hardware isn't ready for real-time RT yet (and it shows), and you're footing the bill for the huge die sizes because they're built on a node that will be outdated in 6 months. Worth it, though, right?

Right?
 
So that's it... The revolution... Back to the '90s! Unreal Engine already did this with DX6 or DX7.
 
All I have to say is that the moment DXR is turned on with a 2080 Ti, even on low, performance is less than a Vega 56's at 4K. For a 2080, performance at low is practically the same as a 580's. At 4K, that makes the game practically unplayable. If you want to crank it up, you'd better have a 2080 Ti and be running at 1080p. To me, that's unacceptable for a graphics card that costs over $1,000 USD. Even more so if you consider this little quote from the review:
It is very important to realize that DXR will not take over rendering of the whole scene. Everything in Battlefield V, with DXR enabled, is rendered exactly the same way, looking exactly the same as with the regular DirectX 12 renderer. The only exceptions are surfaces that are marked as "reflective" by the developer during model/level design. When one of these surfaces is rendered, it will be fed with raytraced scene data to visually present accurate reflections, which do look amazing and more detailed than anything we've ever seen before.

So remember: you're losing 50-60% of your performance so that some elements of the world look better. That's no good, no good at all.
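For anyone wondering what that quote means in practice, here's a purely conceptual sketch of the hybrid approach it describes. Every type and function name below is invented for illustration; this is not DICE's or NVIDIA's actual code.

```cpp
#include <cstdio>
#include <vector>

// Conceptual sketch only: the whole frame is rasterized as before, and only
// surfaces flagged as "reflective" by the developer get an extra ray-traced
// reflection pass. All names here are made up for illustration.
struct Surface {
    const char* name;
    bool reflective;  // flag set during model/level design
};

static void rasterize(const Surface& s) {
    std::printf("  raster pass:           %s\n", s.name);
}

static void traceReflections(const Surface& s) {
    // In a real DXR renderer this is where rays would be dispatched and the
    // results fed into this surface's shading: the expensive part.
    std::printf("  ray-traced reflection: %s\n", s.name);
}

int main() {
    const bool dxrEnabled = true;  // the in-game DXR toggle

    const std::vector<Surface> scene = {
        {"ground",       false},
        {"car body",     true},
        {"shop window",  true},
        {"wooden crate", false},
    };

    std::printf("Rendering frame (DXR %s):\n", dxrEnabled ? "on" : "off");
    for (const Surface& s : scene) {
        rasterize(s);                    // every surface renders exactly as before
        if (dxrEnabled && s.reflective)
            traceReflections(s);         // only flagged surfaces pay the RT cost
    }
    return 0;
}
```

The point being: only the flagged surfaces trigger the ray-traced pass, and even that limited amount of work is enough to cost half your frame rate.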
 
One thing we need to remember here is that hybrid RT is in its very infancy, and there is no doubt it will improve dramatically over time. There are undoubtedly undiscovered or untried tricks and workarounds to make this perform better, like rendering the RT scene at a lower resolution and upscaling with DLSS, or hitherto unknown ways of compensating for reduced ray counts. However, it seems unlikely that we'll see >100% performance improvements with current hardware, at least in the next couple of years, which is more or less what is needed to bring this to acceptable levels of price/performance. The previous generation brought us to solid 4K at 60+ Hz, and now suddenly we're back down to half that, and worse performance than even two generations back, even at lower resolutions. Add in the dramatic price increases, and we're looking at something that simply isn't acceptable. Making RTRT feasible at all is great, but what Turing really shows us is that it isn't really feasible yet, and given the necessary die area and the bleak outlook for future node shrinks on silicon, it might not be for a long, long time.
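On the lower-resolution idea specifically, the napkin math looks something like this; the rays-per-pixel figure is a placeholder assumption of mine, not a number from the game or the review:

```cpp
#include <cstdio>

// Why tracing the RT portion at a lower resolution helps: the ray budget
// scales with pixel count, so a half-resolution (quarter pixel count) pass
// that gets upscaled afterwards cuts the RT workload to about a quarter.
// The rays-per-pixel value is an assumed placeholder.
int main() {
    const double raysPerPixel = 1.0;               // assumed, for illustration only
    const double nativePixels = 3840.0 * 2160.0;   // full 4K reflection pass
    const double lowResPixels = 1920.0 * 1080.0;   // quarter-pixel-count pass

    std::printf("Rays per frame at native 4K:  %.1f million\n",
                nativePixels * raysPerPixel / 1e6);
    std::printf("Rays per frame at 1080p pass: %.1f million (%.0f%% of native)\n",
                lowResPixels * raysPerPixel / 1e6,
                100.0 * lowResPixels / nativePixels);
    return 0;
}
```

Whether the upscaling (DLSS or otherwise) can reconstruct the reflections convincingly is the open question.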
 
Using [image: Epic Games' real-time ray tracing "Reflections" demo in Unreal Engine 4, made with ILMxLAB and NVIDIA] to sell [image: the review's Battlefield V DXR comparison shot at the "low" setting]. Well done.
 
LOL, I'm sticking with my non-RTX card for now... :toast:
 