
AMD RDNA4 Architecture to Build on Features Relevant to Gaming Performance, Doesn't Want to be Baited into an AI Feature Competition with NVIDIA

Can you point to a game that uses E-cores alongside P-cores?
Yeah, plenty. Cyberpunk, Spider-Man Remastered, Spider-Man: Miles Morales.

What are YOU talking about? Both CCDs have a CCX with 8C/16T of performance cores only. The 2nd CCD runs at slightly lower clock speeds BUT USES THE SAME MICROARCHITECTURE, and those cores are still FAST, with no risk of E-cores complicating software compatibility. 7950X = 16 performance cores; 13900K = 8 performance cores.

Intel's E-cores use a dated and MUCH LOWER CLOCKED microarchitecture. The cores are NOT FAST at all. Their primary goal is to fool consumers into thinking the chip has more cores than it really has.
The i5-13600K is a "14-core chip" but only has 6 performance cores :roll: Intel STILL only offers 6-8 performance cores across the board on mainstream chips in the upper segment. The rest is useless E-cores.

Ryzen 7000 chips with 3D cache will beat Intel in gaming anyway. Hell, even the $400 7800X3D will beat the 13900KS, with its 6 GHz boost, twice if not triple the peak watt usage, and $800 price tag, and Intel will abandon the platform after 2 years as usual, meaning 14th gen will require a new socket and a new board. Milky milky time. Intel's architecture is inferior, which is why they need to run at high clock speeds to be able to compete; SADLY for Intel, this means high watt usage.

However, the i9-12900K/KS and i9-13900K/KS are pointless chips for gamers, since the i7 delivers the same gaming performance anyway, without the HUGE watt usage. Hell, even the i5s are within a few percent.
You said E-cores are useless. I said they are as useless as the 2nd CCD. Can you actually give me a couple of applications where the 2nd CCD boosts performance but E-cores don't? If not, then you HAVE to admit that E-cores are as useless as the 2nd CCD.
 
Yeah, plenty. Cyberpunk, Spider-Man Remastered, Spider-Man: Miles Morales.
Can you show me any proof that E-cores are being used simultaneously with P-cores by the games you have pointed out?
 
Can you show me any proof that E-cores are being used simultaneously by the games you have pointed out?
Sure, is a video with each individual core's usage enough as proof?
 
Sure, is a video with each individual core's usage enough as proof?
Can you tell whether the game is using the E-cores, or is it the usage of Windows 11 background processes that is being shown? After you switch the E-cores off (assuming they are being used by the game), does the performance drop? If it does drop, by how much?
 
Can you tell whether the game is using the E-cores, or is it the usage of Windows 11 background processes that is being shown? After you switch the E-cores off (assuming they are being used by the game), does the performance drop? If it does drop, by how much?
Yes, the game is using the E-cores; it's pretty obvious to tell. 16 E-cores at 30 to 50%+ usage. And yes, in those specific games I mentioned, performance drops by around 15% with E-cores off in CPU-demanding areas of the game (the apartments, Tom's Diner, etc.). That's with the 13900K; I don't know the exact numbers with a 12900K, but since that's the CPU I'm using right now, I can test it.
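For anyone who wants to check this on their own machine, here is a minimal sketch (not taken from either poster) that logs average P-core vs. E-core utilization with Python's psutil while a game is running. It assumes the common 13900K layout where logical CPUs 0-15 are the hyperthreaded P-cores and 16-31 are the E-cores; verify that mapping on your own system, and keep in mind that core-level usage alone cannot separate the game's threads from background processes.

```python
# Hedged sketch: sample per-logical-core utilization and summarize it as
# P-core vs. E-core averages. The 0-15 / 16-31 split is an ASSUMPTION
# about 13900K logical CPU numbering, not a guarantee.
import psutil

P_CORES = range(0, 16)    # assumed P-core logical CPUs (8 cores x 2 threads)
E_CORES = range(16, 32)   # assumed E-core logical CPUs (16 cores, no SMT)

for _ in range(30):  # sample roughly once per second for ~30 seconds
    per_cpu = psutil.cpu_percent(interval=1, percpu=True)
    if len(per_cpu) < 32:
        raise SystemExit("Fewer than 32 logical CPUs; adjust the ranges above.")
    p_avg = sum(per_cpu[i] for i in P_CORES) / len(P_CORES)
    e_avg = sum(per_cpu[i] for i in E_CORES) / len(E_CORES)
    print(f"P-cores: {p_avg:5.1f}%   E-cores: {e_avg:5.1f}%")
```

Comparing a run with the game closed against one with it running gives a rough sense of how much of the E-core load is the game itself rather than Windows.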
 
Yeah, plenty. Cyberpunk, Spider-Man Remastered
Is that the same Spider-Man remaster that runs better on a 7900 XT than on an overclocked 4070 Ti?
 
Yes, the game is using the E-cores; it's pretty obvious to tell. 16 E-cores at 30 to 50%+ usage. And yes, in those specific games I mentioned, performance drops by around 15% with E-cores off in CPU-demanding areas of the game (the apartments, Tom's Diner, etc.). That's with the 13900K; I don't know the exact numbers with a 12900K, but since that's the CPU I'm using right now, I can test it.
Funny, you still haven't shown anything; you just write your theories. The 13900K and 12900K will behave exactly the same in terms of E-core and P-core utilization, since Windows is scheduling those, not the CPU itself.
According to the 13900K benchmark on TPU with Cyberpunk, that is not correct at any given resolution: performance does not drop by 15% when disabling E-cores, but rather by 1.8% at 1080p.
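For clarity, the "drop" both sides are arguing about is just the relative difference between two average frame rates. A toy example with made-up FPS numbers (these are not TPU's figures or anyone's measurements from this thread):

```python
# Hypothetical numbers purely to show the arithmetic behind a "1.8% drop".
fps_ecores_on = 150.0
fps_ecores_off = 147.3

drop_pct = (fps_ecores_on - fps_ecores_off) / fps_ecores_on * 100
print(f"Performance drop with E-cores off: {drop_pct:.1f}%")  # -> 1.8%
```

Whether the number lands near 2% or near 15% then hinges entirely on which scene is measured, which is exactly what the rest of this exchange is about.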
 
And the same Cyberpunk that runs better on a 3080 than on a 7900 XT
[attached benchmark chart]


1 fps, dude, it runs 1 fps faster with RT. 1 fps.

Meanwhile it's like 40% faster with RT off; not even worth comparing the two.

"runs better on a 3080" - the nonsense you say never ceases to amaze me; you might just take the cake for the worst fanboy I've seen on this site yet.
 
Never said the 4090 is a low power card. I said it's incredibly efficient. Which it is.

Never said the 13900K is exceptional. Actually, I swapped back to my 12900K because I prefer it. What I said is that E-cores are not useless in gaming, since there are games that benefit a lot from them. Try to actually argue with what people are saying instead of constantly strawmanning.
You never said anything on topic so far.

Can this shitpost-fest be locked yet? It's clear that some would rather argue about AMD vs. Nvidia or CPUs.
 
Yeah, plenty. Cyberpunk, Spider-Man Remastered, Spider-Man: Miles Morales.


You said E-cores are useless. I said they are as useless as the 2nd CCD. Can you actually give me a couple of applications where the 2nd CCD boosts performance but E-cores don't? If not, then you HAVE to admit that E-cores are as useless as the 2nd CCD.

Oh really? TechPowerUp's test showed 2% lower performance in Cyberpunk with E-cores off.

And if you actually run Windows 10 instead of 11, most games will perform like crap because there's no Thread Director, which is essential so E-cores are not used for stuff that actually matters.
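As an illustration of what Thread Director plus the Windows 11 scheduler do automatically, here is a hedged sketch of the manual workaround people reach for on Windows 10: pinning a game's process to the P-core logical CPUs with psutil. The process name and the 0-15 P-core mapping are assumptions for the example; check your own topology and adjust.

```python
# Hedged sketch: restrict a game process to the assumed P-core logical
# CPUs (0-15 on a typical 13900K layout). Run with sufficient privileges.
import psutil

GAME_EXE = "Cyberpunk2077.exe"   # example process name; change for your game
P_CORE_CPUS = list(range(16))    # ASSUMED P-core logical CPUs

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_EXE:
        proc.cpu_affinity(P_CORE_CPUS)   # pin the whole process to P-cores
        print(f"Pinned PID {proc.pid} to logical CPUs 0-15")
```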
 
Funny, you still haven't shown anything; you just write your theories. The 13900K and 12900K will behave exactly the same in terms of E-core and P-core utilization, since Windows is scheduling those, not the CPU itself.
According to the 13900K benchmark on TPU with Cyberpunk, that is not correct at any given resolution: performance does not drop by 15% when disabling E-cores, but rather by 1.8% at 1080p.
I have videos on my channel showing E-core usage on a 13900K.

I'll show you when I'm home


1 fps, dude, it runs 1 fps faster with RT. 1 fps.

Meanwhile it's like 40% faster with RT off; not even worth comparing the two.

"runs better on a 3080" - the nonsense you say never ceases to amaze me; you might just take the cake for the worst fanboy I've seen on this site yet.
So it runs slower than a 2.5 year old card. Splendid

Oh really? TechPowerUp's test showed 2% lower performance in Cyberpunk with E-cores off.

And if you actually run Windows 10 instead of 11, most games will perform like crap because there's no Thread Director, which is essential so E-cores are not used for stuff that actually matters.
I don't really care what TPU showed; I have the CPU and the game. If TPU doesn't test in CPU-demanding areas that need more than 8 cores, then obviously you won't see a difference.
 
So it runs slower than a 2.5 year old card. Splendid

Well, looks like you forgot this: a game where a 4070 Ti is slower than Nvidia's own 2.5-year-old previous-generation card. What should I make of this, our resident fanboy? I suppose that's splendid as well.

[attached benchmark chart]
 
Okay, do you think, for example, W1z considers them gimmicks? Dodge the question, go ahead.
Everyone is entitled to their opinion. Do you really think 'you have me' now or something? Get a life.

W1z has already confirmed that despite his general statements about the need for more VRAM (generally: the core runs out as the VRAM runs out), which I do agree with, the exceptions do make the rule, and we've already seen several examples appear in his own reviews where this had to be acknowledged. Similarly, W1z has been seen saying that the technologies in play here are progress and that he likes to see it, but also saying how abysmal the performance can get in certain games. It's a thing called nuance. You should try it someday.

And that's my general stance with regard to these new features too: the general movement is good. But paying through the nose for them today is just early-adopting stuff with an expiry date, with very little to show for it. Upscaling technologies are good. And they're much better if they are hardware agnostic.

Similarly, RT tech is good. And it's much better if it's hardware agnostic.
AMD is proving that the latter in fact 'just works' too.

And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
 
Everyone is entitled to their opinion. Do you really think 'you have me' now or something? Get a life.

W1z has already confirmed that despite his general statements about the need for more VRAM (generally: the core runs out as the VRAM runs out), which I do agree with, the exceptions do make the rule, and we've already seen several examples appear in his own reviews where this had to be acknowledged. Similarly, W1z has been seen saying that the technologies in play here are progress and that he likes to see it, but also saying how abysmal the performance can get in certain games. It's a thing called nuance. You should try it someday.

And that's my general stance with regard to these new features too: the general movement is good. But paying through the nose for them today is just early-adopting stuff with an expiry date, with very little to show for it. Upscaling technologies are good. And they're much better if they are hardware agnostic.

Similarly, RT tech is good. And it's much better if it's hardware agnostic.
AMD is proving that the latter in fact 'just works' too.

And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
We were talking about DLSS/FSR; at least, that's what the post I quoted was talking about. Those are definitely not gimmicks. RT, sure, you can call it that, especially in certain games.
 
I have videos on my channel showing E-core usage on a 13900K.

I'll show you when I'm home


So it runs slower than a 2.5 year old card. Splendid


I don't really care what TPU showed; I have the CPU and the game. If TPU doesn't test in CPU-demanding areas that need more than 8 cores, then obviously you won't see a difference.
I do, because I look at facts and proof. If you actually need more than 8 cores in a game, you are screwed anyway, because E-cores are slow and run at ~4 GHz using a dated architecture.

The i7-13700K has the same gaming performance as the i9-13900K.

Even the i5-13600K only performs 1% behind the i9-13900K. For half the price.

Efficiency cores give you exactly nothing. Performance cores are what matter for gaming performance.

The Ryzen 7800X3D will smack the i9-13900K for half the price in a few weeks. Oh, and half the watt usage.

You can enable DLSS 3 to make fake frames though, that will remove the CPU bottleneck :roll:
 
Well, looks like you forgot this: a game where a 4070 Ti is slower than Nvidia's own 2.5-year-old previous-generation card. What should I make of this, our resident fanboy? I suppose that's splendid as well.

[attached benchmark chart]
The 4070 Ti is slower than a 2-year-old, much more expensive card. The XT is slower than a 2-year-old, cheaper card.

But it doesn't matter; it wasn't me who made the claim. Someone said "are we talking about the Spider-Man where the 7900 XT is faster than a 4070 Ti" as if that means something. It doesn't. It's a pointless statement that doesn't seem to bother you. You seem particularly bothered when a specific company is losing. Someone would even call you biased.

I do, because I look at facts and proof. If you actually need more than 8 cores in a game, you are screwed anyway, because E-cores are slow and run at ~4 GHz using a dated architecture.

The i7-13700K has the same gaming performance as the i9-13900K.

Even the i5-13600K only performs 1% behind the i9-13900K. For half the price.

Efficiency cores give you exactly nothing. Performance cores are what matter for gaming performance.

The Ryzen 7800X3D will smack the i9-13900K for half the price in a few weeks. Oh, and half the watt usage.

You can enable DLSS 3 to make fake frames though, that will remove the CPU bottleneck :roll:
Well, obviously you don't look at facts and proof, because you don't have the actual CPU. I do, and I'm telling you that in CPU-demanding areas, E-cores boost performance by a lot.

I'll make some videos with E-cores off as well, since on my channel I only have videos with E-cores on, and you'll see that there is a difference.
 
Someone would even call you biased
Yeah, sure, because I am the one who says a card that runs 1 fps faster in RT and 40% slower in raster in a game "runs better". That's totally not a laughable statement, and it doesn't sound like something a biased person would ever say.
 
We were talking about DLSS/FSR; at least, that's what the post I quoted was talking about. Those are definitely not gimmicks. RT, sure, you can call it that, especially in certain games.
Correct, and I mentioned them. But this topic is about 'the use of AI' for GPU acceleration in general, too - see title.

None of the technologies in play 'require AI' because Nvidia said so, and the point isn't proven either because Nvidia has a larger share of the market now. That just proves the marketing works - until a competitor shows a competitive product/a design win (like Zen!) and the world turns upside down. See, the truth isn't what the majority thinks it is. The truth is what reality dictates - a principle people seem to have forgotten in their online bubbles. And then they meet the real world, where real shit has real consequences. Such as the use of die space vs cost vs margins vs R&D budgets.
 
None of the technologies in play 'require AI' because Nvidia said so
I still remember the first implementation of DLSS in Control, where Remedy said it didn't actually use the Tensor cores, proving they're completely redundant.
 
I still remember the first implementation of DLSS in Control, where Remedy said it didn't actually use the Tensor cores, proving they're completely redundant.
Yeah, I never quite understood what's 'AI' about upscaling anyway. You just create a set of rules to implement upscaling; there is no way you're passing through every scene in a game to determine what's what. People play the game with a flexible viewport, they don't rerun a benchmark.
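To make the "set of rules" point concrete, here is a minimal sketch of purely rule-based spatial upscaling: a fixed Lanczos filter via Pillow, with no learned model anywhere. The file names are hypothetical, and note that modern game upscalers (FSR 2, DLSS) also feed in motion vectors and previous frames, so this only illustrates the non-AI approach rather than standing in for them.

```python
# Rule-based upscaling with a fixed resampling kernel: no neural network,
# no training data, just a deterministic filter applied to every frame.
from PIL import Image

frame = Image.open("frame_1080p.png")                            # hypothetical input frame
upscaled = frame.resize((3840, 2160), Image.Resampling.LANCZOS)  # 1080p -> 4K
upscaled.save("frame_4k_lanczos.png")
```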

Nvidia is clearly charging ahead with their implementation and marketing because having to dial it back would:
A. destroy their dual-use strategy for datacenter and consumer GPUs
B. force them to revert to old technology sans special cores
C. redesign the CUDA core to actually do more per clock, or somehow improve throughput further while carrying their old feature set

They realistically can't go back, so strategically, AMD's bet is a perfect one - note that I said this exact thing when they talked about their proprietary RTX cores shortly after they were initially announced. Also, the fact that Wang is saying now what he said years ago at around the same time... Time might be on either company's side, really. It's going to be exciting to see how this works out. Still, the fact that AMD is still on the same trajectory is telling; it shows they have faith in the approach of doing more with less. Historically, doing more with less has always been the success formula for hardware - and it used to be the 'Nvidia thing'.
 
Correct, and I mentioned them. But this topic is about 'the use of AI' for GPU acceleration in general, too - see title.

None of the technologies in play 'require AI' because Nvidia said so, and the point isn't proven either because Nvidia has a larger share of the market now. That just proves the marketing works - until a competitor shows a competitive product/a design win (like Zen!) and the world turns upside down. See, the truth isn't what the majority thinks it is. The truth is what reality dictates - a principle people seem to have forgotten in their online bubbles. And then they meet the real world, where real shit has real consequences. Such as the use of die space vs cost vs margins vs R&D budgets.
But you are not paying through the nose for them. That's just false. At launch prices, the 7900 XT was 15% more expensive than the 4070 Ti while being only 12% faster in raster, much slower in RT, with worse upscaling and worse power draw. So how exactly are you paying through the nose for them? The 4070 Ti in fact had better performance per dollar even on just raster. I'm sorry, but it seems to me you are paying through the nose for AMD.
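Taking the poster's own launch-price and raster figures at face value (not independently verified here), the perf-per-dollar comparison works out like this:

```python
# Uses the figures quoted in the post above (15% higher price, 12% higher
# raster performance for the 7900 XT vs. the 4070 Ti); these are the
# poster's numbers, not verified benchmark data.
price_ratio = 1.15   # 7900 XT price relative to 4070 Ti
perf_ratio = 1.12    # 7900 XT raster performance relative to 4070 Ti

relative_perf_per_dollar = perf_ratio / price_ratio
print(f"7900 XT perf/$ vs 4070 Ti: {relative_perf_per_dollar:.3f}")  # ~0.974, i.e. ~2.6% worse
```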
 
But you are not paying through the nose for them. That's just false. At launch prices, the 7900 XT was 15% more expensive than the 4070 Ti while being only 12% faster in raster, much slower in RT, with worse upscaling and worse power draw. So how exactly are you paying through the nose for them? The 4070 Ti in fact had better performance per dollar even on just raster. I'm sorry, but it seems to me you are paying through the nose for AMD.
That's just false, you say, and yet perf/dollar has barely moved forward since the first gen of RTX cards. You're living in a nice alternate reality :)

Note my specs, and note how I'm not paying through the nose at any time ever - I still run a 1080 because every offer past it has been regression, not progress. You might not want to see it, but the fact is, the price to get an x80 GPU has more than doubled since then and you actually get less hardware for it, such as lower VRAM relative to the core.

I'm not even jumping on a 550-600 dollar RX 6800 (XT), because we're in 2023 now and that's the original MSRP from years back. That's paying too much for what it's going to do, even if it nearly doubles game performance relative to the old card.

There are a LOT of people in this dilemma right now. Every offer the market currently has is crappy in one way or another. If a deal is hard to swallow, it's a no-deal in my world. Good deals feel like a win-win. There is no way any card in the new gen is a win-win right now.

Chasing the cutting edge has never been great, even when I did try doing so. I've learned I like my products & purchases solid and steady, so that I get what I pay for.

Hey, and don't take it from me, you don't have to:
 
And that is why Nvidia's approach is indeed a gimmick, where fools and money get parted. History repeats.
This is what you said that I disagreed with. Of course prices are up, but they are up for both AMD and Nvidia cards. In fact, as evident from the 4070 Ti launch price compared to the 7900 XT, you are not paying for those gimmicks; they come for free, since the 4070 Ti had the better value even for pure raster performance, completely excluding RT, DLSS, and FG.

So who's the fool here? AMD buyers are paying more money for fewer features, higher power draw, worse RT, and similar raster per dollar.
 
I'd agree generally. AI is taking off, but very little of it is turnkey through a simple exe; it's usually a whole environment you have to set up. Nvidia has had Tensor cores in consumer chips since 2018, yet no AI in games. Then there's stuff that may use AI cores, or could use them, but runs just fine without them. Nvidia has RTX Voice, which is awesome, but apps do a fine job without AI cores. I have voice.ai installed and it uses no more than 3% of my CPU. We have so much CPU overhead, and we keep getting more and more cores that already go underutilized. For games, Nvidia has DLSS, but the competitors are still pretty dang good.

With RDNA3 we are seeing AI accelerators that will largely go unused, especially for gaming, until FSR3 comes out. Zen5 will introduce AI accelerators, and we already have that laptop Zen part with XDNA. On top of all the CPU cycles that go unused.

It's coming, but I think it's overrated in the consumer space at the moment. Needing those Tensor cores on a gaming GPU is very niche. On the business side, AMD has had CDNA with AI. What is really limiting is consumer software and strong AI environments on the AMD side. For gaming I'm more excited about ray tracing and would rather that be the focus. RT is newer and needs that dedicated hardware. But generally, we are still not getting much hardware to accelerate RT performance, even from Nvidia. If, for example, Nvidia removed all that Tensor hardware, replaced it with RT hardware, and just used FSR or similar, that would be mouth-watering performance.

As for AMD's argument: if they made up for it in rasterization and ray-tracing performance, that would make sense. But they can't even do that. It seems more like AMD just generally lacks resources.
 