
AMD Readies Radeon RX 6500 XT and RX 6400 Graphics Cards

64-bit interface on a midrange card :laugh:

Nvidia could still learn from AMD, yet Nvidia got flamed for their GTX 1050 3GB with its 96-bit bus :roll:


But AMD on the Steam Hardware Survey:
May 2020: 16%
October 2021: 15.2%
:laugh:


Intel is already at 8.92% with just their iGPUs; if Intel releases dedicated GPUs soon, I don't think that's good for AMD. :toast:
AMD made a few great moves, and yet their share has dropped to nearly half of what it was back in the Cypress (58xx) and Cayman (69xx) years.

AMD at times held 28% in the Steam survey :rockout:



The best-placed AMD card right now is the RX 580 at 1.66%
The best-placed Nvidia card is the GTX 1060 at 8.08%
 
LOL, the 6400 will provide 720p gaming at best and still be over $150. Better off with an APU. Surprised it isn't a 32-bit bus with a whopping 8 MB of Infinity Cache.
 
64-bit interface on a midrange card :laugh:
Midrange? These are entry-level GPUs. Not in price, perhaps, but GPU prices are insane across the board, so...
 
It's highly unlikely that this will perform on a level with the 1650 Super - remember, these things clock very high and are crazy efficient.
Look at the 6600 XT review. The 6500 XT is literally half that card, which lines up with the RX 580 / 1650.

The 1650 Super isn't that much slower than a 1650, but if anything that is just an argument for why a decently binned 75W RDNA2 chip should be closer to the 1660S - RDNA2 is just that far ahead of Turing in terms of performance/W.

I think you mean 1660. However, AMD has just never had memory management as good as Nvidia's. You will find many examples where the 4 GB 5500 XT falls on its face while the 1650 holds strong.
 
LOL, the 6400 will provide 720p gaming at best and still be over $150. Better off with an APU. Surprised it isn't a 32-bit bus with a whopping 8 MB of Infinity Cache.
Wait till the 6300, maybe it will come with a 32-bit bus and 2 GB :laugh:
 
Look at the 6600 XT review. The 6500 XT is literally half that card, which lines up with the RX 580 / 1650.
Except that GPU performance never scales linearly with compute resources. If that were the case, why does the RTX 3060 (3584 SPs, 12.74 TFLOPS FP32, 192-bit bus) deliver >50% of the performance of the 3080 (8704 SPs, 29.77 TFLOPS FP32, 320-bit bus)? That's 41% of the shader hardware, 43% of the compute resources (so it clocks slightly higher), and 60% of the bus width, and it delivers 52% of the performance according to TPU's database. It's not as simple as counting hardware, and it never has been.

Similarly, the RX 6600 has 28 CUs to the XT's 32 (87.5%), 8.9 TF vs. 10.6 (83%, so it clocks a bit lower), the same 128-bit bus width, the same 32 MB of IC, and delivers 85% of the performance. The RX 5600 XT vs. the 5700 XT had 36 CUs vs. 40 (90%), 7.2 TF vs. 9.8 (73%, so clocked quite a bit lower), a 192-bit bus vs. 256-bit (75%), yet delivered 82% of the performance. Sometimes these things seem to scale linearly, sometimes they don't at all. Thus, calculations like these are - at best - very rough estimates and should be treated as such.
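To make the comparison concrete, here's the same arithmetic as a quick Python sketch. All figures are the ones quoted in this post (TPU database numbers), so treat them as approximate:

```python
# Spec ratios of the cut-down card vs. its bigger sibling, compared
# against the measured relative performance quoted above.
cards = {
    # (shader ratio, FP32 TFLOPS ratio, bus-width ratio, measured perf)
    "RTX 3060 vs 3080":      (3584 / 8704, 12.74 / 29.77, 192 / 320, 0.52),
    "RX 6600 vs 6600 XT":    (28 / 32,     8.9 / 10.6,    128 / 128, 0.85),
    "RX 5600 XT vs 5700 XT": (36 / 40,     7.2 / 9.8,     192 / 256, 0.82),
}

for pair, (shaders, tflops, bus, perf) in cards.items():
    print(f"{pair}: shaders {shaders:.0%}, compute {tflops:.0%}, "
          f"bus {bus:.0%} -> measured {perf:.0%}")
```

Running it shows the point directly: no single spec ratio predicts the measured number for all three pairs.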
I think you mean 1660.
Yep, sorry about that.
However, AMD has just never had memory management as good as Nvidia's. You will find many examples where the 4 GB 5500 XT falls on its face while the 1650 holds strong.
That's entirely possible, but the addition of IC has undoubtedly changed that in some way - not to mention AMD's consistent driver development work over the past few years. They might be held back slightly more, but ultimately these GPUs will be more limited by their compute capabilities than anything else. And, being compute limited, stepping down to a non-ultra quality setting is the smart thing to do anyway, as ultra is always wasteful and way past the point of diminishing returns in terms of perceptible quality increases.

IMO, this is a non-issue.

LOL, the 6400 will provide 720p gaming at best and still be over $150. Better off with an APU. Surprised it isn't a 32-bit bus with a whopping 8 MB of Infinity Cache.
That's... questionable. Even with worst-case-scenario scaling of the 6500 XT being exactly half of a 6600 XT, that GPU delivers an average of 133 FPS across TPU's test suite. Half of that would then be 66.5 FPS. Of course, that average includes games that aren't hitting much higher than 60 FPS, which would then be barely passing 30 FPS - but this is at ultra settings on an entry-level GPU. The 6400 is of course further cut down from this, but cut-down GPUs always make more of their compute resources than their fully enabled siblings. 1080p60 at high or mid-high settings, even in current demanding AAA titles, should be perfectly doable.
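The worst-case estimate above is trivial to write down, but for clarity, here it is as a sketch (the 133 FPS average is the figure quoted in this post, assumed accurate):

```python
# Rough performance floor for the 6500 XT, assuming worst-case linear
# scaling at exactly half of a 6600 XT.
avg_6600xt = 133.0                      # FPS, TPU test-suite average
worst_case_6500xt = avg_6600xt / 2      # half the card, half the FPS
print(f"Worst-case 6500 XT average: {worst_case_6500xt:.1f} FPS")
```

In practice the real number should land above this floor, since sub-linear scaling works in the smaller card's favour.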
 
If the XT came with at least 1280 shaders for $100 it probably be ok.
 
Except that GPU performance never scales linearly with compute resources. If that were the case, why does the RTX 3060 (3584 SPs, 12.74 TFLOPS FP32, 192-bit bus) deliver >50% of the performance of the 3080 (8704 SPs, 29.77 TFLOPS FP32, 320-bit bus)? That's 41% of the shader hardware, 43% of the compute resources (so it clocks slightly higher), and 60% of the bus width, and it delivers 52% of the performance according to TPU's database. It's not as simple as counting hardware, and it never has been.

Similarly, the RX 6600 has 28 CUs to the XT's 32 (87.5%), 8.9 TF vs. 10.6 (83%, so it clocks a bit lower), the same 128-bit bus width, the same 32 MB of IC, and delivers 85% of the performance. The RX 5600 XT vs. the 5700 XT had 36 CUs vs. 40 (90%), 7.2 TF vs. 9.8 (73%, so clocked quite a bit lower), a 192-bit bus vs. 256-bit (75%), yet delivered 82% of the performance. Sometimes these things seem to scale linearly, sometimes they don't at all. Thus, calculations like these are - at best - very rough estimates and should be treated as such.

I know they don't scale linearly, but some of that is due to game FPS caps and CPU bottlenecks at the high end. Furthermore, you did not give conclusive evidence with your examples, as they were a mixed batch and some were not 50% across all aspects, i.e. 43% TFLOPS and 60% bus for 52% performance. Also, you show bus width, which is not that important, when you should show actual memory bandwidth.
 
I know they don't scale linearly, but some of that is due to game FPS caps and CPU bottlenecks at the high end. Furthermore, you did not give conclusive evidence with your examples, as they were a mixed batch and some were not 50% across all aspects, i.e. 43% TFLOPS and 60% bus for 52% performance. Also, you show bus width, which is not that important, when you should show actual memory bandwidth.
These GPUs mostly use similarly clocked VRAM, and if anything, accounting for the faster RAM on cards like the 3080 just makes it harder to explain why the 3060 performs as it does. You're right that CPU bottlenecks come somewhat into play, but TPU's performance comparisons for high-end cards are based on 2160p performance, at which essentially no games are CPU-limited.
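For what it's worth, bus width and actual bandwidth can diverge a lot once memory type and data rate differ. A quick sketch - note the per-pin data rates here are the commonly listed retail specs for these cards, not numbers from this thread, so double-check them against TPU's database:

```python
# Peak memory bandwidth = bus width (in bytes) * per-pin data rate.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and data rate."""
    return bus_bits / 8 * gbps_per_pin

rtx3060 = bandwidth_gbs(192, 15.0)   # GDDR6  @ 15 Gbps
rtx3080 = bandwidth_gbs(320, 19.0)   # GDDR6X @ 19 Gbps
print(f"3060: {rtx3060:.0f} GB/s, 3080: {rtx3080:.0f} GB/s, "
      f"ratio {rtx3060 / rtx3080:.0%}")
```

So on actual bandwidth the 3060 sits at roughly 47% of the 3080 rather than the 60% the bus width alone suggests - which, as noted, makes its 52% performance even harder to explain by spec ratios.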

Also, you seem to have missed the point: my argument was that scaling cannot reliably be predicted, period. The drop-off from the high end is typically less than linear with shader resources, but crucially it isn't linear at all - it's unpredictable and complex. That the end result in performance aligns neither with any single spec nor with some simple averaging of them is precisely my point. Your argument started from an assumption of linear scaling with compute resources and bandwidth, which might be true, but typically isn't, and is thus a bad assumption to start from.
 
It's X58, so it's a PCIe 2.0 platform. Well, I've been a budget gamer more or less the whole time I've been into PCs, so it's not that big a deal.

But on the topic itself, a 6500 XT sounds kinda interesting for my 2nd rig, if it's supported on such an old platform..
It would be supported, but if it's an x8 card like some of the new ones, you may see some performance loss at x8 2.0
(Nothing HUGE, but it'd be there)
 
It would be supported, but if it's an x8 card like some of the new ones, you may see some performance loss at x8 2.0
(Nothing HUGE, but it'd be there)
Any loss like that is most likely entirely drowned out by the CPU being old, the RAM being slow, etc. PCIe scaling tests are typically run on high end gear to eliminate other variables after all. On old gear like that, there will be plenty of other bottlenecks.
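To put "x8 on PCIe 2.0" in perspective, here's a sketch of approximate usable link bandwidth per generation. The per-lane figures are the standard values after encoding overhead (8b/10b for 2.0, 128b/130b for 3.0/4.0), rounded:

```python
# Approximate usable PCIe bandwidth (GB/s) per lane, after encoding overhead.
PER_LANE_GBS = {2.0: 0.5, 3.0: 0.985, 4.0: 1.969}

def link_bandwidth(gen: float, lanes: int) -> float:
    """Usable one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBS[gen] * lanes

print(f"PCIe 2.0 x8: {link_bandwidth(2.0, 8):.1f} GB/s")
print(f"PCIe 4.0 x8: {link_bandwidth(4.0, 8):.1f} GB/s")
```

An x8 card on PCIe 2.0 gets about 4 GB/s, roughly a quarter of the same link on 4.0 - measurable in scaling tests, but as said above, on an X58 rig other bottlenecks will likely dominate.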
 
Any loss like that is most likely entirely drowned out by the CPU being old, the RAM being slow, etc. PCIe scaling tests are typically run on high end gear to eliminate other variables after all. On old gear like that, there will be plenty of other bottlenecks.
I agree, it'd be very difficult to know where the bottleneck may be, but it's worth knowing about.
 
Interesting - is it the same as the Xbox Series S GPU? Can't wait to see that 4TF monster ripping through games. PowerColor can call it the Little Devil.
[attached image: jersey devil.png]
 
Interesting - is it the same as the Xbox Series S GPU? Can't wait to see that 4TF monster ripping through games. PowerColor can call it the Little Devil.
No, the XSS has 20 (enabled) CUs and no Infinity Cache, this die reportedly tops out at 16 CUs and has 16MB of IC. The XSS also has a much wider memory bus, but that is shared with the CPU and rest of the SoC. This also likely clocks ~1GHz higher than the very conservatively clocked XSS. Of course the XSS is an APU, so it's an entirely different design anyway. But related, obviously, as both are RDNA2 GPUs on TSMC 7nm.
 
No, the XSS has 20 (enabled) CUs and no Infinity Cache, this die reportedly tops out at 16 CUs and has 16MB of IC. The XSS also has a much wider memory bus, but that is shared with the CPU and rest of the SoC. This also likely clocks ~1GHz higher than the very conservatively clocked XSS. Of course the XSS is an APU, so it's an entirely different design anyway. But related, obviously, as both are RDNA2 GPUs on TSMC 7nm.
Both the XSS and this card will probably cost the same too!

Seriously, how can anyone get excited about this 64-bit, 4 GB POS? Nvidia's 3050 is rumored to be 8 GB (likely 128-bit). Infinity Cache will not make up for that 2x bandwidth deficit.

We are about to get RX 480 price and performance 5 years later.
 
Happily waiting for the RX 6400, but March is a long way off.
 
So, Videocardz has tons of new rumors regarding these SKUs: https://videocardz.com/newz/amd-radeon-rx-6500xt-and-rx-6400-to-be-the-first-6nm-graphics-cards

- 6 nm production, which could mean better availability
- 107W for the 6500 XT :( and 53W for the 6400 :love:
- 6400 OEM only :(
- 6500 XT launches on January 19

@Valantar: your thoughts?

107W is really a shame, so close to not needing a power connector :(

Hopefully they use an 8-pin at least (a 6-pin, or a 6+2 where you're left with 2 pins hanging out, sucks!), but I doubt it
 
What's wrong with 107W?

It's a lot nicer than the 300-350W I deal with
 