
RX 5700, RX 5700 XT review leak(s)

So we're just 3 days away from the official launch, but some reviewers are failing to keep their new content hidden.


[Chart: AMD Radeon RX 5700 3DMark Time Spy results]


[Chart: AMD Radeon RX 5700 3DMark Fire Strike results]


Click on the VideoCardz article for the rest of the charts.
 
The 5700 XT could use a US$50 price drop; it'd be much more competitive at that price point. I wonder when undervolting results will come out. AMD cards tend to ship with a lot of stock voltage and perform much better with an undervolt.
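For a rough sense of why undervolting pays off so well: dynamic power scales roughly with frequency times voltage squared, so even a modest voltage drop cuts power noticeably. A back-of-the-envelope sketch, where the clocks and voltages are made-up illustration values rather than measured 5700 XT figures:

```python
# Rough dynamic-power estimate: P ~ f * V^2 (switching capacitance assumed constant).
# The operating points below are illustrative placeholders, not measured RX 5700 XT values.

def relative_power(f_mhz: float, v_volts: float, f_ref_mhz: float, v_ref_volts: float) -> float:
    """Power relative to a reference clock/voltage operating point."""
    return (f_mhz / f_ref_mhz) * (v_volts / v_ref_volts) ** 2

stock = (1900.0, 1.20)        # hypothetical stock boost clock (MHz) and voltage (V)
undervolted = (1850.0, 1.05)  # hypothetical undervolted operating point

ratio = relative_power(*undervolted, *stock)
print(f"Estimated power vs. stock: {ratio:.0%}")  # ~75% of stock power for ~3% less clock
```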
 
You see how your table is unfair; I mean it's not your table, it's NVIDIA's trick.
The GTX 1080 Ti shows up slower than the RTX 2080 FE. Why? Because with Turing, NVIDIA raised the Founders Edition clock by 90 MHz and used those results in hundreds of comparisons, while other people get much slower reference models from other brands. And they didn't forget to charge extra over the reference models from all the other companies.
Without exactly that trick, NVIDIA would not have been able to present the RTX 2080 as faster than the GTX 1080 Ti.

They needed that trick to present the RTX 2080 as faster than the GTX 1080 Ti, and they needed the same trick to hide the tragically small improvement from the GTX 1080 Ti to its successor, the RTX 2080 Ti.
Without the trick the difference would be less than 20%, yet they asked $300-350 more.
If NVIDIA raised the Founders Edition by 90 MHz, why can't people with a Pascal GTX 1080 Ti just overclock their GPU by 90 MHz and cancel out NVIDIA's trick?

Because if you look at any factory-overclocked GTX 1080 Ti model, the results are different.
My GTX 1080 Ti gets a 10,300 graphics score at factory frequency.
NVIDIA raised the Founders Edition frequency to show better results, causing a higher failure rate and problems for significantly more people than in previous years.
The manipulation of 3DMark results and playing with the numbers between similar cards has reached epic proportions.
There is no official result database for the new generation and no average results; they all differ, and companies use them to show that the new GPU is a little better than the previous one even when they are the same. And in the end customers are always disappointed with the improvement. When you play for two years on one GPU and replace it with one that is 15% faster, you feel nothing if you play at 4K resolution and high details, where every GPU struggles to deliver 50-60 fps in the most graphically demanding games. Above 30-40% you start to feel a little improvement.
 
This is about the 5700 and 5700 XT, not about NVIDIA's FE/non-FE "tricks".

And I just linked an article on another site that captured an article on a review site before it was taken down (once they realized their mistake).

In any case, the point is that the 5700/5700 XT are weaker than the new "Super" cards while being priced somewhat similarly (especially the XT; it's way too expensive for its performance).
 
I know, but I want to explain how people are being tricked. The reference card with Turing is no longer what it was with Kepler, Maxwell, and Pascal.
NVIDIA didn't like the numbers, so they decided to push the reference model and charge more for it, while continuing to compare it against previous generations where the reference models weren't overclocked.

NVIDIA knew that very few customers would think about what they had done... FE was the marker for the reference version for years, and FE stays the marker for reference even if they charge $100 more and clock it 100 MHz higher; the Turing FE is still the model used for comparisons against the FE reference cards of previous generations. It was easier for them to raise the clock from 1710 to 1800 MHz than to explain that an RTX 2080 at 1710 MHz is 2-3% slower than a GTX 1080 Ti FE. In reality the GTX 1080 Ti FE was 2-3% faster than the RTX 2080s sold as Turbo, iCX2, etc...
 
Seems like RDNA is in fact as power-efficient as Turing, how about that. I wonder what all those naysayers claiming AMD can't possibly scale this architecture up to higher-end products will think about this.
 
You see how your table is unfair; I mean it's not your table, it's NVIDIA's trick.
The GTX 1080 Ti shows up slower than the RTX 2080 FE. Why? Because with Turing, NVIDIA raised the Founders Edition clock by 90 MHz and used those results in hundreds of comparisons, while other people get much slower reference models from other brands. And they didn't forget to charge extra over the reference models from all the other companies.
Without exactly that trick, NVIDIA would not have been able to present the RTX 2080 as faster than the GTX 1080 Ti.

They needed that trick to present the RTX 2080 as faster than the GTX 1080 Ti, and they needed the same trick to hide the tragically small improvement from the GTX 1080 Ti to its successor, the RTX 2080 Ti.
Without the trick the difference would be less than 20%, yet they asked $300-350 more.
If NVIDIA raised the Founders Edition by 90 MHz, why can't people with a Pascal GTX 1080 Ti just overclock their GPU by 90 MHz and cancel out NVIDIA's trick?

Because if you look at any factory-overclocked GTX 1080 Ti model, the results are different.
My GTX 1080 Ti gets a 10,300 graphics score at factory frequency.
NVIDIA raised the Founders Edition frequency to show better results, causing a higher failure rate and problems for significantly more people than in previous years.
The manipulation of 3DMark results and playing with the numbers between similar cards has reached epic proportions.
There is no official result database for the new generation and no average results; they all differ, and companies use them to show that the new GPU is a little better than the previous one even when they are the same. And in the end customers are always disappointed with the improvement. When you play for two years on one GPU and replace it with one that is 15% faster, you feel nothing if you play at 4K resolution and high details, where every GPU struggles to deliver 50-60 fps in the most graphically demanding games. Above 30-40% you start to feel a little improvement.

This is correct, and with Pascal they essentially did the same thing compared to Maxwell: Maxwell didn't boost nearly as high and had more OC headroom, especially the 980 Ti. With Pascal, you can call yourself lucky to get 5-7% over the actual boost clocks.

Seems like RDNA is in fact as power-efficient as Turing, how about that. I wonder what all those naysayers claiming AMD can't possibly scale this architecture up to higher-end products will think about this.

They're on 7nm. Imagine if they hadn't caught up!
 
They're on 7nm. Imagine if they hadn't caught up!

Not hard to imagine at all; nodes are becoming more restrictive. Lower power consumption used to be a given. Now you have a choice: either chase higher clocks and rampant power consumption, or keep the power in check and get your speed increase from somewhere else with more transistors. It looks like AMD found a middle ground, but I still think RDNA is clocked outside its optimum power envelope. In other words, this could have been a lot worse.
 
Seems like RDNA is in fact as power-efficient as Turing, how about that. I wonder what all those naysayers claiming AMD can't possibly scale this architecture up to higher-end products will think about this.

How about waiting for W1zzard's review instead?

Also, are you getting one? I see a 5700 XT Anniversary Edition with your name on it. Time to finally ditch that 1080 made by the evil company and join the RED rebellion with RDNA!
 
Also, are you getting one? I see a 5700 XT Anniversary Edition with your name on it. Time to finally ditch that 1080 made by the evil company and join the RED rebellion with RDNA!

It's nice having random people on the internet interested in what I will do; it's like I'm a celebrity.

How about waiting for W1zzard's review instead?

How about not posting worthless off-topic fanboy remarks?
 
Thanks for the leaks. I wonder if Navi will be the opposite of the Radeon VII: the Radeon VII is so much better in DX12, while these images show Navi doing much better in DX11 than DX12.

A synthetic bench, no less.
 
The most interesting thing out of it is the power draw numbers.

[Chart: AMD Radeon RX 5700 power consumption]


The review was conducted with early drivers, as explained in the screenshot below. Apparently, these drivers do not even support overclocking. Hence, the results may not correspond to the final performance.

If these benchmark numbers are true, it explains why NVIDIA felt it had to do the Super refresh.
 
I've got my eye on that 5700 card. This could be the real mid-range, performance-per-dollar king. That is to say, down the track with a price drop it could be, and a potentially nice upgrade for current Polaris owners.

Pair this with an overclocked Ryzen 5 3600 on a B450 motherboard and it's screaming value!
 
I have a FreeSync monitor, and since it isn't validated by NVIDIA as a good monitor for variable refresh rate, I have to use AMD GPUs, so one of these will be my next GPU; their MSRPs are very close. In my hands that power consumption will likely be cut close to half while keeping 90% of the performance. I undervolt everything.
 
Lower power consumption used to be a given. Now you have a choice: either chase higher clocks and rampant power consumption, or keep the power in check and get your speed increase from somewhere else with more transistors.

Weird how NVIDIA hasn't had this problem for 3 generations.
 
Weird how NVIDIA hasn't had this problem for 3 generations.

Sure they did. When they designed Turing they had a choice: increase clocks or increase transistor count. And that's how we got chips near their reticle limit on the market, with their power kept in check.
 
Sure they did. When they designed Turing they had a choice: increase clocks or increase transistor count. And that's how we got chips near their reticle limit on the market, with their power kept in check.
I think you are off about this. Why do you think they had a choice of increasing clocks? 16/14/12 nm seems to top out somewhere around 2 GHz before voltage, and with it power consumption, goes all the way to hell. That was the case with Pascal and it is the case with Turing. We do not know about 7 nm yet. AMD never managed to push frequencies as far as they could possibly go on 16/14/12 nm; Polaris and Vega were both power-limited more than anything else. 7 nm Vega and now 7 nm Navi both seem to be capable of 2 GHz-ish before perf/W starts to decline rapidly (a rough sketch of that curve is at the end of this post).
Lower power consumption used to be a given. Now you have a choice: either chase higher clocks and rampant power consumption, or keep the power in check and get your speed increase from somewhere else with more transistors.
Lower power consumption is still a given. Along with area reduction, these are the two remaining things that a node shrink still brings.
If nothing else, Vega 64 compared to Radeon VII shows this very, very clearly.
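To put a shape on that "declines rapidly past ~2 GHz" claim, here is a toy perf/W model. The voltage/frequency curve below is entirely made up to illustrate the trend, not real 7 nm silicon data:

```python
# Toy perf/W model: performance scales with clock, power scales with clock * voltage^2,
# and voltage has to ramp sharply past the process "knee".
# All numbers are invented to show the shape of the curve, not real 7 nm data.

def required_voltage(f_mhz: float) -> float:
    knee_mhz = 1800.0   # hypothetical knee frequency
    base_v = 0.95       # hypothetical voltage at or below the knee
    if f_mhz <= knee_mhz:
        return base_v
    return base_v + (f_mhz - knee_mhz) * 0.0015  # steep voltage ramp past the knee

for f in (1600, 1800, 2000, 2200):
    v = required_voltage(f)
    perf_per_watt = 1.0 / (v * v)  # perf/W ~ f / (f * V^2) = 1 / V^2
    print(f"{f} MHz @ {v:.2f} V -> relative perf/W {perf_per_watt:.2f}")
```

Below the knee, perf/W barely moves; past it, the extra voltage eats the gains quickly, which is consistent with Polaris and Vega being power-limited rather than frequency-limited.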
 
Looking at the power consumption of the V64, I think it's off. I know it has been measured, but my card never exceeds 300 W in any game I've played. Maybe that's the maximum the card can handle? My card always sits around 1550 MHz and I have never seen it exceed 300 W. This is kind of odd to me, but I'll double-check over the weekend because maybe I've missed something.
Either way, I'm still thinking of going for Navi. It's just that my expectations are a bit higher; I want something bigger than the 5700 XT. AMD has a lot of headroom to make a bigger chip and squeeze more performance out of it, and I'd like that. When I change cards I need to see a boost in FPS. I think I'll wait, although I'll watch all the benchmarks of AMD's 5000-series cards very closely.
 
My Vega 64 went nicely to its 295 W power limit at the same 1500-ish frequency. The power limit works on Vega, so it does not normally exceed it.
Vegas had enough different BIOS profiles to be confusing, though, and with AIB cards all bets are off.

Edit:
Oh, you meant the graph from the leaked review above where Vega 64 is at 395 W. That is whole-system power consumption; the CPU, motherboard, etc. use up around 100 W of that, probably a bit more. Depending on what game/test they measured the consumption with, the card might not have been at its power limit and could have drawn less than its maximum.
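A minimal sketch of that subtraction, assuming roughly 100 W for the rest of the system (that figure is my estimate, not a measurement from the review):

```python
# Rough split of the leaked whole-system figure into GPU vs. rest-of-system draw.
# The 100 W platform estimate is an assumption, not a measured value.

system_power_w = 395    # whole-system reading for Vega 64 in the leaked chart
platform_power_w = 100  # CPU, motherboard, RAM, drives, fans (assumed)

gpu_power_w = system_power_w - platform_power_w
print(f"Estimated GPU-only draw: ~{gpu_power_w} W")  # lands near Vega 64's 295 W power limit
```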
 
My Vega 64 went nicely to its 295 W power limit at the same 1500-ish frequency. The power limit works on Vega, so it does not normally exceed it.
Vegas had enough different BIOS profiles to be confusing, though, and with AIB cards all bets are off.

Edit:
Oh, you meant the graph from the leaked review above where Vega 64 is at 395 W. That is whole-system power consumption; the CPU, motherboard, etc. use up around 100 W of that, probably a bit more. Depending on what game/test they measured the consumption with, the card might not have been at its power limit and could have drawn less than its maximum.
Oh right, that explains everything. Well, as I said, I missed something, and now I know exactly what I missed.
BTW, is there any official confirmation of bigger 5000-series releases? I've seen rumors about the bigger Navi, but I don't know if it's coming out this year.
 
BTW, is there any official confirmation of bigger 5000-series releases? I've seen rumors about the bigger Navi, but I don't know if it's coming out this year.
Nothing official, no. Even the rumors place the bigger Navi release in 2020.
 

Still overpriced IMO, but it's better than nothing. Also, I see no point in getting the XT with the regular one being so close.
 

Still overpriced IMO, but it's better than nothing. Also, I see no point in getting the XT with the regular one being so close.
Maybe the OC potential is different between the two. Although if it turns out like the V56 and V64, then I'd go non-XT instead, I think. The difference between the Vega cards was marginal when the V56 was tweaked properly.
 
40/36 CUs = +11%
$399/$349 = +14%
At stock clocks the XT is probably even a bit ahead in perf/$.
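Spelling that out with the advertised game clocks folded in (the clock figures are the launch specs as I recall them; treat them as assumptions and swap in your own):

```python
# Back-of-the-envelope perf/$ comparison of the RX 5700 XT vs the RX 5700.
# Game clocks are assumed launch specs and only a crude proxy for real performance.

xt = {"cus": 40, "game_clock_mhz": 1755, "price_usd": 399}
non_xt = {"cus": 36, "game_clock_mhz": 1625, "price_usd": 349}

def throughput(card: dict) -> float:
    # Idealized shader throughput: compute units x clock.
    return card["cus"] * card["game_clock_mhz"]

perf_ratio = throughput(xt) / throughput(non_xt)
price_ratio = xt["price_usd"] / non_xt["price_usd"]

print(f"Theoretical perf advantage of the XT: {perf_ratio - 1:.0%}")  # ~20%
print(f"Price premium of the XT:              {price_ratio - 1:.0%}")  # ~14%
```

On paper the XT's throughput advantage (~20%) outpaces its price premium (~14%), which is why it can come out slightly ahead in perf/$ at stock clocks; real games scale less than perfectly with CUs and clock, so treat this as an upper bound.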
 
Also, I see no point in getting the XT with the regular one being so close.

Agreed. ROP count has always been AMD's hamstring and it's identical between the 5700 and the XT; you only lose 10% of the CUs and 200 MHz of clock speed, the latter of which should be easily reclaimable by manual OC. So at the end of the day it's probably less than a 10% performance difference at identical clocks, but the 5700 is ~13% less in price. Very tempting, if its performance is what has been claimed.
 