
Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

DXR does not eat into TDP unless it is used.
Yes, but that's what I meant. We don't know how much DXR eats into the TDP. Without DXR those cards could boost significantly higher than with it turned on. I hope I worded it better now.
 
Here's a benchmark I just ran into with the 2080 Max-Q... https://www.notebookcheck.net/We-be...e-desktop-RTX-2080-and-GTX-1080.402036.0.html

"In short, the mobile RTX 2080 and RTX 2080 Max-Q are about 15 percent and 30 percent slower than the desktop RTX 2080 according to 3DMark benchmarks. The deltas are somewhat similar to - if not just slightly wider than - the deltas between mobile GTX 1080 and GTX 1080 Max-Q to the desktop GTX 1080 by about 5 percentage points each".

So, the gap is bigger, by a small amount, according to this little write-up... certainly not the end of the world as some would like to convey... ;)
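
If anyone wants to sanity-check how those percentages fall out of raw 3DMark graphics scores, here's a minimal Python sketch. The scores below are made-up placeholders chosen to reproduce the quoted 15%/30% deltas, not notebookcheck's actual numbers:

    # Hypothetical 3DMark graphics scores (placeholders, not notebookcheck's data)
    scores = {
        "RTX 2080 desktop": 10000,
        "RTX 2080 mobile":   8500,
        "RTX 2080 Max-Q":    7000,
    }
    baseline = scores["RTX 2080 desktop"]
    for name, score in scores.items():
        delta = (1 - score / baseline) * 100
        print(f"{name}: {delta:.0f}% slower than the desktop card")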

It is quite typical for these to drop a card level down, which tends to put them in bed with the same tier from last gen.
Yeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention that the 1080 m-q laptop had a weaker 7700HQ against the 8750H powering the 2070 m-q.
 
Yeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention that the 1080 m-q laptop had a weaker 7700HQ against the 8750H powering the 2070 m-q.
Well, probably there's your answer: a hex-core in a laptop is not cool - literally speaking.
 
Well, probably there's your answer: a hex-core in a laptop is not cool - literally speaking.
Both are 45 W TDP parts, so that's not the issue. What could be the issue is that these are two different laptops, with two different cooling systems.
But while it's not apples to apples, it does show A picture.
 
Both are 45 W TDP parts, so that's not the issue; the only issue it could be is that these are two different laptops, with two different cooling systems.
That's true, but both chips will have different PL1 & PL2 limits, not to mention the 8750H will downclock more frequently (see the sketch after this post). Unfortunately there's no objective way to measure the two & gauge which is superior wrt cooling.
But while it's not apples to apples, it does show A picture.
Yes & the picture is distorted IMO. We need more data to ascertain how good, or bad, the RTX mGPUs are.
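
To illustrate the PL1/PL2 point above, here's a toy Python model of how those limits gate sustained power: the CPU may burst up to PL2, but once the rolling average over the tau window reaches PL1 it has to settle at PL1. The PL1/PL2/tau/demand values are illustrative assumptions, not the limits either of these laptops actually ships with:

    # Toy model of PL1/PL2 throttling; all figures are illustrative.
    def power_trace(pl1, pl2, tau, demand, duration, dt=0.1):
        avg, trace, t = 0.0, [], 0.0
        while t < duration:
            allowed = pl2 if avg < pl1 else pl1   # burst until the average hits PL1
            p = min(demand, allowed)
            avg += (dt / tau) * (p - avg)         # rolling average over ~tau seconds
            trace.append(p)
            t += dt
        return trace

    trace = power_trace(pl1=45, pl2=78, tau=28, demand=70, duration=120)
    print(f"first 10 s: {sum(trace[:100]) / 100:.0f} W avg")   # bursting near PL2
    print(f"last 10 s:  {sum(trace[-100:]) / 100:.0f} W avg")  # settled at PL1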
 
Yes & the picture is distorted IMO. We need more data to ascertain how good, or bad, the RTX mGPUs are.
I'm not bringing into question regular RTX mobile parts, but rather max-q versions, which were butchered much more than Pascal max-q. That's my biggest issue with this.
 
Yeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention that the 1080 m-q laptop had a weaker 7700HQ against the 8750H powering the 2070 m-q.
It seems the 1080 Max-Q is a 90 W TDP part while the 2070 Max-Q is an 80 W part.
 
Yeah, but look at it from this perspective.
The 2070 replaces the 1080 and either matches it or beats it. And that's true for desktop and laptop. No arguing there.
But the 2070 max-q loses to the 1080 max-q in all the games tested on that site. Not to mention that the 1080 m-q laptop had a weaker 7700HQ against the 8750H powering the 2070 m-q.
There is an argument there. It is quite common that laptop Max-Q cards drop a whole card level. This has happened for generations and is nothing new.

You state that the 2070 Max-Q loses to the 1080 Max-Q, but in this image I see it beating it out, except for whatever that overall performance score is... Please note that these are all GRAPHICS scores in the benchmark, not overall, so the CPU being one generation behind or a couple of cores/threads down does not play a major role here.

maxq2.jpg


Obviously synthetics don't give the most encompassing results, but we can see here it beats out the last gen by several %. Where did you see game benchmarks here? That site, to me, isn't easy to navigate.


Yes, but that's what I meant. We don't know how much DXR eats into the TDP. Without DXR those cards could boost significantly higher than with it turned on. I hope I worded it better now.
I'd say not. I have no idea what you are trying to say. :) If you need more out of the card, don't enable RT for the significant FPS drop. It has little (nothing?) to do with TDP but everything to do with the performance impact of RT.
 
Stop trying to be a smartass. If you are talking about relative performance, it shows the 2080 mobile is 1% faster than the 1080 Ti.

Forget it, you can't do it after all.
 
I'm not bringing into question regular RTX mobile parts, but rather max-q versions, which were butchered much more than Pascal max-q. That's my biggest issue with this.

This really isn't complicated.

RTX GPUs on desktop consume more power and run hotter than GTX 10-series GPUs did. That is established fact, even when not using RTX/DXR.

GTX 10-Series GPUs in laptops and mobile devices already had to be downclocked to avoid thermal and power consumption issues.

If the RTX chips are hotter and more power-hungry, then the downclocking will have to be more aggressive, assuming cooling stays the same; otherwise they will simply burn.

If the downclocking is more aggressive, then the performance gap between laptops and desktops will widen.

End of story. Done. Finished. That's that. The laws of physics dictate no less.
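
As a rough illustration of that last point: dynamic power scales roughly with frequency times voltage squared, and voltage tracks frequency along the DVFS curve, so power scales very roughly with the cube of the clock. A crude back-of-the-envelope sketch in Python (the TDP and clock figures are assumptions for illustration, not NVIDIA's actual Max-Q tuning):

    # Very crude first-order model: P ~ f * V^2 and V ~ f, so P ~ f^3.
    # TDP and clock numbers are assumptions, not official specs.
    desktop_tdp   = 215.0   # W, desktop RTX 2080 board power
    laptop_budget = 80.0    # W, assumed Max-Q power budget
    desktop_clock = 1800.0  # MHz, assumed desktop boost clock

    scale = (laptop_budget / desktop_tdp) ** (1.0 / 3.0)
    print(f"clock scale ~{scale:.2f} -> ~{desktop_clock * scale:.0f} MHz")
    # ~28% lower clocks just to fit the power budget, before binning,
    # voltage floors, or memory power enter the picture.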
 
TY. Are we reading the same thing?

BF V (1080p/QHD/UHD) = 7/11/14% lead over 1080 MaxQ in that (ONE) gaming title. The CPU difference isn't much here considering the card gains more distance the higher the resolution goes.


GTX 10-Series GPUs in laptops and mobile devices already had to be downclocked to avoid thermal and power consumption issues.
So do all laptop GPUs in most cases... in particular all Max-Qs do...
 
Next lie?

Sure, how about this one: You can read and understand what is being said.

msi_gs75_shadows_of_mordor_19x10_ultra_4k_textures-100786343-orig.jpg

The 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump.
 
TY. Are we reading the same thing?

BF V (1080p/QHD/UHD) = 7/11/14% lead over 1080 MaxQ in that (ONE) gaming title. The CPU difference isn't much here considering the card gains more distance the higher the resolution goes.
You understood me wrong; I was comparing the 1080 m-q against the 2070 m-q, and it's 22/8/4% faster across the three resolutions. You were reading the regular 2070 graphs.
 
RTX GPUs on desktop consume more power and run hotter than GTX 10-series GPUs did. That is established fact, even when not using RTX/DXR
RTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
The 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump. This is deception on a whole new level.
Assuming MaxQ means minimum possible spec (which it mostly does), RTX 2080 MaxQ is 80W, GTX 1080 MaxQ is 90W and Desktop GTX 1080 is 180W.
Power is the main limitation in mobile.
From the Shadow of Mordor graph, 9.5% better at a 12% power-limit deficit is not a bad result.
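
Put differently, a quick perf-per-watt check of that single data point (using the 80 W / 90 W limits and the ~9.5% lead from the graph above):

    # Perf-per-watt for the Max-Q comparison above (figures from this post).
    perf_ratio  = 1.095          # RTX 2080 Max-Q vs GTX 1080 Max-Q frame rate
    power_ratio = 80.0 / 90.0    # respective power limits

    print(f"perf/W gain: {(perf_ratio / power_ratio - 1) * 100:.0f}%")
    # roughly 23% better performance per watt in this one title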
 
No matter how power-efficient the architecture and node are, you simply can't get these monstrously large chips inside a laptop without major compromises. I see Nvidia is still at it, trying to use the same name and imply the same level of performance when in reality that is far from the truth. The 2080 Max-Q can be slower than even a desktop 1080 and sometimes barely any faster than a 1080 Max-Q despite the major spec bump. This is deception on a whole new level.

While I don't necessarily disagree with you, people really need to start using their gray matter. A logical person knows damn well you can't squeeze a 2080 into a laptop tit for tat. I think it is high time stupid people start paying the consequences for being stupid. Darwinism needs to make a comeback.

On the other hand, this is a slippery slope for NV. Get too many stupid people buying these expecting desktop performance and getting, well, laptop performance, and they'll get a little unwanted negative publicity and negative performance reviews on forums.
 
Is that a Vega 56 inside a gaming laptop? I didn't even know those existed...
And they also downclocked the hell out of it; it's slower than a 1070
By a lot :laugh:
 
You understood me wrong; I was comparing the 1080 m-q against the 2070 m-q, and it's 22/8/4% faster across the three resolutions. You were reading the regular 2070 graphs.
Ahhh, I did. I get you now. :)

Still, this is expected... I don't understand why (well, I do understand why...) this is a 'whole nother level' when it's normal, within a few %, to see this behavior... I agree that more clarity can come to this situation...


...however I do not agree with the toxicity levels and the lack of reporting posts instead of replying (not you shur)... it's the same people creating the same toxic environment here. I need to find my ignore button and stop worrying about the sinking ship since nobody else appears to.


RTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
That is through performance, not generational namesake... I'm not sure why some users are thinking this way. The next-gen Honda Accord is still an Accord. Just because it's as fast as an Acura TLX doesn't suddenly turn a Honda into an Acura... bad analogy aside, let's think about that a little. :)
 
That is through performance, not generational namesake...
Does not matter. We can do the generational namesakes, but then we'd have to account for performance anyway. Turing efficiency remains better than Pascal's, which was the point. Not by very much, but the difference is measurable and noticeable.
RTX 2080 at 215W vs GTX 1080 at 180W - 20% more power, ~40% more performance (from here)
RTX 2070 at 175W vs GTX 1070 at 150W - 17% more power, ~42% more performance
RTX 2060 at 160W vs GTX 1060 at 120W - 33% more power, ~60% more performance
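
The same perf-per-watt arithmetic, applied to the figures just quoted (TDP and relative performance as listed above):

    # Perf-per-watt from the TDP and performance figures listed above.
    pairs = {
        "RTX 2080 vs GTX 1080": (215, 180, 1.40),
        "RTX 2070 vs GTX 1070": (175, 150, 1.42),
        "RTX 2060 vs GTX 1060": (160, 120, 1.60),
    }
    for name, (new_w, old_w, perf) in pairs.items():
        gain = perf / (new_w / old_w) - 1
        print(f"{name}: {gain * 100:+.0f}% perf/W")
    # => roughly +17%, +22%, +20% better performance per watt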
 
Does not matter. We can do the generational namesakes, but then we'd have to account for performance anyway. Turing efficiency remains better than Pascal's, which was the point. Not by very much, but the difference is measurable and noticeable.
RTX 2080 at 215W vs GTX 1080 at 180W - 20% more power, ~40% more performance (from here)
RTX 2070 at 175W vs GTX 1070 at 150W - 17% more power, ~42% more performance
RTX 2060 at 160W vs GTX 1060 at 120W - 33% more power, ~60% more performance
Yeah, but you forgot to mention die sizes. Of course it will have more performance when the chips have more cores. Also, it's on 12 nm. And considering that max clock speeds haven't increased, some power reduction was to be expected simply from the smaller process.
 