
Mobile NVIDIA GeForce RTX GPUs Will Vary Wildly in Performance, Clocks Lowered Substantially

Joined
Jul 10, 2011
Messages
788 (0.17/day)
Processor Intel
Motherboard MSI
Cooling Cooler Master
Memory Corsair
Video Card(s) Nvidia
Storage Samsung/Western Digital/ADATA
Display(s) Samsung
Case Thermaltake
Audio Device(s) On Board
Power Supply Seasonic
Mouse A4TECH
Keyboard UniKey
Software Windows 10 x64
Sure, how about this one: you can read and understand what is being said.


Wins in some games, loses in some games. Nothing extremely bad for the heavily downclocked 2080 Max-Q, contrary to what you claimed.
 
Joined
Jan 8, 2017
Messages
8,929 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
While I don't necessarily disagree with you, people really need to start using their gray matter. A logical person knows damn well you can't squeeze a 2080 into a laptop tit for tat. I think it is high time stupid people start paying the consequences for being stupid. Darwinism needs to make a comeback.

On the other hand, this is a slippery slope for NV. Get too many stupid people buying these expecting desktop performance and getting, well, laptop performance, and they'll get a little unwanted negative publicity and negative performance reviews on forums.

Nvidia is smart.

All this began with the 980, which they made into a fully enabled chip that could be used in laptops, in addition to the cut-down 980M released previously. With the 10 series they did things in a different order: first the fully fledged parts were made available, and then the Max-Q ones, which are functionally nothing more than M versions.

Now they are releasing them concurrently, and the way they keep changing all this isn't incidental. I for one didn't even know there were distinct Mobile and Max-Q versions already out; if you look on their site there is nothing that explicitly explains the difference between the two. Not only that, but we now find out that even among those you may get wildly variable performance.

They intentionally blurred the line between them so much that even adept consumers would be hard pressed to know for sure exactly what they are going to get.
 
Last edited:
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Max-Q defines the size of the card and it is different from the mobile counterparts, correct. This is nothing new. Max-Q designs are typically found in less expensive, but typically more power efficient, thin models. It was expected when they were released last gen and nothing has changed here. They are a different tier compared to their more robust mobile and especially desktop counterparts. This should not be a surprise. :)

This, from NVIDIA, explicitly explains what Max-Q is: https://www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: https://blogs.nvidia.com/blog/2018/12/14/what-is-max-q/

Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...

Anyway, enjoy gents. :)
 
Joined
Jan 8, 2017
Messages
8,929 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Max-Q designs are typically found in less expensive, but typically more power efficient

You sure about that? Can you find a comparable 2080 Max-Q laptop cheaper than a regular one?

This is why this is exceptionally problematic: they sell you something that's more expensive without explicitly saying what sort of performance you are going to get. It will be a surprise; that's the point.
 
Last edited:
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
Less expensive? The thinner the notebook is, the more expensive it is.

You sure about that? Can you find a 2080 Max-Q laptop cheaper than a regular one?

This is why this is exceptionally problematic: they sell you something that's more expensive without explicitly saying what sort of performance you are going to get. It will be a surprise; that's the point.

If it is clearly marketed as Max-Q, as most of the Pascal thin laptops were, I don't see that much of a problem. Yes, they are two different products, but I don't really see why they would need a wholly different naming scheme (heck, the config is the same, just clocks are lower to achieve a lower TDP). It's like the good old days, GT, Ti vs. Ultra, Pro vs. XT, etc., and I don't remember anyone complaining much about those.
 
Last edited:
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Max-Q defines the size of the card and it is different from the mobile counterparts, correct. This is nothing new. Max-Q designs are typically found in less expensive, but typically more power efficient, thin models. It was expected when they were released last gen and nothing has changed here. They are a different tier compared to their more robust counterparts. This should not be a surprise. :)

This, from NVIDIA, explicitly explains what Max-Q is: https://www.nvidia.com/en-us/geforce/gaming-laptops/max-q/
or this: https://blogs.nvidia.com/blog/2018/12/14/what-is-max-q/

Why we are comparing last gen desktop cards with this gen Max-Q is beyond me...

Anyway, enjoy gents. :)

I guess my expectations are to have a crappy gaming experience on a laptop, so when I get surprised, it is even better. If I am on a laptop, that means I am travelling and probably in a place I have never been before, so I'll go explore life instead of sitting in front of a screen in a hotel.
 
Joined
Sep 17, 2014
Messages
20,917 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Apologies... for thinner laptops. The Max-Q design is physical as well as the movie magic that makes them work, but they try to save power and mitigate heat for these thinner devices. Again, I can see where an average consumer would have issues distinguishing things, and the clearer the better, but honestly, a bit of searching (or asking questions) goes a long way. Even my mom called to ask me what the difference was between a Max-Q card and the mobile version... it's obvious enough to prompt a question (and my mom is soooo not tech savvy, LOL). I certainly wouldn't use some of the more severe terms that were tossed about in this thread to describe them, but again... par for the course.
 
Joined
Sep 17, 2014
Messages
20,917 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Apologies... for thinner laptops. The Max-Q design is physical as well as the movie magic that makes them work, but they try to save power and mitigate heat for these thinner devices. Again, I can see where an average consumer would have issues distinguishing things, and the clearer the better, but honestly, a bit of searching (or asking questions) goes a long way. Even my mom called to ask me what the difference was between a Max-Q card and the mobile version... it's obvious enough to prompt a question (and my mom is soooo not tech savvy, LOL). I certainly wouldn't use some of the more severe terms that were tossed about in this thread to describe them, but again... par for the course.

Sorry, but Nvidia's laptop product stacks have always been a mess, complete with cross-generational rebrands and downright misleading product names. You can ask questions all day and still not have a good picture of it. Much the same as Intel's approach. And it's all intentional, let's not fool ourselves here, and it seems to work. So yes, you can ask questions, but most people simply don't have the capacity to grasp it.

I mean, take even your very example, and that is considering a fully refreshed product stack with clear naming conventions (Max-Q, 10xx)... but now the clock speed may vary wildly. So even if you ask 'what is Max-Q?' you don't know a thing about actual performance relative to the rest.
 
Joined
Oct 5, 2017
Messages
595 (0.25/day)
RTX 2080 is 215W vs GTX 1080Ti 250W
RTX 2070 175W vs GTX 1080 180W
RTX 2060 160W vs GTX 1070Ti 180W
Assuming MaxQ means minimum possible spec (which it mostly does), RTX 2080 MaxQ is 80W, GTX 1080 MaxQ is 90W and Desktop GTX 1080 is 180W.
Power is the main limitation in mobile.
From the Shadow of Mordor graph, 9.5% better at 12% power limit deficit is not a bad result.

First off, you're comparing different levels of card, so your comparison is invalid here. Secondly, the paper specs do not reflect real-world power consumption.

When TPU tested both cards, the 2080 Ti Founders Edition consumed 18W more than the 1080 Ti Founders Edition while running Furmark, and peak gaming load was 267W for the 1080 Ti versus 289W for the 2080 Ti.


More to the point, we aren't discussing whether 2080 Ti laptops will be faster than 1080 Ti ones. We're discussing whether fitting a 2080 Ti into a laptop requires more aggressive downclocking than fitting a 1080 Ti into a laptop does.

Since the 2080 Ti consumes more power, the simple answer is yes, it will require more aggressive downclocking, because while it is very easy to make a GPU cooler larger and better on a desktop, the same cannot be said of increasing the size of the cooler in a laptop.

The Desktop's thermal envelope can expand, and the laptop's stays the same. That means the performance delta also grows.
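
To put rough numbers on that argument, here is a back-of-the-envelope sketch in Python. It is only an illustration of the reasoning above, not how NVIDIA actually specs these parts: it assumes power scales roughly linearly with clock once voltage is near its floor (a crude simplification), and the chassis-budget, wattage, and clock figures are illustrative placeholders.

# Crude illustration: how far a desktop GPU must be downclocked to fit a
# fixed laptop power budget, assuming power ~ clock (a simplification).
# All figures below are illustrative placeholders, not official specs.

LAPTOP_POWER_CAP_W = 90  # assumed budget a thin chassis can actually cool

desktop_parts = {
    # name: (approx. desktop board power in W, approx. desktop boost clock in MHz)
    "GTX 1080": (180, 1733),
    "RTX 2080": (215, 1710),
}

for name, (tdp_w, boost_mhz) in desktop_parts.items():
    power_fraction = LAPTOP_POWER_CAP_W / tdp_w   # share of the desktop budget available
    laptop_clock = boost_mhz * power_fraction     # clocks scale by that share (per the assumption)
    print(f"{name}: {power_fraction:.0%} of desktop power budget "
          f"-> roughly {laptop_clock:.0f} MHz vs {boost_mhz} MHz on desktop")

The point of the toy model is just the trend: the higher the desktop TDP climbs while the chassis budget stays fixed, the larger the relative cut, which is the growing delta described above.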

 
Joined
Sep 17, 2014
Messages
20,917 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
First off, you're comparing different levels of card, so your comparison is invalid here. Secondly, the paper specs do not reflect real-world power consumption.

When TPU tested both cards, the 2080 Ti Founders Edition consumed 18W more than the 1080 Ti Founders Edition while running Furmark, and peak gaming load was 267W for the 1080 Ti versus 289W for the 2080 Ti.



Yes, but the 2080 Ti FE also has a higher TDP of 260W on the box. Seems pretty consistent to me. Additionally, we're not mentioning the power plan. Nvidia CP offers three power management modes and it's unlikely to be set to 'adaptive' for benchmarking.
 
Joined
Oct 5, 2017
Messages
595 (0.25/day)
Yes, but the 2080 Ti FE also has a higher TDP of 260W on the box. Seems pretty consistent to me.
More to the point, we aren't discussing whether 2080 Ti laptops will be faster than 1080 Ti ones. We're discussing whether fitting a 2080 Ti into a laptop requires more aggressive downclocking than fitting a 1080 Ti into a laptop does.

Since the 2080 Ti consumes more power, the simple answer is yes, it will require more aggressive downclocking, because while it is very easy to make a GPU cooler larger and better on a desktop, the same cannot be said of increasing the size of the cooler in a laptop.

The Desktop's thermal envelope can expand, and the laptop's stays the same. That means the performance delta also grows.

This was the point I was making all along and I've never strayed from it. I'm not debating which is ultimately 5% faster. I'm pointing out that if your 150W laptop GPU is limited to 100W before it thermally throttles, and you then replace it with a 160W GPU, then even if it's faster in the end, you lose more performance, proportional to the capability of the silicon, than you did with the 150W part, even though both are being constrained thermally.
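
As a quick sanity check on that arithmetic, here is a minimal sketch using the same numbers as the post above. It assumes sustained performance roughly tracks sustained power once a part is hard power-capped, which is a simplification, but it shows why the proportional loss grows with the hotter part.

# Fraction of a GPU's potential left unused when the chassis caps its power.
# Assumes performance near the cap roughly tracks sustained power (a simplification).

chassis_cap_w = 100                 # fixed cooling/power budget of the laptop (from the post)

for full_power_w in (150, 160):     # the original part vs. the hotter replacement
    lost = 1 - chassis_cap_w / full_power_w
    print(f"{full_power_w} W part capped at {chassis_cap_w} W: "
          f"~{lost:.0%} of its capability goes unused")

With these figures the 150 W part leaves roughly a third of its potential on the table, while the 160 W replacement leaves closer to 38%, even if its absolute performance still ends up higher.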
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Sorry, but Nvidia's laptop product stacks have always been a mess, complete with cross-generational rebrands and downright misleading product names. You can ask questions all day and still not have a good picture of it. Much the same as Intel's approach. And it's all intentional, let's not fool ourselves here, and it seems to work. So yes, you can ask questions, but most people simply don't have the capacity to grasp it.

I mean, take even your very example, and that is considering a fully refreshed product stack with clear naming conventions (Max-Q, 10xx)... but now the clock speed may vary wildly. So even if you ask 'what is Max-Q?' you don't know a thing about actual performance relative to the rest.
Clock speed varies wildly not by SKU but by environmental conditions, right? That is nothing new either, right?

Ugh, so annoying when I try to read the first post to get this important information and it's nowhere to be found. Soooooooooooooooooooo tired of copy-paste news...
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
Yeah but you forgot to mention die sizes. Of course it will have more performance when the chips have more cores. Also it's on 12nm. And considering that max clock speeds haven't increased, some power reduction was to be expected simply from smaller process.
We were talking about power consumption and efficiency. 12nm is a very minor step from 14/16nm.
Die size will affect things like GPU/card price, but that is not what we were discussing right now.
Secondly, the paper specs do not reflect real-world power consumption.
It does, and quite precisely so.
 
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
Clock speed varies wildly not by SKU but by environmental conditions, right? That is nothing new either, right?

Well yeah, when Pascal laptops came out, thermal compound replacement was almost mandatory to stop them throttling (those HP Omens). Not only the GPU but the Intel CPU too.

Well, this one I don't really like. According to Notebookcheck there is a 90W TDP variant of the RTX 2080 too: base 990 MHz, boost 1230 MHz, TDP 90W.
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I'm just trying to wrap my head around why some people's heads are popping off over this news. This is how NV laptop GPUs work, peeps. There is little difference.

Mobile/Max-Q Pascal GPU performance varied wildly as well.


News leading the lemmings off the cliff, gents!
 
Joined
Oct 5, 2017
Messages
595 (0.25/day)
It does and quite precisely so.
Clearly not, or the real-world power consumption wouldn't have been higher than advertised when TPU tested the cards.

Secondly, read the rest of my post instead of constantly attempting to muddy the waters.

This thread is about how aggressively, compared to desktop parts, the Max-Q versions of RTX parts will have to be downclocked in order to meet the thermal constraints of being in a laptop.

The answer to that is quite simply "More aggressively than 10 series GPUs had to be", because the entire RTX product line consumes more power and therefore generates more heat than the 10 Series did.

The performance comparison bullshit you're trying to discuss was not the original topic of this thread. It was brought up by some idiot on the first page who made an unrelated assertion that everyone else is now jumping on them for. I don't care about that. I care about the thread topic, and I'm discussing the thread topic, not the self-gratifying tangent you're trying to drag me into.
 
Joined
Mar 6, 2017
Messages
3,209 (1.23/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
Combine this with the fact that most OEMs are pushing the whole "thin for the sake of thin" trend, which of course results in notebooks not being able to properly cool themselves, and what did you think was going to happen?
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
The answer to that is quite simply "More aggressively than 10 series GPUs had to be", because the entire RTX product line consumes more power and therefore generates more heat than the 10 Series did.
Given that Turing is more power-efficient than Pascal, is the "more aggressively" part that important?
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Combine this with the fact that most OEMs are pushing the whole "thin for the sake of thin" trend, which of course results in notebooks not being able to properly cool themselves, and what did you think was going to happen?
The same thing that has been happening in this space already.. :)

IMO, there is enough clarity (more is always better!) already for an average user to at least ask what the difference is between the two. The specs CLEARLY show there is a range for both mobile GPUs and Max-Q GPUs which will vary depending on the laptop it is in. This is not new; this is, in fact, old news. Could more clarity be brought to it... sure...

...but HOW should NVIDIA report this data? Sell themselves short and list the performance off the minimum boost? Or simply report a range of possible clock speeds and note that it will vary depending on the environment, as they have done? Consumers, when making a purchase like this, hell, almost ANY major purchase like this, need to put due diligence into it!!! Someone mentioned Darwinism earlier... and I have to agree... especially considering my mom (the noobliest of the noobs in PCs) even picked it out when looking at specs!
 
Joined
Mar 6, 2017
Messages
3,209 (1.23/day)
Location
North East Ohio, USA
System Name My Ryzen 7 7700X Super Computer
Processor AMD Ryzen 7 7700X
Motherboard Gigabyte B650 Aorus Elite AX
Cooling DeepCool AK620 with Arctic Silver 5
Memory 2x16GB G.Skill Trident Z5 NEO DDR5 EXPO (CL30)
Video Card(s) XFX AMD Radeon RX 7900 GRE
Storage Samsung 980 EVO 1 TB NVMe SSD (System Drive), Samsung 970 EVO 500 GB NVMe SSD (Game Drive)
Display(s) Acer Nitro XV272U (DisplayPort) and Acer Nitro XV270U (DisplayPort)
Case Lian Li LANCOOL II MESH C
Audio Device(s) On-Board Sound / Sony WH-XB910N Bluetooth Headphones
Power Supply MSI A850GF
Mouse Logitech M705
Keyboard Steelseries
Software Windows 11 Pro 64-bit
Benchmark Scores https://valid.x86.fr/liwjs3
That of course depends upon whether people do the proper research and don't just listen to the marketing speak and look at all the pretty marketing cards placed next to the systems in the big box stores.
 
Joined
Feb 18, 2017
Messages
688 (0.26/day)
This isn't really different from desktops. Better cooling solution allow for higher TDPs which in turn allow for higher frequencies. The only thing different is manufacturers get more leeway to play with the TDP.
There is a huge difference... if you check different AIB cooling solutions, generally there are only 1-3 fps of difference at the smallest (FHD) resolution. If you check these clock rates, they indicate a much bigger performance difference.
 

Deleted member 185088

Guest
The 1080 Max-Q was already rubbish, closer to the 1070 (mobile) than the real thing; it's just stupid. NVIDIA should've done something similar to the 1070 mobile and given the GPU more CUDA cores so the lower clocks are less painful. The whole Max-Q thing is a scam: they sell two very different GPUs under the same naming scheme. A normal 1080 could run lower if they wanted it to, so I think maybe some poor-quality GPUs that can't handle higher clocks get named Max-Q.
 
Joined
Dec 31, 2009
Messages
19,366 (3.71/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
That of course depends upon if people do the proper research and not just listen to the marketing speak and look at all the pretty marketing cards that are placed next to the systems in the big box stores.
Of course! But how is the way BB (or any brick and mortar store) advertises it, versus NVIDIA and its specifications, really NVIDIA's fault? How far down the line do they need to go to be absolved of how someone unrelated advertises it?

So I ask (anyone) again... how should they advertise it? For those who are up in arms... what would be the best way to advertise this??? How do you pin down performance metrics and clocks when they can vary by 500 MHz and several percent in performance from min to max clocks? 2080m is different from 2080 Max-Q. It's a different name. I expect differences between a Honda Accord LE and the Special Edition as well. Do I know what those are? No, but I can search and see the differences at the dealership or online (much like this card: look up specs and performance analysis).
 