
NVIDIA GeForce GTX 1630 Launching May 31st with 512 CUDA Cores & 4 GB GDDR6

Also, the 75 W TDP hasn't changed despite it having only 512 CUDA cores?
The original reference-clocked 1650 (1665 MHz boost) had a very high actual boost clock (1890 MHz), so I don't think a ~3% increase in actual frequency (1950 MHz at most, if it's even 3% and not just similar...) will do much to TDP.
Shouldn't the design be at most 60 W (or less) in order to offer single-slot, low-profile solutions like the 1050 Ti or 6400?
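For the sake of argument, here's the rough scaling I have in mind (a minimal sketch; the 1950 MHz boost is just my assumption, power is treated as linear in core count and clock, and fixed memory/board power is ignored, so the real figure would land higher):

```python
# Back-of-the-envelope: what naive scaling from the GTX 1650 would suggest,
# assuming (very roughly) power scales linearly with core count and clock.
gtx1650_cores, gtx1650_boost_mhz, gtx1650_tdp_w = 896, 1890, 75  # actual boost per reviews
gtx1630_cores, gtx1630_boost_mhz = 512, 1950                     # 1950 MHz is an assumption

scale = (gtx1630_cores / gtx1650_cores) * (gtx1630_boost_mhz / gtx1650_boost_mhz)
print(f"Clock increase: {gtx1630_boost_mhz / gtx1650_boost_mhz - 1:.1%}")  # ~3.2%
print(f"Naive TDP estimate: {gtx1650_tdp_w * scale:.0f} W")                # ~44 W, well under 75 W
```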


 
Also, the 75 W TDP hasn't changed despite it having only 512 CUDA cores?
The original reference-clocked 1650 (1665 MHz boost) had a very high actual boost clock (1890 MHz), so I don't think a ~3% increase in actual frequency (1950 MHz at most, if it's even 3%...) will do much to TDP.
Shouldn't the design be at most 60 W (or less) in order to offer single-slot, low-profile solutions like the 1050 Ti or 6400?



They just downclock it; low-profile cards are usually less efficient at cooling anyway.
 
Would be nice as a low-profile card to replace the GT 1030 or 1050... :p
 
They just downclock it; low-profile cards are usually less efficient at cooling anyway.
Sure, but will the downclock be enough (within spec) for single-slot solutions if the TDP is 75 W?
Also, it will be a little bit slower than the 60 W 1050 Ti, so would Nvidia be admitting lower performance/W vs the Pascal-based solution?
I'm just examining the possibilities; it may well be exactly like the report says.
 
Even the fastest GTX card, aka the 1080 Ti, can barely run it at 720p :(
Not sure, but isn't it worse at RTX than even a 3050? No dedicated RT hardware; they can't do miracles.
 
At first glance it seems more of a rival for the Arc A310 than the RX 6400; maybe the Arc A3xx will be a closer match, and then it mostly comes down to price.

:)
 
Not sure, but isn't it worse at RTX than even a 3050? No dedicated RT hardware; they can't do miracles.
Yeah... though in the few games where RT on GTX works, like Shadow of the Tomb Raider, it doesn't hurt THAT much. Though it's only the shadows that are ray-traced...
 
Yeah... though in the few games where RT on GTX works, like Shadow of the Tomb Raider, it doesn't hurt THAT much. Though it's only the shadows that are ray-traced...

SoTR is not a good RTX example/game; it's more like a gimmick in that game. Not to say it's perfect in most games, but in that one it's the worst (the gimmick factor, I mean).
 
SoTR is not a good RTX example/game; it's more like a gimmick in that game. Not to say it's perfect in most games, but in that one it's the worst (the gimmick factor, I mean).
Well, it doesn't ruin the performance on a 1080 Ti.
 
Hi,
Guessing Nvidia missed the TPU poll, @W1zzard.
"Would you buy a 4 GB GPU in 2022?" I believe the results were a resounding no :laugh:
 
Hi,
Guessing Nvidia missed the TPU poll, @W1zzard.
"Would you buy a 4 GB GPU in 2022?" I believe the results were a resounding no :laugh:

GDDR6 is expensive for a card that I think is meant to be on the cheap side; 8 GB on this card would make no financial sense, I think. I think that's why it's 4.

They will release a later version with 8 GB of GDDR3, and another with less bandwidth, etc... you know Nvidia :D
 
I wonder if Nvidia will ever make a 30 W TDP, sub-£/$50 card again.

pats my GT 1030
 
I guess it's good to see the 1030 and 730 and all that crap get replaced? Though I can't imagine this being a particularly attractive proposition, considering that an RX 6400 matches the 1650 and the 6500 XT beats it soundly. Even accounting for both of those losing ~10% performance on PCIe 3.0, this will be slower than the 6400. And at least here in Sweden, you can get an RX 6500 for less money than any 1650.
10% is a mere indication. Most reviews put the performance loss closer to 20%, with some games seeing losses of 40%.
PCIe 3.0 is not the way to go with the 6400/6500.

So hopefully with this card they give it more PCIe lanes, at least x8.
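For context, the theoretical link bandwidth is easy to work out (a quick sketch using the standard per-lane rates after 128b/130b encoding; real-world throughput is a bit lower):

```python
# Theoretical per-direction PCIe bandwidth, GB/s per lane after 128b/130b encoding
per_lane_gbs = {"PCIe 3.0": 8 * 128 / 130 / 8, "PCIe 4.0": 16 * 128 / 130 / 8}

for gen, lanes in [("PCIe 4.0", 4), ("PCIe 3.0", 4), ("PCIe 3.0", 8)]:
    print(f"{gen} x{lanes}: {per_lane_gbs[gen] * lanes:.2f} GB/s")
# PCIe 4.0 x4: 7.88 GB/s   <- what the 6400/6500 XT were designed around
# PCIe 3.0 x4: 3.94 GB/s   <- what they actually get on an older board
# PCIe 3.0 x8: 7.88 GB/s   <- an x8 card would keep the same bandwidth on 3.0
```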
 
Surprised that they didn't use the configurations from the T400 (384 CUDA cores) or T600 (640 CUDA cores) and instead went with making another cut-down chip that sits between those...
 
I could grab a Fury in my 2nd rig tho :cool:
Want one? I've got a spare :p

10% is a mere indication. Most reviews put the performance loss closer to 20%, with some games seeing losses of 40%.
PCIe 3.0 is not the way to go with the 6400/6500.

So hopefully with this card they give it more PCIe lanes, at least x8.
... That's not how averages work. 10% was a bit generous, but TPU says 14%. That roughly indicates half of the tested games are at or worse than 14%, just like half are at or better than 14% - barring significant outliers. There are definitely outliers where these GPUs are absolutely trounced at 3.0, but those are an exception, not the rule. TPU's testing shows a lot of titles where the bandwidth difference is none or very minor (and some of the worse results, like AC:V and CP2077 aren't playable even on 4.0, raising the question of how lower quality settings would affect this difference). I still think AMD made a dumb mistake limiting these chips to x4, but they aren't quite as bad as some people make them out to be.
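To illustrate the statistics point with made-up numbers (these are NOT TPU's per-game results, just an example of how a couple of outliers drag a mean up):

```python
# Hypothetical per-game losses (%) going from PCIe 4.0 x4 to 3.0 x4.
# Illustrative values only, not real benchmark data.
losses = [2, 3, 4, 5, 5, 6, 8, 10, 12, 40, 70]

mean = sum(losses) / len(losses)
median = sorted(losses)[len(losses) // 2]
print(f"mean loss:   {mean:.1f}%")   # ~15%, dragged up by the two outliers
print(f"median loss: {median}%")     # the 'typical' game loses far less
```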
 
Want one? I've got a spare :p


... That's not how averages work. 10% was a bit generous, but TPU says 14%. That obviously means half of the tested games are at or worse than 14%, just like half are at or better than 14%. There are definitely outliers where these GPUs are absolutely trounced at 3.0, but those are an exception, not the rule. TPU's testing shows a lot of titles where the bandwidth difference is none or very minor (and some of the worse results, like AC:V and CP2077 aren't playable even on 4.0, raising the question of how lower quality settings would affect this difference). I still think AMD made a dumb mistake limiting these chips to x4, but they aren't quite as bad as some people make them out to be.
You live next to me, actually; I may be interested since there are no customs or other BS :D
 
Why has Nvidia been stuck on the GTX 1XXX series naming for the last few years? Are we talking about the same Nvidia that rebranded old GPU dies as "newer series" for years in an attempt to woo customers?
 
Want one? I've got a spare :p


... That's not how averages work. 10% was a bit generous, but TPU says 14%. That roughly indicates half of the tested games are at or worse than 14%, just like half are at or better than 14% - barring significant outliers. There are definitely outliers where these GPUs are absolutely trounced at 3.0, but those are an exception, not the rule. TPU's testing shows a lot of titles where the bandwidth difference is none or very minor (and some of the worse results, like AC:V and CP2077 aren't playable even on 4.0, raising the question of how lower quality settings would affect this difference). I still think AMD made a dumb mistake limiting these chips to x4, but they aren't quite as bad as some people make them out to be.
You forgot Doom and F1 as well, which lose up to 70% performance at a mere 1080p. Rainbow Six Siege also lost over 20%, and there are more. We can look at the averages, but these are current games people are playing that suffer badly on PCIe 3.0.
 
Quake 2 RTX is fully path-traced RT... which is on a whole different level of GPU/CPU requirements. It's FAR more demanding to run than, for example, the likes of CP2077.

You are making an apples-to-oranges comparison.
You say that, but it still doesn't show correct lighting everywhere.

Apples to oranges is exactly the point: nobody ever asked for oranges, but somehow that's all we must eat now in the green camp. You even get oranges if you never intend to eat them.
 
You forgot Doom and F1 as well, which lose up to 70% performance at a mere 1080p. Rainbow Six Siege also lost over 20%, and there are more. We can look at the averages, but these are current games people are playing that suffer badly on PCIe 3.0.
Yes, as I said, there are outliers, some of which are downright unplayable. Yet even when accounting for those, the average loss is 14%, which tells us that for the majority of games the loss is significantly less than 14%. As for whether these are games people are currently playing, that applies pretty broadly across the test suite, no? Or are you claiming that people play the more bottlenecked games more than the ones that perform okay?

Why has Nvidia been stuck on the GTX 1XXX series naming for the last few years? Are we talking about the same Nvidia that rebranded old GPU dies as "newer series" for years in an attempt to woo customers?
Naming is arbitrary after all, and Nvidia wants to differentiate these from their more prestigious RTX cards, making those look more attractive. Nothing more nefarious than that going on.
 