NVIDIA GTX 1650 Lacks Turing NVENC Encoder, Packs Volta's Multimedia Engine

Setting aside all the fanboying and special-snowflaking, this is a budget card. Streamers should not be looking at this card as an option. They should be going for a GTX 1070(ti), 1080(ti) or an RTX card. The GTX 16xx budget cards are just not made for streamers. All the back and forth arguments are kinda pointless.


Seriously? Grow up.
Wouldn't it be better to have a cheap dedicated card for streaming, and the big boy for gaming? No performance loss.
 
https://github.com/obsproject/obs-amd-encoder/wiki/Hardware,-GCN-and-VCE-Limits
As long as the VCE version you are referring to is the same, there are no differences, as it should be. Same with Intel.
Now, if you want official information, too bad: AMD is the worst on that front. Don't even bother with their profiler, it doesn't even work.
Pff. How is that better than what Nvidia does?
R9 390X - launched June, 2015: VCE 2.0
R9 370X - launched August, 2015: VCE 1.0
R9 380X - launched November, 2015: VCE 3.0

So it's not even ordered chronologically; the least expensive GPU simply got the fewest features - exactly like with the 1650.
And you have to go to a 3rd-party GitHub page to learn this, because AMD's website doesn't even say which VCE version is implemented.
How do I even know these versions come from AMD? Maybe OBS does that - numbering VCEs according to the features they've found? :-)

BTW: these are the versions: 1.0, 2.0, 3.0, 3.1, 3.4, 4.0. What happened to 3.2 and 3.3?
Moreover, according to OBS: 3.1 was slower than 3.0, while in 3.4 they removed B frames in H.264 (added in 2.0).

Are you really trying to convince me this is better than Nvidia's case, when I can google "nvidia encoding features" and the first 2 links take me to Nvidia pages that have all the answers:
Bitrate and resolution:
https://developer.nvidia.com/nvidia-video-codec-sdk
Features:
https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
for every GPU since Kepler

Would you feel better if NVENC had version numbers on some GitHub page? I can arrange that. :-)

Setting aside all the fanboying and special-snowflaking, this is a budget card. Streamers should not be looking at this card as an option. They should be going for a GTX 1070(ti), 1080(ti) or an RTX card. The GTX 16xx budget cards are just not made for streamers. All the back and forth arguments are kinda pointless.
How exactly is 1080Ti better than 1650 for streaming? They have the exact same encoding specification (1650 is better at decoding).
 


How exactly is 1080Ti better than 1650 for streaming? They have the exact same encoding specification (1650 is better at decoding).
If you use renames as an argument, then show me the NvENC properties of a renamed Fermi called GT720.
 
Setting aside all the fanboying and special-snowflaking, this is a budget card. Streamers should not be looking at this card as an option. They should be going for a GTX 1070(ti), 1080(ti) or an RTX card. The GTX 16xx budget cards are just not made for streamers. All the back and forth arguments are kinda pointless.

And this card still has a better encoder than the 1070(ti) and 1080(ti). That is how little this actually matters, and people are freaking out over it like the card is totally worthless...
 
Wouldn't it be better to have a cheap dedicated card for streaming, and the big boy for gaming? No performance loss.
Why? That just doesn't make sense. If you're gaming and streaming on the same card, it needs to be a good performing card. Budget cards do not fit that need.
How exactly is 1080Ti better than 1650 for streaming? They have the exact same encoding specification (1650 is better at decoding).
Other than the fact that the 1080ti kicks the crap out of the 1650 in performance? What else is there to consider? Really kind of a silly question. Are you having one of those days?
And this card still has a better encoder than the 1070(ti) and 1080(ti).
No, it has the exact same encoding instruction sets.
 
That's what I mean, let the gaming card game, use a cheaper one for encoding, get more FPS from the gaming card.
How would the data be sent?
 
But 1650 can still encode and stream! :)
B frame support simply produces smaller files at the same quality.
People use 1050Ti for esports and they can use 1650 just as well. It has the same NVENC encoding capabilities.

B frame support matters when you need to lower the bandwidth use. I don't think that's a huge problem for 1650 owners. It's not like they'll be streaming 4K@60fps. ;-)
Yea, my point just was that esports gamers would probably like the better encoder :)
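
If anyone wants to see what B frames actually buy you, here's a rough sketch with ffmpeg's NVENC encoder - assuming your build and driver expose it, and whether -bf is honoured depends on the codec and NVENC generation; the file names are just placeholders:

# Same clip, same target bitrate, with and without B frames
ffmpeg -i gameplay.mp4 -c:v h264_nvenc -preset hq -b:v 6M -bf 0 -c:a copy no_bframes.mp4
ffmpeg -i gameplay.mp4 -c:v h264_nvenc -preset hq -b:v 6M -bf 2 -c:a copy with_bframes.mp4

At the same bitrate the B-frame encode should look a touch cleaner, or hit the same quality at a lower bitrate - which is exactly the bandwidth point above.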
 
No, it has the exact same encoding instruction sets.

Yeah, you're right. For some reason I thought Pascal was limited to 4K, but that's just GP100 that is limited to 4K.

Either way, the point is the same, those cards aren't exactly struggling for encoding and streaming, so freaking out over the GTX1650 having the same encoding capabilities makes no sense.
 
If you use renames as an argument, then show me the NvENC properties of a renamed Fermi called GT720.
Actually I think it was GT730 that was partly a Fermi. Anyway, you knew what card you had and Nvidia clearly told you which features you can expect. That's the point.
There's no lottery, there's no browsing GitHub. Official, precise information from the manufacturer. Isn't this how it's supposed to be?
The Nvidia site clearly states the chip name and family.

I mean: how would you feel if you went to a car dealer and they told you cars have gearboxes, but you'll need to check github to learn if there's a reverse gear?

Radeon VII has been around for 2.5 months and since it's not mentioned by OBS, there's no way to learn what it can encode, right?
Wikipedia says it's VCE 4.1.
OBS says 4.0 was the "last VCE generation before AMD switched to the VCN naming scheme".
I look at Radeon VII page and it still says "Video Code Engine (VCE)". https://www.amd.com/en/products/graphics/amd-radeon-vii
So OBS was wrong this time. Let's hope it's the first and last time, because what would we do otherwise...?

Mess.
Is it because AMD saves on everything they can to sell the products for 20% less than the competition? I'd rather pay more.
Why? That just doesn't make sense. If you're gaming and streaming on the same card, it needs to be a good performing card. Budget cards do not fit that need.
You can game on a weak card, but you can't grow features.

R9 390X was way more powerful than 380X, but it could only encode 1080p and 380X could do 4K. So which card is better for "gaming and streaming"?
And both cards are OK for 1440p high or even 4K low. It's not like there's much to admire anyway.
Other than the fact that the 1080ti kicks the crap out of the 1650 in performance? What else is there to consider? Really kind of a silly question. Are you having one of those days?
How is performance even a variable in this topic? We're talking about hardware encoding features. If you need more grunt, buy a bigger card. If you're playing Diablo3 or Fortnite, 1650 is OK. If you have a 250W ITX box, 1080Ti is an idiotic choice.
 
The usual nGreedia tricks...

Why would they even go to the expense of doing this??? It can only be to trick customers, or create artificial barriers.

Nasty, petty minded company, with an open contempt for its own customers.

And the idiots will buy them regardless!
 
That's what I mean, let the gaming card game, use a cheaper one for encoding, get more FPS from the gaming card.
How would the data be sent?
Oh, you mean in a dual card but not SLI setup? Yeah that'd likely work well if you had two 1650's, but then for that kind of money you should just get a more powerful single GPU.
You can game on a weak card, but you can't grow features.
No kidding... For real? I was kinda hoping to grow some more ram chips on my 2080..
How is performance even a variable in this topic? We're talking about hardware encoding features.
We're talking about encoding features with mention of streaming. Most streamers are going to be gaming on the same card they're streaming from. So yeah, do the simple math.
 
Oh, you mean in a dual card but not SLI setup? Yeah that'd likely work well if you had two 1650's, but then for that kind of money you should just get a more powerful single GPU.


We're talking about encoding features with mention of streaming. Most streamers are going to be gaming on the same card they're streaming from. So yeah, do the simple math.
I have to test that: Intel IGP for encoding, GT 1030 for playing.

Nice way of turning a discussion about a DOA card getting even more limited once again by Nvidia into a "hurr durr but AMD is bad hurr durr".
 
It's a bit hit and miss. For AMD you should try AMD ReLive; if that doesn't give you acceptable quality, no other software will. (If your audio goes out of sync, enable HPET in the BIOS. There was a bug where the timer wreaked havoc on audio sync for overclocked Ryzens; I don't know if it has been fixed.)

The problem with streaming on GPUs is the H.264 codec that every streaming service uses: it can be encoded on the GPU, but the quality is poor at lower bitrates, so CPU encoding is usually the way to go. H.265 would work better, but no service provider wants to pay that license fee to MPEG LA, and I don't really blame them.
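
If you want to judge that for yourself, a quick side-by-side with ffmpeg is enough. This is only a sketch with made-up file names and a typical streaming bitrate; h264_nvenc has to be available in your build:

# CPU encode (x264) at a typical streaming bitrate
ffmpeg -i clip.mp4 -c:v libx264 -preset veryfast -b:v 4500k -maxrate 4500k -bufsize 9000k -c:a copy cpu_x264.mp4
# GPU encode (NVENC) at the same bitrate
ffmpeg -i clip.mp4 -c:v h264_nvenc -preset hq -b:v 4500k -maxrate 4500k -bufsize 9000k -c:a copy gpu_nvenc.mp4

Compare the two at 1080p and decide for yourself whether the NVENC output is acceptable at your bitrate.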
AMD ReLive has excellent graphics and terrible sound. Unusable. It does not have a volume setting.
It is a joke, actually. No one I know of is using it for YouTube recordings or live streaming.
AMD ReLive is not intended for real use - it is a tech demo, nothing more.
 
Oh, you mean in a dual card but not SLI setup? Yeah that'd likely work well if you had two 1650's, but then for that kind of money you should just get a more powerful single GPU.
...
We're talking about encoding features with mention of streaming. Most streamers are going to be gaming on the same card they're streaming from. So yeah, do the simple math.

Is it uncommon to run an older card alongside a newer one? I run a 660ti and a 1080 in the same machine.
 
Is it uncommon to run an older card alongside a newer one? I run a 660ti and a 1080 in the same machine.

Back in the day, it was a little more common, especially with PhysX. But, even then, it wasn't that common.

But now, with hardware accelerated PhysX basically dead, unless you have a very niche reason for having the second card in there, most people don't bother. Because it's really just a waste of power to have the second card sitting in the system doing nothing.

That's what I mean, let the gaming card game, use a cheaper one for encoding, get more FPS from the gaming card.
How would the data be sent?

There is a very minimal performance hit when encoding and gaming at the same time. That's the point of the NVENC encoder being built into the GPU. It uses its own hardware on the GPU to do the encoding, so it doesn't really take from game performance.

That's also why there are different load statistics for 3D, Decode, and Encode. They are all using different parts of the GPU.

So having a second weaker GPU in the system is really just wasting money that probably could have been better spent elsewhere, like on a better CPU for when you have/want to do software encoding.
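
If you want to watch those separate counters yourself, nvidia-smi can print them once per second; the utilisation set covers the SM, memory, encoder and decoder engines (the exact output format may vary a bit by driver version):

# Per-second utilisation: sm (3D), mem (memory controller), enc and dec engines
nvidia-smi dmon -s u

Kick off an NVENC recording and the enc column climbs while the sm column stays wherever the game puts it.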
 
Hmm... and how many PC users are gaming on them?
Be careful with arguments like that. ;-)

But seriously, quite a lot. In fact most of us stream. :)
Keep in mind all kinds of remote desktops are basically streaming (RDP, Citrix, VMWare, Teamviewer and so on).
Have you ever used the "Project" function in Windows? How do you think that works? :p
We also stream video between devices we own (NAS to TV, PC/Mac to phone etc) - we just don't think about it. We click "show on TV" and it just magically works - like computers should :).

And yeah, the game streaming phenomenon is also quite significant.

There was also a time when people actually kept movies on drives and consciously encoded them all the time for their phones etc. I think this is a dying skill at the moment - it moved to the cloud mostly.

Most people stream unconsciously. Computer chooses the technology for them.

Now, the actual support for NVENC is another matter. By default everything looks for a hardware encoder in the CPU/SoC.
But Citrix and VMware remote desktops already support NVENC (datacenter software - makes sense).

Really, you'll criticize Nvidia because they've introduced a very advanced encoding feature and it's not available in the cheapest GPU. Oh my...

Hmh well granted I haven't used Windows for over ten years now, so I'm not the best person to have a say on that matter... I occasionally use nvenc with ffmpeg, but that is very rare. Maybe I'm just too old for today's standards.
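
For anyone else poking at it from the command line, it's easy to check whether your ffmpeg build exposes the NVENC encoders at all before worrying about which NVENC generation your card has:

# List the NVENC encoders compiled into this ffmpeg build
ffmpeg -hide_banner -encoders | grep -i nvenc

If h264_nvenc / hevc_nvenc show up there, the rest comes down to what the driver and the chip's encoder block actually support.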
 
Back in the day, it was a little more common, especially with PhysX. But, even then, it wasn't that common.

But now, with hardware accelerated PhysX basically dead, unless you have a very niche reason for having the second card in there, most people don't bother. Because it's really just a waste of power to have the second card sitting in the system doing nothing.

Makes sense. I generally use the 1080 for compute and the 660ti is so I can still use my computer. I suppose it's not that relevant to this discussion anyhow, cards that old don't support either encoding standard.
 
Hmh well granted I haven't used Windows for over ten years now, so I'm not the best person to have a say on that matter... I occasionally use nvenc with ffmpeg, but that is very rare. Maybe I'm just too old for today's standards.
Nothing I've mentioned is exclusive to Windows. You can have similar streaming functions in Linux: remote desktops, video streaming to TV, sharing screen over WLAN etc.
Windows is simply easier to use.

If you don't use anything like that, it just means you use your PC in a more traditional way, like all of us did 10-20 years ago. Nothing wrong with that, if it keeps you productive and happy. ;-)
 
You use memory bandwidth while encoding, so there is always a performance loss; how much depends on the GPU, the game, and the encoding settings.
 
You use memory bandwidth while encoding, so there is always a performance loss; how much depends on the GPU, the game, and the encoding settings.

And that's why I said the performance hit is minimal instead of non-existent. But the memory controller load, which measures how much the memory bus is being used, only goes up about 1% when encoding H.265 1080p using NVENC. It's very minimal, and you probably won't even notice it. There's also a little extra PCI-E bus load, but again, it's so minimal you won't notice it.
 
And that's why I said the performance hit is minimal instead of non-existent. But the memory controller load, which measures how much the memory bus is being used, only goes up about 1% when encoding H.265 1080p using NVENC. It's very minimal, and you probably won't even notice it. There's also a little extra PCI-E bus load, but again, it's so minimal you won't notice it.
Then I'm speaking from ignorance. I know how encoding affects IGPs and small cards, but I have no idea how it performs on high-end ones.

I never had audio problems with ReLive, and as notb said, my card is the oldest of the bunch.

Anyway, back on topic. I hope this card gets a price reduction to make up for how bad it turned out to be. I was waiting for it to see if I could finally jump to the greener side again; looks like I will just stay in APU territory for the future.
 
Pff. How is that better than what Nvidia does?
R9 390X - launched June, 2015: VCE 2.0
R9 370X - launched August, 2015: VCE 1.0
R9 380X - launched November, 2015: VCE 3.0
Would you like to know the GCN version corresponding to each of these cards? If you don't know, here's a good site for starters ~ GPU Database
 
Because it's really just a waste of power to have the second card sitting in the system doing nothing.
Unless you're using card A for one purpose and card B for another purpose at the same time, IE one card for gaming and one for stream processing. GPU affinity is a thing..
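
And at least with ffmpeg you can pin the encode to a specific card: the NVENC encoders take a gpu index option. A sketch, assuming the second card is NVENC-capable and shows up as device 1 (file names made up):

# Encode on the second NVIDIA GPU (index 1) while the first one runs the game
ffmpeg -i capture.mp4 -c:v h264_nvenc -gpu 1 -b:v 6M -c:a copy stream.mp4

OBS exposes a similar GPU selector in its NVENC settings, if I remember right.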
 
Unless you're using card A for one purpose and card B for another purpose at the same time, IE one card for gaming and one for stream processing. GPU affinity is a thing..

Then the second card isn't sitting there doing nothing. Using the second card for stream encoding is one of those niche use cases I was talking about (granted, also one that I pointed out makes little sense).

I made a whole statement. Quoting part of it and arguing with just the part is just being argumentative for no reason.
 
Would you like to know the GCN version corresponding to each of these cards? If you don't know, here's a good site for starters ~ GPU Database
Let him be, fanboys are just fanboys.
 