Thursday, February 18th 2021

NVIDIA Announces New CMP Series Specifically Designed for Cryptocurrency Mining; Caps Mining Performance on RTX 3060

This is a big one: NVIDIA has officially announced a new family of products designed specifically to satisfy the demand coming from cryptocurrency mining workloads and farms. At the same time, the company has announced that the RTX 3060 launch driver will include software limitations for cryptocurrency mining workloads, specifically targeting Ethereum mining, essentially halving the maximum theoretical hashrate the hardware could otherwise achieve. The new family of products, termed the CMP (Cryptocurrency Mining Processor) series, will carry HX branding and will be available in four tiers: 30HX, 40HX, 50HX, and 90HX. These products will not have any display outputs, and are therefore unsuitable for gaming scenarios.

NVIDIA's stance here is that the new products will bring some fairness to the overall distribution of its GeForce graphics cards, which are marketed and meant for gaming workloads. The new cryptocurrency-geared series will be distributed by NVIDIA authorized partners, including ASUS, Colorful, EVGA, Gigabyte, MSI, Palit, and PC Partner (more may be added down the line). There is currently no information on what silicon actually powers these graphics cards. Of course, the success of this enterprise depends on the driver restrictions not being limited to the RTX 3060 - it isn't clear from NVIDIA's press release whether other RTX 30-series graphics cards will see the same performance cap. Even if NVIDIA did release such drivers, however, cryptocurrency miners would simply opt not to update to them. So it is possible that NVIDIA will release revisions of the RTX 3090, RTX 3080, RTX 3070, and RTX 3060 Ti with silicon changes that only work with the latest GeForce drivers - after allowing the channel to move all of its existing, cryptocurrency-enabled stock.
The other factor is, of course, pricing: miners will always look for the best price/performance ratio, even more so than gamers. As such, one can imagine a scenario where, were NVIDIA to add a premium to these products based on their cryptocurrency mining focus, miners would still opt for GeForce products. The 30HX and 40HX will be made available in Q1 of this year, while the more powerful 50HX and 90HX will only hit retail come Q2. NVIDIA has finally decided to take the matter into its own hands, in a move that not only backs the company's general "we're for the gamers" stance with more than words, but also insulates it from lawsuits targeting any possible inclusion of mining sales under its gaming division financials. I'll allow myself some emotion now: finally!
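The price/performance point above can be made concrete with some back-of-the-envelope arithmetic. All figures below (card price, hash rate, revenue per MH/s) are purely illustrative assumptions, not NVIDIA or market data:

```python
# Illustrative sketch: how a ~50% hash-rate cap changes a miner's payback period.
# Every number here is a hypothetical assumption for the sake of the arithmetic.

def payback_days(card_price_usd: float, hashrate_mhs: float,
                 revenue_per_mhs_per_day: float) -> float:
    """Days needed to recoup the card's cost at a given hash rate."""
    daily_revenue = hashrate_mhs * revenue_per_mhs_per_day
    return card_price_usd / daily_revenue

# Hypothetical uncapped card: $400, 48 MH/s, $0.10 revenue per MH/s per day.
uncapped = payback_days(400, 48, 0.10)
# The same card with the driver cap halving the hash rate to 24 MH/s.
capped = payback_days(400, 24, 0.10)

# Halving the hash rate at the same price doubles the payback period.
assert capped == 2 * uncapped
```

In other words, the cap only deters miners insofar as the CMP parts' dollars-per-MH/s come out ahead; if NVIDIA prices the HX cards at a premium, the incentive to grab (or unlock) GeForce cards remains.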

The NVIDIA Press Release follows:
We are gamers, through and through. We obsess about new gaming features, new architectures, new games and tech. We designed GeForce GPUs for gamers, and gamers are clamoring for more.

Yet NVIDIA GPUs are programmable. And users are constantly discovering new applications for them, from weather simulation and gene sequencing to deep learning and robotics. Mining cryptocurrency is one of them.

With the launch of GeForce RTX 3060 on Feb. 25, we're taking an important step to help ensure GeForce GPUs end up in the hands of gamers.

Halving Hash Rate
RTX 3060 software drivers are designed to detect specific attributes of the Ethereum cryptocurrency mining algorithm, and limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent.

That only makes sense. Our GeForce RTX GPUs introduce cutting-edge technologies — such as RTX real-time ray-tracing, DLSS AI-accelerated image upscaling technology, Reflex super-fast response rendering for the best system latency, and many more — tailored to meet the needs of gamers and those who create digital experiences.

To address the specific needs of Ethereum mining, we're announcing the NVIDIA CMP, or, Cryptocurrency Mining Processor, product line for professional mining.

CMP products — which don't do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don't meet the specifications required of a GeForce GPU and, thus, don't impact the availability of GeForce GPUs to gamers.

For instance, CMP lacks display outputs, enabling improved airflow while mining so they can be more densely packed. CMPs also have a lower peak core voltage and frequency, which improves mining power efficiency.

Creating tailored products for customers with specific needs delivers the best value for customers. With CMP, we can help miners build the most efficient data centers while preserving GeForce RTX GPUs for gamers.
Sources: NVIDIA; thanks BTA for the inputs!

118 Comments on NVIDIA Announces New CMP Series Specifically Designed for Cryptocurrency Mining; Caps Mining Performance on RTX 3060

#76
Totally
Didn't they do this before and fail because miners flash their BIOS anyway, bypassing whatever trickery NVIDIA put in place to limit mining?
Posted on Reply
#77
Caring1
W1zzard
Checks in the driver are so easy to circumvent, won't take experienced people more than a day
Soldering on video outputs to the card would take longer.
They brought out mining cards previously, such as 1080-based ones, with no display outputs or at best one, but apparently those ports were covered and didn't work with the specific BIOS installed on the card.
I say make all mining cards headless and cheaper so miners snap those up instead.
Posted on Reply
#78
qubit
Overclocked quantum bit
Can't say that I agree with gimping the RTX 3060's mining performance, and someone will find a way around the blocks anyway.

Releasing mining-specific cards is, in general, a good idea in the current situation, but they will have to be priced very cheaply to be attractive to miners, since the cards have no resale value afterwards.

The best idea of all, though, would be to have enough capacity to simply make enough cards for everybody and forget about cut-down mining cards. Looks like real-world realities prevent this, though. :ohwell:
Posted on Reply
#79
thesmokingman
msimax
Just looked up RVII prices on eBay -hugs my Radeon VII- :laugh:
It's insane right? I sold mine recently which paid for a lot of the cost of a 3090.
Posted on Reply
#80
HisDivineOrder
Feels like, to me, that NVIDIA is testing out this whole "let's spin cards out for mining as their own line, Quadro-style" idea with driver lockouts on these lower-end cards. Then they can release the next generation of cards with hardware-based lockouts and drivers that have been fully broken, locked down again, broken, and relocked over several iterations.

That way, they can release a new product stack, perhaps the Super line, and prevent this situation from ever happening again. I'm fine with that. Let GPU mining die already; it's been a pox upon the industry for way too long. Go build an ASIC and leave gamers in peace. And if you think it's illegal to differentiate your product stack arbitrarily, well, Intel's been doing it for years, and NVIDIA has had Quadro and Tesla. It's not illegal in any way, and it's the way out of this GPU mining mess.

And I can't wait till they lock people out. I wish they could do it right this second (or even at the 3000 series launch), but unfortunately that genie's already out of the bottle. Hopefully, next product stack they can get it fixed.
Posted on Reply
#81
Bruno_O
kayjay010101
Huh? Your comment is genuinely incomprehensible to me; I don't understand what you're trying to say.
BTW, the 3070 100% does not need more than 8GB. I have one and have played at 4K, rarely getting above 7GB in new AAA titles. At 1440p it's absolutely enough.
With low-quality textures, I guess.
My 6800 16GB has used up to 15(!) GB playing SOTR at 4K ultra + RT ultra + MSAA; many other games here use above 10GB at 4K with everything set to ultra.
Buying a new, expensive card and having to dial down picture quality so it doesn't burst the 8/10GB limitation doesn't make sense to me.
Posted on Reply
#82
TheoneandonlyMrK
HisDivineOrder
Feels like, to me, that NVIDIA is testing out this whole "let's spin cards out for mining as their own line, Quadro-style" idea with driver lockouts on these lower-end cards. Then they can release the next generation of cards with hardware-based lockouts and drivers that have been fully broken, locked down again, broken, and relocked over several iterations.

That way, they can release a new product stack, perhaps the Super line, and prevent this situation from ever happening again. I'm fine with that. Let GPU mining die already; it's been a pox upon the industry for way too long. Go build an ASIC and leave gamers in peace. And if you think it's illegal to differentiate your product stack arbitrarily, well, Intel's been doing it for years, and NVIDIA has had Quadro and Tesla. It's not illegal in any way, and it's the way out of this GPU mining mess.

And I can't wait till they lock people out. I wish they could do it right this second (or even at the 3000 series launch), but unfortunately that genie's already out of the bottle. Hopefully, next product stack they can get it fixed.
The same hardware they use to mine also powered the computational quest for a cure for the 'Rona (and other diseases), with the equivalent of the entire Top500 supercomputers combined, or more.
Now, I'm a bit pissed that I'm still on an old GPU too, but calm down: unless you game 16 hours a day, is the butthurt warranted? If you're just pissed that you don't have a new GPU for a few hours a night, only to leave it sitting switched off the rest of the time, then chill; some of us are trying to use our GPUs for more.
Posted on Reply
#83
Chrispy_
Dristun
Yeah, cool, I bet there will be custom drivers to remove the limits within weeks, if not days. Also, they're not making extra GPUs, right? They're just carving another product line out of the silicon they've already ordered. I'm happy to be wrong on this, but methinks nothing will change drastically because of this move.
The only entity benefiting from this is NVIDIA: once the crypto bubble bursts, every mining card without a display output is useless to anyone else, and so cannot be sold back into the used market.

This is a 100% NVIDIA-benefiting move, and it hurts the consumers who, having been unable to buy gaming cards for six months, are then also deprived by these mining-only models of the wave of cheap graphics cards that comes at the end of the bubble. Adding display outputs costs very little; leaving them out definitively restricts what the card can be used for.
Posted on Reply
#84
DeathtoGnomes
I'm here for the popcorn! :D

Can't wait to see the pricing and how long supply will last, or whether this will be another paper launch.
Posted on Reply
#86
R-T-B
They need a new silicon spin (ideally one with actually better performance per watt in crypto workloads) or this is dead on arrival.
Posted on Reply
#87
chodaboy19
What about gamers that want to mine part-time?
Posted on Reply
#88
Chloe Price
Bruno_O
With low-quality textures, I guess.
My 6800 16GB has used up to 15(!) GB playing SOTR at 4K ultra + RT ultra + MSAA; many other games here use above 10GB at 4K with everything set to ultra.
Buying a new, expensive card and having to dial down picture quality so it doesn't burst the 8/10GB limitation doesn't make sense to me.
VRAM allocation and VRAM usage aren't the same thing. I just switched from a 980 Ti to a 1080 Ti, and Afterburner shows that more VRAM is being used even if I play a game which ran perfectly fine on the 980 Ti and its 6GB.
Posted on Reply
#89
PapaTaipei
I wonder if this makes any sense, considering miners also make money by reselling GPUs after they have been used to death...
Posted on Reply
#90
Chloe Price
PapaTaipei
I wonder if this makes any sense, considering miners also make money by reselling GPUs after they have been used to death...
They've already done this for years?
Posted on Reply
#91
Caring1
Rakhmaninov3
Xbox ftw lol
Good luck buying one at retail prices; scalpers got their claws into them also.
Posted on Reply
#92
Reverb256
These are going to have to be very inexpensive.
Posted on Reply
#93
Adc7dTPU
Nvidia
limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent.
With CMP, we can help miners build the most efficient data centers while preserving GeForce RTX GPUs for gamers.
A step in the right direction! Now I can finally get a 1080 Ti, with its 1GB more VRAM, close to MSRP; I'll consider price/perf (the 6700 XT launches on March 18th) before I do so, though. All that said, way to go NVIDIA!
Raevenlord
I'll allow myself some emotion now: finally!
Indeed!
Posted on Reply
#94
kayjay010101
Bruno_O
With low-quality textures, I guess.
My 6800 16GB has used up to 15(!) GB playing SOTR at 4K ultra + RT ultra + MSAA; many other games here use above 10GB at 4K with everything set to ultra.
Buying a new, expensive card and having to dial down picture quality so it doesn't burst the 8/10GB limitation doesn't make sense to me.
1. Allocated does not mean used. I doubt a 2+ year old game uses 15GB in reality when brand new AAA open-world games barely use 8GB at 4K. Hell, when I play Black Ops 2, which came out 9 years ago, my 3090 reports 23GB used! Does that mean the people of 2012 were being shafted with their maximum of 4GB on the 690, when in reality games needed 24GB? Or could it perhaps be the COD games just allocate as much VRAM as they can? Nope, definitely the former...
The only way we could check if SOTR really does use 15GB is to have two cards with identical performance but different VRAM amounts (8GB/16GB, for example) and see if the game performs any worse when the VRAM is cut. Or perhaps there is a program to artificially fill VRAM without performance degradation, so you can effectively 'emulate' 12GB/8GB/4GB, etc. on a 16GB card? I'd be interested to see that comparison, as that's the only way to objectively measure it.

2. I was talking about 1440p, so a comparison to 4K is moot.

3. NVIDIA will use different compression algorithms so that 8GB will go further than previously. Also DirectStorage will alleviate a lot of VRAM usage as more data can be kept on SSDs instead of the VRAM.
Posted on Reply
#95
enxo218
double dipping for moar money while pretending to care about gamer card plight
Posted on Reply
#96
bug
kayjay010101
1. Allocated does not mean used. I doubt a 2+ year old game uses 15GB in reality when brand new AAA open-world games barely use 8GB at 4K. Hell, when I play Black Ops 2, which came out 9 years ago, my 3090 reports 23GB used! Does that mean the people of 2012 were being shafted with their maximum of 4GB on the 690, when in reality games needed 24GB? Or could it perhaps be the COD games just allocate as much VRAM as they can? Nope, definitely the former...
The only way we could check if SOTR really does use 15GB is to have two cards with identical performance but different VRAM amounts (8GB/16GB, for example) and see if the game performs any worse when the VRAM is cut. Or perhaps there is a program to artificially fill VRAM without performance degradation, so you can effectively 'emulate' 12GB/8GB/4GB, etc. on a 16GB card? I'd be interested to see that comparison, as that's the only way to objectively measure it.

2. I was talking about 1440p, so a comparison to 4K is moot.

3. NVIDIA will use different compression algorithms so that 8GB will go further than previously. Also DirectStorage will alleviate a lot of VRAM usage as more data can be kept on SSDs instead of the VRAM.
#3 DirectStorage will not substitute SSD for VRAM. It will let games load data faster from the disk, but the latency will remain the same.
Posted on Reply
#97
Chrispy_
enxo218
double dipping for moar money while pretending to care about gamer card plight
QFT
Posted on Reply
#98
kayjay010101
bug
#3 DirectStorage will not substitute SSD for VRAM. It will let games load data faster from the disk, but the latency will remain the same.
Less data needs to be preloaded into VRAM, which will lead to lower VRAM usage with DirectStorage. If more texture data can stay on the drive until it's needed, the VRAM has to cache less, and that means more room for other textures, or lower VRAM usage overall.
Posted on Reply
#99
bug
kayjay010101
Less data needs to be preloaded into VRAM, which will lead to lower VRAM usage with DirectStorage. If more texture data can stay on the drive until it's needed, the VRAM has to cache less, and that means more room for other textures, or lower VRAM usage overall.
Rendering a scene will still use the same amount of assets. This will simply result in faster initial loading times and possibly improved texture streaming (which has already been implemented with various degrees of success). Less pressure on VRAM? Most likely. Significantly less? My money's on "no".
Posted on Reply