
Dual Slot GPUs

System Name: Sierra
Processor: Core i5-11600K
Motherboard: Asus Prime B560M-A AC
Cooling: CM 212 Black RGB Edition
Memory: 64GB (2x 32GB) DDR4-3600
Video Card(s): MSI GeForce RTX 3080 10GB
Storage: 4TB Samsung 990 Pro with Heatsink NVMe SSD
Display(s): 2x Dell S2721QS 4K 60Hz
Case: Asus Prime AP201
Power Supply: Thermaltake GF1 850W
Software: Windows 11 Pro
Why is it so, so hard to find true dual slot video cards!? Literally the only higher end GPUs I see online that actually use only two slots and don't interfere with the third slot (which I happen to have a USB-C PCIe card in) are used OEM cards that came out of Dell, etc. prebuilts.
 
Why is it so, so hard to find true dual slot video cards!? Literally the only higher end GPUs I see online that actually use only two slots and don't interfere with the third slot (which I happen to have a USB-C PCIe card in) are used OEM cards that came out of Dell, etc. prebuilts.
There are 3060s and 6600s that are dual slot.
 
My little RTX A2000 is a dual slot card. With that said, the reason dual slot cards are becoming rarer, especially at the high end, is that modern cards are getting more and more power hungry and need better cooling. A dual slot cooler simply doesn't cut it any more. For dual slot cards you need to look at the mid to low end.

It's funny how things have changed on high-end cards.

EVGA GTX 1080 Ti FTW3 Gaming = dual slot card
EVGA RTX 3080 = 3 slot card
ASUS RTX 4090 = 4 slot card
All high-end cards, but they just keep getting bigger every gen, it seems.
 
3080 (Ti) Founders Edition?
 
My little RTX A2000 is a dual slot card. With that said, the reason dual slot cards are becoming rarer, especially at the high end, is that modern cards are getting more and more power hungry and need better cooling. A dual slot cooler simply doesn't cut it any more. For dual slot cards you need to look at the mid to low end.

It's funny how things have changed on high-end cards.

EVGA GTX 1080 Ti FTW3 Gaming = dual slot card
EVGA RTX 3080 = 3 slot card
ASUS RTX 4090 = 4 slot card
All high-end cards, but they just keep getting bigger every gen, it seems.
The bit that gets me is that a big brute of a card like a 4090 is just as skinny without its cooler as a little tiddler of a low profile card without its cooler, just with a bigger surface area. Some components might stick up a little more, but that's it; the PCBs will be a similar thickness.
 
My little RTX A2000 is a dual slot card. With that said, the reason dual slot cards are becoming rarer, especially at the high end, is that modern cards are getting more and more power hungry and need better cooling. A dual slot cooler simply doesn't cut it any more. For dual slot cards you need to look at the mid to low end.

It's funny how things have changed on high-end cards.

EVGA GTX 1080 Ti FTW3 Gaming = dual slot card
EVGA RTX 3080 = 3 slot card
ASUS RTX 4090 = 4 slot card
All high-end cards, but they just keep getting bigger every gen, it seems.

Even this isn't wholly accurate. Announcements like the Ada A6000 and the current A6000 (3090 Ti silicon) are blower style 2 slot cards. Though you'll have to want to spend $5k on one.
 
The bit that gets me is that a big brute of a card like a 4090 is just as skinny without its cooler as a little tiddler of a low profile card without its cooler, just with a bigger surface area. Some components might stick up a little more, but that's it; the PCBs will be a similar thickness.
It's the cooling that's needed. The more power used, the bigger the cooler needs to be.

Even this isn't wholly accurate. Announcements like the Ada A6000 and the current A6000 (3090 Ti silicon) are blower style 2 slot cards. Though you'll have to want to spend $5k on one.
Look at the power consumption. A-series cards are better power optimized, or better binned GPUs, than RTX gaming cards. The A6000 is rated at 300 W while the 3090 is 350 W and the 3090 Ti/4090 is 450 W stock. Then add the max power target of up to 600 W for the 4090. I'm not sure whether the A6000's power target can be raised, though. But my point is that the 4090 can technically consume up to twice the power of the A6000 if it's allowed to, and that needs a big lump of a cooler or water cooling.
 
It's the cooling that's needed. The more power used, the bigger the cooler needs to be.


Look at the power consumption. A-series cards are better power optimized, or better binned GPUs, than RTX gaming cards. The A6000 is rated at 300 W while the 3090 is 350 W and the 3090 Ti/4090 is 450 W stock. Then add the max power target of up to 600 W for the 4090. I'm not sure whether the A6000's power target can be raised, though. But my point is that the 4090 can technically consume up to twice the power of the A6000 if it's allowed to, and that needs a big lump of a cooler or water cooling.

Oh for sure, I was splitting hairs I guess, but I was just mentioning (poorly) that they are physically the same and have the same boost targets as the FE cards, though I can't say for sure yet for the Ada A6000s obviously. Just that it's possible. I'm honestly kind of curious to know if the coolers would bolt to vanilla PCBs, though if you're spending that much money, what's the point anymore.
 
Idle fan-stop is a staple feature now.
It isn't easy to cover all the warranty bases when you have to cool a chip that will wobble between 20 W and 250 W+ (we're talking high-end here), plus years of use with dust gathering.
Add to that the fad of very airflow-restrictive case fronts, solid side panels and, in some examples, covered tops, and you have to deal with a hot box that distracts you from the inferno inside with RGB.
Meanwhile, your card has to give you stable fps.

Dual-slot was common back when a card that almost reached 300W was considered excessive, but the 902, the CM690II and the HAF were around back then too.
 
Why is it so, so hard to find true dual slot video cards!? Literally the only higher end GPUs I see online that actually use only two slots and don't interfere with the third slot (which I happen to have a USB-C PCIe card in) are used OEM cards that came out of Dell, etc. prebuilts.
Because they need bigger heatsinks to dissipate heat. They could get away with being smaller, but then they would end up being noisy, or they could have copper-only heatsinks, but then they would cost way too much.
 
For sure, increased power consumption requires bigger cooling solutions for add-in cards.

There's probably another factor in play.

Better silicon will provide the same amount of performance at lower power levels. Or you can increase power levels for more performance. This is performance-per-watt. Higher performance-per-watt is desirable.
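
A crude way to picture the metric, with made-up numbers rather than real benchmark or TDP figures:

# performance-per-watt sketch (hypothetical figures, illustration only)
def perf_per_watt(fps, watts):
    return fps / watts

average_bin = perf_per_watt(fps=100, watts=320)  # ~0.31 fps/W
better_bin = perf_per_watt(fps=100, watts=270)   # ~0.37 fps/W: same performance at lower power
pushed_bin = perf_per_watt(fps=112, watts=320)   # ~0.35 fps/W: same power spent on more performance
print(average_bin, better_bin, pushed_bin)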

The issue in 2022 is that there is a bigger market for high performance-per-watt: the data center. All of NVIDIA's best silicon is ending up in their data center business, whose customers are far more sensitive to performance-per-watt metrics.

The average DIY PC gamer doesn't really care if their graphics card draws 300, 350, 400 watts.

Same with VRMs, memory chips, etc. These companies are binning all silicon and the best samples are going elsewhere, not into discrete graphics cards for the DIY market.

Hell, even the big system builders like HP, Dell, Lenovo might be getting better GPU silicon so they can sell a 2-slot desktop PC to the General Accounting Office or NOAA.

The CPU landscape is similar. There are OEM-only versions of CPUs that are lower powered. Sometimes the silicon is locked so it can't go above a certain power threshold because the customer in question is sensitive to energy efficiency.
 
It's the cooling that's needed. The more power used, the bigger the cooler needs to be.
Yes obviously, I know. I think you misunderstood the point that I was making. nvm.
 
For sure, increased power consumption requires bigger cooling solutions for add-in cards.

There's probably another factor in play.

Better silicon will provide the same amount of performance at lower power levels. Or you can increase power levels for more performance. This is performance-per-watt. Higher performance-per-watt is desirable.

The issue in 2022 is that there is a bigger market for high performance-per-watt: the data center. All of NVIDIA's best silicon is ending up in their data center business, whose customers are far more sensitive to performance-per-watt metrics.

The average DIY PC gamer doesn't really care if their graphics card draws 300, 350, 400 watts.

Same with VRMs, memory chips, etc. These companies are binning all silicon and the best samples are going elsewhere, not into discrete graphics cards for the DIY market.

Hell, even the big system builders like HP, Dell, Lenovo might be getting better GPU silicon so they can sell a 2-slot desktop PC to the General Accounting Office or NOAA.

The CPU landscape is similar. There are OEM-only versions of CPUs that are lower powered. Sometimes the silicon is locked so it can't go above a certain power threshold because the customer in question is sensitive to energy efficiency.
I would argue that the silicon is basically the same; nVidia just doesn't crank clocks as high for those cards, and that leads to massive power savings but only tiny losses in performance.
 
My 980 Classified is a dual slot card. Love that thing.. wish it had a bit more vram though. Maybe some extra horsepower too. Then there is my Fermi.. big triple slot triple fan dinosaur. It is only slightly larger than my 2.7 slot Ampere.

Remember when SLi was still fresh, and you were running a pair of single slot cards? I miss those days.

I do agree though, seeing a flagship card with a block on it does make it look a bit funny, but kinda sexeh too..
 
There are 2 slot cards out there, even RTX 4090s. The issue is the cooling: to keep a 400W card under 80°C you really need beefy cooling (and if it's a traditional air cooler, then the card is going to be either extra long or extra thicc).
 
I would argue that the silicon is basically the same; nVidia just doesn't crank clocks as high for those cards, and that leads to massive power savings but only tiny losses in performance.

Performance-per-watt is a curve, it's not linear. Anyone who has tried overclocking silicon should know this: CPU, GPU, memory.

Increasing your GPU power by 20% to get 5% more fps will give you a higher raw score. It also decreases performance-per-watt. NVIDIA's datacenter customers will want optimal performance-per-watt. If they need more performance, they can step up to a higher level product or just buy more compute units.
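
Spelling out that arithmetic with hypothetical numbers (any resemblance to a real card is coincidental):

# +20% power for +5% fps: raw score goes up, efficiency goes down
stock_fps, stock_watts = 100.0, 350.0
oc_fps, oc_watts = stock_fps * 1.05, stock_watts * 1.20

stock_eff = stock_fps / stock_watts  # ~0.286 fps per watt
oc_eff = oc_fps / oc_watts           # ~0.250 fps per watt
print(f"perf-per-watt change: {oc_eff / stock_eff - 1:+.1%}")  # about -12.5%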

Overclocking a graphics card is something PC gamers do because you can't buy four RTX 3050 cards for gaming in place of one 3090.

If you're not forcing more electricity into the GPU, you don't need crazy thermal solutions like 4-slot heatsinks.

Also datacenter GPUs live in air conditioned server rooms. They already have the benefit of being shepherded by IT professionals and server administrators who will focus on proper operating conditions. Acoustics aren't really a concern either.

PC gamers generally have their computers nearby, in the worst case actually on their desk. So consumer discrete graphics cards are typically operated by people who often don't put much thought into providing proper airflow and adequate ventilation and put their equipment in places like living rooms, bedrooms, and offices where acoustics are a factor.
 
My 980 Classified is a dual slot card. Love that thing.. wish it had a bit more vram though. Maybe some extra horsepower too. Then there is my Fermi.. big triple slot triple fan dinosaur. It is only slightly larger than my 2.7 slot Ampere.

Remember when SLi was still fresh, and you were running a pair of single slot cards? I miss those days.

I do agree though, seeing a flagship card with a block on it does make it look a bit funny, but kinda sexeh too..
Well, as much as I appreciate a female body with slight curves, I prefer my GPUs on the small and efficient side. :D
 
Performance-per-watt is a curve, it's not linear. Anyone who has tried overclocking silicon should know this.

Increasing your GPU power by 20% to get 5% more fps will give you a higher raw score. It also decreases performance-per-watt. NVIDIA's datacenter customers will want optimal performance-per-watt. If they need more performance, they can step up to a higher level product or just buy more units.
It might actually be worse than 20% power for 5% performance. You can read about it here:

Basically, heat output (i.e. power consumption) increases linearly with current and frequency, but goes up with the square of voltage. On top of that, if you want higher frequencies and you are really pushing it, you need increasingly more voltage to keep those speeds stable. Semiconductor efficiency also drops a bit when chips run hot, so it's not quite that simple, and the higher you go, the more problems you have.

Every new node tries to solve at least some of these problems, but then again, semiconductor companies like Intel, nVidia and AMD just like to crank clocks higher and higher with each node, so you don't get a lower absolute wattage of chips, although you may get more performance per watt at the same wattage. Unfortunately, many computer parts today are cranked to the moon and beyond, and giving up even the last 5% of clock speed can yield a 15-20% reduction in power consumption; a 10% performance loss can lead to savings in the 30-40% ballpark. If things are really that stupid, then it just makes sense for the consumer to do something about it, and it really does matter, because energy costs money and its price has gone up several times over this year. Also, like never before, we need to switch to greener energy ASAP, because burning dead liquefied dinosaurs is awful for the environment and public health, and it has turned out to be a major geopolitical risk that can start a war or ruin economies overnight.
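
For the back-of-envelope version: dynamic CMOS power roughly follows P ≈ C·V²·f, and because higher clocks usually demand more voltage, power climbs much faster than frequency near the top of the curve. A quick sketch of that relation (the voltage/frequency pairs here are invented for illustration, not measured from any real GPU):

# Dynamic power scales roughly as C * V^2 * f (classic CMOS approximation).
# The V/f points below are made up to illustrate the shape, not real GPU data.
def rel_power(voltage, freq, v0=1.00, f0=2.0):
    return (voltage / v0) ** 2 * (freq / f0)

print(rel_power(1.10, 2.2))  # ~1.33x the power for +10% clocks (which needed more voltage)
print(rel_power(0.95, 1.9))  # ~0.86x the power for -5% clocks (which allows less voltage)

That squared voltage term is why shaving the last few percent of clock speed, and the voltage that comes with it, saves a disproportionate amount of power.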

Overclocking a graphics card is something PC gamers do because you can't buy four RTX 3050s for gaming in place of one 3090.
And I wonder who killed SLI??? Anyway, most people can only afford an RTX 3060; it was the best selling GPU of the Ampere line-up. That's all nice and all, but the most popular cards people are using right now are the GTX 1060, GTX 1650 and RTX 2060, and those 3 alone account for over 17% of all gamers. Performance per dollar matters a lot, and so does performance per watt, as does the fact that buying high end cards makes little sense when the next generation can bring the same performance for half the price. You can try to justify the RTX 3090 as much as you want, but it's a dumb card and makes no sense.

And if you're not forcing more electricity into the GPU, you don't need such crazy thermal solutions.

Also datacenter GPUs live in air conditioned server rooms. They already have the benefit of being shepherded by IT professionals and server administrators who will focus on proper operating conditions. PC gamers generally have their computers nearby, in the worst case actually on their desk. So consumer discrete graphics cards are typically operated by people who often really don't put much thought into providing proper airflow and adequate ventilation and put their equipment in places like living rooms, bedrooms, and offices.
And you basically end up with skimped cooling solutions and hotter running GPUs, because quietness isn't important, and that's basically what has been happening for over a decade with pro cards. Many of them just plain suck, because they are either loud or run very hot, and since there are no aftermarket coolers, you can't just buy a Zotac or MSI version with an actually decent heatsink either. So basically that efficiency advantage only led to smaller heatsinks, not lower temps or less noise.
 
It might actually be worse than 20% power for 5% performance.

Whether the actual figure is 20%, 22%, or 26.73% isn't important.

The point is overclocking silicon is a poor strategy from a performance-per-watt perspective, especially for a sustained workload. In the context of this particular thread, you should be looking for a peak load that is near the top of that curve.

Remember that performance-per-watt doesn't address fan acoustics. Decibels are a logarithmic scale, so a 10 dB(A) difference is a tenfold increase in sound power and a 3 dB(A) difference is roughly double. So overclocking your GPU and having your fans run at 100% burns more GPU electricity (which is money) as well as more fan electricity (more money).
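
For reference, the standard conversion from a decibel difference to a power ratio, applied to those two figures:

# decibel difference -> sound power ratio: 10 ** (dB / 10)
def db_to_ratio(db):
    return 10 ** (db / 10)

print(db_to_ratio(3))   # ~2x the sound power
print(db_to_ratio(10))  # 10x the sound power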

(truncated for brevity)

And I wonder who killed SLI???
Well, it certainly wasn't the GPU makers. They had a vested interest in selling more cards obviously. The onus was on the individual game developers to make SLI work properly on each title. In some notable cases, SLI made performance worse. Properly configuring SLI for the consumer wasn't a cakewalk either.

By 2017, SLI's heyday had passed. Graphics card performance was improving to the point where rasterization could be handled by one card making SLI unattractive.

Anyway, most people can only afford an RTX 3060; it was the best selling GPU of the Ampere line-up. That's all nice and all, but the most popular cards people are using right now are the GTX 1060, GTX 1650 and RTX 2060, and those 3 alone account for over 17% of all gamers.

(truncated for brevity)

You can try to justify the RTX 3090 as much as you want, but it's a dumb card and makes no sense.
It's natural that the entry level graphics card from two generations ago is the most common one on the Steam Hardware Survey, just like there are more five year old Toyota Celicas on the road than brand new Mercedes-Benz S-Class V12 sedans.

For some people, the price of the S-Class isn't really a significant dent in their disposable income.

Remember that not everyone who buys a 3090 is going to game with it. Video games are made on PCs. And people not involved in gaming use the card too.

The thread discussion was the dearth of 2-slot cards for the DIY market. As several of us mentioned earlier, the main reason for this is increased power consumption in today's graphics card products which requires more hefty cooling solutions.

You can get a 2-slot 3090 but it will be liquid cooled, either as a hybrid with an AIO radiator (and an on-board fan for the VRM & VRAM) or as a waterblock for a custom cooling loop. I had an Alphacool full-length waterblock on a 2070 Super FE in a 2-slot NZXT H210 case. It worked great and ran much quieter than the stock cooler (which was also 2-slots thick). Most of the AIB coolers for 2070 Super were thicker than 2 slots and once I got the FE, I understood why. Two slots isn't realistically adequate for a graphics card GPU with a >225W TDP.

So, yes, one can buy an off the shelf 2-slot GPU today. It'll be super expensive and comes with the caveat that a 240mm AIO radiator will be dangling off the end. There are also a handful of entry level GPUs available in 2-slot configurations.

I have one: EVGA GeForce RTX 3050 8GB XC Gaming. It's in a build (the aforementioned NZXT H210) that I don't use for gaming but it works great.

And you basically end up with skimped cooling solutions and hotter running GPUs, because quietness isn't important, and that's basically what has been happening for over a decade with pro cards. Many of them just plain suck, because they are either loud or run very hot, and since there are no aftermarket coolers, you can't just buy a Zotac or MSI version with an actually decent heatsink either. So basically that efficiency advantage only led to smaller heatsinks, not lower temps or less noise.

Again, performance-per-watt is a different way to look at the situation. Apple does this with their iPhones. Those guys are laser focused on performance-per-watt because most of their business comes from the iPhone, and of their computers, >85% of Macs sold are notebook models.

You won't get 4090 benchmark results from a Mac but for sure their performance-per-watt metrics crush anything in the x86/x64 PC consumer market.
 
Well, it certainly wasn't the GPU makers. They had a vested interest in selling more cards obviously. It was really up to the individual game developers to make SLI work. In some notable cases, SLI made performance worse. Properly configuring SLI for the consumer wasn't a cakewalk either.
That. I personally think SLi and CF were never really worth it due to terrible scaling. You always got more performance out of a single $200 GPU than two $100 ones, not to mention that your VRAM was never doubled, either.
 
That. I personally think SLi and CF were never really worth it due to terrible scaling. You always got more performance out of a single $200 GPU than two $100 ones, not to mention that your VRAM was never doubled, either.

SLI was a thing for a handful of tinkerers, the type of people who often like to participate on PC forums. From a value perspective (cost, time to configure, etc.) SLI was very poor but the people who loved it didn't listen at the time to the naysayers.

In various discussions about GPU overclocking I've been rather dismissive of the idea of spending much time tweaking settings. However there are plenty of folks here who are still super gung ho about OC-ing their modern CPUs and GPUs.

Anyhow, most of today's two-slot GPUs end up in OEM builds (HP, Dell, Lenovo, etc.) or in professional computing products. Joe Consumer really wants more gaming performance at the expense of electricity, heat, and size. Not everyone can afford it, but look at the 4090's near instantaneous sellout and immediate scalping despite the fact that the 4090 is way more expensive than the 3090, there's a recession, and there's zero mining demand.
 
Whether the actual figure is 20%, 22%, or 26.73% isn't important.
How is it not? It just shows how far cards are cranked past the point of diminishing returns, and it's all just ridiculous.


Well, it certainly wasn't the GPU makers. They had a vested interest in selling more cards obviously. The onus was on the individual game developers to make SLI work properly on each title. In some notable cases, SLI made performance worse. Properly configuring SLI for the consumer wasn't a cakewalk either.
Not exactly correct. You see, they also have to sell their pro cards, which have really high margins; the margin on one pro card might equal 10-20 gamer cards. So they sure as heck want to move as many pro cards as possible, and SLI/NVLink actually worked beautifully there and still does. Meanwhile, SLI on consumer gaming cards just never worked right after 3dfx's original SLI. Sure, it was difficult for game devs to implement in their games, that's true. Some skimp on really basic optimization as it is, and unlike what some people think, game company margins aren't great and they're in a super volatile market, so they have to do many things to not end up upside down.

But then again, nVidia had an interest in not making SLI work well in games, because people could have just bought two cheaper cards instead of one more expensive one, or even worse, pros could use high end gaming cards in SLI for a fraction of the cost and not pay those high margins for pro cards. So I just don't think nVidia really wanted to make SLI work truly well; they made it kinda cool, but not cool enough to replace some of their other products. nVidia has also been fighting for years against people using GeForce cards instead of Quadros for work, and basically had to come up with a bunch of small things that made Quadros more sensible for work; that failed, and the culmination was outright locking down vBIOS cross-flashing and later making it physically impossible. I think nVidia secretly hating SLI is what actually happened.


Remember that not everyone who buys a 3090 is going to game with it. Video games are made on PCs. And people not involved in gaming use the card too.
And that's when you're supposed to use an RTX A or Radeon Pro card. They used to use the same hardware and still do, but their drivers are different, and Pro cards render depth in games more accurately, plus you get more vRAM and some other smaller things that could make those cards better for game dev.


The thread discussion was the dearth of 2-slot cards for the DIY market. As several of us mentioned earlier, the main reason for this is increased power consumption in today's graphics card products which requires more hefty cooling solutions.
I think it was me, lol. If you only want modern features and still decent performance, you can get something less than a BFGPU or high end SKU, especially if you only game.

You can get a 2-slot 3090 but it will be liquid cooled, either as a hybrid with an AIO radiator (and an on-board fan for the VRM & VRAM) or as a waterblock for a custom cooling loop. I had an Alphacool full-length waterblock on a 2070 Super FE in a 2-slot NZXT H210 case. It worked great and ran much quieter than the stock cooler (which was also 2-slots thick). Most of the AIB coolers for 2070 Super were thicker than 2 slots and once I got the FE, I understood why. Two slots isn't realistically adequate for a graphics card GPU with a >225W TDP.
For 225 watts it could be enough, but not for more. Anyway, you can just buy a lower tier GPU from the same gen, or just undervolt whatever you want to buy.

So, yes, one can buy an off the shelf 2-slot GPU today. It'll be super expensive and comes with the caveat that a 240mm AIO radiator will be dangling off the end. There are also a handful of entry level GPUs available in 2-slot configurations.

I have one: EVGA GeForce RTX 3050 8GB XC Gaming. It's in a build (the aforementioned NZXT H210) that I don't use for gaming but it works great.
I really hate it when people call it an "entry" level card. It sure is the cheapest Ampere card you can get, but it's great at gaming. It can do 60 fps at 1440p, usually at high, sometimes ultra, and only rarely at medium. 1080p runs at ultra with 60-100+ fps. It's a beast of a card and it's nothing like previous xx50 tier cards, which retrospectively sucked donkey's arse.
 
Because they need bigger heatsinks to dissipate heat. They could get away with being smaller, but then they would end up being noisy, or they could have copper-only heatsinks, but then they would cost way too much.

Yes, I get that, but if Dell can make a perfectly functional video card that fits in two slots, Asus and MSI and whoever else can too. They just don't.

The reference RX6800 and RTX 3070 are dual slot cards too.
I have an RX6600 already. But I need a higher performance card.
 
How is it not? It just shows how far cards are cranked past the point of diminishing returns, and it's all just ridiculous.

We're talking in general concepts here. We're not trying to prove what the actual curve looks like, which would vary from GPU to GPU anyhow, as well as from sample to sample. Whether it's 26.16% on a 3060 but 25.98% on a 6800 XT doesn't matter.

I think nVidia secretly hating SLI is what actually happened.

Well, NVIDIA got their wish; NVLink doesn't even exist on the 4090 cards.

For 225 watts it could be enough, but not for more. Anyway, you can just buy a lower tier GPU from the same gen, or just undervolt whatever you want to buy.
Absolutely; however, regardless of the wattage, it still needs to fit in a 2-slot space. Remember the thread discussion? TWO SLOTS.

You can stick a 400W graphics card in a 2-slot space. Just not any card and not most cards. But there are two slot cards in the marketplace.

I really hate it when people call it an "entry" level card. It sure is the cheapest Ampere card you can get, but it's great at gaming. It can do 60 fps at 1440p, usually at high, sometimes ultra, and only rarely at medium. 1080p runs at ultra with 60-100+ fps. It's a beast of a card and it's nothing like previous xx50 tier cards, which retrospectively sucked donkey's arse.

Entry level is a common term used in multiple industries (not just PC graphics) to describe the least expensive tier of product in a full stack. It doesn't mean "sucky". Entry level is a standard business term. It's even used for job listings.

We could be talking about food processors, digital SLRs, or fountain pens. Would you like us to call it something else? "Mass market" is a synonym. Would that make you happy?

In the context of the 30 Series, the RTX 3050 in my daily driver Windows PC is the entry level model. It replaced a Radeon RX 550 2GB, which was also the entry level model of AMD's 500 Series. They both serve their purposes. Some people game with them, others do not. If they are happy with how it performs in their particular usage case, then things are fine.

Remember that not everyone games with a discrete video card. I assure you that here in my country, there are tens of thousands of PCs sitting in government buildings, corporate offices, and research labs that will never download and run a single byte of code from Steam, the Epic Games Store, Ubisoft, whatever. Many of those are also using entry level graphics cards, just like my RX 550.

The problem is I can't keep track of which TPU participant hates which term. So the next time I bring up "entry level" in a TPU discussion, I will have forgotten how much you hate the term. Sorry in advance because I know I will use it again.
 