
Why are the new generation graphics cards such bricks?

You don't understand - that's overkill. Expensive, unnecessary and unneeded!
The advantage is being able to tone the fan noise down, I guess.
 
i want compact watercooled graphics cards :( but i ain't paying $2000 for it....
 
you can always buy the MSI Ventus version, but then don't complain about temps and noise :D
 
The biggest reason for this is the increase in GPU TDP. Remember, the GTX 1080 was the most powerful GPU of its day and consumed only 180W, and even the 1080 Ti after it drew 250W. Now those numbers are midrange.
 
Are you joking?
I have to admit this thread is a mistake. There are too many unexpected and unwanted replies here.
Please, the moderators can feel free to delete the thread.
Unwanted replies... well, next time maybe you could tell me what kind of reply you want, so I can copy-paste it for you to make it look like I wrote it. :laugh:
 
Little different answer to OP:
A lack of quality and quantity of good, useful knowledge and creative thinking in the teams of scientists and engineers doing the R&D - the kind needed to invent a qualitatively new and many times more efficient GPU architecture. I mean new and different as a whole, not incremental improvements to individual parts of architectures created years ago that are already morally and materially obsolete.
 
I already suggested adding a new metric of volume (m^3) to performance, to see if size does matter and in which direction.
Volume to noise would also be useful.

Anyway, smaller is better. It fits any case easily and is less restrictive for airflow.
I have a single 92mm fan on a 2 slot card that is about half the length of my mobo.

The new-gen bricks, getting larger generation by generation, are testimony to the gradual regression we are in.

Tear down that brick wall, pink.
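The volume metric proposed above can be sketched in a few lines of Python. The card names, dimensions and performance scores below are made up for illustration - they are not real specs:

```python
# Sketch of the proposed volume-to-performance metric.
# Dimensions (mm) and performance scores are hypothetical examples.

def volume_liters(length_mm: float, width_mm: float, height_mm: float) -> float:
    """Card volume in liters (1 L = 1,000,000 mm^3)."""
    return length_mm * width_mm * height_mm / 1_000_000

# name: (length, width, height in mm, relative performance score)
cards = {
    "CompactCard": (170, 111, 40, 100),
    "BrickCard":   (336, 140, 61, 180),
}

for name, (l, w, h, perf) in cards.items():
    vol = volume_liters(l, w, h)
    print(f"{name}: {vol:.2f} L -> {perf / vol:.0f} perf/L")
```

By this measure the compact card comes out ahead despite its lower absolute performance, which is exactly the direction such a metric is meant to expose.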
 
i want compact watercooled graphics cards :( but i ain't paying $2000 for it....

I also miss the Radeon R9 Nano - perhaps the greatest graphics card of all time. Super fast and yet super small.
It looks more and more like Nvidia flogs its silicon just to extract 1 or 2% more performance. Of course - stupid, expensive and very far from optimal.
Nvidia's recent SKUs are actually ready for the Nano treatment, given those heavily cut-down PCBs.

 
People complain about GPU sizes, yet I see tons of people putting 4090s and 4080s in SFF cases.
 
We're approaching the limits of physics and transistor/voltage reduction.

In the past you could use a more advanced node/fabrication process to improve performance basically for free, by running faster and packing in more transistors while increasing the power budget only slightly. Hardware architectures were also advancing fast (remember Maxwell) - those were low-hanging fruits.

Nowadays, architectures are advancing a lot slower, nodes provide much smaller power improvements, and if you want a decent generational uplift you have no choice but to raise the power budget significantly.

This has been discussed in depth in many publications over the past decade. I'm surprised people are asking this question.
It's very good that we are approaching that limit, because die designers will finally be forced to put all of their resources into more efficient architectures. And maybe, when they reach that limit as well, motherboard manufacturers will need to work with them to optimize things further. When that limit is reached, someone will probably invent some revolutionary way of cooling the dies.
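The power-budget point above can be made concrete with the textbook dynamic-power relation P ≈ C·V²·f. A minimal sketch - the voltages and clocks below are assumed illustrative values, not measurements from any real GPU:

```python
# Simplified dynamic-power model: P is proportional to C * V^2 * f.
# Shows why a clock bump that needs extra voltage inflates the power budget
# faster than the clock gain itself.

def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power (arbitrary units)."""
    return c_eff * volts ** 2 * freq_ghz

base = dynamic_power(1.0, 1.00, 2.0)     # baseline part
# +25% clock, but hitting it requires +5% voltage on the same node:
bumped = dynamic_power(1.0, 1.05, 2.5)

print(f"relative power: {bumped / base:.2f}x for a 1.25x clock gain")
```

Because voltage enters squared, even a small voltage bump makes power grow faster than frequency - which is why recent generational uplifts come with such large TDP increases.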
 
you don't care about those spots because they are not important.
[Citation Needed]

I never suggested it was dangerous to the silicon, just found the wording "Dead Cold" odd.
 
Nowadays, architectures are advancing a lot slower,
I don't think the reason for this is complications from using more elements, because in the meantime the performance of the supercomputers used for architectural simulation has also increased manifold. Rather, the reasons are managerial decisions about how long companies can profit from off-the-shelf architectures without incurring the expense of inventing and deploying brand-new, more efficient architectures over shorter periods of time. "Running out of useful architecture options" is more of an urban legend, given the near-infinite combinations of the tens of billions of transistors in modern GPUs.
 
I already suggested adding a new metric of volume (m^3) to performance, to see if size does matter and in which direction.
Volume to noise would also be useful.

Anyway, smaller is better. It fits any case easily and is less restrictive for airflow.
I have a single 92mm fan on a 2 slot card that is about half the length of my mobo.

The new-gen bricks, getting larger generation by generation, are testimony to the gradual regression we are in.

Tear down that brick wall, pink.
Speaking of dimensions, they should also add weight (kg/lb) to the spec sheets of graphics cards, and from there measure weight against performance, noise and thermals. The ideal graphics card would then be one that performs well, weighs little and runs cool, with an efficient heatsink and fan design to keep it both cool and light.
 
The ideal graphics card would then be one that performs well, weighs little and runs cool, with an efficient heatsink and fan design to keep it both cool and light.

So, magic GPUs then!
 
So, magic GPUs then!
While still on the subject of magic GPUs, make them < $300 while at it. :p

Point is, now that current-gen cards are pretty much bricks, we should start (re)introducing weight as a metric alongside the spatial dimensions. Mention them in spec sheets, store descriptions and the like.
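Extending the earlier volume idea, a weight-based metric could be sketched like this; the weights, scores and temperatures below are invented for illustration, not real card specs:

```python
# Sketch of a weight-to-performance metric, analogous to the volume one.
# All numbers are hypothetical examples.

# name: (weight in kg, relative performance, average load temperature in C)
cards = {
    "LightCard": (1.0, 100, 72),
    "BrickCard": (2.5, 180, 62),
}

for name, (kg, perf, temp) in cards.items():
    print(f"{name}: {perf / kg:.0f} perf/kg, {temp} C under load")
```

Listing weight in spec sheets would make this trivial to compute for real cards; today reviewers have to weigh the cards themselves.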
 
Emmm, I can still remember the years when the ARES II, GTX 590 and HD 6990 were around.
Under high pressure from competitors, GPU makers were more willing to produce high-performance products in order to pin down the competition and attract consumers. But that actually increases TDP when the technology isn't mature enough - just like those "Fermis".
 
Point is, now that current-gen cards are pretty much bricks, we should start (re)introducing weight as a metric alongside the spatial dimensions. Mention them in spec sheets, store descriptions and the like.

Every GPU's dimensions are listed - just go to the maker's webpage if the store didn't bother with it. Weight, I guess - this is not a mobile device, so it's really not very relevant. Usually weight is more an indication of quality, the same as with PSUs, for example.
 
They run cool because they are 3-4 slot bricks with plenty of cooling potential. With chunky-boy GPUs weighing 2-2.5 kg, it's no wonder they're running at 60°C in gaming workloads.
Yeah, I mean, this is pretty much the reality of the situation, and with advanced power management allowing performance to scale directly with temperature, it's not going away. The real problem is that the form factor and the standard don't make sense anymore, not that people don't want big, powerful graphics cards or CPUs. People need to start demanding something better from manufacturers and doing some actual innovation, instead of just kludging bigger and more power-hungry parts into standards they were not designed for.
 
People complain about GPU sizes, yet I see tons of people putting 4090s and 4080s in SFF cases.
Zalman Z1 Neo with a Thermalright ARO M14G and a R7 250X lolz
 
I also miss the Radeon R9 Nano - perhaps the greatest graphics card of all time. Super fast and yet super small.
It looks more and more like Nvidia flogs its silicon just to extract 1 or 2% more performance. Of course - stupid, expensive and very far from optimal.
Nvidia's recent SKUs are actually ready for the Nano treatment, given those heavily cut-down PCBs.

Here's a small card for you. ;) A friend of mine has one, and he loves it.

My point is: small cards still exist - just not in the high end, as that's where CrossFire and SLI used to be.
 