
ASUS Launches Single-Fan RTX 3060 12GB Phoenix Graphics Card

AMD and Nvidia have their ideas, but there's ARM/QUALCOMM, which is light years ahead of them.
PS. Scaling the number of GPU CUs isn't as bad as scaling the number of CPU cores, because GPU operations are simpler and more easily parallelized across many CUs.
The variety of tasks assigned to the CPU makes it difficult to scale and reduces efficiency, because the CPU also handles a lot of single-threaded work.
But my interest in this discussion is in GPUs, not CPUs.
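To put a rough number on the parallel-scaling point above: Amdahl's law is the usual way to illustrate why workloads with even a small serial fraction (typical of CPU tasks) stop scaling long before near-fully-parallel graphics/compute workloads do. A minimal sketch, with the serial fractions chosen purely as illustrative assumptions:

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n),
# where s is the serial fraction of a workload and n is the number of cores/CUs.

def amdahl_speedup(serial_fraction: float, units: int) -> float:
    """Ideal speedup on `units` parallel units for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / units)

# Illustrative serial fractions (assumptions, not measured values):
cpu_like = 0.10   # mixed CPU workload with 10% serial work
gpu_like = 0.001  # near-embarrassingly-parallel graphics/compute workload

for n in (8, 64, 1024):
    print(f"{n:5d} units: CPU-like x{amdahl_speedup(cpu_like, n):6.1f}, "
          f"GPU-like x{amdahl_speedup(gpu_like, n):7.1f}")
```

The CPU-like curve flattens out just under 10x no matter how many cores are added, while the near-fully-parallel workload keeps gaining from extra CUs, which is essentially the point being made above.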
I don't think the person you quoted mentioned CPUs whatsoever...

And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so? That would allow them entry into huge and very lucrative markets like consoles, gaming PCs, etc. Of course none of these come close to the sales volumes of smartphones, but smartphones also have near zero margins.

You're assuming they have some kind of magical technology that simply doesn't exist, as you're not taking into account the inherent efficiency that comes from designing for a small maximum size and overall limited layout. Smaller designs will always be more efficient than larger designs. Period. There's nothing saying that any current mobile GPU maker could match AMD or Nvidia at the 150-250W range, except maybe Apple. But given the drastic differences between mobile GPUs in power delivery, size and thus internal interconnects, VRAM interfaces and bus widths, thread/workload allocation, driver complexity, etc., etc., etc., there's no way of knowing until one of them tries.
 
And, again, if ARM and Qualcomm could scale their GPUs up to much larger sizes without sacrificing efficiency, why haven't they done so?
They don't scale it up because they make it for phones and tablets, which have a much smaller power budget than GPUs on PC graphics cards.
 
They don't scale it up because they make it for phones and tablets, which have a much smaller power budget than GPUs on PC graphics cards.
... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that was true, they could then scale these up and make massive amounts of money from new markets with relatively small investments - the architectures exist already, after all. The issue with your argument (an understandable one, as this is by no means self-explanatory) is that it overlooks why they don't scale up their designs: doing so would be expensive, difficult, and maybe not even possible, and what is an efficient design at very small sizes might not be efficient at all at larger sizes. You're the one making a new claim here - that mobile GPUs could scale up to beat desktop/server GPUs - so the burden of proof is on you. And sadly you won't be able to prove that, as it isn't as simple as you're making it out to be.
 
If that was true, they could then scale these up and make massive amounts of money from new markets
Maybe they have a gentleman's agreement from their school/student years to divide the market: desktop, laptop and workstation for the owners of Intel, AMD and Nvidia, other consumer devices for the ARM companies?
 
... That isn't a logical statement. "They make it for A" in no way precludes them from also making it for B. It's not like they have exclusivity agreements in place with... uh, the entire mobile industry. You're arguing that they have much better GPU tech and much more efficient architectures. If that was true, they could then scale these up and make massive amounts of money from new markets with relatively small investments - the architectures exist already, after all. The issue with your argument (an understandable one, as this is by no means self-explanatory) is that it overlooks why they don't scale up their designs: doing so would be expensive, difficult, and maybe not even possible, and what is an efficient design at very small sizes might not be efficient at all at larger sizes. You're the one making a new claim here - that mobile GPUs could scale up to beat desktop/server GPUs - so the burden of proof is on you. And sadly you won't be able to prove that, as it isn't as simple as you're making it out to be.
There's a simple rule I decided to apply to myself: if I can think of something that people who are actual experts in a domain couldn't think of, then it means there's something blocking it that my lack of knowledge can't grasp.

The only time big companies don't make an obvious business move is when it isn't lucrative enough to bother with (like Windows' glaring UI issues: people keep buying and using it anyway) or when they can't see the true potential of a market. But they will jump on anything lucrative.
Maybe they have a gentleman's agreement from their school/student years to divide the market: desktop, laptop and workstation for the owners of Intel, AMD and Nvidia, other consumer devices for the ARM companies?
A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking the others when they aren't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general compute cards for the datacenter.
 
A gentleman's agreement? In the tech industry, where everyone is constantly low-kicking the others when they aren't looking? :confused: ARM is also going beyond the consumer market; Qualcomm and Huawei are selling AI/general compute cards for the datacenter.
Yes, why not? I didn't write anything about AI. It didn't exist when they were students, so it can't be among the areas covered by the agreement.
 
Yes, why not? I didn't write anything about AI. It didn't exist when they were students, so it can't be among the areas covered by the agreement.
mmmh I see. Have a nice day.
 
Maybe they have a gentleman's agreement from their school/student years to divide the market: desktop, laptop and workstation for the owners of Intel, AMD and Nvidia, other consumer devices for the ARM companies?
Yeah, no, that's not how the tech industry works. Rather, it's intensely competitive, with big fish eating the small fish at every chance they get. Just look at the long history of acquisitions and mergers for the companies you mentioned.

Nvidia backed out of smartphones and ARM SoCs because of the intense competition and high price of taking part - developing those SoCs is very expensive, and having their own GPU IP wasn't enough of an advantage to keep them in the game (anticompetitive moves from QC also reportedly played a large part in this, with the technologically superior Tegra 4 barely being adopted at all). ARM, on the other hand, is expanding rapidly into the server/datacenter space; after about five years of trying and failing, they're now truly gaining ground (and are highly competitive in terms of peak performance). Check out AnandTech's recent server reviews for some info there. ARM and QC are also trying to get into the laptop market with WOA. Intel spent billions trying to get into the tablet and smartphone spaces, but ultimately lost out simply because their Atom CPU cores weren't competitive. AMD has an active ARM licence and has previously tried making an ARM server core (discontinued as it wasn't very good and Ryzen turned out to be a great success). And so on, and so on.

There are no non-compete agreements, just the realities of what is feasible and what is lucrative. The server accelerator market is certainly lucrative enough that anyone with a GPU or GPU-like chip IP could make massive amounts of money if they could make their product suitable for that. So if QC, ARM, PowerVR, or anyone else had a GPU design that could unproblematically scale up to the 200-300W range while maintaining the efficiency advantage they have in the 2-5W range, they would. As it stands, the cost of doing so would be massive for them as it would essentially necessitate a ground-up rearchitecting of their GPU architectures, and there's no guarantee whatsoever that they would be able to compete with what Nvidia and AMD are currently selling. So they don't bother. It would be a very, very, very expensive and risky gamble.
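One way to see why an efficiency lead at 2-5W wouldn't automatically survive a move to 200-300W: filling a bigger power budget with the same silicon generally means higher clocks and voltages, and under the standard first-order model dynamic power grows with C·V²·f. A toy sketch (the operating points below are invented for illustration, not measured silicon data):

```python
# First-order dynamic power model: P ~ C * V^2 * f.
# Performance is taken as proportional to clock f; reaching higher clocks
# usually also requires a higher voltage V, which is what erodes perf/W.

def relative_power(v: float, f: float, v0: float = 0.7, f0: float = 1.0) -> float:
    """Dynamic power relative to a (v0, f0) baseline operating point."""
    return (v / v0) ** 2 * (f / f0)

# Purely illustrative voltage/frequency operating points (not real silicon data):
points = [
    ("mobile sweet spot",    0.70, 1.0),
    ("desktop-class clocks", 0.90, 1.6),
    ("pushed to the limit",  1.05, 2.0),
]

for name, v, f in points:
    p = relative_power(v, f)
    print(f"{name:22s} perf x{f:.1f}  power x{p:.1f}  perf/W x{f / p:.2f}")
```

Perf/W falls off quickly at the top of the voltage/frequency curve, and that's before accounting for the wider memory interfaces, interconnects, and driver work mentioned above.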
 
the cost of doing so would be massive for them
Mmm, there are many ways to justify the cost. But my opinion is that they could begin with just one GPU model for a graphics card priced at the sweet spot for consumers. Maybe that's somewhere between today's budget and mid-range cards in performance. If they succeeded in making a card priced like an RTX 3050 Ti, with the power consumption of a GTX 1650 and the teraflops of an RTX 3070... and with the manufacturing cost of a GT 1030 :D
Not possible, or just a naughty example of a comparison I wrote?
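For a sense of how far that wish list is from today's cards, here's a quick back-of-the-envelope perf-per-watt comparison. The board-power and FP32 throughput figures are approximate public specs, so treat the output as ballpark only:

```python
# Rough perf-per-watt comparison; board power and FP32 throughput are
# approximate public figures, so treat the result as ballpark only.
cards = {
    "RTX 3070 (actual)":       {"tflops": 20.3, "watts": 220},
    "GTX 1650 (actual)":       {"tflops": 3.0,  "watts": 75},
    "hypothetical card above": {"tflops": 20.3, "watts": 75},  # 3070-class throughput at 1650 power
}

for name, card in cards.items():
    gflops_per_watt = card["tflops"] * 1000 / card["watts"]
    print(f"{name:25s} ~{gflops_per_watt:4.0f} GFLOPS/W")
```

The hypothetical part would need roughly three times the perf/W of Ampere at the same throughput, which is the kind of jump that usually takes full process-node generations rather than a scaled-up mobile design.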
 
Mmm, there are many ways to justify the cost. But my opinion is that they could begin with just one GPU model for a graphics card priced at the sweet spot for consumers. Maybe that's somewhere between today's budget and mid-range cards in performance. If they succeeded in making a card priced like an RTX 3050 Ti, with the power consumption of a GTX 1650 and the teraflops of an RTX 3070... and with the manufacturing cost of a GT 1030 :D
Not possible, or just a naughty example of a comparison I wrote?
Yeah, that's not happening. Not only do none of them have the technology for that, but that would be a poor investment. If anything like this were to happen they would target the server/data center markets, not consumer gaming first. They might expand to gaming after establishing a foothold in server/datacenter, but only if they could stomach the investment needed to ensure driver compatibility with thousands and thousands of games. This would require a massive software development team with highly specialized skills and several years of development at the very least. At that point the hardware would already be obsolete. Hardware+driver mixes are a constantly moving target, and one that's extremely difficult to come close to if starting from little or nothing. Compute is much, much more straightforward, and would as such be the only way to begin. The fact that margins in those markets are much higher obviously also helps - you can easily sell the same silicon for 2-3x the consumer-equivalent price in enterprise server/datacenter markets after all. That none of these actors have started pitching future server compute accelerators tells us that their GPUs aren't likely to scale up easily - if it was easy, they would be looking to cash in on a booming and enormously lucrative market.
 
The highest-TDP low-profile cards to date have been 75W, and those were still double-wide.
Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.
[Attached images: the Palit GTS 450 and PowerColor HD 5750 low-profile cards]
 
@Logoffon that Palit card at the top is just so collectible. :cool:
 
Palit GTS 450 (106W) and PowerColor HD 5750 (86W) would like to have words with you.
Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, as the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm might actually make it harder, so a 106W card built on a 40nm process will be easier to cool than a 106W card built on Samsung's 8nm, all else being equal.
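To make the thermal-density concern concrete, here's a rough watts-per-square-millimetre comparison between the GTS 450 pictured above and a 170W RTX 3060; the die areas are approximate public figures:

```python
# Approximate power density comparison (die areas are rough public figures).
gpus = {
    "GTS 450 (GF106, 40nm)": {"watts": 106, "die_mm2": 238},
    "RTX 3060 (GA106, 8nm)": {"watts": 170, "die_mm2": 276},
}

for name, gpu in gpus.items():
    density = gpu["watts"] / gpu["die_mm2"]
    print(f"{name:24s} ~{density:.2f} W/mm^2")
```

That works out to roughly 40% more heat per unit of die area, on top of about 60% more total heat to dump into the same half-height fin space.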
 
Neat! I wasn't aware anyone had made a half-height card with a 6-pin connector, as the chances of having an ATX PSU with dedicated GPU connectors in a Flex-ATX or custom half-height case were vanishingly small.

Always happy to be proved wrong, but I still don't think they're going to get a 170W cooler into the space constraints of a half-height card. I suspect the increased thermal density of Samsung's 8nm might actually make it harder, so a 106W card built on a 40nm process will be easier to cool than a 106W card built on Samsung's 8nm, all else being equal.
Yep, as I said earlier in the thread, somewhere around 120W is likely to be the feasible maximum for cooling with a balls-to-the-wall HHHL GPU. Using something like an XT60 power connector with an included PCIe adapter would even allow for a noticeable increase in fin area given the chunk taken away by the PCIe power connector :P

But given the prevalence of SFF cases supporting full-size GPUs these days, it's highly unlikely for anyone to make a GPU like this. Too bad, really.
 