
Microsoft Partners with AMD for Next-gen Xbox Hardware

The next Xbox just needs to have Windows on it, where you can switch between gaming mode and desktop mode, just like the Steam Deck.

I'm down with that, as long as it isn't the bloated mess Windows has become and is streamlined enough to offer a desktop-like experience without all of Windows' current downsides. I'm not really holding my breath that they can achieve this, but a console needs to be plug and play or it'll be very niche. Other companies have tried console-like PCs, including Valve, and it's always gone up in flames.

That being said, other than as a low-end PC gaming replacement, I still have a hard time believing PC gamers would embrace an M$-branded Windows/console hybrid.

I'd love for Valve to take another stab at it with what they've learned from the Steam Deck; more competition is never a bad thing.
 
Insert the token presenter who has most likely never played a computer game in their life...
 
I wonder if Nintendo has an exclusivity deal for Nvidia consoles, or if AMD's ~~desperateness~~ ability to deliver value at a low cost for stakeholders was the deciding factor.
Vendors avoid Intel, at least in part because Intel wants way too much money. It's hard to believe that Nvidia would be any different. Nintendo can afford that, since they also charge way too much for hardware.

AMD does not do that. They seem more interested in dominating that market than in making a big buck on every unit sold.
 
Yeah, I don't think this would target PC gamers. It would just give console-only players some sort of PC-like experience.
 

For sure, but to be successful they'd probably need to convert some PC gamers. I doubt a hybrid console will make people want to switch from PlayStation; its market is pretty well carved out at this point, and I doubt the majority of the Nintendo market cares about either.

It's debatable, now that M$ is more of a game publisher, whether they even need it to be successful. I fully believe pride, and their CEO liking gaming, has 100% to do with them even making another console after the Xbox One/Series consoles didn't do overly well.
 
Exactly, "the updated interconnect" that was available for some years now, and AMD chose cheaper interconnect tech because it was "good enough", as said, with Zen 6 they're finally paying up for the better tech.
It's also possible that the dies for new consoles will be chiplet-based with the new IF. We don't know this at the moment, as no leaker has been able to penetrate this level of information. Such a die could be similar to the Strix Halo philosophy, but with one 12-core CCD and a bigger IOD/iGPU.

They usually develop a new interconnect for new platforms that support higher standards. In this case, they are preparing IF to support several segments: EPYC Venice with PCIe 6.0, the future Medusa Halo, and possibly future consoles. The testing vehicle for the new IF is Strix Halo. On Strix Halo, the IF area is narrower, denser, and distributed, as shown in the picture. We know this because the CCD is shorter and a tad smaller than the vanilla Zen 5 CCD. They have not disclosed details, but it most likely has to do with denser fan-out links and TSVs.
[Image: Picture1.jpg, Strix Halo CCD with the denser, distributed IF area]


EPYC-wise, the IF is usually co-developed with whatever PCIe standard is supported on a new-generation platform, and IF is usually equal to or faster than the supported PCIe. On the Zen 5 CCD, IF speed is 36 Gbps per single link, whereas PCIe 5.0 is 32 Gbps. On Zen 6, a single IF link will run at ~64 Gbps, if not more, and PCIe 6.0 on the Venice platform will also bring 64 Gbps per lane. The speed of IFOP is similar to xGMI on IFIS. Plus, specific EPYC SKUs allow GMI-Wide up to 72 Gbps, with two x16 IF links to each CCD, for specific workloads and data center clients. Such a new IF, 'GMI4', could feature in new consoles if Sony and Microsoft are satisfied that monolithic dies are not necessary. Again, we don't know this at the moment.
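
To put those per-link figures side by side, here is a minimal back-of-the-envelope sketch. All the numbers are the rumour-level estimates quoted above, not official specs, and the x16 aggregation is a simplifying assumption that ignores encoding overhead:

```python
# Per-link/per-lane speeds quoted in this post (rumour-level, not official).
links_gbps = {
    "Zen 5 IFOP (per link)":       36,
    "PCIe 5.0 (per lane)":         32,
    "Zen 6 IFOP (per link, est.)": 64,
    "PCIe 6.0 (per lane)":         64,
    "GMI-Wide (per link)":         72,
}

for name, gbps in links_gbps.items():
    # A hypothetical x16 bundle: 16 links, bits -> bytes (no overhead modelled).
    aggregate_gb_s = gbps * 16 / 8
    print(f"{name:30s} {gbps:>3} Gbps  ->  x16 ≈ {aggregate_gb_s:.0f} GB/s")
```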

There was no need for faster and more expensive IF before, as the chips are very competitive in the current market with what they have. Less is more. Besides, the cost would increase unnecessarily, and there would be more complaints that final products are even more expensive. So, do you want a good and affordable product in its class, or a prohibitively expensive product with diminishing returns? You can't have it both ways.
Note the differences between the expensive packaging used for Strix Halo (seen in hard-to-find $2,000+ devices with zero upgradability due to everything being soldered), similar in appearance to something like an ARL or Apple M-series chip with its tiles placed as close to each other as possible.
This is nonsense. Note that you pay twice as much for a similar Apple system. Note that Intel does not offer such silicon at all. Does Nvidia offer anything in this segment at this price point? No. Qualcomm? No. Anyone else? No. And somehow I hear you complaining. One can literally order the top SKU, the 395, for $1,500 from GMK right now. Any better offer from anybody else with similar silicon?

SSDs and the WiFi chip can be upgraded, and even more modules can be upgraded on the Framework Strix Halo, including different IO ports. I personally do not enjoy soldered platforms, but compare this with Apple, who will cut your clicking hand off to the tune of $1,000 if you dare to click on a 16GB extra RAM module and an extra 2TB SSD (from their store only, it should be said). It is Apple's platforms that are 100% non-upgradeable by users who just want to install standard components available on the open market, just as they do in the DIY desktop space. Apple is the most anti-consumer company in terms of selling closed platforms; there is no doubt about this. It's not surprising that their laptop segment has been stagnating in recent years, as buyers are increasingly tech-savvy and want more options to exchange components on their own, rather than pay a fortune and lose their clicking fingers in exchange for proprietary parts.

M chips with tiles? Are you talking about the Ultra chip? Apple chips are monolithic; there are no tiles.
And the cheap packaging used for the 9955X3D, with lower-density interconnects forcing further physical separation, leading to the latency issues X3D tries to mitigate, and the IF drawing idle power that cannot be lowered to what competing monolithic chips can achieve.
You act as if you rediscovered America yesterday. Monolithic chips, of course, have advantages in latency and idle power, but they also have drawbacks, such as lower yields as die size increases. Silicon production is a balancing game of economics and technological viability. There is no perfect solution; trade-offs simply need to be good enough for a product to be successful. The entire industry is slowly moving towards chiplets in an increasing number of segments, and AMD has championed chiplets to great success. Even Nvidia will have a chiplet-based GPU die on Vera Rubin Ultra.
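
The yield argument is easy to put numbers on. A minimal sketch using the classic Poisson yield model; the defect density and die sizes are illustrative assumptions, not foundry data:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson model: fraction of dice with zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001        # defects per mm^2; illustrative assumption, not foundry data
mono_area = 600   # hypothetical monolithic die, mm^2
chiplet_area = 75 # hypothetical chiplet, mm^2 (eight of them cover similar logic)

print(f"{mono_area} mm^2 monolithic die yield: {poisson_yield(mono_area, D0):.1%}")
print(f"{chiplet_area} mm^2 chiplet yield:      {poisson_yield(chiplet_area, D0):.1%}")
# A defect kills one small chiplet, not the whole 600 mm^2 part; the price
# is packaging complexity, cross-die latency, and interconnect idle power.
```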

You should look more closely into the physically close tiles on Arrow Lake and ask questions about the latency that has had a catastrophic effect on gaming performance. Physical distance between dies is not everything. Also, I will take higher idle power any time over a 275HX CPU that almost melts laptops, being allowed to run at 99 degrees while guzzling over 200W at the CPU socket.

As regards the 9955X3D, below is an almost five-hour session testing the same monster gaming laptops, one with the 'terrible' 9955X3D CPU and the other with the 275HX that, quoting you, "appears to have its tiles placed as close to each other as possible". I'd recommend watching it; it's very informative. Ryzen is on average 16% faster across 35 games while using less power: 3% faster in GPU-bound games and a whopping 55% faster in CPU-bound games.
https://www.youtube.com/watch?v=kB1APliFiYA

It amazes me that the more advanced CPU on paper, the Arrow Lake 275HX, is not able to beat Ryzen in a halo laptop, and that AMD still manages to win here. Intel folks must be furious that they paid so much for this architecture and cannot defeat AMD, even when the CPU guzzles 200W. Astonishing! There are questions to be asked here about power-to-performance scaling, and idle power is the least important one in this laptop segment.

[Image: Cinebench R23 scaling, AMD Zen 5 Strix Halo vs Arrow Lake H]

Zooming out: AMD's margin on revenue is 54% (Intel's is only 34%), they produce client chips on the much cheaper N4 node (Intel pays a fortune for N3), and somehow Intel is still not able to beat them, not only in halo laptops but across segments, and with tiles "as close to each other as possible", to use your words. Something stinks in the kingdom of Denmark... The new CEO now mandates a minimum 50% margin on any future architectures, as they have been recklessly running the company in an unsustainable way for several years in order to come across as competitive at any price. It cannot work like this in the long run. Check it out; your colleagues wrote about it.

In addition, the much faster interconnect fabric on Arrow Lake somehow manages to have latency so high that it impacts gaming and throttles Gen5 SSD speeds by 2GB/s. If I were Sony or Microsoft, putting the compatibility debate aside, I would not have the confidence to give Intel a contract to develop my new consoles at this juncture. No way, looking at the issues they have on client products.

Such issues are making hundreds of thousands of buyers switch to AMD. This trend is clearly visible in any quarterly numbers: Intel has consistently been losing market share for the last 7 years. Slowly, but consistently, going down. Their Xeons are now two generations behind in average performance. They have nothing to offer in the console segment, as their iGPU does not scale beyond smaller designs for laptops and handhelds; the Arrow Lake mega-APU project was cancelled a few years ago. They can barely release a desktop GPU in small quantities. Nvidia has nothing to offer for big consoles either; why would they bother when they earn tens of billions in data center? It's not surprising that Sony and Microsoft are not working with either of them on the development of new consoles.
 
Vendors avoid Intel, at least in part because Intel wants way too much money. It's hard to believe that Nvidia would be any different. Nintendo can afford that, since they also charge way too much for hardware.

AMD does not do that. They seem more interested in dominating that market than in making a big buck on every unit sold.

Two minutes, and I think I can clear this up.

Intel is an enormous company, and it has the management structures to prove it. They want to sell a premium product in a premium market because their stuff is now vital to industry, and their business model is high margins on the volume of product they can pump out. Given their position in the global silicon market, that's high-profit, medium-volume production.

Nvidia is damn near a clone of Intel. The difference is that their gravy train is tied to AI rather than to just putting out the best product... so they do have an expiration date. They sell at huge profit margins in medium volume, and since they don't own foundries, they are volume-limited.

AMD is the polar opposite. It's volume they want to sell, to as many people as will buy, with as much fluidity as possible. Historically, they structured themselves to accept lower profitability per unit and mop up all the sloppy seconds. MS and Sony want to drive cost down, so AMD steps up to offer them collaborative work. That puts AMD's licensed tech in a boatload of consoles that AMD doesn't have to fight for production capacity on. It's being the service end of the system rather than the production end, which gives up some profit for enormous volumes. You can be damn sure that each 50-series card makes Nvidia a profit, but the million-strong console launches make sure that AMD has a business with a long tail that is basically free money after the small initial investment of developing the hardware with MS, Sony, or the big N.


Regarding the Switch discussion... it was about how MS chose AMD for being the lowest cost. People immediately chimed in that the MS deal went to AMD because it was cost-effective, then made wild assertions about Nintendo, then wild assumptions that if Nintendo had used Tegra 3 for the console it'd have been even more expensive... and all of this ended with the reasonable conclusion that Nintendo will never pursue the best because it'd get in the way of profitability.



-to the topic at hand-
Is anyone else feeling like this might be an actually pretty awesome situation?

I'm looking at the best possible outcome here. MS releases a tablet akin to the Steam Deck, but running a stripped-down version of Windows: all the crap ejected, so we get a pure hit of concentrated "it runs even more games than Proton, without being fiddly."

Provide a central hub station that can store a boatload of games cheaply, one you can stick an old HDD into for huge storage on the cheap (paired with a small SSD for burst speeds). Hook the base station into your TV, and into the tablet with an ultra-high-bandwidth, short-range connection that doesn't pollute your existing WiFi networks and lets the base console stream your lesser-used games while the commonly used ones are downloaded to the tablet itself.

What you get is a product that is as easy as using Steam, is only as fiddly as Windows, has the ability to control your media (and report usage back to MS) without distorting it, and requires no effort to go from gaming on a TV to busting the tablet out to (presumably) jump onto some sort of public transit and enjoy on your way somewhere. I know that's not realistic for everyone, but MS is presumably trying to sell this to the upwardly mobile people renting apartments, who would absolutely be served by this, and who could maybe even buy into the ecosystem instead of just a console.

I know that's putting on rose-tinted glasses and optimism... but I can dream that maybe somebody at MS has been paying attention. At least they're actively learning from the "console will always be online" debacle and coming forward with the "we'll support all the marketplaces" logic that usually couches acceptance of this silliness.
 
It's worth noting here that you've picked the single TPU-tested card from the RDNA4 lineup capable of that efficiency, across both the models and the samples tested. The 9070 XT and 9060 XT are both considerably worse. I'd wager that if a 5070 Ti (the closest power budget to the 9070 XT) or even a 5080 were limited to 233 W to match the PowerColor Hellhound 9070, it would easily retain an efficiency advantage.

For example, I wouldn't claim Ampere was massively more efficient than RDNA2 just because the A2000 exists. It just shows that virtually all cards are pushed beyond their efficiency sweet spot, perhaps by quite a lot, and that significantly power-limiting a given GPU pumps up that efficiency considerably.
The A2000 has significantly lower clocks than other Ampere SKUs; the 3050 clocks 45% higher in TPU's game suite. The 9070 XT, on the other hand, is clocked only about 8% higher than the 9070. As for limiting a larger GPU to the same power target as a smaller GPU to surpass its efficiency: that is hardly a new technique, and it isn't useful when comparing GPUs with wildly different performance, for instance the 5070 Ti and the 9060 XT. That being said, my point was that Nvidia isn't significantly more efficient. RDNA 4, compared to its Blackwell competition, isn't outclassed in efficiency the way most AMD GPUs since the release of Maxwell have been. Nvidia is still more efficient, but it's close enough that it doesn't matter.
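
The "pushed beyond the sweet spot" effect we're both gesturing at is easy to illustrate with a toy model. A minimal sketch, assuming performance scales with roughly the cube root of board power past the sweet spot; the exponent and the 233 W baseline are assumptions for illustration, not measured TPU data:

```python
# Toy model: past the efficiency sweet spot, perf ~ power^(1/3).
# Exponent and baseline are illustrative assumptions, not measured data.

BASE_W = 233.0  # the Hellhound 9070's power limit, used as the reference point

def relative_perf(power_w: float) -> float:
    return (power_w / BASE_W) ** (1 / 3)

for watts in (175, 233, 304):  # hypothetical limits around the 9070 / 9070 XT
    perf = relative_perf(watts)
    perf_per_watt = perf / (watts / BASE_W)
    print(f"{watts:>3} W: perf {perf:.2f}x, perf/W {perf_per_watt:.2f}x")
```

Under that toy curve, dropping from 304 W to 233 W costs about 9% performance but improves perf per watt by roughly 19%, which is about the shape of the 9070-vs-9070 XT gap we're debating.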
 
There's also the point that RDNA 4 uses a better node, 4 nm, while Blackwell is on the same 5 nm as Ada.
 
Lithographically, 4 nm is essentially 5 nm. Only Nvidia and TSMC know whether it's inferior to the node AMD is using.
 
Intel is an enormous company, and it has the management structures to prove it. They want to sell a premium product in a premium market because their stuff is now vital to industry, and their business model is high margins on the volume of product they can pump out. Given their position in the global silicon market, that's high-profit, medium-volume production.
They want to sell... well said. Unfortunately, their margins have been well below 50% for three consecutive years now, contributing to their increasingly challenging situation. They slashed prices on the brand-new Xeon 6 by an unprecedented $5,000 per CPU, and it's still not selling well, as EPYCs outperform them by up to 40% in 2P systems and bring a completely different tier of performance and cost-of-ownership value. Also, another ~10,000 workers will lose their jobs in July. Intel is also volume-limited on all the components and tiles they produce at TSMC to stay competitive.
[Image: Intel revenues, 2025 Q1]

So their desire to sell premium products at high margins is a little at odds with the reality in front of us, as they face more competition than ever before. An increasing number of their consumer CPUs in the DIY market are becoming dirt cheap too, due to reduced interest from buyers, for a variety of reasons. They don't even want to sell Battlemage GPUs in any reasonable volume, which concurs with your observation that products without high margins are not promoted. Finally, they have not shown ANY evidence that their consumer integrated graphics could scale to a living-room console level of performance.

In this context, despite Intel bidding for a new console, they were not in a position to convince either Sony or Microsoft. A lost opportunity to be in more than one hundred million households in the future.

On the other hand, AMD has had very healthy and consistent margins in recent years, gradually growing its presence in all silicon segments. This is one of their key strengths: a silicon offering that spans from lower-margin markets to high-margin markets. The new console adventure may create a new tier in home gaming, with a premium design rumoured to cost close to ~$1,000, which is a significant shift from current offerings. In addition to entry and standard consoles, we might get a 'halo' console tier too next time. The entry device will target 4K/60 Hz, the standard console will probably target 4K/90-120 Hz, and the top model anything above that with upscaling. TV manufacturers are already preparing and releasing a growing number of 4K/144-165 Hz models, with more 4K/240 Hz models coming soon, so the top halo console will need to be able to push above 120 Hz in some popular games. Monitors are already ready, so new consoles will also need to introduce a DP port alongside HDMI to give users more connectivity options, as not everyone has a big TV or plays on one.
[Image: AMD revenues, 2025 Q1]


Nvidia, as you said, has its own AI gravy train with insane margins. They have no interest in the home console market, not only due to the segment's lower margins, but predominantly because they still do not have a CPU that scales to this level of performance and competes with Zen 6 cores. They partnered with MediaTek to deliver the Arm-based CPU in N1X for laptops, so they clearly still need an industry partner to perfect their consumer silicon offering. For Nvidia, it's less about the lack of high margins on consoles and more about their current inability to offer a suitably scaled APU for home consoles. Therefore, Nvidia is irrelevant for home consoles. They can try to bid in 2030 for the 2035 generation of consoles.
[Image: Nvidia revenues, 2025 Q1]
 
The A2000 has significantly lower clocks than other Ampere SKUs; the 3050 clocks 45% higher in TPU's game suite.
Why would clock speeds be a deciding factor? The A2000 smashes the 3050 in watts per frame, and the 3050 is only 2-7% faster. They both use the same arch, yet the A2000 stands well clear of the spread of the other GeForce Ampere cards.
As for limiting a larger GPU to the same power target as a smaller GPU to surpass its efficiency, that is hardly a new technique
Only, what AMD has done here is almost exactly that: effectively cut off 12.5% of the compute, then power-limit this one SKU to 72% of the top part's power budget, resulting in 20% higher efficiency than their two other RDNA4 parts. Both of the other parts in the lineup, which go for maximum performance from their silicon, are significantly less efficient. Ergo, I do not believe the example of the 9070 alone justifies the argument that their efficiency has caught right up ("not significantly more efficient than Blackwell", in your words; plus it also depends on what you personally consider significant) when it stands apart from the pack.
Nvidia is still more efficient, but it's close enough that it doesn't matter.
I've not seen enough data to reach that conclusion; what I have seen suggests to me that Blackwell retains a ~10-15% (or perhaps even higher) efficiency advantage at the architectural level over RDNA4. You could call that insignificant by your own personal measure, or perhaps you disagree that it's even 10%.

Just to bring it all back a bit: I'm definitely not trying to be rude, and I'm not saying "I'm right, you're wrong" either. We've both looked at a variety of information (with some decent crossover) and come to different conclusions based on it, and there are bound to be differences in how we would each choose a word to describe the conclusion (i.e. the word "significant"). I think it would take a good chunk of testing (multiple cards from each arch, across a multitude of power limits and rendering loads, with highly accurate power-consumption and performance data) to draw what I'd consider a complete conclusion about the efficiency differences between RDNA4 and Blackwell. What we both have at our disposal (unless there's relevant data I'm missing) only paints part of the picture.

I appreciate hearing your viewpoint, insights, and conclusions, and I appreciate the polite discourse.
 
I've not seen enough data to reach that conclusion; what I have seen suggests to me that Blackwell retains a ~10-15% (or perhaps even higher) efficiency advantage at the architectural level over RDNA4.
Ultimately, this is not a concern for future consoles. Those will be based on the RDNA5 architecture.

A far more pertinent and exciting question is whether the console gaming APU die will still be monolithic, or whether it will move to chiplets for the first time ever. Is efficiency on chiplets mature enough for this transition?

I'd expect the iGPU to be a little better in performance than the desktop 9070 XT. That means the iGPU die, or that portion of the die, would need to be bigger than the current IOD on Strix Halo with its 40 CUs.

We will never see $1,000 worth of hardware subsidized down to $599/$699 again.
My impression is that this time around Sony and Microsoft might release three tiers of consoles: a 'halo' tier for $1,000 or so, plus standard and entry tiers.
 
Oh yeah, for a new PlayStation or Xbox it barely matters, because with relative certainty they'd be running AMD architecture anyway, and even if Nvidia had a significant power-efficiency advantage, it likely wouldn't change that at all. Multiple great reasons why have been discussed, like backwards compatibility and AMD delivering both the CPU and the GPU.

Reading through the thread, the only thing that piqued my interest was the comment about the efficiency differences, which I wanted to engage with, as I find it an interesting topic and one on which I have some thoughts and findings that seem at odds with some other popularised thinking.
 

Possibly, but I was talking more about how they used to subsidize the hardware; it would be like $1,800 worth of console hardware being subsidized down to $999. That's never going to happen again.

If the console is $1,000, they will at worst break even, but more likely make a profit on each system sold.
 
There is nothing wrong with subsidizing a product if it brings wider benefits to a gaming platform. In the case of consoles, they are primarily vehicles for selling other things, such as games and micro-transactions; they make money on services too. Consoles are not a status symbol like the iPhone, but more of a medium.
[Image: Sony PlayStation console GPUs]

The French government does the same with the TGV high-speed rail network, but on a much bigger scale. The lines and trains are subsidized and the national carrier SNCF does not make a huge direct profit, but once you connect the entire country, create a cross-border network in the EU, and get hundreds of millions of people annually whizzing along at 200 mph for tourism, private, and business purposes, everything scales beyond tracks and trains and brings wider benefits to the economy and society.
 


I'm not saying there is anything wrong with it, or that these companies shouldn't, only that they no longer do it, at least since 2013 and the Xbox One/PS4 era. The last subsidized Nintendo console was probably the GameCube; they went with low-end hardware every generation after that.
 
The hardware was roughly OK for the price in 2020. Sure, they could have included a DP port, a higher-capacity SSD, and whatnot, but hey, it's never perfect.
 

Yeah, it wasn't bad, but they were still essentially running a 6600 XT/6700-class GPU with a downclocked 2700 for $500: not a terrible deal, but nothing special either. The Xbox specifically used a lot of silicon just to match the PS5 (about a 17% larger SoC), and it hasn't really shown up in actual games, likely due to the Series S being the base spec and the OS not being as efficient.
 

Being completely fair, I didn't state the underlying truth: everyone wants to sell huge volumes of high-margin product. That, I guess, should be obvious. This is how you get companies that are ubiquitous for what they do, like Intel was 7 years ago. They could sell a bad chip, claim it was infinitely better than what AMD was squeezing out, and be right; AMD was merely putting out the best-priced stuff.

It's my fundamental belief that businesses are restricted to volume work at either end of the spectrum, because they cannot afford to be in the middle unless they're growing. AMD did a great job of this with their Ryzen line: it was about 80% as fast, then 90%, then just as fast, and finally the fastest silicon on the market. They did this after the absolute damp squib of Bulldozer, but they did it by always being the most affordable option. Sell volume, build trust, release a market shifter, start charging a premium, and finally get fat. Intel, unfortunately, is well past the get-fat stage and is now being eaten by the competition.

Back to the topic at hand. If Nvidia is well into its own "get fat" period off of AI, then they don't need marginal business. They are incentivized not to devalue their brand by being a console solution, because if you're looking at a 50% margin on a $10 CPU, the consumer is only getting about $5 of CPU. With a $2 flat charge out of a $10 CPU you are getting $8, so, surprisingly enough, the flat-fee part could have almost half the performance per dollar of silicon and still break even. As such, this screams of business that AMD is perfect for, just like the old PowerPC used to be Nintendo's jam when IBM absolutely had the business but could not crack the console market, despite having surprisingly competent hardware, given that they were largely their own bubble of an industry that believed itself untouchable. I personally believe that MS may have asked for quotes from Nvidia and Intel, but under such a heavy NDA that we'd never hear about it. They were probably shipped back to MS with the respective fun answers: Nvidia suggesting it would damage their brand, and Intel quoting so heavily on cost that they'd never win.
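
A quick sanity check on that arithmetic, using the hypothetical $10 CPU from the paragraph above (the prices and margin figures are the post's illustration, not real BOM data):

```python
# Hypothetical $10 CPU: how much silicon value the buyer gets under
# two pricing models (figures from the paragraph above, purely illustrative).

price = 10.00
silicon_premium = price * (1 - 0.50)  # 50% margin -> $5.00 of actual CPU
silicon_flatfee = price - 2.00        # $2 flat charge -> $8.00 of actual CPU

print(f"premium-margin model: ${silicon_premium:.2f} of CPU per ${price:.0f}")
print(f"flat-fee model:       ${silicon_flatfee:.2f} of CPU per ${price:.0f}")
# 5/8 = 62.5%: the flat-fee part could be ~40% worse per dollar of silicon
# and still match the premium part's value, i.e. the 'almost half' above.
```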


I'm also of the mind that Intel could win this business... if they didn't have people at the top who paper over their own inefficiency with the fattest margins. Remember, children: you don't have to compete with the fastest by being the most efficient, so that big-block, high-displacement engine is just as fast as the little rice burner at 6,000 rpm. The thing is, Intel probably cannot move fast enough. I know that sounds silly, but imagine them spinning up a fab, spending years getting things going, and discovering that a 10-year design time from inception to launch was just too slow to build their own fully featured console that competes in the market. It sounds silly, but Intel doesn't move quickly, and it doesn't sell its IP to other people. It'd be trying to release the Nintendo Switch (1) today, competing with consoles that are 3-5 years newer. That isn't a business plan that works for anyone but Nintendo.
 
Regarding the update with Lisa Su's commentary, AMD has obviously decided to go full steam into gaming. This means powerful mobile and console SoCs that Nvidia is not necessarily interested in making. I think AMD will win this particular battle in the GPU space, as they are motivated and have the right IP. Nvidia is more into AI, doesn't have much in the CPU department, and has been content with just selling discrete GPUs to PC enthusiasts through AIBs, which is already a contentious relationship.
 
For those for some insane reason wanting an Nvidia console: the GB10, which is 20 Arm cores plus a Thor-class GPU, is being sold for $3-4k.
Nvidia doesn't care about console-sized margins.
 
Bummer. I'd really like an Nvidia console and/or a handheld beyond the Switch 2. Microsoft is already working hard on getting Windows on Arm, with Qualcomm and Nvidia releasing mobile chipsets, so software wouldn't be a huge issue for next-gen consoles. And having the efficiency advantage of Arm and Nvidia would be a distinguishing factor over the PS6.

I wonder if Nintendo has an exclusivity deal for Nvidia consoles, or if AMD's ~~desperateness~~ ability to deliver value at a low cost for stakeholders was the deciding factor.
You forget history. Nvidia ROYALLY screwed MS on the original Xbox, cutting off supply just as it was gaining steam. They also screwed Sony on the PS3, selling them obsolete trash they promptly undercut. Neither is really willing to go back to them again.
 
First of all, I definitely appreciate your tone and honest attempt to discuss. Clock speeds matter because, beyond the threshold voltage, frequency has a roughly linear relationship with voltage; in other words, power consumption increases with roughly the cube of the frequency. This is why the A2000 is so efficient despite being a bigger GPU than the 3050: the A2000's low frequency allows it to stay at 0.75 V, while the 3050 requires 1.08 V at full load.
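
Plugging the thread's own numbers into the standard dynamic-power relation P ∝ C·V²·f makes the gap concrete. A rough sketch, assuming comparable switched capacitance for both SKUs (a simplification, since the A2000 is the bigger die):

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
# Figures from this discussion: the 3050 clocks ~45% higher than the A2000,
# at 1.08 V vs 0.75 V under load. C is assumed equal (a simplification).

f_ratio = 1.45           # 3050 clock / A2000 clock (TPU game suite)
v_ratio = 1.08 / 0.75    # 3050 voltage / A2000 voltage at full load

power_ratio = f_ratio * v_ratio ** 2
print(f"estimated dynamic power ratio, 3050 vs A2000: {power_ratio:.1f}x")
# ~3x the power for a 2-7% performance lead, per the numbers upthread.
# If V rises linearly with f, P grows with f^3: the cube law above.
```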
 

Sounds like pure Colombian cope. Nvidia has been pushing Tegra onto anyone and everyone with the slightest interest since day one, and Nintendo looks for the most cost-effective solution (the cheapest with decent performance and good battery life), as their business model is not the same as Sony's or Xbox's, where hardware is sold at a loss and the money is made back on software and services. It's a deal that just works for both parties involved. Unless AMD undercuts Nvidia with a chip that matches whatever Tegra-based part Nvidia is supplying, 'Ninvidia' isn't splitting up anytime soon.

And thinking about your comment some more, those last couple of jabs make no sense, as for AMD to do that, they would have to be desperate.
 