
Microsoft Details Xbox Series X SoC, Drops More Details on RDNA2 Architecture and Zen 2 CPU Enhancements

Joined
Feb 11, 2009
Messages
5,404 (0.97/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Auros Elite DDR4
Cooling Tuniq Tower 120 -> Custom watercooling loop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb NVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTPC case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
imagine MS designing a standard (DX12 Ultimate) that their own console does not support lol
 
Joined
Oct 12, 2005
Messages
682 (0.10/day)
AMD (or Intel, though it's very early days for them with decent graphics) could make a monster APU with HBM on-package memory for the desktop space if they wanted; I am not sure why they don't. I guess it's because it could impact sales of GPUs, where they make decent money. Maybe the upcoming 5nm+, with its 80% density increase over 7nm, plus Micron starting to churn out HBM (which should lower the price hugely), will allow this revolution. Surely the ultimate aim for Intel/AMD APUs is to win as many market segments as possible, and that includes the high end.

I accept there will always be a few segments where APUs aren't suitable: very specialist server and AI workloads, where the ratio of CPU to GPU to memory is very different, or the ultra-high-end enthusiast market, where cost is no object and it's all about the highest performance at any wattage. But for the most part, APUs are the future of every other segment. They already have low-cost builds sewn up, and we should see the low end turn into mid-range and then, hopefully, move to the high end.

In the next few years both AMD and Intel will have the ability to build APUs with massive CPU performance, massive GPU performance, and enough stacked HBM3 to serve as unified system memory for both CPU and GPU, getting rid of slow and energy-expensive off-chip DDR entirely. This has to be where these companies are heading long term, isn't it?

So let's say in 2022, when 5nm is very mature (3nm should be here by then too), AMD could build an APU with 16 CPU cores, 120 CUs, and 32GB of HBM3. I know this sounds crazily expensive right now, but when you actually sit down and look at the costs, it's honestly the better-value route to performance.

The cost of separately putting a system like this together in 2022 would be roughly as follows:

CPU (16 Cores Zen 4) = $500
GPU (120CU RDNA 3.0 with 16GB GDDR6) = $1200
RAM (32GB DDR5) = $400

Total = $2100

So the first question is: would AMD be happy to sell this APU for less than $2100? I would say yes, very happy, considering the XSX chip is roughly half of this chip and they probably sell that to Microsoft for less than ~$100 each! OK, that is in massive quantities, and the HBM will add a lot to the cost, but nowhere near enough to get it close to $2100, so it would still be a huge-margin product for whoever makes it.

The XSX is only 360mm^2, and moving to 5nm+ increases density by 80%. So even though I am suggesting doubling pretty much everything, with HBM shrinking the memory bus and the 5nm density gain it wouldn't be a crazily big chip: certainly no bigger than 600mm^2, which is big but still very doable. It would actually be around the same size and design as AMD's Fiji chip, which launched at just $649 back in 2015.

With the densities and efficiencies afforded by the advanced 19-layer EUV process on TSMC's 5nm node, plus Zen 4's power efficiencies, plus RDNA 3.0 being 50% more power efficient than RDNA 2.0 (which is itself 50% more efficient than RDNA 1.0), I truly believe this seemingly fanciful list of specifications for an APU is actually easily doable if there is a will to make it.
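Just to sanity-check my own numbers, here's a quick back-of-envelope sketch in Python. Every input is a figure quoted in this post, so speculation rather than official specs:

```python
# Back-of-envelope check of the die-size and efficiency claims above.
# All inputs are the figures quoted in this post, not official numbers.

xsx_die_mm2 = 360        # XSX SoC on 7nm, roughly
density_gain = 1.80      # claimed 5nm+ density increase over 7nm

# Doubling "pretty much everything" doubles the transistor count;
# the density gain then shrinks the area needed to hold it.
scaled_die_mm2 = xsx_die_mm2 * 2 / density_gain
print(f"Scaled die: ~{scaled_die_mm2:.0f} mm^2")  # ~400 mm^2, well under 600

# Compounding the claimed 50% per-generation perf/W gains:
rdna3_vs_rdna1 = 1.5 * 1.5
print(f"RDNA 3 vs RDNA 1 perf/W: {rdna3_vs_rdna1:.2f}x")  # 2.25x
```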

Granted, clocks might have to be tamed a little to keep temperatures under control, as transistor density will be very high, but I would think the massive bandwidth and ultra-low latency of HBM as on-die system memory would go a long way towards making up for lower clocks.

Maybe CU count won't be the most important thing to focus on in the future. Maybe it will come down to the number of accelerators they can include on a self-contained, purpose-built, giant, high-performance APU, with ultra-low-latency, insanely wide memory as the key differentiator, allowing things not possible on a non-APU system.

I'm not saying this is a perfect fit for all use cases, but it would be amazingly powerful for a lot of gamers, workstations, programmers, high-res professional image editing, 3D rendering, etc.

These kinds of chips would be perfect for something like an MS Surface equivalent of the Apple Mac Pro in 2022.

Intel and AMD are currently locked in a fierce battle for CPU supremacy, Intel is also doing what it can to start a battle in the GPU space, and Nvidia may join the APU fight if it does buy ARM. But ultimately the victor will be the company that truly nails APUs across the majority of performance segments first.

For whatever reason we have yet to see a powerful APU come to the desktop; it's only low-cost, low-power stuff. Even though AMD seems to be doing all the groundwork to set this up, maybe it will be Intel that brings something truly special first? Intel looks to be playing around with big.LITTLE CPU cores, which lends itself to APU designs from a thermal-management standpoint, and Intel is doing its utmost to build a decent scalable graphics architecture with its Xe range. But the biggest indicator of Intel's intentions is its work on Foveros, which shows it understands that the future needs all-in-one APUs with system memory on the die. Again, it just comes down to which company brings a finished product to market first.

I guess it's just 'watch this space' for who does it first, but this is certainly the future of computing, imo. What is very interesting is that ARM-based chips are way ahead of x86 when it comes to all-in-one chip design; just look at the likes of Fujitsu's A64FX, which is an incredible chip. So x86 has to be careful not to be caught napping. If Intel or AMD don't make a super-high-end APU soon, then Apple will very quickly show what is possible using ARM, or even a RISC-V design of their own to completely end their dependence on everyone else for chips, especially if Nvidia buys ARM. Amazon is kind of making APUs already; Huawei does, Google does, Nvidia too. OK, all of these are very different chips from what I am describing above, but my point is they all have experience designing very high-performance, complex chips. With Windows already supporting ARM and emulators working pretty well now, a move away from x86 could happen very quickly if one of these massive players nails a super-powerful APU with key accelerators for the PC market.

One thing is for sure, we have an amazing few years ahead of us.

There are many things stopping these kinds of APUs from happening.

Yes, HBM is a good solution for memory bandwidth, since bandwidth is the main thing holding APUs back, but there is a lot more to overcome in the PC market before we get there. And once we are there, would the APU really be a better option than a CPU with a dedicated graphics card?
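For a sense of scale, here is a minimal peak-bandwidth sketch (theoretical peaks from bus width times data rate, ignoring real-world efficiency):

```python
# Peak memory bandwidth in GB/s: bus width in bits / 8 * data rate in GT/s.
def peak_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

print(peak_gb_s(128, 3.2))   # dual-channel DDR4-3200:     ~51 GB/s
print(peak_gb_s(256, 14.0))  # 256-bit GDDR6 @ 14 Gbps:    448 GB/s
print(peak_gb_s(1024, 2.4))  # one HBM2E stack @ 2.4 Gbps: ~307 GB/s
```

An APU feeding a big GPU from dual-channel DDR4 is starved by almost an order of magnitude, which is exactly why HBM keeps coming up in these discussions.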

It might not be.

1. HBM costs more than GDDRxx memory, so right there your cost goes up: the APU will cost more for the same amount of memory. You also have to decide whether the HBM is dedicated to the GPU or shared with the CPU. If shared, you either juggle two memory types (DDR4/5 + HBM) to keep memory expandable, or you only have HBM, and the second option is rather against the PC spirit. Besides, there are already devices doing exactly that: these consoles...

2. GPU power draw can be 200+ W, where a high-end desktop CPU is more like 100 W. You will need a new motherboard able to deliver that much current (increasing the price), and you will have to remove 300+ W of heat from that one APU. Not impossible, for sure, but it requires a serious cooling solution, whereas splitting the power across two packages helps because you have more surface area to dissipate the heat.

3. The motherboard I/O panel would need more video outputs if you don't want to lose anything versus a discrete GPU, and the socket and motherboard would need to carry those signals. Maybe not needed by many, but enthusiasts might not settle for just 1 DP and 1 HDMI.

4. Putting all this in a single package needs a big socket and a complex interposer to accommodate everything, increasing the cost of these APUs.

Dedicated graphics cards aren't just good money makers; they make sense. They move a big chunk of the heat away from the motherboard and CPU area. Also, right now there is no bandwidth issue between the GPU and CPU, and having both separate respects the PC spirit, where you can upgrade one and keep the other and vice versa.

And lastly, they already do this: it's called the PS4/PS5 and Xbox One/Series. Intel even made a NUC with a Vega GPU integrated, and the sales were not spectacular at all.

The APU only makes sense if it's cheaper than a dedicated GPU + CPU. As long as that's not the case, it won't happen for the general public.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
Console peasants won't see it. It doesn't sound nice, but it's true. If you tell them it's truly 4K, then it's truly so.

Inb4 the countless discussions on the subject on TPU, just you wait :)

We don't really know how the variable-rate shading is implemented.

EDIT: Used wrong word in sentence above. Fixed now.

If there's temporal anti-aliasing involved, you might have a "real 4k" picture (1x1 across the whole frame), but it may take the GPU 4 or 5 frames to render it (improving the resolution over multiple frames). Newer and newer techniques will hide the weak points of GPUs better, so it becomes harder and harder to see the flaws.
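Here's a toy sketch of the accumulation idea (not any particular engine's TAA; real implementations also reproject the history buffer with motion vectors and clamp it to avoid ghosting):

```python
import numpy as np

def accumulate(history: np.ndarray, current: np.ndarray,
               alpha: float = 0.2) -> np.ndarray:
    """Exponential moving average: converges after roughly 1/alpha frames."""
    return (1.0 - alpha) * history + alpha * current

# Blend 5 noisy "frames" into the history; detail fills in over time.
history = np.zeros((1080, 1920, 3), dtype=np.float32)
for _ in range(5):
    current = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame
    history = accumulate(history, current)
```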
 
Joined
Feb 3, 2012
Messages
200 (0.04/day)
Location
Tottenham ON
System Name Current
Processor i7 12700k
Motherboard Asus Prime Z690-A
Cooling Noctua NHD15s
Memory 32GB G.Skill
Video Card(s) GTX 1070Ti
Storage WD SN-850 2TB
Display(s) LG Ultragear 27GL850-B
Case Fractal Meshify 2 Compact
Audio Device(s) Onboard
Power Supply Seasonic 1000W Titanium
Would be nice to see more games support cross-platform play now that consoles have a healthy amount of muscle behind them.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
Dedicated graphics cards aren't just good money makers; they make sense. They move a big chunk of the heat away from the motherboard and CPU area. Also, right now there is no bandwidth issue between the GPU and CPU, and having both separate respects the PC spirit, where you can upgrade one and keep the other and vice versa.

It should be noted that there's "no bandwidth issue between GPUs and CPUs" only because GPU programmers spend an extraordinary amount of effort buffering and pipelining communications between the GPU and CPU.

None of that effort would be needed on a hypothetical GPU/CPU system with low-latency communications (or at least, it would greatly simplify programming and optimization). However, as the HSA Foundation (aka AMD) discovered, building the software to actually take advantage of a low-latency GPU/CPU connection is itself a major undertaking. It's the classic chicken-and-egg situation.

It'd be great if CPUs and GPUs had lower-latency links, but all current software assumes high latency, so none of it would benefit, and thus there's no point in making a high-performance APU.
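To make the buffering point concrete, here is a toy of the pattern GPU programmers use (the queue stands in for pinned staging buffers and fences in a real graphics/compute API; the consumer thread plays the GPU):

```python
import queue
import threading

work = queue.Queue(maxsize=2)  # shallow pipeline: one batch in flight, one queued

def gpu_consumer():
    while True:
        batch = work.get()
        if batch is None:          # sentinel: no more work
            break
        _ = sum(batch)             # stand-in for the GPU kernel

t = threading.Thread(target=gpu_consumer)
t.start()
for i in range(8):                 # CPU keeps producing instead of waiting
    work.put([i] * 1024)           # stand-in for an upload to the GPU
work.put(None)
t.join()
```

The whole point of the depth-2 queue is to hide transfer latency; with a genuinely low-latency link you could skip the batching and just call across directly.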

---

Consoles are the obvious exception. Any console's software is specifically written for that console, which means software running on the PS5 or Xbox Series X can make this assumption and benefit. (But maybe not! If a PC port is in the works, then the game engine needs to assume a high-latency barrier once again.)
 
Joined
Feb 21, 2006
Messages
1,985 (0.30/day)
Location
Toronto, Ontario
System Name The Expanse
Processor AMD Ryzen 7 5800X3D
Motherboard Asus Prime X570-Pro BIOS 5013 AM4 AGESA V2 PI 1.2.0.Ca.
Cooling Corsair H150i Pro
Memory 32GB GSkill Trident RGB DDR4-3200 14-14-14-34-1T (B-Die)
Video Card(s) AMD Radeon RX 7900 XTX 24GB (24.3.1)
Storage WD SN850X 2TB / Corsair MP600 1TB / Samsung 860Evo 1TB x2 Raid 0 / Asus NAS AS1004T V2 14TB
Display(s) LG 34GP83A-B 34 Inch 21: 9 UltraGear Curved QHD (3440 x 1440) 1ms Nano IPS 160Hz
Case Fractal Design Meshify S2
Audio Device(s) Creative X-Fi + Logitech Z-5500 + HS80 Wireless
Power Supply Corsair AX850 Titanium
Mouse Corsair Dark Core RGB SE
Keyboard Corsair K100
Software Windows 10 Pro x64 22H2
Benchmark Scores 3800X https://valid.x86.fr/1zr4a5 5800X https://valid.x86.fr/2dey9c
Mainstream PCs being stuck on DDR4 rather than using GDDR6 is dumb, especially given how many consumers now use laptops with fully-soldered BGA everything.

Mainstream PCs are only "stuck" on DDR4 for a year and a bit; we will see DDR5 in 2021.

And the post below explains why CPUs don't use GDDR; there's a reason for it.

 
Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
Mainstream PCs are only "stuck" on DDR4 for a year and a bit; we will see DDR5 in 2021.

And the post below explains why CPUs don't use GDDR; there's a reason for it.

AMD not only invented the thing; it also went ahead and announced a heterogeneous memory architecture with Vega. What will come next is AMD announcing a hybrid GPU memory architecture: GPUs will ship with both a single column of HBM2E and the usual mainstream VRAM options alongside it. The question is whether the GPU architecture that goes with it can access the different memory interfaces effectively. It will possibly launch with pipeline differentiation for the advanced mode, with a variable-shading utility soft mode at the start.
Don't toil over ray tracing too much. It is predominantly a supersampling method; all the rasterization changes are there to incorporate more sampling per pixel. Ray tracing just enables that without much more fuss than the old-school method.
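To illustrate "more sampling per pixel": the classic move is averaging several jittered samples per pixel instead of taking one. `shade()` below is a hypothetical placeholder for whatever returns a radiance value, e.g. a traced ray:

```python
import random

def shade(x: float, y: float) -> float:
    return (x + y) % 1.0           # placeholder "scene", not a real tracer

def pixel_color(px: int, py: int, samples: int = 16) -> float:
    total = 0.0
    for _ in range(samples):       # jittered positions inside the pixel
        total += shade(px + random.random(), py + random.random())
    return total / samples         # average = supersampled result
```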
 
Joined
Jun 16, 2016
Messages
409 (0.14/day)
System Name Baxter
Processor Intel i7-5775C @ 4.2 GHz 1.35 V
Motherboard ASRock Z97-E ITX/AC
Cooling Scythe Big Shuriken 3 with Noctua NF-A12 fan
Memory 16 GB 2400 MHz CL11 HyperX Savage DDR3
Video Card(s) EVGA RTX 2070 Super Black @ 1950 MHz
Storage 1 TB Sabrent Rocket 2242 NVMe SSD (boot), 500 GB Samsung 850 EVO, and 4TB Toshiba X300 7200 RPM HDD
Display(s) Vizio P65-F1 4KTV (4k60 with HDR or 1080p120)
Case Raijintek Ophion
Audio Device(s) HDMI PCM 5.1, Vizio 5.1 surround sound
Power Supply Corsair SF600 Platinum 600 W SFX PSU
Mouse Logitech MX Master 2S
Keyboard Logitech G613 and Microsoft Media Keyboard
There is a rumour that the Xbox Series X will cost $599...

I don't see that as a ridiculous price. Adjusted for inflation, it's just ahead of the Xbox One and way below the PS3 at launch. Even though the constant-dollar price is not too bad, I think there's going to be quite a sticker shock just from the large number. Also, I fully expect the EU and Canada to get an even higher price while they try their hardest to win back the US market.

[Chart: US console launch prices, adjusted for inflation]


Source: Ars Technica https://arstechnica.com/gaming/2020/02/is-the-us-market-ready-to-embrace-a-500-game-console/

As for the technical details, I can see there being some issues with the two CCXs on the CPU, but apparently this has been the case for Zen 2 the whole time: the chiplets are 8-core, but each chiplet has two CCXs. I suppose this means we're still going to see better latencies compared to Zen and Zen+, but there's likely no special sauce in this CPU, so we'll probably see CPU limitations compared to PCs just like this gen, only not with the same level of disparity.

All rumors point to AMD having weaker raytracing hardware onboard, which is a bit worrying, but it may benefit Turing card holders, as we're probably going to see cross-platform games ship with a competent raytracing level that runs well on a 2070 and up. I'm a little miffed that my 2070 Super is going to be relegated to "console-level performance" now, but if DLSS takes off, at least I won't have a hard time playing at 4k for the next few years. I'm sure Ampere will be amazing, but this time, for the first time since I started my PC journey with Maxwell, I may sit this one out.

I find it really interesting that they're pushing the lower-core-count mode. Relying on 7 cores at 3.8 GHz for most games would definitely be a boon to any Intel i7 holders. I wonder if we'll see any big multiplatform games decide to use SMT.
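For illustration, "leaving a core for the OS" is trivial to emulate on PC by pinning a process to 7 of 8 cores (Linux-specific call shown; Windows would use SetProcessAffinityMask instead):

```python
import os

# Run this process on cores 0-6 only, console-style; guarded because the
# call only exists on Linux.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, set(range(7)))
```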
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Joined
Mar 21, 2016
Messages
2,198 (0.74/day)
The key is having artists indicate which meshes should be shaded at 2x2 (effectively 1080p), which sections should be shaded at 1x1 (effectively 4k), and which in between (2x1 and 1x2, depending on whether an area has horizontal or vertical patterns to preserve).
That or train AI inference to do that instead.
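A hypothetical sketch of that per-tile decision (the real rate would be set through the graphics API, e.g. a shading-rate image in D3D12 VRS; the threshold here is made up):

```python
import numpy as np

def shading_rate(tile: np.ndarray, t: float = 0.01) -> str:
    gx = np.abs(np.diff(tile, axis=1)).mean()  # variation along x
    gy = np.abs(np.diff(tile, axis=0)).mean()  # variation along y
    if gx < t and gy < t:
        return "2x2"   # flat area: quarter rate (effectively 1080p)
    if gy >= t and gx < t:
        return "2x1"   # vertical patterns: safe to halve horizontal rate
    if gx >= t and gy < t:
        return "1x2"   # horizontal patterns: safe to halve vertical rate
    return "1x1"       # detail on both axes: full rate (effectively 4k)

tile = np.tile(np.linspace(0, 1, 16), (16, 1))  # gradient along x
print(shading_rate(tile))                       # -> "1x2"
```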
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
That or train AI inference to do that instead.

Why not both? Lol. A lot of these techniques can be combined.

The original DLSS clearly had a "checkerboard" pattern going on and a bit of shimmering. Maybe variable-rate shading would be a better "baseline image" for DLSS to be applied on.
 
Joined
Nov 3, 2011
Messages
690 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H115i Elite Capellix XT
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB, Toshiba N300 10TB HDD, Seagate Ironwolf 4T HDD
Display(s) Acer Predator X32FP 32in 160Hz 4K IPS FreeSync/GSync DP, LG 27UL600 27in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
The XSX GPU has 5 MB of L2 cache while the RX 5700 XT has 4 MB, a 25% increase. No information on ROP count.

The XSX GPU has up to 560 GB/s of memory bandwidth while the RX 5700 XT has 448 GB/s, up to a 25% increase.

All rumors point to AMD having weaker raytracing hardware onboard, which is a bit worrying, but it may benefit Turing card holders, as we're probably going to see cross-platform games ship with a competent raytracing level that runs well on a 2070 and up. I'm a little miffed that my 2070 Super is going to be relegated to "console-level performance" now, but if DLSS takes off, at least I won't have a hard time playing at 4k for the next few years. I'm sure Ampere will be amazing, but this time, for the first time since I started my PC journey with Maxwell, I may sit this one out.

I find it really interesting that they're pushing the lower-core-count mode. Relying on 7 cores at 3.8 GHz for most games would definitely be a boon to any Intel i7 holders. I wonder if we'll see any big multiplatform games decide to use SMT.
For Minecraft's path tracing, the XSX manages 1080p at 30 to 60 fps.

An RTX 2080 Ti gets about 71 fps, with 54 fps minimums, at 1080p.
 
Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
The original DLSS clearly had a "checkerboard" pattern going on and a bit of shimmering.
A checkerboard pattern coupled with rotated-grid 'supersampling' is a method to increase resolution per available shading sample. Since applying better grades of resolution means lowering the anisotropic filtering level (you just lose LOD if you scale the image larger, even if it is better distributed), I have sought out which filters make good use of the available Z rate. Everything prior to object primitive calls is very blurry, and it wouldn't surprise me if some shimmering goes along with it.
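For reference, the basic checkerboard idea looks like this (generic checkerboard rendering, not NVIDIA's actual DLSS pipeline):

```python
import numpy as np

# Shade half the pixels each frame on alternating diagonals; the other
# half would be reused/interpolated from history.
h, w = 4, 8
yy, xx = np.mgrid[0:h, 0:w]
image = np.zeros((h, w))
for frame in range(2):
    mask = ((xx + yy) % 2) == (frame % 2)  # alternating diagonal halves
    image[mask] = frame + 1                # stand-in for the shading result
print(image)  # after two frames, every pixel has been shaded once
```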
 
Joined
Feb 3, 2012
Messages
200 (0.04/day)
Location
Tottenham ON
System Name Current
Processor i7 12700k
Motherboard Asus Prime Z690-A
Cooling Noctua NHD15s
Memory 32GB G.Skill
Video Card(s) GTX 1070Ti
Storage WD SN-850 2TB
Display(s) LG Ultragear 27GL850-B
Case Fractal Meshify 2 Compact
Audio Device(s) Onboard
Power Supply Seasonic 1000W Titanium
Seeing the Dreamcast on that console price chart makes me want to see Toy Commander get a remaster for current consoles.
 