This is pretty much just Strix Point with the TDP raised to desktop levels.
I can't see them realistically moving away from the current chiplet designs, given the ease of scaling and the benefit of being able to grade dies between consumer and enterprise.
Currently AMD only needs to bulk order 3 things to cover the majority of their product stack: all of enterprise and HEDT sits under one IO die, all of consumer desktop sits under another IO die, and both share a common CCD design. You can add Zen Xc CCDs to cater to specific enterprise designs, but those customers are willing to pay top money for those parts, so they are profit makers in comparison. Then there is mobile/specialist, the majority of which falls under Strix Point this generation.
Adding another CCD design removes all the benefits of scale, and the new CCD would be considerably larger, leading to both a higher cost per die and a higher defect rate (increasing costs further).
For context, a Zen 5 CCD is ~71 mm², Strix Point is ~178 mm², and Strix Halo is massive in comparison at ~307 mm².
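To see why die area hits yield so hard, here's a quick sketch using the simple Poisson yield model (fraction of defect-free dies = e^(-area × defect density)). The defect density of 0.1 defects/cm² is an illustrative assumption, not a real TSMC figure, but it shows how the gap widens with area:

```python
import math

def poisson_yield(area_mm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a Poisson model."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

D0 = 0.1  # assumed defect density in defects/cm^2 -- illustrative only

for name, area in [("Zen 5 CCD", 71), ("Strix Point", 178), ("Strix Halo", 307)]:
    print(f"{name}: {area} mm^2 -> ~{poisson_yield(area, D0):.1%} defect-free")
```

Under that assumption the small CCD lands around the low 90s percent while the Strix Halo-sized die drops into the 70s, and that's before counting the extra wafer area each die burns.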
I thought about reworking the current CCD design to cut out some non-essential parts, but unless you are willing to remove ALL GPU context from the IO die and go back to the days where your CPU was purely a CPU and couldn't give you a screen out for diagnostics or extra display output, there isn't much you can realistically remove. All the "non-essential" aspects are either required by the standards expected to be supported (the audio DSP, etc.) or are expected by users in normal use cases (USB-C monitors/dongles requiring the full USB functionality on top of the display IO).
IF you did sacrifice the GPU aspect, there is a fair amount of IO die space freed up that could be used for more memory controllers or, more realistically, additional PCIe lanes. At that point I would argue AMD could push ALL of the misc IO into the chipset dies and either grant each chipset die a dedicated x4 link (versus the daisy chain they do currently), or even widen the connections to x8 and make them capable of supporting high-speed networking (10GbE) or hosting multiple NVMe drives for bulk storage at one PCIe generation lower speed.
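A rough bandwidth sanity check on that widened-but-slower uplink idea. The per-lane figures below are the approximate usable rates for PCIe gens 3-5 after encoding overhead; treat the whole thing as back-of-the-envelope, not a platform spec:

```python
# Approximate usable throughput per lane in GB/s (after encoding overhead).
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Total one-direction bandwidth of a PCIe link."""
    return GBPS_PER_LANE[gen] * lanes

# A dedicated gen4 x4 per chipset die vs a widened gen3 x8:
print(f"Gen4 x4: {link_bw(4, 4):.1f} GB/s")
print(f"Gen3 x8: {link_bw(3, 8):.1f} GB/s")

# 10GbE needs ~1.25 GB/s sustained, so even the gen-lower x8 link
# leaves plenty of room for a few NVMe drives behind it.
print(f"Headroom after 10GbE on gen3 x8: {link_bw(3, 8) - 1.25:.1f} GB/s")
```

The point being that a gen3 x8 uplink carries roughly the same total bandwidth as a gen4 x4, so "wider but a grade lower" costs nothing in aggregate throughput while the extra lane count makes fan-out to NICs and bulk storage easier.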