Monday, July 5th 2021

AMD 4700S Desktop Kit Features PlayStation 5 SoC Without iGPU

Previously, we had assumed that the AMD 4700S desktop kit was based on the Xbox Series X APU. Today, Bodnara, who managed to access one of these units, published some interesting findings. The chip powering the system is actually the PlayStation 5 SoC, which features an AMD Zen 2 based CPU with 8 cores and 16 threads that can boost up to 3.2 GHz. The board that was tested features SK Hynix GDDR6 memory running at 14 Gbps, placed on the backside of the board. The APU is attached to the AMD A77E Fusion Controller Hub (FCH), which was the one powering the Xbox One "Durango" SoC, leading us to previously believe that the AMD 4700S was derived from an Xbox Series X system.

The integrated graphics of this APU are disabled; however, it is the same RDNA 2 GPU variant used by the PlayStation 5. Right out of the box, the system is equipped with a discrete GPU in the form of the Radeon 550, and this configuration was tested by the Bodnara team. You can find images of the system and some performance results below.
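For context, the peak bandwidth such GDDR6 provides can be estimated with a quick calculation; a minimal sketch, assuming the PS5's 256-bit memory bus (the review does not confirm the bus width on this board):

```python
# Theoretical GDDR6 bandwidth: per-pin data rate (Gbps) x bus width (bits) / 8
def gddr6_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# 14 Gbps GDDR6 on a 256-bit bus (the PS5's configuration)
print(gddr6_bandwidth_gbs(14, 256))  # 448.0 GB/s
```

That 448 GB/s figure is far above what dual-channel DDR4 offers, which is what makes the disabled iGPU such a loss.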
Performance:

Sources: Bodnara, via VideoCardz

33 Comments on AMD 4700S Desktop Kit Features PlayStation 5 SoC Without iGPU

#1
Mussels
Moderprator
Oh goody, the final specs on the useless box!

I'm curious how that ram performs vs DDR4/5 however
#2
delshay
I would think the APU is more likely lasered off because it is broken.
#3
Mussels
Moderprator
delshay: I would think the APU is more likely lasered off because it is broken.
Either that, or forced to by contract

Like maybe there was a huge order for dev kits that got cancelled and they thought to sell the whole lot off, but had to nerf the GPU first?
The second slot only runs at x4, which would have been for an NVMe slot in the console
#4
Post Nut Clairvoyance
Mussels: Oh goody, the final specs on the useless box!

I'm curious how that ram performs vs DDR4/5 however
That's the first attached image. The sourced review didn't say what software was used, and I can't read the Korean in the image, nor translate the image (I probably could but don't really want to). The latency and speed seem about right for DDR4; I'm guessing from the top it's latency, write, read, and copy (the last row I have no idea about). The original review is pretty poor too and reads like an advertisement for the kit. Zero actual CPU benchmarks, some game FPS in games with low requirements for a modern CPU (League, Overwatch) and the probably GPU-bound "battleground" (translated), which might be PUBG, and it didn't go into details at all.
#5
lZKoce
Ahhh that LP card is so cute :)
#6
beautyless
The actual bandwidth when used with the iGPU would be much higher.
#7
Mussels
Moderprator
I'm definitely not super salty we can't see what the APU would really be like with Windows benchmarks for a true comparison to the consoles, nope not at alllllll
#8
Chrispy_
Given this is "salvaged parts being sold for cheap", is that a full-fat RX 550 or some bastardised 2GB DDR4 variant?

Even the full-fat RX550 was a lame duck with performance close to the GT1030 but twice the power draw and lacking an encoder to match NVENC. At this performance level, power consumption and encoder performance are kind of a big deal, because the actual 3D compute/gaming performance is bad enough to not be worth considering over an IGP (unless, of course, you don't even HAVE an IGP, like in this case).
#9
AnarchoPrimitiv
How sick would it be if the CUs actually worked... If they did, this thing would be used for epic SFF builds
#10
Chrispy_
Mussels: I'm definitely not super salty we cant see what the APU would really be like with windows benchmarks for a true comparison to the consoles, nope not at alllllll
The APU with GDDR6 instead of DDR4 would have been really impressive, for sure - 36CUs to the 6700XT's 40CUs but with a more restrictive TDP and having to share the GDDR6 with the CPU cores it would probably still perform exceptionally well. I would have to guess that the power/clock/cache deficit would put it in the ballpark of the rumoured RX 6600 with 28CU. Not bad for something that could rival a NUC.
#11
Mussels
Moderprator
Chrispy_: The APU with GDDR6 instead of DDR4 would have been really impressive, for sure - 36CUs to the 6700XT's 40CUs but with a more restrictive TDP and having to share the GDDR6 with the CPU cores it would probably still perform exceptionally well. I would have to guess that the power/clock/cache deficit would put it in the ballpark of the rumoured RX 6600 with 28CU. Not bad for something that could rival a NUC.
Just... that in a gaming laptop. Just do it already, AMD. Make (soldered) GDDR6 16GB and 32GB versions and have a pro gamer laptop with the same specs as the consoles.

I know how powerful my 5800X can be in a 95W TDP, so I can only dream of what would be possible with laptop-optimised parts
#12
ppn
The original Zen 2 CCD is ~74 mm²; this 4700S/PS5 Zen 2 is ~38.5 mm² and is heavily downgraded to a 128-bit memory bus though. If anything has an impact on games, that should be it.

What they should do is integrate a 45W Zen 3 shrunk to ~30 mm² on 5 nm, an on-die RX 6900 (~RTX 3090 class) shrunk to ~200 mm², and a fast NVMe drive on the back side,
and call it a day, or a PS6 Pro. Can't wait for the PS6 Pro.
#13
Robin Seina
This could be really interesting to own, if someone found a way to enable the iGPU...
#14
InVasMani
I'd be curious to see how much the PCIe 2.0 x4 link bottlenecks higher-end discrete GPUs, and whether the higher GDDR6 bandwidth would balance that out before the upside of the GDDR6 is outweighed by the higher bus bandwidth of PCIe 3.0 at x4/x8/x16 against a similar Zen 2 CPU at the same clock frequency. The results would be intriguing and give a good idea of how integrating a single chip of GDDR6 or HBM on motherboards, used as a cache buffer perhaps, might bolster performance. On that note, integrating a single chip of GDDR6 is enough to meet the Windows 10 memory requirements.
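For a rough sense of the link speeds being compared, the theoretical PCIe bandwidths can be sketched as follows (a back-of-the-envelope estimate only; real-world throughput is lower due to protocol overhead):

```python
# Peak PCIe bandwidth per lane, accounting for line-encoding overhead.
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 0.5 GB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Return peak link bandwidth in GB/s for a PCIe 2.0 or 3.0 link."""
    per_lane = {2: 5.0 * 8 / 10 / 8, 3: 8.0 * 128 / 130 / 8}[gen]
    return per_lane * lanes

print(round(pcie_bandwidth_gbs(2, 4), 2))   # PCIe 2.0 x4  -> 2.0 GB/s
print(round(pcie_bandwidth_gbs(3, 4), 2))   # PCIe 3.0 x4  -> 3.94 GB/s
print(round(pcie_bandwidth_gbs(3, 16), 2))  # PCIe 3.0 x16 -> 15.75 GB/s
```

So the 4700S kit's PCIe 2.0 x4 slot offers roughly an eighth of a PCIe 3.0 x16 link, which is why a higher-end discrete GPU would choke here.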
#15
bug
...SoC Without iGPU
This just... sounds wrong.
#16
Nated4wgy
Mussels: Oh goody, the final specs on the useless box!

I'm curious how that ram performs vs DDR4/5 however
It's right there, in the first picture under performance.
#17
Freebird
delshay: I would think the APU is more likely lasered off because it is broken.
I would think it is because drivers won't support it (even if the majority of CUs work) and they don't want people mucking around with it regardless and maybe figuring out how to hack the PS5 GPU; Sony may want to keep it to itself for now.
#18
mechtech
Meh

Missing the most important component..............................the igpu.............................
#19
IceShroom
Chrispy_: Given this is "salvaged parts being sold for cheap", is that a full-fat RX 550 or some bastardised 2GB DDR4 variant?

Even the full-fat RX550 was a lame duck with performance close to the GT1030 but twice the power draw and lacking an encoder to match NVENC. At this performance level, power consumption and encoder performance are kind of a big deal because the actual 3D compute/gaming performance is bad enough to not be worth considering over an IGP (unless, of course, you don't even HAVE an IGP like in this case).
The RX 550 is faster than the GT 1030 GDDR5 and definitely faster than the DDR4 version. And the GT 1030 also lacks NVENC. So why spread lies?
#20
Chrispy_
IceShroom: RX 550 is faster than GT 1030 GDDR5 and defenitly faster then the DD4 version. And GT 1030 also lacks NVENC. So why spread lie.
The RX550 is approximately 6% faster than the GT1030. I said 'performance close to', not 'slower than', so I don't consider that spreading lies.
The GT1030 lacks the Turing NVENC, but I thought it had the older-generation Pascal NVENC? I wouldn't know, because the 1030 is a turd that I've never purchased; happy to stand corrected if I'm wrong.

Edit:
Just looked it up: the 1050 is the first Pascal card with NVENC. I think I was mis-remembering the controversy surrounding the 1650 getting short-changed with the previous-gen NVENC.
#21
persondb
ppn: The original Zen 2 CCD is ~74 mm²; this 4700S/PS5 Zen 2 is ~38.5 mm² and is heavily downgraded to a 128-bit memory bus though. If anything has an impact on games, that should be it.
That's likely mostly due to the cache; the big ass L3 occupies most of the Zen 2 die space and they cut it to a fourth (2×4 MB vs 2×16 MB) for the consoles. There are likely some other space savings as well, like shorter pipelines, since those CPUs don't need to go above 3.6/3.8 GHz while the desktop parts are made with higher turbo in consideration.
ppn: What they should do is integrate a 45W Zen 3 shrunk to ~30 mm² on 5 nm, an on-die RX 6900 (~RTX 3090 class) shrunk to ~200 mm², and a fast NVMe drive on the back side, and call it a day, or a PS6 Pro.
SRAM caches don't scale that well with new nodes. So no, there's never really going to be a 30 mm² Zen 3 on 5 nm.
#22
Chrispy_
bug: This just... sounds wrong.
iSoC, where the i stands for incomplete
It's less of a mouthful than SoCmGPUfclr (SoC missing GPU for complex licensing reasons)
#23
Wirko
Mussels: Either that, or forced to by contract
There's another reason: a lower TDP and a cheaper VRM. Still, in all probability, AMD could have disabled only the parts that contain any of MS's ""intellectual" "property""*, so at least a primitive iGPU would have remained alive for emergency use. They'd have to include a video output connector, though.

* Intellectual is put in quotes, property is put in quotes, then the whole phrase is put in quotes once again.
#24
Jism
InVasMani: I'd be curious to see how much the PCIe 2.0 x4 link bottlenecks higher-end discrete GPUs, and whether the higher GDDR6 bandwidth would balance that out before the upside of the GDDR6 is outweighed by the higher bus bandwidth of PCIe 3.0 at x4/x8/x16 against a similar Zen 2 CPU at the same clock frequency. The results would be intriguing and give a good idea of how integrating a single chip of GDDR6 or HBM on motherboards, used as a cache buffer perhaps, might bolster performance.
www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/3.html

There's however a high latency penalty by using GDDR6 vs DDR4.

Furthermore: it would be cool if the thing could be reverse engineered, and perhaps an HDMI output soldered on, plus working VGA.
#25
InVasMani
I think more interesting than this would be if you could put GDDR6 or HBM chips on a DDR4 DIMM. Even better still if you could mix them for tiered memory storage.
Copyright © 2004-2021 www.techpowerup.com. All rights reserved.
All trademarks used are properties of their respective owners.