
Sony Reveals PS5 Hardware: RDNA2 Raytracing, 16 GB GDDR6, 6 GB/s SSD, 2304 GPU Cores

Xbox Series X is the clear winner... The PS5 tops out at 10.3 TFLOPs of compute throughput, compared to Microsoft's 12 TFLOPs. Also, Xbox software tends to use the hardware better than Sony's. We saw it with the old 360, where Sony had the better hardware yet the 360 ran games smoother. Now the new Xbox has the better hardware, and combining it with better software = super smooth gameplay.

You can't know this. Xbox has 10 GB VRAM + 6 GB system RAM, while normal PCs today go with 16 GB system RAM and 8 GB VRAM.
It will be particularly interesting to see these consoles in 3-4 years, when games become more demanding on hardware resources.

12 vs 10.3 is not much of a difference, especially when you have a locked FPS at 60 or so.
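For context, the 12 and 10.3 figures fall straight out of shader count times clock; a quick back-of-the-envelope check (assuming the usual 64 shaders per CU and 2 FLOPs per shader per clock for fused multiply-add):

```python
# FP32 throughput = CUs x 64 shaders x 2 FLOPs per clock (FMA) x clock (GHz)
def tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

print(f"PS5 (36 CUs @ 2.23 GHz):       {tflops(36, 2.23):.2f} TFLOPs")   # ~10.28
print(f"Series X (52 CUs @ 1.825 GHz): {tflops(52, 1.825):.2f} TFLOPs")  # ~12.15
```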
 
The GPU is a whole different story from the one on the Xbox Series X "Velocity Engine" semi-custom chip. Sony decided to go with 36 RDNA2 compute units ticking at up to 2.23 GHz engine clock.

Given that the Series X GPU also runs at a pretty high frequency, this leads me to believe RDNA2 is seriously more power efficient than anything else right now.
 
Given that the Series X GPU also runs at a pretty high frequency, this leads me to believe RDNA2 is seriously more power efficient than anything else right now.

Not quite. We knew it's 50% more power efficient; what's striking is that the process supports an AMD architecture going that high in frequency. AMD has traditionally lagged behind Nvidia in the frequency department.
 
Doesn't matter if the Xbox Series X is more powerful; Sony always has better exclusives and more of them, plus there are a lot of Sony exclusives on my backlog.
 
Linux itself is much better with multiple cores than Windows; if MS hasn't heavily tweaked the Xbox Series X scheduler, then they'll be wasting a lot of Zen 2 power.

Considering what they've been able to get out of the One X and its garbage Jaguar cores, I'm guessing it's much better than Windows.
You can't know this. Xbox has 10 GB VRAM + 6 GB system RAM, while normal PCs today go with 16 GB system RAM and 8 GB VRAM.
It will be particularly interesting to see these consoles in 3-4 years, when games become more demanding on hardware resources.

12 vs 10.3 is not much of a difference, especially when you have a locked FPS at 60 or so.


The major difference is that 12 is the minimum with the Xbox while 10.3 is the maximum with the PS5, due to Sony in their infinite wisdom going with a variable boost clock (which for a console doesn't make sense)..... likely to make the GPU look closer in spec. Also, the Xbox's 10 GB of VRAM seems to run significantly faster than Sony's setup, by around 100 GB/s.
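That bandwidth gap checks out against the published memory specs, since bandwidth is just bus width times per-pin data rate. A quick sanity check, using the widely reported 320-bit bus for the Series X's fast 10 GB pool and 256-bit for the PS5, both on 14 Gbps GDDR6:

```python
# GDDR6 bandwidth (GB/s) = bus width in bits / 8 x per-pin data rate (Gbps)
def bandwidth(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

xsx, ps5 = bandwidth(320, 14), bandwidth(256, 14)
print(f"Series X fast pool: {xsx:.0f} GB/s")  # 560
print(f"PS5:                {ps5:.0f} GB/s")  # 448
print(f"Difference:         {xsx - ps5:.0f} GB/s")  # 112, i.e. 'around 100 GB/s'
```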

Out of these two systems I will be buying the PS5, but I would feel a lot better about it if the specs were reversed. I personally don't care about the cost.
 
What I want to know is: what's the IPC gain on Zen 2 with this RAM? Zen is fairly memory bound...
 
Well, that should settle the debate about the upcoming "big Navi", 'cause it's gonna be a beast: better IPC, higher clocks, more CUs, hardware RT & who knows, 3D audio?
Since it's also a CPU die, it probably gets the benefits of SOI production vs standard bulk silicon production. I wonder if they are moving their GPUs to the same production method.
 
A 12-channel SSD controller in a console!! :wtf:
 
RDNA2 is much more efficient, but it also clocks faster than any GPU to date. Great signs for big Navi indeed.
 
A 12-channel SSD controller in a console!! :wtf:

Likely it will use the SSD space as some type of lower level memory, loading textures directly from the SSD during gameplay?

NVMe standard pales in comparison, as always the PC users get less :(
 
Cerny gave a simple explanation of why they went with fewer CUs at higher frequency as opposed to more CUs at a lower frequency. Quoting Eurogamer, quoting Cerny:
[snip]
Price might've been a factor as well, dunno.
That explanation sure has the sound of someone really wanting their inferior solution to look better than it is. Sure, caches and other things will run faster, but the amounts, bandwidth, etc. will also be lower. Adding more CUs also adds more cache etc., and pretty much every "advantage" he extols here is countered by building a wider GPU, at least one that's also backed by solid caches and VRAM.

Also, compared to the XSX's fixed clocks, variable clocks (even if they're using AMD's clever SmartShift to allocate power between the CPU and GPU) are a poor fit for a console. And it's not like it will matter much when the maximum boost numbers for both parts are lower than the competition; all that tells us is that even in a best-case scenario for PS5, it's still slower across the board in both CPU and GPU power.

Even if the presentation was filmed a few days ago I am still tempted to think the PS5 engineering team spent the past 48 hours furiously overclocking their APU with the marketing team breathing down their necks :P

That SSD looks like a beast though; very interested in how good a job it does in compensating for the VRAM bandwidth deficiency. Also very interested in how they will certify M.2 drives for add-on storage, considering no M.2 drive on the market is even close to those numbers in sustained performance.

Likely it will use the SSD space as some type of lower level memory, loading textures directly from the SSD during gameplay?

NVMe standard pales in comparison, as always the PC users get less :(
I agree that this drive is bonkers, but "as always PC users get less"? What on earth are you on about? The last time consoles were faster/better than PCs was in the mid-to-late 90s ...
 
Even if the presentation was filmed a few days ago I am still tempted to think the PS5 engineering team spent the past 48 hours furiously overclocking their APU with the marketing team breathing down their necks :p

This is funny :laugh:

I meant "The secret sauce here is that Sony is using its own protocol instead of NVMe, in supporting 6 data priority tiers versus 2 on NVMe."
 
I agree that this drive is bonkers, but "as always PC users get less"? What on earth are you on about? The last time consoles were faster/better than PCs was in the mid-to-late 90s ...


Technically the PS3 and Xbox 360 had features that didn't exist on PC yet.... the PS3 with the CPU and the Xbox with the GPU.
 
Technically the PS3 and Xbox 360 had features that didn't exist on PC yet.... the PS3 with the CPU and the Xbox with the GPU.

On PC, a SATA SSD is faster in gaming than a PCIe NVMe SSD. Anyone care to explain why that is, and why the console has virtually no loading times right now?

PCs also don't get other things, but those are software related, like upscaling for instance.

Another question:
Variable Rate Shading - how much performance will it give on its own?
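For what it's worth, VRS gains depend entirely on how much of the frame can be shaded coarsely: the GPU runs one pixel-shader invocation per 2x2 or 4x4 block in low-detail regions instead of per pixel. A rough model of the saving (the coverage fractions below are invented purely to show the arithmetic):

```python
# Rough model of pixel-shader work saved by Variable Rate Shading.
# coverage = fraction of the frame shaded at each rate (made-up numbers);
# cost     = shader invocations relative to full 1x1 shading.
coverage = {"1x1": 0.60, "2x2": 0.30, "4x4": 0.10}
cost     = {"1x1": 1.0,  "2x2": 1 / 4, "4x4": 1 / 16}

work = sum(coverage[r] * cost[r] for r in coverage)
print(f"Relative shading work: {work:.0%}")  # ~68%, i.e. roughly a third fewer invocations
```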
 
Looks like some pretty beefy specs for a console. But given how the world situation is right now, with the virus and its economic impact likely lasting some time into the future, I don't think I will be a PS5 owner within the next year or so.
 
On PC, a SATA SSD is faster in gaming than a PCIe NVMe SSD. Anyone care to explain why that is, and why the console has virtually no loading times right now?

PCs also don't get other things, but those are software related, like upscaling for instance.

Another question:
Variable Rate Shading - how much performance will it give on its own?


I think you quoted the wrong person...... but my guess is the software isn't coded to take advantage of the extra speed, sorta like games currently being fine at around 12 to 16 threads and not benefiting from 24 or 36 to the same extent, at least when it comes to 1% lows.
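That guess matches the usual explanation: on PC, game loading is often bottlenecked by CPU-side decompression and asset processing rather than raw read speed, so a faster drive doesn't shorten the critical path. A toy model (the ~0.4 GB/s decompression throughput and the 8 GB payload are assumed figures, just to show the shape of the problem):

```python
# Toy load-time model: the pipeline runs at the speed of its slowest stage.
def load_seconds(data_gb, read_gbps, decompress_gbps):
    return data_gb / min(read_gbps, decompress_gbps)

DATA_GB = 8        # arbitrary asset payload
DECOMPRESS = 0.4   # assumed CPU decompression throughput (GB/s)

for drive, read_speed in [("SATA SSD", 0.55), ("PCIe NVMe SSD", 3.5)]:
    print(f"{drive}: {load_seconds(DATA_GB, read_speed, DECOMPRESS):.0f} s")
# Both print ~20 s: decompression is the bottleneck, so the NVMe drive's extra
# read speed goes unused. The consoles attack exactly this by moving
# decompression into dedicated hardware.
```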

I personally still think Sony put way too much money into the storage system and should have focused on sustained clocks for the CPU/GPU or a wider GPU.
 
I agree that this drive is bonkers, but "as always PC users get less"? What on earth are you on about? The last time consoles were faster/better than PCs was in the mid-to-late 90s ...

Well, look at this gameplay comparison between a PS4 and a Ryzen 5 1600 / GTX 1060 / 16 GB RAM PC.
There is no difference in quality, even though we can assume that the PC is (much) faster, no?


I think you quoted the wrong person...... but my guess is the software isn't coded to take advantage of the extra speed, sorta like games currently being fine at around 12 to 16 threads and not benefiting from 24 or 36 to the same extent, at least when it comes to 1% lows.

I personally still think Sony put way too much money into the storage system and should have focused on sustained clocks for the CPU/GPU or a wider GPU.

Well, I only wanted to agree with what you've just said and continue the discussion by adding to it.
You are not the wrong person ;)
 
Even if the presentation was filmed a few days ago I am still tempted to think the PS5 engineering team spent the past 48 hours furiously overclocking their APU with the marketing team breathing down their necks :p
But they filmed it before that.

Concerning the variable clocks, I don't mind them because a) they're not based on thermals, so everyone gets the same performance regardless, and b) they keep cooling noise acceptable.
One example he gave for lowering the CPU clock is when it's executing a lot of 256-bit instructions, because maintaining full clocks there would require a bigger power supply and fan.
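That tracks with how desktop CPUs handle AVX clock offsets. A toy model of why a modest clock drop buys a lot of power headroom: dynamic power scales roughly with f x V^2, and voltage tracks frequency near the top of the curve, so power goes roughly as f^3 (the 30% power spike and the 3.5 GHz figure below are invented for illustration):

```python
# Toy power model: P ~ f^3 near the top of the voltage/frequency curve.
# If 256-bit-heavy code raises power draw ~30% at a fixed clock (assumed
# figure), the clock drop needed to stay inside the same power budget is:
clock_ratio = (1 / 1.30) ** (1 / 3)
print(f"Clock falls to {clock_ratio:.1%} of max")      # ~91.6%
print(f"e.g. 3.5 GHz -> {3.5 * clock_ratio:.2f} GHz")  # ~3.21 GHz
# A ~8% clock dip absorbs a ~30% power spike; the cubic relationship is why
# small frequency drops can cover large power swings.
```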
 
That explanation sure has the sound of someone really wanting their inferior solution to look better than it is. Sure, caches and other things will run faster, but the amounts, bandwidth, etc. will also be lower. Adding more CUs also adds more cache etc., and pretty much every "advantage" he extols here is countered by building a wider GPU, at least one that's also backed by solid caches and VRAM.

He's not wrong that adding more CUs adds more cache, but it's not that simple. In a GPU, each CU has some cache dedicated to it; add more CUs and each one still has access to the same amount of cache, so you'll still run into the same limitations in memory-bound situations.

A GPU with fewer CUs and higher clocks will perform better in memory-bound situations. And it's not uncommon for parts of a shader to contain scalar code, which will also run better.

Not that it matters much in this case, because the PS5's GPU isn't equivalent, TFLOP-wise, to the one in the Series X.
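A simple roofline model makes the memory-bound point concrete: attainable throughput is the lesser of peak compute and bandwidth times arithmetic intensity, so once a shader is bandwidth-limited, extra CUs sit idle while higher clocks still speed up the fixed-function front end and caches. A sketch (the 5 FLOPs/byte arithmetic intensity is an arbitrary example):

```python
# Roofline: attainable TFLOPs = min(peak compute, bandwidth x arithmetic intensity)
def attainable_tflops(peak_tflops, bw_gb_s, flops_per_byte):
    return min(peak_tflops, bw_gb_s * flops_per_byte / 1000)

for name, peak, bw in [("PS5", 10.3, 448), ("Series X", 12.15, 560)]:
    print(f"{name}: {attainable_tflops(peak, bw, 5):.2f} TFLOPs attainable")
# PS5 2.24 vs Series X 2.80: in this bandwidth-bound case neither GPU gets
# near its peak, and the gap tracks bandwidth rather than the TFLOPs spec.
```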

Well, look at this gameplay comparison between a PS4 and a Ryzen 5 1600 / GTX 1060 / 16 GB RAM PC.

I can pull up hundreds of examples where games look noticeably better on such a PC; just browse Digital Foundry's YT channel and you'll find plenty. And we both know RDR2 was an unusually poor PC port.
 
But they filmed it before that.

Concerning the variable clocks, I don't mind them because a) they're not based on thermals, so everyone gets the same performance regardless, and b) they keep cooling noise acceptable.
One example he gave for lowering the CPU clock is when it's executing a lot of 256-bit instructions, because maintaining full clocks there would require a bigger power supply and fan.


I'm guessing they've known for quite a while that the Xbox was more powerful... I'm pretty sure this has more to do with them trying to keep a normal form factor. I still think that in a console it doesn't make a whole lot of sense, as it forces developers to target a lower level of performance to make sure all PS5 games perform the same in all environments.

Apparently Microsoft tested their system in the desert to make sure it would sustain its clocks.

At the same time, these people are much smarter than me, so I'm sure their reasons are valid.
 
I'm guessing they've known for quite a while that the Xbox was more powerful... I'm pretty sure this has more to do with them trying to keep a normal form factor. I still think that in a console it doesn't make a whole lot of sense, as it forces developers to target a lower level of performance to make sure all PS5 games perform the same in all environments.
Yeah, I'm sure they each have their inside information and knew all about the other console months in advance. I'm not sure what you mean by "it forces developers to target a lower level of performance to make sure all PS5 games perform the same in all environments"?

Apparently Microsoft tested their system in the desert to make sure it would sustain its clocks.
Well, the design does seem more efficient at cooling, and the fan is much larger than what it used to be in past consoles. I just hope it maintains acceptable noise levels. I keep my console on my desk.

At the same time, these people are much smarter than me, so I'm sure their reasons are valid.
Yeah, I'm sure both have done a ton of research that led to their decisions.

I don't really care much either way, since if I were to buy any console, it would be this one, because of the exclusives.
 
$399 again would be sweet. But there was that rumor a while ago saying they were having trouble keeping costs under $450, I think it was. Given the custom hardware they're putting in it, I wouldn't be surprised if that were true. Then again, Sony has sold consoles at a loss before.

16 GB of GDDR6 RAM, what's the current market price on that? GPU, CPU, PSU, controllers, SSD size? etc.

Random guess: at least $550 US.
 
I don't really care much either way, since if I were to buy any console, it would be this one, because of the exclusives.


Same here, PS5 regardless for me... I just personally like the route Microsoft took at the hardware level. At the end of the day, only one of them plays PlayStation exclusives, so that will be my choice.


Sony going with variable clocks, at least from my point of view, means developers can't depend on those max clocks in all scenarios, whether that's power draw or someone sitting in a 30 °C room vs a 20 °C room.

It adds a variable to development that doesn't exist on the other console.
 