
Sony PlayStation 5 Promises 4K 120Hz Gaming

Joined
Jun 2, 2017
Messages
7,906 (3.14/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP700, Seagate 530 2TB, Adata SX8200 2TB x2, Kingston 2TB x2, Micron 8TB, WD AN1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech G7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64; Steam, GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Pfft, consoles and their crappy upscaled graphics. Next you'll say you love DLSS in all its glory. Or are you one of those double-standard guys for whom it's OK on consoles, but when a PC game does it, it's next to blasphemy?

I guess you haven't seen Spider-Man on the PS4.
 
Joined
Jun 2, 2017
Messages
7,906 (3.14/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP700, Seagate 530 2TB, Adata SX8200 2TB x2, Kingston 2TB x2, Micron 8TB, WD AN1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech G7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64; Steam, GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Well, true, I haven't done console gaming since the Sega Mega Drive in the '90s. But let me guess: locked 30 fps at some upscaled resolution?

Nope, my nephew has it and it rocks. Unfortunately there is no FPS counter on the PS4, so I can't confirm, but it looked way past 30 FPS and it was at 1080p. I was amazed; the fact is, some console games are absolutely beautiful.
 
Joined
Sep 7, 2017
Messages
3,244 (1.34/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Spider-Man is practically a first-party game, or second-party at least. Insomniac works closely with Sony and has been on board with all of their techniques since the old days. It's going to be hit and miss with neutral ports.

edit: Actually, to be fair, the engine was based on Sunset Overdrive, which was an Xbox One original from the strange period when Insomniac jumped ship (worth playing, btw), so I guess you could partly call it a neutral game.
 
Joined
Jun 2, 2017
Messages
7,906 (3.14/day)
System Name Best AMD Computer
Processor AMD 7900X3D
Motherboard Asus X670E E Strix
Cooling In Win SR36
Memory GSKILL DDR5 32GB 5200 30
Video Card(s) Sapphire Pulse 7900XT (Watercooled)
Storage Corsair MP700, Seagate 530 2TB, Adata SX8200 2TB x2, Kingston 2TB x2, Micron 8TB, WD AN1500
Display(s) GIGABYTE FV43U
Case Corsair 7000D Airflow
Audio Device(s) Corsair Void Pro, Logitech Z523 5.1
Power Supply Deepcool 1000M
Mouse Logitech G7 gaming mouse
Keyboard Logitech G510
Software Windows 11 Pro 64; Steam, GOG, Uplay, Origin
Benchmark Scores Firestrike: 46183 Time Spy: 25121
Spider-Man is practically a first-party game, or second-party at least. Insomniac works closely with Sony and has been on board with all of their techniques since the old days. It's going to be hit and miss with neutral ports.

edit: Actually, to be fair, the engine was based on Sunset Overdrive, which was an Xbox One original from the strange period when Insomniac jumped ship (worth playing, btw), so I guess you could partly call it a neutral game.

Sunset Overdrive was a great game!!!!!!
 
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
Nope, my nephew has it and it rocks. Unfortunately there is no FPS counter on the PS4, so I can't confirm, but it looked way past 30 FPS and it was at 1080p. I was amazed; the fact is, some console games are absolutely beautiful.

Well, according to Digital Foundry it's a locked, frame-paced 30 FPS, which sounds like a normal console game. Not saying it can't be good, though, and I agree console games can be beautiful and good-looking. Just don't expect them to be very high-FPS games; good frame pacing is key to how a game feels. Spider-Man seems to do it right, but there are many console ports that fall short on frame pacing, and what makes things even worse is that they're targeted at 30 fps.
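To make the frame-pacing point concrete, here's a minimal sketch with made-up frame-time traces (not captured data) of why a locked, evenly paced 30 FPS can feel better than a ragged higher average:

```python
# Why frame pacing matters more than average FPS: two hypothetical
# frame-time traces (milliseconds) with the same ~30 FPS average.
from statistics import mean, pstdev

paced = [33.3] * 8                                          # locked, even 30 FPS
unpaced = [16.7, 50.0, 16.7, 50.0, 16.7, 50.0, 16.7, 50.0]  # same average, uneven

for name, trace in (("paced", paced), ("unpaced", unpaced)):
    avg_fps = 1000 / mean(trace)
    jitter_ms = pstdev(trace)  # frame-to-frame variation
    print(f"{name}: {avg_fps:.1f} FPS average, {jitter_ms:.1f} ms jitter")

# Both report ~30 FPS, but the second trace alternates between 60 FPS and
# 20 FPS frames, which reads as stutter despite the identical average.
```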

Edit: Speaking of Digital Foundry, I finally remembered where I heard about the Xbox One's 120 Hz mode: it was their Sekiro console review.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
God, I hope you're joking, because if you aren't, you're showing how clueless you are. Navi 10 IS going into both consoles, simply because AMD has nothing better as of now. The consoles are due Q2/Q3 next year; what exactly do you think they can do in that year? Navi 10 is going into both consoles unless they ditch AMD, just not in the form of the 5700 or 5700 XT. It'll be a cut-down of those, possibly a cut-down of the 5700, so the best we're looking at here is 8-9 TFLOPS vs the 6 of the One X and the 4.5 of the PS4 Pro: a good 2K gaming experience at 60-70 fps in the latest games. 4K at 60 fps? Rofl, don't make me laugh. Surely some light games will be capable of that, but most of the games console users play? No, not at all, unless they keep upscaling stuff and scamming unaware casual console users, like they always have.

But the best part is... we need to THANK consoles? Jesus, you are one hell of a comedian. We should DAMN consoles every day for the low-quality trash they made popular and for how they casualized the entire market, making much more stupid games sell like hotcakes. They ruined the video game industry in an irreparable way, and there seems to be no end to it, as trash products continue to get praised and greedy devs are ever more inclined to cater to that lack of taste and experience to make tons of money.
...so you have no idea how AMD's semi-custom chip business works, then, or how console SoCs are made. Not to sound snide, but semi-custom doesn't mean combining off-the-shelf parts onto a PCB. Look at the past two console waves: fully custom silicon, where the "semi-" part comes from the IP blocks in said silicon being largely standardized AMD parts. Does an Xbox One or PS4 use an otherwise available GCN chip? No, but they use GCN-based GPUs with some custom features and a custom CU count. The same goes for the PS4 Pro and Xbox One X. The One X has 40 CUs. Are you saying it's then based on a cut-down Vega 64 die? 'Cause that's the only >36 CU <28nm die AMD has made until the VII. My point being: Navi 10 is a PC part, for PC cards. It is a specific die design, and while the next-gen consoles are guaranteed to use the same architecture (it's announced!), they're not going to use the same chip. We don't know whether the new consoles will be MCM packages or monolithic dice, but regardless of that, the console makers inevitably ask for added features that make directly transplanting in a PC chip next to impossible. There's no way whatsoever the Navi 10 die in its PC form is making its way into a PS5 or Xbox whatever.

I can't be bothered responding to the rest of your ... "post", as you seem utterly incapable of putting forward an informed and thought-out argument, and prefer screaming and yelling. No thanks.
I don't see how. Even the Radeon VII and RTX 2080 are barely getting by at 4K in semi-new to new games.

I really don't understand these 4K console announcements (let alone 8K from Microsoft) when the PC world is limping along as it is.

Even more absurd is that Google streaming service promising 4K @ 60 fps. At what kind of quality cost? What's the point of this?
Again, consoles aren't PCs, and overhead is a real thing. Then there's the fact that console settings are generally not "PC Ultra", as that's usually the "turn everything to 11, regardless of what makes sense in terms of performance impact" option, and console game designers tend to spend a lot of time optimizing this. The Xbox One X does 4k30 in quite a few titles with a 40 CU 1172MHz 16nm GCN GPU (and a terrible CPU). If RDNA delivers the IPC/"perf per FLOP" increase promised, imagine what a 60-ish (perhaps higher) CU design at slightly higher clocks could do. While Microsoft's stated "4x the power" is vague AF, and likely includes CPU perf, native 4k60 this generation is a given (a Radeon VII can do 4k60 just fine at High settings in a lot of games, and has to deal with Windows), and 4k120 in select esports titles (think Rocket League etc.) ought to be doable. And 1080p120 shouldn't be a problem whatsoever. Heck, my 4-year-old Fury X does that in esports games, and the CPU-crippled One X does 1080p60 in a bunch of games.
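For anyone who wants to check the back-of-envelope math: peak FP32 throughput for GCN/RDNA-style GPUs is CUs × 64 shaders × 2 ops per clock (fused multiply-add) × clock. A quick sketch; the 60 CU / 1.4 GHz figures are purely illustrative guesses, not leaked specs:

```python
# Peak FP32 throughput for GCN/RDNA-style GPUs:
# FLOPS = CUs * 64 shaders per CU * 2 ops per clock (FMA) * clock
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"Xbox One X (40 CU @ 1.172 GHz): {tflops(40, 1.172):.1f} TFLOPS")  # ~6.0
print(f"Hypothetical 60 CU @ 1.4 GHz:   {tflops(60, 1.4):.1f} TFLOPS")    # ~10.8
```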

As for 8k, I can't imagine that being for anything but streaming, or even just saying "we have HDMI 2.1" in a "cool" way.
 
Joined
Sep 7, 2017
Messages
3,244 (1.34/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
a Radeon VII can do 4k60 just fine at High settings in a lot of games, and has to deal with Windows

Not enough to convince me to upgrade. I'm using a Vega 64 with FreeSync atm; it does fine in a lot of games too, but it seems like the VII has the same issue, just less severe (I noticed new things like AC Odyssey falling to a crippling 30-40 fps, with the 2080 not much better, hovering around 40 fps, and the new Tomb Raider hovering around 40 fps). I don't think a true 4K card really exists yet. It's just about there, but not quite. edit: CPU speed could be a factor in what I've seen, though. Those framerate drops especially occur in the big population areas. The new games that I see really doing well are things that already did well (like Forza).
 
Joined
Mar 16, 2017
Messages
1,666 (0.64/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVMe 1TB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Nope, my nephew has it and it rocks. Unfortunately there is no FPS counter on the PS4, so I can't confirm, but it looked way past 30 FPS and it was at 1080p. I was amazed; the fact is, some console games are absolutely beautiful.
See Horizon Zero Dawn for another great example. I don't care what resolution it's running at; it's an amazingly beautiful game from 10' away on my 48" UHD TV. I do know that on the PS4 Pro, you can select a few different rendering options, one of which favors better frame rates.
 
Joined
Feb 17, 2017
Messages
852 (0.33/day)
Location
Italy
Processor i7 2600K
Motherboard Asus P8Z68-V PRO/Gen 3
Cooling ZeroTherm FZ120
Memory G.Skill Ripjaws 4x4GB DDR3
Video Card(s) MSI GTX 1060 6G Gaming X
Storage Samsung 830 Pro 256GB + WD Caviar Blue 1TB
Display(s) Samsung PX2370 + Acer AL1717
Case Antec 1200 v1
Audio Device(s) aune x1s
Power Supply Enermax Modu87+ 800W
Mouse Logitech G403
Keyboard Qpad MK80
That's a small chip by console standards. They normally go with 330-350 mm².


That's exactly why they'd need to go with a bigger chip.

The PS4 Pro GPU is 232 mm², and they won't go any higher than that, because even if they decrease the frequency, the chip is still going to consume too much and offer a limited level of performance.

Just speculating, but looking at the Zen 2 lineup, it would appear that there is an efficiency sweet spot for that chip, somewhere just north of 4.0 GHz. I would suspect the same is true of Navi. With the right clocks, they can likely make a pretty efficient part that still brings the FPS. A wider chip with lower clocks might help get them there, too. At 7nm, they are positioned well to make something that isn't really that big for the application.
Zen 2 =/= Navi 10

RDNA is basically GCN with a different skin, and partial confirmation of that is that they didn't talk about TDP, or even power consumption, at E3 or at Computex.

...so you have no idea how AMD's semi-custom chip business works, then, or how console SoCs are made. Not to sound snide, but semi-custom doesn't mean combining off-the-shelf parts onto a PCB. Look at the past two console waves: fully custom silicon, where the "semi-" part comes from the IP blocks in said silicon being largely standardized AMD parts. Does an Xbox One or PS4 use an otherwise available GCN chip? No, but they use GCN-based GPUs with some custom features and a custom CU count. The same goes for the PS4 Pro and Xbox One X. The One X has 40 CUs. Are you saying it's then based on a cut-down Vega 64 die? 'Cause that's the only >36 CU <28nm die AMD has made until the VII. My point being: Navi 10 is a PC part, for PC cards. It is a specific die design, and while the next-gen consoles are guaranteed to use the same architecture (it's announced!), they're not going to use the same chip. We don't know whether the new consoles will be MCM packages or monolithic dice, but regardless of that, the console makers inevitably ask for added features that make directly transplanting in a PC chip next to impossible. There's no way whatsoever the Navi 10 die in its PC form is making its way into a PS5 or Xbox whatever.

I can't be bothered responding to the rest of your ... "post", as you seem utterly incapable of putting forward an informed and thought-out argument, and prefer screaming and yelling. No thanks.

So what you said is incorrect. Navi 10 is the same uarch they're going to use in the consoles, and it has its limits just like Polaris 10 had. They will probably use a chip with a die size between 5700 and 5700 XT and lower the clocks so much that performance will fall below that of a desktop RX 5700, which is not even close to offering 4K 120 fps, since not even the 5700 XT can offer that under normal conditions. I never said they'd use the same die sizes or chips they use for PC hardware; I'm just saying that whatever they use, performance will inevitably fall behind their PC hardware solutions, for both the CPU and GPU parts.
 
Joined
Mar 16, 2017
Messages
1,666 (0.64/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVMe 1TB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Zen 2 =/= Navi 10

RDNA is basically GCN with a different skin, and partial confirmation of that is that they didn't talk about TDP, or even power consumption, at E3 or at Computex.

I know Zen and Navi aren't the same. It's not really what I'm talking about. I'm saying that any given architecture on any given node is going to have a sweet spot for performance per watt. Polaris had one, too, but AMD elected to blow right past that mark and pushed those chips with more voltage to get higher clocks.

The specs of Navi 10 are pretty well known at this point. The 5700 XT is a 225 W TBP card. It has almost 2x the transistors and considerably higher clocks versus the RX 590, yet has the same TBP. How hard is AMD pushing the 5700 XT? I suspect pretty hard, since they are still playing from behind. We don't really know what the specs of Navi will be in the consoles. The Xbox One X had a 40 CU GPU with "Polaris features," which was 4 more CUs than any available retail Polaris card. It wasn't full Polaris, but we don't really know how AMD customized those chips for the consoles. Still, I suspect Sony and MS will get more CUs than the 5700 XT, but AMD will probably lop 400-500 MHz off to hit that power sweet spot. Just using last gen's history as a guide.
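A rough way to see why "wider and slower" pays off: under the common approximation that dynamic power scales with frequency × voltage², and voltage has to rise roughly in step with frequency near the top of the V/f curve, power grows roughly with the cube of the clock, while adding CUs scales power (and throughput) roughly linearly. A minimal sketch; the 1.5x CU / 0.8x clock figures are made up for illustration, not console specs:

```python
# "Wide and slow" trade-off sketch, assuming dynamic power ~ f * V^2 with
# V roughly proportional to f near the top of the V/f curve (so P ~ f^3),
# while extra CUs scale power and throughput roughly linearly.
def relative_power(cu_scale: float, clock_scale: float) -> float:
    return cu_scale * clock_scale ** 3

baseline = relative_power(1.0, 1.0)    # e.g. a fully pushed 40 CU part
wide_slow = relative_power(1.5, 0.8)   # 50% more CUs, 20% lower clocks

print(f"power:      {wide_slow / baseline:.2f}x")  # ~0.77x the power...
print(f"throughput: {1.5 * 0.8:.2f}x")             # ...for ~1.2x the raw throughput
```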
 

INSTG8R

Vanguard Beta Tester
Joined
Nov 26, 2004
Messages
7,966 (1.12/day)
Location
Canuck in Norway
System Name Hellbox 5.1(same case new guts)
Processor Ryzen 7 5800X3D
Motherboard MSI X570S MAG Torpedo Max
Cooling TT Kandalf L.C.S.(Water/Air)EK Velocity CPU Block/Noctua EK Quantum DDC Pump/Res
Memory 2x16GB Gskill Trident Neo Z 3600 CL16
Video Card(s) Powercolor Hellhound 7900XTX
Storage 970 Evo Plus 500GB 2xSamsung 850 Evo 500GB RAID 0 1TB WD Blue Corsair MP600 Core 2TB
Display(s) Alienware QD-OLED 34” 3440x1440 144hz 10Bit VESA HDR 400
Case TT Kandalf L.C.S.
Audio Device(s) Soundblaster ZX/Logitech Z906 5.1
Power Supply Seasonic TX-850 Platinum
Mouse G502 Hero
Keyboard G19s
VR HMD Oculus Quest 2
Software Win 10 Pro x64
Well, at least their goals are reasonable, unlike MS's 8K nonsense. I mean, my PS4 is mainly a media server/BD player. I'm expecting things to go completely digital, so it will probably take a pretty impressive exclusive to get me on board; the 3-4 games I've bought for the PS4 mean I'll need serious incentive to upgrade if there's no BD drive.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
So what you said is incorrect. Navi 10 is the same uarch they're going to use in the consoles, and it has its limits just like Polaris 10 had. They will probably use a chip with a die size between 5700 and 5700 XT and lower the clocks so much that performance will fall below that of a desktop RX 5700, which is not even close to offering 4K 120 fps, since not even the 5700 XT can offer that under normal conditions. I never said they'd use the same die sizes or chips they use for PC hardware; I'm just saying that whatever they use, performance will inevitably fall behind their PC hardware solutions, for both the CPU and GPU parts.
Navi 10 is a specific die. Navi is a uarch. So unfortunately yes, you did say they would use a specific die:
Navi 10 IS going into both consoles
You seem to constantly mix up architecture code names and die code names, which makes it incredibly hard to understand what you're trying to say. You might not have meant to say that they were going to use the Navi 10 die in consoles, but that is what you did say. Next time, take a step back and reread your post before posting; it might help you get your point across.

As for what you're saying about RDNA being "basically GCN with a different skin", that's nonsense. There are significant low-level changes to the architecture, even if it retains compatibility with the GCN ISA and previous instruction handling and wavefront widths. Do we know yet if the changes translate into improved performance? Not really, as reviews haven't arrived. But every single technically competent analyst out there says these changes should help increase the hardware utilization and alleviate memory bandwidth strain - the two biggest issues with GCN. GamersNexus has an excellent overview with David Kanter, or you could take a look at AnandTech's brief summary here.
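For a concrete sense of one of those low-level changes: AMD's public RDNA material describes moving from GCN's 16-wide SIMDs executing a 64-wide wavefront over four clocks to 32-wide SIMDs executing a 32-wide wavefront in a single clock. A small sketch of what that means for instruction issue; the helper function here is mine, not an AMD API:

```python
# One headline RDNA change, per AMD's public material: GCN runs a 64-wide
# wavefront on a 16-wide SIMD over four clocks; RDNA runs a 32-wide
# wavefront on a 32-wide SIMD in a single clock.
def clocks_per_issue(wavefront_width: int, simd_width: int) -> int:
    return -(-wavefront_width // simd_width)  # ceiling division

print(f"GCN:  {clocks_per_issue(64, 16)} clocks per issued instruction")  # 4
print(f"RDNA: {clocks_per_issue(32, 32)} clock per issued instruction")   # 1

# Peak ALU throughput per CU is unchanged (64 lanes either way), but
# instructions complete with lower latency, helping keep the units fed.
```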

As for die size, it's true that consoles tend to go for mid-sized chips (nothing else makes much sense given their mass-produced nature), but at this point we have zero idea how they will be laid out. CU count? No idea. Clock speeds? No idea. MCM packaging or monolithic die? No idea. It's kind of funny that you say "it'll probably use a chip with a die size between 5700 and 5700XT" though, as they use different bins of the exact same die and die size is thus identical. They're both Navi 10, the lower-binned version with 4 CUs disabled just gets a suffix added to it.

Lastly, you seem to read me as saying the 5700 XT can do 4k120FPS, for some reason. I've never said that - though I'm sure it can in lightweight esports titles. CS:GO? Very likely. Rocket League? Sure. I wouldn't be surprised. But obviously this isn't a 4k gaming card - heck, I stated this quite categorically a few posts back. Every single demonstration AMD has done has been at 1440p. Does this translate 1:1 to console gaming (even if we make the rather significant assumption of equal hardware specs)? No. Consoles have far more optimized performance (as developers have a single (or at least no more than 3-4) hardware configuration to optimize for, rather than the ~infinite combinations in the PC space). Consoles also have lower-level hardware access for games, leading to better utilization and less overhead. The lightweight OSes also lead to less overhead. And, of course, they tend to skip very expensive rendering options that are often a part of "Ultra" settings on PC. This is how an Xbox One X can do 1800p or even native 2160p30 on a GPU with very similar power to an RX 480 with a terrible, low-speed CPU.
Not enough to convince me to upgrade. I'm using a Vega 64 with FreeSync atm; it does fine in a lot of games too, but it seems like the VII has the same issue, just less severe (I noticed new things like AC Odyssey falling to a crippling 30-40 fps, with the 2080 not much better, hovering around 40 fps, and the new Tomb Raider hovering around 40 fps). I don't think a true 4K card really exists yet. It's just about there, but not quite. edit: CPU speed could be a factor in what I've seen, though. Those framerate drops especially occur in the big population areas. The new games that I see really doing well are things that already did well (like Forza).
Not trying to convince you to upgrade ;) Heck, I'm very much a proponent of keeping hardware as long as one can, and not upgrading until it's actually necessary. I've stuck with my Fury X for quite a while now, and I'm still happy with it at 1440p, even if newer games tend to require noticeably lowered settings. I wouldn't think an RX 5700 makes much sense as an upgrade from a Vega 64 - I'd want at least a 50% performance increase to warrant that kind of expense. Besides that, isn't AC Odyssey notoriously crippling? I haven't played it, so I don't know, but I seem to remember reading that it's a hog. You're right that a "true 4k" card doesn't exist yet if that means >60fps in "all" games at Ultra settings - but this (all games at max settings at the highest available resolution) isn't something even ultra-high-end hardware has typically been capable of. We've gotten a bit spoiled with recent generations of hardware and how developers have gotten a lot better at adjusting quality settings to match available hardware. Remember that the original Crysis was normally played at resolutions way below 1080p even on the best GPUs of the time - and still chugged! :)
 
Joined
Feb 17, 2017
Messages
852 (0.33/day)
Location
Italy
Processor i7 2600K
Motherboard Asus P8Z68-V PRO/Gen 3
Cooling ZeroTherm FZ120
Memory G.Skill Ripjaws 4x4GB DDR3
Video Card(s) MSI GTX 1060 6G Gaming X
Storage Samsung 830 Pro 256GB + WD Caviar Blue 1TB
Display(s) Samsung PX2370 + Acer AL1717
Case Antec 1200 v1
Audio Device(s) aune x1s
Power Supply Enermax Modu87+ 800W
Mouse Logitech G403
Keyboard Qpad MK80
I know Zen and Navi aren't the same. It's not really what I'm talking about. I'm saying that any given architecture on any given node is going to have a sweet spot for performance per watt. Polaris had one, too, but AMD elected to blow right past that mark and pushed those chips with more voltage to get higher clocks.

The specs of Navi 10 are pretty well known at this point. The 5700 XT is a 225 W TBP card. It has almost 2x the transistors and considerably higher clocks versus the RX 590, yet has the same TBP. How hard is AMD pushing the 5700 XT? I suspect pretty hard, since they are still playing from behind. We don't really know what the specs of Navi will be in the consoles. The Xbox One X had a 40 CU GPU with "Polaris features," which was 4 more CUs than any available retail Polaris card. It wasn't full Polaris, but we don't really know how AMD customized those chips for the consoles. Still, I suspect Sony and MS will get more CUs than the 5700 XT, but AMD will probably lop 400-500 MHz off to hit that power sweet spot. Just using last gen's history as a guide.

The Xbox One X had a completely different thing under the hood; it was more Vega-based than anything else, and the die size was bigger, which is why consumption hits 200 W. It doesn't matter how AMD customizes those chips for consoles, they can't do miracles; console hardware is always underpowered for multiple reasons, and it won't be able to achieve 4K 120 fps in most games, just like the Xbox One X can't achieve 4K 30 fps in most games. Someone else claimed the PS5 will be 6x the performance of an Xbox One X, which is around 6 TFLOPS; that would be 36 TFLOPS. I mean... they'll be lucky if it does 1.5x, let alone 6x...

Navi 10 is a specific die. Navi is a uarch. So unfortunately yes, you did say they would use a specific die:

You seem to constantly mix up architecture code names and die code names, which makes it incredibly hard to understand what you're trying to say. You might not have meant to say that they were going to use the Navi 10 die in consoles, but that is what you did say. Next time, take a step back and reread your post before posting; it might help you get your point across.

As for what you're saying about RDNA being "basically GCN with a different skin", that's nonsense. There are significant low-level changes to the architecture, even if it retains compatibility with the GCN ISA and previous instruction handling and wavefront widths. Do we know yet if the changes translate into improved performance? Not really, as reviews haven't arrived. But every single technically competent analyst out there says these changes should help increase the hardware utilization and alleviate memory bandwidth strain - the two biggest issues with GCN. GamersNexus has an excellent overview with David Kanter, or you could take a look at AnandTech's brief summary here.

As for die size, it's true that consoles tend to go for mid-sized chips (nothing else makes much sense given their mass-produced nature), but at this point we have zero idea how they will be laid out. CU count? No idea. Clock speeds? No idea. MCM packaging or monolithic die? No idea. It's kind of funny that you say "it'll probably use a chip with a die size between 5700 and 5700XT" though, as they use different bins of the exact same die and die size is thus identical. They're both Navi 10, the lower-binned version with 4 CUs disabled just gets a suffix added to it.

Lastly, you seem to read me as saying the 5700 XT can do 4k120FPS, for some reason. I've never said that - though I'm sure it can in lightweight esports titles. CS:GO? Very likely. Rocket League? Sure. I wouldn't be surprised. But obviously this isn't a 4k gaming card - heck, I stated this quite categorically a few posts back. Every single demonstration AMD has done has been at 1440p. Does this translate 1:1 to console gaming (even if we make the rather significant assumption of equal hardware specs)? No. Consoles have far more optimized performance (as developers have a single (or at least no more than 3-4) hardware configuration to optimize for, rather than the ~infinite combinations in the PC space). Consoles also have lower-level hardware access for games, leading to better utilization and less overhead. The lightweight OSes also lead to less overhead. And, of course, they tend to skip very expensive rendering options that are often a part of "Ultra" settings on PC. This is how an Xbox One X can do 1800p or even native 2160p30 on a GPU with very similar power to an RX 480 with a terrible, low-speed CPU.

Navi 10 is a specific die, yes, and that's what they'll use on consoles, just like they used Polaris 10 on the PS4 Pro and Vega 10 on the Xbox One X. Do you want me to say "based on" too? Will it change anything technically? No. The chips they use on consoles might be custom, but they're essentially 95% what was developed for PC hardware, just not at the same sizes used for PC hardware and possibly with slightly different configurations. That's a pretty sure thing. The best you can expect is a cut of the 5700 XT, which is 251 mm², with lower frequencies. And if the consumption data is correct, the full-fledged 5700 XT is 225 W; with lower frequencies we're talking about not less than 180 W, because the more they decrease clocks, the more performance they lose. That's still 180 W without counting the rest of the console, so what will they be selling, ~300 W consoles?
How is what I said funny? When have they ever used bigger die sizes on consoles compared to PC hardware? They never did. The Xbox One X is Vega-based, and the smallest Vega at 16nm on PC hardware is 495 mm²; that's how they pulled out ~6 TFLOPS while maintaining a relatively low frequency (the Xbox One X GPU die is 359 mm²). Binning won't really change much of anything, especially since talking about binning in console chips is a bit crazy...
No, I don't read you as saying the 5700 XT is capable of 4K 120 fps, but then how do you think 4K 120 fps is possible on the PS5 or Xbox Scarlett? Whatever the consoles are equipped with can't be as powerful as the PC hardware version of it. And I don't want to hear about upscaling, because 4K is 4K (or UHD, to be more precise); if they claim 4K, that needs to be the in-game resolution, and no upscaling is allowed. On the fps I won't even comment, because it's totally absurd.
 
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
The Xbox One X had a completely different thing under the hood; it was more Vega-based than anything else, and the die size was bigger, which is why consumption hits 200 W. It doesn't matter how AMD customizes those chips for consoles, they can't do miracles; console hardware is always underpowered for multiple reasons, and it won't be able to achieve 4K 120 fps in most games, just like the Xbox One X can't achieve 4K 30 fps in most games. Someone else claimed the PS5 will be 6x the performance of an Xbox One X, which is around 6 TFLOPS; that would be 36 TFLOPS. I mean... they'll be lucky if it does 1.5x, let alone 6x...



Navi 10 is a specific die, yes, and that's what they'll use on consoles, just like they used Polaris 10 on the PS4 Pro and Vega 10 on the Xbox One X. Do you want me to say "based on" too? Will it change anything technically? No. The chips they use on consoles might be custom, but they're essentially 95% what was developed for PC hardware, just not at the same sizes used for PC hardware and possibly with slightly different configurations. That's a pretty sure thing. The best you can expect is a cut of the 5700 XT, which is 251 mm², with lower frequencies. And if the consumption data is correct, the full-fledged 5700 XT is 225 W; with lower frequencies we're talking about not less than 180 W, because the more they decrease clocks, the more performance they lose. That's still 180 W without counting the rest of the console, so what will they be selling, ~300 W consoles?
How is what I said funny? When have they ever used bigger die sizes on consoles compared to PC hardware? They never did. The Xbox One X is Vega-based, and the smallest Vega at 16nm on PC hardware is 495 mm²; that's how they pulled out ~6 TFLOPS while maintaining a relatively low frequency (the Xbox One X GPU die is 359 mm²). Binning won't really change much of anything, especially since talking about binning in console chips is a bit crazy...
No, I don't read you as saying the 5700 XT is capable of 4K 120 fps, but then how do you think 4K 120 fps is possible on the PS5 or Xbox Scarlett? Whatever the consoles are equipped with can't be as powerful as the PC hardware version of it. And I don't want to hear about upscaling, because 4K is 4K (or UHD, to be more precise); if they claim 4K, that needs to be the in-game resolution, and no upscaling is allowed. On the fps I won't even comment, because it's totally absurd.

Uhm, no. Console chips are APUs. The current gen has Jaguar CPU cores and a GCN graphics part on the same die; e.g., the Xbox One X's Scorpio chip looks like this:
[attached image: Xbox One X SoC die shot]

Edit: I agree, though; I don't think it's in any way feasible to get a 4K 120 fps APU out of console-like power budgets.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
The Xbox One X had a completely different thing under the hood; it was more Vega-based than anything else, and the die size was bigger, which is why consumption hits 200 W. It doesn't matter how AMD customizes those chips for consoles, they can't do miracles; console hardware is always underpowered for multiple reasons, and it won't be able to achieve 4K 120 fps in most games, just like the Xbox One X can't achieve 4K 30 fps in most games. Someone else claimed the PS5 will be 6x the performance of an Xbox One X, which is around 6 TFLOPS; that would be 36 TFLOPS. I mean... they'll be lucky if it does 1.5x, let alone 6x...
1: Exactly. The One X had a semi-custom GPU that merged features from Vega and Polaris, being fully neither but a hybrid. It's also quite large, yes - in fact, bigger than any discrete Polaris GPU, but significantly smaller than the 14nm Vega die.
2: What you're saying about not being able to "do miracles" here has no bearing on this discussion whatsoever. You were saying that consoles use off-the-shelf GPU dice. They don't. They use custom silicon.
3: The number of games in which the Xbox One X can do native 4k30 isn't huge, but it's there nonetheless. An RX 480, which has roughly the same FLOPS, can't match its performance at equivalent quality settings in Windows. This demonstrates how the combination of lower-level hardware access and more optimization works in consoles' favor.
4: Performance can mean more than FLOPS. Case in point: RDNA. GCN does a lot of compute (FLOPS) but translates that rather poorly to gaming performance (for various reasons). RDNA aims to improve this ratio significantly - and according to the demonstrations and technical documentation provided, this makes sense. My Fury X does 8.6 TFLOPs, which is 9% more than the RX 5700, yet the 5700 is supposed to match or beat the RTX 2060, which is 34% faster than my GPU. In other words, "6x performance" does not in any way have to mean "6x FLOPS". Also, console makers are likely factoring in CPU comparisons, where they're easily getting a 2-3x improvement with the move from Jaguar to Zen2. Still, statements like "The PS5 will be 6x the performance of the Xbox One X" are silly and unrealistic simply because you can't summarize performance into a single number that applies across all workloads. For all we know they're factoring in load time reductions from a fast SSD into that, which would be downright dumb.
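To put rough numbers on point 4, here's the arithmetic implied by those claims; all inputs are the approximate figures from the paragraph above, not benchmark measurements:

```python
# FLOPS != gaming performance: a rough perf-per-FLOP comparison using the
# approximate figures claimed above.
fury_x_tflops = 8.6
rx5700_tflops = fury_x_tflops / 1.09  # Fury X has ~9% more FLOPS -> ~7.9
rtx2060_rel_perf = 1.34               # RTX 2060 ~34% faster than Fury X in games

# If the RX 5700 roughly matches the RTX 2060 in games:
perf_per_flop_gain = rtx2060_rel_perf * fury_x_tflops / rx5700_tflops
print(f"~{perf_per_flop_gain:.2f}x gaming performance per FLOP vs Fiji-era GCN")
# ~1.46x: "Nx the performance" clearly doesn't have to mean "Nx the FLOPS".
```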

Navi 10 is a specific die, yes, and that's what they'll use on consoles, just like they used Polaris 10 on the PS4 Pro and Vega 10 on the Xbox One X.
Again: NO THEY DIDN'T. You just stated - literally one paragraph ago! - that the consoles used a GPU "more Vega based than anything else", but now you're saying it's Vega 10? You're contradicting yourself within the span of five lines of text. Can you please accept that saying "Vega 10" means something different and more specific than "Vega"? Because it does. Now, "Vega 10" can even mean two different things: a code name for a die (never used officially/publicly), which is the chip in the V56/64, or the marketing name Vega 10, which is the iGPU in Ryzen APUs with 10 CUs enabled. These are entirely different pieces of silicon (one is a pure GPU die, the other a monolithic APU die), but neither are found in any console. Navi 10 (the RX 5700/XT die) will never, ever, be found in an Xbox or Playstation console, unless something very weird happens. Consoles use semi-custom designs based on the same architecture, but semi-custom means not the same.

Do you want me to say "based on" too? Will it change anything technically? No. The chips they use on consoles might be custom, but they're essentially 95% what was developed for PC hardware, just not at the same sizes used for PC hardware and possibly with slightly different configurations. That's a pretty sure thing.
Yes, it will change something technically. That's the difference between an architecture and a specific rendition of/design based on said architecture. One is a general categorization, one is a specific thing. Saying "battle royale game" and saying "Apex Legends" isn't the same thing either, but have a very similar relation to each other - the latter being a specific rendition of the broader category described by the former.

The best you can expect is a cut of the 5700 XT, which is 251 mm², with lower frequencies. And if the consumption data is correct, the full-fledged 5700 XT is 225 W; with lower frequencies we're talking about not less than 180 W, because the more they decrease clocks, the more performance they lose. That's still 180 W without counting the rest of the console, so what will they be selling, ~300 W consoles?
Again, you're arguing against some sort of straw man that has no basis in what I was saying. We know nothing specific about the GPU configurations of the upcoming consoles. We can make semi-educated guesses, but quite frankly that's a meaningless exercise IMO. We'll get whatever MS and Sony thinks is the best balance of die area, performance, features, power draw and cost. There are many ways this can play out. Your numbers are likely in the ballpark of correct, but we don't know nearly enough to say anything more specific about this - all we can make are general statements like "the GPU isn't likely to be very small or overly large" or "power draw is likely to be within what can be reasonably cooled in a mass-produced console form factor".

Now, one thing we can say with relative certainty is that the new console GPUs will likely clock lower than retail PC Navi dGPUs, both for power consumption and QC reasons (fewer discarded dice due to failure to meet frequency targets, less chance of failure overall). AMD tends to push the clocks of their dGPUs high, so it's reasonable to assume that if the RX 5700 XT consumes ~225W at ~1755MHz "game clock" (average in-game boost), downclocking it by 200-300 MHz is likely to result in significant power savings. After all, power scaling in silicon ICs is nowhere near linear, and pushing clocks always leads to disproportionate increases in power consumption. Just look at the gains people got from underclocking (and to some extent undervolting) Vega. If a card consumes 225W at "pushed" clocks, it's not unlikely to get it to, say, 150W (33% reduced power) with much less than 33% performance lost. And if their power target is, for example, 200W, they could go for a low-clocked (< 3GHz) 8c Zen2 CPU at <65W and a "slow-and-wide" GPU that's ultimately faster than the 5700XT. I'm not saying this will happen (heck, I'm not even saying I think it's likely), but I'm saying it's possible.
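To illustrate that last point with numbers: under the common cube-law approximation (dynamic power ~ f × V², with V roughly tracking f), hitting a 150 W target from 225 W costs far less than 33% of the clocks. Illustrative only; real V/f curves and static power make this less tidy:

```python
# How a 33% power cut can cost much less than 33% performance, assuming
# dynamic power ~ f * V^2 with V scaling roughly with f (so P ~ f^3).
pushed_watts, target_watts = 225, 150

clock_scale = (target_watts / pushed_watts) ** (1 / 3)
print(f"clock scale: {clock_scale:.3f}")             # ~0.874
print(f"perf lost:   {1 - clock_scale:.1%}")         # ~12.6% for a 33% power cut
print(f"game clock:  {1755 * clock_scale:.0f} MHz")  # ~1533 MHz, down from 1755
```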

The current consoles demonstrate this quite well: an RX 580 consumes somewhere between 180 and 200 W alone. An Xbox One X consumes around the same power with 4 more CUs, 8 CPU cores, 50% more VRAM, an HDD, an optical drive, and so on. In other words, the Xbox One X has more CUs than the 580, but consumes noticeably less power for the GPU itself.

How is what I said funny?
Have I said that something you said is funny?

When have they ever used bigger die sizes on consoles compared to PC hardware? They never did. The Xbox One X is Vega-based, and the smallest Vega at 16nm on PC hardware is 495 mm²; that's how they pulled out ~6 TFLOPS while maintaining a relatively low frequency (the Xbox One X GPU die is 359 mm²).
Xbox One X is a Vega-Polaris hybrid, and what I said is that it's bigger than any PC Polaris die - the biggest of which has 36 CUs. I never mentioned Vega in that context.

Binning won't really change much of anything, especially since talking about binning in console chips is a bit crazy...
I never mentioned binning in relation to consoles, I mentioned binning because you were talking as if the 5700 and 5700XT are based off different dice, which they aren't.

No, I don't read you as saying the 5700 XT is capable of 4K 120 fps, but then how do you think 4K 120 fps is possible on the PS5 or Xbox Scarlett? Whatever the consoles are equipped with can't be as powerful as the PC hardware version of it. And I don't want to hear about upscaling, because 4K is 4K (or UHD, to be more precise); if they claim 4K, that needs to be the in-game resolution, and no upscaling is allowed. On the fps I won't even comment, because it's totally absurd.
1: We don't know the CU count of the upcoming consoles. I don't expect it to be much more than 40, but we've been surprised before. The Xbox One X has 40 CUs, and MS is promising a significant performance uplift from that - even with, let's say, 20% higher clocks due to 7nm (still far lower than dGPUs) and 25% more work per clock, they'd need more CUs to make a real difference - after all, that just adds up to a 50% performance increase, which doesn't even hit 4k60 if the current consoles can at best hit 4k30. Increasing the CU count to, say, 50 (for ease of calculation, not that I think it's a likely number) would boost that to an 87.5% increase instead, for a relatively low power cost (<25% increase) compared to boosting clock speeds to a similar level of performance (see the quick sketch after this list).
2: As I've been trying to make clear for a few posts now, frame rates depend on the (type of) game. Forza runs flawlessly at 4k30 on the One X. High-end AAA games tend not to. A lightweight esports title might even exceed this, though the X isn't capable of outputting more frames - consoles generally run strict VSYNC (and the FreeSync implementation on Xbox is ... bad). I'm saying I don't doubt they'll be able to run lightweight esports titles at 4k120. Again: My 2015-era Fury X runs Rocket League at 1440p120 flawlessly (I've limited it there, no idea how high it really goes, but likely not above 150), so a newer console with a significantly faster architecture, a similar CU count, more VRAM and other optimizations might absolutely be able to reach 4k120 in a lightweight title like that. Heck, they might even implement lower-quality high FPS modes in some games for gamers who prefer resolution and sharpness over texture quality and lighting - it can make sense for fast-paced games. Am I saying we'll see the next CoD or BF at 4k120 on the upcoming consoles? Of course not. 4k60 is very likely the goal for games like that, but they might not be able to hit it consistently if they keep pushing expensive graphical effects for bragging rights. 1080p120 is reasonably realistic for games like that, though. The CPU is the main bottleneck for high FPS gaming on the One X, after all, so at 1080p these consoles should really take off. My point is: games are diverging into more distinct performance categories than just a few years ago, where esports titles prioritize FPS and response times, while a lot of other games prioritize visual quality and accept lower frame rates to achieve this. This makes it very difficult to say whether or not a console "can run 4k120" or anything of the sort, as the performance difference between different games on the same platform can be very, very significant. Backwards compatibility on consoles compounds this effect.
3: I was quite specific about whether or not I meant upscaled resolutions - please reread my posts if you missed this. When I say native 4k, I mean native 4k (yes, technically UHD, but we aren't cinematographers now, are we?).
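And the quick sketch promised in point 1; the clock and IPC gains are the assumed round numbers from that point, not known console specs:

```python
# Checking the scaling arithmetic from point 1 (hypothetical inputs).
base_cus = 40          # Xbox One X CU count, used as the 1.0x baseline

clock_gain = 1.20      # assumed 20% higher clocks on 7nm
ipc_gain = 1.25        # assumed 25% more work per clock

same_width = clock_gain * ipc_gain
wider = same_width * (50 / base_cus)  # bump the design to 50 CUs

print(f"40 CUs: {same_width:.3f}x")   # 1.500x -> still short of 4k60
print(f"50 CUs: {wider:.3f}x")        # 1.875x -> the ~87.5% uplift cited above
```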
 
Joined
Feb 17, 2017
Messages
852 (0.33/day)
Location
Italy
Processor i7 2600K
Motherboard Asus P8Z68-V PRO/Gen 3
Cooling ZeroTherm FZ120
Memory G.Skill Ripjaws 4x4GB DDR3
Video Card(s) MSI GTX 1060 6G Gaming X
Storage Samsung 830 Pro 256GB + WD Caviar Blue 1TB
Display(s) Samsung PX2370 + Acer AL1717
Case Antec 1200 v1
Audio Device(s) aune x1s
Power Supply Enermax Modu87+ 800W
Mouse Logitech G403
Keyboard Qpad MK80
Uhm, no. Console chips are APUs. The current gen has Jaguar CPU cores and a GCN graphics part on the same die; e.g., the Xbox One X's Scorpio chip looks like this:

Edit: I agree, though; I don't think it's in any way feasible to get a 4K 120 fps APU out of console-like power budgets.

Guys, it's not like I ever said consoles use 100% the exact silicon used in PC hardware, but the performance and the architecture are the same. They might change the configuration, they might build monolithic-die APUs, but the juice is essentially the same. Even if Microsoft pays AMD for a custom chip, they're going to use the same technology they have for PC, just adapted and customized; again, it's the same stuff.

1: Exactly. The One X had a semi-custom GPU that merged features from Vega and Polaris, being fully neither but a hybrid. It's also quite large, yes - in fact, bigger than any discrete Polaris GPU, but significantly smaller than the 14nm Vega die.
2: What you're saying about not being able to "do miracles" here has no bearing on this discussion whatsoever. You were saying that consoles use off-the-shelf GPU dice. They don't. They use custom silicon.
3: The number of games in which the Xbox One X can do native 4k30 isn't huge, but it's there nonetheless. An RX 480, which has roughly the same FLOPS, can't match its performance at equivalent quality settings in Windows. This demonstrates how the combination of lower-level hardware access and more optimization works in consoles' favor.
4: Performance can mean more than FLOPS. Case in point: RDNA. GCN does a lot of compute (FLOPS) but translates that rather poorly to gaming performance (for various reasons). RDNA aims to improve this ratio significantly - and according to the demonstrations and technical documentation provided, this makes sense. My Fury X does 8.6 TFLOPs, which is 9% more than the RX 5700, yet the 5700 is supposed to match or beat the RTX 2060, which is 34% faster than my GPU. In other words, "6x performance" does not in any way have to mean "6x FLOPS". Also, console makers are likely factoring in CPU comparisons, where they're easily getting a 2-3x improvement with the move from Jaguar to Zen2. Still, statements like "The PS5 will be 6x the performance of the Xbox One X" are silly and unrealistic simply because you can't summarize performance into a single number that applies across all workloads. For all we know they're factoring in load time reductions from a fast SSD into that, which would be downright dumb.
1: The Xbox One X has a semi-custom GPU based on the Vega architecture, not Polaris.
2: It's a way of saying that they can't do much beyond what they have already developed, which is already at its limit. And yes, we do know that, because it's been like this for years, and it's not going to change.
3: Because they're different architectures. And optimization still can't do miracles either.
4: Yes, they might have improved that; they might be approaching NVIDIA's level of optimization in that regard. But the fact stands: AMD themselves compared their 5700 XT to a 2070 and the 5700 to a 2060, and while they take the performance crown most of the time, they sometimes lose. Let's say they're probably going to battle the new "Super" cards NVIDIA is preparing, which sound like they're basically going to be a 2060 Ti and a 2070 Ti. Anyway, the 5700 XT and 5700 are roughly at that performance level, and it doesn't seem to me that they're capable of doing what Sony is claiming in any way; nothing close to 6x performance, much closer to 2x. Whether they're factoring other stuff into that number I don't know, but it's not fair, because it's simply not true to state that: if you talk about performance, it's computing performance only, unless you want to kind of scam your customers, and it wouldn't be the first time...
Again: NO THEY DIDN'T. You just stated - literally one paragraph ago! - that the consoles used a GPU "more Vega based than anything else", but now you're saying it's Vega 10? You're contradicting yourself within the span of five lines of text. Can you please accept that saying "Vega 10" means something different and more specific than "Vega"? Because it does. Now, "Vega 10" can even mean two different things: a code name for a die (never used officially/publicly), which is the chip in the V56/64, or the marketing name Vega 10, which is the iGPU in Ryzen APUs with 10 CUs enabled. These are entirely different pieces of silicon (one is a pure GPU die, the other a monolithic APU die), but neither are found in any console. Navi 10 (the RX 5700/XT die) will never, ever, be found in an Xbox or Playstation console, unless something very weird happens. Consoles use semi-custom designs based on the same architecture, but semi-custom means not the same.
Again, they did: Polaris 10 includes all RX 4xx cards; Polaris 20 includes the RX 570 and 580; Polaris 30 includes the RX 590; Vega 10 includes Vega 3/6/8/10/11/20/48/56/64; and Vega 20 includes the Radeon VII.
Polaris 10/20/30 are all Polaris; Vega 10 and 20 are both Vega. The Xbox One X GPU is built on the Vega 10 family. It's not like any of those mentioned before, but it's still Vega and very possibly part of the Vega 10 group. But even if it isn't, even if it's not Vega 10 but Vega "Xbox", what does it matter? Same stuff, same performance per watt, same die limitations, as it's still part of the Vega family and they share everything. I agree it doesn't mean 100% the same, but at 90% it's still essentially the same; not exact, but almost.
Yes, it will change something technically. That's the difference between an architecture and a specific rendition of/design based on said architecture. One is a general categorization, one is a specific thing. Saying "battle royale game" and saying "Apex Legends" isn't the same thing either, but have a very similar relation to each other - the latter being a specific rendition of the broader category described by the former.
What will it change technically? The layout inside the die? The configuration? Apart from that? Performance per watt is still the same and won't go any higher. That's not quite an accurate analogy; I'd say one is Half-Life 2 and the other is Half-Life 2: Episode One or Two. Basically Half-Life 2, apart from the story.
Again, you're arguing against some sort of straw man that has no basis in what I was saying. We know nothing specific about the GPU configurations of the upcoming consoles. We can make semi-educated guesses, but quite frankly that's a meaningless exercise IMO. We'll get whatever MS and Sony thinks is the best balance of die area, performance, features, power draw and cost. There are many ways this can play out. Your numbers are likely in the ballpark of correct, but we don't know nearly enough to say anything more specific about this - all we can make are general statements like "the GPU isn't likely to be very small or overly large" or "power draw is likely to be within what can be reasonably cooled in a mass-produced console form factor".
Well, what are you arguing about, then? We'll get whatever AMD is capable of doing with their chips, nothing more, and whatever Sony or Microsoft can sell at a decent price without it consuming power like an oven.
Now, one thing we can say with relative certainty is that the new console GPUs will likely clock lower than retail PC Navi dGPUs, both for power consumption and QC reasons (fewer discarded dice due to failure to meet frequency targets, less chance of failure overall). AMD tends to push the clocks of their dGPUs high, so it's reasonable to assume that if the RX 5700 XT consumes ~225W at ~1755MHz "game clock" (average in-game boost), downclocking it by 200-300 MHz is likely to result in significant power savings. After all, power scaling in silicon ICs is nowhere near linear, and pushing clocks always leads to disproportionate increases in power consumption. Just look at the gains people got from underclocking (and to some extent undervolting) Vega. If a card consumes 225W at "pushed" clocks, it's not unlikely to get it to, say, 150W (33% reduced power) with much less than 33% performance lost. And if their power target is, for example, 200W, they could go for a low-clocked (< 3GHz) 8c Zen2 CPU at <65W and a "slow-and-wide" GPU that's ultimately faster than the 5700XT. I'm not saying this will happen (heck, I'm not even saying I think it's likely), but I'm saying it's possible.

The current consoles demonstrate this quite well: an RX 580 consumes somewhere between 180 and 200 W alone. An Xbox One X consumes around the same power with 4 more CUs, 8 CPU cores, 50% more VRAM, an HDD, an optical drive, and so on. In other words, the Xbox One X has more CUs than the 580, but consumes noticeably less power for the GPU itself.
Alright, but a 33% reduction in clocks won't give you only a 3% performance loss, or even only 13%; it'll be something around 20-25%, which is still pretty significant. And again, no, they never did that, because if they struggle to meet the power limit by reducing clocks, widening the die will only give back the power consumption they got rid of by lowering clocks; maybe not at the same rate, but very close, so it doesn't make sense. You're still comparing Polaris-based chips with the Vega-based chip of the Xbox One X. The Xbox One X die has 16 fewer CUs than the slower PC version of Vega, which has 56, and with lower clocks of course. But if you want to keep comparing it to Polaris, have it your way.
Have I said that something you said is funny?
Well, you said "It's kind of funny that you say...", which is not actually the same thing, and I probably misunderstood that, but it doesn't sound all that different. Anyway, I don't mind, no worries.
Xbox One X is a Vega-Polaris hybrid, and what I said is that it's bigger than any PC Polaris die - the biggest of which has 36 CUs. I never mentioned Vega in that context.
Vega, not Polaris.
I never mentioned binning in relation to consoles, I mentioned binning because you were talking as if the 5700 and 5700XT are based off different dice, which they aren't.
I actually understood that they were different dice, but are we sure they're the same die (or did I miss something official from AMD)? Anyway, it makes perfect sense; it costs less to just use lower-binned dice to make the slower chip that way. I guess we'll see a different die if they ever make a more powerful chip and name it RX 5800 or something.
1: We don't know the CU count of the upcoming consoles. I don't expect it to be much more than 40, but we've been surprised before. The Xbox One X has 40 CUs, and MS is promising a significant performance uplift from that - even with, let's say, 20% higher clocks due to 7nm (still far lower than dGPUs) and 25% more work per clock, they'd need more CUs to make a real difference - after all, that just adds up to a 50% performance increase, which doesn't even hit 4k60 if the current consoles can at best hit 4k30. Increasing the CU count to, say, 50 (for ease of calculation, not that I think it's a likely number) would boost that to an 87.5% increase instead, for a relatively low power cost (<25% increase) compared to boosting clock speeds to a similar level of performance.
2: As I've been trying to make clear for a few posts now, frame rates depend on the (type of) game. Forza runs flawlessly at 4k30 on the One X. High-end AAA games tend not to. A lightweight esports title might even exceed this, though the X isn't capable of outputting more frames - consoles generally run strict VSYNC (and the FreeSync implementation on Xbox is ... bad). I'm saying I don't doubt they'll be able to run lightweight esports titles at 4k120. Again: My 2015-era Fury X runs Rocket League at 1440p120 flawlessly (I've limited it there, no idea how high it really goes, but likely not above 150), so a newer console with a significantly faster architecture, a similar CU count, more VRAM and other optimizations might absolutely be able to reach 4k120 in a lightweight title like that. Heck, they might even implement lower-quality high FPS modes in some games for gamers who prefer resolution and sharpness over texture quality and lighting - it can make sense for fast-paced games. Am I saying we'll see the next CoD or BF at 4k120 on the upcoming consoles? Of course not. 4k60 is very likely the goal for games like that, but they might not be able to hit it consistently if they keep pushing expensive graphical effects for bragging rights. 1080p120 is reasonably realistic for games like that, though. The CPU is the main bottleneck for high FPS gaming on the One X, after all, so at 1080p these consoles should really take off. My point is: games are diverging into more distinct performance categories than just a few years ago, where esports titles prioritize FPS and response times, while a lot of other games prioritize visual quality and accept lower frame rates to achieve this. This makes it very difficult to say whether or not a console "can run 4k120" or anything of the sort, as the performance difference between different games on the same platform can be very, very significant. Backwards compatibility on consoles compounds this effect.
3: I was quite specific about whether or not I meant upscaled resolutions - please reread my posts if you missed this. When I say native 4k, I mean native 4k (yes, technically UHD, but we aren't cinematographers now, are we?).

1: We don't, but we can assume it'll be something smaller than the 5700 XT, maybe as big as the 5700, with slower clocks. Nothing close to 6x the performance or 4k120fps.
2: Agreed, but most games won't be able to do that, plain and simple - and by "most" I'm talking a good 70% if not more. Not on consoles, since there's no real pro gamer community or esports scene of any kind on consoles, and 80% of the console market is casual gaming. It's actually the CPU I'm excited about, since previous consoles had a decent graphics chip and an absolute pile of garbage as a CPU - hopefully Zen 2 will change that forever, so that devs can develop proper games without being bottlenecked by the consoles' ridiculous CPUs.
3: Well, you were talking about "1500-1800p" in one of your previous posts, so that's why I said that. We're not cinematographers, and I wasn't trying to correct you when I said "UHD to be more precise" - I just don't like the "4k" term. I've come to hate it in recent years, maybe because it has been used inappropriately for years now in all sorts of advertising.
 
Joined
Sep 7, 2017
Messages
3,244 (1.34/day)
System Name Grunt
Processor Ryzen 5800x
Motherboard Gigabyte x570 Gaming X
Cooling Noctua NH-U12A
Memory Corsair LPX 3600 4x8GB
Video Card(s) Gigabyte 6800 XT (reference)
Storage Samsung 980 Pro 2TB
Display(s) Samsung CFG70, Samsung NU8000 TV
Case Corsair C70
Power Supply Corsair HX750
Software Win 10 Pro
Not trying to convince you to upgrade ;) Heck, I'm very much a proponent of keeping hardware as long as one can, and not upgrading until it's actually necessary. I've stuck with my Fury X for quite a while now, and I'm still happy with it at 1440p, even if newer games tend to require noticeably lowered settings. I wouldn't think an RX 5700 makes much sense as an upgrade from a Vega 64 - I'd want at least a 50% performance increase to warrant that kind of expense. Besides that, isn't AC Odyssey notoriously crippling? I haven't played it, so I don't know, but I seem to remember reading that it's a hog. You're right that a "true 4k" card doesn't exist yet if that means >60fps in "all" games at Ultra settings - but this (all games at max settings at the highest available resolution) isn't something even ultra-high-end hardware has typically been capable of. We've gotten a bit spoiled with recent generations of hardware and how developers have gotten a lot better at adjusting quality settings to match available hardware. Remember that the original Crysis was normally played at resolutions way below 1080p even on the best GPUs of the time - and still chugged! :)

I hear you on all of that. Perhaps my expectations are a bit too high when I say there's no true 4K card. It's just that I feel this new gen is a better "I can mostly get by on 4K" iteration, just like the top end of last gen was, except with improvements. I suppose I'm waiting for 4K to be an easily expected feature, much like 1080p is now. That probably won't be until they all start targeting 8K. Then we repeat the madness all over again :p
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Guys, it's not like I ever said consoles use the 100% exact silicon used in PC hardware,
Sorry, but you did:
Navi 10 is a specific die, yes - that's what they'll use on consoles, just like they used Polaris 10 on the PS4 Pro and Vega 10 on the Xbox One X
"Specific die" means "100% exact [same] silicon".

You can argue that that wasn't what you meant, but that's what I've been trying to get you to get on board with this whole time: that a GPU architecture (such as Navi, Vega or Polaris) is something different than a specific rendition of/design based on said architecture (such as Navi 10, Vega 10 or Polaris 10, 20, etc.). You consistently mix the two up - whether intentionally or not - which makes understanding what you're trying to say very difficult. I really shouldn't need to be arguing for the value of precise wording here.

It's also worth pointing out that even "Navi" is a specific rendition of something more general - the RDNA architecture. Just like Vega, Polaris, Fiji, and all the rest were different renditions of various iterations of GCN. Which just serves to underscore how important it is to be specific with what you're saying. Navi is RDNA, but in a few years, RDNA will mean more than just Navi.

but the performance and the architecture are the same - they might change the configuration, they might build monolithic-die APUs, but the juice is essentially the same. Even if Microsoft pays AMD for a custom chip, they're going to use the same technology they have for PC, just adapted and customized - but again, it's the same stuff.
... which is exactly why differentiating between an architecture and its specific iterations is quite important. After all, both a Vega 64 dGPU and a Vega 3 iGPU are based off the same architecture, but are radically different products. So unless you're talking only about the architecture, being specific about what you're talking about becomes quite important to getting your point across.

1: The Xbox One X has a semi-custom GPU based on the Vega architecture, not Polaris.
2: It's a way of saying that they can't do much more than what they have already developed, which is already at its limit - and yes, we do know it, because it's been like that for years, and it's not going to change
3: Because they're different architectures. And optimizations still can't do miracles either.
4: Yes, they might have improved that - they might be approaching nvidia's level of optimization in that regard - but the fact stands: AMD themselves compared their 5700 XT to a 2070 and a 5700 to a 2060, and while they take the performance crown most of the time, they sometimes lose. Let's say they're probably going to battle the new "Super" cards nvidia is preparing, which sound like they're basically going to be a 2060 Ti and a 2070 Ti. Anyway, the 5700 XT and 5700 are roughly at that performance level, and it doesn't seem to me that they're capable of doing what Sony is claiming in any way. Nothing close to 6x performance - much closer to 2x. Whether they're factoring other stuff into that number I don't know, but it's not fair, because it's simply not true to state that - if you talk about performance, it's computing performance only, unless you want to kinda scam your customers, and it wouldn't be the first time...
1: The One X has been reported to be a hybrid between Vega and Polaris, though it's not entirely clear what that means (not surprising, given that there are never deep-level presentations or whitepapers published on console APUs). I'd assume it means it has Vega NCUs with some other components (such as the GDDR5 controller) being ported from Polaris. After all, there's no other rendition of Vega with GDDR5. Also, they've likely culled some of the compute-centric features of Vega to bring the die size down.
2: That argument entirely neglects the low-level changes between GCN and RDNA. We still don't know their real-world benefits, but all competent technical analysts seem to agree that the "IPC" or perf/clock and perf/TFLOP gains AMD are promoting are believable.
3: There's no real performance per CU per clock difference between Polaris and Vega, at least not for gaming. Gaming performance scaling between Polaris and Vega is very close to linear (or when looking at "performance per FLOP") when factoring in CU counts, clock speeds (and to some degree memory bandwidth). In other words, expecting a GDDR5-equipped, low-clocked, 40-CU Vega to perform close to a GDDR5-equipped, higher-clocked, 36-CU Polaris is entirely reasonable (rough numbers in the sketch below).
4: The 5700/XT is a ~250mm2 die. The RTX 2080 is a 545mm2 die, and the 2080 Ti is a 754mm2 die. Of course AMD aren't aiming for the high end with this - it's a clear (upper) midrange play. Remember, AMD is no stranger to large dice - Fiji was 596mm2. Vega 10 is 495mm2. If AMD were going for the high end, they'd add more CUs. Aiming for the mid-range with a smaller die makes sense on a new node with somewhat limited availability, but there's no doubt they have plans for a (likely much) larger RDNA-based die. This might be "Navi 2" or "Arcturus" or whatever, but nonetheless it's obviously coming at some point in the future. You're arguing as if Navi 10 is the biggest/highest performance configuration possible for Navi, which ... well, we don't know, but it's one hell of an assumption. What's more likely - that AMD spent 3-4 years developing an architecture that at best matches their already existing products at a smaller die size, but with no chance of scaling higher, or that they made an architecture that scales from low to high performance? My money is definitely on the latter.
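To put rough numbers on that scaling logic (point 3, plus the 87.5% math from earlier), here's a quick back-of-the-envelope sketch. The clock and perf/clock gains are the same illustrative figures from earlier in the thread, not leaked specs - only the One X's 40 CUs at ~1172 MHz are real numbers:

Code:
# Rough model: gaming performance ~ CUs x clock x perf-per-clock, assuming
# memory bandwidth and the front end keep the CUs fed. Illustrative numbers.

def rel_perf(cus, clock_ghz, perf_per_clock=1.0):
    return cus * clock_ghz * perf_per_clock

base = rel_perf(40, 1.172)                      # Xbox One X: 40 CUs @ ~1.172 GHz
same_cus = rel_perf(40, 1.172 * 1.20, 1.25)     # +20% clocks, +25% perf/clock
more_cus = rel_perf(50, 1.172 * 1.20, 1.25)     # same gains, but 50 CUs

print(f"+20% clocks, +25% perf/clock: +{same_cus / base - 1:.1%}")  # +50.0%
print(f"...and 50 CUs instead of 40:  +{more_cus / base - 1:.1%}")  # +87.5%

# Sanity check against the spec sheet - GCN peak FP32: CUs x 64 shaders x 2 ops x clock
print(f"One X peak compute: {40 * 64 * 2 * 1.172e9 / 1e12:.1f} TFLOPS")  # ~6.0

The point being that CU count, clocks and perf/clock multiply, which is why a modest CU bump moves the needle much more than its power cost suggests.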

As for the "6x performance" for the PS5, I never said that, but I tried giving you an explanation as to how they might arrive at those kinds of silly numbers - essentially by adding up multiple performance increases from different components (this is of course both misleading and rather dishonest, but that's how vague PR promises work, and we have to pick them apart ourselves). There's no way whatsoever they're claiming it to have 6x the GPU performance of the One X. That, as you say, is impossible - and if it did, they would definitely say so very specifically, as that'd be a huge selling point. But 2-3x GPU performance (not in FLOPS, but in actual gaming performance)? Doesn't sound too unlikely if they're willing to pay for a large enough die.
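And just to illustrate the mechanism of that kind of PR addition - these component figures are completely made up, and are not actual PS5 numbers:

Code:
# Hypothetical marketing math: fold separate per-component gains into one
# headline multiplier. None of these figures are real PS5 specs.
component_gains = {"GPU": 3.0, "CPU": 5.0, "storage I/O": 10.0}

headline = sum(component_gains.values()) / len(component_gains)
print(f"'Up to {headline:.0f}x the performance!'")  # 'Up to 6x the performance!'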

Again, they did: Polaris 10 includes all RX 4xx cards;
Correct.
Polaris 20 includes RX 570 and 580;
Correct.
Polaris 30 includes RX 590,
Correct.
and Vega 20 includes the Radeon VII
Correct.
Vega 10 includes Vega 3/6/8/10/11/20/48/56/64
Nope. Vega 10 (the die) is Vega 56 and 64. The issue here is that "Vega 10" is also the marketing name for the highest-specced mobile Vega APU. A marketing name and an internal die code name is not the same thing whatsoever, even if the name is identical. Vega APUs are based off an entirely different die design than the Vega 10 die - otherwise they'd also be 495mm2 GPU-only dice. The same goes for the Macbook Pro lineup's Vega 20 and Vega 16 chips, which are based off the Vega 12 die. This is of course confusing as all hell - after all, the names look the same, but the marketing naming scheme is based on the number of CUs enabled on the die, while the die code names are (seemingly) arbitrary - but that's how it is.
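Since this naming mess trips everyone up, here's the mapping as I understand it, sketched out from public info (not an official AMD table, and the custom console chips are deliberately left out since they're semi-custom designs):

Code:
# Internal die code names vs. the retail products built on them. Note how the
# marketing name "Vega 10" (a 10-CU APU iGPU) collides with the die name
# "Vega 10" (the 495mm2 dGPU die) - two different things with the same name.
die_to_products = {
    "Polaris 10": ["RX 470", "RX 480"],
    "Polaris 20": ["RX 570", "RX 580"],
    "Polaris 30": ["RX 590"],
    "Vega 10":    ["RX Vega 56", "RX Vega 64"],
    "Vega 12":    ["Radeon Pro Vega 16", "Radeon Pro Vega 20"],  # MacBook Pro
    "Vega 20":    ["Radeon VII"],
    "Navi 10":    ["RX 5700", "RX 5700 XT"],
}

# APU marketing names just count enabled CUs - they are NOT die names:
apu_igpu_names = ["Vega 3", "Vega 6", "Vega 8", "Vega 10", "Vega 11"]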

Polaris 10/20/30 are all Polaris; Vega 10 and 20 are both Vega.
Yes, that's how architectures work.
The Xbox One X GPU is built on the Vega 10 family,
Again: no. If there is such a thing as a "Vega 10 family", it includes only the cards listed on the page linked here. I'm really sounding like a broken record here, but "Vega" and "Vega 10" are not the same thing.

it's not like any of those mentioned before, but it's still Vega and very possibly part of the Vega 10 group. But even if it isn't - even if it's not Vega 10 but Vega "Xbox" - what does it matter? Same stuff, same performance per watt, same die limitations, as it's still part of the Vega family and they share everything. I agree it doesn't mean the same at 100%, but at 90% it can still be considered the same - not exact, but almost.
Again, what you're describing is, very roughly, the difference between an architecture and a specific rendition of that architecture, yet you refuse outright to acknowledge that these two concepts are different things.

What will it change technically? The layout inside the die? The configuration? Apart from that? Performance per watt is still the same, and won't go any higher. Not quite an accurate analogy - I'd say one is Half-Life 2, and the other is Half-Life 2 Episode 1 or 2. Basically Half-Life 2, apart from the story.
Again: If your analogy is that the game engine is the architecture and the story/specific use of the engine (the game) is the silicon die, then yes, absolutely. Both games based on an engine and dice based on an architecture share similar underpinnings (though often with minor variations for different reasons) while belonging to the same family. Which is why we have designations such as "architecture" and "die" - different levels of similar things. Though to be exact, the engine isn't Half-Life 2, it's Source, and HL2, HL2 EP1 and HL2 EP2 are all specific expressions of that engine - as are a bunch of other games.

Well, what are you arguing about then? We'll get whatever AMD is capable of doing with their chips, nothing more - whatever Sony or Microsoft can sell at a decent price without it consuming power like an oven.
I'm arguing against you bombastically claiming that consoles will have Navi 10 - a specific die which they with 100% certainty won't have - and can't possibly be larger than this (which they can, if console makers want to pay for it).

Alright, but a 33% reduction in clocks won't give you only a 3% performance loss, or even only 13% - it'll be something around 20-25%, which is still pretty significant. Again, no, they never did that, because if they're struggling to meet the power limit by reducing clocks, widening the die will only give back the power consumption they got rid of by lowering clocks - maybe not at the same rate, but very close to it - so it doesn't make sense. You're still comparing Polaris-based chips with the Vega-based chip of the Xbox One X. The Xbox One X die has 16 fewer CUs than the slower PC version of Vega, which is 56, with lower clocks of course. But if you want to keep comparing it to Polaris, have it your way.
I didn't say 33% reduction in clocks, I said power. The entire point was that - for example - with a 33% reduction in power, you wouldn't lose anywhere close to 33% of performance due to how voltage/clock scaling works. Similarly, a 33% reduction in clocks would likely lead to a much more than 33% reduction in power consumption. The exact specifics of this depend on both the architecture and the process node it's implemented on. Which is why widening the die is far "cheaper" in terms of power consumption than increasing clocks. Adding 25% more CUs will increase performance by close to 25% (given sufficient memory bandwidth and other required resources) while also increasing power by about 25%. Increasing clocks by 25% will give more or less the same performance (again, assuming the CUs are being fed sufficiently), but will increase power by far more than 25%. The downside with a wider die is that it's larger (duh), so it's more expensive to produce and will have lower production yields. PC GPUs and consoles make different calls on the balance between die size and speed, which was what I was trying to explain - which is why looking at PC clocks and die sizes isn't a very good predictor of future console GPU designs.
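If it helps, here's a crude sketch of that trade-off. The voltage/frequency curve is invented purely for illustration (real V/f curves are chip- and node-specific), but the shape of the argument - dynamic power scaling roughly as CUs x clock x voltage² - is the standard approximation:

Code:
# Crude dynamic power model: P ~ n_CU * f * V^2, where voltage has to rise as
# clocks climb. The V/f relation below is made up for illustration only.

def voltage(f_ghz):
    return 0.80 + 0.35 * (f_ghz - 1.0)   # invented, roughly linear V/f curve

def rel_power(n_cu, f_ghz):
    return n_cu * f_ghz * voltage(f_ghz) ** 2

base = rel_power(40, 1.2)
wide = rel_power(50, 1.2)   # +25% CUs, same clock  -> ~+25% performance
fast = rel_power(40, 1.5)   # +25% clock, same CUs  -> ~+25% performance

print(f"+25% CUs:   +{wide / base - 1:.0%} power")  # +25% power
print(f"+25% clock: +{fast / base - 1:.0%} power")  # ~+57% power with this curve

Same ~25% performance gain either way, but the wide-and-slow route costs a fraction of the power - which is exactly the call consoles tend to make.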

Vega, not Polaris.
Hybrid. A lot of Vega, but not as compute-centric.
I actually understood them to be different dies - and we're still not sure they're the same die, though (or did I miss something official from AMD?). Anyway, it makes perfect sense: it costs less to just use lower-binned dies for the slower chip. I guess we'll see a different die if they ever make a more powerful chip and name it RX 5800 or something.
They are indeed the same die, with the 5700 being a "harvested" part with 4 CUs disabled.

1: We don't, but we can assume it'll be something smaller than the 5700 XT, maybe as big as the 5700, with slower clocks. Nothing close to 6x the performance or 4k120fps.
2: Agreed, but most games won't be able to do that, plain and simple - and by "most" I'm talking a good 70% if not more. Not on consoles, since there's no real pro gamer community or esports scene of any kind on consoles, and 80% of the console market is casual gaming. It's actually the CPU I'm excited about, since previous consoles had a decent graphics chip and an absolute pile of garbage as a CPU - hopefully Zen 2 will change that forever, so that devs can develop proper games without being bottlenecked by the consoles' ridiculous CPUs.
3: Well, you were talking about "1500-1800p" in one of your previous posts, so that's why I said that. We're not cinematographers, and I wasn't trying to correct you when I said "UHD to be more precise" - I just don't like the "4k" term. I've come to hate it in recent years, maybe because it has been used inappropriately for years now in all sorts of advertising.
1: That's a rather bombastic assumption with no real basis. There's little precedent for huge console dice, true, but we still don't know which features they'll keep and cut from their custom design, which can potentially give significant size savings compared to PC GPUs. It's entirely possible that the upcoming consoles will have GPUs with more CUs than a 5700 XT. I'm not saying that they will, but it's entirely possible.
2: Which is irrelevant, as nobody has said that all games will run at those frame rates, just that they might happen in some games. The whole point here is trying to figure out what is the truth and what is not in an intentionally vague statement.
3: Seems like we agree here. The only reason I use "4k" is that typing 2160p is a hassle, and everyone uses 4k no matter if it's technically incorrect - people don't read that and expect DCI 4k. Anyhow, this wasn't the point of the part of my post you were responding to with that statement, but rather an explanation of the reasons why consoles can get more performance out of their GPU resources than similar PC hardware. Which they definitely can. No miracles, no, but undoubtedly more fps and resolution per amount of hardware resources. That's what a lightweight OS and a single development platform allowing for specific optimizations will do for you.
 
Joined
Feb 17, 2017
Messages
852 (0.33/day)
Location
Italy
Processor i7 2600K
Motherboard Asus P8Z68-V PRO/Gen 3
Cooling ZeroTherm FZ120
Memory G.Skill Ripjaws 4x4GB DDR3
Video Card(s) MSI GTX 1060 6G Gaming X
Storage Samsung 830 Pro 256GB + WD Caviar Blue 1TB
Display(s) Samsung PX2370 + Acer AL1717
Case Antec 1200 v1
Audio Device(s) aune x1s
Power Supply Enermax Modu87+ 800W
Mouse Logitech G403
Keyboard Qpad MK80
Sorry, but you did:

"Specific die" means "100% exact [same] silicon".

You can argue that that wasn't what you meant, but that's what I've been trying to get you to get on board with this whole time: that a GPU architecture (such as Navi, Vega or Polaris) is something different than a specific rendition of/design based on said architecture (such as Navi 10, Vega 10 or Polaris 10, 20, etc.). You consistently mix the two up - whether intentionally or not - which makes understanding what you're trying to say very difficult. I really shouldn't need to be arguing for the value of precise wording here.

It's also worth pointing out that even "Navi" is a specific rendition of something more general - the RDNA architecture. Just like Vega, Polaris, Fiji, and all the rest were different renditions of various iterations of GCN. Which just serves to underscore how important it is to be specific with what you're saying. Navi is RDNA, but in a few years, RDNA will mean more than just Navi.


... which is exactly why differentiating between an architecture and its specific iterations is quite important. After all, both a Vega 64 dGPU and a Vega 3 iGPU are based off the same architecture, but are radically different products. So unless you're talking only about the architecture, being specific about what you're talking about becomes quite important to getting your point across.

You are right - I partly mixed up what AMD does with what nvidia does, where one basically "shuts off" CUs to make a new chip while the other directly cuts smaller dies for different chips. Still, what I mean is that Navi is going into consoles, and performance will be on the level of the Navi 10 die, because that's the technology they currently have, and they can offer a level of performance in a specific range. That's what I was referring to when I said "they can't do miracles".

1: The One X has been reported to be a hybrid between Vega and Polaris, though it's not entirely clear what that means (not surprising, given that there are never deep-level presentations or whitepapers published on console APUs). I'd assume it means it has Vega NCUs with some other components (such as the GDDR5 controller) being ported from Polaris. After all, there's no other rendition of Vega with GDDR5. Also, they've likely culled some of the compute-centric features of Vega to bring the die size down.
2: That argument entirely neglects the low-level changes between GCN and RDNA. We still don't know their real-world benefits, but all competent technical analysts seem to agree that the "IPC" or perf/clock and perf/TFLOP gains AMD are promoting are believable.
3: There's no real performance per CU per clock difference between Polaris and Vega, at least not for gaming. Gaming performance scaling between Polaris and Vega is very close to linear (or when looking at "performance per FLOP") when factoring in CU counts, clock speeds (and to some degree memory bandwidth). In other words, expecting a GDDR5-equipped, low-clocked, 40-CU Vega to perform close to a GDDR5-equipped, higher-clocked, 36-CU Polaris is entirely reasonable (rough numbers in the sketch below).
4: The 5700/XT is a ~250mm2 die. The RTX 2080 is a 545mm2 die, and the 2080 Ti is a 754mm2 die. Of course AMD aren't aiming for the high end with this - it's a clear (upper) midrange play. Remember, AMD is no stranger to large dice - Fiji was 596mm2. Vega 10 is 495mm2. If AMD were going for the high end, they'd add more CUs. Aiming for the mid-range with a smaller die makes sense on a new node with somewhat limited availability, but there's no doubt they have plans for a (likely much) larger RDNA-based die. This might be "Navi 2" or "Arcturus" or whatever, but nonetheless it's obviously coming at some point in the future. You're arguing as if Navi 10 is the biggest/highest performance configuration possible for Navi, which ... well, we don't know, but it's one hell of an assumption. What's more likely - that AMD spent 3-4 years developing an architecture that at best matches their already existing products at a smaller die size, but with no chance of scaling higher, or that they made an architecture that scales from low to high performance? My money is definitely on the latter.

The Xbox One X has performance much closer to what Vega offers - if Polaris were capable of that performance, they would've made a card that could battle the 1070 and 1080 back then. Besides, when the Xbox One X came out, Vega was already out for PC, and they were still working on it at that time - just following logic here.

As for the "6x performance" for the PS5, I never said that, but I tried giving you an explanation as to how they might arrive at those kinds of silly numbers - essentially by adding up multiple performance increases from different components (this is of course both misleading and rather dishonest, but that's how vague PR promises work, and we have to pick them apart ourselves). There's no way whatsoever they're claiming it to have 6x the GPU performance of the One X. That, as you say, is impossible - and if it did, they would definitely say so very specifically, as that'd be a huge selling point. But 2-3x GPU performance (not in FLOPS, but in actual gaming performance)? Doesn't sound too unlikely if they're willing to pay for a large enough die.
As you said a few lines below, games on consoles are able to take advantage of the hardware a bit better than on PC, for obvious reasons, but how can they use better something they don't have? Also, what is GPU compute performance if not expressed in FLOPS? Come on, seriously - consoles have never lived up to expectations, and this time will be the same: performance will be around 2x, most likely less, and they'll cloak that again with upscaling and other trickery.
Nope. Vega 10 (the die) is Vega 56 and 64. The issue here is that "Vega 10" is also the marketing name for the highest-specced mobile Vega APU. A marketing name and an internal die code name is not the same thing whatsoever, even if the name is identical. Vega APUs are based off an entirely different die design than the Vega 10 die - otherwise they'd also be 495mm2 GPU-only dice. The same goes for the Macbook Pro lineup's Vega 20 and Vega 16 chips, which are based off the Vega 12 die. This is of course confusing as all hell - after all, the names look the same, but the marketing naming scheme is based on the number of CUs enabled on the die, while the die code names are (seemingly) arbitrary - but that's how it is.

Again: no. If there is such a thing as a "Vega 10 family", it includes only the cards listed on the page linked here. I'm really sounding like a broken record here, but "Vega" and "Vega 10" are not the same thing.


Again, what you're describing is, very roughly, the difference between an architecture and a specific rendition of that architecture, yet you refuse outright to acknowledge that these two concepts are different things.
Well, it's not really my fault if AMD uses a confusing naming scheme. Vega 10 includes the Vega 10 which is for mobile, yes; Vega APUs are based off Vega, but since AMD was working on Vega 10 (not the chip) at that time, performance was that of a Vega 10 with fewer CUs than a Vega 56. The rest I already explained above - I partly confused nvidia's method with AMD's, sorry for that.


Again: If your analogy is that the game engine is the architecture and the story/specific use of the engine (the game) is the silicon die, then yes, absolutely. Both games based on an engine and dice based on an architecture share similar underpinnings (though often with minor variations for different reasons) while belonging to the same family. Which is why we have designations such as "architecture" and "die" - different levels of similar things. Though to be exact, the engine isn't Half-Life 2, it's Source, and HL2, HL2 EP1 and HL2 EP2 are all specific expressions of that engine - as are a bunch of other games.
I wasn't talking about the engine - I was talking about just the game: how the game looks, feels and works, not about the engine directly. Which I know is Source, btw.
I'm arguing against you bombastically claiming that consoles will have Navi 10 - a specific die which they with 100% certainty won't have - and can't possibly be larger than this (which they can, if console makers want to pay for it).
Agreed, that's not what I meant - I already explained above why I made that mistake...
I didn't say 33% reduction in clocks, I said power. The entire point was that - for example - with a 33% reduction in power, you wouldn't lose anywhere close to 33% of performance due to how voltage/clock scaling works. Similarly, a 33% reduction in clocks would likely lead to a much more than 33% reduction in power consumption. The exact specifics of this depend on both the architecture and the process node it's implemented on. Which is why widening the die is far "cheaper" in terms of power consumption than increasing clocks. Adding 25% more CUs will increase performance by close to 25% (given sufficient memory bandwidth and other required resources) while also increasing power by about 25%. Increasing clocks by 25% will give more or less the same performance (again, assuming the CUs are being fed sufficiently), but will increase power by far more than 25%. The downside with a wider die is that it's larger (duh), so it's more expensive to produce and will have lower production yields. PC GPUs and consoles make different calls on the balance between die size and speed, which was what I was trying to explain - which is why looking at PC clocks and die sizes isn't a very good predictor of future console GPU designs.

Yeah, my bad here too - you were actually talking about power, but I just used the wrong word; I made the point anyway. Yes, adding more CUs is cheaper in terms of power consumption, but they'll probably also give less of a performance benefit than increasing clocks. Well, I would say that it's actually a good predictor of future console GPU performance, since they're based off the same architecture in the end...
Hybrid. A lot of Vega, but not as compute-centric.
Vega... just Vega. Probably not as compute-centric, and just not as powerful.
1: That's a rather bombastic assumption with no real basis. There's little precedent for huge console dice, true, but we still don't know which features they'll keep and cut from their custom design, which can potentially give significant size savings compared to PC GPUs. It's entirely possible that the upcoming consoles will have GPUs with more CUs than a 5700 XT. I'm not saying that they will, but it's entirely possible.
2: Which is irrelevant, as nobody has said that all games will run at those frame rates, just that they might happen in some games. The whole point here is trying to figure out what is the truth and what is not in an intentionally vague statement.
3: Seems like we agree here. The only reason I use "4k" is that typing 2160p is a hassle, and everyone uses 4k no matter if it's technically incorrect - people don't read that and expect DCI 4k. Anyhow, this wasn't the point of the part of my post you were responding to with that statement, but rather an explanation of the reasons why consoles can get more performance out of their GPU resources than similar PC hardware. Which they definitely can. No miracles, no, but undoubtedly more fps and resolution per amount of hardware resources. That's what a lightweight OS and a single development platform allowing for specific optimizations will do for you.
1: There's literally NO precedent for huge console dies in recent times. I don't think that's possible, but we'll see. It makes no sense from any point of view - they would cost too much, they've never done it before, and as power-cheap as it can be, it's not free, and they can't afford that.
2: Well, it is relevant - if you claim your console can do 4k120fps, people expect to see that in most games, not just a few. It's like saying "Hey, look, my 2600K and 1060 do 300fps in CSGO at FHD" and then I open up most of the games launched in the last 3 years and barely reach 60fps on just high settings.
3: Only in part, and that "phenomenon" is basically dead by now. I have yet to see a game running better on a console than on a PC with similar characteristics - at the very least they're on par. But what you say was kinda true some years back.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
So... Radeon Image Sharpening allows them to render at less than 4K and upscale to 4K while simultaneously sharpening, making 1440p->4K almost indistinguishable from native 4K.

Jump to 15:42:
It works better than DLSS at no performance cost. Navi exclusive at this point.
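Roughly the idea, as a toy sketch - this uses a plain fixed sharpening kernel, not AMD's actual contrast-adaptive sharpening (CAS) math, which adapts its strength per pixel:

Code:
# Toy "render low, upscale, sharpen" pipeline: bilinear 1440p -> 2160p upscale,
# then a simple 3x3 sharpening kernel. Illustrates the concept only.
import numpy as np
from scipy import ndimage

frame = np.random.rand(1440, 2560).astype(np.float32)  # stand-in for a rendered frame

upscaled = ndimage.zoom(frame, 2160 / 1440, order=1)   # bilinear, -> 2160x3840

kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)
sharpened = np.clip(ndimage.convolve(upscaled, kernel, mode="nearest"), 0.0, 1.0)

print(upscaled.shape, sharpened.shape)                 # (2160, 3840) both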
 