
Fudzilla: AMD Navi is no high end GPU

Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
https://www.fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card

We have been sitting on this piece of information for a while, but maybe it's the right time to share it with you. Navi, the 7nm chip for 2019, will not be a high-end GPU; it will be a quite powerful performance/mainstream chip.

Think of it as the Radeon RX 580 / 480 replacement. It will be small, and is likely to perform as well as the 14nm Vega that shipped last year. In Nvidia performance terms, Navi should perform close to the GeForce GTX 1080, which is quite good for a mainstream part, though probably only on par with the mainstream part planned to follow the high-end part.

...

So, long story short, AMD won't have anything in the high-end space faster than Vega between now and the end of 2019. In the GPU world, this is an eternity.

...

The earliest we would expect a Navi successor, a real high end chip, would be at some point in 2020.

If true (and this is Fudzilla, so take it with an entire salt shaker's worth of salt), this is not good news for AMD or for competition in the GPU space.
 
Joined
Aug 6, 2017
Messages
7,412 (3.03/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
This is probably true. Their top-tier Vega launched not so long ago, and is gonna get a 7nm shrink along with 4-stack HBM2 and maybe some more of its features enabled. It might give the 1080 Ti/"1180" a run for its money. However, AMD must do something in the low-end/mid-range segment, because Nvidia is probably gonna launch an "1160" later this year.
1080 performance for $300 would be nice. Unfortunately, that would require something like a +60% leap in performance. I'm expecting something between 580 and 1070 performance for $300; something that would not undercut Vega 56.
 
Joined
Feb 14, 2012
Messages
1,743 (0.39/day)
Location
Romania
Since most people are on 1080p or lower, this is not a bad thing. Coupled with a FreeSync monitor and a cheap Ryzen 1600 (or one of the newer ones), AMD might want to enter that GTX 1050 Ti/1060 territory Nvidia has a monopoly on ATM.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.63/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
GTX 1080 performance requires 4096 NCU shaders at ~1.6 GHz. Navi is more likely something closer to 3072 NCU shaders at around 1.8 GHz with GDDR6. I expect something like GTX 1070 performance for <$300 MSRP.

That said, Navi was originally billed as a multi-GPU-friendly architecture, so we could theoretically see a high-performance version of it: two Navi cores for a total of 6144 NCU shaders, likely attached to four stacks of HBM2, for $600+. I expect Titan Xp/GTX 1080 Ti-like performance but at significantly higher power draw (250-350 W).
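For what it's worth, those configurations can be compared with the standard peak-FP32 formula (2 FLOPs per shader per clock, counting FMA as two); the shader counts and clocks below are the speculative figures from this post, not confirmed specs, and raw TFLOPS say nothing about per-game efficiency:

```python
# Back-of-envelope peak FP32 throughput: shaders x 2 FLOPs (FMA) x clock.

def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return shaders * 2 * clock_ghz / 1000.0

configs = {
    "4096 NCU @ 1.6 GHz (GTX 1080-class estimate)": tflops(4096, 1.6),
    "3072 NCU @ 1.8 GHz (speculated single Navi) ": tflops(3072, 1.8),
    "6144 NCU @ 1.8 GHz (speculated dual-die)    ": tflops(6144, 1.8),
}

for name, tf in configs.items():
    print(f"{name}: {tf:.1f} TFLOPS")
```

The dual-die figure is simply double the single die; whether that raw throughput would translate to real performance is exactly what the rest of the thread argues about.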
 
Joined
Aug 13, 2010
Messages
5,384 (1.08/day)
AMD needs a high-performing chip to compete against Nvidia.
This is not where most of the money is, but it is crucial to success.

Get this architecture to scale to 250 W if that's what it takes to gain +30-40% performance.

The post-Volta architecture is not gonna be forgiving towards AMD, as Nvidia pours billions into this stuff.
 
Joined
Jan 8, 2017
Messages
8,929 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Navi is more likely something closer to 3072 NCU shaders at around 1.8 GHz with GDDR6.

Who is to say that GCN hasn't been revamped? I highly doubt all AMD is going to do is reap the advantages of a new node, namely just clock speed.

I expect Titan Xp/GTX 1080 Ti-like performance but at significantly higher power draw (250-350 W).

Or maybe not; this will give them the chance to clock these parts significantly lower and profit from the increased transistor density and multiple dies instead. Unlike typical CPU designs, with GPUs higher core counts and lower clock speeds are more desirable, as they also result in lower power consumption. Power consumption scales pretty much linearly with die space, as opposed to clock speed. If anything, with Navi, AMD will finally have the chance to make faster cards with lower power consumption and not be at the mercy of poor nodes that don't perform well at high clocks. And with the first wave of 7nm, this is pretty much guaranteed to be the case.
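The wide-and-slow argument can be made concrete with a toy dynamic-power model (P ≈ C·V²·f, where capacitance C scales with lane count and voltage V must rise roughly with clock). The constants below are illustrative assumptions, not measured silicon data:

```python
# Toy model: two designs with equal throughput (lanes x clock),
# one narrow-and-fast, one wide-and-slow.

def dyn_power(lanes, clock_ghz, v_at_1ghz=0.9, v_per_ghz=0.25):
    """Dynamic power ~ C * V^2 * f, with voltage rising with clock (assumed linear)."""
    v = v_at_1ghz + v_per_ghz * (clock_ghz - 1.0)
    return lanes * v**2 * clock_ghz  # capacitance proportional to lane count

def throughput(lanes, clock_ghz):
    return lanes * clock_ghz  # work per unit time, arbitrary units

for name, lanes, f in [("narrow/fast", 2048, 1.8), ("wide/slow", 3072, 1.2)]:
    p, t = dyn_power(lanes, f), throughput(lanes, f)
    print(f"{name}: throughput={t:.0f}, power={p:.0f}, perf/W={t/p:.3f}")
```

Both designs deliver the same nominal throughput, but the wide/slow one draws noticeably less power in this model because voltage enters squared: that's the linear-with-area versus superlinear-with-clock scaling described above.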
 

FordGT90Concept

"I go fast!1!11!1!"
Who is to say that GCN hasn't been revamped? I highly doubt all AMD is going to do is reap the advantages of a new node, namely just clock speed.
It's GCN6 and, like I said, the focus is the ability to scale the architecture to meet custom chip order demands. HBCC was the feature of Vega that was the largest step in the direction of Navi.

If anything , with Navi, AMD will finally have the chance to make faster cards with lower power consumption and not be at the mercy of poor nodes that don't perform well at high clocks.
Architecture determines clock speed more than node does. Vega has longer pipelines compared to GCN4 and earlier, which is why it clocks higher. Navi is likely to improve upon that.
 
Who is to say that GCN hasn't been revamped?
They've preferred incremental changes to their GCN architecture rather than any major overhaul. Who is to say it is revamped? We shouldn't expect them to do more than the small, incremental changes we've seen for years.
 
Joined
Sep 17, 2014
Messages
20,906 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
How much more do people need before the reality sinks in that AMD is no longer targeting the PC gamer with its GPUs?

They can make do with their custom chips in the consoles for 'gaming', and for PC they want maximum margins from the (semi-)pro space. Vega is a clear sign of that, and a Vega with 32GB of HBM is decidedly not aimed at gamers. GCN has a fundamental problem it cannot shed against Nvidia's efficiency-oriented, modular arch. Now they are jumping from one new technology to the next in a desperate search to breathe more life into this dead-end arch.

It's a losing battle, if it wasn't already lost with Fury. There is no high-end GPU because they can't make one with a reasonable TDP. Navi is the only out they have: fusing multiple mid-tier GPUs together and expanding heavily on die space to at least be able to cool it properly. The whole idea is a testament to lacking progress.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
15,911 (4.58/day)
Location
Kepler-186f
The whole idea is a testament to lacking progress.

I am hoping Vega 2 and Ryzen 2 in 2019 can match a 2080 Ti system in games across the board, but it must also match it in minimum frame rates. If that day comes I may switch to red team; as it stands, I'll most likely sell my 1080 Ti on the cheap when the 2080 Ti comes out, but hopefully they can prove us all wrong. lol
 
The whole idea is a testament to lacking progress.

It's the only way towards progress. I already mentioned it above: multi-die GPUs are pretty much the only way AMD and Nvidia could extract a considerable amount of performance without breaking TDP records in the future.

High TDPs are the result of high clocks and architectures designed to support such frequencies, but that's not the ideal design goal for a GPU. Ideally all you need to do is stuff in as many wide SIMD lanes and hardware threads as possible and make sure you have the cache and memory bandwidth to feed them. AMD already has the latter assured with HBM, and the solution to the first problem right now is multiple dies.

Nvidia focused on increasing clocks and mitigating their effects in the last few years because that's all they could do with the increasing difficulty of procuring smaller nodes. They have been fortunate enough to have TSMC on their side, which helped them a lot, but sooner or later they'll need to resort to the same tactic.
 
Let's ignore memory compression and caching algorithms that increase the graphics card's efficiency and give all the credit to the factory that makes the dies. Might as well throw all other design aspects out of the window :)
 
Let's ignore memory compression and caching algorithms that increase the graphics card's efficiency

AMD, Nvidia, ARM, and pretty much every other major GPU manufacturer employ these techniques in their GPUs. They are not unique to Nvidia, if that's what you wanted to insinuate.
 
Joined
Aug 13, 2010
Messages
5,384 (1.08/day)
Everybody also has a graphics driver that lets their GPU accelerate things. Some do it better than others.
If Polaris were made by a secret Oompa Loompa 3nm factory in a parallel realm, it still wouldn't have the performance-per-watt numbers of 2014's 28nm Maxwell.
 
If Polaris were made by a secret Oompa Loompa 3nm factory in a parallel realm, it wouldn't have 2014's Maxwell performance-per-watt numbers.

There is no need for that; it already has similar performance per watt to Maxwell. The fact that this no longer holds once you overclock it is partially because AMD is using an inferior node and partially because Nvidia uses a longer pipeline, which generally copes better with power consumption when you increase the clocks.
 
To me it's more about striking the right balance and timing innovation right. Even today there is no actual need for HBM in the gaming segment, and the first 'problem' is only a problem once your single-die solution is no longer enough. Multiple dies introduce a whole host of issues and complexity in drivers and on the board, and more importantly a degree of inefficiency, as everything on the board now needs to be either doubled or shared.

The problem that has plagued AMD ever since Kepler is the performance they get out of a single shader. Their balance is off and HBM doesn't change that; it only serves to plug the holes, because a GDDR5 interface would send the whole thing spiraling out of control in terms of TDP.

The die size problem is not new:
HD 7970 GHz: 352 mm²
GTX 680 / 770: 294 mm²

The TDP problems are not new:
HD 7970 GHz: 300 W
GTX 680 / 770: 230 W

The crappy, low-speed GDDR5 coupled with a super-wide bus is not new:
HD 7970 GHz: 384-bit @ 1.5 GHz
GTX 770: 256-bit @ 1.75 GHz
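For reference, those bus specs map to peak bandwidth as bus width × effective data rate, and GDDR5 moves data at 4× the command clock quoted above (so 1.5 GHz command clock ≈ 6 Gbps per pin). A quick check using the standard formula:

```python
def gddr5_bandwidth_gbs(bus_bits: int, cmd_clock_ghz: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes x effective rate.
    GDDR5's effective data rate is 4x its command clock."""
    return (bus_bits / 8) * (cmd_clock_ghz * 4)

print(gddr5_bandwidth_gbs(384, 1.5))   # HD 7970 GHz -> 288.0 GB/s
print(gddr5_bandwidth_gbs(256, 1.75))  # GTX 770     -> 224.0 GB/s
```

So the AMD card needed a 50% wider bus to come out ahead on bandwidth, which is the "super wide bus" cost being described.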

And then came Pascal...

Just take a minute to let the comparison below sink in; take any of the specs and compare them like for like. Both are 150 W GPUs on GDDR5. And then tell me again that multi-GPU is 'progress'... do you really want multiple Polaris chips on a board after seeing this?

[Attached images: specification screenshots of the two 150 W GDDR5 cards being compared.]
 
Their balance is off and HBM doesn't change that, it only serves to plug the holes because a GDDR5 interface would send the whole thing spiraling out of control in terms of TDP.

AMD didn't spend millions on developing HBM just so they could shave off a few watts, or in hopes that it would benefit gaming performance in some way. The power consumption difference between GDDR5 and HBM per board is much, much smaller than you think it is; it's in the region of single-digit watt figures.

They need HBM because GCN simply needs high memory bandwidth. This has been the case ever since they moved away from TeraScale and VLIW and onto ISAs that focus on exploiting thread-level parallelism, adding scalar instructions and such. Nvidia has since done the exact opposite: their architecture is still reliant on ILP and heavy pipelining, moving the scheduling from hardware to software (though now they are trying to bring it back). That's how Nvidia got around the need to use HBM and also avoided high power consumption on their cards; it's not just because of the almighty color compression, as was suggested above. That is, if we are talking about architectural choices only.

So in conclusion, yes, HBM was a need for AMD, but it has much deeper implications than just lowering the TDP by a few percent.

The problem that plagues AMD ever since Kepler is the performance they get out of a single shader.

Do note how neither AMD nor Nvidia refers to their GPUs as having "shaders" anymore; instead they use the terms "CUDA cores" or "stream processors". That's a subtlety that bears a lot of meaning. There isn't a single GPU manufacturer out there that designs GPUs for the sole purpose of pushing pixels anymore; in actual fact, no one is targeting PC gaming.

AMD can extract a lot of performance out of their cores just fine; these things are designed by very smart individuals, not idiots. The only "problem" is that, for a lot of people, said performance doesn't translate 100% into their favorite color being on top of the charts for every single game out there. I, for one, don't care about that as much, and I can get past this superficial way of measuring progress for what are otherwise vastly more complicated matters that can't always be reduced to "X card has better performance per watt and therefore everything else is inferior".
 
It's the only way towards progress. I already mentioned it above: multi-die GPUs are pretty much the only way AMD and Nvidia could extract a considerable amount of performance without breaking TDP records in the future.

I disagree. MCM GPUs have been touted as the holy grail for years, yet there still aren't any in production. Why is that?

Probably because GPUs are already massively parallel, and adding more "cores" would give no benefit; in fact it would actually decrease performance due to having to synchronise data between the cores.

That's why GPUs keep getting bigger and bigger and adding more SMXs/CUs: because that is the simplest and most effective way to get more parallelism and hence more performance.
 
Probably because GPUs are already massively parallel and adding more "cores" would give no benefit, in fact would actually decrease performance

That's why GPUs keep getting bigger and bigger and adding more SMXs/CUs, because that is the simplest and most effective way to get more parallelism and hence more performance.

You are contradicting yourself; I don't know how you managed that in just two sentences, but please make up your mind.

Probably because GPUs are already massively parallel and adding more "cores" would give no benefit, in fact would actually decrease performance due to having to synchronise data between the cores.

I have no idea where you get this stuff from, but it's undoubtedly wrong.

There is hardly any synchronization that needs to be done across a GPU; that's the core idea behind their design, which relies heavily on data-level parallelism. GPUs are made up of multiple SIMD lanes, and their effectiveness relies on not having to do any synchronization at all, because synchronization would leave threads within a core sitting idle. This is why they are called "streaming processors".
The only synchronization available in current architectures is simple barriers that can be placed on all threads within a CU/SM; that's all the functionality AMD and Nvidia provide. The hardware literally doesn't support anything else, so you could have a million dies, but as long as a core's threads aren't split across more than one die, there is absolutely no need to be concerned about synchronizing data.

Because of this, MCM GPUs are far easier to implement than people think. All a GPU needs to do is get through all the "work" as fast as possible; the time it takes for a single task to complete is irrelevant, and so (for the most part) is the problem of synchronizing said tasks. This simplifies these potential designs greatly: "all" you need to ensure is the bandwidth required to feed the multiple GPUs, low enough latency, and smart scheduling, which can be done in either hardware or software.
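The dispatch model described here — independent work groups, no cross-die synchronization, just a strided split and a trivial merge — can be sketched conceptually in Python, with threads standing in for dies (an illustration of the idea, not actual driver or hardware logic):

```python
from concurrent.futures import ThreadPoolExecutor

def run_workgroup(item):
    """Stand-in for a GPU work group: fully independent, no cross-group sync."""
    return item * item

def dispatch(work, num_dies=2):
    """Split the work queue across dies; each die processes its share independently."""
    shards = [work[i::num_dies] for i in range(num_dies)]  # strided split
    with ThreadPoolExecutor(max_workers=num_dies) as pool:
        results = pool.map(lambda shard: [run_workgroup(w) for w in shard], shards)
    # Reassemble in original order by undoing the strided split; no result
    # ever depended on another, so merging is pure bookkeeping.
    merged = [None] * len(work)
    for die, shard_result in enumerate(results):
        for j, r in enumerate(shard_result):
            merged[die + j * num_dies] = r
    return merged

print(dispatch(list(range(8))))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The output is identical for any `num_dies`, which is the point: when work groups share nothing, the number of dies is invisible to the result, and only bandwidth, latency, and scheduling remain as problems.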
 
superficial way of measuring progress for what are otherwise vastly more complicated matters that can't always be reduced to "X card has better performance per watt and therefore everything else is inferior".

See, that's where I disagree. Perf/watt is not 'superficial'; it's how you carve out a future for an architecture. AMD proved that by releasing Ryzen and instantly getting back to Intel's level of perf/watt, and even beyond it in specific workloads. Tell the whole ARM business perf/watt isn't relevant, I dare you :) Perf/watt allows a company to make competitive products because it is directly tied to core counts and clocks, and to the performance you can extract out of a mm² of silicon within a certain TDP budget, which also determines the ways you can actually USE the product. Higher perf/watt enables more performance in a smaller package, and even a smaller device, which is the one big growth market any computing device will inevitably fall into.
 
See, that's where I disagree. Perf/watt is not 'superficial'; it's how you carve out a future for an architecture. AMD proved that by releasing Ryzen and instantly getting back to Intel's level of perf/watt, and even beyond it in specific workloads. Tell the whole ARM business perf/watt isn't relevant, I dare you :) Perf/watt allows a company to make competitive products because it is directly tied to core counts and clocks, and to the performance you can extract out of a mm² of silicon within a certain TDP budget, which also determines the ways you can actually USE the product. Higher perf/watt enables more performance in a smaller package, and even a smaller device, which is the one big growth market any computing device will inevitably fall into.

I am not necessarily saying it's superficial. It's important, sure, but it's something that gets thrown around way too often when people talk about architectures and the future of something. Because I have a fair bit more in-depth knowledge about these things than the average person, it just kind of annoys me, that's all.

The real champions of performance per watt aren't even Nvidia in the grand scheme of things. ARM, Qualcomm, and Apple have achieved figures far more impressive than anyone else's.
 
I am not necessarily saying it's superficial. It's important, sure, but it's something that gets thrown around way too often when people talk about architectures and the future of something. Because I have a fair bit more in-depth knowledge about these things than the average person, it just kind of annoys me, that's all.

The real champions of performance per watt aren't even Nvidia in the grand scheme of things. ARM, Qualcomm, and Apple have achieved figures far more impressive than anyone else's.

Absolutely, but ARM doesn't run Crysis :toast:
 
Joined
Nov 13, 2007
Messages
10,232 (1.70/day)
Location
Austin Texas
Processor 13700KF Undervolted @ 5.6/ 5.5, 4.8Ghz Ring 200W PL1
Motherboard MSI 690-I PRO
Cooling Thermalright Peerless Assassin 120 w/ Arctic P12 Fans
Memory 48 GB DDR5 7600 MHZ CL36
Video Card(s) RTX 4090 FE
Storage 2x 2TB WDC SN850, 1TB Samsung 960 prr
Display(s) Alienware 32" 4k 240hz OLED
Case SLIGER S620
Audio Device(s) Yes
Power Supply Corsair SF750
Mouse Xlite V2
Keyboard RoyalAxe
Software Windows 11
Benchmark Scores They're pretty good, nothing crazy.
They probably need to make it more efficient before they start gluing dies together?

Not great news for competition.
 
Joined
Jul 19, 2006
Messages
43,587 (6.72/day)
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS TUF x670e
Cooling EK AIO 360. Phantek T30 fans.
Memory 32GB G.Skill 6000Mhz
Video Card(s) Asus RTX 4090
Storage WD m.2
Display(s) LG C2 Evo OLED 42"
Case Lian Li PC 011 Dynamic Evo
Audio Device(s) Topping E70 DAC, SMSL SP200 Headphone Amp.
Power Supply FSP Hydro Ti PRO 1000W
Mouse Razer Basilisk V3 Pro
Keyboard Tester84
Software Windows 11
I won't even bother reading speculative news; it's rarely correct. One thing I know for sure is that living in a time of GPU purgatory absolutely sucks.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
I won't even bother reading speculative news; it's rarely correct. One thing I know for sure is that living in a time of GPU purgatory absolutely sucks.

I can buy as many cards as I want. Just got to shop in bulk out of China :roll:
 