
Editorial x86 Lacks Innovation, Arm is Catching up. Enough to Replace the Giant?

bug

Joined
May 22, 2015
Messages
13,287 (4.04/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Since I am unfettered by firsthand knowledge, I think I know an easy shortcut.
A simple instruction set will put more pressure on the caches: it takes more cycles and more instructions (i-cache) to do the same amount of work, and that extra work brings its own overhead on the data side (d-cache).
However, its instruction flow is uniform: there are no divergent flow rates, so control management is simpler.
I'd guess it comes down to how much overhead the wasted cycles of a simple instruction set add versus a complex one, weighed against how much transistor budget (and therefore power budget) is saved by simplifying the instruction flow.
I'm not up to date with what ARM does these days, but one of RISC's advantages was that executing any instruction within the same timeframe/cycle count dramatically simplifies scheduling. By contrast, ever since Intel moved to decoding into micro-ops (Pentium Pro, iirc), they have essentially had a sizeable chunk of silicon breaking complex instructions down into simple ones, emulating what RISC does.
Like I said, I don't know whether one will prevail over the other. Or whether a hybrid design will trump both.
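The decode step described above can be modeled in a loose sketch; everything here (function name, micro-op tuples) is hypothetical illustration, not Intel's actual micro-op encoding:

```python
# Hypothetical model: a CISC-style "add register, [memory]" instruction
# decomposed into RISC-like micro-ops, the way an x86 front end
# conceptually splits complex instructions before scheduling.
def decode_add_mem(dst_reg, mem_addr):
    # One complex instruction becomes three simple micro-ops.
    return [
        ("load", "tmp", mem_addr),   # tmp <- [mem_addr]
        ("add", dst_reg, "tmp"),     # dst <- dst + tmp
        ("store_flags",),            # update condition flags
    ]

uops = decode_add_mem("eax", 0x1000)
assert [op[0] for op in uops] == ["load", "add", "store_flags"]
```

Once decoded, the back end only ever schedules these simple operations, which is the sense in which a modern x86 "emulates what RISC does" internally.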
 

ARF

Joined
Jan 28, 2020
Messages
4,057 (2.57/day)
Location
Ex-usa
The more energy-efficient architecture, which is clearly ARM, has to prevail.
 
Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
The more energy-efficient architecture, which is clearly ARM, has to prevail.
But there is always the compiler involved as mediator.
Finishing a load in fewer cycles (X) is precisely what x86 code is compiled toward (race to sleep); trading that for finishing the same load in more cycles (Y > X) is quite a proposition to pass up.
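The "race to sleep" trade-off lends itself to back-of-the-envelope arithmetic; all the numbers below are made up for illustration:

```python
# Hypothetical energy comparison: sprint-and-idle vs. slow-and-steady.
fast_power_w, fast_time_s = 10.0, 1.0   # x86-style: high power, few cycles
slow_power_w, slow_time_s = 4.0, 3.0    # lower power, more cycles
idle_power_w = 0.5
window_s = 3.0  # time until the next task arrives

# The fast chip finishes early and idles for the rest of the window.
energy_fast = fast_power_w * fast_time_s + idle_power_w * (window_s - fast_time_s)
energy_slow = slow_power_w * slow_time_s

assert energy_fast == 11.0  # normalized joules
assert energy_slow == 12.0
assert energy_fast < energy_slow  # racing to sleep wins here
```

Flip the idle power or the window length and the slow chip can win instead, which is why neither approach dominates universally.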
 

ARF

But there is always the compiler involved as mediator.
Finishing a load in fewer cycles (X) is precisely what x86 code is compiled toward (race to sleep); trading that for finishing the same load in more cycles (Y > X) is quite a proposition to pass up.


x86 can't even sleep normally. Have you seen how many monitoring apps, for example, wake up the "idle" cores just to get a reading of their current clocks? CPU-Z does it; it's ridiculous.

As to my "theories", they are not exactly theories, because it has been shown multiple times that a 3-watt Snapdragon is 3-5 TIMES faster than an Atom at equal wattage.
 
Joined
Jun 3, 2010
Messages
2,540 (0.50/day)
x86 can't even sleep normally. Have you seen how many monitoring apps, for example, wake up the "idle" cores just to get a reading of their current clocks? CPU-Z does it; it's ridiculous.

As to my "theories", they are not exactly theories, because it has been shown multiple times that a 3-watt Snapdragon is 3-5 TIMES faster than an Atom at equal wattage.
Except, Atom's FPU operates at a 2:1 rate (half speed; sorry if I mismatched that in writing). It doesn't even match its architected speed quotient.
As for Snapdragons, Qualcomm, on the other hand, is doing heavy customisation.
 
Joined
Feb 3, 2017
Messages
3,502 (1.31/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
x86 can't even sleep normally. Have you seen how many monitoring apps, for example, wake up the "idle" cores just to get a reading of their current clocks? CPU-Z does it; it's ridiculous.
What does this have to do with x86?
 

bug

The more energy-efficient architecture, which is clearly ARM, has to prevail.
If ARM were so clearly more energy efficient, all servers would be running ARM today.
While not necessarily an inherent limitation, ARM's designs so far do not scale up as well as we need. Why? Energy efficiency, that's why.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.92/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I'd guess it comes down to how much overhead the wasted cycles of a simple instruction set add versus a complex one, weighed against how much transistor budget (and therefore power budget) is saved by simplifying the instruction flow.
I would assume that engineers are smart enough to build extensions when there is a need for them, not because they're bored and feel like it. The point of having these more complicated operations is to save clock cycles. I don't know if the cache thing is really all that true, because at the end of the day you're still fiddling around with the same data unless you use something like vector extensions for doing a bunch of floating-point operations. The point of vectorization is that you can do everything (the same kind of operation every time) in a couple of cycles in parallel instead of, say, 20 cycles serially. This works when the data doesn't have any interdependencies with the other results calculated at the same time.
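The vectorization idea can be shown with a toy model in pure Python (the `width=4` chunking stands in for a 4-wide SIMD register; all names and numbers here are hypothetical):

```python
def saxpy_serial(a, x, y):
    # One multiply-add per loop iteration: one "scalar instruction" per element.
    out = []
    for i in range(len(x)):
        out.append(a * x[i] + y[i])
    return out

def saxpy_vectorized(a, x, y, width=4):
    # Process `width` elements per "instruction", the way an AVX register
    # holds several doubles; each chunk models one wide multiply-add.
    out = []
    for i in range(0, len(x), width):
        out.extend(a * xi + yi for xi, yi in zip(x[i:i + width], y[i:i + width]))
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
assert saxpy_serial(2.0, x, y) == saxpy_vectorized(2.0, x, y)
```

Each chunk works because no element depends on another element's result, which is exactly the "no interdependencies" condition above.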

...but now here is the kicker:
weighed against how much transistor budget (and therefore power budget) is saved by simplifying the instruction flow.
The thing is that a lot of these "faster" extensions likely use more power, but probably not in proportion to the throughput they provide. So think of it this way: if AVX2 can speed up a workload by 100%, but power only increases by 50% while the AVX circuitry is active, then that's still a win.
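Working that example through numerically: doubling the speed at 1.5x the power cuts energy per task by a quarter:

```python
# Normalized baseline: 1 unit of time at 1 unit of power.
base_time, base_power = 1.0, 1.0
avx_time = base_time / 2.0       # 100% speedup
avx_power = base_power * 1.5     # 50% more power while AVX is active

energy_base = base_time * base_power   # 1.00 energy units per task
energy_avx = avx_time * avx_power      # 0.75 energy units per task

assert energy_avx == 0.75
assert energy_avx < energy_base  # faster AND less total energy: a win
```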
 
Joined
Jul 7, 2019
Messages
141 (0.08/day)
I simply don't get all these junk discussions. But here it is in one line.

x86 is a standard set; ARM is always custom BS. As a person who relies on a PC for most of my work and uses a smartphone for mobile computing needs (Android specifically, because the filesystem is accessible to the user, making it a perfect on-the-go computer vs. iOS's trash locked-down ecosystem), I simply do not see the market for ARM. The software ecosystem is just like Apple's: it will always be custom.

Qualcomm had Centriq CPUs, state-of-the-art ARM server chips heralded by Cloudflare and later abandoned entirely. The team that designed them was the top cream behind the SD820, which was a pure custom ARM core vs. the regular Cortex-based designs like the SD835/845/855/865, much like Samsung's M cores in Exynos (which sucked anyway) and Apple's cores. And that company, which prides itself on its tons of patents, *abandoned* the ARM server race. Why? There's simply no ROI in it. That corporation is heavily focused on R&D, unlike Broadcom, which came to gulp Qualcomm down, with Apple as the primary beneficiary because of the patents; it's all over at EETimes regarding the Apple vs. Qualcomm history, where Apple is shameless as expected.

The Annapurna-based ARM Graviton 2 processors are made by Amazon directly, and Amazon does not sell them at all; it's just an initiative to make more profit vs. purchasing from Intel and AMD. It's fully custom, I repeat. Same with Marvell as well. This ARM BS should end right now. The Win32 ecosystem we all enjoy is built upon x86, and both are perfect in terms of legacy compatibility, making for a more powerful OS and HW solution. Name one platform, in both SW and HW, that rivals that? Android is closest, but with Google copying Apple's BS everywhere in marketing, in HW ("Made for Google", a trash clone of the MFi licensing program), and in SW (filesystem lockdown from Android 10 and up with the new nerf called Scoped Storage, copied gestures), it's going to be a bust and a locked-down system.

This ARM glorification began when AnandTech started pushing those SPEC BS numbers, when in real-world application performance they are beaten even by the fastest phones like the OnePlus. And guess what? Those all-powerful, uber-Omega-class BS A-series processors do not even have emulation on their locked-down App Store, and the majority use them for garbage social media. Finally, the scaling: ARM simply cannot scale like x86 cores, and businesses have been trying hard for a decade to dethrone x86. I'm glad it's failing; I hope it fails over and over and stops creeping into the PC area.
 

bug

I simply don't get all these junk discussions. But here it is in one line.
In one line: ARM is as standard as x86. It sees more custom implementations, but there's a well-defined instruction set, just like for x86/x86_64.
 

ARF

The results show Graviton 2 is pretty much on par with high-end x86 performance while being clocked significantly lower, using about half the power and a third of the silicon area of EPYC 7742. That's impressive for a cost-optimized cloud product - it doesn't need to win every single benchmark to be successful.

100+ Benchmarks Of Amazon's Graviton2 64-Core CPU Against AMD's EPYC 7742
 

Aquinus

100+ Benchmarks Of Amazon's Graviton2 64-Core CPU Against AMD's EPYC 7742
It really depends on what you're doing, and when the geometric mean is this far off, you really have to wonder how different some of the actual benchmarks are.
[attachment: benchmark geometric mean chart]


This doesn't sound too bad, until someone like me, who might want to run a PostgreSQL server on it, sees those results:
[attachments: PostgreSQL benchmark charts]
 

bug

It really depends on what you're doing, and when the geometric mean is this far off, you really have to wonder how different some of the actual benchmarks are.
View attachment 156442

This doesn't sound too bad, until someone like me, who might want to run a PostgreSQL server on it, sees those results:
View attachment 156443
View attachment 156444
The geometric mean is just an aggregate score; of course it's made up of better and worse numbers. You found a case where Graviton2 loses badly, but there are tests where it bests Epyc, too.
But the thing is, these CPUs are not for you (or the public in general). They're for Amazon and their use cases. Even if it's slower overall but better in perf/W, AMD (and Intel, for that matter) just lost a crapload of sales.
 

Aquinus

The geometric mean is just an aggregate score; of course it's made up of better and worse numbers. You found a case where Graviton2 loses badly, but there are tests where it bests Epyc, too.
But the thing is, these CPUs are not for you (or the public in general). They're for Amazon and their use cases. Even if it's slower overall but better in perf/W, AMD (and Intel, for that matter) just lost a crapload of sales.
It's not just "a case that performs badly"; it performs badly with a technology that I've been using for a decade in my career. That means something to me, even if it doesn't to you. This is kind of important when you talk about practical uses of server technology. I always look at how PostgreSQL performs because that matters to me.

Edit: As I said...
It really depends on what you're doing
 

bug

It's not just "a case that performs badly"; it performs badly with a technology that I've been using for a decade in my career. That means something to me, even if it doesn't to you. This is kind of important when you talk about practical uses of server technology. I always look at how PostgreSQL performs because that matters to me.
Besides the academic measurement of performance (corners have been cut that affect some workloads), what does it really mean to you? You won't buy such a CPU, and if you use it on Amazon, you pay for machine size/performance anyway.
Of course, it's a weakness and it should be noted. But it won't affect you.
 

Aquinus

Besides the academic measurement of performance (corners have been cut that affect some workloads), what does it really mean to you? You won't buy such a CPU, and if you use it on Amazon, you pay for machine size/performance anyway.
Of course, it's a weakness and it should be noted. But it won't affect you.
It does when you're the person who architects these kinds of systems and has a say in what kind of hardware they'll be running on. Capacity planning is kind of important when you want to scale.
 

bug

It does when you're the person who architects these kinds of systems and has a say in what kind of hardware they'll be running on. Capacity planning is kind of important when you want to scale.
And are you designing servers for Amazon?
 

Aquinus

And are you designing servers for Amazon?
No, I design and build systems that run on PostgreSQL that are hosted on cloud services like AWS, GCP, or Azure.

Edit: I've mainly worked with GCP though. I'd use AWS if I thought it was the better option.
 

bug

No, I design systems that run on PostgreSQL that are hosted on cloud services like AWS, GCP, or Azure.
Well then, worst-case scenario: use whatever instance you were already using.

And keep in mind tests at Phoronix are always run at default settings. That's not how you run PostgreSQL in production anyway.
I'm also thinking this could be a problem with PostgreSQL itself on ARM, but even if true, it makes no difference unless/until it's patched.
 

Aquinus

And keep in mind tests at Phoronix are always run at default settings. That's not how you run PostgreSQL in production anyway.
Have you ever maintained a PostgreSQL server running in a production setting? I don't screw with the defaults if performance is adequate. One of the nice things about Postgres defaults is that they're pretty conservative, so they handle large numbers of concurrent connections pretty well OOTB with respect to how many resources each connection consumes. Mucking with things like `work_mem` can have a negative impact on things like the number of concurrent connections you can handle, since you're altering the ratio of CPU to memory required per connection, and depending on what you're doing it might not even help you at all. Mucking with parallel query settings might not only use more resources but actually hurt performance. So unless you really know what you're doing and how your system behaves in the wild, it's wise to stick with the defaults unless there is a reason to change them.

You are far better off optimizing your database design than trying to tweak your way to success with settings.
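The `work_mem` warning comes down to multiplication: the setting applies per sort/hash operation, per connection, not per server, so a seemingly modest raise compounds quickly. A rough worked example with hypothetical numbers:

```python
# work_mem applies per sort/hash node, per connection, not once per server.
work_mem_mb = 64        # a "modest" raise from the 4 MB default
max_connections = 200
sort_hash_nodes = 2     # operations per query that may each use work_mem

worst_case_mb = work_mem_mb * max_connections * sort_hash_nodes
assert worst_case_mb == 25600   # ~25 GB of potential work memory alone
```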
 

bug

Have you ever maintained a PostgreSQL server running in a production setting? I don't screw with the defaults if performance is adequate. One of the nice things about Postgres defaults is that they're pretty conservative, so they handle large numbers of concurrent connections pretty well OOTB with respect to how many resources each connection consumes. Mucking with things like `work_mem` can have a negative impact on things like the number of concurrent connections you can handle, since you're altering the ratio of CPU to memory required per connection, and depending on what you're doing it might not even help you at all. Mucking with parallel query settings might not only use more resources but actually hurt performance. So unless you really know what you're doing and how your system behaves in the wild, it's wise to stick with the defaults unless there is a reason to change them.

You are far better off optimizing your database design than trying to tweak your way to success with settings.
Let's just say my experience with PostgreSQL has been very different from yours ;)
 

Aquinus

Let's just say my experience with PostgreSQL has been very different from yours ;)
I'm starting to realize that. :laugh:

That's enough of a tangent though, my point is that when the geometric mean is that far off, you know that you're bound to find cases you care about where performance is sub-par.
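A toy set of per-benchmark ratios (hypothetical numbers) makes the point: an aggregate that looks merely "somewhat slower" can hide a 4x loss in one workload.

```python
from math import prod

# Hypothetical speed ratios (new chip / incumbent) across four benchmarks.
ratios = [1.2, 1.1, 0.9, 0.25]   # one workload, say a database, tanks

geomean = prod(ratios) ** (1 / len(ratios))

assert 0.73 < geomean < 0.75     # aggregate: merely ~26% slower
assert min(ratios) == 0.25       # worst case: 4x slower
```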
 

bug

I'm starting to realize that. :laugh:

That's enough of a tangent though, my point is that when the geometric mean is that far off, you know that you're bound to find cases you care about where performance is sub-par.
Goes without saying, you can find those cases even when the geometric mean is not that far off.
 