Aquinus
Resident Wat-man
- Joined: Jan 28, 2012
- Messages: 13,199 (2.73/day)
- Location: Concord, NH, USA
| System Name | Apollo |
|---|---|
| Processor | Intel Core i9 9880H |
| Motherboard | Some proprietary Apple thing. |
| Memory | 64GB DDR4-2667 |
| Video Card(s) | AMD Radeon Pro 5600M, 8GB HBM2 |
| Storage | 1TB Apple NVMe, 2TB external SSD, 4TB external HDD for backup. |
| Display(s) | 32" Dell UHD, 27" LG UHD, 28" LG 5K |
| Case | MacBook Pro (16", 2019) |
| Audio Device(s) | AirPods Pro, Sennheiser HD 380s w/ FiiO Alpen 2, or Logitech 2.1 speakers |
| Power Supply | Display or Thunderbolt 4 Hub |
| Mouse | Logitech G502 |
| Keyboard | Logitech G915, GL Clicky |
| Software | macOS 15.3.1 |
I think the title pretty much speaks for itself, but you know me and alcohol, so this has to turn into something a little more than just a basic question. So buckle your seat belts; it's time to get deep. (...and before anyone says it, I'm sure that's what she said.)
This is a pretty vague question and there really is no right or wrong answer, but generally speaking, a benchmark is a means to gauge performance. The problem is that unless you're benchmarking the exact kind of workload you intend to run on your machine, you're not going to get a complete picture of how it will perform for your particular use case. That often leaves us with synthetic benchmarks, which give us a glimpse into how hardware will perform in either the broadest or the strictest of senses, depending on the benchmark. The thing is, implementation details, like using additional passes of AA in graphics, or converting floating point math to long-integer fixed point math, can make a huge impact on how different kinds of hardware perform (without even getting started on multi-threading effectiveness), even if the end result is programmatically and mathematically equivalent.
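To make the floating-vs-fixed-point point concrete, here's a minimal sketch (my own toy example, not from any particular benchmark) of the same dot product computed two mathematically equivalent ways: one hits the FPU, the other uses only integer multiplies and shifts, so they stress different parts of the hardware.

```python
# Toy illustration: the same dot product in floating point and in 16.16
# fixed point (scaled integers). The answers agree, but the instruction
# mix differs (FP units vs. integer ALUs), so different hardware can
# favor one implementation over the other.

SCALE = 1 << 16  # 16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(x: int) -> float:
    return x / SCALE

def dot_float(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_fixed(a, b):
    # Each product of two 16.16 values carries a SCALE**2 factor,
    # so shift the accumulated sum back down once at the end.
    acc = sum(x * y for x, y in zip(a, b))
    return acc >> 16  # result is again 16.16

a = [1.5, 2.25, -0.75]
b = [4.0, 0.5, 8.0]
fa = [to_fixed(x) for x in a]
fb = [to_fixed(x) for x in b]

print(dot_float(a, b))                # 1.125
print(from_fixed(dot_fixed(fa, fb)))  # 1.125 (exact here; values fit 16.16)
```

Same answer, completely different work for the chip, which is exactly why a single score can't tell you which one your hardware prefers.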
So with that said, we know what benchmarks give us in the most basic sense: they usually reduce your machine to a number. But is that enough? We do have applications like 3DMark that give us a basic breakdown into three categories, but that still doesn't say what our hardware is good at and what needs improvement. When I'm debugging code and trying to find bottlenecks, I'm not using a single score to figure it out; I'm profiling the code I'm working with and finding out what's taking a long time to run. I might just be a mere software engineer, but it seems to me that benchmarks should be a lot more like profiling code: showing you where your bottleneck is and what takes the most compute resources to execute, not simply how capable your machine is, and focusing on latency over a score that doesn't mean anything outside of how it compares to other machines.
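For anyone who hasn't profiled before, here's what that looks like in practice. This is a hedged sketch using Python's standard-library cProfile; the workload functions (`simulate_physics`, `render_frame`) are made-up stand-ins, but the output shape is the point: time attributed per function rather than one aggregate score.

```python
# Sketch of "benchmark as profiler": cProfile attributes time to
# individual functions instead of reducing the run to a single number.
# The workload functions below are invented for the demo.

import cProfile
import io
import pstats

def simulate_physics(n):
    return sum(i * i for i in range(n))

def render_frame(n):
    return [i % 255 for i in range(n)]

def frame(n):
    simulate_physics(n)
    render_frame(n)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    frame(20_000)
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # per-function cumulative time, i.e. where the bottleneck is
```

That per-function table is the kind of answer I wish benchmarks gave: not "your machine scored 12,000," but "this class of work is what's slow on your hardware."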
So tell me: are you satisfied with the benchmarks you have now, or is there something you've been craving that just hasn't become a thing? This is something I occasionally think about, and I doubt I'm the only one. So, spill the beans. What do you think?
