
BenchMate issues

You can already get very far with Cheat Engine. Its "Speedhack" feature will break 80% of all benchmarks, and you can also use it to replace the score in the process memory.
I also wrote a tool called TimerBench to examine the HPET bug many years ago. It has a very advanced timer injection feature that broke every benchmark back then.
I know all about it, but I don't promote the activity. Cheating is lame. I just set the shit up and overclock.
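For context, the "Speedhack" mentioned above works by intercepting the Windows timer APIs and scaling the values they return, so the benchmark measures less (or more) elapsed time than actually passed. Below is a minimal C++ sketch of the defensive counterpart, cross-checking QueryPerformanceCounter against the system wall clock. It is a generic illustration only, not how Cheat Engine, TimerBench or BenchMate are actually implemented.

// Generic sketch: detect timer scaling by comparing elapsed time measured via
// QueryPerformanceCounter with elapsed time from the system wall clock.
// If a speedhack scales the QPC values, the two deltas will disagree.
#include <windows.h>
#include <cmath>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, qpcStart, qpcEnd;
    FILETIME ftStart, ftEnd;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&qpcStart);
    GetSystemTimePreciseAsFileTime(&ftStart);   // 100 ns units, Windows 8+

    Sleep(1000);                                // stand-in for a benchmark workload

    QueryPerformanceCounter(&qpcEnd);
    GetSystemTimePreciseAsFileTime(&ftEnd);

    double qpcSeconds = double(qpcEnd.QuadPart - qpcStart.QuadPart) / double(freq.QuadPart);

    ULARGE_INTEGER wallStart, wallEnd;
    wallStart.LowPart  = ftStart.dwLowDateTime;
    wallStart.HighPart = ftStart.dwHighDateTime;
    wallEnd.LowPart    = ftEnd.dwLowDateTime;
    wallEnd.HighPart   = ftEnd.dwHighDateTime;
    double wallSeconds = double(wallEnd.QuadPart - wallStart.QuadPart) / 1e7;

    // A little drift is normal; a large mismatch suggests one of the clocks
    // is being manipulated.
    if (std::fabs(qpcSeconds - wallSeconds) > 0.05 * wallSeconds)
        std::printf("Timer mismatch: QPC=%.3fs wall=%.3fs\n", qpcSeconds, wallSeconds);
    else
        std::printf("Timers agree: QPC=%.3fs wall=%.3fs\n", qpcSeconds, wallSeconds);
    return 0;
}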

I've found the program has gotten better over time (years), and you've done a fine job with it. In my opinion it's trustworthy, Windows-verified software. I still have all my property and assets. Scans of the software come back clean. The software and the driver run smoothly and stably. Submissions are easy to keep track of, and the scores remain consistent.

Though, I do have a suggestion. If the file system integrity is compromised, which I've seen during OC, it shouldn't let you make a submission, and it should give you the reason why. I've had some subs pulled where I didn't notice the file integrity was fubar but had made the submissions anyway.
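As an illustration of what such a gate could look like: a minimal C++ sketch that hashes the benchmark's files with SHA-256 (via Windows CNG) and refuses to create a submission, stating the reason, when a hash does not match. The file name and expected hash in the manifest are placeholders, not BenchMate's actual data or mechanism.

// Sketch of a pre-submission integrity gate: hash the benchmark files with
// SHA-256 (Windows CNG, link with bcrypt.lib) and compare against known-good
// values. The manifest below is a placeholder, not BenchMate's real data.
#include <windows.h>
#include <bcrypt.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <utility>
#include <vector>

static std::string Sha256Hex(const std::string& path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    BCRYPT_ALG_HANDLE alg = nullptr;
    BCRYPT_HASH_HANDLE hash = nullptr;
    UCHAR digest[32] = {};

    BCryptOpenAlgorithmProvider(&alg, BCRYPT_SHA256_ALGORITHM, nullptr, 0);
    BCryptCreateHash(alg, &hash, nullptr, 0, nullptr, 0, 0);
    BCryptHashData(hash, reinterpret_cast<PUCHAR>(data.data()),
                   static_cast<ULONG>(data.size()), 0);
    BCryptFinishHash(hash, digest, sizeof(digest), 0);
    BCryptDestroyHash(hash);
    BCryptCloseAlgorithmProvider(alg, 0);

    char hex[65];
    for (int i = 0; i < 32; ++i)
        std::sprintf(hex + i * 2, "%02x", digest[i]);
    return std::string(hex, 64);
}

int main()
{
    // Placeholder manifest: file -> expected SHA-256 (hypothetical values).
    const std::pair<std::string, std::string> manifest[] = {
        { "Cinebench.exe",
          "0000000000000000000000000000000000000000000000000000000000000000" },
    };

    for (const auto& entry : manifest) {
        if (Sha256Hex(entry.first) != entry.second) {
            std::printf("Submission blocked: file integrity check failed for %s\n",
                        entry.first.c_str());
            return 1;   // refuse to create the submission file, and say why
        }
    }
    std::printf("File integrity OK, submission allowed.\n");
    return 0;
}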
 
What's the point of cheating at HWBOT stuff? You'll be found out quickly because the scores just don't line up.

Bragging rights of a cheater? No thank you.
 
Mhm. So do you have any example other than Cinebench? Or is that your bread-and-butter defense?




Absolutely no thanks.




??

So it sounds like... you modify benchmark data to... build an ecosystem reliant on the scores your modifications produce.

I don't see how my statement was wrong at all. The argument seems centered around "YOU CAN'T TRUST THESE PEOPLE!!!!!" and then...







Why should I trust you any more than any of the established benchmark publishers? I am all for programmers, solo or otherwise, who do projects like this. My statement was not only legitimate, but I also didn't argue against it. I didn't come in here shitting on your software. I made a very clear statement, as I reiterated here, that I have absolutely NO reason to trust you more than the development teams already authoring the benchmarks in question.

I would consider the OP and subsequent posts more brutal in their observations of your software than my personal take on whether it's even needed. I am not trying to sway anyone. You probably shouldn't respond so hostilely to people talking about your software. At the end of the day you are big enough not to care whether I use it or not. Which I won't, literally ever.

Like most informed users, I have reservations about software modifying other software, especially benchmarks; maybe you are old enough to remember the early 2000s. I don't know you. Your software touches shit I don't think it should. I have no good reason to think, and have heard no good explanation for, why it's needed or why you are more trustworthy than the developers.
I had a gut feeling that asking for that fact-checking favor was not going to be well received. I tried to edit the post and give more context, but it was in a moderation queue, so I could not access it. This is only about the "every benchmark has file integrity built in" and the "3DMark installer" statements. If I have offended you, I am sorry.

Each benchmark has its own set of problems that needed (and still need) to be solved to preserve it. The main problem is that most benchmarks are abandoned after a few years. If a new platform, OS, or unsanctioned tweak comes along, it endangers the competitive value of the benchmark. For example, a single forbidden tweak that alters the workload (as in less work gets done) can be the end of a benchmark on HWBOT. You can only see so much on a screenshot. BenchMate provides the opportunity to fix these issues from the outside.

BenchMate does not change benchmark data. It listens and reacts to Win32 API calls. In some cases I replace Windows functionality to fix a bug or add a feature.
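To make "listens and reacts to Win32 API calls" concrete, here is a generic C++ sketch of the technique: a DLL loaded into the benchmark process places a hook on an API (QueryPerformanceCounter in this example, via Microsoft Detours) and simply forwards the call to the real function, which is the point where a wrapper can observe or correct behaviour. This is an illustration of in-process API hooking in general, not BenchMate's actual code.

// Generic sketch of in-process Win32 API hooking with Microsoft Detours
// (link with detours.lib). The hook forwards the call unchanged; a wrapper
// could record or validate the value here.
#include <windows.h>
#include <detours.h>

static BOOL (WINAPI *TrueQueryPerformanceCounter)(LARGE_INTEGER*) = QueryPerformanceCounter;

static BOOL WINAPI HookedQueryPerformanceCounter(LARGE_INTEGER* lpPerformanceCount)
{
    // Observe the call, then pass it through to the real API.
    return TrueQueryPerformanceCounter(lpPerformanceCount);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueQueryPerformanceCounter, HookedQueryPerformanceCounter);
        DetourTransactionCommit();
    } else if (reason == DLL_PROCESS_DETACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueQueryPerformanceCounter, HookedQueryPerformanceCounter);
        DetourTransactionCommit();
    }
    return TRUE;
}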

Trust is actually a very important topic for benchmarking. I completely agree with you that we should be very careful what and whom to trust. If I see a score posted on the internet and it is just a screenshot, I have zero trust in it. What OS version was used? Was the score photoshopped? Did the benchmark really finish successfully? What version of the benchmark was used? How was the system doing during the benchmark run in terms of frequencies, temperature, voltages? To establish trust you need information. BenchMate adds that information so you can decide for yourself if you want to trust a score or not.
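As a small illustration of the kind of context that answers some of those questions, the C++ sketch below captures the exact OS build and a UTC timestamp next to a score. It is illustrative only, not BenchMate's actual result format.

// Illustrative only: record OS build and a UTC timestamp alongside a score,
// the kind of metadata that lets a third party sanity-check a result.
#include <windows.h>
#include <cstdio>

typedef LONG (WINAPI *RtlGetVersionFn)(PRTL_OSVERSIONINFOW);

int main()
{
    RTL_OSVERSIONINFOW osvi = {};
    osvi.dwOSVersionInfoSize = sizeof(osvi);

    // RtlGetVersion reports the real OS version even without an app manifest.
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    if (ntdll) {
        auto rtlGetVersion = reinterpret_cast<RtlGetVersionFn>(
            GetProcAddress(ntdll, "RtlGetVersion"));
        if (rtlGetVersion)
            rtlGetVersion(&osvi);
    }

    SYSTEMTIME st;
    GetSystemTime(&st);           // UTC timestamp of the run

    double score = 12345.0;       // placeholder score from the benchmark

    std::printf("score=%.1f os=%lu.%lu build=%lu time=%04u-%02u-%02u %02u:%02u:%02u UTC\n",
                score, osvi.dwMajorVersion, osvi.dwMinorVersion, osvi.dwBuildNumber,
                st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    return 0;
}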

Though, I do have a suggestion. If the file system integrity is compromised, which I've seen during OC, it shouldn't let you make a submission, and it should give you the reason why. I've had some subs pulled where I didn't notice the file integrity was fubar but had made the submissions anyway.
I don't understand how you can upload a score (using a result file) if the file integrity check failed. It should look like the screenshot below and will not let you create a submission file for HWBOT. Maybe you can post an example score?
 

Attachments

  • benchmate-file-integrity-failed.webp
What do you mean, understand? I'm not a liar. Here's the submission. Or one of them, I should say.

It was a while ago. It didn't even dawn on me that there was an issue; I made several subs like this. All got yanked.


enjoy!

Correct? File integrity in the red = baaad

1747753974057.png
 
I'm trying to understand the issue, not calling you a liar.

So, do I have this right: you were able to generate a result file from BenchMate, and then you uploaded that result file to HWBOT?
 
I uploaded using BenchMate. The file was not uploaded to the site individually, if that's what you're asking.

The only "file" I upload separately would be HWBot Aquamark.
 
Wow, that's a huge bug! That should not be possible at all. I will look into it asap and report back.

Thanks for reporting.

Edit: @ShrimpBrime, I was not able to replicate this bug. Not with BM 13.4, 12.1 or 11.3 as in the screenshot.

I tested it by using a different C4D scene (the one from CB15 Extreme) in R20, which gave me a valid score according to CB but was successfully detected by BenchMate. The result file cannot be saved; it just shows the result dialog without buttons. F6 does not save the result either.

benchmate-cb-r20-invalid-result.png


As R20 on HWBOT is an open category that also allows the upload of screenshots, my guess is that your score is in fact a screenshot submission. Could that be the problem?

Maybe we need to improve the user experience here, so that this result window is without any doubt interpreted as an invalid score. For example, not showing the score or adding a big message in the result window. What do you think?
 
Yes, I went and looked through the report submissions button. That was in fact an uploaded screenshot. That is totally my fault then; I hadn't realized.

Yeah, instead of a score, it should say invalid in the score box or something, so guys like me can't squeeze in bunk screenshots lol.
 
Thanks for the feedback. Yeah, benching is heavy: often late hours and many different scores involved. All software involved in the process should help you focus on the difficult task, the benching itself.

I will remove the score from invalid results and add INVALID instead.
 

1748053161602.png


1748053241645.png


Why does the GPU benchmark in Cinebench 2024 1.0, when launched from your application, say that I don't have a CPU with AVX2 instruction support, while the standalone version runs fine?
 
I removed the GPU workload from the bundled version because it adds a lot to the download size and HWBOT will never use it (it finishes too quickly). The BM download is 1.5 GB right now, and it would be 2+ GB if CB were added in full.

Unfortunately, CB defaults to this error message instead of stating that the workload files cannot be loaded.
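On the wrapper side, a clearer message is cheap to produce: check whether the GPU workload files are actually present before launching the GPU test and, if not, say so instead of letting Cinebench fall back to the misleading AVX2 error. A minimal C++17 sketch follows; the directory name is a placeholder, not the real Cinebench 2024 layout.

// Sketch: verify that the (optionally bundled) GPU workload files exist before
// starting the GPU test, so the user gets an accurate error message.
// The path below is a placeholder, not the actual Cinebench 2024 layout.
#include <filesystem>
#include <cstdio>

namespace fs = std::filesystem;

int main()
{
    const fs::path workloadDir = "cinebench2024/gpu_workload";   // placeholder path

    if (!fs::exists(workloadDir) || fs::is_empty(workloadDir)) {
        std::printf("GPU workload files are not included in this bundle; "
                    "the GPU test is unavailable (this is not an AVX2 issue).\n");
        return 1;
    }

    std::printf("GPU workload present, launching the benchmark...\n");
    // ... launch the GPU test here ...
    return 0;
}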
 
You're the man, thank you for the support!
 