This doubles or triples the review time. All so the software gets updated next month and the time was wasted. Say it is bloatware - the review says so. An update comes out and it fixes ALL the problems (magically). But the review isn't updated. Now what?
Going too far certainly could - but that's not what I've asked for or suggested.
Finish the normal part of a review, install the basic software on its default settings (which is ALREADY DONE for reviews to get all the screenshots of the software, mind you) and re-run specific, repeatable tests - one or two single-threaded and multi-threaded tests, and if it's gaming related, one repeatable 3D benchmark that's not always GPU bound.
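For the sake of being concrete, this is all I mean by "repeatable" - a minimal sketch, where the benchmark command, its flags and the run count are placeholders for whatever single-threaded/multi-threaded test the site already uses. Run it once on the clean test image and once after installing the vendor software, and you have a before/after number:

```python
import statistics
import subprocess
import time

# BENCH_CMD is a placeholder - point it at whatever CLI benchmark the suite already uses.
BENCH_CMD = ["your_benchmark.exe", "--preset", "single-thread"]  # hypothetical command and flags
RUNS = 3

durations = []
for i in range(RUNS):
    start = time.perf_counter()
    subprocess.run(BENCH_CMD, check=True)  # run the exact same test every time
    durations.append(time.perf_counter() - start)
    print(f"run {i + 1}: {durations[-1]:.1f} s")

# The median keeps one odd run from skewing the before/after comparison.
print(f"median over {RUNS} runs: {statistics.median(durations):.1f} s")
```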
Considering that all the software and tools are already on the test system at this point, it's being overly dramatic to say it'll triple the test time.
As for the second part:
It's almost like reviews mention a software version and a review date, and have comments sections.
We've had TPU reviews literally do this in the past, as well - it may take me time to find examples, but TPU reviews of pre-release and launch-day hardware with buggy software have simply stated the at-the-time situation, said they'd contacted the manufacturer, and then users commenting in the forum threads provided the various updates as time went on.
I'll try and find one of them for you, but it's not like it'll show up in a Google search easily.
This isn't personally aimed at you, but at hardware reviews in general - and while it's lost knowledge now thanks to 3Dchipset going under (the owner literally vanished overnight, the website soon after), I used to BE a hardware reviewer.
This was my view on how I did reviews myself.
If you aren't confirming the advertised features work, you aren't reviewing the hardware. You're repeating the marketing.
You can't claim a motherboard is a great overclocker because the marketing says so - you need to test it.
You can't say one product is better or worse than another without actually comparing the entire product as a whole - back then, fan noise was an often-ignored example of this. People only cared about which was fastest and coldest.
You can't repeat claims and stake your website's reputation on specific features of the hardware working and how they function without actually testing them - and you need to repeat the same experience end users will have, as neutrally as possible, or it's exactly the same as running cherry-picked hardware.
You can't review a Corsair USB or wireless headset without iCue running, since you drop from 7.1 virtual surround sound to stereo, lose the RGB and equalizers, and it literally runs different drivers. No one expects you to throw a 3D games suite into a headset review, but you're certainly expected to actually play games while wearing them, and nothing stops you from running a quick test and noting whether the software features came at a cost or not.
(This is why TPU's method of a debloated OS is fine despite it not being how end users run their system - it's removing variables that have nothing to do with the hardware being reviewed. You can't assume what antivirus an end user will use, so use none.)
Synapse is good now, but it could be dogshit again later - it wasn't good for the longest time. Software development works like that.
Old Synapse is where Logitech and Corsair are at right now, with resource-heavy bloat.
They took the criticisms and fixed their software, as well as increasing the number of their devices with hardware profile storage.
Edit 9000: This post has had a billion edits, but here's the perfect example of what I mean and how this DOES happen in reviews, including TPU reviews, but erratically.
No one expects performance benchmarks in a mouse review, but we DO want it acknowledged if the software has issues or is resource-heavy.
This is not triple the level of effort in a review, but it's exactly what users need to know - you lose control of the lighting if you don't use the resource-heavy software, and you can understand that's going to affect battery life negatively.
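And "acknowledge if it's resource heavy" doesn't need a whole methodology either - here's a minimal sketch using psutil, where the process names are examples I'm assuming rather than a vetted list, that puts an idle memory number on a vendor's background software:

```python
import psutil

# Example process names only - adjust for whichever vendor suite is installed.
VENDOR_PROCESSES = {"icue.exe", "lghub.exe", "razer synapse service.exe"}

matches = []
total_rss = 0
for proc in psutil.process_iter(["name", "memory_info"]):
    name = (proc.info["name"] or "").lower()
    if name in VENDOR_PROCESSES:
        matches.append(name)
        total_rss += proc.info["memory_info"].rss  # resident memory of that process

print(f"matched background processes: {matches}")
print(f"combined resident memory: {total_rss / (1024 ** 2):.0f} MB")
```

That's one sentence in a review - "the software sits at roughly X MB of RAM in the background" - not triple the effort.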