
show me that a q6600 is no good now


Bucho

New Member
Joined
May 13, 2014
Messages
15 (0.00/day)
The 8120 will walk all over the Q6600. It will use twice the power of an i7 and be 30% slower than an i7, but it will walk all over a 775 chip. I recently built a budget Core i5 and R7 265x system and it blows the AMD chips out of the water for about 100 dollars more, and the difference in power consumption will more than account for that eventually.
Don't get me wrong, it's fine to settle for an AMD chip; hell, in the majority of cases you won't see any difference.

At stock clocks that's true for sure, since the Q6600 only has 2.4GHz and a low FSB of 266MHz (so even DDR2-533 or 667 memory is fast enough, and higher speed memory or DDR3 would make no difference). The 8120 has 3.1GHz and a Turbo up to 4.0 (!) GHz with no FSB, since the memory controller is in the CPU, and it does support (and even makes some good use of) DDR3-1866 RAM.
Then the 8120 has 8 threads it can process, so (and that's the thing with these FX CPUs anyway) it HEAVILY depends on the program how well it utilizes the FX (Bulldozer) architecture.
BUT if the Q6600 is overclocked to 3.6GHz+, the FX-8120 @ stock will not always be faster, especially in programs that only make use of one or two threads and/or are older and not optimized for the FX in any way. But then again, that FX-8120 can be overclocked too, but you need a decent board and cooling for that, and also, depending on your GPU, a decent power supply, since that FX needs some power when overclocked (but the Q6600 needs a lot too if overclocked and at a higher VCore).
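A rough way to see how much that "depends on the program" part matters: with Amdahl's law the gain from 8 threads only shows up when most of the work is parallel. A quick sketch (the per-core speed figures are made-up placeholders, not benchmark results):

Code (Python):
def speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part doesn't scale, the parallel part divides across cores
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

q6600_per_core  = 1.0   # assumed relative single-thread speed (placeholder)
fx8120_per_core = 0.9   # assumed slightly slower per core (placeholder)

for p in (0.3, 0.6, 0.9):
    q  = q6600_per_core  * speedup(p, 4)
    fx = fx8120_per_core * speedup(p, 8)
    print(f"parallel fraction {p:.0%}: Q6600 ~{q:.2f}x, FX-8120 ~{fx:.2f}x")

With mostly serial code the four faster cores win; only at a high parallel fraction do the eight threads pull ahead.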

Anyway I am very interested in the scores that Shambles1980 will post, but for "real life" tests he only used Thief. 3DMark at least shows what the CPU/GPU combo is theoretically capable of. Most games will be a different story anyway.

@Delta6326
You have that Q6600 of yours at stock? Really?
You have a good board that should overclock great, a (way too) strong PSU, a decent cooler and then you even throw that HD7870 at that poor CPU.

At least set your FSB to 333MHz to get 3.0GHz out of that Q6600. If it is a G0 revision it may work at this setting even with the default VCore, or maybe a little more. Oh, and remember that you do NOT want to leave most of the voltage settings on AUTO (so the board raises these values as it likes) when you overclock, since these ASUS boards tend to use way more voltage on AUTO than what is required.
If that CPU needs a higher VCore to be stable you might want to enable LLC (load-line calibration). With that you will have only a small VDroop (the difference in VCore between idle and load), so you can maybe even lower that VCore a little, but the system will need more power and produce more heat under load.
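As a rough illustration of the VDroop/LLC point (all voltages here are invented example numbers, not recommendations for any specific chip):

Code (Python):
def load_vcore(set_vcore, vdroop):
    # voltage the CPU actually sees under load = BIOS setting minus the droop
    return set_vcore - vdroop

# Example numbers only: without LLC the droop is large, so the BIOS VCore must be
# set higher to still land on the load voltage the chip needs; LLC shrinks the droop.
print(f"no LLC:   set 1.3625 V, droop 0.0625 V -> {load_vcore(1.3625, 0.0625):.4f} V under load")
print(f"with LLC: set 1.3125 V, droop 0.0125 V -> {load_vcore(1.3125, 0.0125):.4f} V under load")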

@HalfAHerz
Yes, that's a big plus for AMD. You can get a pretty decent FX-8320 NEW for about 116 EUR (about 158 USD) here where I live. At least that used to be a good offer about a year ago (it was ~130 EUR back then), but Intel wasn't sleeping and their Haswell CPUs are about 10-15% faster than their Ivy Bridge CPUs.
Also, an FX-6300 for about 86 EUR (about 117 USD) is a good offer.

Intel CPUs were always a little more expensive, at least if you wanted a bigger CPU or one that can be overclocked.
The crazy thing is that Intel can really price their performance and high end CPUs at almost anything they want. That started right after the launch of the Sandy Bridge CPUs. The i5-2500K dropped right after its release in early 2011 to below 170 EUR (that's about 232 USD), and in early 2012 it went up again to 190-200 EUR!!! (about 260-270 USD) right before the release of the i5-3570K, which was sold for about 215 EUR (about 293 USD), and this price held up to mid 2013, right before the Haswell CPUs were released.
It seems like Intel does not see a reason to drop its prices.

About that frequency thing ... check out some reviews of the FX-9590. As you surely know, that's just an overclocked FX-83xx, and almost all of these can be clocked to ~4.8GHz and even a bit more. But even at these clocks they are often slower than an i5-4670K @ stock, and that's sad. In some tests they really shine, but most of those are applications like encrypting/decrypting/packing/converting and so on, and in games the i5 is usually faster.
 
Joined
May 3, 2014
Messages
965 (0.26/day)
System Name Sham Pc
Processor i5-2500k @ 4.33
Motherboard INTEL DZ77SL 50K
Cooling 2 bay res. "2L of fluid in loop" 1x480 2x360
Memory 16GB 4x4 Kingston 1600 HyperX Fury black
Video Card(s) hfa2 GTX 780 @ 1306/1768 (XSPC block)
Storage 1tb wd red 120gb kingston on the way os, 1.5Tb wd black, 3tb random WD rebrand
Display(s) cibox something or other 23" 1080p " 23 inch downstairs. 52 inch plasma downstairs 15" tft kitchen
Case 900D
Audio Device(s) on board
Power Supply Xion gaming series 1000W (non modular) 80+ bronze
Software windows 10 pro x64
The benches are all I could use, sorry. I had uninstalled pretty much everything getting ready for the new setup. I did set Thief up in the fairest way possible (a real world scenario where someone would be adjusting the settings to best suit the system they had), and I will do the same for the 8120.
The other 3 benches (Valley in DX11, DX9 and OpenGL, Fire Strike, and Ice Storm) should help even out the actual performance numbers. The graphs in the 3DMark tests will be a better indication than the overall scores.
I would hope to see a graph with smaller spikes and dips and a more constant average. The 8120 arrived today, but the RAM hasn't yet, which annoys me.

The Q6600 G0 will easily run at 1600 (400MHz) FSB, so that's a motherboard limitation, and there are plenty of LGA 775 boards that support a 1600 FSB. The big attraction of the Q6600 was its low 266MHz FSB with a high multiplier; this meant more boards could overclock it easily, whereas at the time most CPUs had a lower multiplier and required a higher FSB.
400x9 on the G0 Q6600 does need a lot of cooling though, and I was never able to get the Q6600 higher than 3.7GHz stable. I could get it higher than 3.7 and bench test or game, but not OCCT stable, and if it's not OCCT stable then it's not stable (imo).
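For reference, the clock math in play here is just FSB times multiplier; a quick sketch:

Code (Python):
def core_clock_mhz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier

# The Q6600's multiplier is 9, so the FSB sets the core clock directly
# (the stock strap is really 266.67 MHz, which is where the 2.4GHz comes from).
for fsb in (266, 333, 400):
    print(f"{fsb} MHz FSB x 9 = {core_clock_mhz(fsb, 9)} MHz core clock")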
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700MHz 0.750V (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
You are right, I forgot that. But I think that it works only on specific motherboards, not sure.

Edit: Drats, my CPU has only 2 unlockable bins.

It's socket 1155, with a P or Z series board.

Since it's the older socket, the OP could have hunted up something to suit there. I got this setup (CPU, mobo + 4GB RAM) for $120 second hand, and it's crapped all over my old AMD setup.

He's got his AMD setup now (or is it still coming?) and I'm just awaiting more feedback from him.
 
Joined
Apr 2, 2011
Messages
2,660 (0.56/day)
Hi guys, just signed up @ techpowerup.
...
The main problem (that has been discussed over and over all around the internet) is that somehow the CPU performance increase (kind of) flattened over the past ~8 years. I say kind of because IPC (instructions per cycle) performance increased and the core count also increased in certain classes of CPUs. So why don't we see the big boost?
One thing is that the MHz of the CPUs didn't increase that much, at least not like in the days of the 486, Pentium /II/III/4 and the K6, Athlon. Back then the MHz doubled and tripled from generation to generation and the IPC got better. No wonder an Athlon 1400 was almost twice as fast as a 700MHz Athlon, right? Or a 486DX2 66MHz was almost twice as fast as a 33MHz 486. At least in some raw benchmarks where the bus speed, memory speed and other things didn't bottleneck.
The other thing is that we still have quad cores as the mainstream and performance CPUs. That has shifted a little, but back in early 2007 you could already buy a quad core; okay, that was high end, like the Socket 2011 (6 core) CPUs now, but quad cores became mainstream around mid 2008.

...

Welcome to TPU!


Pleasantries aside, I can agree with pretty much everything you are saying but what I've quoted. The simple truth behind my reservation is that you're missing the forest for the trees.

If it were 2004, I would be swearing, because I had to install another two servers in order to run the remote system for the six new employees that the company just hired on. They'd require their own additional hardware, I'd have to get everything configured, and after all of this I'd still have to find some way to make all of this stuff fit into both the server room and my budget. Luckily, it's 2014. I've got two Xeon 6 core, 12 thread processors chugging along with a dozen different virtual machines plugging away. I can shear off enough resources to run another instance, and my CPU chugs along with a slightly higher load to distribute.

That very same Xeon processor would need at least three Core architecture systems to replace it on core count alone, not to mention a tangle of systems each requiring updates. The lack of any one person's ability to see improvements in processor speed is their own failing, not that of processor stagnation. Likewise, CPU utilization is rarely as simple as people credit it.

This said, the OP asks for one usage scenario. I've got an unknown set of games, with unknown optimizations. Some of the cited titles were developed for a functionally single-core machine, and ported to more powerful systems. I'm preventing a tax on my graphical systems by running at low resolution, and I'm preventing taxing my CPU by running games that don't place a heavy burden on it. As such, I see no reason to switch from my high end Core processor to a low end i-series processor.

You cite the great innovations of the past in greater frequencies, but seem to forget why frequency increases have functionally become minute. Frequency increases actually stopped yielding huge results almost a decade ago. The real innovation that allowed the CPU market to improve was better miniaturization, new materials, and increases in the efficiency of chip design that increased IPC. I'd gladly pit one of the Athlon CPUs versus the P4 chips released at the same time, and demonstrate how better design won out over raw frequencies. This lesson holds true today, when AMD FX processors far more easily attain high frequencies, but are beaten by the Intel offerings with significantly lower frequencies.
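As a back-of-the-envelope way to see that design-versus-frequency point: single-thread performance is roughly IPC times clock. A tiny sketch (the IPC values are made-up placeholders, not measurements):

Code (Python):
def perf(ipc, clock_ghz):
    # very rough model: single-thread performance ~ IPC x clock
    return ipc * clock_ghz

# The IPC figures are invented, only to show the shape of the argument.
athlon_style = perf(1.5, 2.2)   # higher IPC, lower clock
p4_style     = perf(1.0, 3.0)   # lower IPC, higher clock
print(f"Athlon-style design: {athlon_style:.2f} vs P4-style design: {p4_style:.2f}")

fx_style = perf(1.0, 4.8)       # FX at a high overclock, lower per-core IPC
i5_style = perf(1.6, 3.8)       # i5 at lower clocks, higher IPC
print(f"FX-style: {fx_style:.2f} vs i5-style: {i5_style:.2f}")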


Your point about core count is accurate, but misguided. The sad truth is that gaming is currently (though hopefully will not be for too much longer) more about consoles than about PCs. This trend was driven by publishers, who have treated the PC market as second-class citizens for years. There are games out there that have made use of high end multi-core processors. We have had the capability to use all that extra power for years, but the cost to use it has been deemed prohibitively high. Since the Core processors were introduced we've been hearing about how games will be coming out soon that use all those extra CPU cores. Our reality is that these games exist, but don't have enough juice yet to make them the norm.


The OP's point that a q6600 is "good enough" is valid, but misguided. My calculator has as much computational power as the Apollo rockets' systems. It's good enough to get people to the moon and back, but isn't good enough to perform tasks that a modern cellular phone does regularly. "Good enough" is a cop-out, and replacing your BMW with a Geo Metro is not a solution. Comparing a q6600 to an i3 is misleading, but if all you qualify is fuel efficiency then the comparison is just peachy.
 
Joined
May 21, 2008
Messages
4,113 (0.71/day)
Location
Iowa, USA
System Name THE CUBE 2.0
Processor Intel i5 13600k
Motherboard MSI MPG Z690 EDGE DDR4
Cooling Phanteks PH-TC14PE BK 2x T30-120 Fan mod mount
Memory G.Skill TridentZ 3200 MT/s C15 32GB 2x16GB
Video Card(s) Gigabyte Aorus 1080 Ti 11GB OC: Core 2GHz, Mem 5.7GHz
Storage WD SN770 250GB / 3x WD SN850X 2TB / Toshiba X300 4TB / 2x RAID1 Toshiba P300 3TB
Display(s) Samsung 49" Odyssey OLED G95SC 240Hz 5120 x 1440
Case "THE CUBE" Custom built, pure Red Alder wood
Audio Device(s) Beyerdynamic DT 880
Power Supply Corsair RM1000X
Mouse Logitech G700
Keyboard Logitech G910
Software Windows 11 Pro
OK, thanks for the info. I used to have it overclocked, but when I switched over to Windows 8 I got rid of the OC and could not remember where I started for the voltage.
 
Joined
Oct 24, 2009
Messages
430 (0.08/day)
Location
Belgium
System Name Illidan
Processor AMD Ryzen 9 5900X
Motherboard Gigabyte B550 Aorus Pro V2
Cooling Scythe Mugen 4
Memory G.Skill Trident Z 32GB DDR4 3000MHz 14CL
Video Card(s) AMD Radeon RX 6900 XT
Storage Crucial P1 1TB + Sandisk Ultra II 960GB + Samsung EVO Plus 970 2TB + F3 1TB + Toshiba X300 4TB
Display(s) Iiyama G-MASTER G4380UHSU-B1
Case Corsair 750D Airflow
Audio Device(s) Sony WH1000-XM4
Power Supply Seasonic Focus PX-850
Mouse Logitech G604
Keyboard Corsair Vengeance K70 (Cherry MX Red)
Software Windows 11 Pro
I'm a child of the era where role playing games just had text on the screen lol, and I still think Super Mario Bros. 3 has great graphics. But when it comes to modern computers, anything lower than 1024x768, even by my extra low standards, looks like poop. But anything above 1024x768 looks just fine to me lol, I honestly can't tell the difference.
(my desktop is at 1080p though)
The only thing I can tell the difference with is 1080p vs 1080i, and 720p looks better than 1080i, but anyway.
I am looking at some board, RAM and CPU bundles on the internet, but I don't seem to see anything that's a good enough improvement for the money that I can then convince the wife I need to spend that much..

Maybe I should just hang back and wait for the second gen 14nm CPUs to hit the streets and pick up a decent used 22nm setup then.
Can't be that far off now, I guess.

As for why I bought the Q6600 when I did..
That's a bit different. I upgraded from a Pentium D, which was LGA 775; the D was a very low end model, but I bought the best motherboard I could at the time. This allowed me to buy a Q6600 later.


The Q6600 is old and even a 2nd gen i5 runs circles around it. If you can't run 1080p (I can't believe you actually run things at 1024*768 and don't notice the difference at 1080p), it's worth upgrading. The Pentium D was high end when it came out, but it aged very rapidly. I ran all my games at 1920*1080 on a Pentium D 830 back in the day when I got my new screen, except for Crysis. Before that, I ran 1280*1024, which was a huge upgrade over 1024*768 for me.

Sounds to me like the best upgrade you can do is your wife, so you can buy more PC components :)
 
Shambles1980
My wife is not getting upgraded lol..
I don't game at 1024x768. It's usually 1280x1024, although I should be at 1280x800 to stay in the correct aspect ratio.
Honestly though, I don't see much of a graphics improvement at anything 720p and up, but I do see anything below 1024x768 as horrible. Anything above that is pretty much all the same to me.

@lilhasselhoffer
It's not true that the resolutions did not stress the 7850. As stated, the test run on the Q6600 was balanced to remove GPU bottlenecks while trying my best to remove the CPU bottleneck as well.
With the settings listed the GPU did not bottleneck (these settings are a lot lower than what I usually play the game at), but the Q6600 at 3.0GHz just did not have enough grunt.

The Q6600, even at 3.7GHz, was at 84% usage at the options screen alone and would only ever get used more in game.
So whereas the settings chosen could utilize the GPU at 100%, if the CPU was taxed a bit more than usual the GPU would take a hit. At 3.7GHz the Q6600 did have enough power to keep the low frame drops in the 50s; at 3.0GHz it lost 9-11fps at the low end, which is quite a lot when you're talking about <60fps.

Thief itself is a pretty badly optimized game if you ask me.
Sometimes it will use 4 cores, sometimes only 3. It's not very good at distributing the workload and has an excessively high dependency on the CPU (I can only imagine due to Garrett's physics needing to be able to bump things and the lighting needing to cast shadows).
I do believe that this could have been intentional though, and that Mantle would be showcased on the game to show improvements, whilst in reality a good percentage of the increase would have come from better utilization of the hardware, not because of Mantle but simply better optimization.

I will not be using Mantle to test the 8120. It will be the same game, same version, and I even saved the game just before starting the initial test, so it will be taken from the exact same place and in-game time.

Whereas if I had all my games installed I probably would not have chosen Thief as my test bed game, with the method used to test (set the options up to best suit the hardware) and the fact it does have a built-in benchmark utility, I think it will serve as a reasonable stand-in.
My main interest will be in the Fire Strike tests though. They are very demanding and should show the limitations of the CPUs.
I added the Ice Storm tests as a bench for lower requirements.
So I do feel like I have covered all the bases for a fair result to be shown..

Just waiting on the RAM to arrive and then I can test the 8120. It should be interesting to see the results.
 

Toothless

Tech, Games, and TPU!
Supporter
Joined
Mar 26, 2014
Messages
9,277 (2.52/day)
Location
Washington, USA
System Name Veral
Processor 5950x
Motherboard MSI MEG x570 Ace
Cooling Corsair H150i RGB Elite
Memory 4x16GB G.Skill TridentZ
Video Card(s) Powercolor 7900XTX Red Devil
Storage Crucial P5 Plus 1TB, Samsung 980 1TB, Teamgroup MP34 4TB
Display(s) Acer Nitro XZ342CK Pbmiiphx + 2x AOC 2425W
Case Fractal Design Meshify Lite 2
Audio Device(s) Blue Yeti + SteelSeries Arctis 5 / Samsung HW-T550
Power Supply Corsair HX850
Mouse Corsair Nightsword
Keyboard Corsair K55
VR HMD HP Reverb G2
Software Windows 11 Professional
Benchmark Scores PEBCAK
I don't game at 1024x768. It's usually 1280x1024, although I should be at 1280x800 to stay in the correct aspect ratio.
Honestly though, I don't see much of a graphics improvement at anything 720p and up, but I do see anything below 1024x768 as horrible. Anything above that is pretty much all the same to me.
I think I just died a little inside.

Well, then again, there are people that can't see the difference between 30fps and 60fps. If you don't see a quality improvement, then I guess gaming at a lower rez and getting more frames sure helps. How big is your monitor? 24in?
 
lilhasselhoffer
My wife is not getting upgraded lol..
I don't game at 1024x768. It's usually 1280x1024, although I should be at 1280x800 to stay in the correct aspect ratio.
Honestly though, I don't see much of a graphics improvement at anything 720p and up, but I do see anything below 1024x768 as horrible. Anything above that is pretty much all the same to me.

@lilhasselhoffer
It's not true that the resolutions did not stress the 7850. As stated, the test run on the Q6600 was balanced to remove GPU bottlenecks while trying my best to remove the CPU bottleneck as well.
With the settings listed the GPU did not bottleneck (these settings are a lot lower than what I usually play the game at), but the Q6600 at 3.0GHz just did not have enough grunt.

The Q6600, even at 3.7GHz, was at 84% usage at the options screen alone and would only ever get used more in game.
So whereas the settings chosen could utilize the GPU at 100%, if the CPU was taxed a bit more than usual the GPU would take a hit. At 3.7GHz the Q6600 did have enough power to keep the low frame drops in the 50s; at 3.0GHz it lost 9-11fps at the low end, which is quite a lot when you're talking about <60fps.
.....

You misunderstand, and seem to be consciously avoiding reality. Allow me to dispel the difference in our understanding.

84% usage is sandbagging. 100% usage, and seeing how many valid FPS can be generated, is testing. If you say that you are happy at lower resolutions then you are not testing your system's power.

My Ti-84 calculator can run a Galaga clone. My computer can run the same. Does it imply that the two are equals? No. You only test what a system is capable of by pushing settings to the maximum, and seeing when you can no longer do something. In gaming, this is adding all the post-processing effects, and measuring whether you get a playable frame rate. Synthetic benchmarks are just that, attempts to standardize performance readings.

If you don't agree with this, then you aren't really testing; what you are doing is playing with yourself. You set the rules, box in the performance, and reset the rules in order to get the outcome you want. Don't like the FPS being generated? All you do is crank the resolution down. Don't like the synthetic benchmarks? Crank the frequency up for a single viable test run, and get big numbers to prove that whatever you think is right is right. This isn't objective reason, it's putting blinders on.

I challenge you to lose the blinders, and to come back to reality. If you've got to set the resolution lower than native on a screen then you're creating blurriness. If you can't tell the difference between 720p and 1080p, then you're not viewing the screen properly. What I'm seeing is someone growing older, with vision getting less acute. I've seen the same before from my parents. They got glasses, and suddenly the difference between 720p and 1080p was clear. The price of a Blu-ray movie suddenly made sense, because the visual quality was so much better. Saying that improved resolutions don't provide anything more is admitting that you need some help.

The q6600 is adequate for what you've cited. What you've cited is a severely limited facet of gaming. If you have so little ambition, then you cannot argue that a replacement is needed. When you open up your standards, you can see why the q6600 isn't really living up to more modern standards.
 
Shambles1980
I dunno if you missed the part where I said that higher resolutions and/or higher display settings resulted in more drops below 60fps, which means that the GPU was then the component slowing down the system,
resulting in more drops below 60 (mostly into the 50s).

With reduced resolutions and lower graphics settings the 60fps target was achieved for a longer period.
The settings chosen were the ones that most suited the 7850 at 900/1200. The fps only dropped down to 44fps when the CPU was taxed, and changing the settings could not get rid of this 44fps low point. This removed the GPU from the equation as the reason I could not sustain 60fps, but the CPU was confirmed as the reason I had 44fps dips... (this is further confirmed by the 55fps dips at 3.7GHz).

Now if you don't think that people in the real world would adjust settings for the best balance of performance and frames, then I think you are the one that needs to remove the blinkers.
I set the game up to run as efficiently as possible on the hardware. Increasing the resolution dropped the frame rates, that's a simple fact.
If the 8120 can achieve more constant frames at the same or better settings then it is better. If it can't then it isn't..
It's a pretty simple premise.

The 3 benches are set up the same for the testing process. I chose 1080p because for some reason some people seem to think it makes a difference to image quality (personally I really don't see it), but as these are synthetic benches they can be set to a default quality at 1080p resolution to see which CPU results in the better output.

I think you have a misconception that at 1080p you cannot be bottlenecked because the CPU is under less load, but the fact of the matter is, at least with the game I tested, 1080p caused the GPU to be the reason the fps dropped. At 720p the GPU was able to handle the workload at the outlined settings.
You seem to be confusing a set of tests meant to compare 2 things evenly and as close to a real world setting as possible with what you would want a system to do ("run all games at full res, no issues").

What you propose is to set everything to the highest settings and then say "well I can't play it like that, end of test".
My method is to set it up so it is playable with the MINIMUM amount of frame drops under my target of 60fps, then use those results as a comparison for the 8120 when the RAM arrives.
"I don't care if I can get 8000 fps in Gauntlet on my PC when I could only get 20 fps on my old system.. All I want to know is what it takes to play at a constant 60fps with no drops, and which system can achieve that at the highest settings. The highest settings at which the Q6600 came closest to achieving this were outlined previously; any higher and it lowered the fps, any lower and it did not change.. so that is the sweet spot."
It's a simple premise: if the 8120 can run at those same settings without ever dropping below 60, or if it is able to run with only low points of 50+, then it was able to do the same workload better.

The difference being that I don't just test 2 CPUs and say "well I can't play this maxed out with either one of these, so don't bother buying one",
and instead I can say "the 8120 actually works better at these settings by not dropping below 60fps at all", or I can say
"the Q6600 was better as it achieved the 60fps target and only dropped to 44fps where the 8120 would drop to 30".

Testing the power of something when it blatantly is not powerful enough to do the things you are testing it with is a bit redundant.
In what reality do people stick everything on max, try to play, get 4fps, say "well I can't play that", and just not adjust any settings till they can?
This is one of the fundamental issues I have with bench tests: they are all flawed. They never simply test at which settings and resolution the setup works best and then compare those settings.

If benchmarks stated that game A runs at a constant 60fps with v-sync on at these settings and this resolution with CPU A and GPU A,
but CPU B and GPU B can do exactly the same at those settings and that resolution, then that would be a better test.

There really is little reason to test the maximum possible settings and show that this system runs 14fps and that one runs 15fps,
when in reality, at settings tweaked by someone using it, there could be a difference of 20fps at the actual settings people with that setup would probably use..
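The comparison being described boils down to something like this sketch (the fps numbers are invented placeholders, just to show how the two runs would be judged):

Code (Python):
def summarize(fps_samples, target=60):
    # minimum fps and how often the run dipped below the target
    below = [f for f in fps_samples if f < target]
    return min(fps_samples), 100.0 * len(below) / len(fps_samples)

q6600_run  = [62, 60, 58, 61, 44, 59, 63, 60]   # placeholder samples at the chosen settings
fx8120_run = [61, 60, 60, 62, 55, 60, 61, 60]   # placeholder samples at the same settings

for name, run in (("Q6600 @ 3.0", q6600_run), ("FX-8120", fx8120_run)):
    lowest, pct_below = summarize(run)
    print(f"{name}: min {lowest} fps, below 60 for {pct_below:.0f}% of samples")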
 

Toothless

Well duh you're going to get under 60fps if you crank the resolution up. You say you run 720p at maxed settings, which ANY modern mid-high range GPU can handle, but when you keep those maxed settings and bump to 1080p, it's REALLY EASY TO SEE THAT ANY MID-HIGH RANGE GPU WILL HAVE ISSUES (not ultra range, which is 680 and up). This is why games have "Settings." I run an MSI GTX 660 OC with an AMD Athlon II X4 620. Now while my CPU isn't as powerful as yours, I can still maintain 50-70FPS in BF3 multiplayer at 1080p. Why? Because I don't go shitfaced with cranking the settings up. There is a HUGE difference between 720p and 1080p. (Source:

-Skyrim on a GT 220 @720p on a 900p monitor. Lowest possible settings. Pixelated. 20-40 FPS. (Aged and low grade GPU)

-Skyrim on a GTX 660 OC @720p on a 900p monitor. Ultra settings. Not as many noticeable pixels. High FPS.

-Skyrim on a GTX 660 OC @1080p on a 1080p monitor. Lowest possible settings. No major noticeable pixels. High FPS.

Also ran at Ultra settings + HD packs. FPS capped due to the CPU being aged and low grade.)

The only reason you don't see a difference is because you crank the settings so high that you can't really see the 720p pixels, which is fine. (AA will do that)

If you're going to game at such a low resolution, then upgrading is worthless, but it's too late for that. You're getting an upgrade. Great. You can probably run games better. Great. Now turn off your high-assed settings and try gaming at 1080p. Still see no difference? Great. Then stay at 720p. But by no means is there ANY logical reason to say that there is no difference between 1080p and 720p. I have bad eyesight at times due to medical reasons, and I can still see the difference. Maybe you're using your monitor wrong, or your eyesight isn't as good as it used to be. Who knows?

All in all, you didn't need an upgrade, because you play at a setting where even a low-end i3 can run easily. You don't push your hardware to the point of quality because you're bent on quantity, but that's your choice, not ours.
 
Joined
Apr 10, 2012
Messages
1,400 (0.32/day)
Location
78°55' N, 11°56' E
System Name -aLiEn beaTs-
Processor Intel i7 11700kf @ 5.055Ghz
Motherboard MSI Z490 Unify
Cooling Corsair H115i Pro RGB
Memory G.skill Royal Silver 4400 cl17 @ 4403mhz
Video Card(s) Zotac GTX 980TI AMP!Omega Factory OC 1418MHz
Storage Intel SSD 330, Crucial SSD MX300 & MX500
Display(s) Samsung C24FG73 144HZ
Case CoolerMaster HAF 932 USB3.0
Audio Device(s) X-Fi Titanium HD @ 2.1 Bose acoustimass 5
Power Supply CoolerMaster 850W v2 gold atx 2.52
Mouse Razer viper 8k
Keyboard Logitech G19s
Software Windows 11 Pro 21h2 64Bit
Benchmark Scores ► ♪♫♪♩♬♫♪♭
Shambles1980
Well, at the settings I play at, the CPU I had still couldn't keep the fps at a constant 60 with v-sync on. Given that is the only thing I care about, and no matter how I tweak the settings it simply is not possible to achieve that with that CPU, how is it I did not need an upgrade?

Also, I can lower the settings and play at 1080p, but the frames would still be less than my target..
I honestly prefer a constant 60fps with no drops over a higher resolution (which I really can't tell the difference with) that drops frames every time I move, or stutters when I try to turn around fast.
AA does make the game look better, so perhaps that is why I don't see the difference between 720p and 1080p: at 1080p I can't have AA on without getting frame drops, while at 720p I can have AA on, it looks just as good to me, and I don't get the frame drops..
 

Toothless

Well, at the settings I play at, the CPU I had still couldn't keep the fps at a constant 60 with v-sync on. Given that is the only thing I care about, and no matter how I tweak the settings it simply is not possible to achieve that with that CPU, how is it I did not need an upgrade?
Vsync uses more resources... Just keep it off.
 
Shambles1980
Vsync uses more resources... Just keep it off.
Triple buffered v-sync uses more resources than no v-sync?
I was pretty sure that it saved frames up so that it could display them, reducing drops...

Or, more accurately, it continues to draw frames even when the front buffer does not need them, because it has room to do so in the 3rd buffer, which means you don't get the performance hit at all..
 

Toothless

You really aren't that tech savvy, are you? Vsync forces the GPU to put out a set number of frames instead of rendering naturally. You'll ONLY see 60fps due to your monitor, assuming it is a 60Hz one. You'll only see as many frames as the hertz your monitor will give out. The ONLY time you need Vsync is if you're getting frame issues while gaming, such as frame tearing. Most gamers leave Vsync off because it doesn't hurt to have 120fps, because all you'll see is 60 and no more.
 
Shambles1980
You really aren't that tech savvy, are you? Vsync forces the GPU to put out a set number of frames instead of rendering naturally. You'll ONLY see 60fps due to your monitor, assuming it is a 60Hz one. You'll only see as many frames as the hertz your monitor will give out. The ONLY time you need Vsync is if you're getting frame issues while gaming, such as frame tearing. Most gamers leave Vsync off because it doesn't hurt to have 120fps, because all you'll see is 60 and no more.

All I see is 60fps ("that's all anyone can see on a 60Hz monitor"),
but the game renders as many as it can when using triple buffering.
If Fraps or similar was counting the actual frames the card was rendering, and not the frames sent to the monitor every refresh, it would show 120fps or 300 or whatever it happens to be with triple buffered v-sync, but it will only count the actually refreshed frames, which is 60, because it only looks at the front buffer...

Triple buffering lets the card render your 120-300fps so it works to its full potential, but the front buffer only refreshes once per screen refresh, giving you the same performance as no v-sync but without tearing.. (the buffers have the latest frame to display exactly when you need it because the GPU never stopped rendering them)

There is also limited time queuing, which is also sometimes called triple buffering; that is not the same, but in most instances it works just as well, though it can sometimes display an old frame, which would cause input lag.
Real triple buffering always displays the latest frame, so you always get the same performance as no v-sync but with 0% chance of tearing.

V-sync alone (not triple or double buffered) however forces the card to impose an artificial delay. This will leave you slower than your opponent in a first person shooter, for instance; it could be 10ms slower displaying the image on your screen than it was on your opponent's screen if they had no v-sync.
In that situation then yes, you are right, using v-sync is slower because it is delaying the image.. (but given I have repeatedly said triple buffered v-sync, that point is totally moot)

Also, with no v-sync on, if you don't see tearing then you gained nothing from not having it on anyway, as the frame that was just updated is pretty much the same as the one that was there anyway.

But please explain to me how triple buffering is bad, because I'm not tech savvy enough to get your point.
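A rough way to see the triple-buffering claim with numbers (a simplified model assuming a 60Hz display and a GPU that needs 20ms per frame; real driver queues are more complicated):

Code (Python):
import math

REFRESH_HZ = 60                      # display refresh rate (assumed)
RENDER_MS = 20.0                     # GPU frame time: 20 ms -> capable of ~50 fps (assumed)
refresh_ms = 1000.0 / REFRESH_HZ

# Double-buffered v-sync: after finishing a frame the GPU has nowhere to draw
# until the next refresh frees a buffer, so the frame interval snaps up to a
# whole number of refreshes (20 ms becomes ~33.3 ms -> ~30 fps).
double_buffered_interval = math.ceil(RENDER_MS / refresh_ms) * refresh_ms
print(f"double-buffered v-sync: ~{1000.0 / double_buffered_interval:.0f} fps")

# Triple buffering: the GPU keeps rendering into the spare back buffer, so new
# frames keep arriving at its natural rate, capped only by the refresh rate.
triple_buffered_fps = min(REFRESH_HZ, 1000.0 / RENDER_MS)
print(f"triple-buffered v-sync: ~{triple_buffered_fps:.0f} fps")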
 

Toothless

Triple buffering lets the card render your 120-300fps so it works to its full potential, but the front buffer only refreshes once per screen refresh, giving you the same performance as no v-sync but without tearing..

Okay so you're making your GPU work harder than it should. Tell me why that's a good thing.
 
lilhasselhoffer
Well, at the settings I play at, the CPU I had still couldn't keep the fps at a constant 60 with v-sync on. Given that is the only thing I care about, and no matter how I tweak the settings it simply is not possible to achieve that with that CPU, how is it I did not need an upgrade?

Also, I can lower the settings and play at 1080p, but the frames would still be less than my target..
I honestly prefer a constant 60fps with no drops over a higher resolution (which I really can't tell the difference with) that drops frames every time I move, or stutters when I try to turn around fast.
AA does make the game look better, so perhaps that is why I don't see the difference between 720p and 1080p: at 1080p I can't have AA on without getting frame drops, while at 720p I can have AA on, it looks just as good to me, and I don't get the frame drops..


I'm going to take one last go at trying to explain this to you. My Raspberry-Pi can run Doom. It can run 1080p video. It can play games. Using your logic, my Raspberry-Pi is as good as an i3 or i5 processor, because at these specific settings it works. It also beats out the Intel Gigabit ethernet controller on my board, because it can connect to a dozen different sensors. My NIC only connects to one thing, and only transmits data using one protocol.

Is the nature of your argument apparent yet? If not, then I can also prove that my Ti-84 calculator beats the pants off of your q6600.


Now, stop putting numbers ahead of everything else. Gaming isn't about one processor beating out another by two points in a synthetic test, it's about being able to play the game. You crank all of your settings to high, fire up the game, and play. If the frame rate drops, you dial back the post-processing features, not the pixel count. If you have to slide the resolution down then you don't have enough power to adequately play the game. You've already said this is what you do, and still have 84% CPU utilization. All the synthetic benches in the world cannot counter-indicate that this means your system isn't up to the task of providing real gaming performance any more.


None of this even touches on faster interconnects (PCI-e 2.0 and above), and the new instruction sets available in new systems. There's nothing like good new optimized code to show you how much quality a new processor and platform can deliver. You are welcome to deny this, bury your head in the sand, and continue believing the q6600 is more than a venerable relic. Needless to say, I would be getting a new system to replace this aging system if ...it was my choice. Honestly, the Core processors are past their prime. They aren't bad, but a comparison between an $800 Core2Quad and a $100 i3 today is foolish. I made the same point in a previous post, but it seems to fall on deaf ears. 4960x versus q6600 is a rather stark difference.

Consider that a test of processing power is not a test for gaming. The numbers already quoted are computational values. My way of showing you that a q6600 isn't good any more is to set that monitor to native resolution, set the graphics to moderate settings, and let you see the crappy frame rates. By lowering resolution to playable levels you hide the age of your CPU. It's hard to see the inadequacies of a painting if you keep it in a dark room, just like it's hard to see a processor's shortcomings if you never push it to deliver a better performance.


Edit starts at the ellipsis. Lost consciousness before finishing the post.
 
Shambles1980
I'm going to take one last go at trying to explain this to you. My Raspberry-Pi can run Doom. It can run 1080p video. It can play games. Using your logic, my Raspberry-Pi is as good as an i3 or i5 processor, because at these specific settings it works. It also beats out the Intel Gigabit ethernet controller on my board, because it can connect to a dozen different sensors. My NIC only connects to one thing, and only transmits data using one protocol.

Is the nature of your argument apparent yet? If not, then I can also prove that my Ti-84 calculator beats the pants off of your q6600.


Now, stop putting numbers ahead of everything else. Gaming isn't about one processor beating out another by two points in a synthetic test, it's about being able to play the game. You crank all of your settings to high, fire up the game, and play. If the frame rate drops, you dial back the post-processing features, not the pixel count. If you have to slide the resolution down then you don't have enough power to adequately play the game. You've already said this is what you do, and still have 84% CPU utilization. All the synthetic benches in the world cannot counter-indicate that this means your system isn't up to the task of providing real gaming performance any more.


None of this even touches on faster interconnects (PCI-e 2.0 and above), and the new instruction sets available in new systems. There's nothing like good new optimized code to show you how much quality a new processor and platform can deliver. You are welcome to deny this, bury your head in the sand, and continue believing the q6600 is more than a venerable relic. Needless to say, I would be getting a new system to replace this aging system if Ieas

I think you're simply missing my point, because my point seems to be exactly the same as yours. lol. We just seem to word it differently.
I have already bought the upgraded system, for what I could afford, "if you read the full thread".
After that people were stating that my Q6600 may well have been better than the 8120 I bought to replace it.
So after a few heated words between a few other forum members I decided the best thing to do was to run a series of tests on the Q6600 with the outlined test sequences (the 1 game I still had installed and 3 bench tests),
and then I would run the exact same tests on the 8120 when it arrived ("still waiting on the RAM").
Hopefully this will show that either the Q6600 is in fact better than the 8120, they are about the same, or the 8120 is better than the Q6600.
At no point did I say either of these processors was as good as or better than an i7 lol..

I set the game at the settings I did because they were the settings that best suited the GPU and CPU and were the actual sweet spot for them.

I actually said the CPU was at 84% usage at the options screen.. it was fully used in the game, and the GPU also gets to 99%.. So it was configured properly.
I do believe that what has happened here is you did not fully read the thread and what has changed along the way, but that is something I often do too.
 
Shambles1980
Okay so you're making your GPU work harder than it should. Tell me why that's a good thing.
How is it working harder than having v-sync off?
I really don't see what you're trying to get at here..
 

Bucho

Welcome to TPU!
Thx

(Xeon Servers)

The lack of any one person's ability to see improvements in processor speed is their own failing, not that of processor stagnation. Likewise, CPU utilization is rarely as simple as people credit it.

Okay, hold on, you didn't quote all I wrote, and maybe (since English is not my native language [I live in Austria, that's in central Europe]) I can't express myself the way I want. So please excuse my errors that may lead to a misunderstanding.
I did mention that you can slap a 10 core Xeon on your average desktop Socket 2011 board, but if you are the average customer you will have no benefit from it. Of course there are some programs that can utilize all cores, and for those this CPU would be correspondingly faster than a dual or quad core. And yes, it is true that in the professional / server segment (that's what I meant when I wrote "in certain classes") the performance increased just the same as it did back then. It is a huge improvement, like you said, to have only one server handling the needs of what had to be done with multiple machines back then.

We are talking about the average user here; most of them just use the PC to browse the internet, communicate, play games or write/paint stuff. Then there is the user at work who uses the PC for communication/general office use/development (programming)/... .
Both, in 99.9% of cases, do not benefit from more cores or more CPUs.
In this case here we are talking about a gamer who probably uses his PC 50/50 for games/internet.

You cite the great innovations of the past in greater frequencies, but seem to forget why frequency increases have functionally become minute. Frequency increases actually stopped yielding huge results almost a decade ago. The real innovation that allowed the CPU market to improve was better miniaturization, new materials, and increases in the efficiency of chip design that increased IPC. I'd gladly pit one of the Athlon CPUs versus the P4 chips released at the same time, and demonstrate how better design won out over raw frequencies. This lesson holds true today, when AMD FX processors far more easily attain high frequencies, but are beaten by the Intel offerings with significantly lower frequencies.
I wasn't praising high MHz and wasn't pointing that out to be the messiah of performance, but rather explaining why we saw and felt quite a big boost in performance when changing a system or CPU. I mean we went from 4.77 (IBM XT) to 8+ (286) to 16+ (386) ... up to 3800 MHz (Pentium 4 HT) quite seamlessly (at least with Intel CPUs). And CPU architectures change and have their limits, or let's say frequency ranges where they perform best (maybe that's because of timings, latency and the general structure of how the CPU/whole system is built). So IPC AND frequency rose quite a lot, since the architectures were getting better (and in the case where that wasn't quite true (Netburst), they only dramatically raised the frequency). I have been there all along the way and had a Pentium III-S 1.4GHz that outperformed a Willamette P4 (with SD-RAM) even up to 1.8GHz. But after a while I bought a P4 2.8GHz with HT and 200MHz FSB and clocked it even higher, and was happy because the increase was there thanks to the big MHz and the overall changes (faster memory, bus system and so on). Then I got myself a nice Socket 479 adapter and a Pentium-M and clocked that to 2.6GHz, and no P4 under 4GHz stood a chance (in games and even some programs too). I didn't mind the MHz because the performance was there. Then they switched to dual cores and even quad cores, new architectures (Core 2 and finally Core i), but the core count stayed the same and the frequency was only slowly rising. Okay, games and applications had to catch up with dual and even four core support ... but it seems they are still doing that, and there are many out there that only use one core.

Your point about core count is accurate, but misguided. The sad truth is that gaming is currently (though hopefully will not be for too much longer) more about consoles than about PCs. This trend was driven by publishers, who have treated the PC market as second-class citizens for years. There are games out there that have made use of high end multi-core processors. We have had the capability to use all that extra power for years, but the cost to use it has been deemed prohibitively high. Since the Core processors were introduced we've been hearing about how games will be coming out soon that use all those extra CPU cores. Our reality is that these games exist, but don't have enough juice yet to make them the norm.
But that's the problem ... if we have nothing to feed the cores with, they are useless (for gaming needs). And better IPC performance alone, since frequencies have been almost the same for the last few years, doesn't really bring the Ohhhs and Ahhhs we need to be motivated to buy new PCs.
-So you have a quad core - fine ... more cores are barely needed nowadays (hope that changes in the future, or maybe not to "need" them but to benefit from more cores)
-You do not have the oldest generation (so the IPC is okay)
-and/or frequencies are high
Why do I need to upgrade? Because of 30-40% more CPU power (that maybe doesn't even translate into performance in games or apps) and somewhat lower power usage?

And remember I am not talking about server or workstation class PCs ... multi-CPU desktop systems existed many years ago, like dual Pentium Pro, Pentium II/III and Celerons, Athlon MPs and so on, but they were ignored for gaming all the way along. I am sure (at least for the last few years) a lot of developers moved their focus to console games, like you said. With the PS3 and XB360 it slowly changed so that games use more than one core. I faced that with my Pentium-M back when GTA IV came out on PC. I couldn't really play it because of that single core, no matter what settings I used (the GPU was a GeForce 7900GT 512MB AGP, overclocked).
Today there is no game that I can't play (even quite decently) with my ~6 year old Core 2 Quad (overclocked quite well).
Think of trying to play GTA IV (Dec 2008) with a late 2002/early 2003 CPU. That would have been a P4 2.66/2.8GHz with a 133MHz FSB or an Athlon XP 2600+ / 2800+.
Take that 6 year time period at any date back then and try to play a top game on a 6 year old CPU.

I still remember back then when we were converting DVDs to DivX on our P III 500MHz ... and that took all night (a good 6-8 hours or so).
A few years later, still with a single core P4 or Athlon 64 with high MHz, we did the same in like 2-3 hours. Then with a dual core in like 1.5 hours, and now with a fast quad core in like 40 minutes.
You see, those jumps were big every time, because the IPC got better, the MHz rose, and finally the core count went up.
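Put as rough speedup factors (using midpoints of the times mentioned above, so approximate):

Code (Python):
# Midpoints of the from-memory conversion times above (approximate figures).
times_hours = {
    "P III 500 (single core)": 7.0,      # "6-8 hours"
    "fast P4 / Athlon 64":     2.5,      # "2-3 hours"
    "dual core":               1.5,
    "fast quad core":          40 / 60,  # "about 40 minutes"
}
baseline = times_hours["P III 500 (single core)"]
for cpu, hours in times_hours.items():
    print(f"{cpu}: ~{baseline / hours:.1f}x faster than the P III 500")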
All I am trying to say is that for the average user/gamer with a good quad core there seems to be no need to upgrade, and it also seems it will stay that way for some more time, since even Skylake will have only 4 cores in the mainstream market.
That's why even a used i5-2500K is sold at quite a high price, because if you have one, I bet you can stay with it for even the next 2-3 years and have no problems playing the latest games.
 

Toothless

How is it working harder than having v-sync off?
I really don't see what you're trying to get at here..
You're forcing your GPU to maintain 60fps, and to render 3x that. (Or so you say)

Having your CPU/GPU at 99% constantly in a game is not good for the health of the hardware.
 

Bucho

...
After that people were stating that my Q6600 may well have been better than the 8120 I bought to replace it.
So after a few heated words between a few other forum members I decided the best thing to do was to run a series of tests on the Q6600 with the outlined test sequences (the 1 game I still had installed and 3 bench tests),
and then I would run the exact same tests on the 8120 when it arrived ("still waiting on the RAM").
Hopefully this will show that either the Q6600 is in fact better than the 8120, they are about the same, or the 8120 is better than the Q6600.
At no point did I say either of these processors was as good as or better than an i7 lol..

...

That link @TheHunter posted is quite interesting
http://www.legitreviews.com/upgrading-from-intel-core-2-quad-q6600-to-core-i7-4770k_2247

Funny to see that in games at 1080p and high settings both systems are close together and the GPU is the limit. At lower res and/or lower settings the i7 goes up and away.
Since most people will play at 1080p and rather high settings they may not get that huge an upgrade with that i7. But like you said, what is important to you, and I think to me too, is the minimum FPS. Too bad he didn't show those and only tested two games.
Keep in mind that the Q6600 was at stock and the i7-4770K was too, so it's 4 old cores @ 2.4GHz against 4 new cores (and HT) @ 3.5+GHz, plus all the goodies around them: no 266MHz FSB, no slow DDR2 RAM, no slower PCI-E bus ...
 

Bucho

You're forcing your GPU to maintain 60fps, and to render 3x that. (Or so you say)

Having your CPU/GPU at 99% constantly in a game is not good for the health of the hardware.

Triple buffering in combination with v-sync doesn't mean it renders 60fps three times, but that it renders at most 3 frames ahead of the monitor sync.

And why is it not good for the hardware to run at 100%???
That's the way it should be, otherwise you will not use one of your components (CPU/GPU) to its full potential.
(Like using a Titan Black edition on a Celeron D, or an i7-4960X with a GeForce 6200 PCI-E for 3D gaming)

As long as temperatures and voltages are within certain limits these components will not degrade much faster compared to running them at like 60%.
 