
Intel Pentium Silver J5005 Catches Up With Legendary Core 2 Quad Q6600

Joined
Sep 15, 2007
Messages
3,944 (0.65/day)
Location
Police/Nanny State of America
Processor OCed 5800X3D
Motherboard Asucks C6H
Cooling Air
Memory 32GB
Video Card(s) OCed 6800XT
Storage NVMees
Display(s) 32" Dull curved 1440
Case Freebie glass idk
Audio Device(s) Sennheiser
Power Supply Don't even remember
If Intel had really been sitting on their hands for the past decade, you'd expect AMD's latest CPUs to have blown past them. That hasn't happened, and the reason is that the wave of Moore's Law has been dashed against the rock that is the fundamental limits of physical silicon; all the billions in the world can't overcome physics. Not to mention that CPU design is really f**king difficult and all the easy wins have long since been won, leaving only the really, really difficult stuff.

Some might argue that the heritage of the Core architecture, which is itself descended from the original P6 architecture, is to blame, but I'd argue exactly the opposite: that P6 was such a good design that its fundamentals remain in use over two decades after its conception. Perhaps Core is due for replacement, but anything that hopes to succeed it will have to be very special.



Comparing ARM benchmarks to x86 benchmarks is idiotic.

If AMD had 1,000 times the cash, like Intel... you can do the math. Zen was created from ashes, not from a vault stuffed with gold like Scrooge McDuck's (which Intel has, relatively speaking). It's painfully obvious Intel did nothing, b/c they didn't have to. All they did was refine to extract maximum earnings from the product (which is what every grubby investor and CEO demands). Performance, prowess, reputation, etc. are of NO CONCERN. Besides, they can just buy all of those (see most publications in history for that proof).
 
Joined
Sep 17, 2014
Messages
20,929 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
If AMD had 1,000 times the cash like Intel....you can do the math. Zen was created from ashes and not a vault stuffed with gold like Scrooge McDuck (that Intel has relatively speaking). It's painfully obvious Intel did nothing, b/c they didn't have to. All they did was refine to extract maximum earnings from the product (which is what every grubby investor and CEO demands). Performance, prowess, reputation, etc, is of NO CONCERN. Besides, they can just buy all of those (see most publications in history for that proof).

It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world, and it's a trend that is greater than Intel. Even on ARM you see the leaps getting smaller, the CPU releases getting less interesting, and midrange SoCs doing all the work 'we' need them to do on a smartphone. It's also evident why and how ARM has leaped as it did in such a short time: all it needed was to re-implement all of the tricks in the best practices book. They tried some new things such as big.LITTLE; beyond that, it's minor refinements.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the Passmark comparison earlier: the 65nm Q6600 needs 105W, the 32nm part needs 35W and the 14nm part needs 10W. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield a 2W advantage at the same performance. But on the Q6600, 20% efficiency is more than 20W; that's enough to put two additional Pentiums on the same power budget and basically triple the score.
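To make that diminishing-returns point concrete, here is a quick back-of-the-envelope sketch; the wattages are the rough figures quoted above, not measurements.

```python
# Rough per-chip power figures quoted in this thread (not measured values).
chips = {
    "65nm Q6600": 105,          # watts
    "32nm part": 35,
    "14nm Pentium Silver": 10,
}

EFFICIENCY_GAIN = 0.20  # a hypothetical 20% efficiency improvement

for name, watts in chips.items():
    saved = watts * EFFICIENCY_GAIN
    print(f"{name}: {watts}W -> a 20% gain frees up ~{saved:.0f}W")

# Prints:
# 65nm Q6600: 105W -> a 20% gain frees up ~21W
# 32nm part: 35W -> a 20% gain frees up ~7W
# 14nm Pentium Silver: 10W -> a 20% gain frees up ~2W
```

The same relative improvement buys less and less absolute headroom each node: roughly 21W on the Q6600 (room for two more 10W Pentiums, as noted above) versus roughly 2W today.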

The eternal AMD counter of 'but what if they had money' is simply a fairy tale of could-have, would-have that never really happened. What if AMD had not released FX? What if they had fired Raja two years ago? Who knows. They didn't, and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now; even Intel has not provided us with their next best thing that will obliterate all that preceded it. It's simply not there, and the demand that exists for high-performance CPUs can be satisfied in other ways, like multi-socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.
 
Joined
Sep 15, 2007
Messages
3,944 (0.65/day)
Location
Police/Nanny State of America
Processor OCed 5800X3D
Motherboard Asucks C6H
Cooling Air
Memory 32GB
Video Card(s) OCed 6800XT
Storage NVMees
Display(s) 32" Dull curved 1440
Case Freebie glass idk
Audio Device(s) Sennheiser
Power Supply Don't even remember
It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world and its a trend that is greater than Intel. Even on ARM you see the leaps getting smaller, the CPU releases less interesting and you see midrange SoCs doing all the work 'we' need them to do on a smartphone.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the comparison of Passmark earlier: the 65nm Q6600 needs 105w, the 32nm needs 35w and the 14nm needs 10w. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield 2w advantage at the same performance. But on the Q6600, 20% efficiency is more than 20w.

The eternal AMD-counter 'but what if they had money' is simply a fairy tale of could have would have but never really did happen. What if AMD had not released FX. What if they had fired Raja two years ago. Who knows. They didn't and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now, even Intel has not provided us with their next best thing that will obliterate all that preceded it. Its simply not there and the demand that exists for high performance CPUs can be satisfied in other ways, like multi socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.

You do realize that Intel basically proves this by not even working on a new design until Zen shipped, right? Now they're busy copying the "glue" for the real next gen. After all the years of refreshing (it will be FOUR years of no changes at all, plus all the way back to SB), they didn't even work on a new architecture. It really can't be more obvious. They were content with raking in the cash on a highly refined architecture. Plus, you just saw it with core counts: quad core was high end for desktop and the rest were out of reach economically. Magically, cores are increased across the board (and prices slashed). It has everything to do with holding back to generate cash, b/c there was no need without AMD pressure. Nvidia is doing the same thing.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I knew someone would say this, that's why I said base clock speed alone. ;)
But that makes your extrapolation fall apart entirely, as you'd also need the boost clock to scale to match for performance to increase the way you say. So it would need a 2.4GHz base clock and a 4.5GHz boost clock. Which ... well, won't happen :p
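A tiny sanity check of that claim, assuming the J5005's nominal 1.5 GHz base and 2.8 GHz burst clocks (spec-sheet values, not figures from this thread) and perfectly linear scaling with clock speed:

```python
# Assumed J5005 spec values (not quoted in the thread): 1.5 GHz base, 2.8 GHz burst.
base_now, burst_now = 1.5, 2.8     # GHz
base_needed = 2.4                  # GHz, the base clock floated above

factor = base_needed / base_now    # how much the base clock would have to scale
burst_needed = burst_now * factor  # the burst clock must scale by the same factor

print(f"scaling factor: {factor:.2f}x")                  # 1.60x
print(f"required burst clock: ~{burst_needed:.1f} GHz")  # ~4.5 GHz
```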
 
Joined
Sep 17, 2014
Messages
20,929 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
You do realize that intel basically proves this by not even working on a new design until Zen shipped, right? Now, they're busy copying the "glue" for the real next gen. After all the years of refreshing (it will be FOUR years of no changes at all, plus all the way back to SB) and they didn't even work on a new architecture. It really can't be more obvious. They were content on raking in the cash on a highly refined architecture. Plus, you just saw it with core counts. Quad core was high end for desktop and the rest were out of reach economically. Magically, cores are increased across the board (and prices slashed). It has everything to do with holding back to generate cash, b/c there was no need without AMD pressure.

To you that is proof of your argument, and to me it is proof of mine. It's a matter of interpretation, and I don't buy the one that says Intel was sitting on its hands entirely. Of course they COULD have done more, but there was no economic incentive, like you also say. And last I checked, Intel was in the business of making money. It's an idea AMD could learn from ;)

Coffee Lake, for example. The 6-core wave was already in the works years ago. They still had to 'rush' it, apparently. The reality is that it's just bad planning, but to then think that Intel would have been capable of so much more is perhaps giving Intel too much credit. The fact that today we haven't seen any radical new design speaks volumes; there are no radical new designs. Even AMD, with its 'radical new design', only has new iterations on a roadmap that refine what's there now. And what is there now? A CPU that falls just shy of matching Intel in most situations and exceeds it in a much smaller set of situations, at a very similar power budget. It's so similar, in fact, you'd almost think it was on purpose.

None of that makes sense if the next best CPU design is up for grabs, be it at the rich or the poor company. The reality is: it's not up for grabs. It's difficult to improve on what we have today. Any investment into a better design is going to be extremely time-consuming and costly, with a minimal return on the investment. Why do you think the foundries etc. are pushing those smaller nodes so hard? The reason is simple: it's where the greatest gains are going to come from in the foreseeable future, and the entire semicon industry rides on those gains to sell product and keep their production lines profitable. No gain means the whole machine comes to a grinding halt and sales plummet as consumers only have sidegrades to choose from. (Again: look at recent history for proof of that.)
 
Joined
Jun 21, 2011
Messages
165 (0.04/day)
It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world and its a trend that is greater than Intel. Even on ARM you see the leaps getting smaller, the CPU releases less interesting and you see midrange SoCs doing all the work 'we' need them to do on a smartphone. Its also evident why and how ARM has leaped as it did in such a short time: all it needed was to re-implement all of the tricks in the best practices book. They tried some new things such as big.Little; Beyond that, its minor refinements.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the comparison of Passmark earlier: the 65nm Q6600 needs 105w, the 32nm needs 35w and the 14nm needs 10w. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield 2w advantage at the same performance. But on the Q6600, 20% efficiency is more than 20w; that's enough to put two additional Pentiums on the same power budget and triple the score, basically.

The eternal AMD-counter 'but what if they had money' is simply a fairy tale of could have would have but never really did happen. What if AMD had not released FX. What if they had fired Raja two years ago. Who knows. They didn't and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now, even Intel has not provided us with their next best thing that will obliterate all that preceded it. Its simply not there and the demand that exists for high performance CPUs can be satisfied in other ways, like multi socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.

Agreed. There hasn't been a revolution in chipmaking for quite a while now... Intel's own procrastination/complacency has allowed AMD to catch up on many levels. Intel's neglect / cruise control in manufacturing has even allowed competitors to overtake them. I'd never have imagined a day when GlobalFoundries, of all things, would be delivering Intel a fab smackdown (even if it is borrowed from Samsung).

With process limitations we can see that Intel's designs come close to the design's threshold on power/performance, and that is its main issue right now. They can wiggle around and get some extra performance here and there, but they really need a jump to the 7nm or lower nodes. 10nm is a loss already. They need to get their fabbing processes in a row so they can plot a course for their CPUs. Personally, I think this is the problem with doing away with engineers and sticking with the managers who can manage investor relations. They talk up the BS for the investors' sake.

Intel, right now, has a server strategy. It doesn't have a mobile or desktop one. Because the big thing people talk about around the boardroom coffee pot (or caviar tray) is Cloud.
 
Joined
Apr 12, 2013
Messages
6,750 (1.67/day)
Agreed. There hasn't been a revolution in chipmaking for quite a while now... Intel's own procrastination/complacency has led AMD to catch-up on many levels. Intel's neglect / cruise control in manufacturing has allowed competitors to overtake them, even. I'd never imagine a day where GlobalFoundries, of all things, was delivering Intel some fab smackdown (even if it is borrowed from Samsung).

With process limitations we can see that Intel's designs come close to the design's threshold on power/performance, and that is it's main issue right now. They can wiggle around and get some extra performance here and there, but they really need a jump on the 7nm or lower nodes. 10nm is a loss already. They need to get their fabbing processes in a row so they can plot a course for their CPUs. Personally, I think this is the problem with doing away with engineers and sticking with the managers who can manage Investor relations. They talk up the BS for the Investors' sake.

Intel, right now, has a server strategy. It doesn't have a mobile or desktop one. Because the big thing people talk about around the boardroom coffee pot (or caviar tray) is Cloud.
You're combining the two issues of chip making ~ chip designs are still improving rapidly in the ARM world, part of which comes from going really wide, like Ax & Mongoose. Also, they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes are coming to a halt. The next 10 years might well be the last time we see Si being mentioned wrt chips, before something like Graphene or Ge (alloy?) takes over.
 
Joined
Feb 23, 2016
Messages
132 (0.04/day)
System Name Computer!
Processor i7-6700K
Motherboard AsRock Z170 Extreme 7+
Cooling EKWB on CPU & GPU, 240 slim and 360 Monsta, Aquacomputer Aquabus D5, Aquaaero 6 Pro.
Memory 32Gb Kingston Hyper-X 3Ghz
Video Card(s) Asus 980 Ti Strix
Storage 2 x 950 Pro
Display(s) Old Acer thing
Case NZXT 440 Modded
Audio Device(s) onboard
Power Supply Seasonic PII 600W Platinum
Mouse Razer Deathadder Chroma
Keyboard Logitech G15
Software Win 10 Pro
Everyone knows those Geekbench scores are rigged and not to be trusted. I laugh whenever I see someone shout about ARM vs x86 Geekbench scores.

Whether or not Intel innovates as much as they can/could is another question! I'm guessing not...
 
Joined
Sep 17, 2014
Messages
20,929 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
You're combining the two issues of chip making ~ chip design(s) are still improving rapidly in the ARM world, part of that comes from going real wide like Ax & Mongoose. Also they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes are coming to a halt. The next 10 years might well be the last time we see Si being mentioned wrt chips, before something like Graphene or Ge (alloy?) takes over.

The designs are *changing* rapidly in the ARM world is what I'd rather say. The big advantage of ARM is its customization options, and those are being put to good use with more specific, task-oriented designs. It's a catch-22. We speak of the legacy that x86 has, but the reason it has that is because for the majority of users the CPU is a Swiss Army knife that has to do it all. These specialized designs are not Swiss Army knives; in fact, they become blunt instruments out of their comfort zone. What you see with ARM is that specific code exists and a CPU design follows. In x86, a lot of that was the other way around - a good example is AVX.
 
Joined
Apr 12, 2013
Messages
6,750 (1.67/day)
Everyone knows those Geekbench scores are rigged and not to be trusted. I laugh whenever i see someone shout about arm vs x86 geekbench scores.

Whether or not Intel innovates as much as they can/could is another question! I'm guessing not...
Rigged? Did ARM pay GB to make Intel x86 look bad :rolleyes:
I didn't realize AT & other sites were paid shills as well, since they publish GB scores in some of their reviews :shadedshu:
 
Joined
Apr 8, 2008
Messages
328 (0.06/day)
So finally we can say that the J5005 can play Crysis... this time for real.
 
Joined
Sep 17, 2014
Messages
20,929 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
So Finally we can say that J5005 can play Crysis.. this time for real.

I think Crysis 3 will still run like shit. As in sub 20 fps. I remember my Ivy 3570k @ 4.2 really was not happy with that game :D
 
Joined
Jun 21, 2011
Messages
165 (0.04/day)
You're combining the two issues of chip making ~ chip design(s) are still improving rapidly in the ARM world, part of that comes from going real wide like Ax & Mongoose. Also they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes are coming to a halt. The next 10 years might well be the last time we see Si being mentioned wrt chips, before something like Graphene or Ge (alloy?) takes over.

I'm talking about the existing chip designs by Intel on their "viable" process nodes (14nm, not 10nm). In order for Intel to continue delivering something without revolutionizing computing, it'll need to make inroads with fabrication technologies. Of course, design and fabrication are different sides of the same coin. I am not weighing in on ARM IP at all.

Or in other words, Intel still has a reasonable single-thread performance lead, but its main obstacle has been shrinking their dies.
 
Joined
May 2, 2017
Messages
7,762 (3.05/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
To attempt to cut through the back-and-forth here a bit: we are currently in the unprecedented situation that Intel hasn't improved IPC in their MSDT segment for a full three generations. The Skylake base arch has not been changed whatsoever for Kaby Lake or Coffee Lake - all we've seen are clock speed optimizations (which are both design and manufacturing dependent) and additional cores. Also, for mobile, they've clearly done some impressive work on power consumption, squeezing four cores into 15W. But IPC stands unchanged since August 2015. Up until then, every single revision of the Core arch saw some IPC improvement, even if it was in the ~5% range.
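For a sense of scale, here is a minimal sketch of how even those ~5% per-revision gains compound; the 5% is the ballpark figure from the paragraph above, not measured data.

```python
# Compound effect of a ~5% IPC gain per architecture revision.
gain_per_gen = 0.05

for gens in (3, 5, 7):
    cumulative = (1 + gain_per_gen) ** gens - 1
    print(f"{gens} revisions at ~5% each: ~{cumulative:.0%} cumulative IPC gain")

# Prints:
# 3 revisions at ~5% each: ~16% cumulative IPC gain
# 5 revisions at ~5% each: ~28% cumulative IPC gain
# 7 revisions at ~5% each: ~41% cumulative IPC gain
```

Three generations of zero, by contrast, is still zero, which is exactly what makes the Skylake-to-Coffee-Lake stretch stand out.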

On the other hand, they've significantly changed the cache layout for their HEDT parts, which definitely impacts IPC. Time will tell if these changes trickle down into MSDT, but they might not be worth it for consumer workloads.

The real thing holding Intel back right now is that alongside this (likely at least somewhat planned) IPC stagnation, their process development has also stagnated - 10nm was originally supposed to launch in 2016, but has now been postponed (again!) to 2019. It's likely that Intel had planned on process improvements carrying them along without requiring (increasingly complex, difficult and expensive) arch revisions, at least for a generation or two. Suddenly, that's not a viable tactic, and they're left largely without options.

Is this Intel's fault? Are they lagging behind? No, not really. Production node misfires happen rather frequently - look at TSMC 20nm or whatever GF had planned before they licensed Samsung 14nm. The issue here is that this coincides with a significant slowdown in arch improvements. Which isn't really surprising, seeing how Core is now on its 8th-ish (in some cases 10th or so) revision - all the low-hanging fruit has long since been picked, and to truly murder that metaphor, Intel is having to build exponentially taller ladders to reach what little is left.

Intel clearly had a plan to minimize arch development costs as they were getting uncomfortably high. This is reasonable from a business perspective. They gambled on process tech keeping them well ahead, and this bet is in the process of being lost. They still have an advantage, but it's shrinking rapidly.


Atom, on the other hand (which is also what this discussion is supposed to be about ;) ), is easier to work with. It's gone through just as many revisions, but as it's mainly been designed for power efficiency (often to the detriment of performance) and has had minuscule development budgets compared to Core, there's far more low-hanging fruit left - and they can likely borrow some ladders and equipment from the Core team, making it even easier.
 
Joined
Nov 1, 2017
Messages
521 (0.22/day)
9600s? those were some new GFX cards for that...

Mine was the 7800GTX / 8800GT gen for me...

Well, I first had a 6600GT in my old build but dreamt of the 8800GT for a long time (Crysis!!!!!!!). When I went to buy it, there were only 9600GTs, so I bought one and bought a second one a month or two after :)
 

bug

Joined
May 22, 2015
Messages
13,224 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Fanboys told you this? I'm pretty sure everyone with half a brain told you that. They haven't done anything since SB except use their fabs to their potential. That's spending cash, not innovating.
Intel has moved at a glacial pace in absolute desktop performance. But they have vastly improved in pretty much every other area. Since Sandy Bridge we got: IGPs capable of handling 4K, triple the battery life for portables, granular voltage and frequency control, CPUs that perform amazingly within a ~15W TDP, and a jump from 8 PCIe 2.0 lanes to 24 PCIe 3.0 lanes.
Yet for people like you, that equates to "They haven't done anything since SB".
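Rough math on that last point, using the lane counts from the post above and nominal per-lane throughput after encoding overhead (about 0.5 GB/s for PCIe 2.0 with 8b/10b, about 0.985 GB/s for PCIe 3.0 with 128b/130b); these are general spec ballpark figures, nothing benchmarked here.

```python
# Aggregate PCIe bandwidth implied by the lane counts cited above,
# with approximate usable per-lane throughput after encoding overhead.
old_lanes, old_per_lane = 8, 0.5      # PCIe 2.0: ~0.5 GB/s per lane
new_lanes, new_per_lane = 24, 0.985   # PCIe 3.0: ~0.985 GB/s per lane

old_bw = old_lanes * old_per_lane     # 4.0 GB/s
new_bw = new_lanes * new_per_lane     # ~23.6 GB/s

print(f"then: ~{old_bw:.0f} GB/s, now: ~{new_bw:.0f} GB/s "
      f"(~{new_bw / old_bw:.1f}x the aggregate bandwidth)")

# Prints: then: ~4 GB/s, now: ~24 GB/s (~5.9x the aggregate bandwidth)
```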
 
Joined
Sep 17, 2014
Messages
20,929 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Intel has moved at a glacial pace in absolute desktop performance. But they have vastly improved in pretty much every other area. Since Sandy Bridge we got: IGPs capable of handling 4k, triple battery life for portables, granular voltage and frequency control, CPUs that perform amazingly with only ~15W TDP and went from 8 PCIe 2.0 lanes to 24 PCIe 3.0 lanes.
Yet for people like you, that equates to "They haven't done anything since SB".

Glacial pace - and then there is this perspective:
http://www.hardwarecanucks.com/foru...600k-vs-i7-8700k-upgrading-worthwhile-13.html



 

bug

Joined
May 22, 2015
Messages
13,224 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
Yeah, they have added improvements to SSE and AVX, but since it usually takes specialized software to put those to good use, I didn't mention them (just like I didn't mention QuickPath). But those are improvements in their own right.
 
Joined
Jul 5, 2013
Messages
25,559 (6.48/day)
Since when is Passmark a meaningful benchmark?
Passmark is a very useful benchmark utility. It's not the most popular, but it runs on anything and will give a fair and insightful perspective when comparing many different platforms, old and new.
 

Wolflow

New Member
Joined
May 2, 2018
Messages
4 (0.00/day)
Q6600 has never been really impressive, in fact…

What made it interesting (as "potentially useful", not much more) was the simple fact that its price was similar to the highest-performing "non-eXtreme" C2D while having twice the cores and marginally lower overclocking headroom.

As soon as the 4 cores became useful outside of almost purely CPU-bound, in-cache workloads, it was so far behind newer ones that it lost all of its appeal, being affected by bottlenecks already known back when the C2Q brand came out.

The exact same situation currently exists with the R7-2700: while it's not considerably better than, say, the R5-2400G outside of specific professional uses (zero productivity concern for regular end-users), it has the potential to offer a decent step-up even compared to the R5-2600(X) because of the core count.
 
Joined
Jul 5, 2013
Messages
25,559 (6.48/day)
Q6600 has never been really impressive, in fact…
You are very much alone in that opinion. At the time it was an amazing CPU compared to everything else on the market except the higher-end C2Qs. It was an overclocking dream as well.
 