Monday, May 28th 2018

Intel Pentium Silver J5005 Catches Up With Legendary Core 2 Quad Q6600

The Core 2 Quad Q6600 quad-core processor is close to many a PC enthusiast's heart. It was Intel's most popular quad-core processor of the pre-Nehalem LGA775 era, and can still be found to this day in builds such as home servers. Over a decade later, Intel's low-power Pentium Silver J5005 quad-core processor, a chip enthusiasts wouldn't give a second look, appears to have caught up with the Q6600. A CPU PassMark submission by a Redditor pits the J5005 against the Q6600, and the latter is finally beaten: the J5005 scored 2,987 marks to the Q6600's 2,959. It's interesting to note that the J5005 is clocked at just 1.50 GHz, compared to the 2.40 GHz of the Q6600, and its TDP is rated at just 10 W, compared to the Q6600's 95-105 W.
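For perspective, here is the performance-per-watt arithmetic from those figures as a quick Python sketch (scores and TDPs as reported above; TDP is a thermal design target rather than measured power draw, so treat this as a ballpark illustration only):

# Rough perf-per-watt comparison from the reported PassMark scores and TDPs.
q6600_score, q6600_tdp = 2959, 95   # LGA775, 2.40 GHz; 95-105 W depending on stepping
j5005_score, j5005_tdp = 2987, 10   # Gemini Lake, 1.50 GHz base

print(q6600_score / q6600_tdp)      # ~31 marks per watt
print(j5005_score / j5005_tdp)      # ~299 marks per watt -- roughly a 10x efficiency gain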
Sources: CPU PassMark Database, dylan522p (Reddit)

46 Comments on Intel Pentium Silver J5005 Catches Up With Legendary Core 2 Quad Q6600

#26
TheGuruStud
Assimilator: If Intel were really sitting on their hands for the past decade, you'd expect that AMD's latest CPUs would've blown past them. That hasn't happened, and the reason is that the wave of Moore's Law has been dashed against the rock that is the fundamental limits of physical silicon; all the billions in the world can't overcome physics. Not to mention that CPU design is really f**king difficult and all the easy wins have long since been won, leaving only the really, really difficult stuff.

Some might argue that the heritage of the Core architecture, which is itself descended from the original P6 architecture, is to blame, but I'd argue exactly the opposite: that P6 was such a good design that its fundamentals remain in use over two decades after its conception. Perhaps Core is due for replacement, but anything that hopes to succeed it will have to be very special.



Comparing ARM benchmarks to x86 benchmarks is idiotic.
If AMD had 1,000 times the cash, like Intel... you can do the math. Zen was created from ashes, not from a vault stuffed with gold like Scrooge McDuck's (which is what Intel has, relatively speaking). It's painfully obvious Intel did nothing, because they didn't have to. All they did was refine to extract maximum earnings from the product (which is what every grubby investor and CEO demands). Performance, prowess, reputation, etc., are of NO CONCERN. Besides, they can just buy all of those (see most publications in history for that proof).
Posted on Reply
#27
Vayra86
TheGuruStud: If AMD had 1,000 times the cash, like Intel... you can do the math. Zen was created from ashes, not from a vault stuffed with gold like Scrooge McDuck's (which is what Intel has, relatively speaking). It's painfully obvious Intel did nothing, because they didn't have to. All they did was refine to extract maximum earnings from the product (which is what every grubby investor and CEO demands). Performance, prowess, reputation, etc., are of NO CONCERN. Besides, they can just buy all of those (see most publications in history for that proof).
It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world, and it's a trend that is greater than Intel. Even on ARM you see the leaps getting smaller and the CPU releases getting less interesting, and you see midrange SoCs doing all the work 'we' need them to do on a smartphone. It's also evident why and how ARM has leaped as it did in such a short time: all it needed was to re-implement all of the tricks in the best-practices book. They tried some new things such as big.LITTLE; beyond that, it's minor refinements.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the Passmark comparison earlier: the 65 nm Q6600 needs 105 W, the 32 nm part needs 35 W, and the 14 nm part needs 10 W. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield a 2 W advantage at the same performance. But on the Q6600, 20% efficiency is more than 20 W; that's enough to put two additional Pentiums on the same power budget and triple the score, basically.
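To put quick numbers on that (a minimal Python sketch using the TDPs and scores quoted above; purely illustrative):

# The same 20% efficiency gain is worth very different amounts of power
# at 105 W than at 10 W (figures from the Passmark comparison above).
q6600_tdp, j5005_tdp, j5005_score = 105, 10, 2987

print(0.20 * j5005_tdp)   # 2 W saved on the 14 nm part
print(0.20 * q6600_tdp)   # 21 W saved on the 65 nm part -- room for two more
                          # 10 W Pentiums in the same power budget
print(3 * j5005_score)    # ~8,961 marks from three J5005s at ~30 W total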

The eternal AMD counter of 'but what if they had money' is simply a fairy tale of could-have-would-have that never really happened. What if AMD had not released FX? What if they had fired Raja two years ago? Who knows. They didn't, and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now; even Intel has not provided us with their next best thing that will obliterate all that preceded it. It's simply not there, and the demand that exists for high-performance CPUs can be satisfied in other ways, like multi-socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.
Posted on Reply
#28
TheGuruStud
Vayra86: It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world, and it's a trend that is greater than Intel. Even on ARM you see the leaps getting smaller and the CPU releases getting less interesting, and you see midrange SoCs doing all the work 'we' need them to do on a smartphone.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the Passmark comparison earlier: the 65 nm Q6600 needs 105 W, the 32 nm part needs 35 W, and the 14 nm part needs 10 W. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield a 2 W advantage at the same performance. But on the Q6600, 20% efficiency is more than 20 W.

The eternal AMD counter of 'but what if they had money' is simply a fairy tale of could-have-would-have that never really happened. What if AMD had not released FX? What if they had fired Raja two years ago? Who knows. They didn't, and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now; even Intel has not provided us with their next best thing that will obliterate all that preceded it. It's simply not there, and the demand that exists for high-performance CPUs can be satisfied in other ways, like multi-socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.
You do realize that Intel basically proves this by not even working on a new design until Zen shipped, right? Now they're busy copying the "glue" for the real next gen. After all the years of refreshing (it will be FOUR years of no changes at all, plus everything going all the way back to SB), they didn't even work on a new architecture. It really can't be more obvious. They were content to rake in the cash on a highly refined architecture. Plus, you just saw it with core counts: quad-core was high end for desktop, and the rest were out of reach economically. Magically, cores are increased across the board (and prices slashed). It has everything to do with holding back to generate cash, because there was no need without AMD pressure. Nvidia is doing the same thing.
Posted on Reply
#29
Valantar
qubit: I knew someone would say this, that's why I said base clock speed alone. ;)
But that makes your extrapolation fall apart entirely, as you'd also need boost clock to scale to match for performance to increase the way you say. So, it would need a 2.4 GHz base clock and a 4.5 GHz boost clock. Which ... well, won't happen :P
Posted on Reply
#30
Vayra86
TheGuruStud: You do realize that Intel basically proves this by not even working on a new design until Zen shipped, right? Now they're busy copying the "glue" for the real next gen. After all the years of refreshing (it will be FOUR years of no changes at all, plus everything going all the way back to SB), they didn't even work on a new architecture. It really can't be more obvious. They were content to rake in the cash on a highly refined architecture. Plus, you just saw it with core counts: quad-core was high end for desktop, and the rest were out of reach economically. Magically, cores are increased across the board (and prices slashed). It has everything to do with holding back to generate cash, because there was no need without AMD pressure.
To you that is proof of your argument, and to me it is proof of mine. It's a matter of interpretation, and I don't buy the one that says Intel was sitting on its hands entirely. Of course they COULD have done more, but there was no economic incentive, as you also say. And last I checked, Intel was in the business of making money. It's an idea AMD could learn from ;)

Coffee Lake, for example. The six-core wave was already in the works years ago. They still had to 'rush' it, apparently. The reality is that it's just bad planning, but to then think that Intel would have been capable of so much more is perhaps giving Intel too much credit. The fact that today we haven't seen any radical new design speaks volumes; there are no radical new designs. Even AMD, with its 'radical new design', only has new iterations on a roadmap that refine what's there now. And what is there now? A CPU that falls just shy of matching Intel in most situations and exceeds it in a much smaller set of situations, at a very similar power budget. It's so similar, in fact, you'd almost think it was on purpose.

None of that makes sense if the next best CPU design is up for grabs, be it in the rich or the poor company. The reality is: it's not up for grabs. It's difficult to improve on what we have today. Any investment into a better design is going to be extremely time-consuming and costly, with a minimal return on the investment. Why do you think the foundries etc. are pushing those smaller nodes so hard? The reason is simple: it's where the greatest gains are going to come from in the foreseeable future, and the entire semicon industry rides on those gains to sell product and keep their production lines profitable. No gain means the whole machine comes to a grinding halt and sales plummet as consumers only have sidegrades to choose from. (Again: look at recent history for proof of that.)
Posted on Reply
#31
LemmingOverlord
Vayra86: It seriously does not make any sense for Intel to not develop the next best CPU. Could they have moved faster? Perhaps. But the reality simply is that performance increases are stalling in the CPU world, and it's a trend that is greater than Intel. Even on ARM you see the leaps getting smaller and the CPU releases getting less interesting, and you see midrange SoCs doing all the work 'we' need them to do on a smartphone. It's also evident why and how ARM has leaped as it did in such a short time: all it needed was to re-implement all of the tricks in the best-practices book. They tried some new things such as big.LITTLE; beyond that, it's minor refinements.

There are two major factors in play here:
- Physics
- Best practices

In both factors there are diminishing returns. You can see it very well in the Passmark comparison earlier: the 65 nm Q6600 needs 105 W, the 32 nm part needs 35 W, and the 14 nm part needs 10 W. Those gains are attributable to smaller nodes and refinement of best practices, but in both aspects there is going to be an end to it, and even a 20% efficiency boost right now will only yield a 2 W advantage at the same performance. But on the Q6600, 20% efficiency is more than 20 W; that's enough to put two additional Pentiums on the same power budget and triple the score, basically.

The eternal AMD counter of 'but what if they had money' is simply a fairy tale of could-have-would-have that never really happened. What if AMD had not released FX? What if they had fired Raja two years ago? Who knows. They didn't, and that is what counts. It doesn't change the fact that neither company is capable of really surpassing the other right now; even Intel has not provided us with their next best thing that will obliterate all that preceded it. It's simply not there, and the demand that exists for high-performance CPUs can be satisfied in other ways, like multi-socket (look at recent Intel press releases) and other ways of scaling that are far more efficient than creating an even more complex single-die solution.
Agreed. There hasn't been a revolution in chipmaking for quite a while now... Intel's own procrastination/complacency has allowed AMD to catch up on many levels. Intel's neglect / cruise control in manufacturing has even allowed competitors to overtake them. I'd never have imagined a day when GlobalFoundries, of all companies, would be delivering Intel a fab smackdown (even if the process is borrowed from Samsung).

With process limitations, we can see that Intel's designs come close to the design's threshold on power/performance, and that is its main issue right now. They can wiggle around and get some extra performance here and there, but they really need a jump to the 7 nm or lower nodes; 10 nm is a loss already. They need to get their fab processes in a row so they can plot a course for their CPUs. Personally, I think this is the problem with doing away with engineers and sticking with the managers who can manage investor relations. They talk up the BS for the investors' sake.

Intel, right now, has a server strategy. It doesn't have a mobile or desktop one, because the big thing people talk about around the boardroom coffee pot (or caviar tray) is the cloud.
Posted on Reply
#32
R0H1T
LemmingOverlord: Agreed. There hasn't been a revolution in chipmaking for quite a while now... Intel's own procrastination/complacency has allowed AMD to catch up on many levels. Intel's neglect / cruise control in manufacturing has even allowed competitors to overtake them. I'd never have imagined a day when GlobalFoundries, of all companies, would be delivering Intel a fab smackdown (even if the process is borrowed from Samsung).

With process limitations, we can see that Intel's designs come close to the design's threshold on power/performance, and that is its main issue right now. They can wiggle around and get some extra performance here and there, but they really need a jump to the 7 nm or lower nodes; 10 nm is a loss already. They need to get their fab processes in a row so they can plot a course for their CPUs. Personally, I think this is the problem with doing away with engineers and sticking with the managers who can manage investor relations. They talk up the BS for the investors' sake.

Intel, right now, has a server strategy. It doesn't have a mobile or desktop one, because the big thing people talk about around the boardroom coffee pot (or caviar tray) is the cloud.
You're conflating two separate issues of chip making ~ chip designs are still improving rapidly in the ARM world, and part of that comes from going really wide, like Apple's Ax and Samsung's Mongoose cores. Also, they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes, however, are coming to a halt. The next 10 years might well be the last time we see Si mentioned wrt chips, before something like graphene or Ge (alloy?) takes over.
Posted on Reply
#33
Owen1982
Everyone knows those Geekbench scores are rigged and not to be trusted. I laugh whenever I see someone shout about ARM vs. x86 Geekbench scores.

Whether or not Intel innovates as much as they can/could is another question! I'm guessing not...
Posted on Reply
#34
Vayra86
R0H1T: You're conflating two separate issues of chip making ~ chip designs are still improving rapidly in the ARM world, and part of that comes from going really wide, like Apple's Ax and Samsung's Mongoose cores. Also, they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes, however, are coming to a halt. The next 10 years might well be the last time we see Si mentioned wrt chips, before something like graphene or Ge (alloy?) takes over.
The designs are *changing* rapidly in the ARM world, is what I'd rather say. The big advantage of ARM is its customization options, and those are being put to good use with more task-oriented designs. It's a catch-22. We speak of the legacy that x86 carries, but the reason it has that legacy is that for a majority of users the CPU is a Swiss Army knife that has to do it all. These specialized designs are not Swiss Army knives; in fact, they become blunt instruments out of their comfort zone. With ARM, specific code exists and a CPU design follows. In x86, a lot of that was the other way around - a good example is AVX.
Posted on Reply
#35
R0H1T
Owen1982: Everyone knows those Geekbench scores are rigged and not to be trusted. I laugh whenever I see someone shout about ARM vs. x86 Geekbench scores.

Whether or not Intel innovates as much as they can/could is another question! I'm guessing not...
Rigged? Did ARM pay GB to make Intel x86 look bad? :rolleyes:
I didn't realize AT & other sites were paid shills as well, since they publish GB scores in some of their reviews :shadedshu:
Posted on Reply
#36
Xajel
So finally we can say that the J5005 can play Crysis... this time for real.
Posted on Reply
#37
Vayra86
Xajel: So finally we can say that the J5005 can play Crysis... this time for real.
I think Crysis 3 will still run like shit. As in sub-20 FPS. I remember my Ivy Bridge 3570K @ 4.2 GHz really was not happy with that game :D
Posted on Reply
#38
LemmingOverlord
R0H1T: You're conflating two separate issues of chip making ~ chip designs are still improving rapidly in the ARM world, and part of that comes from going really wide, like Apple's Ax and Samsung's Mongoose cores. Also, they don't have legacy instructions to worry about like x86 does.

Chip fabrication & all the advances from smaller nodes, however, are coming to a halt. The next 10 years might well be the last time we see Si mentioned wrt chips, before something like graphene or Ge (alloy?) takes over.
I'm talking about Intel's existing chip designs on their "viable" process node (14 nm, not 10 nm). For Intel to continue delivering something without revolutionizing computing, it'll need to make inroads with fabrication technologies. Of course, design and fabrication are two sides of the same coin. I am not weighing in on ARM IP at all.

Or, in other words: Intel still has a reasonable single-thread performance lead, but its main obstacle has been shrinking its dies.
Posted on Reply
#39
Valantar
To attempt to cut through the back-and-forth here a bit: we are currently in the unprecedented situation that Intel hasn't improved IPC in its MSDT segment for a full three generations. The Skylake base arch has not been changed whatsoever for Kaby Lake or Coffee Lake - all we've seen are clock speed optimizations (which are both design- and manufacturing-dependent) and additional cores. Also, for mobile, they've clearly done some impressive work on power consumption, squeezing four cores into 15 W. But IPC stands unchanged since August 2015. Up until then, every single revision of the Core arch saw some IPC improvement, even if it was in the ~5% range.

On the other hand, they've significantly changed the cache layout for their HEDT parts, which definitely impacts IPC. Time will tell if these changes trickle down into MSDT, but they might not be worth it for consumer workloads.

The real thing holding Intel back right now is that, alongside this (likely at least somewhat planned) IPC stagnation, their process development has also stagnated - 10 nm was originally supposed to launch in 2016, but has now been postponed (again!) to 2019. It's likely that Intel had planned on process improvements carrying them along without requiring (increasingly complex, difficult, and expensive) arch revisions, at least for a generation or two. Suddenly, that's not a viable tactic, and they're left largely without options.

Is this Intel's fault? Are they lagging behind? No, not really. Production node misfires happen rather frequently - look at TSMC 20 nm, or whatever GF had planned before they licensed Samsung's 14 nm. The issue here is that this coincides with a significant slowdown in arch improvements. Which isn't really surprising, seeing how Core is now on its 8th-ish (in some cases 10th or so) revision - all the low-hanging fruit has long since been picked, and, to truly murder that metaphor, Intel is having to build exponentially taller ladders to reach what little is left.

Intel clearly had a plan to minimize arch development costs as those costs were getting uncomfortably high. This is reasonable from a business perspective. They gambled on process tech keeping them well ahead, and that bet is in the process of being lost. They still have an advantage, but it's shrinking rapidly.


Atom, on the other hand (which is also what this discussion is supposed to be about ;) ), is easier to work with. It's gone through just as many revisions, but as it's mainly been designed for power efficiency (often to the detriment of performance) and has had minuscule development budgets compared to Core, there's far more low-hanging fruit left - and they can likely borrow some ladders and equipment from the Core team, making it even easier.
Posted on Reply
#40
AltCapwn
phanbuey: 9600s? Those were some new GFX cards for that...

It was the 7800 GTX / 8800 GT gen for me...
Well, I first had a 6600 GT in my old build, but dreamt of the 8800 GT for a long time (Crysis!!!!!!!). When I went to buy one, there were only 9600 GTs left, so I bought one, and then a second one a month or two later :)
Posted on Reply
#41
bug
TheGuruStud: Fanboys told you this? I'm pretty sure everyone with half a brain told you that. They haven't done anything since SB except use their fabs to their potential. That's spending cash, not innovating.
Intel has moved at a glacial pace in absolute desktop performance. But they have vastly improved in pretty much every other area. Since Sandy Bridge we got: IGPs capable of handling 4K, triple the battery life for portables, granular voltage and frequency control, CPUs that perform amazingly with only a ~15 W TDP, and a jump from 8 PCIe 2.0 lanes to 24 PCIe 3.0 lanes.
Yet for people like you, that equates to "They haven't done anything since SB".
Posted on Reply
#42
Vayra86
bug: Intel has moved at a glacial pace in absolute desktop performance. But they have vastly improved in pretty much every other area. Since Sandy Bridge we got: IGPs capable of handling 4K, triple the battery life for portables, granular voltage and frequency control, CPUs that perform amazingly with only a ~15 W TDP, and a jump from 8 PCIe 2.0 lanes to 24 PCIe 3.0 lanes.
Yet for people like you, that equates to "They haven't done anything since SB".
Glacial pace - and then there is this perspective:
www.hardwarecanucks.com/forum/hardware-canucks-reviews/76333-i7-2600k-vs-i7-8700k-upgrading-worthwhile-13.html



Posted on Reply
#43
bug
Yeah, they have added improvements to SSE and AVX too, but since it usually takes specialized software to put those to good use, I didn't mention them (just like I didn't mention QuickPath). But those are improvements in their own right.
Posted on Reply
#44
lexluthermiester
ghazi: Since when is Passmark a meaningful benchmark?
Passmark is a very useful benchmark utility. It's not the most popular, but it runs on just about anything and gives a fair & insightful perspective when comparing many different platforms, old and new.
Posted on Reply
#45
Wolflow
The Q6600 was never really impressive, in fact…

What made it interesting (as in "potentially useful", not much more) was the simple fact that its price was similar to that of the highest-performing non-eXtreme C2D, while it had twice the cores and only marginally lower overclocking headroom.

As soon as the four cores became useful outside of almost purely in-cache CPU workloads, it was so far behind newer chips that it lost all of its appeal, hampered by bottlenecks already known back when the C2Q brand came out.

The exact same situation currently exists with the R7-2700: while it's not considerably better than, say, the R5-2400G outside of specific professional uses (zero productivity concern for regular end-users), it has the potential to offer a decent step up even compared to the R5-2600(X) because of its core count.
Posted on Reply
#46
lexluthermiester
Wolflow: The Q6600 was never really impressive, in fact…
You are very much alone in that opinion. At the time, it was an amazing CPU compared to everything else on the market except the higher-end C2Qs. It was an overclocking dream as well.
Posted on Reply