Monday, May 28th 2018

Intel Pentium Silver J5005 Catches Up With Legendary Core 2 Quad Q6600

The Core 2 Quad Q6600 quad-core processor is close to many a PC enthusiast's heart. It was Intel's most popular quad-core processor of the pre-Nehalem LGA775 era, and it is still found to this day in builds such as home servers. Over a decade later, Intel's low-power Pentium Silver J5005 quad-core processor, a chip enthusiasts would hardly consider for a build, appears to have caught up with the Q6600. A CPU PassMark submission by a Redditor compares the J5005 with the Q6600, and the latter is finally beaten: the J5005 scored 2,987 marks to the Q6600's 2,959. It's interesting to note that the J5005 is clocked at just 1.50 GHz, compared to the Q6600's 2.40 GHz, and its TDP is rated at just 10 W, against the Q6600's 95-105 W.
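For perspective, the efficiency gap works out as follows (a quick back-of-the-envelope sketch using the figures above; the 95 W value is the lower bound of the Q6600's rated TDP):

```python
# Rough performance-per-watt comparison from the PassMark figures above.
chips = {
    "Pentium Silver J5005": {"score": 2987, "tdp_w": 10},
    "Core 2 Quad Q6600":    {"score": 2959, "tdp_w": 95},
}

for name, c in chips.items():
    ppw = c["score"] / c["tdp_w"]
    print(f"{name}: {ppw:.0f} marks/W")
# Pentium Silver J5005: 299 marks/W
# Core 2 Quad Q6600: 31 marks/W
```

On these numbers the J5005 delivers roughly ten times the PassMark score per watt, which is the real story behind the near-identical absolute scores.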
Sources: CPU PassMark Database, dylan522p (Reddit)
Add your own comment

46 Comments on Intel Pentium Silver J5005 Catches Up With Legendary Core 2 Quad Q6600

#1
Devon68
That TDP is impressive.
Posted on Reply
#2
dj-electric
Very nice, very applicable for home-server use and such
Posted on Reply
#3
Valantar
Not bad at all. I was using a Core 2 Quad Q9450 until I put together my Ryzen build last year, and it worked surprisingly well for gaming even a full 8 years in - but then, it was also OC'd by 32%, to 3.52 GHz. So it should have been at least 50% faster than the J5005, at least if CPUbenchmark is anything to go by. Of course, that isn't saying much. My Fire Strike scores nearly doubled after upgrading my CPU :p Still, a CPU like this should be able to feed a low-end GPU (RX 550, GT 1030, GTX 1050) surprisingly well. Bodes well for the future of SFF gaming, or just low-wattage HTPC, server or NAS use.
Posted on Reply
#4
Darmok N Jalad
Think this is what we will be seeing in the rumored $400 Surface device? I can’t see how they could use a Core-based CPU and hit that price.
Posted on Reply
#5
Valantar
Darmok N Jalad said:
Think this is what we will be seeing in the rumored $400 Surface device? I can’t see how they could use a Core-based CPU and hit that price.
Considering that the J5005 is a desktop chip: no. The N5000 might be an option - same arch (Atom-derivative Gemini Lake), 6 W vs. 10 W TDP (4.8 W "Scenario Design Power", whatever that means), 1.1 GHz vs. 1.5 GHz base, 2.7 GHz vs. 2.8 GHz boost. Same cache and most other specs, though slightly downclocked on the iGPU too. I wouldn't be surprised to see it appear there. It should be a decent chip for light usage, though I'd hope for a higher short-term boost clock for reduced touch latency and so on.
Posted on Reply
#6
Darmok N Jalad
Valantar said:
Considering that the J5005 is a desktop chip: No. The N5000 might be an option - same arch (Atom-derivative Gemini Lake), 6W vs. 10W TDP (4,8W "Scenario Design Power", whatever that means), 1,1Ghz vs 1,5GHz base, 2,7GHz vs. 2,8GHz boost. Same cache and most other specs, though slightly downclocked on the iGPU too. Wouldn't be surprised to see it appear there. Should be a decent chip for light usage, though I'd hope they'd have a higher short-term boost clock for reduced touch latency and so on.
Yeah, and the x7-Z8700 in the Surface 3 was just a 2 W CPU. Granted, Intel has gotten better at reducing TDP and providing more boost since that generation of SoC.
Posted on Reply
#7
qubit
Overclocked quantum bit
That J5005 that I've never heard of is actually much better than the Q6600, since its clock speed is so much lower and it consumes less power while delivering the same processing performance. So, extrapolating on base clock speed alone, if the J5005 also ran at 2.4 GHz it would hit 4,779 CPU Marks, quite a bit faster than the Q6600.
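That linear extrapolation works out like so (a naive sketch: it assumes the score scales 1:1 with base clock, which ignores turbo, memory, and IPC effects):

```python
# Naive linear clock-speed extrapolation of the J5005's PassMark score.
j5005_score = 2987   # PassMark marks at 1.5 GHz base clock
base_clock  = 1.5    # GHz
target      = 2.4    # GHz, the Q6600's clock

scaled = j5005_score * target / base_clock
print(round(scaled))  # 4779
```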
Posted on Reply
#8
Valantar
qubit said:
That j5005 that I've never heard of is actually much better than the Q6600 since the clock speed is so much lower and it consumes less power while giving the same processing performance. So, extrapolating based on base clock speed alone, if the j5005 also ran at 2.4GHz, then it would hit 4779 CPU Marks, quite a bit faster than that Q6600.
It turbos to 2.8GHz, and can likely maintain those clocks for quite a while even at 10W.
Darmok N Jalad said:
Yeah, and the x7-z8700 in the Surface 3 was just a 2W CPU. Granted, Intel has gotten better at reducing TDP and providing more boost since that generation of SOC.
2W? Wow, that's useless. That's lower than most high-end phones these days (check out AnandTech's SoC deep dives - some pull 8-9W under GPU loads, which is ridiculous, though most top out at 4-ish). You sure they didn't run it in some sort of TDP-up mode? Any tablet with some modicum of thermal transfer between the SoC and the case (as long as it isn't plastic ...) should be able to dissipate 4-5W easily. I guess the good thing is that there is no new Gemini Lake Atom out yet, so the N5000 is as low as they can go for now :p
Posted on Reply
#9
qubit
Overclocked quantum bit
Valantar said:
It turbos to 2.8GHz, and can likely maintain those clocks for quite a while even at 10W.
I knew someone would say this, that's why I said base clock speed alone. ;)
Posted on Reply
#10
altcapwn
Oh damn, nostalgia hit hard here. I still remember my first PC built around a Q6600, a Thermaltake Element V and GeForce 9600 GTs in SLI. Miss those times <3.
Posted on Reply
#11
Assimilator
But all the fanboys told me that Intel hasn't been innovating since the Sandy Bridge days!

Seriously though, while it's nice to dream of a world where we get 25% more performance every 2 years, those days are long gone. I prefer to look forward to the day when I can finally have a proper (x86) CPU in my smartphone and run Windows on everything.
Posted on Reply
#13
Darmok N Jalad
Valantar said:
It turbos to 2.8GHz, and can likely maintain those clocks for quite a while even at 10W.

2W? Wow, that's useless. That's lower than most high-end phones these days (check out AnandTech's SoC deep dives - some pull 8-9W under GPU loads, which is ridiculous, though most top out at 4-ish). You sure they didn't run it in some sort of TDP-up mode? Any tablet with some modicum of thermal transfer between the SoC and the case (as long as it isn't plastic ...) should be able to dissipate 4-5W easily. I guess the good thing is that there is no new Gemini Lake Atom out yet, so the N5000 is as low as they can go for now :p
I guess I should qualify that as SDP, not TDP - that's ARK for you! Still, the passively cooled Core m3-6Y30 Surface is configurable from 3.8 W to 7 W. I suppose similar tuning could be done with Apollo Lake - it would still be better than the Z8700. I had one of those Surface 3s - it was barely serviceable for web browsing. Adblock was a must, since the browsers loaded full sites instead of the mobile sites smartphones and iPads get.
Posted on Reply
#14
newtekie1
Semi-Retired Folder
btarunr said:
It's interesting to note here, that the J5005 is clocked at just 1.50 GHz, compared to the 2.40 GHz of the Q6600.
Not exactly giving the entire story here. The J5005's base clock is 1.5 GHz, but its boost clock is 2.8 GHz.

qubit said:
I knew someone would say this, that's why I said base clock speed alone. ;)
Yeah, but the boost clock is where the scores in the OP came from, not the base clock.
Posted on Reply
#15
phanbuey
altcapwn said:
Oh damn, nostalgia hit hard here. I still remember my first PC built around a Q6600, Thermaltake Element V and a Geforce 9600GT in SLI. Miss those times :love:.
9600s? Those were some new GFX cards for that...

Mine was the 7800 GTX / 8800 GT gen...
Posted on Reply
#16
ghazi
Since when is Passmark a meaningful benchmark?
Posted on Reply
#17
Devon68
ghazi said:
Since when is Passmark a meaningful benchmark?
Well I HOPE it is since I'm basing my laptop choice on it.
Posted on Reply
#18
TheGuruStud
Assimilator said:
But all the fanboys told me that Intel hasn't been innovating since the Sandy Bridge days!

Seriously though, while it's nice to dream of a world where we get 25% more performance every 2 years, those days are long gone. I prefer to look forward to the day when I can finally have a proper (x86) CPU in my smartphone and run Windows on everything.
Fanboys told you this? I'm pretty sure everyone with half a brain told you that. They haven't done anything since SB except use their fabs to their potential. That's spending cash, not innovating.
Posted on Reply
#19
R0H1T
Assimilator said:
But all the fanboys told me that Intel hasn't been innovating since the Sandy Bridge days!

Seriously though, while it's nice to dream of a world where we get 25% more performance every 2 years, those days are long gone. I prefer to look forward to the day when I can finally have a proper (x86) CPU in my smartphone and run Windows on everything.
Yeah congrats to Intel for catching up to a part that's well over a decade old :rolleyes:

In the meantime Apple, supposed nobodies in chip making, has them beat all ends up in the 5~15W TDP range with their Ax series ;)

https://browser.geekbench.com/v4/cpu/compare/7206190?baseline=8071021
Posted on Reply
#20
kruk
Two reasons why this comparison is flawed:
- the J5005 has only 1 sample in the DB
- due to turbo, the J5005's true clock during the benchmark is unknown

Assimilator said:
But all the fanboys told me that Intel hasn't been innovating since the Sandy Bridge days!
Well, you'll love this ... :roll:

[chart: Sandy Bridge 32 nm 2C/4T @ 2.5 GHz fixed clock vs. J5005 14 nm 4C/4T at unknown clock vs. Q6600 65 nm 4C/4T @ 2.4 GHz fixed clock]
Posted on Reply
#21
TheGuruStud
R0H1T said:
Yeah congrats to Intel for catching up to a part that's well over a decade old :rolleyes:

In the meantime Apple, supposed nobodies in chip making, has them beat all ends up in the 5~15W TDP range with their Ax series ;)

https://browser.geekbench.com/v4/cpu/compare/7206190?baseline=8071021
You know benchmarks are a joke on ARM, right? Geekbench is the worst. They're not even relevant between ARM chips.
Posted on Reply
#22
R0H1T
TheGuruStud said:
You know bechmarks are a joke on ARM, right? Geekbench is the worst. They're not even relevant between ARM chips.
Geekbench is the best of the lot; it may not be the best benchmark ever, but it's still better than any other cross-platform alternative you can think of, not that there are many choices out there.
Posted on Reply
#23
techy1
This is NOT impressive - let me explain: if we looked at a 2008 CPU vs. a 1998 CPU, that would be something like 40x more (not +400% but +4,000%) processing power, if not more (Q6600 vs. Pentium II 233/266). I know some might say "but what about the low TDP" - true, add another +100% and enjoy "the epyc gainz".
Posted on Reply
#24
Assimilator
TheGuruStud said:
Fanboys told you this? I'm pretty sure everyone with half a brain told you that. They haven't done anything since SB except use their fabs to their potential. That's spending cash, not innovating.
If Intel had really been sitting on their hands for the past decade, you'd expect AMD's latest CPUs to have blown past them. That hasn't happened, and the reason is that the wave of Moore's Law has been dashed against the rock of physical silicon's fundamental limits; all the billions in the world can't overcome physics. Not to mention that CPU design is really f**king difficult, and all the easy wins have long since been won, leaving only the really, really difficult stuff.

Some might argue that the heritage of the Core architecture, which is itself descended from the original P6 architecture, is to blame, but I'd argue exactly the opposite: that P6 was such a good design that its fundamentals remain in use over two decades after its conception. Perhaps Core is due for replacement, but anything that hopes to succeed it will have to be very special.

R0H1T said:
In the meantime Apple, supposed nobodies in chip making, has them beat all ends up in the 5~15W TDP range with their Ax series ;)
Comparing ARM benchmarks to x86 benchmarks is idiotic.
Posted on Reply
#25
R0H1T
Assimilator said:
Comparing ARM benchmarks to x86 benchmarks is idiotic.
Geekbench isn't ARM-specific; it's odd that you don't know that, especially since PassMark is arguably even worse!
Posted on Reply
Add your own comment