
i3 10105F or Celeron G6900?

The very first post addresses this.


In which games and on what settings? My 2600x isn't a gaming monster and I was never really held back by it when I had a Geforce 2060.
FWIW, before upgrading to RKL last year, I was on a 2600X too, in combo with a factory-OC RX 5700 XT card. As soon as the 11700K was in, my gaming experience was taken to a whole new level, and this is at 1440p as well. Of course this is not purely from improvements in CPU architecture alone; having PCIe 4.0 and high-bandwidth RAM helped a lot as well, the latter of which Zen+ can't handle.
But does your fps and 1% lows increase as result of that?


Not really. Skylake rebranding continued up to Comet Lake, and yes, the i3 10100 was still just polished Skylake. The later-launched Rocket Lake chips lacked low-end products, and that was the last time the Skylake arch got rebranded. Most of the improvements over this time were in thermals and clock speed; the architecture didn't change, and the same goes for the lithography. IPC actually regressed a bit after Broadwell, but that was compensated by higher clock speeds.
The Skylake architecture was not rebranded for Rocket Lake; RKL uses the Cypress Cove architecture. A fact you can check straight from the manufacturer's newsroom.
 
The Skylake architecture was not rebranded for Rocket Lake; RKL uses the Cypress Cove architecture. A fact you can check straight from the manufacturer's newsroom.
Another less known fact is that Cypress Cove (RKL) changed the cache hierarchy to 32 KB L1 instruction + 48 KB L1 data per core vs the "classic" 32+32 setup. It doesn't really show in real world performance, but still...
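For anyone who wants to verify that L1 split on their own chip: on Linux the per-core cache topology is exposed under /sys/devices/system/cpu/. A small sketch of reading it (the sysfs paths are standard, but the parsing here is simplified and assumes sizes reported with a "K" suffix):

```python
from pathlib import Path

def l1_split(cpu: int = 0, base: str = "/sys/devices/system/cpu") -> dict:
    """Return the L1 instruction/data cache sizes (in KB) that
    Linux sysfs reports for the given core."""
    split = {}
    cache_dir = Path(base) / f"cpu{cpu}" / "cache"
    for idx in sorted(cache_dir.glob("index*")):
        if (idx / "level").read_text().strip() != "1":
            continue  # skip L2/L3 entries
        kind = (idx / "type").read_text().strip()  # "Instruction" or "Data"
        size = (idx / "size").read_text().strip()  # e.g. "32K" or "48K"
        split[kind] = int(size.rstrip("K"))
    return split
```

On a Rocket Lake core this should come back as {"Instruction": 32, "Data": 48}, versus 32/32 on the older Skylake derivatives.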
 
If budget is so tight that you're debating a current-gen dual-core or socket-obsolete quad core, just go to ebay and pick up something like a used Ryzen5 1600 + B350 combo, or an old Skylake i7 bundle.

My mining rigs have Comet Lake dual cores with Hyper-Threading (Pentium Gold). They're basically unusable for anything that isn't light single-tasking, unless you enjoy waiting around a lot and your time is worth nothing.
 
The Skylake architecture was not rebranded for Rocket Lake; RKL uses the Cypress Cove architecture. A fact you can check straight from the manufacturer's newsroom.
I ain't dumb, I know this, but the i3 10105F was just a Comet Lake refresh ported to work with 500-series chipsets. There never was any desktop Rocket Lake i3.
 
Yeah, because there wasn't anything below i3, so Intel re-released Comet Lake chips with unlocked memory tweaking support and a bit higher clock speeds.
 
Yeah, because there wasn't anything below i3, so Intel re-released Comet Lake chips with unlocked memory tweaking support and a bit higher clock speeds.
But you clearly said Rocket Lake was the last time Skylake got rebranded. Nevermind...
 
But you clearly said Rocket Lake was the last time Skylake got rebranded. Nevermind...
Do you really need to be so pointlessly pedantic?
 
Do you really need to be so pointlessly pedantic?
What's pedantic about stating the fact that Rocket Lake is not a Skylake refresh?

This is a forum for people with all levels of IT knowledge, therefore we should aim to do our best not to spread false information (knowingly or unknowingly), imo.
 
But does your fps and 1% lows increase as result of that?

I get much better fps now with the i7 12700K than with the i7 6700K @ 4.5 GHz before: the lows were maybe 60 fps before, now the minimum is around 100 fps, and the average fps is higher too.
 
What's pedantic about stating the fact that Rocket Lake is not a Skylake refresh?
Because I wasn't talking about that. I mentioned that when Rocket Lake launched there wasn't anything below i5, and the i3 was faux Rocket Lake: the last Skylake rebrand.

I get much better fps now with the i7 12700K than with the i7 6700K @ 4.5 GHz before: the lows were maybe 60 fps before, now the minimum is around 100 fps, and the average fps is higher too.
Interesting. Have you tried disabling C states?
 
The Celeron hasn't been a good value gaming CPU since around 2015. Once games became more threaded, a dual-core with only two threads became worthless. Combine that with the incredible value of the unlocked Haswell Pentium, with nearly double the overclocked performance, and it became pointless for any gamer to settle for a $40 CPU.

The Pentium has been the minimum viable CPU for entry-level gaming since they upgraded it to four threads in 2017. But since 2019, the i3 has more than doubled its performance over that Pentium! There is no reason to purchase less than an i3 if you're planning on any gaming.
 
In which games and on what settings? My 2600x isn't a gaming monster and I was never really held back by it when I had a Geforce 2060.
Most of the games we tested were more modern (Warhammer 2, GOW, RDR2, Elden Ring, Halo Infinite). With pre-2020 era games it wasn't as noticeable, but it's really apparent in recent titles, and that will continue through this year and 2023.

TechPowerUp users not liking the buying options he presented isn't us being snobs or not being realistic. There comes a point where you just have to call a mediocre decision a mediocre decision.

I mean, heck, he has so little confidence in something he hasn't even bought that he's already making plans to replace it. Not only that, according to his previous threads he kind of insinuates that he doesn't already have a video card (which means he couldn't even use the 10105F, since it doesn't have integrated graphics, so it's a wash anyway).

What's the point of buying a £60 Celeron to save up for a £100 Core i3 over a longer period? Why not save for another month and buy the i3 straight away?
Yeah, I don't see the purpose of buying something you have so little confidence in that you're already making plans to replace it before you've even bought it.

If he already had the setup, or someone was selling it to him dirt cheap, then that's 100% different from buying everything brand new at full MSRP. Throw in that the 10105F doesn't even have integrated graphics and he doesn't own a GPU yet, and the obvious solution is to just keep saving.
 
Because I wasn't talking about that. I mentioned that when Rocket Lake launched there wasn't anything below i5, and the i3 was faux Rocket Lake: the last Skylake rebrand.
Well, you didn't make that clear in your post. There is no faux Rocket Lake, by the way. Only Comet Lake (10th gen) and proper Rocket Lake (11th gen), which like you said, never got a release below Core i5 level. Comet Lake got a minor refresh with the 5-ending SKUs (like the 10105), but that's still Comet Lake.
 
Well, you didn't make that clear in your post. There is no faux Rocket Lake, by the way. Only Comet Lake (10th gen) and proper Rocket Lake (11th gen), which like you said, never got a release below Core i5 level. Comet Lake got a minor refresh with the 5-ending SKUs (like the 10105), but that's still Comet Lake.
Those chips were released alongside Rocket Lake and were meant to be something fresh for 500-series chipset board buyers who didn't want an i5 or more, but also didn't want something as old as the basic Comet Lake lineup. Yeah, it's a really bizarre way to sell better bins of weak chips, or to manage their first multi-generation socket.
 
But does your fps and 1% lows increase as result of that?
My i7 6700K @ 4.5 GHz just wasn't able to keep my GPU at 95~100% load all the time, especially in GTA V.
The i7 12700K has no problem pushing my GPU to full load.
 
The i3 is much more powerful, I know, but the LGA 1700 platform is modern, and soon I could buy a faster CPU like an i3 12300 or i5 12400. Also, if I go with the Celeron, I can skip buying a GPU for now while I save for an RX 6600 or something similar. What do you think? At first I went with AM4, but here the Ryzens are very expensive.

These are the motherboard options:

ASRock Z590 Steel Legend at $190, or ASUS Z690M Plus D4 at $245.

B660, H670, H610, B560 and H510 are extremely overpriced here, which is why I chose a Z chipset: their prices are closer to MSRP.
Forget a 2c/2t CPU for any game past 2010.

Grab some 3600 RAM and call it a day, and look for an i5/i7, locked or unlocked, later. Lift the power limits, mind the VRM temps, and you have roughly Z590-at-stock performance; just make sure the VRM heatsinks get airflow. I had no problem with an i7-11700K on a B560 TUF/WiFi, until the board committed suicide while flashing the BIOS, and that was my bad.
 
My i7 6700K @ 4.5 GHz just wasn't able to keep my GPU at 95~100% load all the time, especially in GTA V.
The i7 12700K has no problem pushing my GPU to full load.
Again, interesting, but that makes me wonder if there wasn't something else at play, like RAM, C-states, EIST, or a non-overclocked cache. Also whether that observation was actually meaningful: if you were using a 1080p screen and already getting like 200 fps, then that bottleneck doesn't mean much. I'm not blaming you, just genuinely curious about the 6700K. At 4.5 GHz it should have been a tiny bit faster than an i3 10100.
 
Again, interesting, but that makes me wonder if there wasn't something else at play, like RAM, C-states, EIST, or a non-overclocked cache. Also whether that observation was actually meaningful: if you were using a 1080p screen and already getting like 200 fps, then that bottleneck doesn't mean much. I'm not blaming you, just genuinely curious about the 6700K. At 4.5 GHz it should have been a tiny bit faster than an i3 10100.

I'm using a 165 Hz 1440p monitor. I just overclocked the CPU to 4.5 GHz; the memory was running at 3000 MHz, nothing else tweaked in the BIOS.
 
Again, interesting, but that makes me wonder if there wasn't something else at play, like RAM, C-states, EIST, or a non-overclocked cache. Also whether that observation was actually meaningful: if you were using a 1080p screen and already getting like 200 fps, then that bottleneck doesn't mean much. I'm not blaming you, just genuinely curious about the 6700K. At 4.5 GHz it should have been a tiny bit faster than an i3 10100.
Intel has been improving their IPC ever since Ryzen came out, not to mention that Alder Lake is a big jump on its own. The Sandy Bridge refresh-after-refresh era is a thing of the past.

Another thing is that GPU performance has improved even more, so to push a high end one to 100% usage in a relatively old game, a really fast CPU is needed.
 
I'm using a 165 Hz 1440p monitor. I just overclocked the CPU to 4.5 GHz; the memory was running at 3000 MHz, nothing else tweaked in the BIOS.
That's a shame, because 3000 MHz is a bit low for overclocked RAM.
 
Intel has been improving their IPC ever since Ryzen came out, not to mention that Alder Lake is a big jump on its own. The Sandy Bridge refresh-after-refresh era is a thing of the past.
There wasn't any Sandy refresh, only Skylake. You can argue that Ivy Bridge and Haswell were refreshes, but they brought some architectural improvements, node shrinks, power improvements, etc. Only Haswell received a proper refresh, called Devil's Canyon: the i7 4790K, i5 4690K and the G3258, but that was a special case, mostly their Anniversary thing.

Another thing is that GPU performance has improved even more, so to push a high end one to 100% usage in a relatively old game, a really fast CPU is needed.
Um... It's complicated. I would argue that around 2006 CPU requirements not only rose but ballooned. Did you buy a Core 2 Duo in 2006? Within a few years it was nearly e-waste in Red Faction: Guerrilla and Racedriver: GRID. Sure, you could play all those games, but at an average of like 40 fps with 1% lows in the 20s. Games like Crysis and GTA 4 were notoriously poorly threaded and required a ton of IPC just to run at 30 fps; only a Core 2 Quad could run them sort of okay. And yet in 2005, a single core was all you needed.

After that, game devs really calmed down a bit with CPU requirements, but those older games can be surprisingly demanding even on non-crappy hardware. At the same time, proprietary middleware like PhysX was being put into games, which even modern hardware struggles to run in pure software mode. I played the OG Mafia 2 release and it ran great, but with PhysX it was completely unplayable. In that era some devs also completely gave up on CPU optimization; games like Victoria 2 still run at 10 fps in the late game, because there's no MT support and the engine most likely can't even see all the RAM.

Our GPUs certainly have improved a lot, but don't underestimate how much more difficult games have become to run fast on the CPU. Unlike GPUs, CPUs haven't improved nearly as fast, yet we want more: smarter AI, better simulations and so on.
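The "poorly threaded" point above is essentially Amdahl's law: if only a small fraction of a game's frame work can run in parallel, extra cores barely help and single-thread speed (IPC x clock) dominates. A quick sketch of the arithmetic (the 30% parallel fraction below is an illustrative number, not a measurement of any real game):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup from `cores` cores when only
    `parallel_fraction` of the work parallelises (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A game where only 30% of per-frame work threads well:
print(round(amdahl_speedup(0.3, 4), 2))    # 1.29 — a quad core barely helps
print(round(amdahl_speedup(0.3, 100), 2))  # 1.42 — even 100 cores cap out here
```

That cap is why a Core 2 Quad was only "sort of okay" in Crysis or GTA 4: past a couple of cores, the only fix was more single-thread performance.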

3000 CL15 XMP.
That's pretty good.
 
The G6900 is really bad and shouldn't be on anyone's shopping list, except maybe for some extreme low-end office work or as a stopgap to suffer through until you can get at least an i3.

 
Again, interesting, but that makes me wonder if there wasn't something else at play, like RAM, C-states, EIST, or a non-overclocked cache. Also whether that observation was actually meaningful: if you were using a 1080p screen and already getting like 200 fps, then that bottleneck doesn't mean much. I'm not blaming you, just genuinely curious about the 6700K. At 4.5 GHz it should have been a tiny bit faster than an i3 10100.
A stock i7-6700K has essentially the same performance in games as the i3-10100. Interestingly, even a Sandy Bridge i7-2600K overclocked to 5 GHz is able to keep up with the Comet Lake i3. But the 12700K has about 70% more ST performance than these, and games limited by a single thread will love it. GTA 5 is a DX11 title and needs strong single-thread performance to push the graphics card. These are my results from a full benchmark run at maximum detail @ 1080p:

[gta5.jpg: GTA 5 benchmark results]


The 3300X here has a 4.5 GHz overclock, achieving ST performance of a stock i9-9900K or i5-11400. When you look at the figures, average time in maximum ST load went down from 86% to 65% with the 5800X3D, which is also reflected by the 1% and 0.1% lows. And incredibly, time spent at GPU limit (i.e. being GPU bound) went up from 13% to 79% in this benchmark :eek:
 