
Intel Core i3-12300

On reviews of these low-end models, I feel the test setup should be different: ditch the 1200 W PSU and the Z690 board to get actually usable and meaningful power consumption results, especially at idle. My ancient Athlon II setup has lower idle power draw than these test systems, simply because it has a 300 W PSU.
 
Thank you so much for your review, w11z! Especially the Intel iGPU vs the 5600G / 5700G / GT 1030 and a few other "real" GPUs. It was fun to read. Vega might be old and underpowered, but it still amazes me how much you can get out of it. That conclusion depends a lot on the country you live in, though. GPUs in my country are so expensive relative to CPUs that a years-old used GTX 1050 costs approximately the same as a 5600G, and a new GT 1030 GDDR5 costs 85% of a 5600G.
 
Nope, all handled on the RTX card. The CPU load should definitely be proportional to your framerate in any direct comparison of RTX off vs RTX on. If you're also using DLSS and your framerate is going up, then the CPU usage should also go up.

I'm not sure what your issue is, but it's not the normal behaviour. There are a few reports on Nvidia's own forums from people with older CPUs complaining about similar things, but it's neither intended nor expected. It could be PCIe bandwidth related; there were a couple of hints that updating the motherboard BIOS fixes it. It may also be specific to one game; not all RTX implementations are bug-free on all platforms.


For AMD iGPUs, sure. That's because AMD's iGPUs are fast enough to be bandwidth-starved. Don't take my comment out of context, though: it was a specific reply about DDR5 on the UHD 730, which is too slow to need all of DDR4's bandwidth. Sure, if it were faster it would be able to take advantage of more bandwidth, but the reality is that 32 EUs just plain suck.

You don't have to guess or extrapolate; several sites and channels have already investigated DDR4 vs DDR5 iGPU scaling on Alder Lake and found no improvement at all. For the 96 EU Alder Lake laptop models coming soon, I'm sure we'll see different results, like we're accustomed to with the more competent AMD iGPUs.

Gamers Nexus did a pretty solid investigation showing that the UHD 770 gains absolutely nothing from moving from DDR4-3200 CL14 to DDR5-5200 CL38. Although it's within 5%, the DDR4 iGPU performance is better in all but one of the games tested, and that's likely down to the lower latency of DDR4.
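For a rough sense of why DDR4 can come out ahead there, the first-word latency can be estimated from the CAS latency and the data rate. This is only a back-of-the-envelope sketch; it ignores tRCD, the memory controller, and everything else in the path.

# Rough first-word latency: CL cycles at the memory clock (half the transfer rate).
def cas_latency_ns(data_rate_mts, cl):
    return cl * 2000 / data_rate_mts

print(cas_latency_ns(3200, 14))  # DDR4-3200 CL14 -> ~8.75 ns
print(cas_latency_ns(5200, 38))  # DDR5-5200 CL38 -> ~14.6 ns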
Uhm, he is right, RT taxes the CPU a LOT.

Most of those benches are on much newer CPUs. Not DDR3.
Ignore him, he doesn't know what the heck he is talking about. Cyberpunk with RT on is way heavier on the CPU than with RT off. Tried it with a 10900K (4400C16 RAM), an 11600K, and now a 12900K with 6000C32 DDR5 RAM.

Actually, Cyberpunk with RT off is incredibly light on the CPU. But once you turn it on, oh boy.
 
I feel the first two generations of Ryzen were good if you were using them for applications that support multithreading. After all, you got an affordable 6c/12t and a reasonably good value 8c/16t processor, while Intel was mainly limited to 4c/8t. Back then, most games didn't use more than 4 cores, since Intel had decided that retail users only need 4 cores. In addition, single-core performance was lower (coupled with lower clock speeds) on the Zen 1 and Zen+ chips compared to Skylake.


This is correct based on my own testing when I was using the Ryzen 5 3400G in the past. Reducing RAM latency barely moved performance, if at all. Increasing bandwidth gave the biggest improvement, because the chip is bandwidth-starved, especially when both the CPU and the GPU need to access the RAM. That's why, by virtue of the bandwidth increase offered by DDR5, I think we should see significant improvements in iGPU performance. The UHD 730 is not great due to its limited 32 EUs, but it is still Xe graphics, so with the extra bandwidth we should see a good jump in performance if you measure by percentage. Don't bother measuring the difference in FPS, because the FPS will be low anyway, and even 2 to 5 FPS can be a fairly big improvement for an iGPU.
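To illustrate that last point with purely made-up numbers (neither FPS figure below is a measurement), a small absolute FPS gain can still be a large relative improvement for an iGPU:

# Hypothetical iGPU results, for illustration only.
baseline_fps = 25          # assumed slower-memory result
faster_ram_fps = 30        # assumed result with higher-bandwidth memory
gain_pct = (faster_ram_fps - baseline_fps) / baseline_fps * 100
print(f"+{faster_ram_fps - baseline_fps} FPS = {gain_pct:.0f}% faster")  # +5 FPS = 20% faster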
Problem is, that was almost 5 years ago. And yet here we are, with a 4c/8t CPU with better IPC and single-core speed beating the shit out of most 6- and 8-core Ryzen CPUs from the past.
Where's that 8-core future-proof shit I kept hearing about since Zen 1 was released?
 
One of the selling points of Ryzen 7 5700G is its integrated graphics capability. The graphics cores are based on the Vega architecture, which is fairly old. In the Ryzen 7 5700G, you'll find eight Compute Units, which result in a total shader count of 512. The graphics cores are clocked at a frequency of 2 GHz and share the system's main memory as graphics memory.

That's also the reason why we include an additional data point, DDR4-3200, to get a feel for how dropping memory speed from DDR4-3800 (green bar) to the more affordable DDR4-3200 (brown bar) impacts the FPS rates. Overclocking the integrated GPU was very easy, using either the BIOS or Ryzen Master. Everything above 2.4 GHz resulted in visual rendering artifacts, but a 20% overclock is still very impressive.
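For anyone following the arithmetic in the two paragraphs above, a quick sanity check (Vega, like the rest of GCN, packs 64 shaders per Compute Unit):

# Shader count and overclock headroom from the figures quoted above.
compute_units = 8
shaders = compute_units * 64              # 8 * 64 = 512 shaders
stock_ghz, oc_ghz = 2.0, 2.4              # GHz
print(shaders, f"{(oc_ghz / stock_ghz - 1) * 100:.0f}% overclock")  # 512, 20%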

Last but not least, another benchmark shows the performance with the IGP memory reservation set to 2 GB in the BIOS (as opposed to the default of 512 MB). There really is no difference outside of margin of error. The underlying reason is that the graphics driver is able to dynamically allocate additional memory if 512 MB isn't enough, so neither scenario will "run out" of VRAM.

I'm guessing this is a copy-paste from the 5700G review, at the top of p19 of the 12300 review...?
 
On reviews of these low-end models, I feel the test setup should be different: ditch the 1200 W PSU and the Z690 board to get actually usable and meaningful power consumption results, especially at idle. My ancient Athlon II setup has lower idle power draw than these test systems, simply because it has a 300 W PSU.

In a comparison test, you need as similar a baseline as possible. The number of watts alone doesn't mean much: it's the difference between the resulting values that's important.
 
Now I get it! Intel launched the 12100 in very small quantities to seem customer-friendly and great value for money, then produced far more 12300s, which will sell at a much higher price for 1-2% better performance. Smart marketing from Intel's side, fooling customers who aren't tech-informed... :shadedshu: It reminds me of AMD's 3300X, which was reviewed, got the value trophy of its generation, and was nowhere to be bought. Alongside the fake MSRPs from NVIDIA for the RTX 30 GPU series. Sneaky tactics. Customers, beware!
It's sad news... and these tricks and marketing ways will never cease. In the end, for all of these tech companies, making a quick profit is absolutely paramount. I retired from NIKE HQ in Beaverton, OR, and we made many of our shoes exclusively in Vietnam, China, and Indonesia for $8 a pair and sold them for around $100 or more in the USA! Awareness? The only thing all senior NIKE executives were always aware of was that year-end bonuses with stock options of up to 50% of annual salary were a reality!
 
In a comparison test, you need as similar a baseline as possible. The number of watts alone doesn't mean much: it's the difference between the resulting values that's important.
Of course, but there's also the fact that those lower-end models are very much usable and desirable for things like NAS / HTPC / home server builds, where the numbers matter for 24/7 operation and parts get chosen over even a single watt of difference.
A huge PSU like that (as well as that massive high-end GPU) basically sends all the numbers to the moon, and you get no idea how much the setup would draw with a reasonable PSU and the iGPU instead. It could be 30 W, could be 20 W; you just don't know, because all of those high-power parts blur the picture.
 
Problem is, that was almost 5 years ago. And yet here we are, with a 4c/8t CPU with better IPC and single-core speed beating the shit out of most 6- and 8-core Ryzen CPUs from the past.
Where's that 8-core future-proof shit I kept hearing about since Zen 1 was released?
This 4c/8t "miracle" could have happened 5 years ago, why did it not?

Because monopoly. We are blessed to have a resurgent AMD. Without those 6- and 8-core Ryzens, you'd only see this 4c/8t "miracle" appear at least five years later, in 2027.
 
There's no good reason for me (or anybody, really) to buy an Intel CPU lower than a 12400F.
 
There's no good reason for me (or anybody, really) to buy an Intel CPU lower than a 12400F.
At least 95% of the computers on this planet are slower than a 12400F and people are happily using them, and making trillions of dollars in the process
 
At least 95% of the computers on this planet are slower than a 12400F and people are happily using them, and making trillions of dollars in the process
Most of those are prebuilts used for office and workstation apps. For a 2022 DIY build, though, a 6-core/12-thread CPU is probably better for the long run. My 5c.
 
Most of those are prebuilts used for office and workstation apps. For a 2022 DIY build, though, a 6-core/12-thread CPU is probably better for the long run. My 5c.
Like a Ryzen 5 1600? Stop thinking in core counts ;) look at architecture
 
Most of those are prebuilts used for office and workstation apps. For a 2022 DIY build, though, a 6-core/12-thread CPU is probably better for the long run. My 5c.
You mean like the Ryzen 5 1600, which is an insane 10% faster than the i3-10100F in multithreaded applications?

In games it's another story. With an RTX 2070:

Odyssey: i3 100 FPS, R5 93 FPS
BFV: i3 191 FPS, R5 168 FPS
Horizon 4: i3 149 FPS, R5 134 FPS
Hitman 2: i3 116 FPS, R5 97 FPS
Kingdom Come: Deliverance: i3 148 FPS, R5 144 FPS
Project Cars 2: i3 179 FPS, R5 151 FPS
RDR2: i3 102 FPS, R5 96 FPS

Yeah, for sure a six-core is the better option for the future if you're going to game; that's the reason why the 12100F is the new budget king.
 
Like a Ryzen 5 1600? Stop thinking in core counts ;) look at architecture
It's everything, isn't it?
IPC * cores * clock.
Hard to really look at any single one of those things in isolation.

The R5 1600 was great, but early Zen's boost algorithm is pretty much off the table once more than two threads are running, so it's largely a 3.2 GHz part compared to any modern CPU from AMD or Intel, which will usually tick over at 4 GHz all-core. Throw in the older IPC and it's unflattering.
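As a back-of-the-envelope sketch of that IPC * cores * clock point, using the clocks above; the IPC uplift figure here is an assumption purely for illustration, not a measured number:

# Very rough per-thread throughput comparison: IPC x clock.
zen1_ghz, modern_ghz = 3.2, 4.0      # all-core clocks from the post above
assumed_ipc_uplift = 1.25            # assumed ~25% higher IPC for a modern core (illustrative)
relative = (modern_ghz / zen1_ghz) * assumed_ipc_uplift
print(f"~{(relative - 1) * 100:.0f}% faster per thread in this made-up example")  # ~56%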

If anyone still has a stock 1600, now is the time to manually overclock it: 3.8 GHz seems likely, for a pretty solid 15% boost, and Zen 1 was the last CPU that genuinely saw big gains from a manual all-core OC.
 
There's no good reason for me (or anybody, really) to buy an Intel CPU lower than a 12400F.
Not needing any more performance and not being willing to spend more money counts as a pretty good reason for me. That's why I went with the i3-12100F instead :]
 
Like a Ryzen 5 1600? Stop thinking in core counts ;) look at architecture
Nice point, but I meant building a custom PC in 2022 with a new CPU, not a used one.
 
It isn't used. :roll:

AMD CPU lineup, March 2022:

Athlon 3000G 81€
Ryzen 1200 86€
Ryzen 1600 131€
Ryzen 3600 199€
Ryzen 5600G 236€
Ryzen 5600X 241€
 
There's no good reason for me (or anybody, really) to buy an Intel CPU lower than a 12400F.
I just recently purchased the G6900. Why? Because I needed something cheap to use as a NAS CPU while still providing strong single-core performance (Samba is a single-core load).
The higher-tier i3-12300 may also have some uses; it has higher multi-core performance, so I would assume some people will want one for benchmarking, or perhaps other niche uses (older games that don't scale well with cores, maybe?).
 
Where's that 8-core future-proof shit I kept hearing about since Zen 1 was released?
If you believe in the buzzword that is "future-proofing" as a whole, there's no hope for you.

Also, huge amount of fanboy jargon in this thread, as usual. Sigh. When will people stop believing they are in a loving relationship with their favorite company?
 
If you believe in the buzzword that is "future-proofing" as a whole, there's no hope for you.

Also, huge amount of fanboy jargon in this thread, as usual. Sigh. When will people stop believing they are in a loving relationship with their favorite company?
I never believed in that shit, since the 1st gen was so awful in gaming. But some loved to peddle the Tomb Raider benchmark as some sort of evidence that more cores would be better off in the long run.

This 4c/8t "miracle" could have happened 5 years ago, why did it not?

Because monopoly. We are blessed to have a resurgent AMD. Without those 6- and 8-core Ryzens, you'd only see this 4c/8t "miracle" appear at least five years later, in 2027.
Again, someone who's completely clueless. You do realize Intel already had a 6-core mainstream CPU planned before Zen 1 was even released?
 
I never believed in that shit
Good, keep it that way.

Again, someone who's completely clueless. You do realize Intel already had a 6-core mainstream CPU planned before Zen 1 was even released?
True. But that doesn't mean AMD didn't give Intel a kick in the pants and force it to raise core counts well beyond 6 cores, as it should.
 
I never believed in that shit, since the 1st gen was so awful in gaming. But some loved to peddle the Tomb Raider benchmark as some sort of evidence that more cores would be better off in the long run.


Again, someone who's completely clueless. You do realize Intel already had a 6-core mainstream CPU planned before Zen 1 was even released?
Planned... while laughing at you and never needing to release it when faced with a lack of competition
 