
SiSoftware Compiles Early Performance Preview of the Intel Core i9-12900K

Joined
Oct 16, 2014
Messages
671 (0.19/day)
System Name Work in progress
Processor AMD Ryzen 5 3600
Motherboard Asus PRIME B350M-A
Cooling Wraith Stealth Cooler, 4x140mm Noctua NF-A14 FLX 1200RPM Case Fans
Memory Corsair 16GB (2x8GB) CMK16GX4M2A2400C14R DDR4 2400MHz Vengeance LPX DIMM
Video Card(s) GTX 1050 2GB (for now) 3060 12GB on order
Storage Samsung 860 EVO 500GB, Lots of HDD storage
Display(s) 32 inch 4K LG, 55 & 48 inch LG OLED, 40 inch Panasonic LED LCD
Case Cooler Master Silencio S400
Audio Device(s) Sound: LG Monitor Built-in speakers (currently), Mike: Marantz MaZ
Power Supply Corsair CS550M 550W ATX Power Supply, 80+ Gold Certified, Semi-Modular Design
Mouse Logitech M280
Keyboard Logitech Wireless Solar Keyboard K750R (works best in summer)
VR HMD none
Software Microsoft Windows 10 Home 64bit OEM, Capture One 21
Benchmark Scores Cinebench R20: 3508 (WIP)

I'm not saying it can't, but Intel has some catching up to do to beat the 5950X.
 
Joined
Feb 22, 2019
Messages
70 (0.04/day)
Well that's kind of the point ~ it's (mostly) useless unless mobos support it & even then it could be sketchy if not implemented properly!

This one supported both DDR2 and DDR3 (not both at once though)
I believe Alder Lake's DDR4/DDR5 support is more for the flexibility of motherboard manufacturers. Until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so a manufacturer can just make DDR4 and DDR5 variants of their boards.
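For a rough sense of why early DDR5 is a hard sell, absolute latency in nanoseconds is what matters, not the CL number alone. A minimal sketch, using hypothetical but typical kit specs (not figures from this thread):

```python
# Absolute CAS latency in nanoseconds: CL cycles x clock period.
# DDR transfers twice per clock, so clock period (ns) = 2000 / data rate (MT/s).
def cas_latency_ns(data_rate_mts: float, cl: int) -> float:
    return cl * 2000.0 / data_rate_mts

# Hypothetical but typical kits for comparison:
for name, rate, cl in [("DDR4-3200 CL16", 3200, 16),
                       ("DDR5-4800 CL40", 4800, 40),
                       ("DDR5-6000 CL30", 6000, 30)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
# DDR4-3200 CL16: 10.0 ns
# DDR5-4800 CL40: 16.7 ns   <- early DDR5: slower to first word despite more bandwidth
# DDR5-6000 CL30: 10.0 ns   <- parity once speeds and timings mature
```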
 
Joined
Mar 28, 2020
Messages
1,646 (1.11/day)
I feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 performance and 8 efficiency cores, that is not the same as 16 performance cores. Under loads that favor multithreading, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores; the Zen 3 cores are quite potent and easily beat the Skylake-based cores.
 
Joined
Oct 27, 2009
Messages
1,133 (0.21/day)
Location
Republic of Texas
System Name [H]arbringer
Processor 4x 61XX ES @3.5Ghz (48cores)
Motherboard SM GL
Cooling 3x xspc rx360, rx240, 4x DT G34 snipers, D5 pump.
Memory 16x gskill DDR3 1600 cas6 2gb
Video Card(s) blah bigadv folder no gfx needed
Storage 32GB Sammy SSD
Display(s) headless
Case Xigmatek Elysium (whats left of it)
Audio Device(s) yawn
Power Supply Antec 1200w HCP
Software Ubuntu 10.10
Benchmark Scores http://valid.canardpc.com/show_oc.php?id=1780855 http://www.hwbot.org/submission/2158678 http://ww
I feel people should not expect Intel to have a miracle in the form of Alder Lake that will allow them to pull away from the competition significantly. While it's got 16 cores in the form of 8 performance and 8 efficiency cores, that is not the same as 16 performance cores. Under loads that favor multithreading, the big.LITTLE config may not perform as well. They are not competing with 16 Bulldozer cores; the Zen 3 cores are quite potent and easily beat the Skylake-based cores.
All they really need to do is have better single-threaded performance and beat AMD's 12-core in multithreaded workloads, IF they price it accordingly.
Then the AMD 16-core becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
 
Joined
Jan 24, 2011
Messages
272 (0.06/day)
Processor AMD Ryzen 5900X
Motherboard MSI MAG X570 Tomahawk
Cooling Dual custom loops
Memory 4x8GB G.SKILL Trident Z Neo 3200C14 B-Die
Video Card(s) AMD Radeon RX 6800XT Reference
Storage ADATA SX8200 480GB, Inland Premium 2TB, various HDDs
Display(s) MSI MAG341CQ
Case Meshify 2 XL
Audio Device(s) Schiit Fulla 3
Power Supply Super Flower Leadex Titanium SE 1000W
Mouse Glorious Model D
Keyboard Drop CTRL, lubed and filmed Halo Trues
All they really need to do is have better single-threaded performance and beat AMD's 12-core in multithreaded workloads, IF they price it accordingly.
Then the AMD 16-core becomes a tradeoff rather than a clear winner.

Rocket Lake was competitive in single-threaded and lost heavily in multi, and used more power...
Alder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.
 
Joined
Aug 11, 2012
Messages
20 (0.00/day)
Location
South Borneo
Processor 9900k, 3950x
Motherboard Z370 Strix, Max Hero
Cooling NZXT X72, TT M360 Plus
Memory Trident Z RGB 32g, Royal 32gb
Video Card(s) 2080Ti FE, 3080 Strix Gundam, 3090 Strix
Display(s) Dell S2716DG, 34" Ultrawide
Case Lian-Li O11DW, TT Core P3 White
Power Supply Seasonic
This one supported both DDR2 and DDR3 (not both at once though)
I believe Alder Lake's DDR4/DDR5 support is more for the flexibility of motherboard manufacturers. Until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts, so a manufacturer can just make DDR4 and DDR5 variants of their boards.
Mine was a P45-8D Memory Lover (with a Q9550), still working normally the last time I checked, with an almost identical layout for DDR2 & 3.
The box says "The World's First 8-DIMM Mobo", and it came with a bonus Global Star CPU cooler. I like it a lot :D
 
Joined
Mar 16, 2017
Messages
1,669 (0.64/day)
Location
Tanagra
System Name Budget Box
Processor Xeon E5-2667v2
Motherboard ASUS P9X79 Pro
Cooling Some cheap tower cooler, I dunno
Memory 32GB 1866-DDR3 ECC
Video Card(s) XFX RX 5600XT
Storage WD NVMe 1TB
Display(s) ASUS Pro Art 27"
Case Antec P7 Neo
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.
Perhaps. Didn't the 11-series pretty much see a price cut from MSRP almost right out of the gate?
 
Joined
Apr 5, 2016
Messages
183 (0.06/day)
Location
New Zealand
System Name Katzi
Processor Ryzen 7 5800X3D
Motherboard Gigabyte Aorus X570S Pro AX 1.1
Cooling Phanteks Glacier 360
Memory G.Skill Trident Z Neo F4-3600C16-16GTZNC (Dual Rank 32Gb)
Video Card(s) MSI Gaming X Trio RTX 3080
Storage Samsung SSD 980 1TB, 970 512GB Evo Plus, 1TB 870 QVO, 960 Pro
Display(s) AOC CQ27G2
Case NZXT H6 Black
Audio Device(s) Creative Soundblaster X3
Power Supply Corsair RMx850
Mouse Logitech G502X Plus & Razer Basilisk V3 Pro
Keyboard Keychron V2 translucent, Gateron Ink Black Silent, lubed & filmed.
In all my years, I can't recall many boards supporting two kinds of memory. Granted, I might have missed something! The last one I can recall was when SDRAM came out over 25 years ago; I had a board that supported both RAM and SDRAM, though you could only use one or the other. Usually dual support means the CPU will support both, but it's the vendor's choice which one to implement. I think Apollo Lake also had DDR3/DDR4 support, but no boards supported both. It would probably be a really complex design, too.

I had a Foxconn board back during the Core 2 Duo/Quad days that supported DDR2 and DDR3; it had two DIMM slots of each.
 
Joined
Jan 6, 2013
Messages
349 (0.08/day)
I'm quite certain this was run only on a single cluster (that is, 8 cores): either the efficiency or the performance ones, but I would be inclined to think these were run on the little cores.
 
Joined
Jul 5, 2013
Messages
25,559 (6.48/day)
Suddenly people are up in arms about Sandra, which has been around and in use for almost 25 years. There's plenty of information about what each benchmark entails available on their website if you actually want to find out.

Here's a screenshot of the whole article since it has been taken down, just in case more people want to claim things like it isn't optimized for Intel's hybrid architecture, or that the results are invalid because it's running on Windows 10, or whatever other justification they want to come up with beyond "the product isn't out yet."

Thank You for that!

Considering Alder Lake was touted as the next coming of Jesus
It wasn't. But the results do show that there is merit to the design on x86.

until CAS latencies come down, DDR5 may be a hard sell for some enthusiasts
This. As with every new generation of memory, it takes time for the industry to refine its offerings to surpass the previous generation.
 
Joined
Nov 24, 2012
Messages
27 (0.01/day)
The company I work for is very close to Intel, and everyone hates them: the devs, the devops, the guys from the datacenters. Even the business side is starting to argue that the very high power consumption isn't worth the subsidies Intel gives us.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,065 (2.26/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/5za05v
I'm quite certain this was run only on a single cluster (that is, 8 cores): either the efficiency or the performance ones, but I would be inclined to think these were run on the little cores.
Please read the provided link to the screen grab of the original source that I added; it explains a bit more.
 
Joined
Feb 11, 2009
Messages
5,401 (0.97/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Aorus Elite DDR4
Cooling Tuniq Tower 120 -> Custom watercooling loop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> RX7800XT
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb NVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTCP case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
Leaked prices would indicate that Intel expects the 12900K to land somewhere between the 5900X and 5950X. If it was going to be faster, it would be more expensive.

idk, it depends entirely on what Intel wants to do. If they want to win some mindshare back, they might want to be faster yet cheaper than the 5900X; I mean, it is "only" 8 big cores, so I would think it would be a power move as well.
 
Joined
Aug 31, 2016
Messages
104 (0.04/day)
I just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gains on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter anymore; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games. Generational gains are just not across the board anymore; it will now be very common to see a new CPU getting massive gains in some benchmarks and then no gains in others.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Alder Lake has to compete against Zen 3+ and its added ~15% FPS gains.
I just hope ADL is going to make up for the Rocket Lake disaster in gaming performance and add its own gains on top, making it around 40% faster in games vs Comet Lake, as it should be. Benchmarks don't matter; the Rocket Lake launch illustrated very well how irrelevant all of those benchmarks are for gaming CPUs, with 20% gains left and right and then 0% in actual games.
To you both:
Zen 3+'s 15% FPS gains were measured when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games, that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason why Rocket Lake showed little gain in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
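A toy frame-time model of that flattening, with purely hypothetical per-frame costs: whichever of the CPU and the GPU takes longer per frame sets the frame rate.

```python
# Toy model: frame rate is set by the slower of CPU and GPU work per frame.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 10.0                    # GPU needs 10 ms per frame -> 100 FPS ceiling
for cpu_ms in (12.0, 8.0, 4.0):  # progressively faster CPUs
    print(f"CPU at {cpu_ms:4.1f} ms/frame -> {fps(cpu_ms, gpu_ms):.0f} FPS")
# CPU at 12.0 ms/frame -> 83 FPS   (CPU-bound: a faster CPU helps)
# CPU at  8.0 ms/frame -> 100 FPS  (GPU-bound: gains flatten out)
# CPU at  4.0 ms/frame -> 100 FPS  (a 2x faster CPU changes nothing)
```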
 
Joined
Aug 31, 2016
Messages
104 (0.04/day)
To you both:
Zen 3+'s 15% FPS gains were measured when running the CPUs at a lower clock speed.
FPS gains flatten out when the CPU is no longer the bottleneck; with most current games, that happens a little over 4 GHz for Skylake-family CPUs. This is also the reason why Rocket Lake showed little gain in gaming over Skylake. So until games become more CPU-demanding, we should expect Alder Lake and other new CPUs to show minimal gains in games, despite having much faster cores.
I've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
 
Joined
Nov 4, 2005
Messages
11,688 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I've heard that before. You are making up imaginary bottlenecks.

The reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades. The reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.


You must write for CNN.

It's a known fact that at higher resolutions, CPUs aren't the bottleneck for FPS. It's the GPU that is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".
 
Joined
Aug 31, 2016
Messages
104 (0.04/day)
You must write for CNN.

It's a known fact that at higher resolutions, CPUs aren't the bottleneck for FPS. It's the GPU that is.

We have entered the era where only scientific work, benchmarks, and poorly optimized software are the reasons CPUs aren't "fast enough".

This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck; the behavior of each individual game is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example, the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p, or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios) as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolutions, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting two posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but bringing something like that up in a more advanced discussion is rather embarrassing.
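To make the resolution argument concrete, here is the same kind of toy model with a resolution sweep; all numbers are hypothetical, chosen to mirror the 45 FPS example above. The CPU's ceiling stays fixed across resolutions, while the GPU cost scales with pixel count.

```python
# CPU cost per frame is (roughly) resolution-independent;
# GPU cost scales with the number of pixels rendered.
CPU_MS = 22.2           # hypothetical CPU-bound game: ~45 FPS ceiling
GPU_MS_1080P = 8.0      # hypothetical GPU cost at 1920x1080

for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("2160p", 3840, 2160)]:
    gpu_ms = GPU_MS_1080P * (w * h) / (1920 * 1080)
    print(f"{name}: {1000.0 / max(CPU_MS, gpu_ms):.0f} FPS")
# 1080p: 45 FPS   (CPU-bound)
# 1440p: 45 FPS   (still the same 45 FPS cap)
# 2160p: 31 FPS   (only here does the GPU become the limit)
```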
 
Joined
Nov 4, 2005
Messages
11,688 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck; the behavior of each individual game is what matters. If, say, a 10900K is able to pull only 45 FPS in one scene because of a CPU bottleneck (for example, the game uses only one core), then it will have the same 45 FPS at 1080p, 1440p, 2160p, or any other resolution of the same aspect ratio (aspect ratio affects performance in CPU-bound scenarios) as long as your GPU can handle it. Saying that the CPU simply isn't a bottleneck at higher resolutions, period, is a very basic misunderstanding of how things work, somewhat bizarre for someone who has been posting two posts a day on a tech forum for the last 16 years. What have you been doing all that time?

Again, I understand why this generalization is used when talking to people who are just getting into this, for example, but bringing something like that up in a more advanced discussion is rather embarrassing.


Durrrrrr......




"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architectural differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so cache hits are always 100%, or they find a way to reduce latency to 0 ns. Both are impossible, so it comes down to cache and latency to cache.
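For reference, the textbook way to quantify "cache and latency to cache" is average memory access time (AMAT = hit time + miss rate x miss penalty). A minimal sketch with made-up latencies:

```python
# Average Memory Access Time: AMAT = hit_time + miss_rate * miss_penalty.
def amat_ns(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    return hit_ns + miss_rate * miss_penalty_ns

# Hypothetical last-level cache: 10 ns on a hit, ~70 ns extra to reach DRAM.
for miss_rate in (0.02, 0.05, 0.10):
    print(f"{miss_rate:.0%} miss rate -> {amat_ns(10.0, miss_rate, 70.0):.1f} ns average")
# 2% miss rate -> 11.4 ns
# 5% miss rate -> 13.5 ns
# 10% miss rate -> 17.0 ns   (a few percent of misses dominate the average)
```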


What have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech; you are merely getting your fingers wet. Let me know when you wake up at 6 AM the first time you fall asleep at an on-site install waiting for a RAID array to rebuild.
 
Joined
Jun 10, 2014
Messages
2,902 (0.80/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I've heard that before. You are making up imaginary bottlenecks.
That's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.

The reason why RKL didn't bring gains is that it was an afterthought backport that got better cores and no other upgrades.
That's an absurd statement.
Rocket Lake does in fact have higher single threaded performance. What the developers thought and felt during the development is irrelevant.

The reason why AMD is able to make big gen-on-gen gains with their 5000-series CPUs, going beyond the imaginary bottleneck you are implying and beating Comet Lake by a big margin in some games, is that their generational updates are comprehensive, and they are going to do that again with Zen 4. Alder Lake is a comprehensive upgrade and overhaul this time around as well, and it should not have the same problems as Rocket Lake. It is the RKL arch bottlenecking itself, not games bottlenecking it.
Clearly you are not familiar with Sunny Cove's design, or other CPU designs for that matter.
Both Sunny Cove and Golden Cove offer 19% IPC gains over their predecessor. The issues with Rocket Lake are tied to the inferior production node, not the underlying architecture.

This is a very basic overgeneralization that was created for the sake of easier explanation and then turned into a myth over time. Resolution doesn't have anything to do with a CPU bottleneck.
CPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck becomes smaller and the CPU bottleneck greater, since the frame rate increases. With most games today, you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.
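A minimal sketch of that linearity, assuming a fixed, hypothetical CPU cost per frame: the CPU's share of the frame budget grows as the target frame rate rises.

```python
# If CPU work per frame is roughly fixed, total CPU load scales with frame rate,
# so raising FPS (e.g. by lowering resolution) shifts the bottleneck to the CPU.
CPU_MS_PER_FRAME = 3.0  # hypothetical game logic + driver cost per frame

for target_fps in (60, 144, 360):
    budget_ms = 1000.0 / target_fps
    share = CPU_MS_PER_FRAME / budget_ms
    print(f"{target_fps:3d} FPS: CPU uses {share:.0%} of the {budget_ms:.1f} ms frame budget")
#  60 FPS: CPU uses 18% of the 16.7 ms frame budget
# 144 FPS: CPU uses 43% of the 6.9 ms frame budget
# 360 FPS: CPU uses 108% -> the CPU is now the limit
```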
 
Joined
Aug 31, 2016
Messages
104 (0.04/day)
Durrrrrr......




"On popular demand from comments over the past several CPU reviews, we are including game tests at 720p (1280x720 pixels) resolution. All games from our CPU test suite are put through 720p using a RTX 2080 Ti graphics card and Ultra settings. This low resolution serves to highlight theoretical CPU performance because games are extremely CPU-limited at this resolution. Of course, nobody buys a PC with an RTX 2080 Ti to game at 720p, but the results are of academic value because a CPU that can't do 144 frames per second at 720p will never reach that mark at higher resolutions. So, these numbers could interest high-refresh-rate gaming PC builders with fast 120 Hz and 144 Hz monitors. Our 720p tests hence serve as synthetic tests in that they are not real world (720p isn't a real-world PC-gaming resolution anymore) even though the game tests themselves are not synthetic (they're real games, not 3D benchmarks)."

The architectural differences in out-of-order execution between AMD and Intel are so close that cache hits and cache size seem to be the determining factor. Neither is going to be able to pull off a huge win unless they have some never-before-seen act of pre-determined calculation so cache hits are always 100%, or they find a way to reduce latency to 0 ns. Both are impossible, so it comes down to cache and latency to cache.

This test only proves my point: it tests the maximum framerate that CPUs can hit in the tested games and clearly states that those CPUs are never going to reach higher FPS than that, regardless of resolution, which makes the statement "the CPU is no longer a bottleneck at higher resolutions" obviously incorrect. How often that actually happens in practice is another matter entirely, because you will be GPU-bound in many games; that is where this oversimplification comes from, but it is still objectively incorrect.

What have I been doing for 16 years? Building servers for medical offices, working, having kids that have grown into teens able to drive, not that it's any of your fucking business. I have also been paying attention to the evolution and revolution of tech; you are merely getting your fingers wet. Let me know when you wake up at 6 AM the first time you fall asleep at an on-site install waiting for a RAID array to rebuild.

That's an unnecessary overreaction. Obviously I didn't ask about your personal life; I assumed that goes without saying. All I asked was how it is that you don't understand the behavior of games and are still throwing around oversimplified and incorrect principles despite almost two decades of experience around PCs and presumably games, if you enter the discussion about that specifically. There is no need for your personal information. Maybe I shouldn't have said that, but it was you starting the discussion with "CNN", which I assume is a US TV news channel, so that was a huge insult right off the bat.

That's nonsense.
FPS is a measure of GPU performance, not CPU performance. And when the CPU is fast enough to fully saturate the GPU, the GPU becomes the bottleneck. This should be elementary knowledge, even to those without a CS degree.
If someone released a CPU with 10x faster cores tomorrow, it would not change the performance in most current games.

CPU overhead is mostly linear with frame rate.
When the resolution is lower, the GPU bottleneck becomes smaller and the CPU bottleneck greater, since the frame rate increases. With most games today, you have to run them at 720p and/or low details with an absurdly high frame rate to show a significant difference between CPUs. This becomes a pointless and futile effort when actual gamers will not run the hardware under such conditions.

And yet again you are creating imaginary bottlenecks and going by the child logic of "you are going to be GPU-bound anyway, so there is no point in having a faster CPU", as if the few latest AAA games were all that you have ever seen and played. I don't really see a point in replying further; you are just flat-out wasting my time at this point.
 
Joined
Jul 5, 2013
Messages
25,559 (6.48/day)
This is a very basic overgeneralization
No, his assessment was spot-on.
that was created for the sake of easier explanation and then turned into a myth over time.
No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that over the last 12 years, CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
you are just flat-out wasting my time at this point.
No, you did that to yourself...
 
Joined
Aug 31, 2016
Messages
104 (0.04/day)
No, his assessment was spot-on.

No, this is reality. Being the owner of several PCs with a wide range of CPUs, I can confirm definitively that over the last 12 years, CPUs have reached a point where most are more than enough to do most general computing tasks efficiently. Anything 4 cores and up will get the job done quickly, but even most dual cores do well. And for the last 6 years, there are no examples of mainstream (6-core+) CPUs that can't do any task asked of them and do it in a speedy fashion. At this point in time, CPU makers are competing more against themselves than they are trying to meet the demands of software tasks.
You are making the same mistake he does, looking at this with child logic and through your subjective experience. The reality is that resolution has nothing to do with a CPU bottleneck, and the point of CPU bottleneck for a given game and CPU combination is the same for every resolution of the same aspect ratio. That is all I have been saying, and you keep bringing in your subjective, technically incorrect, oversimplified versions of that, or trying to bring in software other than what was originally discussed: games and only games.

No, you did that to yourself...
That much is true, but I am stuck for a few hours anyway, so what to do... :p What I really meant by that, though, is that a discussion is normally supposed to go somewhere, but if we cannot get past entry-level oversimplifications of something as basic as CPU bottlenecking in games, then it won't.

Nevermind I guess.
 