
AMD destroys Intel in energy efficiency - myth or reality?


SOS_Earth

New Member
Joined
Sep 20, 2024
Messages
16 (0.29/day)
Disclaimer: I don't want this thread to turn into a battleground between fanbases. My personal opinion: both camps are right and wrong at the same time.

If a work session keeps CPU usage at 100% for hours, AMD is clearly the right choice. But real life is more complicated: the processor "dances" between 1% and 100%, and Intel processors are much more efficient at idle and under light or medium load. For a home user whose computing ranges from reading news, watching videos, and gaming to somewhat more complex activities (photo and video editing, CAD, virtual machines, etc.), I don't think even God could give a clear verdict on whether a processor's efficiency is OK or not.
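To make that concrete, here's a toy calculation (a sketch with made-up wattages, not measurements of any real CPU): session energy is just the time-weighted sum of package power, so a chip that draws more under all-core load can still finish the day having used less energy if it idles lower.

Code:
# Toy model: session energy = sum over phases of (hours x watts).
# All wattages below are illustrative assumptions, not measurements.
profile = [
    # (hours, cpu_a_watts, cpu_b_watts)
    (2.0, 4.0, 12.0),     # idle / light browsing
    (0.4, 35.0, 30.0),    # medium-load bursts
    (0.1, 180.0, 140.0),  # short all-core spikes
]

energy_a = sum(h * w for h, w, _ in profile)  # watt-hours
energy_b = sum(h * w for h, _, w in profile)
print(f"CPU A: {energy_a:.1f} Wh, CPU B: {energy_b:.1f} Wh")
# CPU A: 40.0 Wh, CPU B: 50.0 Wh -- the chip that loses under
# all-core load still wins the session because it idles lower.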

Below is a link to a Puget Systems article on efficiency. It compares the AMD 7000 and 9000 series, but a 14900K (PL1 125 W / PL2 253 W) and a 14900K "253" (PL1 = PL2 = 253 W) have also slipped in. The comparison between the 14900K and the 7950X/9950X may surprise many. Should it?

Puget link (see Watt-Hours)

As a personal example, here are just two data points, both from my i5-13500 system. Measuring is easier on this machine because it has no dedicated video card, LEDs, millions of fans, etc.
1. This work session exceeded 2 h 30 min, and the average CPU package power was 3.8 W. Light activities, of course: reading, watching YT, writing this topic, browsing online stores, etc.
[Attachment: i5 session.jpg (monitoring screenshot of the session)]
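That works out to roughly 3.8 W × 2.5 h ≈ 9.5 Wh of package energy for the entire session, the same watt-hours framing the Puget article uses.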


2. Running the PCMark 10 test shows the same average CPU package power, below 25 W (estimated; screenshots are below). I compared my result with a 7700X + iGPU system, just for performance. Unfortunately, I have no clue about the 7700X's consumption, but how much can it be?

[Attachment: 13500 power consumption.jpg (power readings during the PCMark 10 run)]
[Attachment: pc mark compare.jpg (PCMark 10 score comparison with the 7700X system)]
 
Joined
Aug 29, 2024
Messages
37 (0.47/day)
Location
Shanghai
System Name UESTC_404
Processor AMD R9 7950x3d
Motherboard Gigabyte B650E Aorus Elite X Ax Ice
Cooling Deepcool AG620 digital
Memory Asgard ValkyrieII 6800c34->6400 16g*2
Video Card(s) Yeston 7900xt SAKURA SUGAR
Storage Zhitai TiPlus7100 1T
Case RANDOMLY PICKED TRASHBIN
Power Supply Thermalright TGFX850W
The I/O die of the 7700X consumes about 15 W at idle, so if you set the power limit 10 W higher, the 7700X's performance will be better.

All these "platform idle consumption" below is based on my smart plug's indicator, not a software's result:
My 7950x3d+7900xt flagship pc consumes 200watts on idle(on windows desktop with wallpaper engine& some several light-weight services, the result is doubtly high, perhaps it's due to my 850w copper psu's efficiency&other reasons unknown), while using 7950x3d & a770 consumes only 150watts on idle. Besides, my 12300t+igpu ubuntu platform consumes only 24watts on idle, using a 650w gold sfx psu.
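For scale, 200 W around the clock would be 0.2 kW × 24 h = 4.8 kWh per day from a single idle PC, which is part of why that reading looks suspect.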
 
Joined
Jun 21, 2021
Messages
3,094 (2.49/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
My thought is that it is foolish to make a broad generalization about both companies' CPU technologies from this particular Puget analysis. After all, their primary goal is to compare AMD 7000 and 9000-series CPUs in the context of workstation performance.

Presumably energy efficiency will differ between various product segments (server, workstation, enthusiast gamer, business desktop, mini PCs, gaming notebooks, ultrabooks, etc.).

One thing that other analyses have shown is that Apple M-series SoCs destroy Intel's and AMD's comparable silicon in performance-per-watt, but not in peak performance. We've seen that here in various benchmark comparisons (a section of the TPU forums). iPhone and iPad SoCs don't post the top numbers, but they often topple PC CPUs while sipping a fraction of the electricity.

Idle power consumption is not a particularly good metric on which to base any sort of conclusion. You really need to run tests (benchmarks, etc.) that approximate typical usage.

And ultimately it's up to the individual consumer to decide how much this really matters. I have more than one PC. I have a power-hogging gaming PC (Ryzen 5800X3D + RTX 4080 SUPER). I really don't care how efficient it is when I'm playing videogames. But yeah, I don't want that beast idling at god-knows-what wattage. So my daily driver desktop Windows box (for e-mail, web, office productivity, etc.) is a mini PC powered by an Intel N100 SoC (the SoC by itself idles around 0.8 W, I think).

And yes, I have a Mac mini M2 Pro that is my primary computer. It uses way less power than the Mac mini 2018 (Intel i7) that it replaced and still provides way more performance.

Sure, the RTX 4080 SUPER has better peak performance than the GPU cores in my Mac, but at what cost? 320 W or more? Since I don't game on my Mac, I can boot up the gaming PC when I want to play.

In summary, there are devices in my house using various silicon (AMD, Intel, Apple, and probably some sort of Arm somewhere). I'm not running enterprise-caliber workstations or datacenter servers, so I'm not particular about the power draw of my most powerful PCs, because their primary use case is gaming. But most of my time is spent on tasks that are not gaming-related, so I choose those devices with a bigger eye toward energy efficiency.

We can extend that to other tasks. My Intel N100-powered mini PC draws a little more power than my Raspberry Pi 4, but the former can play back 4K video without breaking a sweat (thank you, Intel Quick Sync) whereas the Raspberry Pi 4 struggles with high-bitrate 1080p video. So much so that I gave up using the latter as a Kodi device. (Video playback is the Raspberry Pi's Achilles heel.)

I'm more of a fan of picking the right tool for the job than of ogling some company's logo. I wish my Mac were a credible gaming machine, but it's not. I'm realistic.

There's also the undiscussed subject of software. Some companies do a better job writing software than others. Not just microcode or device drivers; this extends to things like APIs, libraries, developer tools, etc. I don't have personal contact with the latter, but I use software that is definitely influenced by code written by Intel, AMD, Apple, Nvidia, etc.

The Puget article is something an IT director or corporate purchasing agent responsible for a large pool of workstations might want to read. But it's not something on which to base a far-reaching generalization about CPU efficiency.

Let's also remember that Puget is a systems builder. They are going to write articles that justify their own product offerings. If they want you to buy AMD-powered workstations, they are going to cherry-pick through benchmarks until they find a comparison that is favorable to AMD silicon.

All companies do this, and not just tech firms. A florist will tell you why their bouquets are better than some other shop's arrangements.
 
Joined
Nov 16, 2023
Messages
1,349 (3.71/day)
Location
Nowhere
System Name I don't name my rig
Processor 14700K
Motherboard Asus TUF Z790
Cooling Air/water/DryIce
Memory DDR5 G.Skill Z5 RGB 6000mhz C36
Video Card(s) RTX 4070 Super
Storage 980 Pro
Display(s) Some LED 1080P TV
Case Open bench
Audio Device(s) Some Old Sherwood stereo and old cabinet speakers
Power Supply Corsair 1050w HX series
Mouse Razor Mamba Tournament Edition
Keyboard Logitech G910
VR HMD Quest 2
Software Windows
Benchmark Scores Max Freq 13700K 6.7ghz DryIce Max Freq 14700K 7.0ghz DryIce Max all time Freq FX-8300 7685mhz LN2
Intel already gives you a time-averaged wattage spec for the 14900K: 125 W (PL1), with a max of 253 W (PL2).

AMD only quotes the TDP, the maximum heat output the cooler is sized to dissipate: 170 W for the 9950X. The average draw is probably much lower.

Intel

Fairly easy comparisons really. Looks like AMD for the win here.
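Worth noting: on AM5 the actual socket power limit (PPT) is conventionally 1.35 × TDP, so the 9950X's 170 W TDP corresponds to about 170 × 1.35 ≈ 230 W of package power; that, not 170 W, is the closer analogue of Intel's 253 W PL2.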
 
Joined
May 22, 2024
Messages
408 (2.31/day)
System Name Kuro
Processor AMD Ryzen 7 7800X3D@65W
Motherboard MSI MAG B650 Tomahawk WiFi
Cooling Thermalright Phantom Spirit 120 EVO
Memory Corsair DDR5 6000C30 2x48GB (Hynix M)@6000 30-36-36-76 1.36V
Video Card(s) PNY XLR8 RTX 4070 Ti SUPER 16G@200W
Storage Crucial T500 2TB + WD Blue 8TB
Case Lian Li LANCOOL 216
Power Supply MSI MPG A850G
Software Ubuntu 24.04 LTS + Windows 10 Home Build 19045
Benchmark Scores 17761 C23 Multi@65W
It might be more fair to say that TSMC 7 nm and below are superior to any flavour of Intel 10 nm, rather than AMD's processor architectures being inherently more efficient; it would be hard to tell until both are made on the same node for the same segment. And yes, AMD chips with cIOD do have idle power problems.

My thought is that it is foolish to make a broad generalization about both companies' CPU technologies from this particular Puget analysis. After all, their primary goal is to compare AMD 7000 and 9000-series CPUs in the context of workstation performance.

Presumably energy efficiency will differ between various product segments (server, workstation, enthusiast gamer, business desktop, mini PCs, gaming notebooks, ultrabooks, etc.).

...

And yes, I have a Mac mini M2 Pro that is my primary computer. It uses way less power than the Mac mini 2018 (Intel i7) that it replaced and still provides way more performance.

Sure, the RTX 4080 SUPER has better peak performance than the GPU cores in my Mac, but at what cost? 320 W or more? Since I don't game on my Mac, I can boot up the gaming PC when I want to play.

In summary, there are devices in my house using various silicon (AMD, Intel, Apple, and probably some sort of Arm somewhere). I'm not running enterprise-caliber workstations or datacenter servers, so I'm not particular about the power draw of my most powerful PCs, because their primary use case is gaming. But most of my time is spent on tasks that are not gaming-related, so I choose those devices with a bigger eye toward energy efficiency.

...
If you are interested in power efficiency and have performance to spare on that 4080 SUPER, you can easily shave 100 W off the power limit and lose not much more than 10% performance, with less noise and heat too. That worked on my card, one segment below and with the same silicon.
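(On an NVIDIA card that's typically just nvidia-smi -pl <watts> from an elevated prompt, or the power-limit slider in a tool like MSI Afterburner; no manual undervolting needed.)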
 
Joined
Jun 21, 2021
Messages
3,094 (2.49/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG UltraFine 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
VR HMD Oculus Rift S (hosted on a different PC)
Software macOS Sonoma 14.7
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
If you are interested in power efficiency and have performance to spare on that 4080 SUPER, you can easily shave 100 W off the power limit and lose not much more than 10% performance, with less noise and heat too. That worked on my card, one segment below and with the same silicon.
Nah, I don't care that much. My gaming sessions aren't long. Diddling with undervolt settings is a similar timesink to overclocking.

The same goes for diddling with my Mac settings. I don't.

I figure that the system was designed by people (mostly in their 50s) who have been doing this for 20 or 30 years and have advanced degrees from some of the top engineering schools on the planet. I'm not really going to do a better job than some Apple hardware engineer who has a couple dozen approved patents. Or an Nvidia engineer. Or an Asus one.

If I paid $1000 for a gaming GPU, I want $1000 of performance. If I want $900 of performance, I'll pay $900. Especially for my high-end gaming GPU. Why cripple it? There are plenty of cheaper products on the market with lower performance. I paid a premium price for premium performance.

That's like installing a rev limiter on a Ferrari to conserve gas. If you don't want to burn that gas, grab the keys to the Prius.

In a few games I do limit FPS. That reduces power, heat, and fan noise, but the stock coolers on my current graphics cards are pretty good. The 4080 SUPER Founders Edition is well designed and way quieter than the tragically infernal cooler on the 2070 SUPER Founders Edition.

Unfortunately, so many modern games are so poorly written that you need to throw powerful GPUs at them just to get measly performance.
 

Count von Schwalbe

Moderator
Staff member
Joined
Nov 15, 2021
Messages
3,043 (2.78/day)
Location
Knoxville, TN, USA
System Name Work Computer | Unfinished Computer
Processor Core i7-6700 | Ryzen 5 5600X
Motherboard Dell Q170 | Gigabyte Aorus Elite Wi-Fi
Cooling A fan? | Truly Custom Loop
Memory 4x4GB Crucial 2133 C17 | 4x8GB Corsair Vengeance RGB 3600 C26
Video Card(s) Dell Radeon R7 450 | RTX 2080 Ti FE
Storage Crucial BX500 2TB | TBD
Display(s) 3x LG QHD 32" GSM5B96 | TBD
Case Dell | Heavily Modified Phanteks P400
Power Supply Dell TFX Non-standard | EVGA BQ 650W
Mouse Monster No-Name $7 Gaming Mouse| TBD
Monolithic silicon is always going to win at idle power consumption. Compare a 5700 or 5500 with a 5700X or 5600.

Efficiency is a matter of the power used to complete a certain task. If the task is bottlenecked by anything other than the CPU, its runtime is fixed by that bottleneck, so efficiency can be measured by average power consumption while the task is running.
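If you want to measure that on Linux, here is a minimal sketch using the kernel's powercap sysfs. Assumptions: intel-rapl:0 is the package domain, the counter is in microjoules, root is typically required, counter wrap-around is ignored, and the ffmpeg call is just a placeholder for whatever task you are profiling.

Code:
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package energy counter

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
subprocess.run(["ffmpeg", "-i", "in.mp4", "out.mkv"])  # placeholder task
joules = (read_uj() - e0) / 1e6
seconds = time.time() - t0
print(f"{joules:.0f} J in {seconds:.1f} s -> {joules / seconds:.1f} W average")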
 
Joined
Jul 13, 2016
Messages
3,268 (1.07/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
But real life is more complicated: the processor "dances" between 1% and 100%, and Intel processors are much more efficient at idle and under light or medium load.

Intel isn't more efficient at idle:

[Attachment: idle power consumption chart]


AMD has a slight lead in this chart, but that's likely down to motherboard choice. People don't seem to account for the fact that motherboard choice can swing idle power consumption by up to 50 W or more, depending on what features the board needs to support and how it manages power - particularly when you are comparing a dual-chipset solution against a single-chip one. AMD's X670 and X870 motherboards are going to take about 10 W just for the extra chip, but its mid-range and low-end boards will not. Some motherboards might get you a little extra performance at the cost of higher power consumption.

I'd also contest the idea that Intel is more efficient in mid-utilization workloads:

[Attachment: application efficiency chart]


The chart shows two things:

1) Both AMD and Intel have efficient processors
2) Only Intel chips lower in the stack are efficient. You cannot get Intel's best performance without sacrificing efficiency. Meanwhile, AMD's top chips sit at the top of the efficiency chart.

That's before considering that AMD also scales up better in heavier workloads and wins in gaming efficiency:

[Attachment: multi-threaded efficiency chart]
[Attachment: gaming efficiency chart]



For a home user whose computing ranges from reading news, watching videos, and gaming to somewhat more complex activities (photo and video editing, CAD, virtual machines, etc.), I don't think even God could give a clear verdict on whether a processor's efficiency is OK or not.

In most of those activities AMD is more efficient. It's a tie in web browsing. You don't need God to answer this for you; you just need to either look at the charts or stop worrying. Quibbling over idle power savings when the difference is an average of 3 W is not remotely worth it, regardless of who is leading. This isn't mobile, it's desktop. Intel's power consumption only starts to become a problem at the high end, because cooling is difficult and heat buildup in the computer room is a problem, especially paired with a 4090 / 3090 / etc.
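For perspective: a 3 W average difference running 24/7 works out to 3 × 8,760 h ≈ 26 kWh a year, which is a few dollars at typical residential electricity rates.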

All these "platform idle consumption" below is based on my smart plug's indicator, not a software's result:
My 7950x3d+7900xt flagship pc consumes 200watts on idle(on windows desktop with wallpaper engine& some several light-weight services, the result is doubtly high, perhaps it's due to my 850w copper psu's efficiency&other reasons unknown), while using 7950x3d & a770 consumes only 150watts on idle. Besides, my 12300t+igpu ubuntu platform consumes only 24watts on idle, using a 650w gold sfx psu.

That's extremely high. I have a 7800X + 4090 + X670E mobo + two enterprise SSDs that consume 12 W each 24/7, and my idle power consumption is 112 W (measured with a P3 44000 watt meter). Your PSU selection is not responsible for such a large idle draw; something in your system is not properly entering its idle state.

Monolithic silicon is always going to win at idle power consumption. Compare a 5700 or 5500 with a 5700X or 5600.

Efficiency is a matter of the power used to complete a certain task. If the task is bottlenecked by anything other than the CPU, its runtime is fixed by that bottleneck, so efficiency can be measured by average power consumption while the task is running.

I'm not so sure about that one. Intel's chiplet-based design, with two low-power cores built into the SoC die, might actually consume less power than a monolithic die could, simply because the rest of the chip can be shut off and those cores can be placed close to where they are needed.

In addition, as the amount of data that needs to be transferred around a chip increases, so too does the efficiency benefit of having a high-bandwidth data fabric to transfer said data around. Something that comes with a chiplet-based design.

I can also imagine that stacking chiplets will provide increasing efficiency benefits as well: shorter data lines that are closer to where they need to be, and no need to route around other components on the chip.
 

freeagent

Moderator
Staff member
Joined
Sep 16, 2018
Messages
8,459 (3.76/day)
Location
Winnipeg, Canada
Processor AMD R7 5800X3D
Motherboard Asus Crosshair VIII Dark Hero
Cooling Thermalright Frozen Edge 360, 3x TL-B12 V2, 2x TL-B12 V1
Memory 2x8 G.Skill Trident Z Royal 3200C14, 2x8GB G.Skill Trident Z Black and White 3200 C14
Video Card(s) Zotac 4070 Ti Trinity OC
Storage WD SN850 1TB, SN850X 2TB, SN770 1TB
Display(s) LG 50UP7100
Case Fractal Torrent Compact
Audio Device(s) JBL Bar 700
Power Supply Seasonic Vertex GX-1000, Monster HDP1800
Mouse Logitech G502 Hero
Keyboard Logitech G213
VR HMD Oculus 3
Software Yes
Benchmark Scores Yes
My 5900X..



[Attachment: Screenshot 2024-10-15 221308.png (5900X monitoring)]
 

NoLoihi

New Member
Joined
Sep 15, 2024
Messages
8 (0.13/day)
The I/O die of the 7700X consumes about 15 W at idle, so if you set the power limit 10 W higher, the 7700X's performance will be better.

And yes, AMD chips with cIOD do have idle power problems.
I'm still salty about it, so let it be known that AM4-platform chipsets also had power consumption problems. I meticulously weighed my options, buying used in, uh, 2020, 2021 (?), and to what result?! The wacky chipset has been pulling about as much power as the CPU itself all along, when using the machine for surfing, reading text, and such. Today, I wish I hadn't bought it. I could have bought an N100 instead and saved some cash; or its Skymont successor, whenever it lands (probably not too soon, if performance per clock is as good as announced), or something bigger from Intel, where they design for efficiency in ordinary use, whereas AMD has had its wins mostly in, like, max-utilisation throughput tasks.

Some undervolting also goes a long way, for both sides, but probably more for Intel.

I've had some problems in the meantime, but I've now seen evernessince's post, an hour later, right when I wanted to hit send, so I'm not going to reply to much of it. But some graphs are misleading (averages of instantaneous power versus the total "power" (energy) consumed to complete each task, with the task selection being debatable to begin with), and in particular this claim
so too does the efficiency benefit of having a high-bandwidth data fabric to transfer said data around. Something that comes with a chiplet-based design.
is… uh… it leaves me confused. Lunar Lake has a high-bandwidth (I guess, if we say so) data fabric within a single die, and it, I don't know, exists. :confused:

Forgot to add: the only thing I can't deal with are the Guru3D numbers; that's not what I had in mind. Hmm.
I have a 7800X + 4090 + X670E mobo + two enterprise SSDs that consume 12 W each 24/7, and my idle power consumption is 112 W (measured with a P3 44000 watt meter).
* That's, uhm, outright ludicrous. I don't mean that in the fun way; rather, I'm regularly using laptops, I've looked at mini PCs in depth, and even a figure a full base-ten order of magnitude lower would not be considered good idle power draw there.
*(The automatic quoting feature didn't work while editing.)
 
Joined
Oct 23, 2020
Messages
56 (0.04/day)
I'd also contest the idea that Intel is more efficient in mid-utilization workloads:

[Attachment: application efficiency chart]

The chart shows two things:

1) Both AMD and Intel have efficient processors
2) Only Intel chips lower in the stack are efficient. You cannot get Intel's best performance without sacrificing efficiency. Meanwhile, AMD's top chips sit at the top of the efficiency chart.
Both Intel and AMD offer significantly more power-efficient variants of their high-end CPUs: for Intel it's the non-K and T variants, while for AMD it's the non-X and X3D variants.
But you usually don't see the non-K variants of the high-end parts being tested, and you pretty much never see the T variants tested.
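(For example, if I'm reading Intel's spec sheets right, the 14900T is rated at 35 W base power and 106 W maximum turbo power, versus the 14900K's 125/253 W.)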

For example, here's the 14900K at lower power limits:
[Attachments: 14900K performance and efficiency charts at reduced power limits]
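For anyone wondering how such limits get applied outside the BIOS, here is a rough sketch via the Linux powercap sysfs (constraint_0 is the long-term limit, PL1; constraint_1 the short-term limit, PL2; values are in microwatts and root is required; the wattages below are illustrative, not the ones used in those charts):

Code:
BASE = "/sys/class/powercap/intel-rapl:0"  # package power domain

def set_limit(constraint: int, watts: int) -> None:
    # constraint_0 = long-term (PL1), constraint_1 = short-term (PL2)
    with open(f"{BASE}/constraint_{constraint}_power_limit_uw", "w") as f:
        f.write(str(watts * 1_000_000))

set_limit(0, 125)  # PL1 = 125 W
set_limit(1, 180)  # PL2 = 180 W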


For the most part, at the high end AMD is more efficient by 10% or so, while at the mid-range and low end Intel is slightly more efficient in most cases.
The difference isn't significant for either side.
 

tabascosauz

Moderator
Supporter
Staff member
Joined
Jun 24, 2015
Messages
8,124 (2.37/day)
Location
Western Canada
System Name ab┃ob
Processor 7800X3D┃5800X3D
Motherboard B650E PG-ITX┃X570 Impact
Cooling NH-U12A + T30┃AXP120-x67
Memory 64GB 6400CL32┃32GB 3600CL14
Video Card(s) RTX 4070 Ti Eagle┃RTX A2000
Storage 8TB of SSDs┃1TB SN550
Case Caselabs S3┃Lazer3D HT5
I'm still salty about it, so let it be known that AM4-platform chipsets also had power consumption problems. I meticulously weighed my options, buying used in, uh, 2020, 2021 (?), and to what result?! The wacky chipset has been pulling about as much power as the CPU itself all along, when using the machine for surfing, reading text, and such. Today, I wish I hadn't bought it. I could have bought an N100 instead and saved some cash; or its Skymont successor, whenever it lands (probably not too soon, if performance per clock is as good as announced), or something bigger from Intel, where they design for efficiency in ordinary use, whereas AMD has had its wins mostly in, like, max-utilisation throughput tasks.

Some undervolting also goes a long way, for both sides, but probably more for Intel.

I've had some problems in the meantime, but I've now seen evernessince's post, an hour later, right when I wanted to hit send, so I'm not going to reply to much of it. But some graphs are misleading (averages of instantaneous power versus the total "power" (energy) consumed to complete each task, with the task selection being debatable to begin with), and in particular this claim

is… uh… it leaves me confused. Lunar Lake has a high-bandwidth (I guess, if we say so) data fabric within a single die, and it, I don't know, exists. :confused:

Intel's new disaggregation strategy is a bit different. Foveros in its current form (Meteor Lake and later) is pretty efficiency-focused, as you can probably tell from the island E-cores (albeit not on desktop). But it's also on a completely different level in terms of technology; AMD's IFOP is really about as primitive as interconnects get these days. Unlike Intel's tiling, which still needs to meet (and in practice even beat) strict idle power standards for laptops, IFOP was just cheap and easy above all else. Bringing it to laptops (Dragon Range) was just an afterthought, and only in chunky form factors that don't care about efficiency.

In AMD's corner, GPU Infinity Link is quite a bit closer sophistication-wise and performance-wise, but I might argue that even that is less cutting-edge than Foveros. We might see a better apples-to-apples comparison between the two technologies if Strix Halo finally decides to make an appearance, as it will require new tech.

It's not impossible to get IFOP CPUs down in package power, but it takes a lot of work (lowering voltages as much as possible) and a lot of luck (mobo choice).

tbh I think the idle power argument for CPUs is a bit overblown. By contrast, multi-display idle for current-gen GPUs (RTX 40 vs. RDNA3) gets real ugly real fast - the difference isn't 10-15 W, it can get closer to 100 W. And a gap like that is hard to mitigate and shows up even in whole-system power figures.

It might also have something to do with Intel running Gear 2 vs. AMD running 1:1 UCLK for DDR5. On desktop, Intel pretty much always idles lower (even against the APUs, slightly), but on mobile it's a different story even with the same silicon, as neither UCLK nor FCLK is bound so strictly to high clocks all the time.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,877 (3.82/day)
Location
Alabama
System Name RogueOne
Processor Xeon W9-3495x
Motherboard ASUS w790E Sage SE
Cooling SilverStone XE360-4677
Memory 128gb Gskill Zeta R5 DDR5 RDIMMs
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 2TB WD SN850X | 2x 8TB GAMMIX S70
Display(s) 49" Philips Evnia OLED (49M2C8900)
Case Thermaltake Core P3 Pro Snow
Audio Device(s) Moondrop S8's on schitt Gunnr
Power Supply Seasonic Prime TX-1600
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Moondrop Luna lights
VR HMD Quest 3
Software Windows 11 Pro Workstation
Benchmark Scores I dont have time for that.
tbh I think the idle power argument for CPUs is a bit overblown.
Same

This is my current CPU idle.

[Attachment: CPU idle monitoring screenshot]


I wish I could dream big like you guys.

:(
 