
I switched from LGA1700 to AM5, here are my thoughts

Joined
Jul 29, 2023
Messages
48 (0.07/day)
System Name Shirakami
Processor 7800X3D / 2000 & 2000 IF & UCLK
Motherboard Gigabyte B650i AORUS Ultra
Cooling Corsair iCUE H100i ELITE CAPELLIX w/ ultra slim 2x120mm fans
Memory 2x24GB Hynix M-Die @ 8000 - 38-48-48-40 - 40k tREFI & tuned subtimings
Video Card(s) 6900 XT Reference / -120 mV @ 2.4 GHz
Storage 1 & 2 TB NVMe - 1 TB SATA SSD
Display(s) LG 34GN850 (3440x1440) 160 Hz overclock
Case Lian Li Q58
VR HMD Reverb G2 V2
Software Fedora Linux 41 operating system (daily)
Earlier this week, I made the jump from my 13700k to a 7800X3D. Oddly enough, I didn't do this because of degradation or the current Intel debacle. Rather, I switched because of power draw and thread scheduling. My system was completely stable in the one year I owned it. My specifications were as follows:

13700k - ASUS Z690 ITX - DDR5 7200 2x24GB - 6900 XT

My new specifications are

7800X3D - Gigabyte B650i AORUS ULTRA - 7200 2x24GB @ 6000 - 6900 XT

My Intel system was fairly tuned (~10% more performance, mostly from the memory and ring) and the 7800X3D is.. as tuned as you can make it (~7% more performance).

To begin with, I loved my 13700k. I found it to be a wonderful CPU and a nice upgrade from a 10900k. Unfortunately, the power draw was quite immense in modern games - Cyberpunk 2077 averaged around 140 watts with spikes upwards of 165 on occasion (HWiNFO64 CPU package power). I tried mitigating this by disabling Hyper-Threading, but the power draw only dropped to about 125 with spikes up to 150. I also tried disabling the efficiency cores (HT still enabled), which lowered the power draw to around 120 with spikes up to 140. Disabling both HT and e-cores resulted in the most substantial drop, going all the way down to 115 watts with no real spikes to speak of. The performance difference between all three setups was less than 10%, which is one of the main driving factors that pushed me towards changing platforms. If it's pulling upwards of 140 watts in current games, I can't even imagine what the CPU will draw five years from now.. and electricity isn't getting any cheaper.

Disabling HT feels bad, even if it's not a huge driving factor for gaming performance. Disabling the e-cores feels even worse, considering those are tangible cores, unlike HT, which just shoves more instructions through the same cores on each clock cycle. There's also the argument that disabling either of those features will hurt gaming performance in the future - leaving you with only one option. Underclocking. You can set a flat 5 GHz on the p-cores and 3.9 GHz on the e-cores at a fairly reasonable voltage; the power consumption gets cut by roughly 25% in gaming scenarios.. but you also completely lose out on Raptor Lake being a monstrosity when it comes to clockspeed. It feels just as bad as the above concessions. There is also the issue of thread scheduling; I do not run Windows 11, I daily Linux, which didn't get proper support for Raptor Lake until kernel 6.6 in October 2023. I use the term "support" loosely, because in gaming scenarios the e-cores still become loaded with instructions. Windows 10 was and is my go-to whenever I need to dual boot; however, the thread scheduler in Windows 10 isn't anywhere near as nice as the one in Windows 11. It's a similar issue to Linux, wherein the efficiency cores are loaded with instructions, causing noticeable microstutters in gaming situations.
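For anyone on Linux hitting the same scheduling problem, one blunt workaround is taking the e-cores offline entirely through CPU hotplug. This is only a rough sketch, not something from the post above: the CPU numbering is an assumption (on a 13700K with HT on, logical CPUs 0-15 are usually the P-core threads and 16-23 the e-cores) and should be checked with `lscpu --extended` first; it also needs root and is reversible by writing "1" back.

```python
from pathlib import Path

# Assumption: logical CPUs 16-23 are the eight e-cores on a 13700K.
# Verify with `lscpu --extended` before running; layouts can differ.
ECORE_IDS = range(16, 24)

for cpu in ECORE_IDS:
    online = Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
    if online.exists():
        online.write_text("0")  # "0" takes the core offline, "1" brings it back
        print(f"cpu{cpu} taken offline")
```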

Add to all of the issues I've listed above the poor memory controller found on 13th and 14th generation - with stability drifting on and off when aiming for speeds beyond 7200 MT/s - alongside the IHS bending, requiring a third-party loading mechanism (which I purchased) - not to mention the current degradation due to the SVID requesting upwards of 1.65 V ..

Needless to say, after over a year of ownership, I was fairly sick of it all.

The power was a problem I needed to solve. I undervolted + overclocked the system and set a strict power limit of 175W to ensure AVX2 workloads wouldn't cause crashing. It only needed to be stable for games, not heavy workloads, therefore the voltage I aimed for was whatever could run Cyberpunk 2077 while CPU-bound.

The scheduling was an issue I needed to solve, so I purchased Process Lasso for when I use W10 - which I highly recommend; it's a wonderful application.
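As a rough illustration of the kind of thing Process Lasso automates (and not a claim about how that application works internally), the sketch below pins a game process to the P-core threads only using psutil. The CPU indices and the process name are assumptions for a stock 13700K; adjust both for your own system.

```python
import psutil

P_CORE_THREADS = list(range(0, 16))  # assumption: P-core threads on a 13700K with HT
TARGET = "Cyberpunk2077.exe"         # hypothetical process name, purely illustrative

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        proc.cpu_affinity(P_CORE_THREADS)  # restrict the scheduler to these CPUs
        print(f"pinned PID {proc.pid} to the P-cores")
```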

The bending was an issue I needed to solve, so I purchased the ThermalTake loading mechanism.

The lack of memory stability at XMP speeds (7200) was an issue I needed to solve, which took weeks of tuning PLL voltages, VDD & VDDQ, IMC voltages, SA voltages.. etc.

The heat itself was an issue I needed to solve; I was highly uncomfortable with my CPU pulling 140 watts in games only to thermal throttle during loadscreens, since the radiator, IHS, and liquid were all heat-saturated (240mm AIO).

The only saving grace of my build was the ASUS Z690 Gaming WiFi ITX motherboard. I absolutely adored that motherboard; it was overengineered and the VRMs never rose past 65C in stress tests.. that, and the single-threaded performance. Raptor Lake has absolutely insane ST performance. My 7800X3D achieves 655 points in CPU-Z single-threaded, while the 13700k is upwards of 900.

Admittedly, as of writing this, I haven't been on AM5 long enough to have experienced SoC voltages murdering AM5 CPUs outright. I also wasn't around for when memory training took upwards of 5 minutes. And I wasn't here for when core parking was a problem either. However, in the here-and-now, I'm absolutely in love with the ease of use of my 7800X3D. It performs about the same as, and sometimes better than, my 13700k did in gaming situations.. while drawing 40 watts. If I were recommending a system to a friend today, I couldn't in good conscience recommend LGA1700 - the experience I had with it was quite negative, even if the CPU itself was blisteringly fast.

I hope that whatever Intel releases for their next generation is more refined. Competition is a very, very good thing - without it, we wouldn't even have an alternative like the 7800X3D to choose from. I love both teams. I even purchased a 1700X the day it came out and dealt with the fiasco of very buggy X370 BIOSes. I loved my 10900k as well; it was a wonderful CPU, but it was made obsolete very quickly when 12th generation released - considering the e-cores were almost on par with it in ST performance. The 3570k I had before that was rock-solid, I never had any issues with it whatsoever. But this entire LGA1700 experience has just been a mess for me personally; in retrospect, I wish I had purchased a 12900k in place of the 13700k. At least on Alder Lake, disabling the efficiency cores granted you a boost to cache speed - so it wasn't a complete loss.

But as things stand, I'll be holding on to this chip for a while. I should note the 7800X3D did not exist at the time I purchased my 13700k.
 
really 200w intel vs 80w amd makes a difference in power bill total
 
really 200w intel vs 80w amd makes a difference in power bill total
Plus an extra 25 W at all times when not under load.

View attachment 360088

If you're so concerned about power draw, why are you using an AMD GPU?

I'm also a little confused as to -
Evelynn said:
the poor memory controller found on 13th and 14th generation
Compared to what? AMD's setup that can't go past 6400? Or more realistically, 6200 MT. Raptor Lake can do 7200 on any four DIMM board, 7600 on any two DIMM board, and with some effort, 8000+. Karhu stable.

This results in ~125 GB/s read/write/copy with latency around 45-50 ns, compared to Zen 4/5's ~70-80 GB/s @ 55 ns.

As for the degradation issues, they've been patched; if that's still not enough, simply set voltages manually - presto, problem solved. If that's still not enough, Intel extended the warranty by two years on top of the base warranty.

Anyway, it's an odd time to sidegrade to the competing platform, with Zen 5 X3D around the corner along with Arrow Lake.
 

I'm not too concerned with idle power, since I typically have an application open in the background that forces my cores to sit at ~15% utilization at all times.

I use an AMD GPU because it's 16 GB and the RTX 3080 it replaced was 10GB - which caused me to run out of VRAM in Virtual Reality games very, very often - and still would, if I hadn't switched. It's also undervolted and underclocked to pull only 180w.
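Since the OP dailies Linux with an amdgpu card, capping board power is one piece of how a ~180 W target like that can be enforced (the undervolt and clock ceiling live in other controls). This is a hedged sketch only - the card index and hwmon path vary per system, power1_cap is in microwatts, and it needs root.

```python
import glob
from pathlib import Path

CAP_WATTS = 180  # assumption: the ~180 W figure mentioned above

# amdgpu exposes the board power limit through hwmon; the exact hwmon index varies.
matches = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*/power1_cap")
if matches:
    cap_file = Path(matches[0])
    cap_file.write_text(str(CAP_WATTS * 1_000_000))  # value is in microwatts
    print(f"power cap set to {CAP_WATTS} W via {cap_file}")
else:
    print("no amdgpu power1_cap found; check the card index")
```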

I didn't say the AMD IMC was better, it's definitely not. The 7800X3D has one CCX, resulting in a nominal speed of 64 GB/s read - compared to the 110+ of my 13700k.

You know full well that degradation can't be patched out. What Intel did was force the SVID to the board vendors' fail-safe modes and implement amperage limits across the lineup. Your CPU won't crash if it's degraded because it's going to throttle on amperage first.

Weird and overly aggressive comment to my recollection of the last year of dealing with LGA 1700's problems but you do you.
 
Doing an entire platform sidegrade to save 56 watts under gaming load while adding 25 watts on idle.

Uneasy about undervolting/disabling HT/disabling E-cores (or using Win 10 for gaming and Process Lasso, or Win 11 for the scheduler) since ~10% performance is lost. But then running a 300-600 W GPU at 180 W to save power (also resulting in a performance loss).

View attachment 360100


This entire concept is... confusing.

Nothing about my comment is aggressive, it's simply confused questioning instead of the praise you may have expected.

View attachment 360097


Note these CPU gaming tests are done with a 4090, so they represent worst case scenario for gaming power draw (when paired with fastest GPU).
 
I'm not a "Greeny" one, but im an ANTI-WATT escalation on the consumer market.
Where is these "Greedy" capitalists contributions for a better evolution of this world and their footprint of innovated technology advances to increased reduction of energy consumption...
I do not have the budget neither will take such hardware even with a good deal new/used, when seeing te current wattage of some cpus around, or the "Insane" gpu escalation, specially from the NVidia "Monster".
Sure we do have new great advances, lots of cool new things... but this is working a lot worst to all the world society and their economy... but seems to be great times for "We know who..."
 
I'm not sure this was the wisest move, considering you could have easily purchased a chip like a Core i9-14900T and upgraded to a much more modern Nvidia GPU such as the 4080 Super, undervolted it, and made use of DLSS to further save power, performing leaps and bounds ahead of the RX 6900 XT all while using *significantly* less power. Besides, are you even running an 80 Plus Titanium power supply capable of 90% conversion efficiency at 10% load to make the best of these power savings? It's become an increasingly overlooked detail as people aggressively chase watt savings - Gold or even Platinum-grade power supplies are often only 60 to 80% efficient at 10% load, meaning a good chunk of the power you save gets eaten by conversion losses at such a reduced load.

In fact, simply upgrading the graphics card to an Nvidia model and using DLSS would have beaten all the other power savings I can see here - DLSS with Frame Generation makes it possible to play games like Starfield targeting 4K/120 at below 150 watts of GPU power on a 4080, and that is from experience.
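To put the PSU efficiency point into numbers (purely illustrative percentages, not measurements of any particular unit): the wall draw is the DC load divided by the conversion efficiency, so a low-load efficiency gap shows up directly on the meter.

```python
def wall_draw(dc_load_w: float, efficiency: float) -> float:
    """Power pulled from the wall for a given DC load and conversion efficiency."""
    return dc_load_w / efficiency

# Illustrative low-load efficiencies; real curves depend on the specific PSU.
for label, eff in [("~75% efficient at light load", 0.75),
                   ("~90% efficient at light load", 0.90)]:
    print(f"{label}: 60 W DC -> {wall_draw(60, eff):.1f} W at the wall")
# ~75% efficient at light load: 60 W DC -> 80.0 W at the wall
# ~90% efficient at light load: 60 W DC -> 66.7 W at the wall
```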
 
Spot on.

Additionally, percentage efficiency improvements don't convey the absolute numbers: a high-end CPU will use 50-100 W under gaming load, but a high-end GPU will use 2-6x that. So improving GPU efficiency is significantly more effective at reducing power consumption than moving to a CPU with the same percentage efficiency improvement.

High-efficiency PSUs are also underrated: not only do you reduce heat output into the case, you also typically improve the power quality sent to the components, prolonging their life.

 

I didn't "sidegrade" only because of power draw, I listed many other problems I had. It took me weeks to tune the PLL's, IMC, and SA voltages to achieve 7200 MT/s - all of the voltages sweetspot on LGA1700, and by sweetspot I mean 0.020mV means the difference between stability and errors 24 hours into Karhu. AM5 doesn't have that issue, when you set 6000 ~ 6200 ~ 6400 - it either works or it doesn't. And the boards will, for the most part, happily run speeds in excess of 7200 MT/s even if the CPU itself can't utilize such high bandwidth.

I also discovered, weeks into owning my system, that the IHS will bend to unacceptable degrees due to thermal cycling, which meant purchasing a third-party loading mechanism, taking apart my system entirely, and installing it. Three times. Because the loading mechanism can be installed too loosely, compounding the IMC's already-drifting stability issues, since mounting pressure is key to achieving memory stability.

I also mentioned that the IHS, radiator, and internal case air become saturated during gaming, resulting in the CPU thermal throttling during AVX2 enhanced loadscreens. Straight to 95C, even with an undervolt.

Also, the GPU I'm running loses 6% performance when undervolted - going from 255w down to 180w, when the SoC isn't factored in. I don't want to purchase an RTX 4080, it's over 1000 USD and I disagree with Nvidia's current pricing lineup and will not support it.

The entire point of this post was to list the issues I had with LGA1700 - not the issues that EVERYONE has with LGA1700. There are hundreds of thousands of people more than happy with their systems. I wanted to state the reasons why I switched and why I am not recommending LGA1700 to my friends currently.

.. and it's a lot more than 56 watts. Whether it's this HardwareUnboxed video which proves as much, or my own personal experiences.

[screenshot from the Hardware Unboxed video]
 
it's a lot more than 56 watts. Whether it's this HardwareUnboxed video which proves as much, or my own personal experiences.
I trust TPU testing as it's reliable and uses the correct "stock" settings from the CPU manufacturers, rather than whatever the motherboard manufacturer decided to throw at the wall to get better "reviews" from YouTubers who test out-of-the-box Cinebench runs and in-game benchmarks rather than the custom scenes TPU uses.

AM5 not having issues running EXPO 6000 isn't really an argument for the platform over Intel, since Intel doesn't have issues at XMP 6000-6400 MT either, at least with 32/48 GB kits, but at least it has the option of going much higher, plus it gets better numbers even with identical MT/timings anyway (96/83/85 compared to 78/78/70 GB/s, both at 6000 MT). Seems you tried to push 7200 and had trouble, likely due to a Z690 motherboard or perhaps a bad memory kit; I find it hard to believe a Raptor Lake CPU was genuinely the cause of memory struggles at 7200 MT on a two-DIMM board.

I see, so you bought AMD because of a moral argument against NVIDIA, cool.

Hopefully you got a decent price on the platform sidegrade at least, considering Zen 5 is out now; that affected prices a bit, though the 7800X3D seems to have held its value. Motherboards at least have gotten cheaper, and the refresh boards don't offer anything new, so you're not missing out on anything there.

A word of advice: don't run the IF out of sync with the RAM. I see you're using 2100 MHz, but are running 3000 MHz/6000 MT RAM. You'll ironically get consistently better gaming performance from 2000 MHz IF, so the 1:1.5:1.5 sync isn't broken.
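Spelling out the ratio arithmetic behind that advice (the usual Zen 4 convention, given here as an illustration rather than anything from the post itself): DDR5-6000 means a 3000 MHz memory clock, UCLK run 1:1 with it, and FCLK at 2000 MHz, which is the 1:1.5:1.5 shorthand.

```python
ddr_mt_s = 6000
mclk = ddr_mt_s / 2   # DDR transfers twice per clock -> 3000 MHz memory clock
uclk = mclk           # memory controller clock kept 1:1 with MCLK
fclk = 2000           # Infinity Fabric clock commonly recommended for gaming

print(f"FCLK:UCLK:MCLK = {fclk/fclk:g} : {uclk/fclk:g} : {mclk/fclk:g}")
# FCLK:UCLK:MCLK = 1 : 1.5 : 1.5
```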

Enjoy your 7800X3D.
 
I bought a 13700KF on day one and ran it at stock with an air cooler in an ITX case - I was able to pass a stock Cinebench run without throttling on a simple Peerless Assassin with my chip - I know not all chips are the same.

I too ended up undervolting after doing some benches and seeing a 2 fps difference between 5.6 GHz and 5.3 GHz - tuned the ring and bought some cheapo 7600 MT/s RAM and sold my old DDR5 kit to scratch the upgrade itch.

I think as any platform ages you will fall out of love with it for one reason or another - the 13700K's power draw, for when it was released, was fine - not great. At that time it was performing like a 12900KS for less money and fewer watts. It drew more than the 7700X but also had something like 40% higher productivity and better gaming performance - so it compared favorably to Zen 4 and now even Zen 5.

If your primary use is gaming tho the 7800x3d can't be beat. The whole power argument I always found to be moot because of how low the 13700k idles:
[screenshot of 13700K idle power draw]

And unless you're gaming 24/7, chances are you're idling more than you are actually loading (especially during normal desktop use). So your annual electricity bill is probably higher with the 7800x3d if you leave your rig on for any amount of time outside of gaming. Again, it doesn't matter - we're not talking massive numbers here - but during work/desktop usage it's running 15-20W lower consistently than the AM5 builds.
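Back-of-the-envelope numbers for that trade-off, with every figure below being an assumption to plug your own values into rather than a measurement from this thread:

```python
GAMING_DELTA_W = 60    # assumed extra CPU draw of the Intel box while gaming
IDLE_DELTA_W   = 20    # assumed extra draw of the AM5 box at idle/desktop
PRICE_PER_KWH  = 0.30  # assumed electricity price

def annual_cost(extra_watts: float, hours_per_day: float) -> float:
    """Yearly cost of an extra wattage delta for a given daily usage."""
    return extra_watts / 1000 * hours_per_day * 365 * PRICE_PER_KWH

gaming_hours, desktop_hours = 3, 6  # assumed daily split
print(f"Intel gaming penalty: ~{annual_cost(GAMING_DELTA_W, gaming_hours):.0f} per year")
print(f"AM5 idle penalty:     ~{annual_cost(IDLE_DELTA_W, desktop_hours):.0f} per year")
```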
 
I don't want to purchase an RTX 4080, it's over 1000 USD and I disagree with Nvidia's current pricing lineup and will not support it.

Interesting, because the aforementioned 4080 Super has the exact same MSRP as the 6900 XT had when it came out. Yet you purchased a 6900 XT all the same.
 
Have to agree with other comments. The reasons for this upgrade are really strange.

Blaming Intel because your 7200 RAM is not stable, while running the same RAM underclocked at 6000 on AMD.
Complaining about 13700K power draw vs 7800X3D while running a power hog of a GPU.
All that fiddling with HT and E-cores is also strange, when you could have just lowered the CPU power limit to 65W while losing maybe 2-3% FPS in games.
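Since the OP dailies Linux, here's a hedged sketch of applying that kind of power limit from software through the intel_rapl powercap interface rather than the BIOS. The path and value are assumptions to verify on your own system: constraint_0 is the long-term limit, values are in microwatts, and it needs root.

```python
from pathlib import Path

LIMIT_W = 65  # the 65 W figure suggested above
pl1 = Path("/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw")

if pl1.exists():
    pl1.write_text(str(LIMIT_W * 1_000_000))  # powercap values are in microwatts
    print(f"long-term package power limit set to {LIMIT_W} W")
else:
    print("intel-rapl powercap interface not available on this system")
```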
 

Yup, it's perfectly ok to get bored of something and upgrade it, I like exciting new things too - that's why I sold my 3090 and got the 4080 after all. No need to make up excuses for any of it :D
 
Guys, reread their post. It isn't just about the total watts dissipated; it's the fact that the heat soak from the CPU is far lower.

Yup, it's perfectly ok to get bored of something and upgrade it, I like exciting new things too - that's why I sold my 3090 and got the 4080 after all. No need to make up excuses for any of it :D
This is something I've also realised on these forums - a lot of people spend a lot of time trying to justify their hardware purchase to others, when the real reason is that they just wanted to buy something new and shiny. And that's totally fine! You don't have to justify anything that makes you happy and doesn't hurt others - it's your hard-earned cash!
 
At the end of the day, PCs are a hobby for a lot of us, and hobbies don't ask for justifications.
 
Oooofff, that reference to another HWUB screenshot - people nowadays "rely" on those for their arguments, when the testing methodology isn't bulletproof and is somewhat inconsistent from run to run.
 
And unless you're gaming 24/7 chances are you're idling more than you are actually loading (especially during normal desktop use). So your annual electricity bill is probably higher with the 7800x3d if you run your rig on for any amount of time outside of gaming.
QFT.
Btw, 4070S uses only 6w in normal desktop/browser/YouTube use, 140-180w in gaming typically, and it pretty much ties with 6900xt on performance. It would be a better sidegrade if the OP was looking for power savings.
 
And unless you're gaming 24/7, chances are you're idling more than you are actually loading (especially during normal desktop use). So your annual electricity bill is probably higher with the 7800x3d if you leave your rig on for any amount of time outside of gaming. Again, it doesn't matter - we're not talking massive numbers here - but during work/desktop usage it's running 15-20W lower consistently than the AM5 builds.

Light desktop usage like browsing the web or doing office tasks is not the same as idle. Such a workload requires the CPU to boost one to three cores to a much higher point on their frequency table. For CPUs like the 13700K, 13900K, 14700K, 14900K, etc., which push a lot of power to achieve high clocks, that could very well mean they are less efficient in light usage.

If you read the OP's post, he explicitly points out that he is in fact always running something that keeps the CPU at around 15% utilization.

Plus an extra 25 W at all times when not under load.

I think it's wise to point out that TPU's idle power numbers are total system power consumption. When comparing the actual consumption of the CPU at idle, both Intel and AMD are similar:


[chart: CPU-only idle power consumption]



This is important to note because if you look at TPU's test setup, it's using an X670 board, which is a dual-chipset solution. That likely increases power consumption compared to a single-chipset B-class motherboard like what the OP has. There's also going to be variability depending on which motherboard you choose, as some vendors tend to favor performance over saving a few idle watts.

This is inherently the caveat of whole-system power metrics: you have to consider that the platform itself is a variable that impacts the resulting number, and it can have a significant impact when we are talking sub-86 W. You can't rightfully say that the OP's platform is less efficient at idle because you have not accounted for the difference in platform, particularly given that the OP's B-class mobo has one less chip.
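One way to separate CPU package draw from whole-system numbers is to sample the RAPL energy counter over an interval - a rough sketch, assuming the powercap interface is exposed (it is on recent Intel parts and on many recent AMD ones too, though support varies) and that you have root:

```python
import time
from pathlib import Path

energy = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package domain

e0, t0 = int(energy.read_text()), time.time()
time.sleep(10)  # sample window; leave the system idle meanwhile
e1, t1 = int(energy.read_text()), time.time()

# Counter is in microjoules; ignoring wraparound is fine for a short window.
watts = (e1 - e0) / 1_000_000 / (t1 - t0)
print(f"average package power over {t1 - t0:.0f} s: {watts:.1f} W")
```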

Weird and overly aggressive comment to my recollection of the last year of dealing with LGA 1700's problems but you do you.

That's just dgianstgefani; they've been using that whole-system idle power consumption metric as the straw they grasp anytime anyone on TPU dares say AMD is more efficient. Of course Zen is absolutely more efficient; he might have a point if he were to say the platform was less efficient at idle.

Just take a look at his system specs:

[screenshot of his system specs]


Says all you need to know about which he thinks is better.


Have to agree with other comments. The reasons for this upgrade are really strange.

Blaming Intel because your 7200 RAM is not stable, while running the same RAM underclocked at 6000 on AMD.
Complaining about 13700K power draw vs 7800X3D while running a power hog of a GPU.
All that fiddling with HT and E-cores is also strange, when you could have just lowered the CPU power limit to 65W while losing maybe 2-3% FPS in games.

You lose a lot more than 2-3% FPS in games (and yes, this isn't a 13700K power scaling chart, but good luck finding that):

[power scaling chart]


Couldn't find a 13700K power scaling chart but here's one for the 13900K:

[13900K power scaling chart]



95w is really the lowest you want to go with the higher-end 13th and 14th gen Intel CPUs. Below that, performance starts dropping off a cliff. The 14900K sees its MT performance nearly halved, as does the 13900K. The 13700K won't be as bad due to having fewer cores to feed, but it certainly wouldn't make sense to buy such a CPU and then limit its performance that much.
 
Interesting, because the aforementioned 4080 Super has the exact same MSRP the 6900 XT had when it came out. Yet you purchased a 6900 XT all the same.
I bought the 6900 XT for 500 USD on sale, towards the end of its life, from AMD's website.
I don't mind that it was nearly 1k USD either, because the die size is massive compared to what Nvidia is currently offering you and what they have offered in the past.

Not only that, but I use Linux - the drivers for AMD GPUs are baked into the kernel. Nvidia on Linux is not ideal, as the drivers were closed-source until recently, and the open-source drivers are not very mature.
View attachment 360116
View attachment 360117
View attachment 360118
 
At least you don't need to worry about the Inhell chip breaking anymore. And 7800X3D is practically the best gaming CPU even today.

edit: that 6900 XT should still be fine for some time; my 3080 is a weaker card, yet I play at 4K120, which has way more pixels to render than your 1440p UW.
 
I bought the 6900 XT for 500 USD on sale towards the end of life from AMD's website.
I don't mind that it was nearly 1k USD either, because the die size is massive compared to what Nvidia is currently offering you and what they have offered in the past.
View attachment 360116 View attachment 360117 View attachment 360118

Sure, you got it on a clearance deal, valid. Ada isn't that old yet. But why are we comparing a TSMC N4P chip (bleeding edge node) to an ancient processor from 2010 and developed during the late 2000s?
 


other reviews will show a 10-15W idle / normal usage delta... there's a few of them.

Also, losing 3% at 125W at 720p isn't bad....
 
Nvidia on Linux is not ideal as the drivers were closed-source until recently, and the open-source drivers are not very mature.

Meanwhile

[image attachments]



If you're going to parade as a Linux enthusiast - and god knows we need more of them - please do the research instead of echo-chambering the last 20 years of GPUs on Linux. It doesn't do anyone any good when you try hard to sell the environment as a whole and then drop a "but <fanboy argument, even in this realm of computing>" - it makes everyone look bad.


As for the on-topic part of this thread: big sad :c I idle at like 150w.
 