
CISC vs RISC - Does it affect cooling?

Joined
Dec 16, 2017
Messages
2,722 (1.19/day)
Location
Buenos Aires, Argentina
System Name System V
Processor AMD Ryzen 5 3600
Motherboard Asus Prime X570-P
Cooling Cooler Master Hyper 212 // a bunch of 120 mm Xigmatek 1500 RPM fans (2 ins, 3 outs)
Memory 2x8GB Ballistix Sport LT 3200 MHz (BLS8G4D32AESCK.M8FE) (CL16-18-18-36)
Video Card(s) Gigabyte AORUS Radeon RX 580 8 GB
Storage SHFS37A240G / DT01ACA200 / WD20EZRX / MKNSSDTR256GB-3DL / LG BH16NS40 / ST10000VN0008
Display(s) LG 22MP55 IPS Display
Case NZXT Source 210
Audio Device(s) Logitech G430 Headset
Power Supply Corsair CX650M
Mouse Microsoft Trackball Optical 1.0
Keyboard HP Vectra VE keyboard (Part # D4950-63004)
Software Whatever build of Windows 11 is being served in Dev channel at the time.
Benchmark Scores Corona 1.3: 3120620 r/s Cinebench R20: 3355 FireStrike: 12490 TimeSpy: 4624
So, I just watched a Techquickie video about fanless cooling...


At the beginning (well, more like starting at the 1:00 mark) Linus says that RISC is the "secret sauce" which allows phones and tablets to run without fans.

Of course, there are a few other factors, like power saving policies or thermal throttling, which have a part in this, but I was left wondering if RISC really is that much of a factor...

Does anyone know if that's actually true or not?

PS for mods: I wasn't sure whether to post this thread in the Cooling or Programming forums, feel free to move it if necessary...
 
Joined
Aug 20, 2007
Messages
20,709 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
RISC does have efficiency advantages, but tends to have lower IPC. Thus, the only way to get decent performance out of it is ramping the clock rate. That sucks when you're talking high-performance chips at the near-literal silicon limit (5 GHz+).

So desktops use CISC, mostly. Not getting into µops just now.
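To put rough numbers on the clock-ramping problem: dynamic switching power scales roughly as C·V²·f, and higher clocks generally demand higher voltage too. A toy Python sketch with invented numbers (not measurements from any real chip):

```python
# Toy model of dynamic CPU power: P = C_eff * V^2 * f.
# The capacitance value and the voltage/frequency pairs below are
# made-up illustrative numbers, not real chip data.

def dynamic_power(c_eff, volts, freq_hz):
    """Classic switching-power approximation: P = C_eff * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

C_EFF = 1.0e-9  # effective switched capacitance (farads), illustrative

# Higher clocks typically require higher voltage to stay stable,
# so power grows much faster than linearly with frequency.
operating_points = [
    (1.0e9, 0.80),   # 1 GHz @ 0.80 V  (phone-class point)
    (3.0e9, 1.00),   # 3 GHz @ 1.00 V
    (5.0e9, 1.30),   # 5 GHz @ 1.30 V  (desktop-class point)
]

for freq, volts in operating_points:
    p = dynamic_power(C_EFF, volts, freq)
    print(f"{freq/1e9:.0f} GHz @ {volts:.2f} V -> {p:.2f} W")
```

With these made-up operating points, 5x the clock costs roughly 13x the power, which is why "just clock it higher" stops working near the silicon limit.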
 
Joined
May 8, 2016
Messages
1,735 (0.60/day)
System Name BOX
Processor Core i7 6950X @ 4,26GHz (1,28V)
Motherboard X99 SOC Champion (BIOS F23c + bifurcation mod)
Cooling Thermalright Venomous-X + 2x Delta 38mm PWM (Push-Pull)
Memory Patriot Viper Steel 4000MHz CL16 4x8GB (@3240MHz CL12.12.12.24 CR2T @ 1,48V)
Video Card(s) Titan V (~1650MHz @ 0.77V, HBM2 1GHz, Forced P2 state [OFF])
Storage WD SN850X 2TB + Samsung EVO 2TB (SATA) + Seagate Exos X20 20TB (4Kn mode)
Display(s) LG 27GP950-B
Case Fractal Design Meshify 2 XL
Audio Device(s) Motu M4 (audio interface) + ATH-A900Z + Behringer C-1
Power Supply Seasonic X-760 (760W)
Mouse Logitech RX-250
Keyboard HP KB-9970
Software Windows 10 Pro x64
x86 CPUs use CISC at the instruction/programming level, and RISC-style execution internally, since that's the most effective way of getting performance out of the code.
More on CISC/RISC here:

ARM (the literal base for almost anything mobile at this point) throws away the CISC part to get more power efficient (which is a good thing to have on mobile).
However, not having CISC reduces performance efficiency (as said earlier), so the overall speed picture is lower.
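A minimal sketch of the distinction being described, using simplified mnemonics (not exact x86 or ARM syntax): the same "add a register to a value in memory" operation as one CISC-style memory-operand instruction versus a RISC-style load/add/store sequence.

```python
# Illustrative only: the same high-level operation -- "add register to a
# value in memory" -- expressed CISC-style (one memory-operand instruction)
# and RISC-style (separate load / add / store). Mnemonics are simplified,
# not exact x86 or ARM syntax.

cisc_style = [
    "add [counter], eax",      # one instruction reads, adds, writes memory
]

risc_style = [
    "ldr r1, [counter]",       # load from memory into a register
    "add r1, r1, r0",          # register-to-register add
    "str r1, [counter]",       # store the result back
]

print(f"CISC-style: {len(cisc_style)} instruction(s)")
print(f"RISC-style: {len(risc_style)} instruction(s)")
# The CISC decoder must handle the complex case in hardware; the RISC
# core keeps the hardware simple and pushes the work into more (simpler)
# instructions -- which modern x86 mirrors internally by cracking the
# complex instruction into RISC-like micro-ops.
```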
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Gaming consoles are even RISC-based and require cooling, so tbf it doesn't matter whether it's RISC or CISC; it depends on voltage, current, and frequency, along with process size and transistor count.
 
Joined
Feb 21, 2014
Messages
1,383 (0.37/day)
Location
Alabama, USA
Processor 5900x
Motherboard MSI MEG UNIFY
Cooling Arctic Liquid Freezer 2 360mm
Memory 4x8GB 3600c16 Ballistix
Video Card(s) EVGA 3080 FTW3 Ultra
Storage 1TB SX8200 Pro, 2TB SanDisk Ultra 3D, 6TB WD Red Pro
Display(s) Acer XV272U
Case Fractal Design Meshify 2
Power Supply Corsair RM850x
Mouse Logitech G502 Hero
Keyboard Ducky One 2
I thought current gen consoles were all CISC now.
 
Joined
Mar 28, 2018
Messages
1,791 (0.82/day)
Location
Arizona
System Name Space Heater MKIV
Processor AMD Ryzen 7 5800X
Motherboard ASRock B550 Taichi
Cooling Noctua NH-U14S, 3x Noctua NF-A14s
Memory 2x32GB Teamgroup T-Force Vulcan Z DDR4-3600 C18 1.35V
Video Card(s) PowerColor RX 6800 XT Red Devil (2150MHz, 240W PL)
Storage 2TB WD SN850X, 4x1TB Crucial MX500 (striped array), LG WH16NS40 BD-RE
Display(s) Dell S3422DWG (34" 3440x1440 144Hz)
Case Phanteks Enthoo Pro M
Audio Device(s) Edifier R1700BT, Samson SR850
Power Supply Corsair RM850x, CyberPower CST135XLU
Mouse Logitech MX Master 3
Keyboard Glorious GMMK 2 96%
Software Windows 10 LTSC 2021, Linux Mint
Joined
Jan 8, 2017
Messages
8,862 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Does anyone know if that's actually true or not?

No, it isn't. It's actually a pretty stupid way to put it: efficiency is given by the ratio of useful work done to power consumed, that's all there is to it. It doesn't matter if the work was done using a RISC- or CISC-type CPU.

The only reason ARM cores are more power efficient is that they are designed that way, and that only applies in a limited set of scenarios; it has nothing to do with the ISA. Try running a highly vectorized and parallelized numerical simulation and you can bet the x86 designs are actually going to be more power efficient.
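The work-per-watt point can be made concrete with a trivial calculation; the two chips and their numbers below are hypothetical, purely to show that the ratio, not the ISA label, is what defines efficiency.

```python
# Efficiency as useful-work-per-watt, independent of ISA. The two chips
# below are hypothetical; all numbers are made up for illustration.

def gflops_per_watt(gflops, watts):
    """Power efficiency = useful work done / power consumed."""
    return gflops / watts

# A hypothetical low-power ARM-based chip and a hypothetical desktop
# x86 chip on the same vectorized workload. The ISA label alone tells
# you nothing about which ratio comes out higher:
chip_a = gflops_per_watt(gflops=50.0, watts=5.0)     # small mobile chip
chip_b = gflops_per_watt(gflops=1200.0, watts=95.0)  # big desktop chip

print(f"chip A: {chip_a:.1f} GFLOPS/W")
print(f"chip B: {chip_b:.1f} GFLOPS/W")
# Here the big "desktop-class" design wins on efficiency for this
# workload -- the point about vectorized numerical simulations.
```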
 
Joined
Mar 23, 2016
Messages
4,839 (1.65/day)
Processor Ryzen 9 5900X
Motherboard MSI B450 Tomahawk ATX
Cooling Cooler Master Hyper 212 Black Edition
Memory VENGEANCE LPX 2 x 16GB DDR4-3600 C18 OCed 3800
Video Card(s) XFX Speedster SWFT309 AMD Radeon RX 6700 XT CORE Gaming
Storage 970 EVO NVMe M.2 500 GB, 870 QVO 1 TB
Display(s) Samsung 28” 4K monitor
Case Phantek Eclipse P400S (PH-EC416PS)
Audio Device(s) EVGA NU Audio
Power Supply EVGA 850 BQ
Mouse SteelSeries Rival 310
Keyboard Logitech G G413 Silver
Software Windows 10 Professional 64-bit v22H2
The only reason ARM cores are more power efficient is because they are designed that way and that only applies in a limited set of scenarios it has nothing to do with the ISA.
The low-power fabrication processes (LPP) that Arm cores/SoCs are targeted at also play a role in their power efficiency.
 
Joined
Oct 21, 2006
Messages
621 (0.10/day)
Location
Oak Ridge, TN
System Name BorgX79
Processor i7-3930k 6/12cores@4.4GHz
Motherboard Sabertoothx79
Cooling Capitan 360
Memory Muhskin DDR3-1866
Video Card(s) Sapphire R480 8GB
Storage Chronos SSD
Display(s) 3x VW266H
Case Ching Mien 600
Audio Device(s) Realtek
Power Supply Cooler Master 1000W Silent Pro
Mouse Logitech G900
Keyboard Rosewill RK-1000
Software Win7x64
As was said above, it comes down to power used.

The more work performed, the more transistors used, the higher the frequency, the hotter it runs.

RISC uses fewer instructions, but will use more operations to do the same work as a CISC processor; there may be architectural efficiencies for certain tasks, but both processors doing the same long-term task should use close to the same power.
 
Joined
Feb 18, 2005
Messages
5,239 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
He's not wrong, he's just being clickbaity in a way that makes him look stupid, as is usual for Linus Trash Tips.

Generally in the past (and this is grossly oversimplifying), RISC traded performance for lower energy consumption and CISC has been the other way around. But the whole RISC/CISC line is very blurry (in fact most CPUs nowadays use variations of both instruction sets) and as such, not particularly relevant - as Vya says, what generally defines the performance/power characteristics of a chip are, surprise surprise, the performance and power characteristics required by its target market. Hence why x86 chips struggle mightily to fit into ARM power budgets, while ARM CPUs can't match the performance of x86.

RISC is only becoming a buzzword now with RISC-V aiming to become a competitor to ARM, time will tell whether that is successful.

At the end of the day there is no secret sauce or magic bullet, there are always compromises that have to be made. RISC compromises one way, CISC compromises another, modern CPUs use the best of both in order to avoid as much compromise as possible, and Linus is and always will be trash.
 
Joined
Aug 20, 2007
Messages
20,709 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
No, it isn't. It's actually a pretty stupid way to put this, efficiency is given by the ratio of useful work done and power consumption, that's all there is to it. It doesn't matter if the work was done using a RISC or CISC type CPU.

The only reason ARM cores are more power efficient is because they are designed that way and that only applies in a limited set of scenarios it has nothing to do with the ISA.

See my post. The ISA actually has a huge influence on this.

No it is not the only factor. But as a trend, yes, it's true. RISC vs CISC is not just some obscure language barrier, but a design choice.

I could illustrate with some lovely assembly examples, but I have enough of a headache today without that.

But the whole RISC/CISC line is very blurry

Also this. RISC ain't what it used to be. POWER is nearly at CISC-level instruction quantity. Back in the day, some RISCs lacked a hardware integer multiply. That obviously changed.
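As an aside, the way a multiply-less RISC got by was with compiler-emitted shift-and-add sequences; a quick Python sketch of that classic technique:

```python
def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    the way compilers for multiply-less RISC CPUs would expand a * b."""
    result = 0
    while b:
        if b & 1:          # low bit of b set: add the current shifted 'a'
            result += a
        a <<= 1            # a * 2 for the next bit position
        b >>= 1            # examine the next bit of b
    return result

print(shift_add_multiply(7, 12))   # 84
```

Every multiply becomes a loop of cheap instructions, which is exactly the RISC trade: simpler hardware, more instructions executed.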

there may be architectural efficiencies for certain tasks, but both processors doing the same long-term task should use close to the same power.

Except implementing a CISC ISA adds transistors over RISC... avoiding that is the whole point of RISC.
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,470 (1.45/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 3800X
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 4x8GB Samsung DDR4 ECC UDIMM
Video Card(s) Inno3D RTX 3070 Ti iChill
Storage ADATA Legend 2TB + ADATA SX8200 Pro 1TB
Display(s) Samsung U24E590D (4K/UHD)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 20.04 LTS
Does anyone know if that's actually true or not?
Short answer: no. I'll give a longer explanation tomorrow, after getting some sleep (it's 1:30am)
 
Joined
Jan 8, 2017
Messages
8,862 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The ISA actually has a huge influence on this.

Just saying that isn't enough; if you've got some way to prove it, by all means go ahead. Just to be sure we're on the same page, we're talking power efficiency here: 1 GFLOP/watt is 1 GFLOP/watt in AArch64 or x64 land, there is no difference. It's not up to the ISA to define how instructions actually get executed; the ISA just defines the high-level behavior, not the implementation. That's why you can have ARM cores in washing machines and in data centers: the implementation makes that possible from a power-envelope point of view, not the ISA.

I could illustrate with some lovely assembly examples

You could, but unless you have some way to prove that they affect power consumption in some fundamental way, I'm afraid it won't mean much. How could you possibly decide that an ARM ADD instruction is more power efficient than an x86 one, according to just the ISA?

There was a time when RISC processors were associated with less silicon and therefore less power, but did you catch that? You can only speak about power efficiency once it's implemented in actual silicon, so the implementation ultimately decides what is what.

Just because you look out there and notice that most ARM processors appear to be more power efficient, you can't conclude it's the ISA that makes the difference; that's a flawed thought process.
 
Joined
Aug 20, 2007
Messages
20,709 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
How could you possibly decide that an ARM ADD instruction is more power efficient than an x86 one, according to just the ISA.

Logic. More instructions take more transistors. Break the acronyms down. What do transistors require?

You do not analyze this on a command level. You have to look at the ISA as a whole. Some things called RISC aren't very "reduced" at all.
 
Joined
Oct 21, 2006
Messages
621 (0.10/day)
Location
Oak Ridge, TN
System Name BorgX79
Processor i7-3930k 6/12cores@4.4GHz
Motherboard Sabertoothx79
Cooling Capitan 360
Memory Muhskin DDR3-1866
Video Card(s) Sapphire R480 8GB
Storage Chronos SSD
Display(s) 3x VW266H
Case Ching Mien 600
Audio Device(s) Realtek
Power Supply Cooler Master 1000W Silent Pro
Mouse Logitech G900
Keyboard Rosewill RK-1000
Software Win7x64
Actually, the adder itself is probably very similar in both chips; those structures are very simple, and probably come out of a common library of parts in a silicon design tool.

Two XOR gates, two AND gates, and an OR gate gives you an adder with carry, and it scales linearly with increasing width.
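That gate recipe can be modeled directly; a pure-Python sketch of the full adder described above, chained into a ripple-carry adder to show the linear scaling with width:

```python
# A 1-bit full adder from the gates named above (two XORs, two ANDs, an
# OR), chained into a ripple-carry adder. Pure-Python gate model for
# illustration, not a hardware description.

def full_adder(a: int, b: int, carry_in: int):
    """One bit of addition: returns (sum_bit, carry_out)."""
    s1 = a ^ b                              # first XOR
    sum_bit = s1 ^ carry_in                 # second XOR
    carry_out = (a & b) | (s1 & carry_in)   # two ANDs feeding an OR
    return sum_bit, carry_out

def ripple_add(a: int, b: int, width: int = 8) -> int:
    """Add two integers bit by bit, with the carry rippling upward --
    one full adder per bit, so cost scales linearly with width."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_add(100, 55))   # 155
```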

Where the differences come in are the Register arrays, and various architecture differences.

Everything in Intel moves through an accumulator in the basic structure; other architectures use arrays of registers that can be used as desired.
Intel has gotten a LOT more complex in the last 30 years, but its basic architecture is still close to its roots.

So, the gates would be similar, but the way the data moves into and out of the chip is very different; pipelines, math coprocessor segments, barrel multipliers... there are a lot of ways to skin that apple.

The big comparison would be how many clock cycles does it take to add two numbers, and what's the latency on the calculation?
How many clocks does it take to spit out the answer, after you ask the question?

Each (clock*stages*(number of transistors switching per clock)) adds to the power consumed; if you're simply adding a huge list of numbers, the most bare processor is going to be the most efficient.

If you want to multiply numbers, it gets a lot more complicated; different numbers take different amounts of time to multiply/divide in a processor, depending on how it's implemented.
:)

I did a bunch of assembler coding on both 8086/8087 chips, as well as M68000 chips; both had advantages and disadvantages, but both ate about the same power. :)

Real-time processing on an Intel CISC processor is very difficult; RISC processors tend not to have the same level of variance in processing times.
That's why a lot of things use simple PIC processors for their cores; they're relatively invariant.

For most advanced processors, the overall power envelope is huge, compared to the actual silicon doing the work. :)
 
Joined
Aug 20, 2007
Messages
20,709 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64

Glad to see I'm not alone in that, lol.

By modern definitions basically all of those would be RISC though... strange.
 
Joined
Oct 21, 2006
Messages
621 (0.10/day)
Location
Oak Ridge, TN
System Name BorgX79
Processor i7-3930k 6/12cores@4.4GHz
Motherboard Sabertoothx79
Cooling Capitan 360
Memory Muhskin DDR3-1866
Video Card(s) Sapphire R480 8GB
Storage Chronos SSD
Display(s) 3x VW266H
Case Ching Mien 600
Audio Device(s) Realtek
Power Supply Cooler Master 1000W Silent Pro
Mouse Logitech G900
Keyboard Rosewill RK-1000
Software Win7x64
Nah; both of those had robust opcode sets to choose from. A lot more now, tho.

I remember having to code for a Weitek co-processor in the early 90's; it was just different enough from an 8087 to be a pain in the ass. :)

The happiest thing to me is the added registers in the Intel processors; you don't spend all your time swapping in and out of the accumulator.
The 68k was awesome because it had 7 registers you could use for whatever; I could put an operand in one and run a series of calculations on it.
Now with SSE instructions, a bunch of code I wrote in the '80s would be a dream to write, lol.

A Korean guy came into our group in the mid-'00s and rewrote an algorithm in SSE; what took 45 minutes to process then took 8 minutes to run (3D tomography).
We added 256GB of memory to the machine, added more cards/drives for the terabyte of data, all in RAID 0, and got it down to 4 minutes.

I'd bet if we could have put it in RAM, it would have taken seconds, lol.

I learned 68k coding on one of these, Assembler only:


Hard to believe that was 30+ years ago, lol.
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,470 (1.45/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 3800X
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 4x8GB Samsung DDR4 ECC UDIMM
Video Card(s) Inno3D RTX 3070 Ti iChill
Storage ADATA Legend 2TB + ADATA SX8200 Pro 1TB
Display(s) Samsung U24E590D (4K/UHD)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 20.04 LTS
Ok, got my first coffee of the day, I think I'm ready for a slightly better write-up. When you see RISC - think ARM, cause I know very little about MIPS or RISC-V, and 'cause ARM is the most popular RISC arch.
1) Nowadays there is no "generalized" RISC. Behavior, capabilities, and power efficiency vary from arch to arch, and even if we take ARM alone as an example, cores are modular and can be tweaked for a specific purpose. E.g. you can add or remove execution blocks, tweak cache size, etc. to meet the desired purpose or power target. So, not all RISC SoCs are created equal. A similar approach applies to CISC as well, just in a slightly more convoluted way.
2) The only drastic difference in the CPU core that can potentially skew power draw is the additional circuitry required for branch prediction and out-of-order execution. AFAIK, most "little" ARM cores (the ones used most of the time) don't have OoO. But don't quote me on that. :fear:
3) When it comes to low-power SoCs for mobile devices, things get complicated. If you look at die shots of, let's say, SD855 and some typical Core M-Y, you'll see lots of stuff besides cores.
CPU cores themselves don't really do much to overall power efficiency, cause they take up only around 20% of the total die size (and under similar circumstances consume similar amounts of power as well). Most of it comes from the SA/SoC/PCH/System Hub or whatever it's called nowadays, the peripherals, and the GPU. If you take a CISC SoC, make the GPU smaller, cripple the memory controller, and slash half of your peripherals, you'll get a nice and tidy sub-5W SoC. And in reverse: beef up the GPU and add a high-performance 5G modem and you'll get a high-performance 5W mobile SoC. That's why we have high-end tablets based on Core-M that do just fine in the absence of active cooling, and that's why modern smartphones have heatpipes in them.
4) Realistically, even voltages and clock speeds have little to do with efficiency, since they are nearly identical for both. It usually comes down to properly managing power states and the ability to shut down unused peripherals. Apparently Samsung and Qualcomm do a better job at it than Intel and most definitely AMD, as easy as that.
5) If you look at recent RISC entries in the enterprise/server segment, you'll quickly notice that the performance-per-watt metric isn't much different. With "fattened" cores, more cache, more peripheral controllers, a wider memory bus, and a beefier SoC, something like ThunderX2 is about as fast as an equivalent Xeon EP, while consuming a comparable amount of power. And even in consoles: if you've ever taken apart a PS3, you've seen how big the cooling system needs to be in order to cool a high-performance IBM Cell CPU.
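To illustrate the budget argument in point 3, here's a made-up breakdown for a hypothetical ~5 W mobile SoC. All numbers are invented for illustration, roughly matching the ~20% core share mentioned above:

```python
# Hypothetical power budget for a ~5 W mobile SoC. Every number below
# is invented purely to illustrate the "cores are a modest slice of the
# whole chip" argument -- these are not measurements.

soc_budget_watts = {
    "CPU cores":         1.0,   # roughly the ~20% share mentioned above
    "GPU":               1.5,
    "modem / radios":    1.0,
    "memory controller": 0.6,
    "display + ISP":     0.5,
    "misc peripherals":  0.4,
}

total = sum(soc_budget_watts.values())
for part, watts in soc_budget_watts.items():
    print(f"{part:18s} {watts:.1f} W  ({watts / total:.0%})")
print(f"{'total':18s} {total:.1f} W")
# Shrink the GPU and drop the modem and you land near a fanless
# tablet-class budget without touching the ISA at all.
```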

I learned 68k coding on one of these, Assembler only:
Nice. Some years ago I got into microcontrollers. Learned AVR assembly in a week, did some projects - both personal and work-related (never touched Arduino before that, never bothered w/ C afterwards). Later switched to ARM. Dug into ARM assembly and nearly exploded my brain with a measly ARMv7-M instruction set :banghead: There are literally dozens of ways of doing the same thing, and even more ways of doing stuff in a way that nobody asked for... Can't remember the link, but last year I stumbled upon some hilarious tech conference presentation about it, and the speaker expressed everything I feel about ARM ISA :D:D:D I almost cried at the end :laugh:
 
Joined
Jan 8, 2017
Messages
8,862 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Logic. More instructions take more transistors.

And it's not up to the ISA to define the logic. I'm still waiting on a real-world example where one ISA is more power efficient than another.

Do you know how much power the actual execution of an instruction takes in a CPU, in percentage terms? <1-2%; it's almost insignificant in the grand scheme of things. The scheduling, the pipeline depth, code analysis, the caches, the buses (especially the buses) make up the majority of the work the CPU does and therefore consume the most power. All CPUs, ARM or x86, have those features nowadays, so it would be nearly impossible to change their power consumption in any real way by changing the ISA, because most of the silicon present on the chip is ISA-agnostic.

And so none of those things have anything to do with the ISA; as a matter of fact, it probably wouldn't take much to make a Zen core, for example, execute ARM code natively, and you'd likely see zero change in power consumption.

As I said above, the only reason it was once true that RISC processors were more power efficient is that back then there were only two primary power guzzlers inside a CPU: the actual execution of instructions and the movement of data across the chip. If you simplified the execution, that showed up noticeably in the power used by the chip, but that's not the case anymore; CPUs do a lot more than simply execute instructions.
 
Joined
Oct 21, 2006
Messages
621 (0.10/day)
Location
Oak Ridge, TN
System Name BorgX79
Processor i7-3930k 6/12cores@4.4GHz
Motherboard Sabertoothx79
Cooling Capitan 360
Memory Muhskin DDR3-1866
Video Card(s) Sapphire R480 8GB
Storage Chronos SSD
Display(s) 3x VW266H
Case Ching Mien 600
Audio Device(s) Realtek
Power Supply Cooler Master 1000W Silent Pro
Mouse Logitech G900
Keyboard Rosewill RK-1000
Software Win7x64
My big delineation of cores as far as RISC vs CISC is a) number of opcodes, and b) instruction vs data size.

A RISC processor typically never does operations on a memory structure; a single SSE instruction on a modern Intel core can do one operation on a whole multi-megabyte data structure.

You throw it an operand and a pointer, and it's busy for as long as it takes; the same thing on a RISC processor takes a couple of pages of code.
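A toy way to picture that contrast (the dispatch counter stands in for front-end fetch/decode work; this is not a performance model):

```python
# Sketch of the contrast: one wide "SSE-like" operation over a block of
# data versus a scalar loop issuing one instruction per element. The
# dispatch counter is a stand-in for front-end work (fetch/decode), not
# a real performance model.

data = list(range(1024))

# "Complex instruction" style: one dispatched op covers the whole buffer.
simd_dispatches = 1
simd_result = [x + 5 for x in data]   # the hardware walks the data itself

# "Simple instruction" style: one dispatched add per element.
scalar_dispatches = 0
scalar_result = []
for x in data:
    scalar_result.append(x + 5)
    scalar_dispatches += 1

assert simd_result == scalar_result          # identical work done...
print(simd_dispatches, scalar_dispatches)    # ...very different dispatch counts
```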

Branch prediction, OoO execution, compute units; those are classically CISC structures to me.
Not really something I'd expect to find in a RISC processor.

This is my idea of a RISC processor:

There is less than one page of opcodes.

This is intel, from the beginning to now:
:)

There are variants for AMD vs intel, but there's a lot of overlap there; same thing, different names/calling styles.

I look at ARM's setup, and it's somewhere in the middle:

ARM includes some SSE-like instructions, and complex data processing instructions that the simpler processors don't have.

So I think the argument of what's most efficient really comes down to the specific application, and functional implementation.

If it has disk/flash access, GBs of RAM, video: all of those functions are external to the CPU architecture, and their power costs across architectures are fairly similar, depending on feature set.
An embedded 7" screen is going to be more power efficient than an architecture that allows swappable video cards and an external monitor, for example.

The interfaces to all these peripherals are going to be a constant, so the difference in what's feeding them data starts to become moot.

Overall System Feature sets are the biggest driver of power costs these days, not system CPU type.
 

silentbogo

Moderator
Staff member
Joined
Nov 20, 2013
Messages
5,470 (1.45/day)
Location
Kyiv, Ukraine
System Name WS#1337
Processor Ryzen 7 3800X
Motherboard ASUS X570-PLUS TUF Gaming
Cooling Xigmatek Scylla 240mm AIO
Memory 4x8GB Samsung DDR4 ECC UDIMM
Video Card(s) Inno3D RTX 3070 Ti iChill
Storage ADATA Legend 2TB + ADATA SX8200 Pro 1TB
Display(s) Samsung U24E590D (4K/UHD)
Case ghetto CM Cosmos RC-1000
Audio Device(s) ALC1220
Power Supply SeaSonic SSR-550FX (80+ GOLD)
Mouse Logitech G603
Keyboard Modecom Volcano Blade (Kailh choc LP)
VR HMD Google dreamview headset(aka fancy cardboard)
Software Windows 11, Ubuntu 20.04 LTS
My big delineation of cores as far as RISC vs CISC is a) number of opcodes, and b) instruction vs data size.
Swayed a bit off course. PIC microcontrollers are not comparable to desktop/smartphone CPUs: PIC is only slightly more complex than AVR, and only due to its weird instruction structure. That's probably why they switched to MIPS in the PIC32 MCUs.
ARMv7-M is also an MCU arch. You can do more complex stuff with it, but it's nowhere near as complex as ARMv8, even in its earlier iterations. Now you have pretty much everything x86_64 has to offer, which includes SIMD instructions, more floating-point/vector ops, more multimedia extensions, new crypto stuff, etc.

branch prediction, OoE, Compute Units; those are classically CISC structures to me.
I noted earlier that "little" cores don't have OoO, but what I didn't mention is that "big" cores in a modern ARM big.LITTLE setup do have out-of-order execution. Even in older 32-bit ARM SoCs, like the Cortex-A9/A15, there is some sort of OoO. Hence moar complex, moar powa-a-a, moar hot.
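The big.LITTLE idea above can be sketched as a scheduling choice; the core names and numbers below are invented for illustration, not real ARM figures:

```python
# Toy big.LITTLE sketch: route work to an efficient in-order "little"
# core unless it needs the fast out-of-order "big" core. Relative
# speeds and wattages are invented for illustration.

CORES = {
    # name: (relative speed, watts when active)
    "little": (1.0, 0.3),   # simple, in-order, cheap to run
    "big":    (3.0, 2.0),   # out-of-order, fast, power-hungry
}

def schedule(work_units: float, deadline_s: float) -> str:
    """Pick the cheapest core that still meets the deadline."""
    for name in ("little", "big"):       # try the efficient core first
        speed, _watts = CORES[name]
        if work_units / speed <= deadline_s:
            return name
    return "big"                         # deadline missed either way: go fast

print(schedule(work_units=1.0, deadline_s=2.0))   # background task -> little
print(schedule(work_units=6.0, deadline_s=2.5))   # bursty UI work -> big
```

The OoO "big" core only lights up (and only burns its extra power) when the workload actually needs it, which is most of the phone efficiency story.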
 
D

Deleted member 185158

Guest
TEC cooling a cell phone with passive heat sink?

Pretty sure it's doable. Do I have an old phone somewhere???
I need to experiment cooling TEC with old Cell phones now. lol.
 
Joined
Aug 20, 2007
Messages
20,709 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
And it's not up to the ISA to define the logic.

I meant mental logic. Implementing an instruction takes transistors. CISC has more instructions.

There you go, no example necessary.

I noted earlier that "little" cores don't have OoO, but what I did not mention is that "big" cores in a modern ARM big.little setup do have out-of-order execution. Even in older 32-bit ARM SoCs, like Cortex-A9/A15 there is some sort of OoO. Hence moar complex, moar powa-a-a, moar hot.

Yep, and that's the most valid argument against this: not that it's "wrong", but that the lines have blurred so much it's impossible to tell anymore anyhow.
 
Joined
Mar 23, 2016
Messages
4,839 (1.65/day)
Processor Ryzen 9 5900X
Motherboard MSI B450 Tomahawk ATX
Cooling Cooler Master Hyper 212 Black Edition
Memory VENGEANCE LPX 2 x 16GB DDR4-3600 C18 OCed 3800
Video Card(s) XFX Speedster SWFT309 AMD Radeon RX 6700 XT CORE Gaming
Storage 970 EVO NVMe M.2 500 GB, 870 QVO 1 TB
Display(s) Samsung 28” 4K monitor
Case Phantek Eclipse P400S (PH-EC416PS)
Audio Device(s) EVGA NU Audio
Power Supply EVGA 850 BQ
Mouse SteelSeries Rival 310
Keyboard Logitech G G413 Silver
Software Windows 10 Professional 64-bit v22H2
CISC has more instructions.
Only on the front end for decoding, though; the back end is RISC (K.)

I'd say RISC won the battle between CISC and RISC when the back end of an x86 processor decodes to an internal RISC ISA.
 
Joined
Mar 10, 2010
Messages
11,878 (2.31/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
I would imagine that Cerebras' wafer-scale engine is RISC; regardless, that thing sucks power and spits flames.
 