
Editorial: x86 Lacks Innovation, Arm Is Catching Up. Enough to Replace the Giant?

mtcn77
"I have a problem, the world must change" is "contextual and within a frame of reference"? You must be a lawyer, methinks.
No, lawyers are authority trolls. I truthbomb for sport.
 

ARF

So, they refuse to admit that the vast majority of users, especially those in poorer countries, have very slow systems and a terrible experience.

Keep in mind that only a small, niche part of the market buys a Ryzen 5 or higher.
 
mtcn77
I love moral high-ground contests, even if they get me low-quality-post tickets. Gotta love what you do best.
 

ARF

Citation?

Why? Everyone knows it :D

[Attached image: 1593120341248.png]


"CPUs" in the chart means cores or threads.
 
That's not true. Most people are having much deeper problems with Pentium-class systems and HDDs.
Pentium-class systems are not supported by Windows 10; they work to a degree, but only just.
 

ARF

Pentium-class systems are not supported by Windows 10; they work to a degree, but only just.

This is news to me.

The Pentium G4560 will run fine on both Windows 7* and Windows 10. Windows 10 will be more future-proof, as Microsoft will end support for Windows 7 in 2020.
 
This is news to me.


I thought you meant a P4. Fair enough; I should have been specific, as should you.
50%+ have 4 or more cores.
 
mtcn77
Stay on topic!

Thank You and Have a Very Sunshiny Day.
I would kindly disagree. I haven't seen this level of frivolity anywhere. We are protecting our Intel Atom interests. What's not to like? Sit and watch, eating popcorn! :lovetpu:
 
The commonly cited reason is that Desktop chips provide a "mass production" target, subsidizing the lower-volume server market.
In other words, "because it was more expensive".
Except that is not how it happened: they (the RISC machines) evaporated after x86 servers (on the same process!) started beating the crap out of them.
 
In other words, "because it was more expensive".
Except that is not how it happened: they (the RISC machines) evaporated after x86 servers (on the same process!) started beating the crap out of them.

I'm not sure if you understand my argument.

x86 Desktop chips and x86 Server chips have the same core. The x86 Server chips mainly differ in the "uncore", the way the chip is tied together (allowing for multi-socket configurations). Because x86 Desktop chips are a high-volume, low-cost part, Intel was able to funnel more money into R&D, making x86 Desktops more and more competitive. x86 Servers benefited, since they use a similar core design.

That is to say: x86 Servers achieved higher R&D numbers, and ultimately better performance, thanks to the x86 Desktop market.

--------------

A similar argument could be made for these Apple-ARM chips. Apple has achieved higher R&D numbers than Intel (!!) because of its iPad and iPhone market. There's a good chance that Apple's A12 core is superior to Intel's now. We won't know for sure until they scale it up, but it wouldn't surprise me if it happened.

Another note: because TSMC handles process tech while Apple handles architecture, the two halves of chip design have separate R&D budgets. Intel is competing not only against Apple, but against the combined R&D efforts of TSMC + Apple. TSMC is funded not only through Apple's mask costs, but also through Nvidia's, AMD's, and Qualcomm's orders. As such, TSMC probably has a higher process-level R&D budget than Intel.

It's a simple issue of volume and money. The more money you throw into your R&D teams, the faster they work (assuming competent management).
 
The more money you throw into your R&D teams, the faster they work.

That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes; work isn't linearly scalable like bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.
 

bug

That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes; work isn't linearly scalable like bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.
It depends where you stand. If you're underfunded, yes, additional cash will speed things up. Past a certain point, it will do what you said. It's the famous "nine mothers cannot deliver a baby in one month" problem, of sorts.
 
That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes; work isn't linearly scalable like bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.

It's certainly not a "linear" improvement. A $2 billion investment may be only 5% better than a $1 billion investment.

But once the product comes out, why would anyone pay the same money for a product that's 5% slower? Die size is the main variable in the cost of a chip: the bigger the die, the more defects accumulate (roughly with area), and each die takes up more wafer space, leading to far fewer chips sold. The customer would rather have the product that's incrementally better at the same price.
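To put rough numbers on that, here's a minimal sketch using the textbook Poisson yield model; the defect density, die sizes, and wafer math are illustrative assumptions, not any foundry's real figures:

```c
// Hedged sketch: Poisson yield model, Y = exp(-D * A).
// Defect density and die sizes are assumed for illustration only.
#include <math.h>
#include <stdio.h>

int main(void) {
    double defect_density = 0.1;   // defects per cm^2 (assumption)
    double wafer_area = 70686.0;   // ~300 mm wafer, pi * 150^2, in mm^2
    double sizes_mm2[] = {100.0, 200.0, 400.0};

    for (int i = 0; i < 3; i++) {
        double area_cm2 = sizes_mm2[i] / 100.0;
        double yield = exp(-defect_density * area_cm2); // Poisson yield
        double candidates = wafer_area / sizes_mm2[i];  // ignores edge losses
        printf("%6.0f mm^2 die: yield %.1f%%, ~%.0f good dies/wafer\n",
               sizes_mm2[i], 100.0 * yield, candidates * yield);
    }
    return 0;
}
```

Even with these toy numbers, doubling the die area more than halves the good dies per wafer, which is the cost pressure described above.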

Take Nvidia vs AMD: they're really close, but Nvidia has a minor edge in performance per watt, and that's what makes all the difference in market share.
 
I'm not sure if you understand my argument.
I did.
It would apply if RISC CPUs were faster but more expensive. They used to be faster; at some point they became slower.
I am not buying the "but that's because of R&D money" argument.

As for subsidizing the server market by selling desktop chips: heck, just have a look at AMD. The market is so huge that you can afford decent R&D while holding only a tiny fraction of it.

The whole "RISC beats CISC" was largely based on CISC being much harder to scale up by implementing multiple ops ahead, at once, since instruction set was so rich. But hey, as transistor counts went up, suddenly it was doable, on the other hand, RISCs could not go much further ahead in the execution queue, and, flop, no RISCs.

And, curiously, no EPIC took off either.
 
Sort of. The Itanium CPU line was EPIC-based, but that might have been the only one.

Intel's "EPIC" is pretty much VLIW. There are numerous TI DSPs that use VLIW that are in still major use today. AMD's 6xxx line of GPUs was also VLIW-based. So VLIW has found a niche in high-performance, low-power applications.

VLIW is an interesting niche between SIMD and traditional CPUs. Its got more FLOPs than traditional, but more flexibility than SIMD (but less FLOPs than SIMD). For the most part, today's applications seem to be SIMD-based for FLOPs, or Traditional for flexibility / branching. Its hard to see where VLIW will fit in. But its possible a new niche is carved out in between the two methodologies.
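To make the contrast concrete, here's a hedged sketch in C (the loop and values are illustrative assumptions): the same elementwise multiply written as plain scalar code, whose independent iterations a VLIW compiler would statically pack into wide instruction words, and as x86 SSE intrinsics, where the data parallelism lives in the instruction itself.

```c
// Sketch contrasting scalar code (which a VLIW compiler would schedule into
// static bundles) with explicit SIMD. Illustrative, not production code.
#include <stdio.h>
#include <xmmintrin.h>  // SSE intrinsics (x86)

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out_scalar[N], out_simd[N];

    // Scalar: each iteration is independent; a VLIW compiler would pack
    // several of these multiplies into one wide instruction word at build time.
    for (int i = 0; i < N; i++)
        out_scalar[i] = a[i] * b[i];

    // SIMD: one instruction operates on 4 floats; the parallelism is in
    // the instruction set, not in compiler-scheduled bundles.
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out_simd[i], _mm_mul_ps(va, vb));
    }

    for (int i = 0; i < N; i++)
        printf("%g %g\n", out_scalar[i], out_simd[i]);
    return 0;
}
```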
 
mtcn77
Intel's "EPIC" is pretty much VLIW. There are numerous TI DSPs that use VLIW that are in still major use today. AMD's 6xxx line of GPUs was also VLIW-based. So VLIW has found a niche in high-performance, low-power applications.

VLIW is an interesting niche between SIMD and traditional CPUs. Its got more FLOPs than traditional, but more flexibility than SIMD (but less FLOPs than SIMD). For the most part, today's applications seem to be SIMD-based for FLOPs, or Traditional for flexibility / branching. Its hard to see where VLIW will fit in. But its possible a new niche is carved out in between the two methodologies.
I understand the enthusiasm. VLIW is an interesting idea. The main reason GPU architectures moved away from that developmental path is that VLIW runs on vector code, while SIMD can run on scalar code. That is the one key difference between them. Old vector-based execution units could run 8 or 10 wavefronts simultaneously, depending on the vector register length. The problem is that to store vectors you need available registers, which decreases the available wavefront count. This binds the pipelines both from starting and from clearing.
What SIMD does better is register allocation. You can run a constantly changing execution mask to schedule work; vectors do it differently, with no-op masks applied across the full thread group. I think of it as running a separate frontend inside the compiler. It is a clever idea not to leave any work to the GPU compiler. If you can run a computer simulation, this is where the hardware needs some resource management. Perhaps future GPUs can automatically unroll such untidy loops, shuffling more active threads into a thread group to find the best execution mask for a given situation. Scalarization frees you from that: you stop caring about all available threads and look at maximally retired threads.
There is definitely an artistic element to it.
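To make the execution-mask idea concrete, here is a toy sketch (the 8-lane width, values, and two-sided branch are my assumptions, not any particular GPU's design): lanes whose mask bit is off are effectively no-ops for that path, which is why divergent wavefronts waste throughput.

```c
// Toy simulation of SIMT execution masking over one 8-lane wavefront.
// Lane count and data values are assumptions for illustration only.
#include <stdio.h>
#include <stdint.h>

#define LANES 8

int main(void) {
    int data[LANES] = {3, -1, 4, -1, 5, -9, 2, -6};
    uint8_t mask = 0;

    // Build the execution mask for the "if (data[i] > 0)" branch.
    for (int lane = 0; lane < LANES; lane++)
        if (data[lane] > 0) mask |= (uint8_t)(1u << lane);

    // Taken path: only active lanes do work; inactive lanes sit idle (no-ops).
    for (int lane = 0; lane < LANES; lane++)
        if (mask & (1u << lane)) data[lane] *= 2;

    // Else path: invert the mask; the hardware still steps through every lane.
    for (int lane = 0; lane < LANES; lane++)
        if (~mask & (1u << lane)) data[lane] = 0;

    for (int lane = 0; lane < LANES; lane++)
        printf("lane %d: %d\n", lane, data[lane]);
    return 0;
}
```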
 
There was a very big problem with VLIW, which is why it isn't used anymore in GPUs: you can't change the hardware, otherwise you need to recompile or reinterpret the instructions in some way at the silicon level, which more or less negates the advantage of not having to add complex scheduling logic on chip. VLIW didn't really make that much sense in a GPU because the ILP ended up being implied by the programming model of wavefronts, which are simple and cheap. On a CPU it made much more sense, because the code is not expected to follow any particular pattern, so being able to explicitly control the ILP is useful.
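A minimal sketch of why that is, with an invented 3-slot bundle format (the slot count and opcodes are illustrative assumptions, not any real ISA): the compile-time schedule bakes the machine's issue width into every bundle, so a wider successor chip gains nothing from old binaries.

```c
// Hedged sketch of why VLIW binaries are tied to the hardware: the compiler
// emits fixed-width bundles whose slot layout *is* the machine's issue model.
// Widths and opcodes here are invented for illustration.
#include <stdio.h>

enum op { NOP, ADD, MUL, LOAD };

// A 3-slot bundle compiled for a 3-issue VLIW machine. A successor chip with
// 4 or 5 slots cannot use its extra units on this old binary: every bundle
// was frozen at 3 ops when the compiler scheduled it.
struct bundle3 {
    enum op slot[3];
};

int main(void) {
    struct bundle3 program[] = {
        { { LOAD, LOAD, NOP } },  // compiler found only 2 independent ops
        { { MUL,  ADD,  LOAD } }, // a full bundle: 3 ops issue together
    };
    int n = (int)(sizeof program / sizeof program[0]);
    printf("%d bundles, 3 issue slots each, fixed at compile time\n", n);
    return 0;
}
```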
 

bug

First ARM-Based MacBook Could Start From Just $799, Hints Tipster; MacBook Pro May Carry a Higher Price
Yeah, because Apple has such a proven track record of lowering their prices.

I'm not saying it's impossible; they may lower the price if the laptop requires a couple of these to start: https://www.engadget.com/apple-braided-thunderbolt-3-cable-129-092133733.html
 