
Editorial: x86 Lacks Innovation, Arm is Catching Up. Enough to Replace the Giant?

So, they refuse to admit that the vast majority of users, especially those in poorer countries, have very slow systems and a terrible experience.

Keep in mind that only a very small, niche part of the market buys a Ryzen 5 or higher.
 
I love moral high ground contests, even if it gets me low quality post tickets. Gotta love what you do best.
 
Citation?

Why? Everyone knows it :D

[attached image: 1593120341248.png]


Is that counting CPU cores or threads?
 
That's not true. Most people have much deeper problems: Pentium-class systems and HDDs.
Pentium-class systems are not supported by Windows 10; they work to a degree, but only just.
 
Pentium-class systems are not supported by Windows 10; they work to a degree, but only just.

That's news to me.

The Pentium G4560 will run fine on both Windows 7* and Windows 10. Windows 10 will be more future-proof, as Microsoft will end support for Windows 7 in 2020.
 
That's news to me.


I thought you meant a P4. Fair enough, I should have been more specific, as should you.
50%+ have 4 or more cores.
 
Low quality post by mtcn77
Stay on topic!

Thank You and Have a Very Sunshiny Day.
I would kindly disagree. I haven't seen this level of frivolity anywhere else. We are protecting our Intel Atom interests. What's not to like? Sit and watch, eating popcorn! :lovetpu:
 
The commonly cited reason is that Desktop chips provide a "mass production" target, subsidizing the lower-volume server market.
In other words, "because it was more expensive".
Except that is not how it happened: they evaporated after x86 servers (on the same process!) started beating the crap out of them.
 
In other words, "because it was more expensive".
Except that is not how it happened: they evaporated after x86 servers (on the same process!) started beating the crap out of them.

I'm not sure if you understand my argument.

x86 Desktop chips and x86 Server chips have the same core. The x86 Server chips mainly differ in the "uncore", the way the chip is tied together (allowing for multi-socket configurations). Because x86 Desktop chips are a high-volume, low-cost part, Intel was able to funnel more effort into R&D to make x86 Desktops more and more competitive. x86 Servers benefited, using a similar core design.

That is to say: x86 Servers achieved higher R&D numbers, and ultimately better performance, thanks to the x86 Desktop market.

--------------

A similar argument could be made for these Apple-ARM chips. Apple has achieved higher R&D numbers compared to Intel (!!), because of its iPad and iPhone market. There's a good chance that Apple's A12 core is superior to Intel's now. We don't know for sure until they scale it up, but it wouldn't be surprising to me if it happened.

Another note: because TSMC handles process tech while Apple handles architecture, the two halves of chip design have separate R&D budgets. Intel is competing not only against Apple, but against the combined R&D efforts of TSMC + Apple. TSMC is funded not only through Apple's mask costs, but also through NVidia's, AMD's, and Qualcomm's orders. As such, TSMC probably has a higher process-level R&D budget than Intel.

It's a simple issue of volume and money. The more money you throw into your R&D teams, the faster they work (assuming competent management).
 
The more money you throw into your R&D teams, the faster they work.

That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes: work isn't linearly scalable, as bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.
 
That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes: work isn't linearly scalable, as bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.
It depends on where you stand. If you're underfunded, yes, additional cash will speed things up. Past a certain point, it will do what you said. It's the famous "nine mothers cannot deliver a baby in one month" problem, of sorts.
 
That's just a primitive theory; in practice it's the complete opposite. The more cash you throw at a problem, the less efficient the whole process becomes: work isn't linearly scalable, as bad managers assume. Twice the R&D budget means single-digit improvements rather than twice the results. One way to verify this is to look at AMD vs Intel and Nvidia: AMD has but a fraction of those two's R&D budgets, yet its products easily rival theirs.

It's certainly not a "linear" improvement. A $2 billion investment may only be 5% better than a $1 billion investment.

But once the product comes out, why would anyone pay the same money for a product that's 5% slower? Die size is the main variable in the cost of the chip: the bigger the die, the more defects build up, and it also takes up more space on the wafer, leading to far fewer chips per wafer (a rough yield sketch follows below this post). The customer would rather have the product that's incrementally better at the same price.

Take NVidia vs AMD: they're really close, but NVidia has a minor edge in performance per watt, and that's what makes all the difference in market share.
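To put the die-size point in rough numbers, here is a minimal Python sketch of the usual back-of-the-envelope model: a dies-per-wafer approximation plus a Poisson defect-yield term. The wafer diameter, wafer cost, and defect density figures are made-up placeholders for illustration, not numbers from this thread.

import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation: usable dies on a round wafer, minus edge loss."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def yield_rate(die_area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    """Poisson yield model: probability a die has zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

def cost_per_good_die(die_area_mm2: float, wafer_cost: float = 10_000.0) -> float:
    """Spread an assumed wafer cost over the dies that actually work."""
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return wafer_cost / good_dies

for area in (100, 200, 400, 800):  # die area in mm^2
    print(f"{area:4d} mm^2 -> {dies_per_wafer(area):4d} dies, "
          f"yield {yield_rate(area):5.1%}, "
          f"~${cost_per_good_die(area):,.0f} per good die")

Even with toy numbers, the shape of the curve is the point: doubling die area cuts the candidate dies per wafer by more than half and lowers yield on top of that, so the cost per good die grows faster than linearly.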
 
I'm not sure if you understand my argument.
I did.
It would apply if RISC CPUs were faster, but more expensive. They used to be faster; at some point they became slower.
I'm not buying the "but that's because of R&D money" argument.

As for subsidizing the server market by selling desktop chips: heck, just have a look at AMD. The market is so huge that you can have decent R&D while holding only a tiny fraction of it.

The whole "RISC beats CISC" was largely based on CISC being much harder to scale up by implementing multiple ops ahead, at once, since instruction set was so rich. But hey, as transistor counts went up, suddenly it was doable, on the other hand, RISCs could not go much further ahead in the execution queue, and, flop, no RISCs.

And, curiously, no EPIC took off either.
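One commonly cited piece of the "harder to scale up" problem is instruction decode: with a rich, variable-length encoding you don't know where the next instruction starts until you've at least partially decoded the previous one, while fixed-length encodings can be sliced into parallel decode slots up front. A toy sketch of that difference follows, using an invented "low two bits give the length" encoding rather than any real ISA.

# Toy illustration: finding instruction boundaries in a variable-length
# (CISC-like) byte stream is inherently serial, while a fixed-length
# (RISC-like) stream can be split into decode slots directly.

def boundaries_variable(code: bytes, length_of) -> list[int]:
    """Walk the stream one instruction at a time; each start depends on the last."""
    starts, pc = [], 0
    while pc < len(code):
        starts.append(pc)
        pc += length_of(code[pc])      # must inspect the opcode to learn the length
    return starts

def boundaries_fixed(code: bytes, width: int = 4) -> list[int]:
    """Every instruction is `width` bytes, so all starts are known up front."""
    return list(range(0, len(code), width))

# Invented toy encoding: the opcode byte's low two bits give the length (1..4 bytes).
toy_length = lambda opcode: (opcode & 0b11) + 1
stream = bytes([0b00, 0b11, 0, 0, 0, 0b01, 0, 0b10, 0, 0])

print(boundaries_variable(stream, toy_length))  # [0, 1, 5, 7] -- serial walk
print(boundaries_fixed(bytes(16)))              # [0, 4, 8, 12] -- trivially parallel

Real wide decoders work around this with tricks like predecode/boundary-marking bits in the instruction cache, which is exactly the kind of transistor spend that became affordable as counts went up.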
 
Sort of. The Itanium CPU line was EPIC-based, but that might have been the only one.

Intel's "EPIC" is pretty much VLIW. There are numerous TI DSPs that use VLIW that are in still major use today. AMD's 6xxx line of GPUs was also VLIW-based. So VLIW has found a niche in high-performance, low-power applications.

VLIW is an interesting niche between SIMD and traditional CPUs. It's got more FLOPs than a traditional CPU and more flexibility than SIMD (but fewer FLOPs than SIMD). For the most part, today's applications seem to be SIMD-based for FLOPs, or traditional for flexibility and branching. It's hard to see where VLIW will fit in, but it's possible a new niche gets carved out between the two methodologies.
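As a rough picture of what the VLIW model in the post above means: the compiler, not the hardware, packs operations it has already proven independent into one wide instruction word, and the machine issues the whole bundle at once with no dynamic scheduling. A minimal toy simulator follows; the three-slot bundle format, register file, and program are invented for illustration.

# Toy VLIW model: each "instruction" is a bundle of slots the compiler has
# already proven independent, so the machine just issues them together.

regs = {"r0": 1, "r1": 2, "r2": 3, "r3": 0, "r4": 0, "r5": 0}

def run(bundles):
    for bundle in bundles:                      # one bundle == one issue, no reordering
        results = {dst: op(regs[a], regs[b])    # read all sources first...
                   for dst, op, a, b in bundle if op is not None}
        regs.update(results)                    # ...then write all destinations

add = lambda x, y: x + y
mul = lambda x, y: x * y

program = [
    # slot 0, slot 1, slot 2 -- a None op marks an empty (wasted) slot
    [("r3", add, "r0", "r1"), ("r4", mul, "r1", "r2"), ("r5", None, "r0", "r0")],
    [("r5", add, "r3", "r4"), ("r0", None, "r0", "r0"), ("r1", None, "r0", "r0")],
]

run(program)
print(regs["r3"], regs["r4"], regs["r5"])   # 3 6 9

The empty slots in the example are the classic VLIW cost: when the compiler can't find enough independent work, issue slots are simply wasted, and the bundle width ends up baked into the binary.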
 
Intel's "EPIC" is pretty much VLIW. There are numerous TI DSPs that use VLIW that are in still major use today. AMD's 6xxx line of GPUs was also VLIW-based. So VLIW has found a niche in high-performance, low-power applications.

VLIW is an interesting niche between SIMD and traditional CPUs. Its got more FLOPs than traditional, but more flexibility than SIMD (but less FLOPs than SIMD). For the most part, today's applications seem to be SIMD-based for FLOPs, or Traditional for flexibility / branching. Its hard to see where VLIW will fit in. But its possible a new niche is carved out in between the two methodologies.
I understand the enthusiasm. VLIW is an interesting idea. The main reason GPU architectures have moved away from that developmental path is that VLIW runs on vector code, while SIMD can run on scalar code. That is the one key difference between them. Old vector-based execution units could run 8 or 10 wavefronts simultaneously, depending on the vector register length. The problem is that storing vectors requires available registers, which decreases the available wavefront count. This binds the pipelines both from starting and from clearing.
What SIMD does better is register allocation. You can run a constantly changing execution mask to schedule work; vectors do it differently, with no-op masks, but across the full thread group. I think it is like running a separate frontend inside the compiler. It is a clever idea to not leave any work to the GPU compiler. If you can run a computer simulation, this is where the hardware needs some resource management. Perhaps future GPUs could automatically unroll such untidy loops, always shuffling more active threads into a thread group to find the best execution mask for a given situation. Scalarization frees you from that: you stop caring about all available threads and look at maximally retired threads.
There is definitely an artistic element to it.
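To illustrate the execution-mask idea from the post above: a SIMD machine evaluates both sides of a branch across all lanes and uses a per-lane mask to pick which result each lane keeps, so divergent lanes effectively no-op. A small NumPy sketch with invented data and an invented kernel:

import numpy as np

# One "wavefront" of 8 lanes, each with its own data element.
x = np.array([3, -1, 4, -1, 5, -9, 2, -6], dtype=np.int32)

# Scalar view of the same kernel, for comparison:
#   if x < 0: y = -x * 2
#   else:     y = x + 10

mask = x < 0                      # per-lane execution mask from the branch condition

then_side = -x * 2                # every lane executes the "then" side...
else_side = x + 10                # ...and the "else" side, unconditionally

y = np.where(mask, then_side, else_side)   # the mask picks each lane's kept result

print(mask.astype(int))           # [0 1 0 1 0 1 0 1]
print(y)                          # [13  2 14  2 15 18 12 12]

Lanes whose mask bit is off still occupy their issue slot, which is why divergence costs throughput and why the thread-compaction and scalarization ideas described above matter.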
 
There was a very big problem with VLIW, which is why it isn't used in GPUs anymore: you can't change the hardware without having to recompile, or reinterpret the instructions in some way at the silicon level, which more or less negates the advantage of not having complex scheduling logic on the chip. VLIW didn't really make that much sense in a GPU, because the ILP ended up being implied by the programming model of using wavefronts, which are simple and cheap. On a CPU it made much more sense, because the code is not expected to follow any particular pattern, so having the ability to explicitly control the ILP is useful.
 
First ARM-Based MacBook Could Start From Just $799, Hints Tipster; MacBook Pro May Carry a Higher Price
Yeah, because Apple has such a proven track record of lowering their prices.

I'm not saying it's impossible; they may lower the price if the laptop requires a couple of these to start: https://www.engadget.com/apple-braided-thunderbolt-3-cable-129-092133733.html
 