Sunday, November 27th 2011

Wintel Alliance Slowly Crumbling, ARM To Eventually Rule The Desktop?

The writing has been on the wall for a while now: the close relationship between Microsoft and Intel (and, by extension, AMD) is crumbling into dust. In fact, they have never really been the best of friends. It has been clear since Microsoft announced that Windows 8 would run natively on ARM processors that things would never be quite the same again. Apart from some niche server variants of Windows, which could run on Itanium and other processors, all previous desktop versions, including Windows 7, have run on x86 (and x64 for the last six years or so) processors.

However, Microsoft is keen to increase its presence in the lucrative smartphone and tablet market, where it hasn't had much success so far, getting comprehensively trounced by Android and Apple. Microsoft would be happy to use an x86/x64 processor for this application, but here the limiting factor is the energy source - the battery - which forces the entire device to consume very little power if it's to run for more than five minutes. Processors based on the ARM architecture have met this need admirably for years, with excellent performance, while the Intel x86 variants have not (see video below). This has led Microsoft to forge a relationship with a new processor manufacturer, Qualcomm, which makes its own variant of the ARM processor, called Snapdragon. In fact, the relationship is so close now that Windows Phone 7 only runs on Qualcomm ARM chips.

Having Windows run on two processor architectures concurrently inherently puts them into competition, creating an uneasy, unstable coexistence (witness the death of the other architectures in the Windows server space), so it seems reasonable to expect that Qualcomm will end up competing head to head with Intel at some point. This should make for a very interesting situation: Microsoft's financial muscle could be used to invest in Qualcomm to help it compete with Intel in the performance desktop market, an effort that would otherwise be expensive and difficult in R&D terms. Perhaps an alliance with AMD or IBM, given their design expertise, could also be on the cards? Of course, the major show stopper for a full-on ARM onslaught into the desktop space is that "legacy" x86/x64 apps - which the whole world runs right now - either won't run at all, or will run poorly under some sort of emulator. The fact that all current ARM chips are physically optimised for low power rather than all-out data processing performance really doesn't help the situation, either.

For the moment, let's assume that this problem is successfully overcome, perhaps by porting various key apps over to ARM. Due to the significant efficiency and performance advantages of the ARM architecture (see video below), x86 begins to be phased out, eventually disappearing. Now, where does this leave Intel? To go bust, obviously, as it can't sell any more x86 chips? No, of course not. Intel has held an ARM licence for years, so it seems logical that it would put its many superb data processing enhancement technologies into ARM chips, creating monsters capable of the blistering speeds we see today from x86 chips and then some. Intel really, really won't like this situation, though. Why? Because at the moment, its only proper competitor in the x86/x64 space is AMD, which, conveniently for Intel, is some considerable way behind with its flagship Bulldozer architecture. One competitor. Easy to take care of. Intel could probably kill AMD's x86 business if it wanted to, just by accelerating the performance of its own chips by 50%. But then that pesky Competition Commission would start investigating…

However, ARM CPUs are made by literally hundreds of different companies, since ARM Holdings is a fabless company that makes its money by licensing the rights to make the processor. It doesn't take much of a stretch to see that some big hitter like IBM, which has similar expertise in building high performance processors (think PowerPC and POWER), could start competing with a high performance desktop variant of the ARM architecture. AMD would likely do the same if it wants to remain a CPU manufacturer (it would still have the profitable graphics card business to fall back on, so it wouldn't die). Suddenly, Intel has stiff competition from all sides, and the extremely profitable niche it has sat in for the last 30+ years thanks to licence exclusivity evaporates, perhaps leaving it a me-too commodity player with razor thin margins. Very painful, very humiliating, totally unthinkable. Maybe this is the real reason why Intel only ever made a half-hearted attempt with its XScale ARM processors and the product line never really took off? It would take exactly this kind of pressure to force Intel to make anything more of them.

So, you can see how it's completely in Microsoft's interest to move to ARM and absolutely not in Intel's. They now want diametrically opposed things out of their long term relationship, so no wonder it's cooling off. Any bets on when the divorce papers will hit?

And now for that video. The short video below, originally found in an interesting geek.com article, compares a 1.6 GHz dual core Atom CPU in a netbook against a development board using a dual core Cortex-A9 ARM CPU running at a mere 500 MHz. Yes, just 500 MHz. The results? Even with the netbook having a graphics accelerator and the ARM dev system lacking one, the ARM was only slightly slower than the Atom! Of course, it consumed a lot less power than the Atom too, which is critical. Note that this video dates from January 2010 and there are newer versions of both products now. However, it's still valid today, as the performance balance hasn't changed much between the two processor architectures. This is because the differences are inherent to them (x86 is hot and inefficient, basically), so it doesn't really matter how much each one is tweaked; the performance ratios will stay roughly the same.


Sources: Sign On Sandiego (lots of extra info, well worth a read) and TechEye

84 Comments on Wintel Alliance Slowly Crumbling, ARM To Eventually Rule The Desktop?

#1
qubit
Overclocked quantum bit
newtekie1 said:
Their processors are just way too shitty for desktop users. They don't want to wait 15 seconds for a webpage to load.
That's just wrong, NT.

x86 goes like a rocket because of the high-performance process tech and design that goes into it, while ARM chips are currently all optimised for low power, as I explained in the article.

And web pages don't take an excessive time to load on an ARM processor - did you see that video? The ARM had less than a third of the clock speed of the Atom and no GPU accelerator, yet it was only a bit slower. At the same clock speed, it clearly would have destroyed the Atom - and there are ARM variants that run well over a gigahertz.

That Atom/ARM comparison was valid because both chips are optimised for low power. Apply the high-performance mojo that x86 currently enjoys to an ARM and then compare them. You'll find the ARM wins again at the same clock speed, while using significantly less power.
#2
Wile E
Power User
qubit said:
That's just wrong, NT.

x86 goes like a rocket because of the high-performance process tech and design that goes into it, while ARM chips are currently all optimised for low power, as I explained in the article.

And web pages don't take an excessive time to load on an ARM processor - did you see that video? The ARM had less than a third of the clock speed of the Atom and no GPU accelerator, yet it was only a bit slower. At the same clock speed, it clearly would have destroyed the Atom - and there are ARM variants that run well over a gigahertz.

That Atom/ARM comparison was valid because both chips are optimised for low power. Apply the high-performance mojo that x86 currently enjoys to an ARM and then compare them. You'll find the ARM wins again at the same clock speed, while using significantly less power.
That's assuming they can scale high enough to do so. We were supposed to see 10Ghz P4's and Power5 chips too, but that never happened, now did it?
#3
newtekie1
Semi-Retired Folder
qubit said:
That's just wrong, NT.

x86 goes like a rocket because of the high-performance process tech and design that goes into it, while ARM chips are currently all optimised for low power, as I explained in the article.

And web pages don't take an excessive time to load on an ARM processor - did you see that video? The ARM had less than a third of the clock speed of the Atom and no GPU accelerator, yet it was only a bit slower. At the same clock speed, it clearly would have destroyed the Atom - and there are ARM variants that run well over a gigahertz.

That Atom/ARM comparison was valid because both chips are optimised for low power. Apply the high-performance mojo that x86 currently enjoys to an ARM and then compare them. You'll find the ARM wins again at the same clock speed, while using significantly less power.
As already pointed out, the GPU on the Atom system provides no help, as it can't HW accelerate any browsers.

And the ARM chip was insanely slow, to the point that I would've been embarrassed to put it out if I was the guy trying to sell us on ARM. To show a video where your processor falls flat on simple tasks like loading web pages against a processor that desktop users already think is a slow piece of shit, doesn't help your argument that ARM is fast enough for desktop users.

Again, even if they give ARM all the advantages of not being concerned about power, it isn't going to come close to performing on the desktop. It won't even touch a Sandybridge dual-core, hell it probably won't even touch a Bulldozer dual-core. Yes, it might still consume less power, but probably not a lot, and probably still be slower than Atom...
#4
qubit
Overclocked quantum bit
Wile E said:
That's assuming they can scale high enough to do so. We were supposed to see 10Ghz P4's and Power5 chips too, but that never happened, now did it?
Oh, it will scale alright. The ARM chip, being a RISC design, is much more streamlined than a CISC one. Therefore, an ARM can be implemented with far fewer transistors than an x86 one, which means it can be clocked higher. The only reason you don't see 4 GHz ARMs is that they wouldn't exactly be battery friendly at that speed. To be fair, there are also various other tricks, like pipelining and so on, that Intel and AMD use to make their chips clock faster, and these would also be used to make a super-fast ARM. But make no mistake, it can be done, and with less effort than with an x86. Being small, it would still have the power efficiency advantage over an equivalently clocked x86, while running faster.

newtekie1 said:
As already pointed out, the GPU on the Atom system provides no help, as it can't HW accelerate any browsers.
Ok, fair point. They're even in this respect.

newtekie1 said:
And the ARM chip was insanely slow, to the point that I would've been embarrassed to put it out if I was the guy trying to sell us on ARM. To show a video where your processor falls flat on simple tasks like loading web pages against a processor that desktop users already think is a slow piece of shit, doesn't help your argument that ARM is fast enough for desktop users.
Are we talking about the same video here? The ARM was only a bit slower, while running at a third of the clock speed. It didn't fall flat at all.

newtekie1 said:
Again, even if they give ARM all the advantages of not being concerned about power, it isn't going to come close to performing on the desktop. It won't even touch a Sandybridge dual-core, hell it probably won't even touch a Bulldozer dual-core. Yes, it might still consume less power, but probably not a lot, and probably still be slower than Atom...
That's where we disagree. My point is that if the same process mojo were put into an ARM, it would trounce x86, for the reasons I just explained to Wile E. Unfortunately, we can't definitively prove it either way without IBM or Intel (lol, like they would want to) building such a monster, but it's reasonable to expect the ARM to win.

I don't know if you've ever compared the ARM and x86 architectures, but they're built to opposite design philosophies, being RISC & CISC respectively. In fact, if you look at all the x86 extensions such as MMX & the SSE family, you'll see how streamlined they are compared to the old legacy x86 instructions, being more like a RISC machine and also doing more work in parallel.

These instructions, of course, boost the efficiency of x86 enormously, as some of the old instructions are effectively dropped in favour of these ones, depending on the application. This is all part of the "mojo" enhancements I'm talking about, so if something like this is required for a fast ARM, then it's perfectly acceptable to include it in our virtual competition. :)

btw I've programmed the ARM2 processor back in the 90's on my Acorn Archimedes: the instruction set is pure nerdgasm stuff. :respect:
#5
bostonbuddy
ARM getting performance up to desktop speeds is a big if, while die shrinks and the corresponding decreases in power consumption will continue apace on the x86 side. My bet is on Intel getting its power consumption down first.
#6
v12dock
Atom is old tech; bench it against a Brazos.
#7
Wile E
Power User
qubit said:
Oh, it will scale alright. The ARM chip, being a RISC design, is much more streamlined than a CISC one. Therefore, an ARM can be implemented with far fewer transistors than an x86 one, which means it can be clocked higher. The only reason you don't see 4 GHz ARMs is that they wouldn't exactly be battery friendly at that speed. To be fair, there are also various other tricks, like pipelining and so on, that Intel and AMD use to make their chips clock faster, and these would also be used to make a super-fast ARM. But make no mistake, it can be done, and with less effort than with an x86. Being small, it would still have the power efficiency advantage over an equivalently clocked x86, while running faster.



Ok, fair point. They're even in this respect.



Are we talking about the same video here? The ARM was only a bit slower, while running at a third of the clock speed. It didn't fall flat at all.



That's where we disagree. My point is that if the same process mojo was put into an ARM, it would trounce x86, for the reasons I just explained to Wile E. Unfortunately, we can't definitively prove it either way here without IBM or Intel (lol, like they would want to) building such a monster, but it's reasonable to expect the ARM to win.

I don't know if you've ever compared the ARM and x86 architectures, but they're built to opposite design philosophies, being RISC & CISC respectively. In fact, if you look at all the x86 extensions such as MMX & the SSE family, you'll see how streamlined they are compared to the old legacy x86 instructions, being more like a RISC machine and also doing more work in parallel.

These instructions, of course boost the efficiency of x86 enormously, as effectively some of the old instructions aren't used in favour of these ones, depending on the application. This is all part of the "mojo" enhancements I'm talking about, so if something like this is required for a fast ARM, then it's perfectly acceptable to include it in our virtual competition. :)

btw I've programmed the ARM2 processor back in the 90's on my Acorn Archimedes: the instruction set is pure nerdgasm stuff. :respect:
An ARM cannot compete in performance at the desktop level without first adding more instruction sets, and thus more silicon, rendering their power advantage null.

Sorry, x86 is not in any danger from ARM for a long time to come in the desktop world.
#8
p3ngwin1
newtekie1 said:
I disagree. Atom was based on an extremely modified Core architecture, AFAIK, and is a product that they kind of threw together a few years ago and haven't really done anything with since. And they still managed to get the processor power consumption down to the 2 watt range.

Now look at the current Sandybridge and what it is capable of. They have dual-core Sandybridge processors, full powered, full featured chips, that sip just 17w. Imagine what they could get that down to just by ditching the silicon for HT and the 3MB of L3 cache. And that is with a graphics processor; drop that, and what do the power consumption numbers drop to? Cripple the memory controller (who needs dual channel 32GB RAM on a smart phone?), cut out the PCI-E lanes (don't need those on a smart phone), drop the unused instruction sets and the silicon related to those. What would the power consumption numbers look like then?

If Intel was worried, and they wanted to put their effort into it, I have no doubt they could put out an x86 processor today based on an extremely modified Sandybridge that could pull under 1w under load, and idle at under 0.25w, and outperform the competition.
Sandybridge at 17 watts, eh?

so it's "just sipping" over 8X the maximum power expected of smartphones and tablets?


ok, let's have a look at your fetish and delusion for a Sandybridge-based "Atom":
"Imagine what they could get that down to just by:"
  1. ditching the silicon for HT...
  2. and the 3MB of L3 cache...
  3. a graphics processor - drop that, and what do the power consumption numbers drop to...
  4. crippling the memory controller (who needs dual channel 32GB RAM on a smart phone?)...
  5. cutting out the PCI-E lanes (don't need those on a smart phone)...
  6. dropping the unused instruction sets and the silicon related to those
"What would the power consumption numbers look like then?"
you are suffering from confirmation bias, because you are only paying attention to ONE side of the "performance per watt" equation. you are literally saying "yeah, but what if we chop out all this useless stuff, and then the power consumption will be comparable!", all the while ignoring the performance drop that killed the Atom: it was made by taking a Celeron (already a POS) and HALVING its performance, and then everyone wondered why the Atom died when ARM devices offered better performance per watt.

for your proposed 'sandy-Atom', not only should you ask "what would the power consumption numbers look like then?", but ALSO "what performance would be left after cutting all those features?".

you would create exactly the same effect that made the original Intel Atom a piece of shit: a bottom-of-the-barrel desktop chip architecture that had its performance HALVED from the Celeron design it was butchered from. yet you expect a crippled Sandybridge dual-core to compete with ARM chips that contain everything you removed from your Sandybridge?

you say "drop the graphics processor", but why? how do you expect the device to display visual output with performance comparable to other devices if it has no GPU?

you say "cripple the memory controller (who needs dual channel 32GB RAM on a smart phone?)". the amount of RAM is quite irrelevant at this time, but the bandwidth is very relevant to performance. we already have dual-channel RAM on mobile devices, so why would you cripple that in your mythical sandy-Atom?

you say "drop the unused instruction sets and the silicon related to those". ok, this makes no sense at all and convinces me you're discussing CPUs of different architectures without even knowing what an architecture is. why would you have silicon for unused instructions on a chip? are you comparing the x86 and ARM ISAs? if so, the entire chip you propose may as well be dismissed, as it would have "unused silicon" for an ARM ISA.

you are literally putting your mythical sandy-Atom on a diet by cutting its arms and legs off to lose weight, and then somehow claiming it is still a performance athlete!?

there are existing ARM chips that contain everything you wanted to remove from your mythical sandy-atom, yes EVERYTHING.

have a look here for just ONE example. this is Marvell's quad-core ARM chip for servers and enterprise:


• Up to four high-performance CPU cores with Vector Floating Point (VFP) support
• 800MHz to 1.6GHz operating speed
• Heterogeneous multiprocessing (SMP/AMP/Mixed) with hardware-based cache coherency
• I/O cache coherency
• 32KB-Instruction (4-way) and 32KB-Data (8-way), set-associative L1 cache per core
• Up to 2MB shared and unified 32-way, set-associative L2 cache
• 64-bit DDR2/DDR3/DDR3L memory interface with ECC support at up to 800MHz clock rates (1600MHz data rate)
• Four Gigabit Ethernet MACs with interface options (GMII/RGMII/SGMII/QSGMII)
• High-performance security engines
• Four PCI-e 2.0 ports (two x4 ports can be configured to Quad x1 – up to 16 lanes)
• Three USB 2.0 ports
• Two SATA 2.0 ports
• Up to 16 high speed Marvell SERDES lanes with multi functionality (PCI-e, SATA, SGMII, QSGMII)
• Advanced power management
• On-chip LCD controller, NOR, NAND, SPI, SDIO, UART, I2C, TDM interfaces, etc

no crippled "we removed features and performance to get the power consumption down" design here; simply a fully-featured design that compromises nothing in features or performance, ready to challenge traditional desktop and enterprise CPUs.

then there are designs from Nvidia, Qualcomm, TI, Calxeda, etc. that are high-performance and fully-featured, ready to take on desktops, nettops, laptops, tablets, enterprise, servers - ALL markets.

Ever wondered why HP wants to use ARM for servers? it's because of the performance-per-watt benefit of leaving x86. Nvidia are also pushing ARM beyond desktops and into servers with their "Project Denver" 64-bit ARM chip.

you want to take Intel's latest chip, cripple it, and expect to get the power consumption down to less than 2 watts under load and less than 0.25 watts at idle (the current market expectations) for tablets and smartphones? all while still performing BETTER than the ARM competition?

not happening.

you want to cripple Intel's finest to create another Atom, gaining all the power efficiency while losing NONE of the performance. Meanwhile, ARM designs are ADDING functionality - baseband radios, bluetooth, wi-fi, NFC, GPUs, ALL major IO, etc. - onto a single die, while increasing the performance per watt ratio.

Intel are in trouble, and so is AMD, albeit slightly less so, for various reasons.
"I have no doubt they could put out an x86 processor today based on an extremely modified Sandybridge that could pull under 1w under load, and idle at under 0.25w and outperform the competition."
again you show your delusion. why would Intel ALLOW market share to be stolen from them, losing every segment they currently own while failing to enter markets such as smartphones and tablets? the answer is: they don't WANT to allow it, it is being TAKEN. that, and they simply do not have a chip with which to enter the smartphone and tablet market and compete with ARM.

Intel have been saying they will have x86 silicon in smartphones and tablets for YEARS. they failed with Moblin, then with MeeGo, and they're about to try again with x86 Android.

Intel are 2 years away from gaining any market share in tablets and smartphones, and that's optimistic. why do you think Microsoft isn't waiting to put Windows on mobiles? it's because Microsoft knows it will take Intel too many YEARS to get x86 performing at an acceptable power consumption for mobiles. So Microsoft wants to start benefiting from mobile NOW, and isn't waiting for Intel to get its shit together.

In doing so, Microsoft will be propagating ARM and applications for ARM hardware. it's one of the biggest changes in the computing industry, ever.
#9
NC37
The problem with IBM was that they were never very serious about the business outside of servers and consoles. Apple had enough of it and was forced to switch to Intel. The other big maker Apple used, Freescale, had some great designs for a bit, but then went into the phone market too.

PowerPC held on for a bit vs Intel. Altivec brought a lot of value in via the professional market when it was added onto the RISC CPUs. But ultimately the change in the markets shifted it elsewhere.

I'd just love to see the day when RISC returns. Then Apple would be the last to switch back. Foot in mouth, Apple, foot in mouth...lmao!
#10
Steevo
I use a dual core 1.2 GHz with an actually good GPU built in, and it is still ages slower than the old 1.7 GHz single core laptop I use for work, with its crappy integrated Intel graphics.

ARM is good for a few things; desktops and anything requiring mission-critical timing are not among them.


I would rather have to plug in a device more often and get the performance I need when I need it, than have it last a full day without charging (or just upgrade the battery) and still suffer performance penalties.


I just got a 16" AMD A4 laptop for our techs, with 8GB RAM, a 5400RPM drive and an LED screen. Running Windows 7, the battery life is almost 7 hours of continuous use, and it weighs a little over 6 pounds. It is fast, responsive, and capable of replacing a desktop for all of their needs, for a whopping $429; it was less than my phone.


When my phone can do all this, and at those speeds, I will believe it is ready for a move up in the world. But it's not.
#11
p3ngwin1
newtekie1 said:
The Cortex-A9 doesn't have a GPU, pay attention here.

But again, right now Sandybridge is absolutely destroying ARM in performance/watt. So taking that same architecture, and crippling it to the point that it performs like an old 486 to match the ARM performance should give power consumption the same or better than ARM.

Like I said, Sandybridge is already down to 17w, that is with two full cores and Hyperthreading, an integrated Northbridge, an integrated GPU, and integrated memory controller, integrated PCI-E lanes, and a huge L3 cache. A lot of stuff that isn't needed in a ARM competitor.
Sandybridge is absolutely destroying ARM in performance/watt
citation ?

why are there no mobile x86 devices if Intel is "destroying ARM in performance/watt"? why is the industry moving away from x86, with Microsoft embracing ARM and HP, etc. embracing ARM in servers? if Intel are unbeatable in performance/watt, then why did mobiles never start with x86, and why are servers leaving x86?
taking that same architecture, and crippling it to the point that it performs like an old 486 to match the ARM performance should give power consumption the same or better than ARM.
a 486... matching ARM performance??

wtf are you talking about now?

right now, ARM solutions compare favourably to a Core 2 Duo for performance/watt. that performance is suitable for the majority of consumers, who browse the web and use office applications.
#12
MikeMurphy
This thread is a classic example of intelligent comments being drowned out by waves of opinion passed off as fact.

1) Performance per watt isn't the important metric at the moment; low idle power consumption is. This is absolutely critical for smartphones, as they are idle most of the time. ARM does this well. The x86 architecture is not suited to it, although Intel is aiming squarely at the problem. Therefore, the x86 architecture has so far been inappropriate for smartphones.

2) Reducing the Atom to 22nm would help power consumption but doesn't even come close to solving this problem.

3) Intel's Ultrabook initiative is in direct response to the ARM threat. ARM's next stop is the laptop form factor. Given the growth in the laptop market (and the corresponding reduction in the desktop market), it is important for Intel to establish a better user experience than it has offered in the past. This better user experience is expected to slow the growth of ARM in this market.

4) As for ARM not being able to compete for the desktop performance crown? It doesn't need to. Intel can keep its shrinking x86 desktop market all to itself while the rest of the mobile world switches to ARM.
#13
p3ngwin1
bostonbuddy said:
Intel must be planning to have 22nm Ivybridge based smartphone/tablet x86 cpu's. Only makes sense. They can't wait 2 years for the next die shrink to start competing in that market.
Intel have been trying to get x86 into mobiles and tablets for over 5 years now... still waiting.

http://www.youtube.com/watch?v=UZDfmJs8Ras
#14
p3ngwin1
MikeMurphy said:
This thread is a classic example of intelligent comments being drowned out by waves of opinion passed off as fact.

1) Performance per watt isn't the important metric at the moment; low idle power consumption is. This is absolutely critical for smartphones, as they are idle most of the time. ARM does this well. The x86 architecture is not suited to it, although Intel is aiming squarely at the problem. Therefore, the x86 architecture has so far been inappropriate for smartphones.

2) Reducing the Atom to 22nm would help power consumption but doesn't even come close to solving this problem.

3) Intel's Ultrabook initiative is in direct response to the ARM threat. ARM's next stop is the laptop form factor. Given the growth in the laptop market (and the corresponding reduction in the desktop market), it is important for Intel to establish a better user experience than it has offered in the past. This better user experience is expected to slow the growth of ARM in this market.

4) As for ARM not being able to compete for the desktop performance crown? It doesn't need to. Intel can keep its shrinking x86 desktop market all to itself while the rest of the mobile world switches to ARM.
agreed.

you only have to look at Apple's market share moving from desktops to laptops and mobile to see that hardware performance matters less to most consumers than battery life and portability. this is why Apple are selling more laptops, and are investing more R&D into their mobiles than their desktops.

ARM have the advantage of evolving their architecture up into the desktop-class processor realm, compared to Intel, which has been struggling for over 5 years to get x86 into mobiles. they keep promising "next year you'll see consumer products", but we never do.

Intel may have a fabrication process advantage, but that isn't enough. ARM are winning not because of fab-process advantages; it's simply that the architecture is evolving more in harmony with customers' needs. we require less powerful desktop machines, as the majority of our computing can be satisfied within a browser and a mobile OS, with programming advancements that let every processor and cycle be used more efficiently instead of sitting idle wasting fuel (battery).

the requirements for hardware are slowing, as the software and OS landscape changes to better serve our needs. Intel is a juggernaut of processing power that is quickly becoming less relevant as our needs are better served by mobile, always-connected, efficient devices.

ARM is better suited to this future that is unfolding, and Intel are desperately trying to learn how to do new tricks that they've never done before (successfully).

there's a reason ARM is gaining the support of the world's largest OS vendor together with some of the biggest server vendors, and it's not because ARM is challenging the performance crown from Intel. people here who think "intel could take a sandybridge, and cripple it to make another ATOM and it would destroy ARM", are missing the point of what this battle is about. those people should ask themselves: seeing as those mythical chips don't exist, are Intel unwilling or unable to make them ?

it's not about raw performance at any cost. it's about a better balance of power, portability (see any Intel devices with always-on 24hr wireless connections?) and lifespan. that balance is reflected in the way we are living our lives away from our desks and more on the move. There exists hardware allowing this lifestyle change, and it's not powered by x86. we want the freedom to be connected wherever we are, with the flexibility of great software and OS to allow us to consume or create as much as we want.

ARM has it, Intel needs to learn it.
Posted on Reply
#15
Wile E
Power User
That's all well and good, p3ngwin1, but this editorial is about the desktop platform.

The desktop market is simply not in danger from ARM any time soon.
Posted on Reply
#16
p3ngwin1
qubit said:
Oh, it will scale alright. The ARM chip, being a RISC design, is much more streamlined than a CISC one. Therefore, an ARM can be implemented with far fewer transistors than an x86 one, which means it can be clocked higher. The only reason you don't see 4GHz ARMs is that they wouldn't exactly be battery-friendly at that speed. To be fair, there are also various other tricks like pipelining and so on that Intel and AMD use to make the chip clock faster, and these would also be used to make a super-fast ARM. But make no mistake, it can be done, and with less effort than with an x86. Being small, it will still have the power-efficiency advantages over an equivalently clocked x86, while running faster.



Ok, fair point. They're even in this respect.



Are we talking about the same video here? The ARM was only a bit slower, while running at a third of the clock speed. It didn't fall flat at all.



That's where we disagree. My point is that if the same process mojo was put into an ARM, it would trounce x86, for the reasons I just explained to Wile E. Unfortunately, we can't definitively prove it either way here without IBM or Intel (lol, like they would want to) building such a monster, but it's reasonable to expect the ARM to win.

I don't know if you've ever compared the ARM and x86 architectures, but they're built to opposite design philosophies, being RISC & CISC respectively. In fact, if you look at all the x86 extensions such as MMX & the SSE family, you'll see how streamlined they are compared to the old legacy x86 instructions, being more like a RISC machine and also doing more work in parallel.

These instructions, of course, boost the efficiency of x86 enormously, as some of the old instructions effectively go unused in favour of these ones, depending on the application. This is all part of the "mojo" enhancements I'm talking about, so if something like this is required for a fast ARM, then it's perfectly acceptable to include it in our virtual competition. :)

btw, I programmed the ARM2 processor back in the '90s on my Acorn Archimedes: the instruction set is pure nerdgasm stuff. :respect:
agreed.

also, ACORN ARCHIMEDES !

wow, i remember being in school with BBC Micros, and then the school transitioned to Archimedes... damn, that thing was a monster from the future. my Acorn Electron at home already had an inferiority complex next to the BBCs; the Archimedes didn't improve things!
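qubit's RISC-vs-CISC point above can be sketched as a toy model. this is purely illustrative: the instruction names and the micro-op split are hypothetical, not any real decoder's behaviour.

```python
# Toy model, NOT a real decoder: a single CISC-style memory-to-register
# instruction expands into several simpler RISC-style micro-ops, which is
# part of why a RISC decoder can be smaller and simpler.
def decompose_cisc(instr):
    """Expand a hypothetical ('ADD_MEM', addr, reg) into load/add/store micro-ops."""
    op, dest, src = instr
    if op == "ADD_MEM":  # CISC: read-modify-write memory in one instruction
        return [("LOAD", "tmp", dest),   # fetch the operand from memory
                ("ADD", "tmp", src),     # do the arithmetic in a register
                ("STORE", dest, "tmp")]  # write the result back
    return [instr]  # already a simple, RISC-like operation

print(len(decompose_cisc(("ADD_MEM", "0x1000", "r1"))))  # -> 3
```

the point being that the RISC side only ever implements the three simple operations, while the CISC side has to decode (and spend transistors on) the combined form as well.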
Posted on Reply
#17
p3ngwin1
newtekie1 said:
....Their processors are just way too shitty for desktop users. They don't want to wait 15+ seconds for a webpage to load. Most get annoyed and click the link again if their desktop doesn't instantly load the page when the link is clicked...
wtf?

RTFA, and watch the linked video. even from a DEVELOPMENT board WITHOUT A GRAPHICS MODULE, it compared favourably against its competitor that INCLUDED A GRAPHICS MODULE.

all web pages loaded in under 2 seconds flat. so wtf are you talking about with "15+ seconds to load"?

your bias against ARM is way too obvious; you even refute the evidence in front of your eyes from the article you're commenting on.
Posted on Reply
#18
p3ngwin1
newtekie1 said:
As already pointed out, the GPU on the Atom system provides no help, as it can't HW accelerate any browsers....
false:

http://www.pclinuxos.com/forum/index.php?topic=98228.0

(2nd post in that link).

in this article's video, BOTH systems downloaded the websites in less than 2 seconds, and seeing as the average American internet speed is less than 4Mb/s, that's plenty fast for these two systems to render the internet content.
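as a rough sanity check on that claim, here's the transfer-time arithmetic. the ~1 MB page size is an assumed figure for illustration, not a measurement from the video.

```python
# At 2011-era US broadband speeds, the network transfer alone bounds
# page-load time, regardless of how fast the CPU is.
def download_seconds(page_bytes, link_mbps):
    """Seconds to move page_bytes over a link of link_mbps megabits per second."""
    bytes_per_second = link_mbps * 1_000_000 / 8
    return page_bytes / bytes_per_second

# a ~1 MB page over a 4 Mb/s link is network-bound at about 2 seconds,
# so both test systems were near the link's floor, not CPU-limited.
print(download_seconds(1_000_000, 4))  # -> 2.0
```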

as has been explained to you plenty of times in this thread: the Atom had a MUCH faster processor and a graphics module, which is at least used in rendering and compositing the OS's screen in 2D, let alone any in-browser acceleration that may go beyond 2D.

the ARM system had a slower processor, NO GPU and absolutely no acceleration, yet managed almost identical (sub-1-second difference) performance rendering the webpages. so the ARM system achieved nearly the same performance at a much lower processor speed and without a GPU, for massive power savings. which was the point of the video.

how you can say it's "slow as shit" and "took 15+ seconds to load" is obviously your deluded, biased bullshit speaking.
Posted on Reply
#19
Fourstaff
Wile E said:
An ARM cannot compete in performance at the desktop level
The question is, under what conditions? Under current conditions, no doubt x86 will remain king for many years. However, if I had told you 20 years ago that I would be watching porn on my mobile phone, you would have brushed me off as a lunatic. If I tell you now that desktops will go extinct within 20 years, you will laugh at me too.
Posted on Reply
#20
qubit
Overclocked quantum bit
Fourstaff said:
However, if I told you 20 years ago that I will be watching porn using my mobile phone you would have brushed me off as a lunatic.
You watch porn?! :eek: I'm shocked, shocked I tell you! :eek: :eek: :laugh:
Posted on Reply
#21
chodaboy19
It's always too early to write an obituary for "Wintel". The market will become more fragmented, but Intel will still command the lion's share of the CPU market.
Posted on Reply
#22
newtekie1
Semi-Retired Folder
p3ngwin1 said:
sandybridge at 17watts eh?

so it's "just sipping" over 8X the maximum power expected of smartphones and tablets ?


ok, let's have a look at your fetish and delusion for a Sandy Bridge-based "Atom":


  1. ditching the silicon for HT...
  2. and the 3MB of L3 cache...
  3. a graphics processor, drop that and what does the power consumption numbers drop too...
  4. Cripple the memory controller(who needs dual channel 32GB RAM on a smart phone?)...
  5. cut out the PCI-E lanes(don't need those on a smart phone)...
  6. drop the unused instruction sets and the silicon related to those

you are suffering from confirmation bias, because you are only paying attention to ONE side of the "performance per watt" equation. you are literally saying "yeah, but if we chop out all this useless stuff, then we'll have comparable power consumption!", all the while ignoring the performance drop. that's exactly what killed the Atom: Intel took a Celeron (already a POS) and HALVED its performance, then wondered why Atom died when ARM devices offered better performance per watt.

for your proposed 'Sandy-Atom', not only should you ask "what would the power consumption numbers look like then?", but ALSO "what would the performance be after cutting all those features?".

you would create exactly the same effect that made the original Intel Atom a piece of shit: it was a bottom-of-the-barrel desktop chip architecture that had its performance HALVED from the Celeron design it was butchered from. yet you expect a crippled Sandy Bridge dual-core to compete with ARM chips that contain everything you removed from your Sandy Bridge?

you say "drop the graphics processor", but why? how do you expect the device to display visual output with performance comparable to other devices if it has no GPU?

you say "cripple the memory controller (who needs dual channel 32GB RAM on a smart phone?)". the amount of RAM is fairly irrelevant right now, but the bandwidth is very relevant to performance. we already have dual-channel RAM on mobile devices, so why would you cripple that in your mythical Sandy-Atom?

you say "drop the unused instruction sets and the silicon related to those". this makes no sense at all and convinces me you're discussing CPUs of different architectures without even knowing what an architecture is. why would a chip carry silicon for unused instructions in the first place? are you comparing the x86 and ARM ISAs? if so, then the entire chip you propose may as well be dismissed, as it would all be "unused silicon" from the perspective of the ARM ISA.


you are literally putting your mythical Sandy-Atom on a diet by cutting off its arms and legs to lose weight, and then somehow claiming it is still a performance athlete!?

there are existing ARM chips that contain everything you wanted to remove from your mythical Sandy-Atom. yes, EVERYTHING.

have a look here for just ONE example. this is Marvell's quad-core ARM chip for servers and enterprise:


• Up to four high-performance CPU cores with Vector Floating Point (VFP) support
• 800MHz to 1.6GHz operating speed
• Heterogeneous multiprocessing (SMP/AMP/Mixed) with hardware-based cache coherency
• I/O cache coherency
• 32KB-Instruction (4-way) and 32KB-Data (8-way), set-associative L1 cache per core
• Up to 2MB shared and unified 32-way, set-associative L2 cache
• 64-bit DDR2/DDR3/DDR3L memory interface with ECC support at up to 800MHz clock rates (1600MHz data rate)
• Four Gigabit Ethernet MACs with interface options (GMII/RGMII/SGMII/QSGMII)
• High-performance security engines
• Four PCI-e 2.0 ports (two x4 ports can be configured to Quad x1 – up to 16 lanes)
• Three USB 2.0 ports
• Two SATA 2.0 ports
• Up to 16 high speed Marvell SERDES lanes with multi functionality (PCI-e, SATA, SGMII, QSGMII)
• Advanced power management
• On-chip LCD controller, NOR, NAND, SPI, SDIO, UART, I2C, TDM interfaces, etc
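two of the listed figures can be sanity-checked with quick arithmetic. the 64-byte cache line size is an assumption (the list doesn't state it); the other numbers come straight from the spec list above.

```python
# Deriving cache geometry and peak memory bandwidth from the listed specs.
def cache_sets(total_bytes, ways, line_bytes):
    """Number of sets in a set-associative cache: size / (ways * line size)."""
    return total_bytes // (ways * line_bytes)

def ddr_peak_gb_s(data_rate_mhz, bus_bits):
    """Peak DDR bandwidth: transfers per second times bytes per transfer."""
    return data_rate_mhz * 1e6 * (bus_bits / 8) / 1e9

print(cache_sets(32 * 1024, 4, 64))  # 32KB 4-way L1 I-cache -> 128 sets (with assumed 64B lines)
print(ddr_peak_gb_s(1600, 64))       # 64-bit bus at 1600MHz data rate -> 12.8 GB/s peak
```

that 12.8 GB/s peak is the same ballpark as a single-channel desktop DDR3 setup, which is the point: this is not a cut-down mobile part.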

no crippled "we removed features and performance to get the power consumption down", simply a fully-featured design that compromises nothing in features or performance, ready to challenge traditional desktop and enterprise CPUs.

then there are designs from Nvidia, Qualcomm, TI, Calxeda, etc. with high-performance, fully-featured parts ready to take on desktops, nettops, laptops, tablets, enterprise, servers: ALL markets.

Ever wondered why HP wants to use ARM for servers? it's the performance-per-watt ratio that benefits them in leaving x86. Nvidia is also pushing ARM beyond desktops and into servers with its "Project Denver" 64-bit ARM chip.

you want to take Intel's latest chip, cripple it, and get the power consumption down to less than 2 watts under load and less than 0.25 watts at idle (the current market expectations) for tablets and smartphones, all while still performing BETTER than the ARM competition?

not happening.
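the runtime arithmetic behind that: the 5.3 Wh battery capacity below is an assumed, illustrative figure for a 2011-era smartphone pack, not a spec from any particular device.

```python
# Battery runtime if the chip drew its load power continuously:
# a 17W mobile Sandy Bridge vs a sub-2W ARM-class power budget.
def runtime_hours(battery_wh, load_watts):
    """Hours of runtime: battery capacity (Wh) divided by sustained draw (W)."""
    return battery_wh / load_watts

print(round(runtime_hours(5.3, 17.0), 2))  # 17W chip on a 5.3 Wh pack -> 0.31 h
print(round(runtime_hours(5.3, 2.0), 2))   # 2W budget on the same pack -> 2.65 h
```

even before screen, radio and the rest of the platform are counted, the 17W part exhausts a phone-sized battery in under 20 minutes.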

you want to cripple Intel's finest to create another Atom, gaining all the power efficiency while losing NONE of the performance. Meanwhile, ARM designs are ADDING functionality (baseband radios, Bluetooth, Wi-Fi, NFC, GPUs, ALL major I/O, etc.) onto a single die while increasing the performance-per-watt ratio.

Intel are in trouble, so is AMD albeit slightly less for various reasons.



again you show your delusion. why would Intel ALLOW market share to be stolen from them, losing every segment they currently own while failing to enter markets such as smartphones and tablets? the answer is they don't WANT to allow it; it is being TAKEN. that, and they simply do not have a chip to compete with ARM in the smartphone and tablet market.

Intel has been saying it will have x86 silicon in smartphones and tablets for YEARS. they failed with Moblin, then with MeeGo, and they are about to try again with x86 Android.

Intel is 2 years from gaining any market share in tablets and smartphones, and that's optimistic. why do you think Microsoft isn't waiting to put Windows on mobiles? because Microsoft knows it will take Intel too many YEARS to get x86 performing at an acceptable power consumption for mobiles. Microsoft wants to start benefiting from mobile NOW, and isn't waiting for Intel to get its shit together.

In doing so, Microsoft will be propagating ARM and applications for ARM hardware. it's one of the biggest changes the computing industry has ever seen.
You seem to be missing the point. All that stuff I said to cut out isn't necessary in a smartphone form factor; that is why I said cut it out. You really think PCI-E lanes are necessary on a smartphone? You think wasting half the die on a powerful iGPU is necessary on a smartphone? You think wasting another half of the leftover die on L3 cache is necessary on a smartphone? And wasting a quarter of what's left on a dual-channel DDR3 memory controller is necessary on a smartphone? You're still trying to compare Atom, a traditional processor, to an ARM processor designed for smartphones. All of that stuff sucks power, and none of it is necessary on a smartphone platform.

I never said it wouldn't lose any performance; I'm saying the resulting performance would be as good as ARM's (which is shit).

p3ngwin1 said:
citation ?

why are there no mobile x86 devices if Intel is "destroying ARM in performance/watt"? why is the industry moving away from x86, with Microsoft embracing ARM while HP, etc. embrace ARM in servers? if Intel is unbeatable in performance/watt, then why did mobiles never start with x86, and why are servers leaving x86?
The industry is moving away from x86? Where do you get that? Because there are new markets that have opened that don't use it? Not exactly a good logical conclusion there.

Mobility never started with x86 because we are still using processors based on the ones that were in old non-smartphones. They just kept scaling them up, while Intel is very good at scaling down. I haven't seen anyone embracing ARM servers; I have no idea where you are getting that, unless you are talking about those crappy HDD boxes that run custom pre-loaded crap firmware...


p3ngwin1 said:
486...match ARM performance ? ?

wtf are you talking about now?
It is called exaggeration to prove a point. Working on an ARM processor is painfully slow, like working on a 486.

p3ngwin1 said:
right now ARM solutions compare favorably to a Core2duo for performance/watt. This performance is suitable for the majority of consumers who brows the web and use office applications.
If the video is anything to go by, no it isn't suitable for the majority of consumers who browse the web and use office applications.

MikeMurphy said:
This thread is a classic example of intelligent comments being drowned out by waves of opinion passed off as fact.

1) Performance per watt isn't the important metric at the moment; low idle power consumption is. This is absolutely critical for smartphones, as they are idle most of the time. ARM does this well. The x86 architecture is not suited to it, although Intel is aiming squarely at this problem. Therefore, the x86 architecture has so far been inappropriate for smartphones.

2) Reducing the Atom to 22nm would help power consumption but doesn't even come close to solving this problem.

3) Intel's Ultrabook initiative is in direct response to the ARM threat. ARM's next stop is the laptop form factor. Given the growth in the laptop market (and the corresponding reduction in the desktop market), it is important for Intel to establish a better user experience than it has offered in the past. This better user experience is expected to slow the growth of ARM in this market.

4) Re ARM not being able to compete in the desktop market for performance crown? It doesn't need to. Intel can keep their shrinking x86 desktop market all to themselves while the rest of the mobile worlds switches to ARM.
Shrinking desktop market? No. The mobile market's growth has little effect on the desktop market. But that is what this thread is about: the desktop market. For ARM to take over the desktop market, they will have to compete with Intel/AMD for the performance crown.

And ARM isn't going to do anything well once it hits the laptop market. It needs to stay on tablets, where people are content playing Angry Birds all day and not doing anything productive. In fact, most good productivity apps don't even support ARM, so moving into an environment where productivity is required is going to be a hard fight for them, even if they have idle power draw that is near non-existent.
Posted on Reply
#23
p3ngwin1
newtekie1 said:
You seem to be missing the point, All that stuff I said to cut out isn't necessary in a smartphone form factor, that is why I said cut it out.
well, i actually contested several of your points because you said they were "not needed". i asked why you think a graphics processor, dual-channel memory and "unused instruction sets" are unnecessary in a smartphone form factor. you have not answered satisfactorily.

you said "And that is with a graphics processor, drop that and what do the power consumption numbers drop to?", which is why i asked how you propose to have a system without a GPU of any sort. now you want to backtrack and say "You think wasting half the die to a powerful iGPU is necessary on a smartphone?".

you are changing your argument because you know you were mistaken. first you said drop the entire GPU from Sandy Bridge in an effort to compete with ARM; now you are saying don't drop it completely, but cut it down, as that much GPU performance is unnecessary. make your mind up. the fact is, current ARM SoCs with on-die GPU solutions embarrass Intel's integrated graphics modules, such as the HD 3000.

Nvidia, PowerVR, Qualcomm, and ARM's own Mali GPU solutions have more GPU performance than Intel's current integrated graphics. Intel is so bad at integrated graphics that it even used PowerVR's solutions until recently.

let's compare current-generation ARM solutions with on-die GPUs to Intel's current-generation integrated graphics, the HD 3000:

Intel HD 3000, played at 1024x768 (similar to the iPad, and LESS than an Android tablet's 1280x800). Test system: Intel Core i5 2500K @ 3.3GHz, Intel HD 3000 overclocked @ 1450MHz:

Need For Speed Hot Pursuit 2010 gameplay on HD3000
Call of Duty Black Ops on Intel HD 3000
DiRT 3 on Intel HD 3000

mobile GPUs currently used in ARM SoCs, such as Nvidia, PowerVR, Qualcomm Adreno, etc:
infinity blade 2
Asphalt 6: Adrenaline HD iPad 2 Gameplay

here's what a quad-core ARM chip with an on-die GPU can do in less than 5 watts under load TODAY.

here's what Sony can do with a quad-core ARM chip with a GPU:
PS Vita - Uncharted Golden Abyss
PS Vita - Ridge Racer Unbounded
Vision Game Engine for NGP Real-time Tech Demo

that's what can be done with an ARM SoC: the GPU already competes favorably with Intel's HD 3000 and is more than enough for the majority of people's current desktop needs of browsing and office productivity. why else do you think Microsoft is making Windows available on ARM, with Office and Internet Explorer, for tablets and ARM-based laptops in 2012?


let's see how long it will take Intel to make your mythical crippled Sandy-Atom perform this well in that power envelope. by the time Intel does, ARM will still be years ahead as Nvidia, HP, Qualcomm, TI, Marvell, Apple, Samsung, etc. push the ARM architecture into more devices, accelerating development and eroding desktop share as consumers get comfortable with mobile computing that serves their needs on the go. the desktop as you know it is dying, and is being replaced by cloud computing, with always-on devices using networked data and networked processing to serve customers' needs.


also there is cost: ARM processors cost nowhere near what Intel's CPUs do. then there's encryption, which most serious businesses require on nearly all their hard drives. this means every file is written with Microsoft's BitLocker or another OS's equivalent. this is CPU-intensive unless it is accelerated by x86 extensions, as in Intel's "Clarkdale" CPUs. well, guess what: ARM is getting that too, and it's because ARM is going into desktops and servers.

let's have a look at the current example of what you believe is going to be crippled to compete with ARM on desktop and mobile:

---------------------------Core i5-2557M
Cores..................................2
HyperThreading.......................yes
Cache...............................3 MB
Clock freq.........................1.7 GHz
Turbo..............................2.7 GHz
Graphics...........................HD 3000
Clock freq. (Graphics).............350 MHz
Turbo (Graphics)...................1.2 GHz
Process...............................32nm
TDP....................................17W

how and when exactly is Intel going to make this kind of chip, your proposed 17w candidate, available in less than 5 watts yet with better performance than the ARM equivalent ?

you can't cripple the GPU, as it is already barely good enough to compete on performance with today's ARM GPUs, and even then it isn't good enough on power consumption. what about clock speed? maybe you can lower that, although it is already dangerously low for competing with multi-core ARMs.
maybe take out some of those "unnecessary instructions" you mentioned, but what will the performance of this mythical Intel Sandy-Atom be like then?
maybe take out the PCIe lanes, but what use is a desktop CPU with no connectivity options, unlike the ARM chips that already have PCIe with Ethernet, USB, etc.?
maybe cripple the memory controller, because who needs dual channel, but again we end up with a CPU that no one would put into a desktop or server/enterprise environment.

no, i think your phantom Intel CPU is not going to be in smartphones, or competing very well in desktop and server-land with ARM. ARM is aggressively marching into laptops/convertible tablets, and is already targeting servers with one of the world's largest server vendors leading the way.

all this ARM investment means more developers, and more applications. Microsoft alone will provide half the motivation by supplying the world's most popular desktop OS on ARM together with Internet Explorer and Microsoft Office. that already suits the needs of the majority of business and consumers on the planet.




newtekie1 said:
You really think PCI-E lanes are necessary on a smartphone?
no, that's why it wasn't included in my THREE points about "dropping the GPU, crippling the memory controller, and dropping 'unused instruction sets'".

you are choosing to argue something that was never said. again, your confirmation bias, as you dodge the issue at hand, is obvious. PCIe is needed on anything from a tablet upwards, so i don't advise cutting it from your mythical Sandy-Atom.

newtekie1 said:
You think wasting another half the leftover die to L3 cache is necessary on a smartphone?
we already have L1 and L2 caches, and soon we will have L3, just like common desktop CPUs. in case you haven't noticed, mobile processors are quickly evolving to compete with traditional CPUs, and are gaining their features a lot quicker than traditional CPUs are gaining ARM's power efficiency.

newtekie1 said:
And wasting a quarter of that left over die to a dual-channel DDR3 memory controller is necessary on a smartphone?
we already have dual-channel DDR2, so there is no reason to suggest dropping it from traditional CPU's to create your mythical Sandy-ATOM to compete with ARM.

newtekie1 said:
Your still trying to compare Atom, a traditional processor, to an ARM processor designed for smartphones. All of that stuff sucks power, and all of it isn't necessary on a smart phone platform.
it is necessary for a desktop processor, so you won't be cutting it to compete with ARM's power efficiency. meanwhile, ARM will add these features while scaling efficiency much better than Intel can put its CPUs on a diet without crippling performance, like it did with Atom. you have so much faith in Intel's ability to make a chip to beat ARM, yet why haven't they done so? are they unwilling or unable to stop ARM entering desktops and servers?

newtekie1 said:
it being noticeably slower at even managing to surf the web than an Atom, and Atoms are slow pieces of shit in the desktop world
interesting that you mention Intel's failed netbooks, as that market just so happens to be dying as people buy tablets and smartphones: devices that serve many people's needs from a connected device, such as email and browsing. netbooks died because people didn't want a portable but underpowered device when they already had a desktop... until they saw tablets. then they decided to ditch desktops for the convenience of mobility.

newtekie1 said:
Shrinking desktop market? No. The mobile market increasing has little affect on the desktop market. But that is what this thread is about, the desktop market. For ARM to take over the desktop market, they will have to compete with Intel/AMD to take the performance crown.
ARM don't need to beat Intel at performance, they simply need to "perform enough", and add the important benefit of battery life for mobility, that's what Intel can't compete on right now. it will be a few more years before ARM and Intel truly clash in the same areas, but by then ARM will have a much better time surviving than Intel.

the desktop is dying:
evidence 1
evidence 2
evidence 3

newtekie1 said:
They don't want to wait 15+ seconds for a webpage to load. Most get annoyed and click the link again if their desktop doesn't instantly load the page when the link is clicked... What is good enough for a tablet user that is happy with the piss poor slow performance isn't enough for a desktop user.
if this were true, then how do you explain the declining desktop market as people increasingly buy laptops and tablets? do you have any citations or evidence for your claims, or do you expect us to listen and believe all your baseless opinions?

we provide evidence when we make claims, but you ignore it and continue to refute the truth with your bias and ignorance. why can't you provide any compelling evidence to support your assertions? is it because it doesn't exist except in your mind?

newtekie1 said:
To effectively move into the desktop market, ARM is going to have to be alot better than just "good enough to watch videos and surf the net"
actually, you seem to be missing the point. ARM simply has to continue luring people away from desktops to enjoy their entertainment and productivity on smartphones and tablets/convertibles like the Asus Eee Transformer and ASUS W500. this is why ARM is winning and Intel is fighting a difficult battle: people don't need powerful desktops anymore, so the desktop is struggling to stay relevant in an age of mobile devices that do more than enough to serve most people's needs.

ARM doesn't have to beat Intel in the desktop market; it simply needs to continue making it less relevant by making powerful, efficient processors that Intel can't compete with. why do you think Intel is losing Microsoft and HP to ARM?




newtekie1 said:
It won't even touch a Sandybridge dual-core, hell it probably won't even touch a Bulldozer dual-core. Yes, it might still consume less power, but probably not a lot, and probably still be slower than Atom...we are talking about ARM taking over the desktop market, and that just isn't going to happen
it will consume MUCH less power, so much so that Intel tried and failed to compete by taking a shitty Celeron, butchering it, and shoving it into netbooks. then netbooks died as tablets stuffed with ARM chips ate them for breakfast, and they are already starting to eat laptops and desktops for dinner.

newtekie1 said:
What is good enough for a tablet user that is happy with the piss poor slow performance isn't enough for a desktop user....To effectively move into the desktop market, ARM is going to have to be alot better than just "good enough to watch videos and surf the net"
have a look at those numbers again: desktop users are declining. whether you like it or not, people are beginning to like the freedom and casual nature of being connected on the move, able to work anywhere they want. the need for powerful desktops is decreasing as software takes better advantage of all the hardware in the system you have. there's a reason companies like Google push for things like WebGL in the browser, and why technologies like accelerated CSS and video are making faster CPUs a decreasing priority.

desktops are declining as people prefer mobile devices such as laptops, tablets and smartphones. the need for powerful CPUs is declining, and ARM has a bright future ahead as it enters desktops and servers. Microsoft is making a big push with its popular OS, browser and Office applications, and Apple will probably follow soon by testing the waters with a MacBook Air or similar.

ARM is spreading, and the days of x86 dominance are eroding very quickly, hence the old 'Wintel' alliance dissolving.

newtekie1 said:
I never said it wouldn't lose any performance, I'm saying the resulting performance would be as good as ARM(which is shit).
can you explain why people are happily exchanging their preference for desktops and laptops for these "shit" tablets and smartphones that can "only do email and browse the internet"?


newtekie1 said:
The industry is moving away from x86? Where do you get that? Because there are new markets that have opened that don't use it? Not exactly a good logical conclusion there.
do you ignore the evidence of the largest OS vendor and one of the largest server vendors supporting ARM and breaking with a near-exclusive x86 history?


newtekie1 said:
I haven't seen anyone embracing ARM servers, I have no idea where you are getting that, unless you are talking about those crappy HDD boxes that run custom pre-loaded crap firmware...
I'd suggest you educate yourself more about these matters before you assert your delusions onto other people; maybe then you won't look so ignorant as you insist Intel could make a Sandy Bridge chip to beat ARM if it "wanted to". ARM is entering servers and enterprise, but you're clearly not as informed as you'd like to believe; that's why you don't know what's happening right under your nose.




newtekie1 said:
It is called exaggeration to prove a point. Working on an ARM processor is painfully slow, like working on a 486.
you exaggerate to the point of it not being true. that's why people are buying tablets/smartphones rather than as many laptops and desktops. that evidence has been presented; your rebuttal with equally good evidence is expected.


newtekie1 said:
If the video is anything to go by, no it isn't suitable for the majority of consumers who browse the web and use office applications.
yet the market evidence refutes your assertion, unless you care to provide supporting facts to back your claims ?


newtekie1 said:
And ARM isn't going to do anything well once it hits the laptop market, it needs to stay on Tablets where people are content playing Angry Birds all day, and not doing anything productive. In fact, most good productivity apps don't even support ARM, so moving into an environment where productivity is required is going to be a hard fight for them, even if they have idle power draw that is near non-existant.
your grasp of people's current computing habits is as inaccurate as it is embarrassing.
Posted on Reply
#25
Steevo
TLDR.


Just like a console can render a game with far less eye candy, fewer pixels, and far less of the background-task overhead of a standard OS, those little 840x480 screens at a whopping 24 or 30 FPS are simply not enough for an FPS with any sort of realism, the kind provided by an actual modern GPU, on-die or not.


Posting shit from YouTube and believing it is all real and wonderful is almost a sign of delusion.


How about hard and fast numbers like FPS, AA, AF, display size...you know, the shit that matters to people. Not durp my durper can durp that game like a computer can.....

No one here is impressed by your long post. Talking shit and talking a lot of shit are still the same thing.


What large businesses run them right now? What market share do they have? Adoption rates? TCO vs. anything? Server benchmarks? Editorial pieces and the same spin that was applied to other failures from well-known and respected companies mean little to the people who write the checks, work on them, work with them, and use them.


Itanium anyone?
Posted on Reply
Add your own comment