That's a great article. What I don't really understand though is why you couldn't run virtual x86 machines in those threads. I understand that for server farms that would mean a loss of efficiency and so for that purpose it's not an option. But for other applications, why wouldn't that work?
Anyway, the POWER8 chips will likely start life as hosts for Watson servers. IBM is investing $1B in porting Watson to the cloud and making it accessible to developers. Since Watson is an IBM product, I assume it was developed on POWER chips.
This is perfect vertical integration for IBM assuming that they can maintain the lead that Watson has over other data analytics competitors. They create the software and the hardware it runs on. I mean damn. If they pull this off, it will be just like the good old days again for them.
That's because most modern VMs run on hardware virtualization extensions that pass CPU instructions straight through to the host, which is what lets VMs run nearly as fast as the host machine. If you have an x86 server, you're going to be running x86 VMs, because virtualization extensions only let the VM execute directly on the hardware when the instruction set is the same. That does nothing for other architectures: you can't run another architecture's instructions on hardware that doesn't have them; the instructions are different and handled differently, so emulation is required. Even when Apple moved from PowerPC to Intel, Rosetta (which let PPC binaries run on Intel; PPC used AltiVec, belonged to IBM, and was an early POWER relative) did it all in software emulation.
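On Linux you can see whether a host CPU advertises those virtualization extensions by looking for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo. Here's a minimal sketch of that check; the sample cpuinfo text is made up so the example is self-contained, but on a real host you'd read the actual file.

```python
# Detect hardware virtualization support by checking CPU feature flags.
# "vmx" = Intel VT-x, "svm" = AMD-V. SAMPLE_CPUINFO is a fabricated
# stand-in for the real /proc/cpuinfo contents.

SAMPLE_CPUINFO = """\
processor : 0
vendor_id : GenuineIntel
flags     : fpu vme de pse tsc msr pae vmx sse sse2
"""

def has_virt_extensions(cpuinfo: str) -> bool:
    for line in cpuinfo.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

print(has_virt_extensions(SAMPLE_CPUINFO))  # True for this sample
```

If neither flag is present, a hypervisor on that box falls back to software techniques, and a different guest ISA always does.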
The simple fact is that if you want to use CPUs like that, you need to run software compiled for that instruction set. Emulating ISAs is dead slow, which is why you don't do it in server farms. If you need x86, you use x86. If you need POWER, you use POWER. If you need SPARC, you use SPARC. It's really as simple as that. The only time you should really emulate an ISA is when you're a developer who doesn't have access to the real hardware it will run on and you need to work locally. The performance loss from emulation will always be too much for real-world production workloads; the translation is too costly compared to running natively.
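To see where that cost comes from, here's a toy sketch: when you emulate a foreign ISA, every guest instruction has to be decoded and dispatched by a software loop, costing many host instructions per guest instruction, instead of executing as one native instruction. The two-register ISA below is invented purely for illustration and isn't any real emulator.

```python
# Toy interpreter for a made-up two-register guest ISA. The decode/dispatch
# loop is the overhead that makes ISA emulation slow: each guest op costs
# dozens of host instructions instead of one.

def run_emulated(program):
    regs = [0, 0]
    for op, *args in program:          # fetch + decode + dispatch, in software
        if op == "LOAD":
            regs[args[0]] = args[1]    # LOAD rX, imm
        elif op == "ADD":
            regs[args[0]] += regs[args[1]]  # ADD rX, rY
        else:
            raise ValueError(f"unknown op {op!r}")
    return regs[0]

# Guest program: r0 = 2; r1 = 3; r0 = r0 + r1
program = [("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 0, 1)]
print(run_emulated(program))  # 5
```

Real emulators like QEMU's TCG do dynamic binary translation rather than pure interpretation, which helps, but the gap versus native execution is still why nobody runs production server workloads this way.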
Also consider the cost of IBM servers plus licensing costs. X86 more often than not is the better option unless you have some very specific uses for a server farm outside of traditional software.
With this:
AMD Could Potentially Get 19B investment
http://www.guru3d.com/news-story/amd-could-potentially-get-9b-investment.html
and
AMD 16nm Opteron and FX Processors with Upto 20 Cores are a Possibility in 2016-2017
http://wccftech.com/amd-16nm-opteron-fx-processors-possibly-upto-20-cores-2016-2017/
and
AMD Working On Something “Crazy” For GDC – Plus New Demo To Make Starswarm Look Primitive
http://wccftech.com/amd-working-crazy-gdc/#ixzz3Rj2LvnMI
I think the future for AMD is all right.
You can't sell news articles, and I've been hearing about a lot of good things for a long time that never truly lived up to expectations. In other words, I'll believe it when I see it; until then, you can't sell an unfinished product.