
Intel Officially Sinks the Itanic, Future of IA-64 Architecture Uncertain

I recall reading about Itanium in the early days; it seems like ever since its inception it has been a myth at the best of times.
 
I don't recall them having any Itanium version even close to efficient enough to consider that.
It could have been, but Intel gave up on it a long time ago. It uses far fewer transistors to execute a task than x86 does, leaving most optimizations to the software and compiler rather than the processor itself.
 
Wow, I had completely forgotten about Itanic, much like the industry! :p
 
Suck Eggs HP.

You bought Compaq and killed Alpha :p

Now your Itanic has sunk.

HP spent billions trying to keep the Itanic afloat with Intel... HP is dumb, with a long history of blowing cash lol
Their execs were all having too many drug parties at Intel, apparently. You'd have to be higher than a weather balloon to invest in Itanium.
 
HP spent billions trying to keep the Itanic afloat with Intel... HP is dumb, with a long history of blowing cash lol
Their execs were all having too many drug parties at Intel, apparently. You'd have to be higher than a weather balloon to invest in Itanium.
Before AMD64 rolled out, IA-64 made a lot of sense as the future of computing. It still does in some regards but people would rather have backwards compatibility in processors than an instruction set for the 21st century.
 
Before AMD64 rolled out, IA-64 made a lot of sense as the future of computing. It still does in some regards but people would rather have backwards compatibility in processors than an instruction set for the 21st century.

Low performance = failure no matter what.
 
Itanium only made sense on paper. By the time it entered the market in 2001, it was already dead in the water. It was crushed by the x86 designs of the day (Pentium III, Pentium 4 and Athlon), and in the datacenter market it was crushed by RISC processors like POWER and SPARC. Itanium had been in development since the late 80s, and the design choices were largely made in a bubble without much realism.
 
Then, in a couple months Itanic revives as IA-128 and has partial compatibility with RISC-V.
 
HP spent billions trying to keep the Itanic afloat with Intel... HP is dumb, with a long history of blowing cash lol
Their execs were all having too many drug parties at Intel, apparently. You'd have to be higher than a weather balloon to invest in Itanium.

HP
Killed PA-RISC
And DEC Alpha
to bring out the Itanic with Intel..

Even Microsoft saw a sinking ship and pulled out years ago.

Can't remember how many billions the Itanic cost.
Or what share of the server and even workstation market it was meant to get.
Even before AMD64 they were behind target; then AMD64 came out and it was pretty much game over.

Actually, the intellectual property of Alpha was bought by Intel.

Either way, Intel and HP created the Itanic ....
 
Low performance = failure no matter what.
Only benchmark I could find:
[benchmark chart]

Xeon: 20.15
Itanium: 8.82
Opteron: 11.52
Itanium: 13.10

That's per core. Itanium 2 is nothing to scoff at.

8-Core Itanium Poulson: 3.1 billion transistors
8-Core Xeon Nehalem-EX: 2.3 billion transistors

Interesting article about Poulson (newest Itanium architecture): https://www.realworldtech.com/poulson/

Itanium had 20% of the TOP500 supercomputers back in 2004. IA-64 gained traction because x86 lacked memory address space. x86-64 reversed that pattern because of backwards compatibility and not having to find Itanium software developers.

12 instructions per clock, 8 cores, and 16 threads at the end of 2012. It was a monster.
 
Then, in a couple months Itanic revives as IA-128 and has partial compatibility with RISC-V.
There is no reason for making a 128-bit ISA, at least not yet anyway. Current x86 architectures have partial support for up to 512-bit through AVX, which for the time being is a much more flexible and smart way of getting good performance without adding massive complexity to the design. I see no reason why the entire core should be extended to 128-bit, at least not for the next decade.
 
64-bit calculations were largely a secondary concern. The move to 64-bit was largely dictated by memory, more specifically the 32-bit address space becoming too small. 32 bits allows for an address space of 4 GB, and especially coupled with memory-mapping for devices like GPUs, which want a large part of that address space, it just ran out faster than expected. Workarounds in the form of things like PAE proved insufficient to address the inherent limitation.

Yes, the actual address space supported today is more like 40-bit (2^40 ~ 1 TB) or 52-bit (2^52 ~ 4.5 PB ~ 4500 TB) for physical and 48-bit (2^48 ~ 280 TB) for virtual, not the full 64 bits, but moving that up is a fairly minor change in terms of architecture, and it'll take a while until we exhaust the 64-bit address space (2^64 = 16 EB ~ 16.7 million TB).

64-bit needs the data path, the integer ALUs (which are also used for address calculations), the registers, and the address and data buses to be 64-bit, which roughly doubled almost everything in a CPU or CPU core compared to 32-bit CPUs. Doubling all of that again to 128-bit does not sound like something CPUs would benefit from, today and in general use. For an example, see what happened to Intel's FP units in terms of size, power and heat when they doubled from 128-bit to 256-bit for AVX2 in Haswell.
 
64-bit calculations were largely a secondary concern. Move to 64-bit was largely dictated by memory - more specifically address space - of 32-bit becoming too small.
You are confusing register width with address width.
64-bit computing has nothing to do with 64-bit address width.

Physical Address Extension (PAE) to address beyond 4 GB was supported since Pentium Pro (1995).
 
Addresses tend to go through integer units for various purposes, address generation for example. Operations on addresses are pretty much integer operations, so in modern x86 processors the integer units practically do double duty for addressing. Register width and address width are not directly related, but eventually they collide. Or did I get this completely wrong?

PAE is a workaround. It is often not enough and has downsides, not least of which is enabling support for it on every level.
 
Addresses tend to go through integer units for various purposes, address generation for example. They are not directly related, but eventually they collide.

PAE is a workaround. It is often not enough and has downsides, not least of which is enabling support for it on every level.
Having a register width lower than the address width requires more operations, but it is not uncommon. Nearly all early computers did this: the Intel 8086 had a 16-bit register width with 20-bit addressing, the 80286 was 16-bit / 24-bit, and the MOS 6502 family was 8-bit with 16-bit addressing, used in the Commodore 64, Atari 2600, Apple II, NES and many more.

PAE was supported on Windows, macOS (x86), Linux and all the major BSDs. 32-bit Windows 8 and 10 actually require running in PAE mode, so it's used much more than you think.

The reason why PAE is unknown to most people, is that they switched to 64-bit OS and hardware long before they hit the 4 GB limit.
 
Having a register width lower than the address width requires more operations, but it is not uncommon. Nearly all early computers did this: the Intel 8086 had a 16-bit register width with 20-bit addressing, the 80286 was 16-bit / 24-bit, and the MOS 6502 family was 8-bit with 16-bit addressing, used in the Commodore 64, Atari 2600, Apple II, NES and many more.
With the drive for efficiency, and compute simultaneously getting wider, differing register and address widths (at least with the address side larger) seem to be uncommon in current architectures, no?

I remember PAE very well. It needed support from motherboard, BIOS, operating system and depending on circumstances, application. That was a lot of fun :)
Are you sure about 32-bit Windows 8 and 10 requiring PAE? They do support it and can benefit from it but I remember trying to turn on PAE manually on Windows 8 (and failing due to stupid hardware).
 
A lot of that came from the fact that the compilers had to do all the hard work. Little do people know, but in your common x86-64 chip there's a lot of optimization of the CPU instructions going on behind the scenes at the silicon level before even one instruction is executed. None of that happened with Itanium; all of it had to be done at the compiler level, which compilers generally weren't able to do.
Actually not. Itanium features explicitly parallel instructions, and compilers are limited to working with just a few instructions within a scope; there is no way a compiler could properly structure the code and memory to leverage this. It's similar to SIMD (like AVX): the compiler can vectorize small patterns of a few instructions but can never restructure larger code, so if you want proper utilization of SIMD you need to use intrinsics, which map almost directly to assembly. No compiler will ever be able to do this automatically.
Actually, it did. It's not easy to write a compiler that takes effective advantage of 128 general-purpose registers for every workload imaginable. x86 only had 8, and x86-64 bumped that to 16. Theoretically it could be really fast, but it can only be as fast as the compiler and how it determines what data goes where. The nice thing about having a bunch of general-purpose registers is that you don't need to load and store data to and from memory as often, and accessing registers is faster than accessing cache. However, there are consequences when data gets evicted from registers incorrectly, which becomes more likely as you have to manage a larger number of registers. The reality is that you can only look so far ahead, so I suspect that a lot of the time those registers were loaded and stored far more often than they should have been, mainly because you have to figure out ahead of time whether some data will be used soon or much later, and the cost of getting that wrong is significant.

Just saying. IA-64 is good on paper, but performance completely relies on the implementation of both the software and the compiler, and the compiler is responsible for a lot more than the one for x86-64 is.
 
With the drive for efficiency, and compute simultaneously getting wider, differing register and address widths (at least with the address side larger) seem to be uncommon in current architectures, no?
I didn't quite get that one.

Having to do multiple operations to access memory is of course a disadvantage, but not a huge one. I remember most recompiled software gaining around 5-10%, due to easier memory access and faster integer math combined.

I remember PAE very well. It needed support from motherboard, BIOS, operating system and depending on circumstances, application. That was a lot of fun :)

Are you sure about 32-bit Windows 8 and 10 requiring PAE? They do support it and can benefit from it but I remember trying to turn on PAE manually on Windows 8 (and failing due to stupid hardware).
I haven't run any 32-bit Windows since XP, but from what I've read the NX bit requires it, and NX is enabled on all modern operating systems for security reasons.

Nevertheless, I was one of the early adopters of 64-bit OS's, not because of memory, but because I wanted that extra 5-10% performance. Linux had an extra advantage here, since its entire software libraries were made available in 64-bit almost immediately. And it was a larger uplift than many might think. Most 32-bit software (even on Windows) was compiled for the i386 ISA; yes, that means 80386-compatible features only. Some heavier applications were of course compiled with later ISA versions, but most software was not. Linux software also usually assumed SSE2 support along with "AMD64", so the difference could be quite substantial in edge cases.

Actually, it did. It's not easy to write a compiler to take advantage of 128 general-purpose registers in an effective way for every workload imaginable.
The problem from the compiler's side is that the code, regardless of language, needs to be structured in a way that lets the compiler saturate these resources.

If you write even C/C++ code without such considerations, not even the best compiler imaginable can restructure the overall code to be efficient on Itanium.
This is basically the same problem we have with writing for AVX, and the reason all efficient AVX code uses intrinsics, which are "almost" assembly.
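
To illustrate what "intrinsics are almost assembly" means in practice, here is a hedged sketch using SSE2 intrinsics (SSE2 is baseline on x86-64; the function name is invented, and this assumes an x86 compiler with <emmintrin.h> available). Each intrinsic maps nearly one-to-one to a machine instruction, which is exactly the control a compiler's auto-vectorizer often fails to give you:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Sum an array of floats 4 lanes at a time.
   Sketch only: assumes n is a multiple of 4. */
static float sum4(const float *v, int n) {
    __m128 acc = _mm_setzero_ps();          /* acc = {0, 0, 0, 0} */
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(v + i)); /* one ADDPS per 4 floats */

    /* Horizontal reduction of the 4 lanes. */
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```

A modern compiler may well auto-vectorize a loop this simple on its own; the point of the thread is that anything with less regular structure usually has to be written this way by hand.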

x86 only had 8 and x86-64 bumped that to 16. Theoretically it could be really fast, but it can only be as fast as the compiler and how it determines what data goes where. The nice thing with having a bunch of general-purpose registers is because you don't need to load and store data to and from memory as often and accessing registers is faster than accessing cache<snip>
In theory, having many registers is beneficial. At the machine-code level, x86 code does a lot of moving around between registers (which is usually completely wasted cycles; ARM does a lot more…). So having more registers (even if only at the ISA level) can eliminate operations and therefore be beneficial; I have no issues so far.

But keep in mind that the ISA and the microarchitecture are two different things. x86 at the ISA level is not superscalar, while every modern microarchitecture is. What registers a CPU has available on an execution port is entirely dependent on its architecture, and it varies. And this is the sort of thing a CPU actually is able to optimize within the instruction window.

Having too many general-purpose registers at the microarchitecture level gets challenging, because it complicates the pipeline and is likely to introduce latency.

So to sum up, I'm all for having more registers at the ISA level, but at the microarchitecture level it should be entirely up to the designer. Current x86 designs have 4 (Skylake) / 4+2? (Zen) execution ports for ALUs etc., plus vector units. As this increases in the future, I would expect improvements at the ISA level to help simplify the work for the front-end of the CPU.
 
The thing is, there were some good ideas powering Itanium. But it had been doomed for years.

For example, do you know what happens when x86/x86-64 tries to speed up code execution? It tries to predict whether a code path will be taken and executes it ahead of time using idle resources. The problem is that if the prediction turns out to be wrong, the pipeline has to be flushed and new instructions brought in. You know what Itanium does/did? It doesn't try to predict anything; it executes both branches of a conditional statement and picks whichever is needed when the time comes.
Intel wasn't nuts to come up with Itanium. It's just that everybody chose x86-64 instead.
 
For example, do you know what happens when x86/x86-64 tries to speed up code execution? It tries to predict whether a code path will be taken and executes it ahead of time using idle resources. The problem is that if the prediction turns out to be wrong, the pipeline has to be flushed and new instructions brought in. You know what Itanium does/did? It doesn't try to predict anything; it executes both branches of a conditional statement and picks whichever is needed when the time comes.
It doesn't really matter whether you try to execute both branches of a conditional or you do speculative execution. Either way you're pretty much screwed once three or more conditionals come within a few instructions, as the problem grows exponentially.

One of the fundamental problems for CPUs is that the CPU has less context than the author of the code does. You might, for example, write a function where the value of a variable is determined by one or more conditionals while the remaining control flow is unchanged; by the time the code is turned into machine code, this information is usually lost. All the CPU sees is calculations, conditionals, access patterns etc. within just a tiny instruction window. There are a few cases where a compiler can optimize certain code into operations like a conditional move, which eliminates what I call "false branching", but compiler optimizations like this usually require the code to be in a very specific form to be detected correctly, unless the coder uses intrinsics. This is an area where x86 could improve a lot, with of course some changes in compilers and coding practices to go along with it.
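
A sketch of the "false branching" point: the same max() can be written with a branch or branchlessly, and for the second form compilers typically emit a conditional move (or no branch at all) instead of a mispredictable jump. Illustrative C, with invented function names; the bit trick assumes two's-complement integers:

```c
#include <stdint.h>

/* Branchy version: the compiler may emit a conditional jump,
   which the branch predictor can get wrong. */
static int32_t max_branchy(int32_t a, int32_t b) {
    if (a > b) return a;
    return b;
}

/* Branchless version: -(a > b) is all-ones when a > b and zero
   otherwise (two's complement), so the mask selects a or b with
   pure data flow and no control flow. */
static int32_t max_branchless(int32_t a, int32_t b) {
    return b ^ ((a ^ b) & -(int32_t)(a > b));
}
```

Whether the branchless form actually wins depends on how predictable the branch was in the first place; on well-predicted branches, the plain `if` is often just as fast.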

Ultimately code execution comes down to cache misses and branching, and dealing with these in a sensible manner will determine the speed of the code, regardless of programming language. There is never going to be a wonderful new CPU ISA that solves this automatically. Unfortunately, most code today consists of more conditionals, function calls and random access patterns than code which actually does something, and code like this will never be fast.
 
It doesn't really matter whether you try to execute both branches of a conditional or you do speculative execution. Either way you're pretty much screwed once three or more conditionals come within a few instructions, as the problem grows exponentially.
It should work OK for the small instruction window that fits into the pipeline at any given moment. But I'm just speculating (see what I did there?).

One of the fundamental problems for CPUs is that the CPU has less context than the author of the code does.
Yeah, there's never going to be a universal fix for this. Just more or less efficient solutions spread between the compiler and the CPU.
 
Maybe the future will eventually get there, but I think it was a product simply too far ahead of its time.

Bit like tessellation in the R9800 Pro. :)
 
Maybe the future will eventually get there, but I think it was a product simply too far ahead of its time.

Bit like tessellation in the R9800 Pro. :)
There was definitely something wrong with the execution; it wasn't just the product. Look at ARM and how they had no trouble jumpstarting a new architecture from scratch. Ironically, one that has grown to 64 bits, too.
It's a done deal though; now it only matters to historians and future business decisions how.
 