
Will there ever be a need for a 128-bit CPU in your computer?

Do you think we will we ever see a 128-bit general purpose CPU?


  • Total voters: 163
For example the Pentium processor with MMX had a 64-bit FPU. By your definition it would be a 64-bit processor, but because it had a 32-bit memory address space, everyone else considered it a 32-bit processor.
Absolutely not. I used the acronyms ALU and FPU explicitly for a reason.

If you look at any cell, there will not be binary data. If some alien acquired a flash memory chip, the alien would have no idea that it represented binary information.
Yes, they would. All they have to realize is that the states represent data. They'd figure out how to read and write it shortly thereafter.


I think a super computer could be built in the next 10-20 years that has a bus fast enough to treat many CPUs as cores and have one massive pool of memory or something like old Opterons had where processors can share memory. Should it happen, it would become the first processor with a 128-bit architecture (IA-128 anyone?).
 
There will most definitely be a 128-bit processor unless someone changes it.

Newton's First Law of Motion said:
Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it.
 
Well, I don't know what the future holds, or whether the current type of computing will continue onward, but assuming it does, I would think there will be 256-, 512-, 1024-, even 4096-bit architectures, since the limits of what could be run are endless. Say we believe the theory that we live in a simulation; imagine the computer needed to run that?!?!?
 
I'm sure eventually we'll need something similar along the lines of 128 bit to get more memory.
640k is enough!
 
Why can't this exist now? You've already got high-speed interlinks built into the high-end Intel and AMD CPUs. Extrapolating a tiny bit, the interlinks could be designed in such a way as to link two physical processors into one effective unit. Two 64-bit buses could effectively act as one 128-bit bus, though this would take some substantial rework.

As a mild refresher, Moore's Law relates to transistor count. It doesn't relate to storage density or the bus width of computers. There is no comparable law governing incremental increases in bus width.

Now, do I think we'll see it in computers? Absolutely, and not in the vague future; I see it in the next 10 years. Let's think about the history. We'll limit the computational history to 8, 16, 32, and 64-bit processors. The start of 8-bit is somewhere in the '70s (I can't give more specifics, because having 8-bit processors and seeing them as useful are two very different things). The mid '80s was where the 16-bit processors started. Next, the 32-bit processors came into their own around the mid '90s. 64-bit processors are finally being adopted (between regular availability in gaming and in programming), but it's the early 2010s. So we've got 15, 10, and 20 years for each new bit width to be generally adopted and substantially utilized. Viewing this from the historic track record, we're looking at 10-30 years from now as when 128-bit buses are realistically going to be adopted.

The real question is: "Will these buses mean anything to calculation speed by the time we need them?" Having such a substantial bus in the binary domain means more, and more complex, data can be dealt with. Quantum computing could effectively compress multiple calculations into a few operators, making the speed of data processing several times greater than the ability of small buses to deliver data. Effectively, the processing would require a wider bus just to keep it fed with data. At that point the bus is no longer relevant to computation; all that matters is whether it can deliver enough data to keep the processor busy.


I think this is where we're having problems understanding each other. Decoupling the bus from the processor is difficult, because there's no way this model works in our current computational models. Quantum computing is crazy in that it breaks all of our current understanding, but it will require some legacy pieces of technology to work. Data storage, as we currently know it, is only binary. Collections of binary data are structured such that they aren't physically separable, but for the sake of fidelity we only see two states. That kind of limited understanding is what will eventually bar quantum computing from progressing, and is why we cannot decouple the two without a fundamental shift in our understanding.


Compressing all of this into "Other" seems like a waste. I think we'll see it, but it won't matter in the same way it does today.
 
Absolutely not. I used the acronyms ALU and FPU explicitly for a reason.

You're still stretching the definition of 128-bit, since the most commonly used definition would indicate that all parts of the processor must be 128 bits wide, including the memory addresses, which I hope you agree is not a limitation now, nor will it be in the near future.

I think a super computer could be built in the next 10-20 years that has a bus fast enough to treat many CPUs as cores and have one massive pool of memory or something like old Opterons had where processors can share memory. Should it happen, it would become the first processor with a 128-bit architecture (IA-128 anyone?).

I doubt this will happen. The node interconnect will still be a major limitation to supercomputer efficiency and the types of code that the supercomputer can run. Communication technology would have to improve even more rapidly than computational technology, which seems like a stretch to imagine given the exact opposite has occurred in the past.

The real question is: "Will these buses mean anything to calculation speed by the time we need them?" Having such a substantial bus in the binary domain means more, and more complex, data can be dealt with. Quantum computing could effectively compress multiple calculations into a few operators, making the speed of data processing several times greater than the ability of small buses to deliver data. Effectively, the processing would require a wider bus just to keep it fed with data. At that point the bus is no longer relevant to computation; all that matters is whether it can deliver enough data to keep the processor busy.

I think this is where we're having problems understanding each other. Decoupling the bus from the processor is difficult, because there's no way this model works in our current computational models. Quantum computing is crazy in that it breaks all of our current understanding, but it will require some legacy pieces of technology to work. Data storage, as we currently know it, is only binary. Collections of binary data are structured such that they aren't physically separable, but for the sake of fidelity we only see two states. That kind of limited understanding is what will eventually bar quantum computing from progressing, and is why we cannot decouple the two without a fundamental shift in our understanding.

Compressing all of this into "Other" seems like a waste. I think we'll see it, but it won't matter in the same way it does today.

Thank you for this reply; I completely agree with you. One of the major shortfalls of most of the technology predictions in these forums is the assumption that the world will continue to incrementally advance existing technological paradigms for the foreseeable future. I voted "no" for that reason, but "other" is just as good a response.
 
Definitely yes. In my relatively short life, CPUs went from the 80286 being 16-bit, I think, to 32-bit being indispensable until just a couple of years ago.

That was when I started getting interested in computing; CPUs were 8-bit before that, and even narrower before then...

Progress is happening faster all the time.

I envisage that within 5 years or less we'll have 128-bit. And considering the improvement rates we're getting, within yet another couple of years we'd have 512 and then 1024.
 
Ah, Qubit's daily poll ... My answer is yes, and not only for memory allocation but also for increased precision. Today double precision stands for 64 bit.
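Just to illustrate the precision point, here's a tiny C example of my own (not from anyone's post): a 64-bit double carries roughly 16 significant decimal digits, so adding 1e-20 to 1.0 is simply lost, whereas a 128-bit quad-precision float (about 34 digits) would keep it.

#include <stdio.h>

int main(void)
{
    double x = 1.0 + 1e-20;   /* 1e-20 is far below double's ~2.2e-16 epsilon */
    printf("%.20g\n", x);     /* prints 1: the small term vanishes in 64-bit precision */
    return 0;
}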
 
I still think that qubit should rephrase this poll, since "128-bit" can mean a lot of things. Does he mean 128-bit memory addressing (which is what I assume he means) or any part of the processor being 128-bit? 128-bit memory buses and FPUs have been around for years, so by the latter definition the question is meaningless.

I envisage that within 5 years or less we'll have 128-bit.

I doubt anyone will need to address 16 exabytes of memory within 5 years. The highest end Intel Xeon E7 can address 4TB of memory at the moment. If we're optimistic and scale this by Moore's law, then the highest end computer will reach 16EB in 44 years.
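For anyone who wants to check that 44-year figure, here's a quick back-of-the-envelope sketch in C. It just assumes the 4 TB (2^42 bytes) Xeon E7 limit quoted above, the 16 EB (2^64 bytes) target, and one capacity doubling every two years; those assumptions are mine, for the sake of the arithmetic.

#include <stdio.h>

/* How many Moore's-law doublings to get from 4 TB to 16 EB of addressable memory? */
int main(void)
{
    int start_exp  = 42;            /* 4 TB  = 2^42 bytes (Xeon E7 limit quoted above)  */
    int target_exp = 64;            /* 16 EB = 2^64 bytes (full 64-bit address space)   */
    int years_per_doubling = 2;     /* assumed Moore's-law cadence                      */

    int doublings = target_exp - start_exp;   /* 22 doublings */
    printf("%d doublings x %d years = %d years\n",
           doublings, years_per_doubling, doublings * years_per_doubling);
    return 0;
}

That prints "22 doublings x 2 years = 44 years", which matches the estimate above.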

Where's Anand Chandrasekher when you need him?
 
I would make the argument that if Windows was designed properly, we wouldn't really have a need for 64-bit processors right now, or we'd just be starting to need them in the mainstream desktop world in the last year or so, forget about 128-bit.
 
You're still stretching the definition of 128-bit, since the most commonly used definition would indicate that all parts of the processor must be 128 bits wide, including the memory addresses, which I hope you agree is not a limitation now, nor will it be in the near future.
Memory addressing space is only one component of 128-bit computing. Even so, most AMD64 processors today can physically only access 48 bits' worth of memory. A 128-bit processor could be released today with 64-bit memory addressing, and that wouldn't make it any less of a 128-bit processor as long as the instruction set supports up to 128-bit memory addressing.

As I said previously, "if" is not the question; the question is "when." I think "when" will be determined by needs in the supercomputer space. AMD64 happened because x86's memory limitations were becoming problematic for large databases. 128-bit will happen when some other urgent need isn't being fulfilled by AMD64. I believe that, in the pursuit of higher efficiency (because of ARM), the next major architectural change will come soon and it will add more registers. Making the processor 128-bit would likely be a component of achieving that end.
 
I still think that qubit should rephrase this poll, since "128-bit" can mean a lot of things. Does he mean 128-bit memory addressing (which is what I assume he means) or any part of the processor being 128-bit? 128-bit memory buses and FPUs have been around for years, so by the latter definition the question is meaningless.
No, the question is perfectly worded and it's clarified further by my OP. Note that the word size of a CPU is ostensibly defined by the size of its ALU anyway, not the address bus.

Great examples are the ancient 6502 and Z80 CPUs from the 1970s. The ALU on these was 8 bits wide, so they worked on 8-bit values at a time and were therefore 8-bit CPUs. Now, the address bus was actually twice as wide at 16 bits, yet these were still 8-bit CPUs.

The floating point component of today's CPUs is really a separate CPU bolted onto the same die (think of the 386, which could have a 387 coprocessor attached in a separate socket; it was integrated from the 486 onwards), so even if it works on 128-bit values, the CPU is still considered 64-bit, as that's the size of the ALU.

Also, the forum doesn't allow the poll to be edited even if I wanted to.
 
32-bit processors can have 64-bit ALUs. See Atom. The wider ALU allows the processor to handle int64 and uint64 operations much faster. Try to install an AMD64 operating system on it though and it will fail miserably because it doesn't implement the full AMD64 instruction set (extra registers, wider memory addresses, etc.).


FYI, the N64 had a 64-bit CPU with a 32-bit bus.
 
When you look at extended instruction sets like SSE or AVX, we are already at 128-bit (SSE) or 256-bit (AVX and AVX2), even 512-bit (AVX-512) wide instructions ... I guess we consider a CPU really 128-bit if it does arithmetic on 128-bit vector data in a single clock :laugh:
 
I think there will be in the future, but maybe not in our life time (I'm 23 and still don't think my life time provided I live a long and prosperous life! :P)
 
When you look at extended instruction sets like SSE or AVX, we are already at 128-bit (SSE) or 256-bit (AVX and AVX2), even 512-bit (AVX-512) wide instructions ... I guess we consider a CPU really 128-bit if it does arithmetic on 128-bit vector data in a single clock :laugh:
While these instructions are super-wide, isn't the result always 64-bit? Effectively this would keep the CPU as 64-bit. I'm not challenging here; I just don't know much about these instructions.
 
The data size of the integral arguments to the ALU defines the bitness of a processor architecture.

So on an 8-bit processor, the biggest ADD operation you had took 8-bit operands; 32-bit on 32-bit machines and 64-bit on 64-bit machines. The CPU guarantees that these basic operations are completed atomically.

For the sake of the discussion in this thread, today's SSE and AVX instruction set extensions already make these 128- and 256-bit processors.
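To make that concrete, here's a minimal SSE2 sketch in C (intrinsics from emmintrin.h, x86 only; the values are just made up for the example). One 128-bit XMM register carries two independent 64-bit lanes: the register is 128 bits wide, but each arithmetic result is still a 64-bit integer.

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    __m128i a   = _mm_set_epi64x(40, 2);    /* one 128-bit register, lanes [40, 2] */
    __m128i b   = _mm_set_epi64x(2, 40);    /* lanes [2, 40] */
    __m128i sum = _mm_add_epi64(a, b);      /* a single instruction does two 64-bit adds */

    int64_t out[2];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("lane 0 = %lld, lane 1 = %lld\n", (long long)out[0], (long long)out[1]);
    return 0;   /* prints 42 for both lanes */
}

So the vector unit is 128 bits wide, but the word size of each result stays 64-bit.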

There is no reason we will ever need general-purpose 128-bit arithmetic operations, because the numbers in real life, and thus in a typical computer, are relatively small (64-bit is plenty).

64-bit architectures take a significant performance hit vs. 32-bit, per instruction, because instructions and data take up more space, require more memory bandwidth, and produce larger executables. On the other hand, they can process numbers twice as big, but to do 1+1 = 2, or to perform 99.99% of the math in your life, you'll be fine with 64-bit numbers. Of course the exponential growth of silicon performance will make the performance hit less relevant over time, just like you don't care about exe size on your HDD anymore today.

Today's 64-bit Intel CPUs can only address 48 bits of memory, btw; it's a cost saving because nobody will have that much memory in one machine for the foreseeable future. More $$ for Intel.
 
Today's 64-bit Intel CPUs can only address 48 bits of memory, btw; it's a cost saving because nobody will have that much memory in one machine for the foreseeable future. More $$ for Intel.

In a sense it's both 48-bit and 64-bit at the same time. The actual addresses are 64-bit, but the AMD64 specification only allows the first 48 bits to be used; bits 48-63 are just a copy of bit 47. This is done because it makes memory address math much faster (you only need to compute 48 bits instead of 64). So it's not really a cost-saving measure, it's a performance-enhancing measure that makes sense considering it will not restrict the memory usage of current systems.
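If it helps, here's a tiny C sketch of that "bits 48-63 are a copy of bit 47" rule (canonical-form addresses). It's only an illustration of the rule described above, not how any real MMU implements it, and the example addresses are made up.

#include <stdint.h>
#include <stdio.h>

/* Sign-extend bit 47 into bits 48-63, as the canonical-form rule requires. */
static uint64_t make_canonical(uint64_t addr)
{
    return (uint64_t)((int64_t)(addr << 16) >> 16);
}

/* An address is canonical if bits 48-63 already match bit 47. */
static int is_canonical(uint64_t addr)
{
    return addr == make_canonical(addr);
}

int main(void)
{
    uint64_t user   = 0x00007fffdeadbeefULL;  /* bit 47 = 0, top 16 bits all 0 -> canonical   */
    uint64_t kernel = 0xffff800000001000ULL;  /* bit 47 = 1, top 16 bits all 1 -> canonical   */
    uint64_t bogus  = 0x0000900000000000ULL;  /* bit 47 = 1 but top 16 bits are 0 -> rejected */

    printf("%d %d %d\n", is_canonical(user), is_canonical(kernel), is_canonical(bogus));
    return 0;   /* prints 1 1 0 */
}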
 
it's a performance-enhancing measure that makes sense considering the memory usage of current systems.
If the memory controller was implemented as 64-bit instead of 48-bit, there would be no performance difference, but using 48-bit math instead of 64-bit (like you describe) saves you transistors.
 
Using 48-bit math instead of 64-bit (like you correctly describe) saves you transistors.

I guess it's just a matter of perspective. You see it as "they can remove excess transistors and make a bigger profit" whereas I see it as "they can reallocate those transistors toward improving performance in other ways".
 
Thanks W1zz, that cleared it up nicely about those extended instructions. :toast:

I knew about the 48-bit address bus though. This simply means that the top 16 bits in the address register are just copies of bit 47, or "dummies" like von described, to pad out to the 64-bit architecture.
 
Yes, agreed, same thing. But your earlier post, "memory translation much faster", is not correct.

qubit: google wiki canonical form addresses for slightly more reading
 
regarding the "atomic" in my post above:

Obviously a 32-bit application in any higher-level programming language can do 64-bit math on variables, but the operation is performed in multiple steps, so it's possible that another thread sees an intermediate result (in the memory cells) which doesn't properly reflect the outcome of the operation. With 64-bit operations that's guaranteed not to happen = atomic. So atomic means something like "in one go". 32-bit multithreaded apps need to take special care to synchronize accesses to larger-than-32-bit data types.
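Here's a small C11/pthreads sketch of that torn-read hazard, for anyone curious. The unsynchronized 64-bit read below is a deliberate data race, included only to illustrate the problem: on a 32-bit build the plain 64-bit store can be split into two 32-bit stores, so the reader may see one half old and one half new, while the _Atomic version can never be observed half-written. Variable names and iteration counts are made up for the example.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <pthread.h>

/* Both counters always hold a value whose two 32-bit halves are equal,
   so a mismatch between the halves means we caught a torn read. */
static volatile uint64_t plain_counter  = 0;   /* may tear on 32-bit targets      */
static _Atomic  uint64_t atomic_counter = 0;   /* guaranteed to update atomically */

static void *writer(void *arg)
{
    (void)arg;
    for (uint64_t i = 0; i < 50000000ULL; i++) {
        uint64_t v = (i << 32) | (i & 0xffffffffULL);
        plain_counter = v;                   /* two separate stores on a 32-bit build */
        atomic_store(&atomic_counter, v);    /* always one indivisible update         */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);

    long torn = 0;
    for (long i = 0; i < 50000000L; i++) {
        uint64_t p = plain_counter;                  /* the racy read */
        uint64_t a = atomic_load(&atomic_counter);   /* the safe read */
        if ((p >> 32) != (p & 0xffffffffULL)) torn++;
        if ((a >> 32) != (a & 0xffffffffULL)) puts("atomic counter tore?!");  /* never prints */
    }
    pthread_join(t, NULL);
    printf("torn plain reads observed: %ld (expect > 0 on a 32-bit build)\n", torn);
    return 0;
}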
 
qubit: google wiki canonical form addresses for slightly more reading

I learned something today. :) I'd not heard the term canonical form addresses before. I see that it allows a seamless expansion of the address space to the full 64-bits as hardware evolves over time. Very clever.

I've not looked at CPU architecture at this level of detail for some time and it's quite refreshing to do so again.

Here are the details for anyone else who wants to learn about this: http://en.wikipedia.org/wiki/X86-64#Canonical_form_addresses

EDIT:

regarding the "atomic" in my post above:

Obviously a 32-bit application in any higher-level programming language can do 64-bit math on variables, but the operation is performed in multiple steps, so it's possible that another thread sees an intermediate result (in the memory cells) which doesn't properly reflect the outcome of the operation. With 64-bit operations that's guaranteed not to happen = atomic. So atomic means something like "in one go". 32-bit multithreaded apps need to take special care to synchronize accesses to larger-than-32-bit data types.

This sounds like a kind of synchronization problem one must be careful to avoid with the use of flags. I'm sure many obscure program bugs are caused by missing stuff like this.
 