
Will there ever be a need for a 128-bit CPU in your computer?

Do you think we will ever see a 128-bit general purpose CPU?


  • Total voters: 163
In my opinion...

If we ever need more than what "64-bit" has to offer, I will probably not be alive to see it... just look at how many years we were stuck at "32-bit".

And "32-bit" is still usable with PAE, but some specific applications perform better on "64-bit" because of the "32-bit" limitations.

Well, if we ever need "128-bit" it is not going to be because of memory limitations... current hardware is not even near the "64-bit" limit.

My conclusion: I voted "No". "64-bit" will stay for a very, very long time.


Well, who knows, maybe quantum computing will arrive in our lifetime and nobody will care about "bits" anymore.
 
A 64-bit address space is more than enough, and silicon lithography limitations wouldn't allow a 128-bit address space even when stacked... as for instruction operand width, well, instruction sets get extended and new instructions work on combined registers... didn't we get support for 128-bit floating point numbers in x87 back in the olden days that way?
 
Here are my thoughts, and please remember, they are just thoughts, not facts:

In a personal computer, one that a user would use at home or work - no
In workstations such as those for MRI, CAD, Maya, etc. - maybe*
In servers and cloud computing systems - yes, I do think so

There would be no use for a 128-bit CPU for home use and most office use, not only in memory space addressing, but even for the general registers/computing in the CPU. Any application that would require such amounts of memory or processing power would be offloaded to a server or cloud-based operation. There are some particular usage scenarios where that kind of power could be tapped, but I could see a lot of that workload being offloaded to a GPU for local data crunching, or to a server farm.
didn't we get support for 128-bit floating point numbers in x87 back in the olden days that way?
Correct, but those are specialized instruction sets. AVX is 256-bit operations.
 
AVX is 256-bit operations.
Also, those are operations on vectors where each component is a double. No gain in precision, only speedup from SIMD parallelism.
My point is that a true 128-bit machine would need to have a 128-bit memory address space (not going to happen) and an ALU/FPU that supports 128-bit base scalar types in a single clock (it would make CPUs less efficient for narrower operands, so not going to happen). Specialized instruction sets that serve as extensions to x86/x64 work pretty well.
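
For illustration, here's roughly what that SIMD point looks like in C with AVX intrinsics (a minimal sketch, assuming GCC or Clang and a CPU with AVX; build with -mavx). Each lane is still an ordinary 64-bit double:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* Four 64-bit doubles packed into one 256-bit YMM register. */
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);
    __m256d b = _mm256_set_pd(40.0, 30.0, 20.0, 10.0);

    /* One instruction, four independent double-precision adds.
       Each lane is plain IEEE 754 double - no extra precision, just parallelism. */
    __m256d c = _mm256_add_pd(a, b);

    double out[4];
    _mm256_storeu_pd(out, c);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]); /* 11.0 22.0 33.0 44.0 */
    return 0;
}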
 
My point is that a true 128-bit machine would need to have a 128-bit memory address space (not going to happen)
Not 100% sure what you mean by that, but here goes. The address bus can be logically 128 bits wide, but in reality fewer address pins are physically exposed on the chip, since such a gargantuan amount of memory isn't used for various reasons. For example, today's CPUs have a logical address space of 64 bits, but physically it's only something like 48 bits wide and they work fine.

Also, it wouldn't be hard to organize memory chips into a 128-bit wide word size configuration. This kind of thing is done all the time, e.g. 8-bit wide memory chips ganged together for a 32-bit wide word, and so on. Another good example is a graphics card with a wide data bus, such as 384 or 512 bits. The memory chips certainly aren't that wide, but are ganged together to provide that word width.
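
On the 48-bit point above, here's a small sketch of how "canonical" 64-bit addresses work on current x86-64 parts (an illustration in C under stated assumptions: a 48-bit implementation, and an arithmetic right shift for signed values as GCC and Clang provide):

#include <stdint.h>
#include <stdio.h>

/* With a 48-bit implementation, bits 63..47 of a valid virtual address
   must all equal bit 47, i.e. the address is sign-extended from bit 47. */
static int is_canonical_48(uint64_t va)
{
    int64_t s = (int64_t)va;          /* assumes arithmetic right shift */
    return (s >> 47) == 0 || (s >> 47) == -1;
}

int main(void)
{
    printf("%d\n", is_canonical_48(0x00007fffffffffffULL)); /* 1: top of the lower half */
    printf("%d\n", is_canonical_48(0xffff800000000000ULL)); /* 1: bottom of the upper half */
    printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0: non-canonical hole */
    return 0;
}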
 
MS Corporate mission statement for 2036:

"Windows 128"

128-bit extensions and a graphical shell for a 64-bit patch to a 32-bit operating system originally coded for a 16-bit microprocessor, written by an 8-bit company that can't stand 1 bit of competition!

:D :) :D
 
Also, those are operations on vectors where each component is a double. No gain in precision, only speedup from SIMD parallelism.
My point is that a true 128-bit machine would need to have a 128-bit memory address space (not going to happen) and an ALU/FPU that supports 128-bit base scalar types in a single clock (it would make CPUs less efficient for narrower operands, so not going to happen). Specialized instruction sets that serve as extensions to x86/x64 work pretty well.
AVX2 expands support to integers IIRC.
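
A quick sketch of that in C with AVX2 integer intrinsics (assuming GCC/Clang and a CPU with AVX2; build with -mavx2). The lanes are still independent 64-bit integers, so carries don't propagate across them:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* AVX2: four 64-bit integers per 256-bit register. */
    __m256i a = _mm256_set_epi64x(4, 3, 2, 1);
    __m256i b = _mm256_set_epi64x(40, 30, 20, 10);

    /* Four independent 64-bit adds - this is not one 256-bit wide integer add,
       because carries never cross lane boundaries. */
    __m256i c = _mm256_add_epi64(a, b);

    long long out[4];
    _mm256_storeu_si256((__m256i *)out, c);
    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}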
Another good example is a graphics card with a wide data bus, such as 384 or 512 bits.
That's not an apples-to-apples comparison, and I'll explain why. GPUs run the same instruction in tandem across a large set of data, so in order to read and write data quickly enough you need a wide bus with a lot of bandwidth. CPUs are a bit different, because we're talking about much more serial applications than GPUs run. As a result, there are a lot of things like loops, conditionals, and logic, as opposed to streams of data like in GPUs.

This can be seen in overclocking video memory versus system memory: VRAM overclocking tends to scale linearly, system memory does not.
 
@Aquinus Yes, it works like that, but I think you missed my point, which was simply that memory chips are ganged together to make memory data buses as wide as necessary for the application. In the case of graphics cards that bus tends to be very wide indeed.
 
I said yes, we will probably see one at some point, but it could be quite a while; some of us might be dead before it happens. 64-bit in the PC world, although it hasn't really done much, will eventually be replaced.
 
@Aquinus Yes, it works like that, but I think you missed my point, which was simply that memory chips are ganged together to make memory data buses as wide as necessary for the application. In the case of graphics cards that bus tends to be very wide indeed.
That's only for the data buses. The width of the actual registers doing the math is another thing. FMA is a thing too, where you can do essentially two floating point operations at once on a single extra-wide SIMD unit. It's how you can do one 256-bit FP op or two (of the same) 128-bit FP ops.
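
For the FMA point, a minimal C sketch with the FMA3 intrinsics (assuming GCC/Clang and a CPU with FMA; build with -mavx -mfma). One instruction performs a multiply and an add per 64-bit lane:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256d a = _mm256_set1_pd(2.0);
    __m256d b = _mm256_set1_pd(3.0);
    __m256d c = _mm256_set1_pd(1.0);

    /* Fused multiply-add: a*b + c in each 64-bit lane, in one instruction,
       i.e. two floating point operations per lane per issue. */
    __m256d r = _mm256_fmadd_pd(a, b, c);

    double out[4];
    _mm256_storeu_pd(out, r);
    printf("%.1f\n", out[0]); /* 7.0 */
    return 0;
}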

Although I think this conversation is a bit stupid, because there are a lot of widths in a CPU and asking a generic question like "Will there ever be a need for a 128-bit CPU in your computer?" is dumb: it makes the assumption that the CPU doesn't have anything wider than 64 bits anywhere in it, which isn't true. We use things wider than 32 and 64 bits often when it comes to everything that isn't directly dealing with physical memory.

I don't think we'll need 128-bit CPUs any time soon with respect to mappable address space.
I'm uncertain about the need to do math operations on larger numbers, though; that could be a reasonable use case going forward.

With respect to data buses, you'll always have the slower-but-wider versus faster-but-narrower argument... and even then, you have things like PCI-E, which mixes the benefits of serial communication with parallel comms.

All in all, I do still think this discussion went off the deep end when it started.
 
Although I think this conversation is a bit stupid, because there are a lot of widths in a CPU and asking a generic question like "Will there ever be a need for a 128-bit CPU in your computer?" is dumb: it makes the assumption that the CPU doesn't have anything wider than 64 bits anywhere in it, which isn't true. We use things wider than 32 and 64 bits often when it comes to everything that isn't directly dealing with physical memory.
I'm talking about the main registers being 128-bit, not the floating point, SIMD or other specialized registers, which can be very wide indeed. The main registers that do the basic processing of the CPU are what define its word size, not the specialized types, hence the question is still valid.

Finally, I think you're reading more into this than there is and if you don't like this thread because you think it's a bit stupid, you don't have to post in it.
 
If X86 were to go 128-bit as you suggest, it would need to double the size of every address and data register in the CPU. On top of that, it would need to double the size of the ALU. On top of that, it would have to widen the data buses so words can be sent efficiently in one clock cycle. Needless to say, the size of the core would increase by a very large amount to accommodate it; that wasn't the case with X86_64.

I think it's stupid because we are nowhere near the limitations of what current machines can do with respect to having enough memory or working with data values that are so big. It's at the point where, if someone truly needs more than 64 bits for an integer, a floating point number is probably going to serve them better. It's really that simple.

What's not simple is overhauling the CPU to do 128-bit logic across the board, because X86_64 simply added extensions to X86, which was already capable of doing 64-bit math, just not addressing a 64-bit space.

I say it's dumb to do what you suggest to x86 because of the number of changes that would be needed, and those changes are without a doubt going to increase the size of the core. I'm just making that perfectly clear because a lot of people don't even know the difference between a data and an address register, and even fewer understand that 32-bit and 64-bit X86 ALUs were both capable of doing 64-bit math.

128-bit everything (ALUs, registers, addresses, the works) would be a fundamental change to the CPU architecture, unlike X86_64 was.

You would also have to consider whether words are going to remain 32 bits big, or 64, or 128. The bigger you make words, the more memory is wasted. The number of issues with "wider" grows exponentially, which is why you don't see people touting super-wide CPUs. It's a crap ton of work for minimal gain. X86_64 really was only about memory address space, nothing more.
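
As an aside on the "wider math without wider registers" point: compilers already synthesize double-width integer arithmetic out of narrower operations (add with carry and so on), much like 32-bit x86 did for 64-bit math. A minimal sketch, assuming GCC or Clang on a 64-bit target and their non-standard __int128 extension:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* __int128 is a GCC/Clang extension: the compiler builds the 128-bit
       arithmetic from 64-bit instructions (add/adc, mul, etc.), with no
       128-bit general purpose registers involved. */
    unsigned __int128 x = (unsigned __int128)UINT64_MAX + 1; /* 2^64 */
    unsigned __int128 y = x * 3;

    /* printf has no 128-bit conversion, so print the two 64-bit halves. */
    printf("high = %llu, low = %llu\n",
           (unsigned long long)(y >> 64),
           (unsigned long long)(y & UINT64_MAX)); /* high = 3, low = 0 */
    return 0;
}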
 
What, like 64-bit and its 18.4 exabytes of potential memory isn't enough?
There will never be a general purpose 128-bit CPU as there is ZERO need for one.


:toast:
 
Whenabouts do you think we're going to hit the limits of 64-bit? How many years?
Based on the push for virtualization and supercomputers, I agree with solaris: not too long at all, provided the PC and server markets continue to go their separate ways. Cost seems to be more limiting than tech these days in the server world; I can order four R430s with dual 8-core CPUs for the same price as one with dual 16-core CPUs. Obviously rack space, power, convenience and heat go to the single 32-core/64-thread server, but in the former config I end up with 64 cores/128 threads and more redundancy. The way things are going, though, in less than 2 years I'll be able to get a 1U rack mount with 64 cores/128 threads for the same price as the 32-core/64-thread one.

If this trend continues, there will be more demand for bigger, better server CPUs and less worry about how much processing is done on end-user machines, i.e. mainframes reworked for the modern age. In that case the extra silicon in a 128-bit CPU won't seem quite so silly. Crunching larger and larger numbers will continue so long as we maintain our curiosity. The human genome, space, particle physics, string theory, etc. all require huge supercomputers to crunch their numbers. Soon those computers will begin to look silly and someone will start the march towards better number crunchers.

Now, qubit said "in your computer", so I believe he's thinking of a desktop, laptop, or whatever mobile device will pass for a PC in the future. In that case I think it will take a long time for consumer grade to get it. 64-bit had obvious gains for the consumer; 128-bit won't.
 
It's like this: when the time comes, it will happen. Enough said.
 
If X86 were to go 128-bit as you suggest, it would need to double the size of every address and data register in the CPU. On top of that, it would need to double the size of the ALU. On top of that, it would have to widen the data buses so words can be sent efficiently in one clock cycle. Needless to say, the size of the core would increase by a very large amount to accommodate it; that wasn't the case with X86_64.

I think it's stupid because we are nowhere near the limitations of what current machines can do with respect to having enough memory or working with data values that are so big. It's at the point where, if someone truly needs more than 64 bits for an integer, a floating point number is probably going to serve them better. It's really that simple.

What's not simple is overhauling the CPU to do 128-bit logic across the board, because X86_64 simply added extensions to X86, which was already capable of doing 64-bit math, just not addressing a 64-bit space.

I say it's dumb to do what you suggest to x86 because of the number of changes that would be needed, and those changes are without a doubt going to increase the size of the core. I'm just making that perfectly clear because a lot of people don't even know the difference between a data and an address register, and even fewer understand that 32-bit and 64-bit X86 ALUs were both capable of doing 64-bit math.

128-bit everything (ALUs, registers, addresses, the works) would be a fundamental change to the CPU architecture, unlike X86_64 was.

You would also have to consider whether words are going to remain 32 bits big, or 64, or 128. The bigger you make words, the more memory is wasted. The number of issues with "wider" grows exponentially, which is why you don't see people touting super-wide CPUs. It's a crap ton of work for minimal gain. X86_64 really was only about memory address space, nothing more.
Yes, I agree, especially with the first two paragraphs.

However, for some reason though, you're still missing my point and still think I'm advocating such a CPU when I'm not, so you're arguing against something I didn't say. In fact, if you read my OP again, you'll see that I've actually argued against it and also voted No in the poll. :)

EDIT

In fact, most people actually voted Yes in the poll, so it's them you're disagreeing with, not me.
 
MS Corporate mission statement for 2036:

"Windows 128"

128-bit extensions and a graphical shell for a 64-bit patch to a 32-bit operating system originally coded for a 16-bit microprocessor, written by an 8-bit company that can't stand 1 bit of competition!

:D :) :D
The original goes:
32 bit extensions and a graphical shell [on top of] a 16 bit patch to an 8 bit operating system originally coded for a 4 bit microprocessor, written by a 2 bit company, that can't stand 1 bit of competition.
Merge the two:

128-bit extensions on a graphical shell on top of a 64-bit patch to a 32-bit operating system which deviated from a 16-bit patch to an 8-bit operating system originally coded for a 4-bit microprocessor, written by a 2-bit company that can't stand 1 bit of competition.

"deviated" = Windows 9x + ME -> NT

I believe the original quote was talking about Windows 95.
 
A 128-bit 'general' purpose CPU?? How are we defining 'general' here? Yes, semantics does come into it...
 
A 128-bit 'general' purpose CPU?? How are we defining 'general' here? Yes, semantics does come into it...
Why don't you try reading my OP? I explained it clearly there.
 
I voted yes. Wanna know why?

Forever is a long time... and I do believe if humanity is still around 1000 years from now (or more), we'll find a need for this or make one.
 
Why don't you try reading my OP? I explained it clearly there.

Yes, that's all good and fine, but what was considered 'general' in x86 computer usage a decade ago is somewhat different to what is considered 'general' in today's world and who knows what 'general' will mean another decade from now...
 
Yes, that's all good and fine, but what was considered 'general' in x86 computer usage a decade ago is somewhat different to what is considered 'general' in today's world and who knows what 'general' will mean another decade from now...


True. For that matter, how about "general" 20 years from now? For all we know, discrete GPUs could be gone by then, absorbed into the CPU die as another computational unit, maybe even as an additional instruction set (I don't buy that for a second, but who knows?).
 
20 years from now we could be post apocalypse and instead of tech advancements we'd just use the brains of our fallen compadres. Graphics would be amazing, but processing would take a massive hit.
 
Try playing modern games on Windows XP and see how far you get. We're well into the 64-bit era now, simply because it doubles the 2GB address space limit to 4GB.

You are missing something: 32-bit can address ~4GB of RAM, while 64-bit can... well, double that number 32 more times, i.e. 2^64 bytes, about 16 exbibytes (roughly 18.4 decimal exabytes).

1 exabyte = 1 000 000 000 gigabytes
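
If you want to sanity-check those numbers, here's a trivial C sketch of the arithmetic (nothing assumed beyond the standard C library; link with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 32-bit addresses: 2^32 bytes. 64-bit addresses: 2^64 bytes (2^32 times more). */
    double b32 = pow(2.0, 32.0);
    double b64 = pow(2.0, 64.0);

    printf("2^32 bytes = %.1f GiB\n", b32 / pow(2.0, 30.0));          /* 4.0  */
    printf("2^64 bytes = %.1f EiB\n", b64 / pow(2.0, 60.0));          /* 16.0 */
    printf("           = %.1f EB (decimal exabytes)\n", b64 / 1e18);  /* 18.4 */
    return 0;
}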
 