
Will there ever be a need for a 128-bit CPU in your computer?

Do you think we will ever see a 128-bit general-purpose CPU?


  • Total voters
    163
I don't see how any of these posts are actually constructive. Don't post unless you have something to ask or contribute; otherwise, off-topic posts and posts that aren't thought through will just derail the thread. I don't know about other people, but I would like a serious, intelligent and thoughtful discussion on the topic.

I love discussing things with respect to CPU architecture, but I don't love having to deal with shenanigans while I do it. :)

Fair game, posts shall be deleted
 
"404 fun not found"
Is it not allowed to have cute little side comments? I see slightly off-topic comments in 90% of forum posts. It's okay to have a little fun.

Now back to the topic before this post gets deleted. I'm a "bit" confused about how GPUs can hit 256 bits. Is it simply that they are built in a whole different way? What if we took a GPU chip, cranked up the clocks, and somehow made it run as a CPU?

By theory, couldn't we run a desktop off of a GPU?
>It has cores (change them to act like normal processor cores)
>Has a video output
>Has memory, just somehow stick memory sticks in to add more
>Has a way to get power from the PSU
>Has cute wittle fans to cool it down
Just lacking USB and SATA ports, but I'm sure there could be a way to make it work. If GPUs can hit these massive bit widths, why are CPUs having such a hard time doing it?

Apologies for ignorance. This is something I've been fascinated with but never was able to find a time to poke it out.
 
Just like we emulate the PS2 (a 128-bit console) on a 32-bit x86 PC using PCSX2: it all comes down to coding.

As long as we have smart programming, there is no hard limitation.
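To make the "it's all coding" point concrete, here is a minimal, purely illustrative Python sketch of the basic emulation trick: synthesizing 128-bit addition out of 32-bit operations by carrying between limbs, the same idea that lets a 32-bit host run code written for wider guest registers. (The function names are made up for this example; real emulators like PCSX2 do this with far more sophistication.)

```python
# Sketch: building a 128-bit add from 32-bit pieces ("limbs").
MASK32 = 0xFFFFFFFF

def add128(a_limbs, b_limbs):
    """Add two 128-bit numbers stored as four 32-bit limbs, least significant first."""
    result, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + carry
        result.append(s & MASK32)   # keep the low 32 bits of this limb
        carry = s >> 32             # propagate the overflow into the next limb
    return result                   # any final carry is dropped (wrap mod 2**128)

def to_limbs(n):
    return [(n >> (32 * i)) & MASK32 for i in range(4)]

def from_limbs(limbs):
    return sum(l << (32 * i) for i, l in enumerate(limbs))

a, b = 2**100 + 12345, 2**99 + 67890
assert from_limbs(add128(to_limbs(a), to_limbs(b))) == (a + b) % 2**128
```

It is slower than a native-width add, of course, which is the real trade-off emulation makes.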
 

GPUs are not general-purpose processors; the kinds of operations they run are specialized. They're designed to execute the same set of instructions on different sets of data at the same time, at lower clock speeds. You're also mixing terms. A GPU with a 256-bit memory interface can read/write that many bits on each rising and falling edge of the memory clock; those 256 bits describe the data bus, not the address bus. Once again, I did say you should learn about memory addressing first, and this is part of it. For example, my i7 has 4 memory channels, each 64 bits wide, so I have a 256-bit memory interface on my CPU, but it's not a "256-bit CPU".

Generally speaking, GPUs are designed to do highly parallel tasks like calculating the same algorithm on several sets of numbers (the same computation, different data). CPUs are designed to excel at serial workloads as well as interfacing with peripherals. CPUs and GPUs work very differently and it's important to understand that.
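As a rough sanity check on the bus-width point above, here is the arithmetic behind a "256-bit" memory interface. The channel count and transfer rate below are illustrative, not the spec of any particular part:

```python
# The "256-bit" in a GPU or CPU spec is the memory data-bus width,
# not the register or address width.
channels, channel_width = 4, 64          # e.g. a quad-channel CPU memory controller
bus_width = channels * channel_width     # 4 x 64 = 256-bit memory interface
bytes_per_transfer = bus_width // 8      # 32 bytes move per transfer

# Peak bandwidth = bytes per transfer x transfer rate,
# e.g. DDR memory running at 1600 MT/s (assumed rate for illustration):
transfers_per_sec = 1600e6
peak_gb_s = bytes_per_transfer * transfers_per_sec / 1e9
assert bus_width == 256 and bytes_per_transfer == 32
print(f"{peak_gb_s:.1f} GB/s peak")   # 51.2 GB/s
```

None of this says anything about how wide the CPU's ALU or address registers are, which is exactly the confusion being untangled in this thread.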
 
Okay, now I get it. Thank you. Now I can stop looking at my 660 wishing it would help with making my antivirus scans go faster. :laugh:
 
My first processor (albeit borrowed for a summer) in 1981 was much like this one, 8-bit with 32KB of memory in an Apple II

[Image: MOS 6502 CPU]


So, about 10 years ago I bought my first 64-bit-capable CPU, the 630 Prescott.

Doing some extrapolating, I won't need a 128-bit CPU until the year 2041 :laugh:
 
Well, you can look at it this way: when doing vector math, you basically have 4-float (4*32 = 128-bit) operands. So a vector ALU that can add and multiply vectors is your 128-bit CPU. That would be every CPU that supports the SSE instruction set (128-bit registers), so everything since the Pentium 3.

But traditional 128-bit registers, as in quadruple-precision floats? Not yet; 128-bit will be used only for vectors for some time.
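A quick sketch of that layout (plain Python, so this only mimics the lanes; real SSE does all four additions in a single instruction): four single-precision floats pack into exactly the 128 bits of an SSE XMM register.

```python
import struct

# Four single-precision floats occupy exactly 128 bits,
# the width of one SSE XMM register.
vec = struct.pack("4f", 1.0, 2.0, 3.0, 4.0)
assert len(vec) * 8 == 128

# An SSE "addps" adds all four lanes at once; here we mimic the lanes:
a = struct.unpack("4f", vec)
b = (10.0, 20.0, 30.0, 40.0)
result = tuple(x + y for x, y in zip(a, b))
assert result == (11.0, 22.0, 33.0, 44.0)
```

So "128-bit" in the SSE sense means wide data lanes, not wide addresses, which is why nobody calls a Pentium 3 a 128-bit CPU.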

Doing some extrapolating, I won't need 128 bit CPU until the year 2041

Not that much time :laugh:
 
Hi, I'm from the future! The 64-bit architecture reigns supreme... By the time the government took over, progress had stopped to the point that all programs were written in 64-bit. The entire architecture our government operates on is 64-bit. They say it is a terroristic thought to think of anything past 128-bit; it would be too expensive to change our systems. It took 30 years to introduce the 64-bit architecture. During this time, the entire world's slew of programs needed an overhaul to address more memory. Sadly it was struck down. I'm going to hide now.
 
By the time the government took over, progress had stopped to the point that all programs were written in 64-bit.
Yes, I'm sure they're watching you at every turn too. They've probably implanted a geolocation tracking device in your arm. BENGHAZI! Thanks Obama. :slap:
The entire architecture our government operates on is 64-bit.
Source? Also, 64-bit isn't an architecture; it's the width of the address registers in any given CPU. CPUs with very different architectures (SPARC vs. x86 vs. ARM) all have 64-bit variants, but they're nothing alike.
They say it is a terroristic thought to think of anything past 128-bit; it would be too expensive to change our systems.
No, it would just be useless, because we can't even fill 64 bits' worth of memory space yet. In reality we don't really touch anything beyond 36 bits right now.
It took 30 years to introduce the 64-bit architecture.
Actually it took a lot less once we actually needed it, because we were running out of addressable space and memory kept getting bigger.
The entire world's slew of programs needed an overhaul to address more memory.
That's what happens when major revisions are made to a particular ISA.
Sadly it was struck down. I'm going to hide now.
Problem solved; you can go back under that rock you've been living under.
 
I think I have to reevaluate this, because Moore's Law is in trouble as process nodes get smaller and smaller. Even if we follow it to the end, where each transistor is composed of only a few atoms, will memory density ever exceed 3.4e+38 bytes? I think the answer is no. 128-bit is theoretically only reasonable with smaller-than-atom processes, a.k.a. quantum computing. I think we'll have an answer in the next few decades, so I'd change my vote, if I could, to "Not Sure."
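The address-space figures being thrown around in this thread are easy to verify with a few lines of arithmetic:

```python
# Sanity-checking the address-space numbers in this thread:
assert 2**32 == 4 * 1024**3           # 32-bit: the familiar 4 GiB limit
assert 2**36 == 64 * 1024**3          # 36 bits of physical address: 64 GiB
assert 2**64 == 16 * 1024**6          # 64-bit: 16 EiB of address space
assert f"{2**128:.1e}" == "3.4e+38"   # 128-bit: ~3.4e38 bytes, as cited above
```

That last number dwarfs any plausible memory density, which is the crux of the "will we ever need it" question.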
 
It's not just about memory limits; it's about being able to access more registers, and for certain programs a 128-bit processor has big performance and power implications by limiting the reads/writes needed to cache. You're probably not going to see a 128-bit x86 variant any time soon, but an ARM/Apple RISC chip? That's probably not too far off.

I would say that, just like with 64-bit, proprietary systems and RISC Linux-based computers will get 128-bit way before the x86 crowd.

http://riscv.org
 
All sounds very reasonable to me. It's interesting to muse about where the hard limit for miniaturization will be, isn't it?

Also, I've edited the poll to allow votes to be changed. Have at it! :)
 
64 bits means 2^64 addresses for memory operations, which is nearly unreachable! 128 bits would again mean a memory space of 2^128.

Only if the 2^64 limit for RAM were ever hit, the way 2^32 hit its 4 GB limit, would there be a need!

It seems it may not be reached for maybe 500 years!
 
x86-64 didn't gain traction until memory capacity greater than 4 GiB was deemed necessary. We can reasonably expect this to be true of 128-bit as well.

I wouldn't consider a processor 128-bit unless all functions of it can handle 128 bits. There are a lot of ARM processors out there, for example, that can handle 64-bit instructions and logic, but the memory controller cannot access 64 bits' worth of memory; thus, they're only partially 64-bit. Partially 128-bit processors could come soon, but fully 128-bit in the sense that x86-64 and Itanium are 64-bit is in doubt.

If Moore's Law does fall apart, the best we can hope for is 128-bit emulation where a controller distributes workloads over a series of 64-bit processors. 128 bits worth of memory could be connected to the controller and that's the pool of memory the 64-bit processors use (but can't access it all). But that's still only partially 128-bit so it doesn't count for this thought problem.
 

maybe it was a typo and he really meant 2014
 
x86-64 didn't gain traction until memory capacity greater than 4 GiB was deemed necessary. We can reasonably expect this to be true of 128-bit as well.

Of X86-128, yes... absolutely.

I wouldn't consider a processor 128-bit unless all functions of it can handle 128 bits. There's a lot of ARM processors out there, for example, that can handle 64-bit instructions and logic but the memory controller cannot access 64 bits worth of memory

I would disagree with this; a CPU and a memory controller are two different things (which is why you can have a 128-bit CPU with a 64-bit IMC). As far as I remember, memory controllers for x86 used to be part of the motherboard chipset until AMD decided to start putting an IMC on the die (and consequently kicking the crap out of NetBurst), but this does not mean that a CPU MUST be a CPU + IMC. They're putting all sorts of crap on die now...

If the question was whether we would see a 128-bit IMC + CPU anytime soon because we need to access that quantity of RAM, then the answer would be hell no, unless somehow the entire paradigm of memory access shifted and PCI-e flash became so cheap and fast that it no longer made sense to even have RAM or disk controllers.

maybe it was a typo and he really meant 2014

Not really what I was talking about but OK.
 
I was referring to where he typed 2041; I thought you were referring to that when you said "not that much time" lol. Eh, it happens.
 
This post tells me that you're not articulating yourself properly, so hopefully I can supplement and expand on what you're trying to say. CPUs have been able to execute instructions on 64 bits' worth of data for a long time. It's nothing new for an x86 processor to do math with longs (64-bit integers) or with doubles (64-bit floating-point numbers). There is a potential benefit from increasing the width of the ALU and data registers, which are completely separate from the address registers used to access memory and memory-mapped I/O.

Will we need 128-bit address support in the near future? Probably not. Memory isn't expanding fast enough for it to really make a difference any time soon as 64-bits worth of addresses is a ton of memory space.

Will we need 128-bit data and ALU support in the near future? Not for the general consumer. Most tasks can be handled with a smaller ALU, doing wider operations only takes more time, and most of the time that extra width only gets you precision and accuracy in calculations with a lot of digits. So generally speaking, I think there aren't enough economic factors in play to make widening the ALU and data registers worth it, not to mention it adds to the size of the die and the width of the data bus between the different parts of the CPU, all of which takes up die space and needs to be designed very carefully to support high clock speeds.

All in all, I think about this problem like I think about computer upgrades. If you don't need it, it's a waste. Changing the width of the memory controller and address register or the ALU and data registers without a reason is a waste. As it stands right now, modern CPUs are really good at doing just about everything developers need them to do while staying within the confines of reality.
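The data-width vs. address-width split described above can be sketched in a few lines of Python; the "ALU" here is a toy model, not any real hardware:

```python
# Data width and address width are independent properties of a CPU.
# A toy model of a 64-bit ALU:
MASK64 = (1 << 64) - 1

def alu_add64(a, b):
    # Hardware adders simply drop the carry-out past bit 63.
    return (a + b) & MASK64

assert alu_add64(MASK64, 1) == 0       # wraps around, like real hardware
assert alu_add64(2**63, 2**63) == 0

# Meanwhile, struct confirms that a C long long / double really is
# 64 bits of *data*, regardless of how much memory the chip can address:
import struct
assert struct.calcsize("q") == 8 and struct.calcsize("d") == 8
```

The address registers that reach into memory are a separate structure entirely, which is why "how many bits is this CPU?" needs the follow-up "which part?".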
 
The size of a CPU is officially defined by the size of the data word that it can handle in its main registers, not the memory or any other features. For an easy example, think of the ancient 6502 and Z80 CPUs. These could handle 8-bit data in their accumulators and had 16-bit memory address registers, but they were still 8-bit CPUs.

The line is blurred a bit with modern x86 CPUs which have extra instruction sets added to them which can handle 128-bit or maybe even 256-bit words in one go. However, they are still considered 64-bit CPUs, since the main x86 registers hold 64-bit values.
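The 6502 example is easy to model: 8-bit data registers paired with a 16-bit address bus. This is a deliberately tiny sketch, not an emulator:

```python
# Toy model of the 6502's split: 8-bit accumulator, 16-bit addresses.
MASK8, MASK16 = 0xFF, 0xFFFF

memory = bytearray(0x10000)   # 64 KiB: everything a 16-bit address bus can reach

def store(addr, value):
    memory[addr & MASK16] = value & MASK8   # data path is only 8 bits wide

def load(addr):
    return memory[addr & MASK16]

store(0xC000, 0x1FF)          # a value wider than 8 bits gets truncated
assert load(0xC000) == 0xFF
assert len(memory) == 2**16   # 16-bit addressing, yet it's still an "8-bit CPU"
```

By the same register-width convention, today's x86 chips stay "64-bit" no matter how wide their SSE/AVX vector units get.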
 
size of a CPU is officially defined by the size of the data word that it can handle in its main registers, not the memory or any other features.
You mean the largest data word. A "word" alone would be the amount of space that a single memory address takes up, so for most modern CPUs a word is 8 bits, a byte. Modern CPUs usually have a 64-bit ALU, though it's worth remembering that some CPUs, like AMD's FX lineup, have that funky 128-bit FMA floating-point unit, but that doesn't make them 128-bit CPUs.

I don't think there is any official designation for this, but generally speaking when I work with micro-controllers, there is a clear distinction between the two as the last micro-controller I used had two 16-bit address registers and two 8-bit data registers that could be combined to do limited 16-bit math. The problem is that most people don't even understand that there is a difference between a memory register and an address register and instantly assume the two are the same and they're not.

So to say "Will we need 128-bit CPUs?" is dumb because the question is vague. The proper answer is, "What part of the CPU are you talking about?"
 
So to say "Will we need 128-bit CPUs?" is dumb because the question is vague. The proper answer is, "What part of the CPU are you talking about?"
This. The entire CPU being 128-bit is a long ways off (if ever). Parts of it being 128-bit is already here (e.g. FPU quad-float).
Wikipedia said:
Native support of 128-bit floats is defined in SPARC V8 and V9.
 
"640K ought to be enough for anybody."
 
try playing modern games on windows XP and see how far you get. we're well into the 64 bit era now, simply because it doubles the 2GB address space limit to 4GB.
( There is a mistake I think. )

I don't know who will need more than 16 exabytes of memory. Maybe one day. From my point of view, these CPU structures are very important for floating-point arithmetic; for that reason, one day we will need 128-bit. (Actually, we already have it in GPUs.)
 
In my opinion...

If we ever need more than what "64 bit" has to offer... I will probably not be alive to see it; just look at how many years we were stuck at "32 bit".

And "32 bit" is still usable with PAE, but some specific applications perform better on "64 bit" due to "32 bit" limitations.

Well, if we ever need "128 bit", it is not going to be because of memory limitations... Current hardware is not even near the "64 bit" limit.

My conclusion: I voted "No". "64 bit" will stay for a very, very long time.
 