
Will there ever be a need for a 128-bit CPU in your computer?

Do you think we will ever see a 128-bit general purpose CPU?


  • Total voters
    163

qubit

Overclocked quantum bit
This question applies equally to desktops and portable devices of all kinds.

The debunking of that 128-bit ARM CPU story got me wondering whether there will ever be a need for a 128-bit general purpose CPU of any architecture (x86, ARM, MIPS, etc.), no matter how advanced computers become.

Think about it: the main benefit of today's 64-bit CPUs over their 32-bit versions is not the enlarged word size but the vastly larger address space, which allows an absolutely humongous amount of memory to be addressed. That isn't going to run out in the foreseeable future, if ever. This leaves the number-crunching capability of 128-bit CPUs as the only advantage: they would be twice as quick, since they handle twice as much data in one go. A good analogy is painting a wall with a brush that's twice as wide: it will take half the time to complete.

For most kinds of programs, a wider word size makes no difference at all, especially where the data values are small, such as a word processor handling single-byte characters one at a time. Only in things requiring certain kinds of intense maths, such as cryptography (e.g. RSA/SSL, bitcoins), perhaps CAD, and other maths-intensive tasks such as calculating Pi, would we see a benefit. In those instances, graphics cards have proved to be very capable number crunchers, removing this benefit. (Moving memory blocks around would be twice as quick though, which could make things noticeably faster if there's a large number of large blocks to move, or one really big one.)
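To put the word-size point in concrete terms, here's a minimal C sketch (the u128 struct and u128_add helper are made up for illustration, not any real CPU type) of what a 64-bit CPU already has to do to add two 128-bit numbers: two 64-bit additions plus a carry, where a native 128-bit ALU would need just one operation.

```c
#include <stdio.h>
#include <stdint.h>

/* A hypothetical 128-bit unsigned integer made of two 64-bit halves. */
typedef struct { uint64_t lo, hi; } u128;

/* What a 64-bit CPU has to do to add 128-bit values: two 64-bit adds
   plus a carry. A native 128-bit ALU would do this in one operation. */
static u128 u128_add(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };          /* 2^64 - 1 */
    u128 b = { 1, 0 };
    u128 s = u128_add(a, b);             /* 2^64: lo wraps to 0, hi becomes 1 */
    printf("hi=%llu lo=%llu\n",
           (unsigned long long)s.hi, (unsigned long long)s.lo);
    return 0;
}
```

GCC and Clang expose the same idea through their unsigned __int128 extension, which is part of why wide integer maths is only modestly slower on today's 64-bit chips.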

Also, I suspect that quantum computers using qubits could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical CPUs will ever die out.

For these reasons, I'm going to hazard that we'll never see a general purpose 128-bit CPU.

What do you think?
 
Yes! 640K will never be enough!!!1!
 
I really meant to vote not sure, but I can't change it now, lol.
 
I'd like Intel quad-cores for $100 instead... 64-bit ones are fine by themselves.
 
Also, I suspect that quantum computers using qubits could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical CPUs will ever die out.

For these reasons, I'm going to hazard that we'll never see a general purpose 128-bit CPU.

So in summary, you just said "I AM THE FUTURE!!! WORSHIP ME, PUNY HUMANS!!!"
 
Yes. Remember when 512MB was a serious amount of RAM? A little different, but the idea remains the same.

Edit:
Better example: remember when they thought IPv4 would never run out?
 
What do you think?
These same things were said when 32-bit debuted in the 1990s. A 128-bit memory address space will come eventually. I just don't know if it will be in my lifetime or not.

How rapidly memory grows in density is what will ultimately determine when. "If you build it, they will come."
 
Yeah, I will, but probably not for the same reason as others. Not for the e-peen, but more to sell other people on it. Just think about it: services are things that people buy or sell to make someone else's day a little easier or better (blah blah Solaris, I can think of over 9k instances where this isn't true), at least on paper. Services like DreamSpark from Microsoft, for example, offer software that costs thousands of dollars for free to people who are going to school. This helps people (not just kids in their mid-20s) get hands-on with things they might not otherwise ever have the ability to install.

The same goes for things like 128-bit CPUs. I see it from a different perspective than most. Take the 290X: a lot of people bitched about the heat, but the way I look at tech advancements isn't about what's bad about them or how they compare to last year's model, but what they offer the future. 128-bit CPUs will push developers to code for them, new programmers will get to start with the latest and greatest, and memory changes like that can open up how we and our software interact with memory mapping, or maybe even memory architecture as a whole.

The question most people pose is really "will you upgrade for yourself?" The way to actually see the question, IMO, is: will this upgrade help others? Will supporting this product benefit the future technologically? Anyone's PC nowadays can play Crysis, but if you stop thinking about upgrades as a means to an end and see them as a push towards a future where Tron isn't unlikely, I think people's answers may change a bit.


Here's to the future :toast:
 
I would like to say yes because I believe in magic and ponies and whatnot, so I will in the poll, but there will have to be some breakthroughs in addressing and production. Might it be possible in the future? Yes, and maybe we will need it for some reason, but I don't see a point to having a 128-bit CPU right now, and there's not really anything beyond that, since a 256-bit CPU would be an impossible dream right now.
 
I voted yes.

Still, we need software to catch up with 64-bit instructions before making any other kind of jump.

Most consumer programs are still 32-bit and this will be the norm for a long time. Can you remember how long ago 64-bit CPUs started selling?
 
https://en.wikipedia.org/wiki/64-bit_computing#64-bit_processor_timeline

The AMD64 instruction set we're using today debuted in 2003 with the AMD Athlon 64 (Clawhammer). It took 10 years for AMD64 to become mainstream, and it will probably take another 10 years before 32-bit programs become a rarity (like 16-bit was in the 2000s).

Pretty much all commercial programs are available for AMD64 except games. Games are rapidly going to catch up with PCs now that PS4 and Xbone are AMD64.
 
of course there will be, one day.
 
I suspect that quantum computers using qubits could become mainstream in the not too distant future, removing the need for ever more powerful classical CPUs, although I don't think classical CPUs will ever die out.

I absolutely agree, which is the reason I answered "No". There is no way that in 30 years the world will still be looking at transistors (much less silicon transistors) as the primary method of computation. Moore's law is slowly coming to an end, and the only way to keep the pace of technological progress is to have a paradigm shift and reinvent the fundamentals of computing.
 
But even with quantum computers, you still need memory.
 
You're assuming that the computing world will remain in binary. Even modern flash memory (MLC and TLC) doesn't store data in native binary.
 
Everything can be fundamentally reduced to binary--even the human genome (it's 2-bit). "Native binary" is a made up term. Hard drives are just magnetic fields but by switching the polarity of the magnetism, regions of the metal platters can be made to retain values on demand. Binary is a simple proposition which is why it is so popular in computing. That isn't going to change in our lifetimes.
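As a toy illustration of that 2-bit point, here's a short C sketch (the base-to-code mapping is my own arbitrary choice, not any standard encoding) that packs DNA bases into two bits apiece:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Map each of the four DNA bases to a 2-bit code (arbitrary mapping). */
static int base_to_bits(char b) {
    switch (b) {
        case 'A': return 0;  /* 00 */
        case 'C': return 1;  /* 01 */
        case 'G': return 2;  /* 10 */
        case 'T': return 3;  /* 11 */
        default:  return -1; /* not a valid base */
    }
}

int main(void) {
    const char *seq = "GATTACA";
    uint64_t packed = 0;
    for (size_t i = 0; i < strlen(seq); i++)
        packed = (packed << 2) | (uint64_t)base_to_bits(seq[i]);
    /* 7 bases x 2 bits = 14 bits, versus 7 bytes of ASCII text. */
    printf("%s -> 0x%llx\n", seq, (unsigned long long)packed);
    return 0;
}
```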
 
Indeed, Ford.

Binary is the smallest and simplest number base possible and the most robust for building a digital computer with. Regardless of computer architecture this fundamental principle won't change.
 
Binary is a set of two states. In computers, there is a single threshold value of the data medium that determines the difference between a "1" and a "0", whether it's magnetic field polarity, electrical potential, electrical current, etc. However, while it's great to think about binary theoretically, binary is frequently not the most efficient way to operate. In flash memory, MLC is a quaternary system and TLC is an octal system. Sure, it can be converted to binary information, but the method of data storage doesn't fit the fundamental definition of binary states. By that definition the human genome isn't fundamentally binary either, since it is composed of 4 states (it's quaternary), but as you said, each quaternary state can be converted to a 2-bit representation.

Thus my argument is that while it's easy to think in binary, it might be inefficient to produce a 128-bit processor that works in binary. Instead, it might be a better idea to make a processor that runs on a quaternary system and is thus only 64 digits wide, or an octal system that is only 32 digits wide. It would be fundamentally equivalent to a 128-bit binary processor, but it would not be 128 digits wide. Today that seems like a radical proposition, but by the time 128-bit addressing is actually needed I doubt it will be as radical.

Ternary computers have been built and were even considered "the future" before electronic computers. There are actually many proposed types of quantum computers that don't operate in binary. For example, instead of qubits, quantum computing might use five-state "qudits". Similarly, optical computers don't need to be composed of only two states, because you can polarize light. The main reason binary quantum and optical computers are popular is because they present a logical transition from binary transistors.
 
Hard drives are usually operated on in 512-byte sectors (4,096 bits). Doesn't change the fact that all of the above have a least common denominator of bits.

it might be inefficient to produce a 128-bit processor that works in binary
That's an oxymoron. Bits, regardless of how many there are, are binary. Intel, IBM, AMD, ARM, etc. could manufacture a 128-bit processor in as little as two years. The reason they don't is that there is not enough demand for it to justify the expense.

The smallest expression x86 understands is 8 bits wide (a byte); anything smaller is padded to it. Most new x86 processors already have 128-bit FPUs to handle decimal (required to quantify USA's public debt) and quad precision (useful for science). I suspect it won't be long before ALUs are expanded to 128 bits wide.
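For what it's worth, a quick sketch of those existing 128-bit registers, using the SSE2 intrinsics every x86-64 chip supports; note that the register is treated here as two 64-bit SIMD lanes, not as one native 128-bit integer:

```c
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics, baseline on all x86-64 CPUs */

int main(void) {
    /* One XMM register already holds 128 bits, but SSE2 uses it as
       two independent 64-bit lanes rather than a single 128-bit value. */
    __m128i a = _mm_set_epi64x(40, 2);     /* high lane = 40, low lane = 2 */
    __m128i b = _mm_set_epi64x(2, 40);
    __m128i sum = _mm_add_epi64(a, b);     /* both lanes added at once */

    long long out[2];
    _mm_storeu_si128((__m128i *)out, sum);
    printf("lanes: %lld %lld\n", out[1], out[0]);  /* prints 42 42 */
    return 0;
}
```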

"Five states" = 3-bit (000, 100, 010, 110, 001) If those states can act like flags, then 5-bits.

Optical systems presently do operate on binary: bright light, dim/no light. If you try to add wavelengths to the mix, it makes the photoreceptor design substantially more expensive. It becomes cost prohibitive quickly which is why it isn't going anywhere fast.
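A tiny sketch of the counting going on in both posts, i.e. that distinguishing n states takes ceil(log2(n)) bits (the bits_for_states helper is mine, just for illustration):

```c
#include <stdio.h>

/* Minimum number of bits needed to distinguish n different states. */
static int bits_for_states(int n) {
    int bits = 0;
    while ((1 << bits) < n)
        bits++;
    return bits;
}

int main(void) {
    printf("4 states (MLC cell):       %d bits\n", bits_for_states(4)); /* 2 */
    printf("8 states (TLC cell):       %d bits\n", bits_for_states(8)); /* 3 */
    printf("5 states (the qudit idea): %d bits\n", bits_for_states(5)); /* 3 */
    return 0;
}
```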
 
Someday ... somewhere yup ...

But if devs can't be arsed to make proper software for it, then who the fuck needs 128-bit? Even 64-bit ain't used to its full potential now.
 
Someday ... somewhere yup ...

But if devs can't be arsed to make proper software for it, then who the fuck needs 128-bit? Even 64-bit ain't used to its full potential now.

Try playing modern games on Windows XP and see how far you get. We're well into the 64-bit era now, simply because it doubles the 2GB per-process address space limit to 4GB.
 
Art thou forgetting XP Pro x64? XD
 
Try playing modern games on Windows XP and see how far you get. We're well into the 64-bit era now, simply because it doubles the 2GB per-process address space limit to 4GB.
Playing modern video games is not problem #1, and what's it got to do with XP anyway? There are x86 versions of all the other operating systems.
 
That's an oxymoron. Bits, regardless of how many there are, are binary. Intel, IBM, AMD, ARM, etc. could manufacture a 128-bit processor in as little as two years. The reason they don't is that there is not enough demand for it to justify the expense.

The smallest expression x86 understands is 8 bits wide (a byte); anything smaller is padded to it. Most new x86 processors already have 128-bit FPUs to handle decimal (required to quantify USA's public debt) and quad precision (useful for science). I suspect it won't be long before ALUs are expanded to 128 bits wide.

I think this is where our confusion lies. The conventional way to define the "bit width" of a processor is by the minimum width across all of its units, and that is what I have been using. Usually this comes down to the width of the memory address space, since that is the last to be enlarged. However, you're referring to the width of any computational unit in the processor. I hope you realize that there have always been processors with wider computational units than the memory address space, but they weren't considered to be "wider" processors. For example, the Pentium processor with MMX had a 64-bit FPU. By your definition it would be a 64-bit processor, but because it had a 32-bit memory address space, everyone else considered it a 32-bit processor. I agree with you that 128-bit wide functional units in the processor are useful in the near term, and I hope you agree with me that 128-bit memory addressing won't be needed until far in the future, considering how long it will be until any single computer has more than 16 exabytes of memory. Supercomputers will reach that memory capacity sooner, but since they don't have a shared memory space, 128-bit addressing won't be that important to them.
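For anyone wondering where the 16 exabytes figure comes from, a trivial sketch of the arithmetic (2^64 bytes of addressable space, with one exbibyte being 2^60 bytes):

```c
#include <stdio.h>

int main(void) {
    /* 64-bit addressing spans 2^64 bytes; an exbibyte is 2^60 bytes,
       so the ceiling is 2^(64-60) = 16 EiB, the "16 exabytes" figure. */
    unsigned long long eib = 1ULL << (64 - 60);
    printf("64-bit address space limit: %llu EiB\n", eib);
    return 0;
}
```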

Hard drives are usually operated on in 512-byte sectors (4,096 bits). Doesn't change the fact that all of the above have a least common denominator of bits.

You're confusing the number of states with higher-level organization. Hard drives still store data in binary (opposite poles) and are only organized into higher-order groups of sectors in order to simplify hard drive controller design. You could still take a microscope and identify the individual "north" and "south" magnetic states on the platters. Although it also uses pages, common flash memory stores data in 4 or 8 states per cell and needs to be converted into binary to be used in contemporary computers. If you look at any cell, there will not be binary data. If some alien acquired a flash memory chip, the alien would have no idea that it represented binary information.

"Five states" = 3-bit (000, 100, 010, 110, 001) If those states can act like flags, then 5-bits.

I am not arguing that there is some set of data that cannot be represented in binary. Of course I know this is not true; any integer can be converted to binary. However, computations don't have to be performed in binary. There is no fundamental law requiring binary, but other number systems cannot be natively computed on transistors. Once other forms of computers become prominent, binary may no longer be the preferred method of computation, and this certainly could be true considering how long it will be until any computer needs more than 16 exabytes of memory.

Optical systems presently do operate on binary: bright light, dim/no light. If you try to add wavelengths to the mix, it makes the photoreceptor design substantially more expensive. It becomes cost prohibitive quickly which is why it isn't going anywhere fast.

You misread my post. I argued that polarizers would be used with optical computing, which has nothing to do with variable wavelength or intensity. You can represent many states just by polarizing light at the source and adding filters at the receiver.
 
I was kind of in a dilemma choosing yes or no because of your question. Well, probably not in our lifetime. It could even be in your great-grandson's era. And yeah, the possibilities will be there.
 