
Intel Envisions Do-it-all 48-Core Mobile Processors

btarunr

Editor & Senior Moderator
Researchers at Intel have begun groundwork on many-core processors that could drive ultra-mobile devices (tablets, smartphones, Ultrabooks, etc.) in the near future. The design calls for no fewer than 48 processing cores on a single piece of silicon, built on the "single-chip cloud computer" (SCC) design. The technology could reach the market anywhere between five and ten years from now, likely because current silicon fabrication processes aren't advanced enough to put that much processing power into a chip that runs off a smartphone battery.

"If we're going to have this technology in five to 10 years, we could finally do things that take way too much processing power today," said Patrick Moorhead, an analyst with Moor Insights and Strategy. "This could really open up our concept of what is a computer... The phone would be smart enough to not just be a computer but it could be my computer." With devices like Microsoft Surface, which transform from a tablet to a fully-functional desktop with a dock, or a notebook with the flip of a smart cover, the definition of what constitutes a personal computer is changing. In a few years, your mobile could transform into a computing device of any shape and size for consumers.



View at TechPowerUp Main Site
 
why not just use a GPU for computing :rolleyes:
 
Sure, just like they were dreaming about 10GHz Pentium 4's...
 
Sounds good to me.
 
Reminds me of how they had such high hopes for Larrabee, but then that eventually went down the toilet and we never heard about it again.
 
Intel wanting more cores... quick, someone call AMD. I think Intel got into their secret stash!

Of course, on the plus side, maybe we'll start seeing more emphasis on multithreading in computing. But then that would mean Intel is helping AMD by trying to turn the industry that way.
 
boring, i need to get laid.
 
Because GPUs suck at serial loads.

my point was, why not improve the GPU arch so they can work well on a wider variety of loads.
 
Most software doesn't even support 4 cores. I don't think that will change dramatically 10 years from now. At launch, there won't be much software to show off the capabilities of the chip, and it'll be called a fail (similar to Bulldozer).
 
Most software doesn't even support 4 cores. I don't think that will change dramatically 10 years from now. At launch, there won't be much software to show off the capabilities of the chip, and it'll be called a fail (similar to Bulldozer).

I can write code that will run with as many threads as I want, but if the same data is needed by every thread and every thread is locking that same data, you won't see more than one thread's worth of throughput. Keep in mind that multithreading is highly dependent on the programmer and the workload being programmed. There is a point of diminishing returns, and a 48-core machine will hit it without a doubt, but your phone could play 4K video, encode a 4K video stream, run a game, and host a web site and database server without skipping a beat. AMD never marketed the FX series as fast, but they marketed it as being able to multi-task like a champ, and it does. A 48-core CPU is no different, just more cores. Intel will have to do a little more than a few die shrinks to have enough room for that, though: Intel's and AMD's 8- and 16-core processors are already pretty big on skt2011 and G34, respectively.
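To put some code behind the locking point, here's a minimal C++ sketch (my own, not from the post; the thread and iteration counts are arbitrary). Every thread funnels through one mutex to touch the same counter, so adding threads adds contention rather than throughput.
Code:
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 8;       // hypothetical core/thread count
    constexpr long kIters = 1000000;  // work per thread
    long counter = 0;                 // the "same data" every thread wants
    std::mutex m;                     // the single lock they all fight over

    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int t = 0; t < kThreads; ++t) {
        pool.emplace_back([&] {
            for (long i = 0; i < kIters; ++i) {
                std::lock_guard<std::mutex> g(m);  // every thread queues here
                ++counter;
            }
        });
    }
    for (auto& th : pool) th.join();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();

    // Wall time grows with kThreads instead of shrinking: roughly one
    // thread's worth of throughput, no matter how many cores you have.
    std::printf("%d threads -> counter=%ld in %lld ms\n",
                kThreads, counter, static_cast<long long>(ms));
    return 0;
}
Give each thread its own counter and merge at the end, and the same work scales almost linearly; that rework is exactly the part that depends on the programmer and the workload.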
 
We need moar corezzzz!!!
 
Reminds me of how they had such high hopes for Larrabee, but then that eventually went down the toilet and we never heard about it again.
Larrabee is now named the Intel Xeon Phi coprocessor.
 
try to use the Cell processor :)

Win8 will have support :)
 
Isn't this the same claim they made when their Bonnell microarchitecture (aka Atom) was incubating?

Because CPUs can do logic many times faster than GPUs.

Find me the data width and MIPS of a Kepler. ...please :)
 
my point was, why not improve the GPU arch so they can work well on a wider variety of loads.

Because if they did, it would be a CPU... clocked at <1000 MHz.
 
I got like 5 of those in my left pocket!

On topic: Is this really practical for mobile processing?
 
Find me the data width and MIPS of a Kepler. ...please

It's about the kind of instructions that are run and how they're run. GPUs run instructions on large sets of data in parallel. There is a reason why linear applications run on the CPU while highly parallel applications (like vector math and folding) have in some cases been moved to the GPU using modern GPGPU languages like CUDA, OpenCL, and DirectCompute. But always keep in mind that a GPU can't do any calculations without the CPU to instruct it.

Think of a GPU as a ton of floating-point units that all run the same instruction at the same time. This is an oversimplification of what modern-day video cards do, but I believe it is an accurate representation of what is happening internally. A multi-core CPU can be doing vastly different things all at once, where a GPU has much more trouble doing this.

So all in all, I guess what I'm really trying to say is that CPU capabilities are very broad and can be applied to multiple purposes where a GPU is considerably more specialized in the kind of workloads it can handle.
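As a rough illustration of that contrast (my own sketch, not from the thread): the kernel-style loop below applies one instruction stream across many elements, the way a GPU's array of FPUs does, while the two threads after it stand in for CPU cores running completely unrelated logic at once.
Code:
#include <cstdio>
#include <thread>
#include <vector>

// GPU-style work: the SAME operation on every element. On real hardware
// each index would be a parallel lane (SIMD/SIMT); here it's just a loop.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];  // one instruction stream, many data
}

int main() {
    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
    saxpy(3.0f, x, y);  // uniform, branch-free: ideal GPU territory

    // CPU-style work: two cores doing totally different jobs at once,
    // something a lockstep array of FPUs cannot express well.
    std::thread t1([] { std::puts("core 0: parsing a config file"); });
    std::thread t2([] { std::puts("core 1: handling a network request"); });
    t1.join();
    t2.join();

    std::printf("y[0] = %.1f\n", y[0]);  // 3*1 + 2 = 5.0
    return 0;
}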
 
my point was, why not improve the GPU arch so they can work well on a wider variety of loads.

Because that would make them worse GPUs!

GPUs are good at what they do because the hardware they use is specialized for the particular task they usually run: massively parallel rendering. If you make them better at "a variety of loads," then you have to introduce stuff like out-of-order execution and branch prediction, all of which take up space on the GPU die, and all of which make each compute unit more complex.
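To sketch why that generality costs a GPU (a toy model of my own, not from the thread): lanes in a GPU "warp" share one program counter, so when they disagree on a branch, the hardware effectively runs both paths with the non-matching lanes masked off. The two passes below mimic that serialization.
Code:
#include <array>
#include <cstdio>

int main() {
    constexpr int kLanes = 8;  // one toy "warp" of lockstep lanes
    std::array<int, kLanes> data{1, 2, 3, 4, 5, 6, 7, 8};
    std::array<int, kLanes> out{};

    // Pass 1: the "then" path runs with non-matching lanes masked off.
    for (int lane = 0; lane < kLanes; ++lane)
        if (data[lane] % 2 == 0) out[lane] = data[lane] * 2;

    // Pass 2: the "else" path runs for the remaining lanes.
    for (int lane = 0; lane < kLanes; ++lane)
        if (data[lane] % 2 != 0) out[lane] = data[lane] + 100;

    // Every lane paid for then + else; a CPU's branch predictor would
    // have executed only the path each input actually took.
    for (int v : out) std::printf("%d ", v);
    std::puts("");
    return 0;
}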
 
Find me the data width and MIPS of a Kepler. ...please :)
It's not "data width" and "MIPS," it's branching logic like:
Code:
// Illustrative: deeply nested, data-dependent branches -- bread and
// butter for a CPU's branch predictor, awkward for a GPU's lockstep model.
if (a == b) {
  if (c == d) {
    if (e == f) {
      if (g > h) {
        if (a < c) {
          b = a + b - c / g + h * f + e;
        }
      }
    }
  }
}
MIPS is nowhere near as flexible as x86, but x86 takes a performance penalty for that flexibility. Which is faster depends on the workload.
 