Wednesday, October 31st 2012

Intel Envisions Do-it-all 48-Core Mobile Processors

Researchers at Intel have begun groundwork on many-core processors that could drive ultra-mobile devices (such as tablets, smartphones, and Ultrabooks) in the near future. The design calls for no fewer than 48 processing cores on a single piece of silicon, built on the "single-chip cloud computer" (SCC) design. The new technology could reach the market anywhere from five to ten years from now, probably because current silicon fabrication technologies aren't advanced enough to put that much processing power into a chip that runs off a smartphone battery.

"If we're going to have this technology in five to 10 years, we could finally do things that take way too much processing power today," said Patrick Moorhead, an analyst with Moor Insights and Strategy. "This could really open up our concept of what is a computer... The phone would be smart enough to not just be a computer but it could be my computer." With devices like Microsoft Surface, which transform from a tablet to a fully-functional desktop with a dock, or a notebook with the flip of a smart cover, the definition of what constitutes a personal computer is changing. In a few years, your mobile could transform into a computing device of any shape and size for consumers.
Source: Computerworld

23 Comments on Intel Envisions Do-it-all 48-Core Mobile Processors

#1
de.das.dude
Pro Indian Modder
why not just use a GPU for computing :rolleyes:
#2
RejZoR
Sure, just like they were dreaming about 10 GHz Pentium 4s...
#4
btarunr
Editor & Senior Moderator
de.das.dude: why not just use a GPU for computing :rolleyes:
Because GPUs suck at serial loads.
#5
happita
Reminds me of how they had such high hopes for Larrabee, but then that eventually went down the toilet and we never heard about it again.
#6
NC37
Intel wanting more cores... quick, someone call AMD. I think Intel got into their secret stash!

'Course, on the plus side, maybe we'll start seeing more multithreading emphasis in computing. But then that would mean Intel is helping AMD by trying to turn the industry.
#7
Phusius
boring, I need to get laid.
#8
de.das.dude
Pro Indian Modder
btarunr: Because GPUs suck at serial loads.
My point was, why not improve the GPU arch so they can work well on a wider variety of loads?
#9
hardcore_gamer
Most software doesn't even support 4 cores. I don't think that will change dramatically 10 years from now. At launch, there won't be much software to show the capabilities of the chip, and it'll be called a fail (similar to Bulldozer).
#10
Aquinus
Resident Wat-man
hardcore_gamer: Most software doesn't even support 4 cores. I don't think that will change dramatically 10 years from now. At launch, there won't be much software to show the capabilities of the chip, and it'll be called a fail (similar to Bulldozer).
I can write code that will run with as many threads as I want, but if the same data is needed by every thread and every thread is locking the same data, you won't see more than one thread's worth of throughput. Keep in mind that multi-threading is highly dependent on the programmer and the workload being programmed. There is a point of diminishing returns, and a 40-core machine will hit it without a doubt, but your phone could play 4k video, encode a 4k video stream, play a game, and run a web site and database server without skipping a beat. AMD never marketed the FX series as fast, but they marketed it as being able to multi-task like a champ, and it does. A 40-core CPU is no different, just more cores. Intel will have to do a little more than a few die shrinks to have enough room for that, though. Intel's and AMD's 8- and 16-core processors, respectively, are already pretty big on socket 2011 and G34.
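
A minimal sketch of the contention Aquinus describes, in C++ (a hypothetical illustration, not code from the post): eight threads incrementing one counter behind a single mutex serialize on the lock, so extra cores add almost no throughput.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

long counter = 0;          // the "same data" every thread needs
std::mutex counter_lock;   // the single lock guarding it

void worker(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        // Every thread contends for the same mutex, so the increments
        // are effectively serialized no matter how many cores exist.
        std::lock_guard<std::mutex> guard(counter_lock);
        ++counter;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t)
        threads.emplace_back(worker, 1000000);
    for (auto& t : threads)
        t.join();
    std::cout << counter << "\n";  // correct result, ~one thread's throughput
}
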
#12
Morgoth
Fueled by Sapphire
happita: Reminds me of how they had such high hopes for Larrabee, but then that eventually went down the toilet and we never heard about it again.
Larrabee is now named the Intel Xeon Phi coprocessor.
#13
Velvet Wafer
Morgoth: Larrabee is now named the Intel Xeon Phi coprocessor.
Thanks for pointing that out to the public ;)

#14
adrianx
try using the Cell processor :)

Win8 will have support :)
#17
FordGT90Concept
"I go fast!1!11!1!"
de.das.dude: why not just use a GPU for computing :rolleyes:
Because CPUs can do logic many times faster than GPUs.
de.das.dude: My point was, why not improve the GPU arch so they can work well on a wider variety of loads?
Because GPUs don't use x86 instructions and Intel CPUs do.
#18
Lazzer408
Isn't this the same claim they made when their Bonnell microarchitecture (aka Atom) was incubating?
FordGT90Concept: Because CPUs can do logic many times faster than GPUs.
Find me the data width and MIPS of a Kepler. ...please :)
#19
3870x2
de.das.dude: My point was, why not improve the GPU arch so they can work well on a wider variety of loads?
Because if they did, it would be a CPU... clocked at <1000 MHz.
#20
tacosRcool
I got like 5 of those in my left pocket!

On topic: Is this really practical for mobile processing?
#21
Aquinus
Resident Wat-man
Lazzer408: Find me the data width and MIPS of a Kepler. ...please
It's about the kind of instructions that are run and how they're run. GPUs run instructions on large sets of data in parallel. There is a reason why linear applications run on the CPU while highly parallel applications (like vector math and folding) have in some cases been developed to run on a GPU using modern GPGPU languages like CUDA, OpenCL, and DirectCompute. But always keep in mind that a GPU can't do any calculations without the CPU to instruct it.

Think of a GPU as a ton of floating-point units that run the same instruction on all the FPUs at the same time. This is an oversimplification of what modern-day video cards do, but I believe it is an accurate representation of what is happening internally. A multi-core CPU can be doing vastly different things all at once, whereas a GPU has much more trouble doing this.

So all in all, I guess what I'm really trying to say is that CPU capabilities are very broad and can be applied to many purposes, whereas a GPU is considerably more specialized in the kinds of workloads it can handle.
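
To make the "same instruction on all the FPUs" picture concrete, here is a toy C++ model (an illustration only, not real GPU code): one "warp" of 32 lanes where a single operation is applied to every lane's element in lockstep.

#include <array>
#include <cstdio>

constexpr int kLanes = 32;  // think of one GPU "warp" of FPUs

// GPU-style step: the SAME operation is applied to every lane;
// each lane just works on its own element of the data.
void simd_step(std::array<float, kLanes>& data) {
    for (int lane = 0; lane < kLanes; ++lane)
        data[lane] = data[lane] * 2.0f + 1.0f;  // one instruction, all lanes
}

int main() {
    std::array<float, kLanes> data{};
    for (int i = 0; i < kLanes; ++i)
        data[i] = static_cast<float>(i);
    simd_step(data);
    std::printf("%.1f %.1f\n", data[0], data[kLanes - 1]);  // 1.0 63.0
}

A multi-core CPU, by contrast, could be running 32 completely unrelated functions at the same time.
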
#22
jihadjoe
de.das.dude: My point was, why not improve the GPU arch so they can work well on a wider variety of loads?
Because that would make them worse GPUs!

GPUs are good at what they do because their hardware is specialized for the particular task they usually run: massively parallel rendering. If you make them better at "a variety of loads", then you have to introduce stuff like out-of-order execution and branch prediction, all of which takes up space on the GPU die and makes each compute unit more complex.
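
As a toy model of what GPU hardware does instead (hypothetical C++, not how any real GPU is programmed): when the lanes of a warp disagree on a branch, the unit builds a mask and executes both sides with the inactive lanes switched off, so a divergent branch costs roughly the sum of the two paths.

#include <array>
#include <cstdio>

constexpr int kLanes = 32;

void divergent_step(std::array<float, kLanes>& data) {
    // 1. Evaluate the condition once per lane to build a mask.
    std::array<bool, kLanes> mask;
    for (int lane = 0; lane < kLanes; ++lane)
        mask[lane] = data[lane] > 0.0f;

    // 2. Run the "then" side for the active lanes...
    for (int lane = 0; lane < kLanes; ++lane)
        if (mask[lane]) data[lane] += 1.0f;

    // 3. ...then the "else" side for the rest. Both passes always run,
    //    which is why divergent branches hurt GPU throughput.
    for (int lane = 0; lane < kLanes; ++lane)
        if (!mask[lane]) data[lane] -= 1.0f;
}

int main() {
    std::array<float, kLanes> data{};
    for (int i = 0; i < kLanes; ++i)
        data[i] = static_cast<float>(i - kLanes / 2);  // half negative, half positive
    divergent_step(data);
    std::printf("%.1f %.1f\n", data[0], data[kLanes - 1]);  // -17.0 16.0
}
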
#23
FordGT90Concept
"I go fast!1!11!1!"
Lazzer408: Find me the data width and MIPS of a Kepler. ...please :)
It's not "data width" and "MIPS," it's branching logic like..
if (a == b) {
if (c == d) {
if (e == f) {
if (g > h) {
if ( a < c) {
b = a + b - c / g + h * f +e;
}
}
}
}
}
MIPS is nowhere near as flexible as x86, but x86 takes a performance penalty for that flexibility. Which is faster depends on the workload.