
Engineers Boost Computer Processor Performance By Over 20 Percent

Discussion in 'News' started by btarunr, Feb 8, 2012.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,410 (11.30/day)
    Thanks Received:
    13,615
    Location:
    Hyderabad, India
    Researchers from North Carolina State University have developed a new technique that allows graphics processing units (GPUs) and central processing units (CPUs) on a single chip to collaborate – boosting processor performance by an average of more than 20 percent.

    “Chip manufacturers are now creating processors that have a ‘fused architecture,’ meaning that they include CPUs and GPUs on a single chip,” says Dr. Huiyang Zhou, an associate professor of electrical and computer engineering who co-authored a paper on the research. “This approach decreases manufacturing costs and makes computers more energy efficient. However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren’t as efficient as they could be. That’s the issue we’re trying to resolve.”

    GPUs were initially designed to execute graphics programs, and they are capable of executing many individual functions very quickly. CPUs, or the “brains” of a computer, have less computational power – but are better able to perform more complex tasks.

    “Our approach is to allow the GPU cores to execute computational functions, and have CPU cores pre-fetch the data the GPUs will need from off-chip main memory,” Zhou says.

    “This is more efficient because it allows CPUs and GPUs to do what they are good at. GPUs are good at performing computations. CPUs are good at making decisions and flexible data retrieval.”

    In other words, CPUs and GPUs fetch data from off-chip main memory at approximately the same speed, but GPUs can execute the functions that use that data more quickly. So, if a CPU determines what data a GPU will need in advance, and fetches it from off-chip main memory, that allows the GPU to focus on executing the functions themselves – and the overall process takes less time.
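    The division of labor described above — one unit running ahead to stage data while the other does the heavy computation — can be sketched on the CPU side alone. The following is a loose illustration, not the paper's actual mechanism: a bounded queue stands in for the shared cache, a "prefetch" thread stands in for the assisting CPU core, and the consumer stands in for the GPU kernel. All names and values are invented for the sketch.

    ```python
    import threading
    import queue

    def slow_fetch(chunk_id):
        """Stand-in for an off-chip main-memory access."""
        return list(range(chunk_id * 4, chunk_id * 4 + 4))

    def prefetcher(num_chunks, staged):
        # Runs ahead of the consumer, staging chunks in order.
        for cid in range(num_chunks):
            staged.put((cid, slow_fetch(cid)))

    def compute(num_chunks, staged):
        # Stands in for the GPU kernel: by the time it asks for a chunk,
        # the prefetcher has usually already staged it.
        total = 0
        for _ in range(num_chunks):
            cid, data = staged.get()
            total += sum(data)
        return total

    def run(num_chunks=8, depth=2):
        # Bounded queue: the prefetcher stays only a little ahead,
        # the way a small shared cache would limit run-ahead distance.
        staged = queue.Queue(maxsize=depth)
        t = threading.Thread(target=prefetcher, args=(num_chunks, staged))
        t.start()
        result = compute(num_chunks, staged)
        t.join()
        return result
    ```

    The bounded queue depth matters: let the helper run too far ahead and the staged data would be evicted from a real cache before the consumer reaches it.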

    In preliminary testing, Zhou’s team found that its new approach improved fused processor performance by an average of 21.4 percent.

    This approach has not been possible in the past, Zhou adds, because CPUs and GPUs were located on separate chips.

    The paper, “CPU-Assisted GPGPU on Fused CPU-GPU Architectures,” will be presented Feb. 27 at the 18th International Symposium on High Performance Computer Architecture, in New Orleans. The paper was co-authored by NC State Ph.D. students Yi Yang and Ping Xiang, and by Mike Mantor of Advanced Micro Devices (AMD). The research was funded by the National Science Foundation and AMD.

    The paper abstract follows.

    “CPU-Assisted GPGPU on Fused CPU-GPU Architectures”

    Authors: Yi Yang, Ping Xiang, Huiyang Zhou, North Carolina State University; Mike Mantor, Advanced Micro Devices

    Presented: Feb. 27, 18th International Symposium on High Performance Computer Architecture, New Orleans

    Abstract: This paper presents a novel approach to utilize the CPU resource to facilitate the execution of GPGPU programs on fused CPU-GPU architectures. In our model of fused architectures, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar to the latest Intel Sandy Bridge and AMD accelerated processing unit (APU) platforms. In our proposed CPU-assisted GPGPU, after the CPU launches a GPU program, it executes a pre-execution program, which is generated automatically from the GPU kernel using our proposed compiler algorithms and contains memory access instructions of the GPU kernel for multiple thread blocks. The CPU pre-execution program runs ahead of GPU threads because (1) the CPU pre-execution thread only contains memory fetch instructions from GPU kernels and not floating-point computations, and (2) the CPU runs at higher frequencies and exploits higher degrees of instruction-level parallelism than GPU scalar cores. We also leverage the prefetcher at the L2 cache on the CPU side to increase the memory traffic from the CPU. As a result, the memory accesses of GPU threads hit in the L3 cache and their latency can be drastically reduced. Since our pre-execution is directly controlled by user-level applications, it enjoys both high accuracy and flexibility. Our experiments on a set of benchmarks show that our proposed pre-execution improves the performance by up to 113% and 21.4% on average.
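    The abstract's "pre-execution program" is, roughly, the kernel with its computation stripped out, keeping only the memory-access pattern. A hypothetical illustration of that slicing follows — the kernel, block size, and helper names are all invented here; in the paper this slice is generated automatically by compiler algorithms, not written by hand.

    ```python
    BLOCK_SIZE = 4  # threads per block (invented value for the sketch)

    def kernel_thread(tid, a, b, out):
        """Full 'GPU kernel' work for one thread: load, compute, store."""
        out[tid] = a[tid] * 2.0 + b[tid]

    def pre_execution_addresses(block_id):
        """Address-only slice for one thread block: the same indices the
        kernel will read, with the floating-point arithmetic dropped."""
        base = block_id * BLOCK_SIZE
        return [base + t for t in range(BLOCK_SIZE)]

    def cpu_assist(num_blocks, a, b, cache):
        # The CPU runs ahead, touching every address the kernel will need
        # for upcoming thread blocks; the dict stands in for the shared
        # L3 cache being warmed.
        for blk in range(num_blocks):
            for idx in pre_execution_addresses(blk):
                cache[idx] = (a[idx], b[idx])
    ```

    Because the slice contains only address computation and loads, it can run well ahead of the kernel proper — which is the whole point of the pre-execution thread.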
  2. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Many Thanks to tigger for the tip.
  3. FreedomEclipse

    FreedomEclipse ~Technological Technocrat~

    Joined:
    Apr 20, 2007
    Messages:
    13,597 (5.06/day)
    Thanks Received:
    2,236
    For a moment, I thought there was going to be hope for BD :p

    /troll
  4. NC37

    NC37

    Joined:
    Oct 30, 2008
    Messages:
    1,183 (0.56/day)
    Thanks Received:
    264
    Saw this coming; predicted it even before APUs came out. When NV showcased using GPUs for CPU tasks years ago, it was like one massive hint of where future tech was going. But can AMD capitalize on it? Curious to see. Intel can utilize the same idea, but their GPU tech is so far behind that I could see them leveraging the CPU side even more to compensate. So then it's a matter of how far AMD can take it to offset their weakness on the x86 side.

    Either way, forces both companies to innovate. Innovation is good!!
  5. erixx

    erixx

    Joined:
    Mar 24, 2010
    Messages:
    3,264 (2.02/day)
    Thanks Received:
    435
    Ha! Wait a second! Didn't they say (and we believed it) that we've had this feature since we installed our first PhysX-enabled video card? hohoho! HOHOHOHO!
  6. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,375 (2.07/day)
    Thanks Received:
    562
    Location:
    Manchester uk
    +1 :) But ARM and Nvidia IMHO make this more than a two-horse race from here on, so I'm liking AMD's open-standards policy regarding HSA; hopefully most innovators will at least try to get those standards working across platforms. But I've a fiver that says Nvidia makes up some more stuff only they can use.
  7. naoan New Member

    Joined:
    Jul 12, 2009
    Messages:
    304 (0.16/day)
    Thanks Received:
    62
    This stuff would probably remain an abstraction unless AMD takes an aggressive stance.
  8. theoneandonlymrk

    theoneandonlymrk

    Joined:
    Mar 10, 2010
    Messages:
    3,375 (2.07/day)
    Thanks Received:
    562
    Location:
    Manchester uk
    Hopefully. I prefer open standards that everybody works to; that way the devs have to work harder to make their chip better than the others, rather than trying to differentiate with under-used additional features that have to be specifically tailored for.

    And just when you start to think your PC might last a while, tutt — by next year I'll be looking at mine with that "hmmmm, upgrade time" eye :)
  9. D4S4

    D4S4

    Joined:
    Mar 27, 2008
    Messages:
    697 (0.30/day)
    Thanks Received:
    75
    Location:
    Zagreb, Croatia
    This looks very good for AMD in the coming years.
  10. xenocide

    xenocide

    Joined:
    Mar 24, 2011
    Messages:
    2,133 (1.70/day)
    Thanks Received:
    458
    Keep in mind, they haven't actually built anything physical yet. With the help of AMD engineers they modeled how a supposed performance gain could potentially occur, but have yet to get it functioning. When APUs were first introduced I figured they would find a way to have the GPU and CPU process simultaneously when a discrete GPU solution was present, but apparently they didn't care much for developing that idea. This entire study should be taken with a truckload of grains of salt.
  11. D4S4

    D4S4

    Joined:
    Mar 27, 2008
    Messages:
    697 (0.30/day)
    Thanks Received:
    75
    Location:
    Zagreb, Croatia
    Well, they can't use it now since there is no software support for something like this. But in 5-10 years... Intel has nothing like this, and I bet heterogeneous computing is going to gain some serious ground in the near future simply because AMD made a chip that makes it commercially viable.
  12. Static~Charge

    Static~Charge

    Joined:
    Nov 2, 2008
    Messages:
    471 (0.22/day)
    Thanks Received:
    85
    What happens if the GPU is busy doing video-related work and the CPU throws a calculation request at it? Does the display stutter or freeze? Or does the GPU perform the calculations more slowly? In that case, the CPU might be able to perform the calculations faster just because it isn't bogged down with other work. Definitely an issue to take into consideration.
