
AMD Charts Path for Future of its GPU Architecture

Discussion in 'News' started by btarunr, Jun 17, 2011.

  1. AsRock

    AsRock TPU addict

    Joined:
    Jun 23, 2007
    Messages:
    10,455 (4.20/day)
    Thanks Received:
    1,565
    Location:
    US
    Maybe AMD's way is better, but is it wiser to do what Nvidia started? As we all know, companies have not really supported AMD all that well, and we also know AMD doesn't have shedloads of money to get something fully supported.

    Not trying to say you're wrong, just saying we don't know both sides of the story or the reasoning behind it.
  2. Casecutter

    Joined:
    Apr 19, 2011
    Messages:
    1,069 (0.98/day)
    Thanks Received:
    75
    Location:
    So. Cal.
    Nvidia bought a physics company… AMD bought a graphics company. So yes, it makes sense that Nvidia wanted a lead and got one, and that they kept it (as much as they could) as proprietary intellectual property, which is understandable.

    AMD got in on the graphics side, dusting off ATI and getting them back into contention, all along wanting to achieve this. It just takes time, and the research is bearing fruit.

    The best part for us is that AMD appears to be sticking to open specifications, and that will really make more developers want in.
  3. bucketface

    bucketface New Member

    Joined:
    Apr 21, 2010
    Messages:
    142 (0.10/day)
    Thanks Received:
    19
    Location:
    Perth, Australia
    This could potentially lead to CUDA becoming open. If AMD can get enough support from developers, Nvidia will have to open it up or risk seeing it fall by the wayside in favor of something that supports both.
  4. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.60/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
    I always thought they failed because they didn't have good enough graphics drivers, and as a pure accelerator it would not have been financially viable. I remember reading that for HPC Larrabee was good enough, but you know better than I do that for these big chips to be viable, you need the consumer market in order to have some volume and refine the process, bin chips, etc. Even a small market like the enthusiast GPU market, with less than 1 million cards sold, is far more than the tens of thousands of HPC cards you can sell. At least for now. Maybe in a few years, with more demand, it would make sense to create a different chip for HPC, but then again the industry is moving in the opposite direction, and I think it's the right direction.

    Eh? No. CUDA will most probably disappear sometime in the future, when OpenCL catches on. OpenCL is 95% similar to CUDA anyway, if you believe CUDA/OpenCL developers, and it's free, so Nvidia doesn't gain anything from the use of CUDA. It's not going away right now, and probably not in 1 or 2 years either, because Nvidia keeps updating CUDA and stays way ahead with more features (the advantage of not depending on standardization by a consortium). At some point it should stagnate, and OpenCL should be able to catch up, even if its evolution depends on the Khronos group.
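    To see how similar the two really are, here's a minimal sketch (my own illustration, not from the article): the same element-wise add written as a CUDA kernel, with the OpenCL C equivalent in the comment below it. Only the qualifiers and the index lookup differ.

        // CUDA: add two float arrays element-wise, one thread per element.
        __global__ void vec_add(const float *a, const float *b, float *c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
            if (i < n)
                c[i] = a[i] + b[i];
        }

        /* OpenCL C equivalent; the differences are mostly spelling:
         *
         * __kernel void vec_add(__global const float *a,
         *                       __global const float *b,
         *                       __global float *c, int n)
         * {
         *     int i = get_global_id(0);
         *     if (i < n)
         *         c[i] = a[i] + b[i];
         * }
         */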
    Last edited: Jun 17, 2011
  5. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (5.03/day)
    Thanks Received:
    1,698
    Location:
    ENGLAND-LAND-LAND

    No they haven't, man, they just don't bang on about it.

    They talk directly to developers and have had a forum running for years where people can communicate about it.

    Go on the AMD developer forums to see : ]
    WarEagleAU says thanks.
  6. bucketface

    bucketface New Member

    Joined:
    Apr 21, 2010
    Messages:
    142 (0.10/day)
    Thanks Received:
    19
    Location:
    Perth, Australia
    All I was saying is, if Nvidia plans on seeing CUDA through the next 5 years or so, they'll almost certainly have to open it up. I don't know the specifics of CUDA vs OpenCL, but my understanding was that CUDA, as it stands, is the more robust platform.
  7. St.Alia-Of-The-Knife New Member

    Joined:
    Mar 9, 2011
    Messages:
    195 (0.17/day)
    Thanks Received:
    32
    Location:
    Montreal, Canada
    "Full GPU support of C, C++ and other high-level languages"

    I know that the GPU is way faster than the CPU,
    so does this mean the GPU will replace the CPU in common tasks too?
  8. seronx

    seronx

    Joined:
    Jul 10, 2010
    Messages:
    981 (0.71/day)
    Thanks Received:
    216
    Location:
    USA, Arizona, Maricopa
    1. The architecture explained in the diagram at the links below is the HD 7000:

    VLIW5 -> VLIW4 -> ACE or CU

    http://www.realworldtech.com/forums/index.cfm?action=detail&id=120431&threadid=120411&roomid=2
    http://www.realworldtech.com/

    ^ Wait for the article there.

    2. In two years you will see this GPU in the Z-series APU (the tablet APU).

    In a cloud future, yes, the CPU will only need to issue commands.

    AMD GPUs have been GPGPU capable ever since the high-end GPUs could do double precision.

    This architecture just allows a bigger jump (ahead of Kepler).

    Nvidia was very late; some late 200-series cards can do DX10.1, but not very well.

    The reason they are changing is not the GPGPU issue, it's more the scaling issue:

    Theoretical -> realistic
    Performance didn't scale correctly

    It's all over the place. Well, scaling is a GPGPU issue too, but this architecture should at least allow for better scaling.
    Last edited: Jun 17, 2011
  9. Neuromancer

    Neuromancer

    Joined:
    May 23, 2008
    Messages:
    379 (0.18/day)
    Thanks Received:
    64
    Location:
    South Jersey
    Wow, how times are turning backwards.

    I got me a new math co-processor!
  10. Sapientwolf New Member

    Joined:
    Aug 23, 2006
    Messages:
    57 (0.02/day)
    Thanks Received:
    1
    The GPU is faster than the CPU at arithmetic operations that can occur in parallel (like video and graphics). The CPU is much faster at sequential logic. The CPU has been tailored toward its area and the GPU to its own as well. However, now we see the gray area between the two growing more and more. So AMD is working hard to build platforms in which the CPU can offload highly parallel arithmetic loads to their GPUs, and to make it easier for programmers to program their GPUs outside the realm of DirectX and OpenGL.

    One will not replace the other; they will merge, and instructions will be executed on the hardware best suited for the job.
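    A minimal sketch of that offload pattern, assuming the plain CUDA runtime API (the kernel and all names here are illustrative, not anyone's actual code): the CPU handles the sequential staging, the GPU crunches the data-parallel arithmetic.

        #include <cuda_runtime.h>
        #include <stdio.h>
        #include <stdlib.h>

        // Data-parallel work the CPU hands off to the GPU: square every element.
        __global__ void square(float *data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                data[i] *= data[i];
        }

        int main(void)
        {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            // Sequential logic stays on the CPU: allocate and fill the input.
            float *host = (float *)malloc(bytes);
            for (int i = 0; i < n; i++)
                host[i] = (float)i;

            // Offload: copy to the GPU, run the parallel kernel, copy back.
            float *dev;
            cudaMalloc(&dev, bytes);
            cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
            square<<<(n + 255) / 256, 256>>>(dev, n);
            cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

            printf("host[3] = %f\n", host[3]);  // expect 9.0

            cudaFree(dev);
            free(host);
            return 0;
        }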
    Neuromancer says thanks.
  11. Hayder_Master

    Hayder_Master

    Joined:
    Apr 21, 2008
    Messages:
    5,160 (2.36/day)
    Thanks Received:
    637
    Location:
    IRAQ-Baghdad
    So they point to a big improvement in performance, and only benchmarks can prove it.
  12. Disruptor4

    Joined:
    Jun 3, 2008
    Messages:
    219 (0.10/day)
    Thanks Received:
    21
    Well, probably not only benchmarks. You will see a decrease in the time it takes to process certain things, similar to how decoding and re-encoding can be done by the GPU in certain programs.
  13. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.54/day)
    Thanks Received:
    69
    The dedicated decoder will handle that job, more than likely.
  14. xtremesv

    xtremesv

    Joined:
    Mar 11, 2010
    Messages:
    115 (0.08/day)
    Thanks Received:
    11
    The future is fusion, remember? CPU and GPU becoming one. It's going to happen, I believe it, but are we really going to "own" it?

    These days cloud computing is starting to make some noise, and it makes sense: average guys/gals are not interested in FLOPS performance, they just want to listen to their music, check Facebook and play some fun games. What I'm saying is that in the future we'll only need a big touchscreen with a mediocre ARM processor to play Crysis V. The processing, you know, the GPGPU, the heat and stuff, will be somewhere in China, and we'll be given just what we need, the final product, through a huge broadband pipe. If you've seen the movie WALL-E, think about living on that spaceship, the Axiom; it'd be something like that... creepy, eh?
  15. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.82/day)
    Thanks Received:
    3,776
    You know, AMD has always had these great GPU hardware features (the ability to crunch numbers, like in physics), then promised us great software to run on it (GPU-accelerated Havok, anyone?), but the software never materializes.

    I'll get excited about this when it is actually being implemented by devs in products I can use.
    Last edited: Jun 18, 2011
  16. seronx

    seronx

    Joined:
    Jul 10, 2010
    Messages:
    981 (0.71/day)
    Thanks Received:
    216
    Location:
    USA, Arizona, Maricopa
    Well, by 2013:

    The APU
    with
    Enhanced Bulldozer + Graphics Core Next
    will be in perfect unison,

    and

    FX + AMD Radeon 9900 series
    Next-Gen Bulldozer + Next-Gen Graphics Core Next
    with DDR4 + PCIe 3.0 will equal MAXIMUM POWUH!!!

    :rockout::rockout::rockout: :rockout::rockout::rockout: :rockout::rockout::rockout:
  17. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (5.03/day)
    Thanks Received:
    1,698
    Location:
    ENGLAND-LAND-LAND
    I know it's only one thing, but 3DMark 11 does soft-body simulation on the GPU on both AMD and Nvidia cards.

    Only one thing, but it does point to things to come, I think.
  18. W1zzard

    W1zzard Administrator Staff Member

    Joined:
    May 14, 2004
    Messages:
    14,544 (4.01/day)
    Thanks Received:
    11,218
    yup .. i don't think amd has successfully brought any software feature to market .. maybe x86_64, if you count intel adopting it

    i agree, but why does amd waste their money on useless computation features that apparently have nowhere to go other than video encode and some hpc apps?
    if there was some killer application for gpu computing, wouldn't nvidia/cuda have found it by now?
  19. RejZoR

    RejZoR

    Joined:
    Oct 2, 2004
    Messages:
    4,295 (1.23/day)
    Thanks Received:
    840
    Location:
    Europe/Slovenia
    And even that hyped video encoding is mostly done on the CPU, which makes it pretty useless, as it's not much faster than a pure CPU anyway. They were bragging about physics as well, but they never made them. Not with Havok, not with Bullet, not with anything. I mean, if you want to make something, endorse it, embrace it, give developers a reason to develop and build on that feature.
    Instead they announce it, brag about it, and then we can all forget about it, as it'll never happen.
    They should invest those resources into more productive things instead of wasting them on such useless stuff.

    The only thing they pulled off properly is MLAA, which uses shaders to process the screen and anti-alias it. It works great in pretty much 99.9% of games, it is what they promised, and I hope they won't remove it like they did most of their features (Temporal AA, TruForm, SmartShaders, etc.). Sure, some technologies became redundant, like TruForm, but others just died because AMD didn't bother to support them. SmartShaders were a good example: HDRish was awesome, giving old games a fake HDR effect that looked pretty good, but it worked only in OpenGL, and someone else had to make it. AMD never added anything useful for D3D, which is what most games use. So what's the point?! They should really get their act together, stop wasting time and resources on useless stuff, and start making cool features that can last. Like, again, MLAA.
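    For the curious, the first stage of an MLAA-style filter is simple enough to sketch in code. This is my own illustrative CUDA version of the edge-detection pass (AMD's actual driver implementation is not public): flag pixels whose brightness differs sharply from the neighbor above or to the left; later passes then classify the edge shapes and blend along them.

        // Rec. 601 luma from an 8-bit RGBA pixel.
        __device__ float luma(uchar4 p)
        {
            return 0.299f * p.x + 0.587f * p.y + 0.114f * p.z;
        }

        // MLAA stage 1 (illustrative): mark discontinuities against the
        // left and top neighbors. Later stages would classify edge shapes
        // (L, Z, U patterns) and blend along them.
        __global__ void detect_edges(const uchar4 *img, unsigned char *edges,
                                     int w, int h, float threshold)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= w || y >= h) return;

            float c    = luma(img[y * w + x]);
            float left = (x > 0) ? luma(img[y * w + x - 1]) : c;
            float top  = (y > 0) ? luma(img[(y - 1) * w + x]) : c;

            unsigned char e = 0;
            if (fabsf(c - left) > threshold) e |= 1;  // bit 0: left edge
            if (fabsf(c - top)  > threshold) e |= 2;  // bit 1: top edge
            edges[y * w + x] = e;
        }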
  20. swaaye

    Joined:
    May 31, 2005
    Messages:
    224 (0.07/day)
    Thanks Received:
    15
    ATI's support of GPGPU hasn't been as great as some say here. OpenCL support only goes back to the HD 4000 series, because older chips have limitations that make it basically infeasible. In other words, the HD 3000 and 2000 are very poor GPGPU chips, and the X1900 isn't really even worth mentioning.

    You can, on the other hand, run CUDA on the old G80. NV has definitely been pushing GPGPU harder.

    On the other, other hand, I can't say that GPGPU affects me whatsoever. I think AMD is mostly after that Tesla market and Photoshop filters. I won't be surprised if this architecture is less efficient for graphics; I sense a definite divergence from just making beefier graphics accelerators. NV's chips have proven with their size that GPGPU features don't really mesh with graphics speed.
    Last edited: Jun 18, 2011
  21. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.54/day)
    Thanks Received:
    69

    Even at light speed the latencies will kill you; there is no way around client-side power. Resist the cloud, it's bullshit anyway.
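    To put a rough number on that (my arithmetic, not the poster's): light in fiber travels at about 200,000 km/s, so a server 1,500 km away means a 3,000 km round trip, i.e. 3,000 / 200,000 s = 15 ms as a hard physical floor before any routing, encoding or rendering time. That is most of a 16.7 ms frame budget at 60 fps, gone before the server does any work.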
  22. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.54/day)
    Thanks Received:
    69
    Because soon enough the hardware will do the work anyway. It's not always about software. As for Nvidia, they painted themselves into a corner years ago.
  23. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.54/day)
    Thanks Received:
    69

    They should call D3D "round about the bend, down the street, up the alley, over two blocks and in the ditch 3D", because it sure as shit ain't direct. AMD will move away from DirectX; they see where the market is headed.
  24. W1zzard

    W1zzard Administrator Staff Member

    Joined:
    May 14, 2004
    Messages:
    14,544 (4.01/day)
    Thanks Received:
    11,218
    the market is headed toward console games that are directx (xbox360) and that get recompiled with a few clicks for pc to maximize developer $$
  25. RejZoR

    RejZoR

    Joined:
    Oct 2, 2004
    Messages:
    4,295 (1.23/day)
    Thanks Received:
    840
    Location:
    Europe/Slovenia
    Exactly. If they try to invent something new and don't push it enough, like they never really did with anything, they are just plain stupid. DirectX is the way to go at the moment, mostly because of what W1z said. Profit.
