
AMD Charts Path for Future of its GPU Architecture

Discussion in 'News' started by btarunr, Jun 17, 2011.

  1. btarunr

    btarunr Editor & Senior Moderator Staff Member

    Joined:
    Oct 9, 2007
    Messages:
    28,374 (11.31/day)
    Thanks Received:
    13,607
    Location:
    Hyderabad, India
The future of AMD's GPU architecture looks more open: freed from the shackles of a fixed-function, DirectX-driven evolution model, with the GPU taking on a far larger role in the PC's central processing than merely accelerating GPGPU applications. At the Fusion Developer Summit, AMD detailed its future GPU architecture, revealing that its upcoming GPUs will have full support for C, C++, and other high-level languages. Integrated with Fusion APUs, these new number-crunching components will be called "scalar co-processors".

Scalar co-processors will combine elements of MIMD (multiple-instruction, multiple-data), SIMD (single-instruction, multiple-data), and SMT (simultaneous multithreading). AMD will ditch the VLIW (very long instruction word) model that has been in use across several of its past GPU architectures. While AMD's GPU model will break from a development path pegged to that of DirectX, the company doesn't believe that APIs such as DirectX and OpenGL will be discarded. Game developers can continue to target these APIs; C++ support is aimed more at general-purpose compute applications. It does, however, open a window for game developers to venture outside the API-based development model (specifically DirectX). With its next Fusion processors, the GPU and CPU components will share a truly common memory address space. Among other things, this eliminates the "glitching" players sometimes experience when games load textures as they go over the crest of a hill.
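    Why dropping VLIW matters can be sketched in a few lines. The toy scheduler below (entirely hypothetical, not AMD's or any real compiler's algorithm) shows the core problem: a VLIW machine needs the compiler to statically pack independent instructions into each bundle, so a dependency chain leaves most slots empty, while independent work packs densely. A scalar/SIMD design sidesteps this by issuing instructions dynamically.

    ```python
    # Hypothetical sketch of static VLIW bundling (names and widths are
    # illustrative only). An instruction may join the current bundle only
    # if it does not read a register written earlier in that same bundle.

    def vliw_bundles(instrs, width=4):
        """Greedily pack (dst, srcs) instructions into VLIW bundles."""
        bundles = []
        current, written = [], set()
        for dst, srcs in instrs:
            if len(current) == width or written & set(srcs):
                bundles.append(current)          # flush: slot limit or dependency
                current, written = [], set()
            current.append(dst)
            written.add(dst)
        if current:
            bundles.append(current)
        return bundles

    # A serial dependency chain: each op consumes the previous result.
    chain = [("r1", ["r0"]), ("r2", ["r1"]), ("r3", ["r2"]), ("r4", ["r3"])]
    # Four fully independent ops reading the same input.
    indep = [("r1", ["r0"]), ("r2", ["r0"]), ("r3", ["r0"]), ("r4", ["r0"])]

    print(len(vliw_bundles(chain)))  # 4 bundles, 3 of 4 slots idle in each
    print(len(vliw_bundles(indep)))  # 1 bundle, all 4 slots full
    ```

    Graphics shaders tend to look like `indep` (lots of independent pixel math), which is why VLIW served gaming well; general-purpose compute code looks much more like `chain`, which is the case the new scalar design targets.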


    Source: TechReport
    D007 and cadaveca say thanks.
  2. MxPhenom 216

    MxPhenom 216 Corsair Fanboy

    Joined:
    Aug 31, 2010
    Messages:
    9,885 (6.81/day)
    Thanks Received:
    2,183
    Location:
    Seattle, WA
This is looking awesome. Exciting to see new architecture from AMD. I want to see what NVIDIA has going on too.
  3. Dj-ElectriC

    Dj-ElectriC

    Joined:
    Aug 13, 2010
    Messages:
    2,157 (1.47/day)
    Thanks Received:
    819
Oh gawd, this architecture had better be good, AMD. I've been waiting for ages.
  4. NC37

    NC37

    Joined:
    Oct 30, 2008
    Messages:
    1,180 (0.56/day)
    Thanks Received:
    263
Hopefully it won't turn into another DX10.1: ATI does it, but NV says no, so the industry caves to NV.

    Course this is much bigger. Saw this coming. Our CPUs are gonna be replaced by GPUs eventually. Those who laughed at AMD's purchase of ATI...heh. Nice move and I guess it makes more sense to ditch the ATI name if you are gonna eventually merge the tech even more. Oh well, I still won't ever call their discrete GPUs AMD.
  5. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
Fermi already does most of those things, so it's quite the opposite. Many of the new features in AMD's design were implemented in G80 five years ago, or later in GT200. AMD is way behind on this, and it's almost funny to see that they are going to follow the same architectural principle Nvidia has been using for the past five years. Of course, they are going to make the jump all at once instead of doing it gradually like Nvidia did, but that's only possible thanks to Nvidia doing the hard work and opening doors for years.
  6. Over_Lord

    Over_Lord News Editor

    Joined:
    Oct 13, 2010
    Messages:
    751 (0.53/day)
    Thanks Received:
    86
    Location:
    Manipal
Wow, and to think everybody had already written off HD7000 as HD6000 on 28nm with minor improvements. This is BIG!!
  7. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
Well, this is AMD's new architecture, which does not mean it's the next chip. HD7000 is probably what it was said to be: an evolution of HD6000. Of course it could be this new architecture, but that's not very likely, since HD7000 supposedly taped out some months ago.

    Also the article in TechReport says:

Don't you think that, with less than six months left until the HD7000 release, it would already be time to talk about specific products?


"Extend to discrete GPU" is the last step, which suggests that will happen in two generations. This is for Fusion only, at least for now, it seems. Not in vain is the new architecture called FSA: Fusion System Architecture.
    Last edited: Jun 17, 2011
  8. Shihabyooo

    Shihabyooo

    Joined:
    Jan 10, 2011
    Messages:
    566 (0.43/day)
    Thanks Received:
    110
    Location:
    A sad excuse of a country called Sudan.
    I thought Nvidia's already covered most of these features.
    I think I'll just wait for Kepler and Maxwell.
  9. techtard

    techtard

    Joined:
    Sep 4, 2009
    Messages:
    930 (0.51/day)
    Thanks Received:
    204
ATI already had something like this for quite a while. It was called Stream, and it was pretty bad. AMD rebranded it as AMD APP and it's a little better, but it sounds like they are finally serious about HPC.
Either that, or they have been forced to adopt the Nvidia route due to entrenched CUDA and Nvidia-paid de-optimizations for Folding and other parallel computing.
  10. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,886 (0.97/day)
    Thanks Received:
    378
    Location:
    Singapore
    Here's the original article:

    http://www.pcper.com/reviews/Graphi...ecture-Overview-Southern-Isle-GPUs-and-Beyond

It seems this is indeed the basis for the HD7000 Southern Islands architecture. This will be interesting...

From what I understand, it sounds very similar to the old SPARC HPC processors... What I'm worried about is that such a drastic design change may require an even more drastic change on the software side, which could alienate the already limited number of developers backing AMD...
    Last edited: Jun 17, 2011
  11. Over_Lord

    Over_Lord News Editor

    Joined:
    Oct 13, 2010
    Messages:
    751 (0.53/day)
    Thanks Received:
    86
    Location:
    Manipal
So you mean to say they'll showcase it to us before tape-out?
  12. Mistral

    Mistral

    Joined:
    Feb 23, 2008
    Messages:
    404 (0.17/day)
    Thanks Received:
    58
    Location:
    Montreal
    I blame Carmack for this!

    Thanks Carmack...
  13. HalfAHertz

    HalfAHertz

    Joined:
    May 4, 2009
    Messages:
    1,886 (0.97/day)
    Thanks Received:
    378
    Location:
    Singapore
I think you're underestimating AMD's efforts. I highly doubt they have been sitting on their thumbs all these years, relying purely on Nvidia to make all the breakthroughs ;) The fact that they didn't implement it straight away in their end products doesn't mean they haven't been experimenting with such technologies internally. No company would invest in a product until it is financially viable to produce and there is a sufficient market for it, right?
    WarEagleAU says thanks.
  14. Pijoto New Member

    Joined:
    Jan 7, 2011
    Messages:
    10 (0.01/day)
    Thanks Received:
    0
I was holding out for the HD7000 series for an upgrade, but now I should probably wait for the HD8000 series instead for the new architecture changes... my Radeon 4650 barely runs some newer games at 720p :banghead:
  15. RejZoR

    RejZoR

    Joined:
    Oct 2, 2004
    Messages:
    4,524 (1.25/day)
    Thanks Received:
    891
    Location:
    Europe/Slovenia
Is it just me, or is this a way for AMD to run away from x86 by executing high-level languages directly on the GPU? Though I have no idea if this thing relies on x86 or is a whole thing of its own.
  16. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    8,181 (2.55/day)
    Thanks Received:
    1,137
And while a different approach had been taken by ATI for years, they still had top performers in most fields, and still pioneered GPU compute with their early X series of cards.


I am excited to get both on a more common platform though, and as much as I like my 5870, I have been wanting a green card for better GTA performance.
    WarEagleAU says thanks.
  17. theeldest

    theeldest

    Joined:
    Feb 7, 2006
    Messages:
    652 (0.21/day)
    Thanks Received:
    140
    Location:
    Austin, TX

As I understand it, it should be just the opposite. They're working to make using the GPU transparent to developers. Microsoft was showing off C++ AMP at the conference, where you can use the same executable and run it on the CPU, integrated GPU, or discrete GPU with no changes.
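    The "write once, run on any device" idea behind C++ AMP can be sketched roughly as follows. This is a toy Python model of the dispatch concept only, with all names hypothetical; the real API is C++ (`parallel_for_each` over an extent, with `array_view`-wrapped data), and a real runtime would actually move data to and launch work on the chosen accelerator.

    ```python
    # Toy model of device-transparent kernel dispatch (names hypothetical,
    # not the actual C++ AMP API). The kernel is written once; the runtime
    # picks a backend, and the kernel source never changes.

    def saxpy_kernel(i, a, x, y):
        """Per-element kernel: y[i] = a * x[i] + y[i]."""
        y[i] = a * x[i] + y[i]

    def parallel_for_each(extent, kernel, *args, backend="cpu"):
        # A real runtime would dispatch to an iGPU or dGPU here; this model
        # only captures that the *same* kernel runs regardless of backend.
        assert backend in ("cpu", "igpu", "dgpu")
        for i in range(extent):
            kernel(i, *args)

    x, y = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
    parallel_for_each(len(x), saxpy_kernel, 2.0, x, y, backend="cpu")
    print(y)  # [12.0, 24.0, 36.0]
    ```

    Swapping `backend="cpu"` for `"dgpu"` changes nothing in the kernel, which is exactly the developer-transparency point being made above.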
  18. Benetanegia

    Benetanegia New Member

    Joined:
    Sep 11, 2009
    Messages:
    2,683 (1.49/day)
    Thanks Received:
    694
    Location:
    Reaching your left retina.
Nvidia has been much more in contact with their GPGPU customers, asking what they needed and implementing it, and once it was implemented and tested, asking what's next and implementing that too. They have been getting the answers, and now AMD only had to implement them. Nvidia has also been investing a lot in universities to teach and promote GPGPU for a very long time, much sooner than anyone else thought about promoting the GPGPU route.

    AMD has followed a totally passive approach because that's the cheaper approach. I'm not saying that's a bad strategy for them, but they have not fought for the GPGPU side until very recently.

In fact, yes. Entrepreneurial companies constantly invest in products whose viability is still in question and whose markets are small. They create the market.

    There's nothing wrong in being one of the followers, just give credit where credit is due. And IMO AMD deserves none.

They have had top performers in gaming. Other than that, Nvidia has been way ahead in professional markets.

And AMD did not pioneer GPGPU. It was a group at Stanford who did it; yes, they used X1900 cards, and yes, AMD collaborated, but that's far from pioneering it, and it was not really GPGPU anyway, as it mostly used DX and OpenGL for doing math. By the time that was happening, Nvidia had already been working on GPGPU in their architecture for years, as can be seen with the launch of G80 only a few months after the introduction of X1900.

That, for sure, is a good thing. My comments were just about how funny it is that after so many years of AMD promoting VLIW, telling everyone and their dog that VLIW was the way to go and a much better approach, and even downplaying and mocking Fermi, they are now going to do the same thing Nvidia has been doing for years.

I already predicted this change in direction a few years ago anyway. When Fusion was first promoted, I knew they would eventually move in this direction, and I also predicted that Fusion would be a turning point in how aggressively AMD would promote GPGPU. And that's been the case. I have no love (nor hate) for AMD, for this simple reason: I understand they are the underdog and need some marketing on their side too, but they always sell themselves as the good company while doing nothing but downplay others' strategies until they are able to follow them, which they ultimately do. Just a few months ago (the HD6000 introduction), VLIW was the only way to go, almost literally the godsend, while Fermi was mocked as the wrong way to go. I knew it was all marketing BS, and now it's been demonstrated, but I guess people have short memories, so it works for them. Oh well, all these fancy new features are NOW the way to go. And it's true, except there's nothing new about them...
    DaedalusHelios says thanks.
  19. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
    They are finally getting rid of GART addressing!!! Yippie!!!

    Now to wait for IOMMU support in Windows-based OS!!
  20. W1zzard

    W1zzard Administrator Staff Member

    Joined:
    May 14, 2004
    Messages:
    14,745 (3.93/day)
    Thanks Received:
    11,462
This is basically what Intel tried with Larrabee, and failed.
  21. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
Huh. You know what, W1zz, that never even occurred to me. I think you're pretty darn right there.


The question remains, though: why did Larrabee really fail? I mean, they said Larrabee wouldn't get a public launch, but it wasn't fully dead yet either... so they must have had at least some success... or this path is inevitable.
  22. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    8,181 (2.55/day)
    Thanks Received:
    1,137
It will be complicated to keep stacks straight with a contiguous memory address space between system RAM and VRAM, much less having the GPU make the page fault call and look up its own data out of CPU registers or straight from disk.

If they can pull it off, my hat's off to them.
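    What "the GPU takes a page fault in a shared address space" means can be sketched with a toy model. This is purely illustrative (not any real driver or hardware design, and the structures are hypothetical): an access to an unmapped virtual page traps to a fault handler, which pages the data in from backing storage, after which the access is replayed.

    ```python
    # Illustrative model of demand paging in a shared CPU/GPU address
    # space (all names hypothetical). Unmapped pages fault once, get
    # paged in from a backing store, and subsequent accesses hit.

    PAGE = 4096

    class SharedAddressSpace:
        def __init__(self, backing):
            self.backing = backing        # stand-in for disk / system RAM
            self.page_table = {}          # virtual page number -> resident bytes
            self.faults = 0

        def load(self, vaddr):
            vpn, off = divmod(vaddr, PAGE)
            if vpn not in self.page_table:          # page fault
                self.faults += 1
                start = vpn * PAGE                  # handler pages the data in
                self.page_table[vpn] = self.backing[start:start + PAGE]
            return self.page_table[vpn][off]        # access is replayed

    mem = SharedAddressSpace(bytes(range(256)) * 64)  # 16 KiB backing store
    print(mem.load(5))       # 5  (faults in page 0)
    print(mem.load(6))       # 6  (same page, no new fault)
    print(mem.faults)        # 1
    ```

    The hard part the post alludes to is exactly the `load` path: on real hardware, a fault from a GPU wavefront has to be caught, serviced, and replayed without stalling thousands of other threads.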
  23. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,820 (4.52/day)
    Thanks Received:
    6,898
    Location:
    Edmonton, Alberta
But how, really, is it any different than, say, a multi-core CPU? Or a dual-socket system using NUMA?

I mean, they can use the IOMMU for address translation. The way I see it, the GART space right now is effectively the same, just with a limited size, so while it would be much more work for the memory controllers, I don't really see anything standing in the way other than programming.
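    The GART-versus-IOMMU contrast being drawn here can be sketched in a few lines. This is a rough conceptual model with hypothetical sizes and tables, not any real chipset's behavior: a GART is a single fixed-size aperture remapped into system pages, so anything outside it simply can't be addressed, while an IOMMU translates arbitrary device-virtual pages much like a CPU's MMU does.

    ```python
    # Conceptual contrast (hypothetical sizes/tables): GART aperture vs.
    # full IOMMU page translation for device addresses.

    PAGE = 4096

    def gart_translate(aperture, offset):
        """GART: only offsets inside the fixed aperture are remappable."""
        if offset >= len(aperture) * PAGE:
            raise ValueError("outside GART aperture")
        vpn, off = divmod(offset, PAGE)
        return aperture[vpn] * PAGE + off           # physical address

    def iommu_translate(page_table, dev_vaddr):
        """IOMMU: any device-virtual page may be mapped; no aperture."""
        vpn, off = divmod(dev_vaddr, PAGE)
        return page_table[vpn] * PAGE + off

    aperture = [7, 3]                        # tiny 8 KiB GART, 2 entries
    print(gart_translate(aperture, 100))     # maps into physical page 7

    iommu = {0: 7, 1_000_000: 42}            # sparse, arbitrarily large space
    print(iommu_translate(iommu, 1_000_000 * PAGE + 8))
    ```

    The point of the sketch: the GART's translate step is the same arithmetic as the IOMMU's, which supports the post's argument that the real obstacle is table size and programming, not a fundamentally new mechanism.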
  24. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.49/day)
    Thanks Received:
    69
What I am gathering is that they will merge the CISC/RISC/GPU and x86 designs into a mashup resembling none of them. Imagine an FPU with the width and power of stream processors? They need INT for many things, but they can do most of this in hardware itself. This is what AMD was working towards with the Bulldozer design.
  25. RejZoR

    RejZoR

    Joined:
    Oct 2, 2004
    Messages:
    4,524 (1.25/day)
    Thanks Received:
    891
    Location:
    Europe/Slovenia
I can tell you why. Intel wanted to make a GPU out of CPUs. AMD is trying to make a CPU out of GPUs. That's the main difference, and one of the reasons why AMD could possibly succeed.
