
AMD Charts Path for Future of its GPU Architecture

Discussion in 'News' started by btarunr, Jun 17, 2011.

  1. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    If you say so. I think you're off base here, and the Microsoft design will offer huge problems downstream. The company ready for tomorrow will be the winner tomorrow.
  2. cadaveca

    cadaveca My name is Dave

    Joined:
    Apr 10, 2006
    Messages:
    13,890 (4.53/day)
    Thanks Received:
    6,972
    Location:
    Edmonton, Alberta
    You MUST keep in mind that all of this is business, and as such, the future of technology is heavily influenced by the businesses behind it. The least amount of work that brings in the most dollars is what WILL happen, without a doubt, as this is the nature of business.


    What needs to be done is for someone to effectively show why other options make more sense, not from a technical standpoint, but from a business standpoint.

    And as mentioned, none of these technologies AMD/ATI introduced over the years really seem to make much business sense, and as such, they fail hard.


    AMD's board now seems to realize this... Dirk was dumped, and Bulldozer "delayed", simply because that made the MOST business sense... they met the market demand, and rightly so, as market demand for those products is so high that they have no choice but to delay the launch of Bulldozer.

    Delaying a new product, because an existing one is in high demand, makes good business sense.
  3. swaaye

    Joined:
    May 31, 2005
    Messages:
    231 (0.07/day)
    Thanks Received:
    16
    What I see is AMD selling all of their consumer CPUs under $200, even their 6 core chips. They need new CPU tech that they can get some better margins on. Intel charges 3-4x more for their 6 core chips because they have clear performance dominance.

    Buying ATI was a good move because both AMD and NV are now obviously trying to bypass Intel's dominance by creating a new GPU compute sector. I'm not sure if that will ever benefit the common user though because of the limited types of computing that work well with GPUs.

    Also, Llano and Brazos are redefining the low end in a way that Intel didn't bother to, so that's interesting too.
    Last edited: Jun 18, 2011
  4. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.40/day)
    Thanks Received:
    3,778
    The hardware needs software to operate. This comment doesn't even make any sense.
  5. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    Sure it does. What if the CPU scheduler and the decoder know how to break the workloads across INT, FPU, VLIW, etc.? If it gets smart enough, and there's no reason it can't, then the OS just sees what looks like plain x86, but the underlying microarchitecture handles a lot of the heavy lifting. If you don't really see the genius behind Bulldozer, you're looking in the wrong places. How hard would it be for AMD to introduce VLIW-like elements into that modular core design? Not terrifically hard. Better believe that this is the way forward. Traditional x86 is dead.
  6. bucketface

    bucketface New Member

    Joined:
    Apr 21, 2010
    Messages:
    142 (0.09/day)
    Thanks Received:
    19
    Location:
    Perth, Australia
    Most games these days use at least parts of the Havok, Bullet, or whatever libraries. Resident Evil 5 and Company of Heroes are two that mention use of Havok on the box. Bad Company 2 used parts of Havok or Bullet? Most physics come from these libraries. It's a lot easier for devs than writing their own.
    (below is in reply to someone above; I'm not sure how relevant it is, but it's true nonetheless)
    The whole "do what makes the most money now and we'll deal with the consequences later" ideology is why the American economy is in the state that it is. Companies are like children: they want the candy, and lots of it, right now, but then they make themselves sick because they had too much. A responsible parent regulates them, no matter how big a tantrum they throw, because they know that cleaning up the mess that results from letting them do as they please is much worse. Just saying companies will do whatever makes the biggest short-term gains regardless of the long-term consequences doesn't help you or me see better games.
  7. Damn_Smooth

    Damn_Smooth New Member

    Joined:
    May 16, 2011
    Messages:
    1,435 (1.19/day)
    Thanks Received:
    478
    Location:
    A frozen turdberg.
    Speaking of AMD's graphics future, this is a long but interesting read.

    http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute

  8. HTC

    HTC

    Joined:
    Apr 1, 2008
    Messages:
    2,238 (0.95/day)
    Thanks Received:
    303
  9. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.40/day)
    Thanks Received:
    3,778
    There is no way to do it transparently to the OS. You still need software to tell the scheduler what type of info is coming down the pipeline. It will require a driver at minimum.
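The point above can be sketched in a few lines of Python: a heterogeneous scheduler can only route work to the right unit if something in software tags what kind of work it is. Everything here (the unit names, the `kind` tags) is invented purely for illustration; it is not how any real driver is written.

```python
# Toy sketch: hardware alone cannot infer "this is FP-heavy" from raw
# x86 instructions arriving in the pipeline; a software layer (driver,
# runtime, or compiler) has to supply the type information.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # "int", "fp", or "simd" -- supplied by software, not inferred

def dispatch(task: Task) -> str:
    """Route a tagged task to a hypothetical execution unit."""
    units = {"int": "integer core", "fp": "FPU", "simd": "GPU-like SIMD array"}
    # Untagged or unknown work falls back to the plain CPU path.
    return units.get(task.kind, "integer core")

jobs = [Task("pointer chase", "int"), Task("physics step", "fp"), Task("pixel fill", "simd")]
print([dispatch(j) for j in jobs])  # ['integer core', 'FPU', 'GPU-like SIMD array']
```

Without the `kind` tag, `dispatch` has nothing to decide on, which is the substance of the "it will require a driver at minimum" argument.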
  10. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    Why? The driver makes up for the lack of logic on the chip.
  11. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.40/day)
    Thanks Received:
    3,778
    If they were capable of giving a chip that kind of logic at this point, we would have things like multi-GPU graphics cards that show up to the OS as a single GPU.

    We aren't anywhere near chips being able to independently determine data type and scheduling like that.
  12. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    What do you think all this APU nonsense is about? Popcorn on Tuesdays?
  13. jagd

    Joined:
    Jul 10, 2009
    Messages:
    451 (0.24/day)
    Thanks Received:
    89
    Location:
    TR
    Havok is different from the others. Intel bought Havok and decided to use it as a software physics API to advertise Intel CPUs. I don't think anyone would have done anything other than what AMD did about this: no one would use a software API when you could do it in hardware.

    I see cloud computing as a renamed version of the old terminal / thin-client-and-server concept. With online gaming, the problem is the connection more than the hardware; you'll need a rock-stable connection, which isn't always easy to find. http://en.wikipedia.org/wiki/Thin_client
  14. zpnq New Member

    Joined:
    Jun 21, 2011
    Messages:
    3 (0.00/day)
    Thanks Received:
    2
  15. a_ump

    a_ump

    Joined:
    Nov 21, 2007
    Messages:
    3,598 (1.45/day)
    Thanks Received:
    367
    Location:
    Smithfield, WV
    It's about getting a CPU and GPU into one package, one die, eventually one chip, which will be way more cost effective than two separate chips. Oh, and taking over the entry/low end of the market from Intel.

    That's what that APU common sense is about :p


    Very nice find sir, I want to read it all but I might have to bookmark it. :toast:
    Damn_Smooth says thanks.
  16. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (4.70/day)
    Thanks Received:
    1,699
    Location:
    ENGLAND-LAND-LAND
  17. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69

    Long range, it's about coming to grips with serial processing and the lack of compute power you get from it.
  18. a_ump

    a_ump

    Joined:
    Nov 21, 2007
    Messages:
    3,598 (1.45/day)
    Thanks Received:
    367
    Location:
    Smithfield, WV
    You're talking about GCN then. I was talking short range :p.

    Honestly, I definitely think AMD is going to take a leap in innovation over Nvidia these next 5 years or so. I really do think AMD's experience with CPUs is going to pay off when it comes to integrating compute performance into their GPU... well, APU. Nvidia has the lead right now, but I can see AMD loosening that grip.
  19. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    I don't think Nvidia is going to get much further than they have thus far. AMD is set to put a whoppin' on Intel and Nvidia in that area. Given the limits of IPC and clock speed, it's the only way to get where they need to go in the first place.
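The "limits of IPC and clock speed" argument is essentially Amdahl's law: once per-core speed stalls, the only remaining gains come from running the parallel fraction of a program across more units, which is exactly what wide GPU/APU hardware provides. A quick sketch of the formula:

```python
# Amdahl's law: if a fraction p of a program can run in parallel,
# the best possible speedup on n execution units is
#     speedup = 1 / ((1 - p) + p / n)
# The serial remainder (1 - p) caps the speedup no matter how wide
# the hardware gets, which is why the serial path still matters.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel code, the serial leftover caps the gain hard:
print(round(amdahl_speedup(0.95, 8), 2))     # 5.93
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64 -- far below 1024x
```

The numbers make the forum argument concrete: parallel hardware helps enormously, but only on workloads that are mostly parallelizable.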
  20. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.40/day)
    Thanks Received:
    3,778
    A fancy name for using GPU shaders to accelerate programs. AKA: the same shit we already have for graphics cards in the form of CUDA/whatever Stream was renamed to.

    I would bet money this is not hardware based at all, and requires special software/drivers to work properly.
  21. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (4.70/day)
    Thanks Received:
    1,699
    Location:
    ENGLAND-LAND-LAND
    I bet it is hardware based. It's not just a fancy name, though; it's GPU shaders in the CPU (or next to it, in this case), meaning your CPU/GPU (APU) can handle all the physics and your graphics card can focus on being a graphics card.

    Or if all of AMD's CPUs go this way, it means people don't have to buy a GPU straight away, which is also nice.
  22. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    No, it's about compute power. These first-generation APUs are about figuring out the basic technology: how to get both sets of transistors onto the same piece of silicon. The next step will be more transistors on both sides, CPU and GPU, and the step beyond that will be an integration of x86 CPU logic and GPU parallelism. Which will give AMD a massive advantage over Nvidia and Intel in compute power and heavy workloads.

    AMD got it right 6 years ago when they started down this road; that's why Bulldozer is modular.
  23. Wile E

    Wile E Power User

    Joined:
    Oct 1, 2006
    Messages:
    24,324 (8.40/day)
    Thanks Received:
    3,778
    No it isn't. It's basically a GPU put on the same PCB as the CPU. The concept is exactly the same as current GPU-accelerated programs. The only difference is the location of the GPU.
    Will give an advantage =/= currently having an advantage.

    Again, this is just GPGPU. Same thing we've had for ages. It is not transparent to the OS, and must specifically be coded for. Said coding is always where AMD ends up dropping the ball on this crap. I will not be excited until I see this actually being used extensively in the wild.
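A minimal sketch of why GPGPU "must specifically be coded for": a serial loop has to be restructured as a per-element kernel and explicitly launched; nothing does that automatically. The `launch` helper below only simulates what a CUDA/OpenCL runtime and driver would do; it is a stand-in for illustration, not a real API.

```python
# SAXPY (y = a*x + y), the classic GPGPU hello-world, in two shapes.

def saxpy_serial(a, x, y):
    # What ordinary CPU code looks like: one element at a time.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_kernel(i, a, x, y):
    # Kernel form: the body for ONE element, run once per "thread".
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a GPU launch: in reality a driver/runtime schedules
    # n instances of the kernel across the shader array.
    return [kernel(i, *args) for i in range(n)]

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
# Same math, but the kernel version had to be written and launched
# explicitly -- which is the "not transparent to the OS" point.
assert saxpy_serial(2.0, x, y) == launch(saxpy_kernel, 3, 2.0, x, y)
```

The restructuring is trivial for SAXPY; for real applications it is the expensive part, which is why adoption of each new GPGPU stack has been slow.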
  24. pantherx12

    pantherx12 New Member

    Joined:
    Jan 2, 2009
    Messages:
    9,714 (4.70/day)
    Thanks Received:
    1,699
    Location:
    ENGLAND-LAND-LAND
    No, it's on the same silicon, man; there's no latency in the CPU-GPU communication (or very little).

    It does have benefits.
  25. Thatguy New Member

    Joined:
    Nov 24, 2010
    Messages:
    666 (0.48/day)
    Thanks Received:
    69
    Imagine the power of a GPU with the programming front end of x86 or x87, which are widely supported instructions in compilers right now.

    That's where this is headed: INT + GPU. The FPU is on borrowed time, and that's likely why they shared it.
