
AMD Demonstrates Full Support for DirectX 12 at Game Developer Conference

Discussion in 'News' started by Cristian_25H, Mar 20, 2014.

  1. Slomo4shO

    Jul 24, 2013
    138 (0.09/day)
    Like I said...

  2. rvalencia

    Nov 3, 2011
    182 (0.08/day)
    A repeat of Techreport's article

    From Techreport. http://techreport.com/news/26210/directx-12-will-also-add-new-features-for-next-gen-gpus

    "However, Tamasi explained that DirectX 12 will introduce a set of new features in addition to the lower-level abstraction, and those features will require new hardware."

    Mr. Tamasi is from NVIDIA and has no authority to speak for non-NVIDIA hardware.

    Techreport's "new blend modes" and the "conservative rasterization" it mentions refer to DirectX 12's new rendering modes:

    1. "Programmable blend and efficient OIT with pixel ordered UAV".

    2. "Better collision and culling with Conservative Rasterization".
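    As a toy illustration of point 2 (my own sketch with made-up coordinates, not anything from AMD or Microsoft): a conservative rasterizer marks a pixel as covered if the triangle overlaps ANY part of the pixel's cell, not just its center sample, which is what makes it useful for collision and culling queries. The usual trick is to evaluate each edge function at the cell corner that is most inside that edge:

```python
# Toy conservative rasterizer (illustrative sketch, hypothetical values).
# A pixel is covered if the triangle touches any part of its 1x1 cell,
# not only the cell-center sample. Trick: evaluate each edge function at
# the cell corner that maximizes it (offset +/-0.5 along each axis).

def edge(a, b):
    # Edge function E(x, y) = A*x + B*y + C for edge a->b,
    # positive on the interior side of a counter-clockwise triangle.
    A = a[1] - b[1]
    B = b[0] - a[0]
    C = -(A * a[0] + B * a[1])
    return A, B, C

def rasterize(tri, w, h, conservative):
    edges = [edge(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    covered = set()
    for y in range(h):
        for x in range(w):
            cx, cy = x + 0.5, y + 0.5  # pixel-center sample
            inside = True
            for A, B, C in edges:
                sx, sy = cx, cy
                if conservative:
                    # Shift the sample to the cell corner most inside
                    # this edge; the cell is rejected only if it lies
                    # entirely outside the edge.
                    sx += 0.5 if A > 0 else -0.5
                    sy += 0.5 if B > 0 else -0.5
                if A * sx + B * sy + C < 0:
                    inside = False
                    break
            if inside:
                covered.add((x, y))
    return covered

tri = [(0.6, 0.6), (6.4, 1.2), (2.0, 5.5)]  # counter-clockwise
std = rasterize(tri, 8, 8, conservative=False)
cons = rasterize(tri, 8, 8, conservative=True)
assert std.issubset(cons)  # conservative coverage is a superset
print(len(std), len(cons))
```

    Standard rasterization misses the boundary cells whose centers fall outside the triangle; the conservative pass catches every cell the triangle touches, which is exactly what you want when building occlusion or collision data on the GPU.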

    If you compare point 1 with http://software.intel.com/en-us/blo...ency-approximation-with-pixel-synchronization

    you can see the API is based on Intel's Pixel Sync. The main feature of this Intel extension is a pixel shader wait function: because the pipeline itself serializes each pixel's read/modify/write order, the "link list" requirement goes away.
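    A quick way to see why a per-pixel ordering guarantee matters at all (my own toy numbers, nothing from the Intel article): the "over" blend operator used for transparency is not commutative, so the same two fragments composited in different orders give different colors:

```python
# Transparency blending is order-dependent (toy example, made-up colors).
# "over" operator: result = src_rgb * src_a + dst_rgb * (1 - src_a)

def over(src_rgb, src_a, dst_rgb):
    return tuple(s * src_a + d * (1.0 - src_a)
                 for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)
red_glass = ((1.0, 0.0, 0.0), 0.5)   # 50% opaque red, nearer surface
blue_glass = ((0.0, 0.0, 1.0), 0.5)  # 50% opaque blue, farther surface

# Correct back-to-front order: farther fragment first, nearer last.
correct = over(*red_glass, over(*blue_glass, background))
# Same two fragments, swapped order:
wrong = over(*blue_glass, over(*red_glass, background))

print(correct)  # (0.5, 0.0, 0.25)
print(wrong)    # (0.25, 0.0, 0.5)
```

    Hardware ordering (Pixel Sync / pixel-ordered UAVs) serializes these per-pixel read/modify/write updates so the shader can blend in depth order in place; without that guarantee you have to buffer every fragment and sort afterwards.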

    Again, NVIDIA's Mr. Tamasi DOES NOT have any authority over non-NVIDIA hardware.

    For reference

    DirectX 11's linked-list-based OIT, from http://www.docstoc.com/docs/106125562/Order-Independent-Transparency-Using-DirectX-11-Linked-Lists

    OpenGL 4.0+'s linked-list-based OIT, from http://blog.icare3d.org/2010/07/opengl-40-abuffer-v20-linked-lists-of.html
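    For anyone who has not read those two references, here is the gist of the linked-list A-buffer approach in plain Python (my own simplification, a CPU stand-in for the UAV and atomic-counter machinery): a first pass appends fragments to per-pixel lists in whatever order the rasterizer produces them, and a resolve pass sorts each list by depth and blends back to front:

```python
# Sketch of linked-list OIT (illustrative, not real GPU code).
# Pass 1 appends fragments in arrival order; pass 2 sorts and blends.

nodes = []   # shared fragment buffer: (rgb, alpha, depth, next_index)
heads = {}   # per-pixel head pointer: pixel -> node index

def append_fragment(pixel, rgb, alpha, depth):
    # GPU version: an atomic counter allocates the node and an atomic
    # exchange on the head pointer links it in; arrival order is free.
    nodes.append((rgb, alpha, depth, heads.get(pixel, -1)))
    heads[pixel] = len(nodes) - 1

def resolve(pixel, background):
    # Walk the list, sort far-to-near, then apply the "over" operator.
    frags, i = [], heads.get(pixel, -1)
    while i != -1:
        rgb, alpha, depth, i = nodes[i]
        frags.append((depth, rgb, alpha))
    color = background
    for _, rgb, alpha in sorted(frags, reverse=True):  # farthest first
        color = tuple(s * alpha + d * (1.0 - alpha)
                      for s, d in zip(rgb, color))
    return color

# Fragments arrive out of depth order, as they would from a rasterizer:
append_fragment((0, 0), (1.0, 0.0, 0.0), 0.5, depth=0.2)  # near red
append_fragment((0, 0), (0.0, 0.0, 1.0), 0.5, depth=0.8)  # far blue
print(resolve((0, 0), (0.0, 0.0, 0.0)))  # (0.5, 0.0, 0.25)
```

    The GPU version needs the atomics plus the extra resolve pass; Pixel Sync avoids that buffering because the shader can maintain its per-pixel data in place, with the hardware guaranteeing ordered read/modify/write access.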


    From http://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx

    "Full DirectX 12 compatibility promised for the award-winning Graphics Core Next architecture"

    AMD claims "FULL DirectX 12 compatibility".
  3. g101 New Member

    Oct 27, 2013
    2 (0.00/day)

    Thank you, I really get tired of the completely uninformed 'enthusiast' children attempting to poke holes in concepts they cannot even begin to grasp.

    Now I will be repeating and extending some of what rvalencia has already said above. Since it appears some of you have absolutely terrible reading comprehension, maybe a few different formats of the same information will finally make the facts about DX12 sink in. Funny how people who actually have some idea of the subjects they talk about tend to draw the same conclusions, isn't it?

    So yeah, NVIDIA's fancy 'better than Mantle' driver? Complete nonsense: they used a worst-case scenario for Mantle (no CPU bottleneck), used FALSE metrics for both DX and Mantle in the AMD hardware numbers (do your own research, kids), and by their *own* admission did not achieve comparable minimums. Of course, they left the most relevant metric (minimum fps) out of their chart; they only verbally mentioned their failure to compete and their plan to 'whittle away at it'. Yeah, right. They gained less than 5 fps, a smaller gain than most DX titles see from driver-specific optimization, and they couldn't even solve the stalls so commonly found with DX, even in just this *one* title.

    Let's see crossfire mantle vs. sli dx 'magic driver' on that same hexacore testbed, novidia. Pretending that minimum fps isn't the most important metric is laughably transparent.

    Again, NVIDIA will NOT be providing full DX12 support with Kepler or anything before it. The fact that any of you believe what NVIDIA says when it comes to DX specifications (or any other API, for that matter) is hilarious.

    How quickly you forget NVIDIA's 'DX 11.1*' support.

    Have fun buying new hardware and also waiting until 2016 for *virtualized* 'unified memory'. Hilarious.

    Little kids with zero education in semiconductors really have no business calling Charlie Demerjian 'chuckles'.
    Last edited: Apr 1, 2014
  4. savage.r New Member

    Apr 4, 2014
    1 (0.00/day)
    The only reason Microsoft brought out DX12 is AMD's Mantle.

    And since Mantle/DX12 is mainly about multi-core CPU optimization, NVIDIA couldn't care less, since they simply do not make any CPUs.
    For the same reason, they still use the unoptimized PhysX 2.8.x in ALL of their supported games.

    But even though NVIDIA is always talking funny bullshit (they have a brilliant marketing department for that), the most ridiculous part is that Intel "happily" supported DX12 too.

    Intel, holding practically a monopoly on the PC market, is the last company that needs anything that removes CPU overhead, because then AMD CPUs would equal Intel's from a gaming-performance standpoint.
    There is an actual reason why Intel has been making 4-core CPUs for seven years now, even though they know the only way to keep gaining performance is to bring more CPU cores on.

    But it is always nice to hear how many years they have supposedly been working hard on this; I wonder how long Microsoft worked on the low-level API for their console. It is fucked up that all those companies care about profit and take the PC platform for granted :/
  5. pr0n Inspector

    Dec 8, 2008
    1,334 (0.42/day)
    GPU threads always bring out the trolls and fanboys.
