
Mantle API presentation by AMD, DICE and Oxide - AMD Summit 2013

Here's a clearer video of the diagnostic info from Oxide's Mantle demo:


It's highly recommended to watch at 720p or download the source.
 
Actually, it's better this way: now they have more time to polish the first Mantle release even more lol :D
 
When the heck is the Mantle patch coming out for BF4? I feel sad that they are going to be dragging this out, what with all the hype they've given this thing. I just hope they deliver on what they've promised. If this thing is what AMD says it is, then it will be crazy, but then again, "if ifs and buts were candy and nuts, we'd all have a merry Christmas".
 
I think even if they released it now, I wouldn't take time away from the non-PC stuff I've already scheduled... give me a rest for just a week :p
 
When the heck is the Mantle patch coming out for BF4? I feel sad that they are going to be dragging this out, what with all the hype they've given this thing. I just hope they deliver on what they've promised. If this thing is what AMD says it is, then it will be crazy, but then again, "if ifs and buts were candy and nuts, we'd all have a merry Christmas".

The longer it takes the more AMD's message is hurt. They went to great lengths to point out the API is easy to use, but the first game to implement Mantle has continuously had problems and is now delaying the actual implementation patch.
 
Here are the numbers

CPU boost: ~600%
GPU boost: 100%-300%

or in other words:
"amazing"

How is that showing anything? That's just a claim. Post a code fragment that runs a real computing task 600% faster than normal so we can run it on our machines. I haven't seen any of that in this thread, just hype from AMD slides that have stretched the truth since the Radeon 8500LE.

I need some solid proof, because at my university they taught me that reaching 80% thread utilization is considered a pretty good result in scaling. When you post an improvement of 600%, it sounds like bullshit out of the box, or they've made some sort of revolution in that field, which I highly doubt.
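The skepticism about scaling can be made concrete with Amdahl's law, which bounds how much threading alone can speed up a task. A minimal sketch, using made-up numbers (the 95% parallel fraction and the core counts are purely illustrative, not AMD's figures):

```python
def amdahl_speedup(p, n):
    """Speedup of a task with parallel fraction p on n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a heavily parallel workload (95% parallel) scales far below 6x
# on 8 cores; a "600%" (7x) gain implies near-perfect parallelism or
# work being eliminated outright, not just spread across more threads.
for cores in (2, 4, 8, 16):
    print(cores, "cores ->", round(amdahl_speedup(0.95, cores), 2), "x")
```

On these assumptions, 8 cores land at roughly 5.9x even at 95% parallelism, which is why a flat "600%" figure invites the question of where the gain actually comes from.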
 
Here's a clearer video of the diagnostic info from Oxide's Mantle demo:


It's highly recommended to watch at 720p or download the source.
Pretty impressive considering that from start to finish it quadruples the AI and render workload, yet the latency and render times stay the same. Not to mention the batch count itself, which swings up and down to huge numbers, way above anything we're seeing in current games.
 
No one really sits in front of a 42-50" living-room telly checking Facebook and playing Peggle (that's why we sit with pads/phones). They have consoles, and it's consoles that Steam boxes have to contend with in performance terms, at least IMHO. As good as OpenGL is, I think Steam needs to climb aboard the HSA/Mantle train to optimise the bang for buck of AMD-based Steam boxes.

shit, we're not meant to? i'm on a 46" here right now with facebook in the other tab...


(I like the idea of Mantle. We need something to replace the mostly defunct OpenGL.)

Oh, and as for some of the "just make it use more threads!" arguments: that doesn't work.

If you have two AIs with one thread each, they can't truly work independently. Whenever they interact, one thread has to wait for the other to finish before it can continue, resulting in stalls or vastly increased overheads.

This is why multithreading continues at a slow pace; it's not simple to scale.
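The stall described above can be sketched in a few lines; this is my own toy example (not from any of the games discussed), where one "AI" thread must wait for the other's result, so the two steps run back-to-back rather than in parallel:

```python
import threading
import time

a_done = threading.Event()
log = []

def ai_a():
    time.sleep(0.05)          # A does its work
    log.append("A finished")
    a_done.set()              # publish result for B

def ai_b():
    a_done.wait()             # B stalls here until A is done
    log.append("B started after A")

start = time.time()
ta = threading.Thread(target=ai_a)
tb = threading.Thread(target=ai_b)
tb.start()
ta.start()
ta.join()
tb.join()
elapsed = time.time() - start  # ~sum of both steps, not the max of them
print(log, round(elapsed, 2))
```

Because of the dependency, adding the second thread buys nothing here: total wall time is the sum of the two steps, exactly the stall the post describes.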
 
How is that showing anything? That's just a claim. Post a code fragment that runs a real computing task 600% faster than normal so we can run it on our machines. I haven't seen any of that in this thread, just hype from AMD slides that have stretched the truth since the Radeon 8500LE.

I need some solid proof, because at my university they taught me that reaching 80% thread utilization is considered a pretty good result in scaling. When you post an improvement of 600%, it sounds like bullshit out of the box, or they've made some sort of revolution in that field, which I highly doubt.

Do you not realize how much CPU power is wasted by DirectX and OpenGL? This has been talked about for years, the most obvious example being draw calls.

In addition, the dev has more control over how to deal with their data instead of relying on the driver or OS to do it for them; that's what efficiency is. Nobody knows what the game engine is doing, or what they want it to do, better than the dev.

There are some slides from the Star Citizen devs that lay out the milliseconds. It is a revolution in the sense of doing things very differently from normal (for PC, that is; it's similar to consoles, which is why the 360/PS3 run what they run at mostly 720p 30fps, and you can't run the PC versions of those same games on the equivalent X1900 or 7900 at the same settings).
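The draw-call point can be sketched with a back-of-the-envelope model; the numbers below (a fixed per-call cost, a per-object cost, the object counts) are entirely made up for illustration, not measured API figures:

```python
# Each draw call pays a fixed CPU cost for validation and submission,
# so batching many objects into one call slashes the total CPU time.

CALL_OVERHEAD_US = 20.0   # hypothetical fixed cost per draw call, microseconds
PER_OBJECT_US = 0.5       # hypothetical cost of processing one object

def frame_cpu_us(objects, objects_per_call):
    calls = -(-objects // objects_per_call)   # ceiling division
    return calls * CALL_OVERHEAD_US + objects * PER_OBJECT_US

unbatched = frame_cpu_us(10_000, 1)     # one draw call per object
batched = frame_cpu_us(10_000, 100)     # 100 objects per call
print(unbatched, "us vs", batched, "us")
```

With these toy numbers the fixed per-call overhead dominates the unbatched frame, which is the kind of CPU waste a thinner API aims to remove.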

not sure if this was posted earlier in the thread, http://www.kitguru.net/components/g...check-out-the-first-amd-mantle-gameplay-demo/

Once the demo gets going it looks very pretty, but nothing too special. When they zoom out however and you realise the engine is rendering thousands of little ships, alongside some large capital space craft all firing their own weaponry at a very high level of detail, it turns into something very impressive. On top of that, the frame rate is constantly well over 200 and this is all before any GPU optimisation. It’s speculated that the scene should easily be able to handle 300 FPS after tweaking.
One large carrier has over 200 individual missile tubes which can target different enemies, leading to upwards of 25,000 individual objects in any one scene at one time. That’s nothing though, after optimisations, the Oxide spokesperson reckons they can squeeze in 100,000 objects and make it playable on most AMD hardware.
The demo was run on an FX8350 and a “hawaii board,” so one of the new 290x boards.

Now there's no need to preach "herp derp Nvidia is over" or anything; it's just another tool, an experiment, and few titles will need to utilize it. Let's just see how it translates to a complex game, and whether it really boosts low-end cards or APUs.
 
Quite frankly I'm not sure I'll buy into O Rift either, and not just out of resistance to accept any implied marketing "hypnotism". I'm seriously concerned about long term effects on vision, attention span, etc. Just LCD displays themselves are bad enough long term on eyesight, and OLED/PLED has yet to prove to have a lifespan adequate enough for anything but small devices that people don't keep very long (cell phones, etc).

As far as Mantle's projected performance boost, one thing that concerns me is most only talk in terms of added frames per second, but we're seeing an increasing number of games that don't play smoothly even if you're getting good frame rates. I think both AMD and Nvidia need to focus more on frame pacing before they go trying to boost performance, and it's not just an issue with Crossfire and SLI. Single GPU performance is often crippled by erratic frame rate fluctuation too.
The issues with frame pacing all start with the number of calls made and the CPU's ability to set up and hand off that data; in short, all the spiky lag seen is an issue with the underpinnings of DX. Recent tests show the 2XX-series GPUs as lag-free as Nvidia's, and now both are at the mercy of what the dev and DX are doing. We can only hope Mantle is better than all the other vaporware that ATI/AMD have given us in the past that was well thought out but poorly implemented, then never supported, or supported with a hefty fee, but only works if you hold your finger up this way but not that, left foot in, right foot flat on the floor, head cocked, nose up, eyes closed, and pray.
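The frame-pacing point can be illustrated with synthetic numbers (these frame times are invented, not from any benchmark): two runs with the same average FPS can feel very different if one has spiky frame times.

```python
smooth = [16.7] * 6                          # ms per frame, even pacing
spiky = [8.0, 25.4, 8.0, 25.4, 8.0, 25.4]    # same average, erratic pacing

def avg_fps(frame_times_ms):
    """Average FPS implied by a list of per-frame times in milliseconds."""
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def worst_frame(frame_times_ms):
    """Longest single frame, the one you actually feel as a hitch."""
    return max(frame_times_ms)

print(round(avg_fps(smooth), 1), worst_frame(smooth))
print(round(avg_fps(spiky), 1), worst_frame(spiky))
```

Both runs report the same average FPS, but the spiky one has individual frames half again as long, which is exactly the "good frame rate, bad smoothness" case the post describes.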
 
The delay is official now. It only took AMD until there was one day remaining to actually announce it.

Pick your favorite source:
https://news.google.com/news?ncl=d9QPbZ5MjqhUX0MHrP8HblEV0UMLM&q=amd mantle delayed&lr=English&hl=en&sa=X&ei=4VTCUrCxLLSusASK9YG4DQ&ved=0CC8QqgIwAA
So EA fucking up a game with many issues, and their lack of desire to own those issues, is somehow a mark against AMD? We could blame AMD for this, or we could say that EA released a rushed-to-market turd for the holiday shopping season and now has to clean that shithouse before the new tenant can move in.
 
So EA fucking up a game with many issues, and their lack of desire to own those issues, is somehow a mark against AMD? We could blame AMD for this, or we could say that EA released a rushed-to-market turd for the holiday shopping season and now has to clean that shithouse before the new tenant can move in.

It clearly is a mutual arrangement. EA is not innocent, but it's hard not to believe that integrating Mantle support took time away from the developers working on the initial (DirectX) release of the game. AMD had to have known EA's history of releasing games that need multiple patches before working properly. AMD made the logical choice to delay, but a better choice would have been not to announce a date in the first place. I firmly believe that AMD was pressured into announcing a release date months ago because without one, Mantle would be considered vaporware (more so than it is even now) and developers would be discouraged that there were no commitments to games using it.

You have to wonder if EA set Battlefield 4's late-2013 release date before AMD chose to partner to integrate Mantle, and whether, instead of focusing all developers on the initial release and then Mantle later, the developers were split between two projects, causing the initial release to be a mess and the Mantle update to be delayed.
 
How is that showing anything? That's just a claim. Post a code fragment that runs a real computing task 600% faster than normal so we can run it on our machines. I haven't seen any of that in this thread, just hype from AMD slides that have stretched the truth since the Radeon 8500LE.

I need some solid proof, because at my university they taught me that reaching 80% thread utilization is considered a pretty good result in scaling. When you post an improvement of 600%, it sounds like bullshit out of the box, or they've made some sort of revolution in that field, which I highly doubt.


Let's bookmark this and check back how accurate I was :)

More cores, more CPU boost; of course that 600% probably doesn't apply to quad-core and below.
 
Genuine question?

Mantle is for Radeon GCN GPUs (going on their own info for now), but it talks about far greater use of the CPU, so....

Even if you have a non-Radeon, or even a non-GCN Radeon, will Mantle still make better use of the CPU? Does this also apply to AMD CPUs or Intel as well? I.e. if I have a GCN card and an Intel CPU, does it still utilise the Intel CPU to its fullest extent, or is it primarily focused on AMD hardware?

Genuine question looking for a genuine answer. Truth is, it needs to be very easily adapted to Nvidia and Intel, or else it will just be a very good gimmick for GCN owners running Mantle-coded games. It can't work long term unless these guys either (a) get in on it, or (b) both go bust and we only have AMD left.
 
Genuine question?

Mantle is for Radeon GCN GPUs (going on their own info for now), but it talks about far greater use of the CPU, so....

Even if you have a non-Radeon, or even a non-GCN Radeon, will Mantle still make better use of the CPU? Does this also apply to AMD CPUs or Intel as well? I.e. if I have a GCN card and an Intel CPU, does it still utilise the Intel CPU to its fullest extent, or is it primarily focused on AMD hardware?

Genuine question looking for a genuine answer. Truth is, it needs to be very easily adapted to Nvidia and Intel, or else it will just be a very good gimmick for GCN owners running Mantle-coded games. It can't work long term unless these guys either (a) get in on it, or (b) both go bust and we only have AMD left.

The entire point is less CPU usage, so if your high-end, super-fast CPU was already fast enough, then it's going to be under-utilised. This isn't about doing more with the same; it's about doing the same with less.
 
Genuine question?

Mantle is for Radeon GCN GPUs (going on their own info for now), but it talks about far greater use of the CPU, so....

Even if you have a non-Radeon, or even a non-GCN Radeon, will Mantle still make better use of the CPU? Does this also apply to AMD CPUs or Intel as well? I.e. if I have a GCN card and an Intel CPU, does it still utilise the Intel CPU to its fullest extent, or is it primarily focused on AMD hardware?

Genuine question looking for a genuine answer. Truth is, it needs to be very easily adapted to Nvidia and Intel, or else it will just be a very good gimmick for GCN owners running Mantle-coded games. It can't work long term unless these guys either (a) get in on it, or (b) both go bust and we only have AMD left.
It is GCN-only, initially at least, and it is not CPU-brand specific; Intel CPUs won't have any disadvantage versus AMD, at least initially.
And less CPU use is a side effect of the API, not its sole reason or benefit. With Mantle, the CPU will no longer be the bottleneck holding GPUs back (the GPU can do things without the CPU being involved in every little bit, hence less load); the bottleneck will then just be something else, likely memory, storage, or buses.
 
Genuine question looking for a genuine answer. Truth is, it needs to be very easily adapted to Nvidia and Intel, or else it will just be a very good gimmick for GCN owners running Mantle-coded games. It can't work long term unless these guys either (a) get in on it, or (b) both go bust and we only have AMD left.

That's my largest concern. If AMD makes developers and/or manufacturers jump through hoops to implement Mantle (or worse, makes them pay), it will never succeed.
 
This isn't about doing more with the same; it's about doing the same with less.

Only in that case.

But games were built around the limitations; that's why there's a speed boost in updated games. Games designed for Mantle from the start will shake the industry with what you call "more".

However, for some existing games, even when designed with Mantle, the genre doesn't allow taking all that power and turning it into units-on-screen and the like, because it would no longer be the same sub-genre, or it would become a spin-off. In those cases the performance difference will be very visible if a DX or OGL render mode is also available; what's more, the minimum requirements would be significantly lower than they would be for a DX or OGL version.

For the genres that don't yet exist, or have been dead since the '90s, if those are designed to push the boundaries of AI, physics, and visuals, they are obviously not going to have this "boost" effect.

So beware which "Mantle reviews" you trust: they aren't reviewing the Mantle API itself, which continues to improve and is not a fixed thing, so it cannot be reviewed**. They're reviewing the Mantle code of a specific game, or in other words the effort the developer put into optimizing the Mantle code for that game. Just as with DX games, all the CPU and GPU benchmarks weren't benchmarking the hardware; they were benchmarking the DX API along with the driver. With so much of the optimization (and stability code) residing in the driver, the hardware wars were basically technically invalid: it was, and is, a driver war. There was no GPU or CPU war; it's just perceived that way.

Of course, practically it can be labeled "hardware benchmarking" since DX was the only thing used (OGL doesn't even count, just a few AAA pushing-the-limits games here and there), but it's not a valid technical test at all.

** (e.g. pre-release game reviews are usually invalid since day-1 patches became popular; I don't read or care about any mainstream gaming site)
 
^ I agree.

Doing the same with less is Phase 1 of Mantle implementation. Phase 2 is doing more with the same; we all saw it clearly in the tech space demo. And Phase 2 is just a game setting away from a whole different world to play in, if a game is optimised for Mantle from the beginning. So BF4 is Phase 1, but the next games might easily have Phase 2 implemented.
 
So many people think THIS TIME will be different. THIS TIME I can feel it!
 
Nobody thinks it will be different. It is a fact that Mantle IS a game-changing bridge between game, GPU, and CPU. It is plainly obvious in every demo and interview we've watched. What remains to be seen is the speed and scale of implementation, not Mantle's productivity. Only the ignorant, AMD-haters, or console-haters can ignore the publicly known facts and proofs.
 
Nobody thinks it will be different. It is a fact that Mantle IS a game-changing bridge between game, GPU, and CPU. It is plainly obvious in every demo and interview we've watched. What remains to be seen is the speed and scale of implementation, not Mantle's productivity. Only the ignorant, AMD-haters, or console-haters can ignore the publicly known facts and proofs.

Meat's not meat till it's in the pan.

meats-not-meat-til-its-in-the-pan-charles-russell.jpg
 