
Editorial On AMD's Raja Koduri RX Vega Tweetstorm

Vega also has less bandwidth than Fiji, sadly, which is probably one of the bigger things holding it back currently. I think with a die shrink, a higher core clock speed, and a memory bus that's more like 640-bit, its successor will be in good shape, provided Nvidia doesn't go crazy on R&D and make it look like trash by comparison. That could happen, I suppose, though I don't think there is major demand for a $2,000 GPU for gaming; as it is, GPUs are too damn expensive.

But I see a bright future if Havok sometime soon becomes DirectPhysics as part of DirectX; that's the moment we'll see a huge leap in game physics. It won't be just ragdolls anymore; it'll actually be able to affect gameplay.
I imagine it's quite hard to do complex physics in a game that's already pushing the limits and taxing the system to death just rendering at the resolutions and image-quality settings we as gamers often demand.
 
But I see a bright future if Havok sometime soon becomes DirectPhysics as part of DirectX; that's the moment we'll see a huge leap in game physics. It won't be just ragdolls anymore; it'll actually be able to affect gameplay.
It's not a matter of "if" but "when." Microsoft wouldn't be paying for trademarks on something they'll never release.

Vega also has less bandwidth than Fiji, sadly, which is probably one of the bigger things holding it back currently. I think with a die shrink, a higher core clock speed, and a memory bus that's more like 640-bit, its successor will be in good shape, provided Nvidia doesn't go crazy on R&D and make it look like trash by comparison. That could happen, I suppose, though I don't think there is major demand for a $2,000 GPU for gaming; as it is, GPUs are too damn expensive.
You may be right. Vega on Global Foundries' 7 nm would be a serious contender. Other than Navi (fairly distant future), I really haven't heard of anything AMD has in the pipeline.
 
Vega, even on 10nm, with a Radeon Pro SSG-inspired M.2 connector setup wouldn't be a bad holdover until 7nm Navi, which could double, triple, or quadruple down on the M.2 connections, though the triple/quadruple setups would probably be exclusive to its workstation-class cards for obvious reasons. They'd probably throw in a 10nm Vega workstation card as well with twin M.2 connectors, just so they don't cannibalize the Pro SSG.
 
GloFo doesn't have a 10nm node.

They are touting their 7nm node as being amazing; it's supposed to be optimized for performance, unlike the current node, which is aimed at low-power-consumption parts.

I don't know, I don't have much trust in GloFo and AMD as a team to work perfectly in tandem. In my opinion, the only product that was designed to perfectly match the characteristics of the available node was Ryzen, and it turned out great. On the GPU side of things, it seems to me that they have been working against each other for the past 5 years or so.
 
I wish we could have some demo showing what to expect from DirectPhysics. From what I've understood, async compute basically allows some units to work on compute tasks as soon as they are free.

But if there is one thing I've learned from playing with physics simulation in 3D software, it's that it is a real power hog. It's a RAM eater, and even with OpenCL/CUDA acceleration you need some insane hardware to get a smooth simulation. So it's hard to imagine close-to-real-world physics running on a single GPU with a measly 8 GB of memory when said GPU is already busy pushing pixels out. So just how much of an improvement can you expect over the current state of game physics? Right now it's so abstract... are we talking about a building or cave collapsing with debris to dodge that wouldn't be scripted, but 100% simulated? Trying to escape a deadly fluid whose speed depends on the nature and tilt of the ground?
 
I expect pseudo-physics like Havok games have, but more of them, with better interaction between them. I'm not talking about trash flying around (like some games). I'm talking about models getting deformed from a collision, structures collapsing if they become unstable, and water surfaces that displace when a dynamic object is on them. Things that are faked now on the CPU because negotiating the physical and visual aspects of them is too demanding to do in real time. In the case of water, I'm not talking fluid dynamics like PhysX does, where it literally simulates blobs of water. I'm talking about water providing realistic resistance (as a function of surface area and thrust) and reacting to waves; presently that's just very simple bob and resistance modifiers on movement.
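As a rough illustration of how cheap that kind of "plausible resistance" can be, here is a minimal sketch of a quadratic drag force on a submerged object in C++; all names and constants are invented for the example, and nothing here is taken from Havok or PhysX:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Quadratic drag: F = 0.5 * rho * v^2 * Cd * A, applied opposite to the
// object's velocity through the water.
Vec3 waterResistance(const Vec3& velocity, float wettedArea, float dragCoeff)
{
    const float rho = 1000.0f;  // density of water, kg/m^3
    float speed = std::sqrt(velocity.x * velocity.x +
                            velocity.y * velocity.y +
                            velocity.z * velocity.z);
    if (speed < 1e-4f)
        return {0.0f, 0.0f, 0.0f};  // effectively at rest, no drag

    float magnitude = 0.5f * rho * speed * speed * dragCoeff * wettedArea;
    // Point the force against the direction of motion.
    return { -velocity.x / speed * magnitude,
             -velocity.y / speed * magnitude,
             -velocity.z / speed * magnitude };
}
```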
 
I wish we could have some demo showing what to expect from DirectPhysics. From what I've understood, async compute basically allows some units to work on compute tasks as soon as they are free.

But if there is one thing I've learned from playing with physics simulation in 3D software, it's that it is a real power hog. It's a RAM eater, and even with OpenCL/CUDA acceleration you need some insane hardware to get a smooth simulation. So it's hard to imagine close-to-real-world physics running on a single GPU with a measly 8 GB of memory when said GPU is already busy pushing pixels out. So just how much of an improvement can you expect over the current state of game physics? Right now it's so abstract... are we talking about a building or cave collapsing with debris to dodge that wouldn't be scripted, but 100% simulated? Trying to escape a deadly fluid whose speed depends on the nature and tilt of the ground?

Games already use compute for physics; id Tech 6 has support for GPU-accelerated particles, and so do many modern engines. You are talking about simulations that are done through the usual means of solving differential equations, which get exponentially more difficult to compute with each object introduced into the simulation. That's most likely not how these engines compute physics; they more than likely use pseudorandom movements that are easy to compute for things like particles. And you can extend these models to many other things, such as deformation/destruction. These things do not have to be computed with perfect accuracy; they just need to be close enough to reality that they don't look off while maintaining a good level of dynamism.
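As a toy illustration of that "cheap pseudorandom" approach, here is what a particle update can look like when inter-particle forces are skipped entirely and only gravity plus a little random jitter is applied; all of the code is illustrative, not from any actual engine:

```cpp
#include <random>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Cheap update: no inter-particle forces at all, just gravity plus a small
// random jitter so the cloud looks lively. O(N) and trivially parallel.
void cheapParticleUpdate(std::vector<Particle>& particles, float dt, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(-0.05f, 0.05f);
    for (auto& p : particles) {
        p.vy -= 9.81f * dt;      // gravity
        p.vx += jitter(rng);     // pseudorandom "turbulence", nothing is solved for
        p.vz += jitter(rng);
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}
```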
 
It's going to take GPGPU to make physics mean something in games. Specifically, engines need to prioritize the compute tasks and balance the rest of the GPU's work against them.
Ah, tricky business... ideally, first you'd need a way to detect collisions between N rigid bodies, each against each, in parallel, in a way that the complexity is constant (or at least not a function of N)... maybe an octree-style hierarchical voxel grid that simulates a 3D "force field" to drive physics collisions, doing something similar to what Nvidia did with voxel global illumination in their moon demo, which was also a pure compute-driven voxel grid for bouncing light rays.
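A rough sketch of that broad-phase idea, using a simple uniform hash grid rather than an octree and written for the CPU for clarity (everything here is illustrative): bodies are binned into cells, so candidate pairs only come from the same cell and the pair search stays far below the all-pairs N² cost.

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct Body { float x, y, z, radius; };

// Hash a 3D cell coordinate into a single map key.
static uint64_t cellKey(int cx, int cy, int cz)
{
    return (uint64_t(uint32_t(cx)) << 42) ^
           (uint64_t(uint32_t(cy)) << 21) ^
            uint64_t(uint32_t(cz));
}

// Broad phase: bin bodies into grid cells, then only bodies sharing a cell
// become candidate pairs for the (expensive) narrow-phase test. A full
// version would also check neighbouring cells for bodies straddling a border.
std::vector<std::pair<int, int>> broadPhase(const std::vector<Body>& bodies, float cellSize)
{
    std::unordered_map<uint64_t, std::vector<int>> grid;
    for (int i = 0; i < static_cast<int>(bodies.size()); ++i) {
        int cx = static_cast<int>(bodies[i].x / cellSize);
        int cy = static_cast<int>(bodies[i].y / cellSize);
        int cz = static_cast<int>(bodies[i].z / cellSize);
        grid[cellKey(cx, cy, cz)].push_back(i);
    }

    std::vector<std::pair<int, int>> candidates;
    for (const auto& bucket : grid) {
        const std::vector<int>& cell = bucket.second;
        for (size_t a = 0; a < cell.size(); ++a)
            for (size_t b = a + 1; b < cell.size(); ++b)
                candidates.emplace_back(cell[a], cell[b]);
    }
    return candidates;
}
```

On a GPU the same binning maps naturally onto a compute pass that sorts bodies by cell index, which is presumably where async compute would earn its keep.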
Something like this:
 
I expect pseudo-physics like Havok games have, but more of them, with better interaction between them. I'm not talking about trash flying around (like some games). I'm talking about models getting deformed from a collision, structures collapsing if they become unstable, and water surfaces that displace when a dynamic object is on them. Things that are faked now on the CPU because negotiating the physical and visual aspects of them is too demanding to do in real time. In the case of water, I'm not talking fluid dynamics like PhysX does, where it literally simulates blobs of water. I'm talking about water providing realistic resistance (as a function of surface area and thrust) and reacting to waves; presently that's just very simple bob and resistance modifiers on movement.

While not 100% accurate, Red Faction: Guerrilla came pretty damn close, and it runs like a dream. Structural integrity is present there and things collapse, but the more you play with it, the more you can see the 'trick' behind it. It still manages to convince quite well, though. I do think these implementations are more 'playable'.

Then there is Space Engineers:


Deformation of objects is in there, along with craploads of physics, and it is a great example of how heavy CPU physics are in real time.
 
Red Faction: Guerrilla used Havok on the CPU. That's exactly what I'm talking about; it should be commonplace in games today, but it's not, for two reasons: 1) it's hell on net code, and 2) there's no infrastructure to make it easy to implement. Case in point: a lot of games have pre-scripted destructible walls now. This is something the basic physics support in game engines has been built to handle. With broad support for DirectPhysics, creating buildings, vehicles, and so on becomes less about visuals and more about structure. All things look the way they do because of physics. When an engine supports that basic principle, modelers are encouraged to embrace it as well. They aren't just making something pretty to look at; they're also building something functional. PhysX has already caused that shift of thinking in some games (like Farming Simulator and Unreal Engine), but a lot of developers are still stuck in that mentality of faking it until they make it (ragdoll is fundamentally the result of giving the body too little mass and delivering way too much force in the impact).

Space Engineers personifies what is wrong with CPU-based physics: it's way too easy to overwhelm, causing huge framerate drops. Guerrilla had that issue as well.

The sad thing about both of these games is that they had to reinvent the wheel to make their pseudo-physics simulations work. It's also painfully difficult to accelerate it on the GPU because the engine/middleware support fundamentally isn't there. That's the beauty of Havok/DirectPhysics: they can hybridize a CPU/GPU physics solution and get it widely implemented in game engines. DirectPhysics is really the only thing that can usurp PhysX, which has proven to be a dead end thanks to NVIDIA.

Many-core CPUs will also help solve this problem.
 
Games already use compute for physics; id Tech 6 has support for GPU-accelerated particles, and so do many modern engines. You are talking about simulations that are done through the usual means of solving differential equations, which get exponentially more difficult to compute with each object introduced into the simulation. That's most likely not how these engines compute physics; they more than likely use pseudorandom movements that are easy to compute for things like particles. And you can extend these models to many other things, such as deformation/destruction. These things do not have to be computed with perfect accuracy; they just need to be close enough to reality that they don't look off while maintaining a good level of dynamism.
Yeah, I knew about the compute part, like TressFX and HairWorks using DirectCompute, but both of those solutions have a noticeable impact on the framerate while still being "imperfect"; that's why I'm curious about the gains of async compute with DirectPhysics. Are they going to use it to lower the current strain on the framerate, or to push the effects further? And if the latter, just how much of an improvement could we see?

However, I know that some effects, like the water simulation in The Witcher 3, are not really hungry and are well made, but it only seems to work with the boat; it's less impressive when you are swimming, and they didn't mention what they are using for it.
 
They aren't just making something pretty to look at; they're also building something functional. PhysX has already caused that shift of thinking in some games (like Farming Simulator and Unreal Engine), but a lot of developers are still stuck in that mentality of faking it until they make it (ragdoll is fundamentally the result of giving the body too little mass and delivering way too much force in the impact).

...

The sad thing about both of these games is that they had to reinvent the wheel to make their pseudo-physics simulations work. It's also painfully difficult to accelerate it on the GPU because the engine/middleware support fundamentally isn't there.

I am of a totally different opinion; I think "faking it" is the way to go. It seems to me that what you want are perfect rigid-body and soft-body simulations on large scales, and that's simply not going to happen any time soon, with or without the help of GPU acceleration. It's just too much for current hardware.

When you blow up a wall in a game, I don't care whether each brick goes flying with the exact velocity it would need and lands where it should; all I care about is whether the overall movement of the objects looks natural. By "fake" I mean not dealing with this by classical means. I've looked this up, and it turns out you can replace very cumbersome calculations with statistical approximations in a typical rigid-body problem, for example. The result is that per object the simulation is not accurate, but in the grand scheme of things it should look perfectly adequate.
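For illustration, a minimal sketch of that kind of statistical shortcut: instead of solving contacts for every brick, each fragment's velocity is drawn from a distribution centred on the blast direction, so the debris cloud as a whole looks plausible. The names and numbers below are invented for the example:

```cpp
#include <random>
#include <vector>

struct Fragment { float vx, vy, vz; };

// Instead of solving contacts for each brick, draw every fragment's velocity
// from a distribution centred on the blast direction (+x here). Per fragment
// it is "wrong", but the overall debris cloud moves plausibly.
std::vector<Fragment> scatterDebris(int count, float blastSpeed, unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<float> along(blastSpeed, blastSpeed * 0.25f);  // spread in speed
    std::normal_distribution<float> side(0.0f, blastSpeed * 0.15f);         // lateral scatter

    std::vector<Fragment> fragments(count);
    for (auto& f : fragments) {
        f.vx = along(rng);                      // mostly away from the blast
        f.vy = side(rng) + blastSpeed * 0.10f;  // slight upward bias
        f.vz = side(rng);
    }
    return fragments;
}
```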

Yeah, I knew about the compute part, like TressFX and HairWorks using DirectCompute, but both of those solutions have a noticeable impact on the framerate while still being "imperfect"; that's why I'm curious about the gains of async compute with DirectPhysics. Are they going to use it to lower the current strain on the framerate, or to push the effects further? And if the latter, just how much of an improvement could we see?

However, I know that some effects, like the water simulation in The Witcher 3, are not really hungry and are well made, but it only seems to work with the boat; it's less impressive when you are swimming, and they didn't mention what they are using for it.

I think async will not solve the problem of having very robust physics engines that can be used to simulate a wide variety of things with little impact on performance.
 
Faking it has been the way to go thus far, not just with physics, but with gaming graphics overall. Tessellation, for example; but also things like LOD are ways of 'faking it' in a way that GPUs can handle well. No gamer is asking for scientific accuracy, and we know that realistically modelling all these interactions would be extremely costly, and it's a cost the target market won't be paying, ever.
 
Physics is fundamentally what drives games at their core, and there aren't many cases where it isn't faked. It's faked out of necessity, not want.
 
You guys are failing to realize that this "lil" company is taking on two giants at the same time with nowhere near those companies' money at hand.
Intel was founded in 1968, AMD in 1969, and Nvidia in 1993: 49, 48, and 24 years old. They've had roughly the same amount of time as Intel, and twice as much as Nvidia, to build themselves into a "giant". A series of terrible business decisions, underhanded practices by competitors, and marginal marketing tactics has made them what they are today (1/34th the size of Intel, 1/3rd the size of Nvidia, by total assets). They're doing alright, all things considered, but bad business and marketing decisions are still hurting them. Without better management, they are still in danger of bankruptcy.
 
Flashing an unmodded BIOS is not modding the BIOS. :p The driver verifies that the BIOS has a valid checksum at boot. Modding Linux to make it work doesn't actually make it work...


Don't burst his imagination bubble, goat!

Oh noes! A circle jerk attendee, me next!

Flashing an unmodded BIOS is not modding the BIOS. :p The driver verifies that the BIOS has a valid checksum at boot. Modding Linux to make it work doesn't actually make it work...

https://www.reddit.com/r/Amd/comments/5nb30f/bios_signature_check_bypass_patch/

Oh noes, because bypassing hash checks is so hard. Your little broken fingers can't Google for 3 seconds. It would be nice if people who obviously have no idea what they are talking about would just stop the anti-AMD circle jerk. Nope, random Internet troll #5322 has to rely on semantics to feel good about himself.
 
Can someone answer a question for me?

Does AMD control prices or does the retailer?

I'm asking because my local Micro Center has two PowerColor RX Vega 56 cards in stock and sells them for $599. I asked him why the price is so high; he informed me that it's AMD that sets the prices, not the retailer. I read here on TPU that it's the retailer that sets the prices. Is this store employee smoking crack or what?
 
I asked him why the price is so high; he informed me that it's AMD that sets the prices, not the retailer.

That's a load of BS; no manufacturer has full control over the price of their products unless they sell them directly (which obviously isn't the case here, as these cards are sold through retailers). Not to mention that there are also the AIB partners in this chain.

Just think for a moment how dumb it would be if retailers essentially let manufacturers dictate what profit they make.

I'll say it again in case some are still wondering: retailers aren't charities; they seek to make the biggest profit they can.
 
That's a load of BS; no manufacturer has full control over the price of their products unless they sell them directly (which obviously isn't the case here, as these cards are sold through retailers). Not to mention that there are also the AIB partners in this chain.

Just think for a moment how dumb it would be if retailers essentially let manufacturers dictate what profit they make.

I'll say it again in case some are still wondering: retailers aren't charities; they seek to make the biggest profit they can.

Can't I show them a link to AMD's claims about sticking to MSRP? Then maybe they'll sell to me for $399. Can't I just prove to them that I'm using it for gaming? Charge $599 to miners, not me.
 
"Manufacture Suggested Retail Price"

There's no way for them to enforce it. When they listed them at MSRP, they vanished within minutes. Supply versus demand dictates they raise their prices, and because all of them feel the product is worth more than MSRP (namely because of what GTX 1060s and RX 580s are going for), they all do it. In this kind of environment, MSRP has no meaning.

I wouldn't be surprised if manufacturers are overnight shipping everything because they know they can still make hundreds of dollars of profit just by getting it to market where cryptomining has taken hold at their inflated prices.


Only a Vega 56 with no extras, air cooled, *might* go for $400. No way you'll get a Vega 64 at that price. If the $600 card you're talking about is in stock, it's probably because it comes with extras that are unattractive to miners.
 
"Manufacture Suggested Retail Price"

There's no way for them to enforce it. When they listed them at MSRP, they vanished within minutes. Supply versus demand dictates they raise their prices, and because all of them feel the product is worth more than MSRP (namely because of what GTX 1060s and RX 580s are going for), they all do it. In this kind of environment, MSRP has no meaning.

I wouldn't be surprised if manufacturers are overnight shipping everything because they know they can still make hundreds of dollars of profit just by getting it to market where cryptomining has taken hold at their inflated prices.


Only a Vega 56 with no extras, air cooled, *might* go for $400. No way you'll get a Vega 64 at that price. If the $600 card you're talking about is in stock, it's probably because it comes with extras that are unattractive to miners.

Vega 56, just the card itself (no AMD package BS), is $599. Their Vega 64 (also without packages) is $699. They have like two Vega 64s in stock, and I just checked that they have 12 Vega 56s in stock. WhoTF will pay this price?

Look here:

http://www.microcenter.com/search/s...165&NTX=&NTT=&NTK=all&page=1&sortby=pricehigh

Choose "Queens/Flushing" store
 
The $700 Vega 64 is open box.

And no, Vega 56 isn't going to sell to gamers at $600 when GTX 1080s can be had for $570. When you look at mining, suddenly it all makes sense...
http://wccftech.com/ethereum-mining-gpu-performance-roundup/
[Chart: Ethereum mining hashrate (MH/s) by GPU]

Yes, yes, Vega isn't on there, but everyone is saying it does >30 MH/s, which means it is easily the highest-scoring card here.

If you can't wait it out, you're better off buying a GeForce card...which sucks at mining.
 
The $700 Vega 64 is open box.

And no, Vega 56 isn't going to sell to gamers at $600 when GTX 1080s can be had for $570. When you look at mining, suddenly it all makes sense...
http://wccftech.com/ethereum-mining-gpu-performance-roundup/
[Chart: Ethereum mining hashrate (MH/s) by GPU]

Yes, yes, Vega isn't on there, but everyone is saying it does >30 MH/s, which means it is easily the highest-scoring card here.

If you can't wait it out, you're better off buying a GeForce card...which sucks at mining.

I have a LG 34" ultra-wide Freesync Monitor, buying a 1080 is a no brainer but I wouldn't be utilizing the full potential of my monitor.
 
The GTX 1080 is hands down the best-value upper-mid-range/lower-high-end card on the market, and the 1080 Ti is decent as well, but it's the epitome of the GPU price creep we've seen over the last decade or so from one generation to the next. That creep happened due to a lack of overall competition from AMD/ATI, a result of their merger and the debt-balancing, cash-flow-struggle relationship that has really crippled R&D. The whole situation is a perfect storm for the gaming crowd: GPUs' increased emphasis on compute, lack of competition, and of course elevated prices. While gaming isn't shrinking (it's still growing), I'm sure the emphasis on it is waning because of the increased focus on GPU compute, where profit margins are higher for GPU makers. We've become somewhat forgotten, or not important enough in relative terms to worry, care, or be particularly concerned about. Essentially we just need to take what they throw our way, or that's how I'm seeing it. The demand and profit are higher elsewhere, even if gaming jump-started the party.
 
Come on, people: inflated GPU prices at launch, what's new here?

Also, inflated prices for any new product, especially one that's scarce: what's new? Hi, market economy. Let's move on, or just wait a couple of months.

As for the price creep, remember there is this thing called inflation, and cards are still delivering performance at roughly the same price points, sometimes even at a bit lower a price point, let's not forget, but currency does devalue over time. The strategy here is to hide a price increase behind some odd marketing, and that is what we're seeing. Overall, you see this once every 4-5 years as companies rebalance their price tiers with inflation and the market. They don't do this every year.

I have a LG 34" ultra-wide Freesync Monitor, buying a 1080 is a no brainer but I wouldn't be utilizing the full potential of my monitor.

If it's ultrawide 1080p, then for sure you will make that 1080 sweat, if not now then in 6 to 8 months, definitely. I can max it out on my measly 1080p too :)
 