Monday, February 25th 2013

AMD Teases New TressFX Technology

In our last teleconference with AMD's graphics team, the company briefly touched on a new feature it was working on with Square Enix, to ship with the upcoming Tomb Raider franchise reboot. It turns out AMD has a name for it: TressFX. The company says it will detail the technology tomorrow (26/02), but signed its teaser off with a catchphrase: "Render, Rinse, Repeat." To us, it sounds like a technology that helps GPUs render elements such as hair and foliage more accurately. Then again, we wonder how it would make Lara Croft, who ties her hair into a French braid, look any hotter. Something to look forward to on the 26th.

51 Comments on AMD Teases New TressFX Technology

#1
Dj-ElectriC
Somebody said realistic booby physics?

Oh hellllloooo
#2
Quantos
Ok, can someone explain who actually made this, and who it's for? Did Havok develop this? Is it limited to AMD cards, or is AMD just announcing it for DX11? :wtf:
#3
omnimodis78
TheMailMan78 said:
Doesn't matter now. AMD controls development when it comes to gaming. They locked down all three consoles. End of story.
I have a feeling that Intel and nVidia aren't exactly just going to say "well, it was fun, folks!" - not to mention, you do realize the excessive cost constraints on manufacturers of console parts, right? AMD will probably just be meeting quotas, if even that. All is not peachy at AMD, and these console contracts aren't anything new. All it takes is a little "misunderstanding" between supplier and buyer for things to go bad real fast. Also realize that AMD's APU most likely won't be doing anything custom. Let's wait and see.
#4
TheMailMan78
Big Member
omnimodis78 said:
I have a feeling that Intel and nVidia aren't exactly just going to say "well it was fun folks!" […]
Doesn't matter. They are not gonna say "AMD is late on a shipment, we better order NVIDIA chips instead" :laugh: NVIDIA has NOTHING going on right now except an overpriced GPU in a dying desktop market. AMD has a strong APU and a monopoly in the console market. I highly doubt AMD is going bankrupt next year. :rolleyes: :laugh:
#5
omnimodis78
TheMailMan78 said:
Doesn't matter. They are not gonna say "AMD is late on a shipment we better order NVIDIA chips instead" […]
I seriously doubt as well that AMD (as a whole) is on the edge of financial ruin, so I wasn't even implying that - nor is it an immediate option for a buyer to go to the competitor the instant quotas aren't met. But I have the feeling that AMD is in a bit over its head with these new contracts. It's almost like they over-promised (happens all the time in contract bidding). Nvidia is doing quite well; its portfolio is healthy and viable enough to remain competitive. AMD just got lucky with this, now let's see if those teeth can actually bite.
#6
Xzibit
omnimodis78 said:
I have a feeling that Intel and nVidia aren't exactly just going to say "well it was fun folks!" […]
Nvidia saw this coming... Tegra Zone, Project Shield, GeForce Experience. Looks like an application store, a handheld, and a fancy but clumsy gaming slider that took five years to make. You'd think the big three (Nintendo, Microsoft and Sony) were looking at Nvidia and said "let's give them money," when it will be competing for a similar customer base.

Whatever Intel's and Nvidia's plans are, they have to take a back seat for the next five years, or until the next round of consoles comes.

omnimodis78 said:
AMD just got lucky with this
Is that why Nvidia hired Bob Feldstein, the man who helped secure the contracts? To get lucky themselves?
#7
esrever
Pretty sure everything will run on DirectCompute (or maybe OpenCL), since AMD has no way of implementing a proprietary physics engine that won't run on some cards, especially since AMD hasn't released any new hardware. This will be something AMD can market under Gaming Evolved, and likely license out to next-gen game engines on the new consoles.
#8
RejZoR
theubersmurf said:
Why? You don't think it was better for the two major physics API makers to be bought up and made proprietary? I think we all know, nvidia, no um, intel, um, errr, wait, uh... no one is better off with all of these software APIs being made proprietary...

I'm too cynical about this now to even worry about this little thing. Back when Ageia and Havok got bought, there was a real opportunity for GPU physics that would have made a difference. Rather than Ageia and Havok making their APIs run on either OpenCL or DirectCompute so that there could be competition, we wound up with this crap... all physics limited by the CPU forever. Nvidia can't get developers to alienate AMD GPU owners by making the physics that use PhysX too stressful, and I'm sure Intel will never let Havok build a version of its API that makes use of GPU compute tools; it would just strengthen AMD's position. The whole thing is a sad, fucked affair that will forever annoy me.

The point to all this being... it's too late. The idea of a major third party stepping in with the skill to be competitive with either PhysX or Havok, one that would stay independent and simply license its API to keep the playing field level, is just a pipe dream. It's not impossible for someone to do it, but don't hold your breath.
You are missing the point here. Havok is and was Intel's, and it worked on AMD- or Intel-based systems. So basically, whatever system you had, the physics were there. That's why we ALL enjoyed Half-Life 2's physics. Now imagine Half-Life 2 with hardware-accelerated physics that would run on ALL graphics cards, be it Intel (these are still rubbish, but anyway), AMD or NVIDIA. I bet that what Valve did in Half-Life 2 by integrating physics into gameplay was limited by the hardware itself. If they did a bit too much with it, it might have been too taxing on systems of that time. But with HW physics, they could have done five times the physics integrated into gameplay and still not made the system sweat.

Now back to reality. What do all PhysX-powered games have in common? They were all dumbed down for anything non-NVIDIA just to make PhysX look awesome, even though half of it could easily be done on any CPU. Instead, they completely cut it out to artificially make PhysX look better.

And even if we look at PhysX running on NVIDIA-powered systems, what did they realistically gain? Realistically moving fog and some liquids that still looked like mercury. The rest was some rubbish shattered glass that I saw back in 2001 in Red Faction 1, and which still looks phenomenal even today; I've seen an insane environment in, yet again, Red Faction: Guerrilla running entirely on the CPU. I've seen amazing physics details on all the misc items in the world of Max Payne 2, years before anyone even heard of HW physics. Same goes for cloth simulation: UT99 in 1999 had the most amazing moving flags. Sure, they were not dynamic, but the movement was so freakin' authentic. And Max's leather jacket in Max Payne 2, yet again. PhysX hasn't done anything others haven't done on the CPU. They just overdid it on all ends so it ran like shit even on the fastest quad cores, to the extent that it often looked just plain ridiculous.

Graphics evolved to what we have today because ANYONE can get them, regardless of whether you have a gfx card from one camp or the other. But examples like 3dfx Glide, Creative EAX, PhysX and AMD's Bullet Physics never really achieved anything, because each only ran on one vendor's hardware, and because of that you can't afford to integrate it into a game's core gameplay. And if you can't do that, you can only do pointless, gimmicky things with it. Well, who gives a toss about kicking some rubbish cans around or watching an old newspaper floating around in a virtual wind? A CPU can do that sort of nonsense.

What we need is an open physics standard that anyone can use and integrate into core gameplay, and that any end user can enjoy. Who cares if AMD controls all three consoles at the moment? There are still NVIDIA users, who aren't exactly in the minority, and because of them you again can't use TressFX in core gameplay, because you'd make the game uninteresting for all the NVIDIA users, and as a game developer you just can't afford that.

So for the love of gamers and yourselves, NVIDIA and AMD, stick the exclusives of your physics engines up yours (both of you), sit down at the same table, and work out an open physics engine everyone can benefit from, so we can finally move horribly static games into something they could have evolved into almost 8 years ago. Just think of how dynamic gameplay would be if physics in games were actually real, not the stuff they sell to us right now as "realistic". Imagine Battlefield maps that would actually, for real, dynamically change over the course of combat. But because the standard hasn't been worked out, you see a wall chip here, a wooden plank broken there, some plant knocked over there, and that's pretty much it. Not amused, because by now they could ALL do a lot better...
#9
Lionheart
omnimodis78 said:
I am an Nvidia fanboy & I hate AMD
I corrected you:roll:
#10
TRWOV
PPLL = Programmable Phase-Locked Loop? :confused:
#11
theoneandonlymrk
A hell of a lot of ranting going on. Remember, it's AMD that uses open standards and doesn't do proprietary APIs, whereas Nvidia are such total bandits that I have to hack my hybrid card to use it. A simple thing, but so annoying, as I've bought it and it's reasonable to expect to be able to use it as I wish, not as befits their ideals.
Anyway, I approve. AMD, carry on.
#12
SaltyFish
Long story short, it's potentially AMD's middle finger to PhysX. Whether it'll be non-proprietary is anyone's guess.

These proprietary features potentially bring in money for the manufacturers, but they stagnate game development. Long hair, frilly dresses, tall grass... yeah, they're computationally expensive. But it's mostly cosmetic, so why not pass it off to all those idle CPU threads that game programmers are too lazy to use for anything else?
#13
theubersmurf
RejZoR said:
You are missing the point here. Havok is and was Intel's and it worked on AMD or Intel based systems. […]
Oddly, you agreed with me for the most part, except the part where I said no one is going to build a third physics API on the level of Ageia or Havok, and you said what we need is to build a third physics API.
#14
Widjaja
I won't be too impressed if the 'AMD only' enhancements are to do with just Lara's hair.
#15
AphexDreamer
It'd be pretty cool if it's some DX11 GPU physics implementation or something. I'm actually looking forward to the announcement.
#16
natr0n
My guess is a 3-GPU config, with one handling some kind of physics.
#17
TRWOV
I don't know why AMD keeps doing this. Why don't they go straight to the announcement? This stuff just gets everyone's hopes up... unless AMD feeds on our broken hearts and shattered dreams, I don't see what they accomplish with this.
#19
erocker
TRWOV said:
I don't know why AMD keeps doing this. Why don't they go straight to the announcement? This stuff just gets everyone's hopes up... unless AMD feeds on our broken hearts and shattered dreams, I don't see what they accomplish with this.
Soooo many companies do this. It's "run of the mill" marketing.
#20
TRWOV
I know, it just bothers me to no end... or maybe I'm just impatient. :p
#21
natr0n
Bjorn_Of_Iceland said:
What happened to Uno and DosFX?

#22
Rahmat Sofyan
Cool...


TressFX Hair revolutionizes Lara Croft’s locks by using the DirectCompute programming language to unlock the massively-parallel processing capabilities of the Graphics Core Next architecture, enabling image quality previously restricted to pre-rendered images. Building on AMD’s previous work on Order Independent Transparency (OIT), this method makes use of Per-Pixel Linked-List (PPLL) data structures to manage rendering complexity and memory usage.

DirectCompute is additionally utilized to perform the real-time physics simulations for TressFX Hair. This physics system treats each strand of hair as a chain with dozens of links, permitting for forces like gravity, wind and movement of the head to move and curl Lara’s hair in a realistic fashion. Further, collision detection is performed to ensure that strands do not pass through one another, or other solid surfaces such as Lara’s head, clothing and body. Finally, hair styles are simulated by gradually pulling the strands back towards their original shape after they have moved in response to an external force.

Graphics cards featuring the Graphics Core Next architecture, like select AMD Radeon™ HD 7000 Series, are particularly well-equipped to handle these types of tasks, with their combination of fast on-chip shared memory and massive processing throughput on the order of trillions of operations per second.
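Very roughly, the Per-Pixel Linked-List approach described above collects every transparent hair fragment that lands on a pixel in arbitrary order, then sorts and blends them at resolve time. Here is a minimal CPU sketch of that resolve step for a single pixel; all names are hypothetical, and the real thing runs on the GPU with atomic append buffers via DirectCompute:

```python
# Simplified CPU analogue of resolving one pixel's PPLL fragment list.
# The GPU version appends (depth, color, alpha) nodes with atomics during
# rasterization; this sketch only models the final sort-and-blend.

def resolve_pixel(fragments, background):
    """Blend a pixel's transparent fragments in back-to-front order.

    fragments  -- list of (depth, color_rgb, alpha) tuples collected in
                  arbitrary rasterization order, as a PPLL would gather them
    background -- opaque color behind all the fragments
    """
    color = background
    # Sort by depth, farthest first -- the step the linked list defers
    # until all fragments for the pixel have been collected.
    for depth, frag_color, alpha in sorted(fragments, key=lambda f: -f[0]):
        # Standard "over" alpha blending onto the accumulated color.
        color = tuple(f * alpha + c * (1.0 - alpha)
                      for f, c in zip(frag_color, color))
    return color
```

Because the list is sorted at resolve time, the blended result is independent of the order in which strands happened to be rasterized, which is the whole point of OIT.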
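The "strand as a chain with dozens of links" simulation can likewise be sketched with Verlet integration plus distance constraints, a standard technique for chains and cloth. This is a hypothetical 2D CPU analogue under assumed constants, not AMD's actual GPU code:

```python
# Rough sketch of the strand-as-chain hair physics described above.
# All names and constants (GRAVITY, STIFFNESS, LINK_LENGTH) are made up.

GRAVITY = -9.8      # m/s^2, pulls every free link down each step
STIFFNESS = 0.05    # fraction by which a link relaxes back to its styled shape
LINK_LENGTH = 0.01  # rest distance between adjacent links

def step_strand(pos, prev, rest, dt):
    """Advance one hair strand by one time step.

    pos  -- current (x, y) of each link, root first
    prev -- positions from the previous step (Verlet stores no velocities)
    rest -- the strand's styled rest shape it gradually returns to
    """
    new = []
    for i, ((x, y), (px, py), (rx, ry)) in enumerate(zip(pos, prev, rest)):
        if i == 0:
            new.append((x, y))          # root link is pinned to the scalp
            continue
        # Verlet integration: velocity is implied by the last two positions.
        nx, ny = x + (x - px), y + (y - py) + GRAVITY * dt * dt
        # Shape preservation: pull back toward the styled rest pose.
        nx += (rx - nx) * STIFFNESS
        ny += (ry - ny) * STIFFNESS
        new.append((nx, ny))
    # Distance constraints: keep adjacent links LINK_LENGTH apart so the
    # strand behaves like a chain of rigid links rather than a spring.
    for i in range(1, len(new)):
        ax, ay = new[i - 1]
        bx, by = new[i]
        dx, dy = bx - ax, by - ay
        d = (dx * dx + dy * dy) ** 0.5 or LINK_LENGTH
        scale = LINK_LENGTH / d
        new[i] = (ax + dx * scale, ay + dy * scale)
    return new, pos  # new positions; current becomes next step's "previous"
```

Collision detection against the head and body, which the press release also mentions, would be an additional constraint pass after the distance constraints and is omitted here for brevity.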
#25
alwayssts
See Lara The Way It's Meant To Be Braid. (Totally a missed opportunity on Roy and Co.'s part)

In all seriousness though, this is good... and I'll tell you why I think it matters beyond the ridiculous name.

Who had tessellation in their gpus first? ATi.
Who used it? Nobody.
Who capitalized on the feature? nvidia (Fermi gen).

Who went for tremendous amounts of flops in their architecture first? ATi.
Who took advantage of it? Nobody.
What happened next? We got proprietary PhysX and CUDA on nvidia hardware using its lesser flops, but it made them oodles, because they got behind unified concepts for utilizing those flops in a productive way. Now, nvidia's arch has caught up in that respect.

Seriously, peeps. What this says is that AMD is keeping it industry-standard with DirectCompute/OpenCL etc. (which is GREAT), but capitalizing on its innovations in a tangible way (with something even I didn't know was in their GPUs). This is a very key change in direction from what has always been wrong with (let's call them, for the sake of clarity) ATi. Engineering and cool new tech has never been a problem for the red guys; it's a big part of why people like me have always liked their camp (that, and their openness in the past to discuss low-level arch/design decisions and particulars with us laymen in forums like B3D/Rage3D etc., and with people like Anand in the press... ATi always seemed like open and friendly engineers from top to bottom). But their marketing, software-developer relations, and adoption of their own forward-thinking ideas always have been (and in many cases the same is true of AMD) absolutely terrible. In a lot of ways they were (and perhaps still are) a direct inverse of nvidia: ATi creates (CrossFire, Eyefinity, etc.), nvidia exploits (SLI, 'Surround', etc.). Marketing ALWAYS wins the battle, because perception is created by tangible examples of use. ATi was and has always been the Diamond Rio PMP300 to nvidia's iTunes.

This is a serious step in the right direction. When you factor in the driver pushes lately, things may see a tremendous turnaround in reality, and eventually in perception, of their current (if not future) pushes into industry-first features. That is awesome, and truly worthy of discussion.