
GFXBench Validation Confirms Stream Processor Count of Radeon Fury

It has nothing to do with PhysX; it's because of how different the GCN and Maxwell GPU architectures are ... GCN has fantastic theoretical throughput, and if the type of problem/algorithm is suitable it really shines. There are some instances where it's less so (as you noticed with the particle simulation) because the scheduler can't reshuffle the instructions to keep all the vector units inside the GCN cores fully busy all the time.
Nvidia has a more versatile architecture IMO, although with less theoretical throughput. They did a fantastic job refining it so that instruction scheduling is much simpler, and they improved the cache subsystem by introducing another level of caching.
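
To give a rough idea of the kind of workload where that theoretical throughput shows (a minimal CUDA-style sketch of my own, nothing from GFXBench itself): every thread below does completely independent work, so any scheduler can keep the SIMD lanes full and raw ALU throughput decides the result.

```
#include <cstdio>
#include <cuda_runtime.h>

// Every thread's work is independent: any warp scheduler can keep the
// SIMD lanes full, so raw ALU throughput (GCN's strength) dominates.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];  // no cross-thread or cross-step dependencies
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```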

That is incorrect. GCN is more suited for general purpose processing because its scheduling is almost entirely driver independent, unlike Nvidia's Maxwell. Nvidia since Fermi has moved most of the scheduling to their drivers, and according to them scheduling takes up less than 4.5% of a Maxwell SM cluster's area.

The reason the particle simulation test runs better on AMD is probably because it's a port of the CUDA version of that test.
 
They don't really have a choice. They are still stuck at 28 nm, which has been the mainstay for over 3 years. The process node is stagnant because the jump to 20 nm never happened, so even Nvidia can do nothing truly new until 2016, probably around Christmas. It is what it is. With a drop to 16 nm AMD can redesign all cards to use the same GCN version and move forward, but until then we are stuck with rebrands, because it doesn't make sense to waste R&D cash on a new GPU that performs exactly the same as a previous-gen product.

If they have to rebrand I'd rather see the HBM card slot in at the top as a 390X and everything trickle down a peg so that when the charts come out it looks like there are performance gains at each segment.
290X -> 380X
290 -> 380
285 -> 370
270X -> 360
 
That is incorrect. GCN is more suited for general purpose processing because its scheduling is almost entirely driver independent, unlike Nvidia's Maxwell. Nvidia since Fermi has moved most of the scheduling to their drivers, and according to them scheduling takes up less than 4.5% of a Maxwell SM cluster's area.

The reason the particle simulation test runs better on AMD is probably because it's a port of the CUDA version of that test.
Let's not mix things up here: the driver issues draw calls, moves data to VRAM and back, and loads compiled shaders. The fact that the ASM code of a shader can be optimized by the driver and/or a compiler is irrelevant once it's in the GPU's instruction cache. The driver doesn't constantly schedule instructions; that would be ridiculously slow.
There are different kinds of schedulers here. Maybe you mean work group scheduling (which thread block/workgroup goes to which SMM), which can be done in the driver. I'm talking specifically about the warp instruction scheduler (that 4.5% of the SMM).
You are right about the GCN scheduler being totally independent ... that doesn't change the fact that you can have a real-world problem only solvable by code that can't feed those big vector processors in a GCN core optimally, for example something with lots of scalar instructions whose calculations depend on each other, plus lots of branching in the mix ... in that case, OpenCL source or CUDA port, it doesn't really matter; those frameworks are very similar.
What does matter is optimizing specifically for one architecture over another, and I can't really say whether the chosen benches are biased that way.
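
To make the "can't feed the vector units" case concrete, here's a contrived sketch of my own (hypothetical, not one of the benchmark's kernels): a long serial dependency chain plus a data-dependent branch, which leaves the scheduler nothing to reshuffle.

```
#include <cstdio>
#include <cuda_runtime.h>

// Contrived counter-example (hypothetical, not from GFXBench): every step
// depends on the previous result, so there is no independent work for the
// warp/wavefront scheduler to reshuffle, and the data-dependent branch
// makes lanes diverge -- both starve a wide SIMD unit like a GCN CU.
__global__ void dependent_chain(int n, float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float v = data[i];
    for (int k = 0; k < 64; ++k) {
        if (v > 0.5f)              // branch depends on data: lanes diverge
            v = sqrtf(v) * 0.99f;  // next iteration needs this result
        else
            v = v * v + 0.01f;     // serial dependency chain, no ILP
    }
    data[i] = v;
}

int main()
{
    const int n = 1 << 20;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = (float)i / n;

    dependent_chain<<<(n + 255) / 256, 256>>>(n, data);
    cudaDeviceSynchronize();

    printf("data[1] = %f\n", data[1]);
    cudaFree(data);
    return 0;
}
```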
 
Too bad there's no love for AMD in this world; even most TPU members are Nvidia users.
The world will be a better place without AMD. We're gonna be just fine with only Nvidia and Intel.
I agree, I haven't seen an AMD user around here in like forever. :rolleyes:o_O
 
There are 2 shades of green in the charts. What do they represent? (Median score and best submitted score?)
 
Too bad there's no love for AMD in this world; even most TPU members are Nvidia users.
The world will be a better place without AMD. We're gonna be just fine with only Nvidia and Intel.
The two comments I've read from you on two different AMD news items ... both were awfully wrong ... well ... although they are amusing.

I particularly like the "most of us are Nvidia users" bit; 7 of us who have posted so far in this thread are AMD users..... don't feed the trolls, people! Oops, I just did o_O
ARGH... oh well, too late; I wrote it, so I'm posting it ... I hate wasting.
 
So while the SP count has been confirmed, we essentially have to pick our poison here? In 2 benches it's faster, in 2 it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290 at 2 years old (chip-wise; I only got it last July) is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that Nvidia famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a 2-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
 
So while the SP count has been confirmed, we essentially have to pick our poison here? In 2 benches it's faster, in 2 it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290 at 2 years old (chip-wise; I only got it last July) is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that Nvidia famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a 2-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
AMD cards since at least the 7000 series have been better at OpenCL, in some cases a lot faster at it. Problem is, that doesn't mean crap in gaming. So it's one of those take-it-with-a-grain-of-salt kinds of benchmarks. Look at how AMD compares their APUs to Intel CPUs in laptops: most benchmarks they use are ones that can be GPU accelerated. It looks great to people who don't understand, but terrible to people who realize the benchmarks they used don't reflect real-world use.

It's the same as how politicians pick and choose their statements: they say things that are technically correct, until you start looking at things as a whole and see they nitpicked things in their favor.

Had a look at the 8800 GTS 512: the 65 nm G92 core was only rebranded to the 9800 GTX; the GTX+ was G92b, which was 55 nm.
 
I particularly like the "most of us are Nvidia users" bit; 7 of us who have posted so far in this thread are AMD users..... don't feed the trolls, people! Oops, I just did o_O

iGPU FTW! What about us iGPU campers? :D
 
So while the SP count has been confirmed, we essentially have to pick our poison here? In 2 benches it's faster, in 2 it's quite a bit slower. Hopefully standard reviews will shed some light.

At any rate, it seems my R9 290 at 2 years old (chip-wise; I only got it last July) is still relevant enough for AMD to simply bump the clocks, add memory, and rebrand?

I get that Nvidia famously took the 8800 GTS 512 MB and rebranded it twice, but that was over a 2-year period. They could at least drop these down to 380X and 380; otherwise, wtf is Fury (XT) going to be named?
Well, the G92 got a die shrink to extend its life; this rebrand is just spinning on pretty much the final 28 nm node, so you shouldn't expect much outside of what the 285 already brought. That being said, there are few straight-up rebrands outside of OEM.
 
I read months ago that it would be as strong as a 295X2, so I was laughing when I saw this post saying it would be weaker than a 980 Ti. That HBM don't play... as in, there needs to be a little balancing going on. There isn't much point putting it on something like a 280X or 290X in its current form; it may be different for Zen if they use it in an integrated APU form. Only top-tier Pascal will beat it all around, but not for long.
 
WTF guys, why all the crying about the naming? The FURY XT will be called just that, the AMD FURY XT. It is to set itself apart from the Radeon 390 line. And these benches seem to lend an answer as to why the HBM cards will be Fury Pro and Fury XT.
 
This seems "typical" of Radeons. They seem to chug through the pixels at high res, but the driver efficiency isn't there for crazy-high fps at low res. At least that is my take.
That's what I have noticed through the years too.
 