
AMD Fury X "Fiji" Voltage Scaling

Good information there. I'm waiting for the software too :). Could you try writing an email to the AMD team?
 
Huge FPS increase? Nope... huge power bill increase? Yes!!! :laugh:
If someone has money to buy an enthusiast-grade card, I doubt the power bill would be an issue.
 
How do you know that you really OC'd the VRAM?
All software including CCC and GPU-Z reports the new clock, performance changes, artifacts start appearing at high clocks, and the card crashes at too-high clocks.
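Since the question is whether the VRAM OC actually applies, one way to sanity-check it yourself: GPU-Z can log its sensor readings to a text file, and a few lines of Python can confirm whether the reported memory clock ever hit the value you set. The column header and target clock below are assumptions; check the header row of your own log and adjust.

```python
# Minimal sketch: confirm a VRAM overclock actually "took" by scanning a
# GPU-Z sensor log (Sensors tab -> "Log to file"). The column name is an
# assumption -- check the header row of your own log and adjust.
import csv

EXPECTED_MHZ = 545   # the memory clock you dialed in (assumed value)
TOLERANCE = 5        # allow small rounding in the reported reading

def check_vram_clock(log_path: str) -> None:
    readings = []
    with open(log_path, newline="", encoding="utf-8", errors="replace") as f:
        reader = csv.DictReader(f)
        # GPU-Z pads its headers with spaces, so strip them first.
        reader.fieldnames = [h.strip() for h in reader.fieldnames]
        for row in reader:
            value = row.get("Memory Clock [MHz]")  # assumed header name
            if value and value.strip():
                readings.append(float(value))
    if not readings:
        print("No memory clock readings found - check the column name.")
        return
    applied = sum(1 for r in readings if abs(r - EXPECTED_MHZ) <= TOLERANCE)
    print(f"{applied}/{len(readings)} samples at ~{EXPECTED_MHZ} MHz "
          f"(min {min(readings):.0f}, max {max(readings):.0f})")

check_vram_clock("GPU-Z Sensor Log.txt")
```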
 
I wonder if there is some additional gain if we reduced the VRM temperatures. High VRM temperatures do reduce efficiency.

It would be great to see a follow-up on this article; my suggestions:
-Fury X with removed cover and added fan to improve airflow over the VRM area
-Fury X with custom waterblock

Is it possible to read the ASIC quality of a Fury X graphics card? I wonder how overclocking results vary from chip to chip.

Awesome work W1zzard!
 
I wonder if there is some additional gain if we reduced the VRM temperatures. High VRM temperatures do reduce efficiency.

It would be great to see a follow-up on this article; my suggestions:
-Fury X with removed cover and added fan to improve airflow over the VRM area
-Fury X with custom waterblock
Don't you think that what you're proposing is beyond the scope of TPU's graphics reviews?
The site's reviews don't centre on modded hardware.
 
Conclusion: buy a GTX 980 Ti.

Negative. I am still going to buy a Fury X, seeing as it is still a great card.
 
There's only one real reason to choose a lesser product in this day and age, mister fx.
 
So it is maxed out from the factory and a few percent slower than Nvidia's flagship.

Fury Nano will be the sweet spot.

Really curious to see how DX12 changes the field.

14nm = shut up and take my money
 
Looking at the numbers, I'm not sure if a 150W power draw increase for a mere 3 FPS increase is worth it for most gamers.
:eek: I'd reckon not. Better off installing a second.
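To put rough numbers on that trade-off (the baseline figures below are illustrative assumptions, not the review's exact data), the perf-per-watt math looks something like this:

```python
# Back-of-the-envelope perf-per-watt check for the "+150 W for +3 FPS" point.
# The stock baseline below is an illustrative assumption, not measured data.
stock_fps, stock_watts = 60.0, 300.0
oc_fps, oc_watts = stock_fps + 3.0, stock_watts + 150.0

perf_gain = (oc_fps / stock_fps - 1) * 100       # +5% frames
power_gain = (oc_watts / stock_watts - 1) * 100  # +50% power

print(f"Perf: +{perf_gain:.1f}%  Power: +{power_gain:.1f}%")
print(f"FPS per watt: {stock_fps / stock_watts:.3f} -> {oc_fps / oc_watts:.3f}")
```

Under those assumptions, efficiency drops from 0.200 to 0.140 FPS/W, which is why a second card starts looking reasonable.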
 
At least now we know that AMD has no clue what an overclocker is or what he/she dreams of... :cool:
I guess they had something else in mind when they said something about an "overclocker's dream"! :toast:

"Pump and dump" (P&D) is a form of microcap stock fraud that involves artificially inflating the price of an owned stock through false and misleading positive statements, in order to sell the cheaply purchased stock at a higher price. Once the operators of the scheme "dump" sell their overvalued shares, the price falls and investors lose their money. Stocks that are the subject of pump and dump schemes are sometimes called "chop stocks".[1]

While fraudsters in the past relied on cold calls, the Internet now offers a cheaper and easier way of reaching large numbers of potential investors.[1]



https://en.wikipedia.org/wiki/Pump_and_dump

AMD has gotten in trouble for this before. When Bulldozer was released, they were sued by their investors, and they are currently being investigated by the SEC for making the performance numbers for that chip look much better than the sad reality; we all saw the leaked AMD benchmarks showing it performing better than Intel's offerings at the time. Same thing with Fury: AMD even released performance charts showing it beating current Nvidia GPUs, and we all know how that ended.

You would think they would have learned the lesson. Seeing how last quarter's results put AMD further in the red, it won't surprise me at all when investors once again take AMD to court for share pumping. So sad, seeing all the effort put forward by their excellent engineering team go to waste because of bad management that will drive the company into the ground :(
 
Because @W1zzard is a damn wizard, that's why.

Also, if you didn't just read his previous post, testing is extremely time-consuming. I recall him stating once that all the testing for a single GPU review can take him weeks.
Don't register just to question the owner of the site, especially when:
1) He is not required to be so thorough in his testing, or even to perform the tests for us
2) He is one of THE names in the GPU industry, for he is the writer of GPU-Z and Sapphire TriXX
3) He is far more experienced in doing what he does than you are, he has been running this site, doing reviews, and helping to modify video cards for over a decade.
Show some respect.

Had to sign in to reply to this. Points 2 and 3 are just appeals to authority and ignore common sense.

Regarding 1: yes, he is required to be thorough; what would be the point of the site without these tests?

He may do what he wishes, but he should realize he has a responsibility to explain: 1. This is a single sample. 2. This is one game, which isn't even neutral, i.e. it favors Nvidia. BF3 is also old, with far more recent games available. A much better test would be 3DMark or another benchmark, since this is OC testing.

Great that he did the unlock before others, but that is no excuse for putting out weak sauce like this. If time is an issue, let someone else do the OC testing and perfect the tools instead. This article is an example of why some people consider this site less than fair. Some less logical individuals will fail to see the limitations/issues and march forth to proclaim the new gospel.

Personally I don't much care about this, though I am interested in the undervolt OC claims. Far more interested in what DX12 and Win 10 drivers bring.

"Pump and dump" (P&D) is a form of microcap stock fraud that involves artificially inflating the price of an owned stock through false and misleading positive statements, in order to sell the cheaply purchased stock at a higher price. Once the operators of the scheme "dump" sell their overvalued shares, the price falls and investors lose their money. Stocks that are the subject of pump and dump schemes are sometimes called "chop stocks".[1]

While fraudsters in the past relied on cold calls, the Internet now offers a cheaper and easier way of reaching large numbers of potential investors.[1]



https://en.wikipedia.org/wiki/Pump_and_dump

AMD has gotten in trouble for this before. When Bulldozer was released, they were sued by their investors, and they are currently being investigated by the SEC for making the performance numbers for that chip look much better than the sad reality; we all saw the leaked AMD benchmarks showing it performing better than Intel's offerings at the time. Same thing with Fury: AMD even released performance charts showing it beating current Nvidia GPUs, and we all know how that ended.

You would think they would have learned the lesson. Seeing how last quarter's results put AMD further in the red, it won't surprise me at all when investors once again take AMD to court for share pumping. So sad, seeing all the effort put forward by their excellent engineering team go to waste because of bad management that will drive the company into the ground :(

Yeah, saying a multi-core-centric chip was fast when single-thread was still king was a bad idea. But does the chip suck as much as some claim? Not with more threads in use.

Again, the question is one of interpretation. What does it mean to OC well? 200 MHz? 300? It depends on the architecture. It's not an objective claim, and since people have different ideas of what it means, they think badly of AMD. E.g. my 100 MHz OC 290X scores about the same as my 1508 MHz OC 970.
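A quick bit of math makes the point that a flat MHz figure means nothing without the base clock (stock clocks below are assumed reference values; actual cards vary):

```python
# Why "it overclocks by X MHz" is meaningless without the base clock.
# Stock clocks are assumed reference values; actual cards vary.
cards = {
    "290X, +100 MHz": (1000, 1100),       # assumed 1000 MHz stock
    "970, OC to 1508 MHz": (1178, 1508),  # assumed 1178 MHz reference boost
}
for name, (stock, oc) in cards.items():
    print(f"{name}: {stock} -> {oc} MHz = +{(oc / stock - 1) * 100:.0f}%")
```

So the 970's headline number is a ~28% bump while the 290X's is ~10%, yet the end results can still land in the same place.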

The reason I suspect BF3 should not be used is that people get better results without extra voltage. The article should not present this as fact; it should admit the limitations of the testing.

There's only one real reason to choose a lesser product in this day and age, mister fx.

You don't just buy a GPU for the now; you buy it for the months or years you will have it. There are legit reasons to expect the Fury X to be ahead in those months and years, not least of which is history. With just a few percentage points between them and similar power draw (OC with undervolt?), other things have to be considered.

My money is on the Fury X in the coming months. Personally I'm skipping until next year or 2017. Hopefully Nvidia will have cleaned up their act and sorted out their asynchronous shader implementation by then, so I have good options.
 
There's only one real reason to choose a lesser product in this day and age, mister fx.

4K, multi-GPU setup (because one GPU is not enough). CF Fury X wins, even against OC SLI 980 Ti.

There's that review floating around: 4x Fury X vs. 4x Titan X in a 4K battle. The Furies wreck the Titans; it seems SLI is broken beyond two GPUs.
 
Where did these Nekkers come from?
 
If the R9 Fury had been a really fast GPU, I would have bought it...
So, maybe next time, AMD?
 
This is one game, which isn't even neutral, i.e. it favors Nvidia. BF3 is...
...and yet Battlefield 3 is an AMD Gaming Evolved title, and was part of AMD's gaming bundle for a veritable age.
Far more interested in what DX12 and Win 10 drivers bring.
When DX12 becomes relevant it will become interesting, and since AMD seem to have written off DX11 thanks to their driver overhead issues and Mantle, we'd all better hope those resources have been well spent.
 
@W1zzard I'm rather interested in GPU load figures during the testing.

Reason: "On paper", the Fury X looks like a blazing fast card. Should, in theory, make everything else look like toys. Meanwhile, the performance is much lower than expected and the gains from overclocking are very low. The reason for such oddly bad performance is very likely either one of the two:
· the driver has a severe CPU overhead, being unable to feed commands to the GPU fast enough, stalling it a lot (which would explain why [IIRC] it performs so much better relative to Nvidia cards on very high resolutions and low perf gain from overclocking)
· their shader compiler can't optimize for sh*t, leading to all brunt muscle the Fury X has to be wasted on doing needless calculations (doesn't really explain the better relative perf on high res and low overclocking perf gain)

Seeing GPU load figures would greatly help figure out which is the case.
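One rough way to probe this without driver internals: log GPU load during the benchmark run and count how often the GPU is actually pegged. A sketch of that heuristic, with made-up sample data (the ~98% threshold is an assumption):

```python
# Rough heuristic: if the GPU sits well below full load mid-benchmark, the
# driver/CPU likely isn't feeding it fast enough (CPU-overhead case); if it
# is pegged throughout, the bottleneck is on the GPU/compiler side.
def classify(load_samples, pegged=98.0):
    busy = sum(1 for s in load_samples if s >= pegged) / len(load_samples)
    if busy > 0.95:
        return f"GPU-bound ({busy:.0%} of samples pegged)"
    return f"Likely CPU/driver-bound (only {busy:.0%} of samples pegged)"

# Made-up load samples for illustration; real ones would come from any
# sensor logger (e.g. GPU-Z's log file).
print(classify([99, 99, 97, 84, 91, 99, 88, 95, 99, 90]))
```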
 
All software including CCC and GPU-Z reports the new clock, performance changes, artifacts start appearing at high clocks, and the card crashes at too-high clocks.
Strange. Why then is somebody at AMD saying that it's not possible to overclock the VRAM?
 
4K, multi-GPU setup (because one GPU is not enough). CF Fury X wins, even against OC SLI 980 Ti.

There's that review floating around: 4x Fury X vs. 4x Titan X in a 4K battle. The Furies wreck the Titans; it seems SLI is broken beyond two GPUs.

Who uses four GPUs for gaming anyway? Very few people are gaming at 4K, and for those who are, two Fury Xs or two 980 Tis are plenty, so why should either company devote many resources to drivers for the tiny few running more than two of the above?
 
...and yet Battlefield 3 is an AMD Gaming Evolved title, and was part of AMD's gaming bundle for a veritable age.

When DX12 becomes relevant it will become interesting, and since AMD seem to have written off DX11 thanks to their driver overhead issues and Mantle, we'd all better hope those resources have been well spent.

It was actually free with Nvidia cards, but it is not as bad as other games, I guess. Still, it's better to review new cards with newer, more relevant games, and for overclocking it's probably better to see the performance gain with synthetics, which would also be easier on time than testing with a game.

DX12 is already relevant if you're buying a GPU now.

@W1zzard I'm rather interested in GPU load figures during the testing.

Reason: "On paper", the Fury X looks like a blazing fast card. Should, in theory, make everything else look like toys. Meanwhile, the performance is much lower than expected and the gains from overclocking are very low. The reason for such oddly bad performance is very likely either one of the two:
· the driver has a severe CPU overhead, being unable to feed commands to the GPU fast enough, stalling it a lot (which would explain why [IIRC] it performs so much better relative to Nvidia cards on very high resolutions and low perf gain from overclocking)
· their shader compiler can't optimize for sh*t, leading to all brunt muscle the Fury X has to be wasted on doing needless calculations (doesn't really explain the better relative perf on high res and low overclocking perf gain)

Seeing GPU load figures would greatly help figure out which is the case.

People should be careful about this driver overhead business. DX11 might just plain suck.
 
It was actually free with Nvidia cards, but it is not as bad as other games, I guess. Still, it's better to review new cards with newer, more relevant games, and for overclocking it's probably better to see the performance gain with synthetics, which would also be easier on time than testing with a game.

DX12 is already relevant if you're buying a GPU now.



People should be careful about this driver overhead business. DX11 might just plain suck.
You have that backwards; it was free with AMD products, not Nvidia products...

But it's actually a pretty unbiased game (at least at this point).

The overclocking results do seem a bit disappointing, but I am more impressed with the overclock that can be achieved while lowering the voltage!
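For intuition on why the undervolt buys so much: to first order, dynamic power scales as f × V², so a small voltage drop can pay for a clock bump outright. The clocks and voltages below are illustrative assumptions, not measured Fury X values:

```python
# First-order dynamic power model: P ~ f * V^2. Illustrates why a small
# undervolt can offset an overclock. All numbers are assumed, not measured.
def rel_power(f0, v0, f1, v1):
    return (f1 / f0) * (v1 / v0) ** 2

# e.g. +5% core clock at -50 mV from an assumed 1.200 V stock:
change = rel_power(1050, 1.200, 1103, 1.150)
print(f"Relative dynamic power: {change:.2f}x")  # ~0.96x despite the OC
```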
 
You have that backwards; it was free with AMD products, not Nvidia products...

But it's actually a pretty unbiased game (at least at this point).

The overclocking results do seem a bit disappointing, but I am more impressed with the overclock that can be achieved while lowering the voltage!

http://www.geforce.com/landing-page/get-bf3

You are thinking of Battlefield 4. I don't blame you; it's what most would expect to see here. It's an old game that could be glossed over by newer cards and drivers.
 
people should be careful about this driver overhead business. dx11 might just plain suck.

AMD's [proprietary] driver is also notorious for very high CPU overhead on OpenGL under Linux, especially when compared to Nvidia's [proprietary] driver. AFAIK both share a [very] large portion of their codebases with their Windows counterparts.
So I'd assume the overhead is also somewhere in the more fundamental parts of the driver. IIRC Nvidia allows itself to assume a lot more when it comes to state validation, while AMD's driver is pretty much paranoid, checking everything rigorously (which takes quite a few clock cycles while still falling flat in some cases).

P.S. Yeah, D3D11 does just plain suck, though.
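For anyone curious what cheaper state validation looks like in practice: the usual trick is to cache the last-set state and skip the expensive validate/commit path when a change is redundant. A toy sketch of the idea; real drivers do this in C/C++ deep inside the API layer, so this is just the shape of it:

```python
# Toy redundant-state filter: only run the "expensive" validate+apply path
# when a state value actually changes.
class StateCache:
    def __init__(self, apply_fn):
        self._current = {}
        self._apply = apply_fn  # stands in for the costly validation path

    def set(self, key, value):
        if self._current.get(key) == value:
            return              # redundant change: no validation round trip
        self._current[key] = value
        self._apply(key, value)

gpu = StateCache(lambda k, v: print(f"validate+apply {k}={v}"))
gpu.set("blend_mode", "alpha")     # applied
gpu.set("blend_mode", "alpha")     # skipped
gpu.set("blend_mode", "additive")  # applied
```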
 