
FuryX completely abandoned now, barely matching the 1060 or 580!

Joined
Feb 3, 2017
Messages
3,475 (1.33/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
HBM's main advantages in general really weren't bandwidth. They're latency, power efficiency and density. While all of this is beneficial for a gaming GPU, everything is overshadowed by bandwidth and cost. Cost in particular is the biggest factor here - gamers don't care about size, VRAM power is noticeable but minor, and latency is pretty much irrelevant. Just give us the performance for the lowest possible price, right? With GDDR5X and GDDR6 in the picture, there are considerably cheaper and simpler alternatives available for similar bandwidth.
 
Joined
Jun 28, 2018
Messages
299 (0.14/day)
The 980 Ti was always the better card; its reference 1000 MHz base clock always makes it appear slower in charts too, when it could easily gain 25-30% over that.

My 980 Ti Gaming G1, although already a version with very high clocks compared to reference, still let me extract another 200 MHz easily. Max temps of 70°C in the summer and 60-65°C in winter.

I still play pretty much everything at 1440p with good quality. Not at max settings in the most demanding titles, of course, but close to it. It's still impressive, considering it's a card that's been going for almost four and a half years.
 
Last edited:
Joined
Apr 16, 2019
Messages
632 (0.35/day)
Yeah, 980Ti really was the last godlike overclocker; as the saying goes - we shall never see their like again :D
 

Chloefile

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
10,879 (2.64/day)
Location
Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550M Aorus Elite
Cooling Custom loop (CPU+GPU, 240 & 120 rads)
Memory 32GB Kingston HyperX Fury @ DDR4-3466
Video Card(s) PowerColor RX 6700 XT Fighter
Storage ~4TB SSD + 6TB HDD
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Fractal Design Define Mini C
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis remastered at 4K
Yeah, 980Ti really was the last godlike overclocker; as the saying goes - we shall never see their like again :D
Whoa, didn't even know that they can get to 1500MHz just like 980. That's a hella nice boost there.
 
D

Deleted member 163934

Guest
I've been with AMD for years. Things breaking and not getting fixed for years in the Windows drivers is normal in AMD's case.

My HD 7750 on Windows 8.1 can only use drivers between 13.9 and 14.4 if I want WDDM 1.3. If I don't care about WDDM 1.3, then I can use 13.2 - 14.4.
13.1 and older - texture corruption (a known issue since the GCN 1.0 release, fixed in the 13.2 beta)
14.12 - 15.6 -> issues with hardware decode (which affect things that don't even use the decoder!)
15.7+ -> random texture corruption, similar to pre-13.2 (I suspect that as soon as WDDM 2.0 support was added to the drivers, they broke something on Win 8.1 specifically)
You can say what you want; I know for sure the GPU is fine. No issues at all in Linux, and Win 8.1 with drivers 13.2 - 14.4 is fine.

The funny part is that the issues I have in Win 8.1 with any driver newer than 15.7 don't happen in Win 7 with the same driver! Yes, in Win 7 I can use drivers that cause issues in Win 8.1...

I had so many issues with the HD 7750 that I moved to NVIDIA. (Drivers were not the only issue: the vBIOS is a mess on most HD 7750s sold; the core needs a higher voltage and the TDP needs to be increased to make it work properly. You see high FPS, but it just doesn't feel smooth - before I fixed it, a GT 710 at 30 fps felt smoother than the HD 7750 at 60 fps...) Don't worry, the driver issues happen with the stock BIOS too, so my BIOS edit has nothing to do with them.

Sure, you can say upgrade to Windows 10. My answer is over my dead body, at least until they fix the core issues Win 10 has (and I doubt that will ever happen - from what I see, they don't even properly test the updates they push; that's how much they care. I'd expect mess-ups from a Linux distro due to limited testing resources, not from MS, but the real world proves it's the reverse). God knows what Windows 10 is doing, because I see crazy HDD usage for no reason (yes, compared to Win 8.1 or Win 7 - Win 10 just hammers the HDD; it's pointless to even compare it to Mint MATE/Lubuntu, because Linux wins big time with the fastest boot times and the lowest HDD usage I've seen).

The AGESA mess with the latest Ryzen is no surprise to me. I've seen so much crap in various vBIOSes, and in motherboard BIOSes for AMD...

I'm not saying things are good on the other side. :p The industry has started to suck big time in the last 10 years... And it's going to get worse...
It's considerably harder (sometimes impossible) to fix things on the NVIDIA side if something is messed up in the vBIOS. (NVIDIA drivers might actually only use limited information from the vBIOS.)
 
Last edited by a moderator:
Joined
Oct 2, 2015
Messages
2,986 (0.96/day)
Location
Argentina
System Name Ciel
Processor AMD Ryzen R5 5600X
Motherboard Asus Tuf Gaming B550 Plus
Cooling ID-Cooling 224-XT Basic
Memory 2x 16GB Kingston Fury 3600MHz@3933MHz
Video Card(s) Gainward Ghost 3060 Ti 8GB + Sapphire Pulse RX 6600 8GB
Storage NVMe Kingston KC3000 2TB + NVMe Toshiba KBG40ZNT256G + HDD WD 4TB
Display(s) Gigabyte G27Q + AOC 19'
Case Cougar MX410 Mesh-G
Audio Device(s) Kingston HyperX Cloud Stinger Core 7.1 Wireless PC
Power Supply Aerocool KCAS-500W
Mouse Logitech G203
Keyboard VSG Alnilam
Software Windows 11 x64
I've been with AMD for years. Things breaking and not getting fixed for years in the Windows drivers is normal in AMD's case.

My HD 7750 on Windows 8.1 can only use drivers between 13.9 and 14.4 if I want WDDM 1.3. If I don't care about WDDM 1.3, then I can use 13.2 - 14.4.
13.1 and older - texture corruption (a known issue since the GCN 1.0 release, fixed in the 13.2 beta)
14.12 - 15.6 -> issues with hardware decode (which affect things that don't even use the decoder!)
15.7+ -> random texture corruption, similar to pre-13.2 (I suspect that as soon as WDDM 2.0 support was added to the drivers, they broke something on Win 8.1 specifically)
You can say what you want; I know for sure the GPU is fine. No issues at all in Linux, and Win 8.1 with drivers 13.2 - 14.4 is fine.

The funny part is that the issues I have in Win 8.1 with any driver newer than 15.7 don't happen in Win 7 with the same driver! Yes, in Win 7 I can use drivers that cause issues in Win 8.1...

I had so many issues with the HD 7750 that I moved to NVIDIA. (Drivers were not the only issue: the vBIOS is a mess on most HD 7750s sold; the core needs a higher voltage and the TDP needs to be increased to make it work properly. You see high FPS, but it just doesn't feel smooth - before I fixed it, a GT 710 at 30 fps felt smoother than the HD 7750 at 60 fps...) Don't worry, the driver issues happen with the stock BIOS too, so my BIOS edit has nothing to do with them.

Sure, you can say upgrade to Windows 10. My answer is over my dead body, at least until they fix the core issues Win 10 has (and I doubt that will ever happen - from what I see, they don't even properly test the updates they push; that's how much they care. I'd expect mess-ups from a Linux distro due to limited testing resources, not from MS, but the real world proves it's the reverse). God knows what Windows 10 is doing, because I see crazy HDD usage for no reason (yes, compared to Win 8.1 or Win 7 - Win 10 just hammers the HDD; it's pointless to even compare it to Mint MATE/Lubuntu, because Linux wins big time with the fastest boot times and the lowest HDD usage I've seen).

The AGESA mess with the latest Ryzen is no surprise to me. I've seen so much crap in various vBIOSes, and in motherboard BIOSes for AMD...

I'm not saying things are good on the other side. :p The industry has started to suck big time in the last 10 years... And it's going to get worse...
It's considerably harder (sometimes impossible) to fix things on the NVIDIA side if something is messed up in the vBIOS. (NVIDIA drivers might actually only use limited information from the vBIOS.)
Funny you say this - I used an HD 7750 from 2013 to 2018 and I didn't have a single issue with it: no corruption, no BIOS issues, no driver problems, never. Went from 7 to 8, to 8.1, to 10. Just use the Windows 7 drivers if you're stuck on 8.1.
I've also had green-camp cards with no problems either: a 7600GT from 2008 to 2013 (the 7750 replaced it). Its only issue was that the last drivers had missing sprites in a lot of games, but the card had been EoL for years by that point.
 
D

Deleted member 163934

Guest
Funny you say this - I used an HD 7750 from 2013 to 2018 and I didn't have a single issue with it: no corruption, no BIOS issues, no driver problems, never. Went from 7 to 8, to 8.1, to 10. Just use the Windows 7 drivers if you're stuck on 8.1.

Win 7 drivers installed on Win 8.1 have other issues.

It depends what exactly you do with it. The issues I see with the drivers can be triggered in particular ways (I can trigger the 15.7+ issue in Win 8.1 just by flipping through the menus in Dota 2 several times, or the 14.12 - 15.6 one with 6+ hardware-accelerated tabs in Chrome). It doesn't necessarily happen on the first try.
It also depends on your eyes and attention - you might simply miss the texture corruption. Plenty of people failed to see the texture corruption with the 13.1 and older drivers...
It also depends on which vBIOS you have (the version changes things a bit, for better or worse) and which VRM controller is on the card. If you don't have the uP1801 (commonly used, but not the only one), then most likely you have a proper TDP set in the vBIOS (55 W+).
Now the voltage issue... that depends on several things, like the board design (how long the path between the VRM and the core is) and the quality of the other components on the board. One card can happily work at 1.1 V while another just won't, and you have to bump it to 1.2 V. Why? Because of how much of that voltage is lost on the way to the core. On one board you can have a 15% voltage drop, so 1.1 V becomes 0.935 V, while 1.2 V would become 1.02 V (that 0.08 V can be enough to change a couple of things - 0.08 V is enough to make the transistors snappy; if you've ever undervolted a CPU, you know that even 0.025 V can change things, enough to lose a core...). On another board you might only have a 5% drop, so 1.1 V becomes 1.045 V (higher than the 1.02 V from the previous case) and you don't need to bump it to 1.2 V at all.
Every single component on your GPU can have a +/-5% variation around the value written on it. Those variations can cause issues in particular combinations.
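To put numbers on the voltage-drop point above, here's a quick sketch (same illustrative loss percentages as in the text, not measured values):

```python
# Voltage that actually arrives at the core after a fractional board loss.
def core_voltage(vrm_setpoint, loss_fraction):
    return vrm_setpoint * (1.0 - loss_fraction)

# Board with a 15% drop: 1.1 V at the VRM is not enough, 1.2 V barely is.
print(round(core_voltage(1.1, 0.15), 3))  # 0.935
print(round(core_voltage(1.2, 0.15), 3))  # 1.02
# Board with only a 5% drop: 1.1 V already lands above that 1.02 V.
print(round(core_voltage(1.1, 0.05), 3))  # 1.045
```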


You're going to say, OK, but this isn't AMD's fault. Of course it is. To fix it properly, you need to read both the voltage at the VRM output and the voltage that actually arrives at the core. Then you compare the core voltage with the minimum the core needs to run properly at that frequency: if the core voltage is too low, raise the VRM voltage until the core voltage is right; if you hit the VRM voltage limit and still can't get a proper core voltage, find the highest clock you can achieve and cap the card there; and if even that is impossible, try to display a message (try, because if there are issues even at the lowest clock, I doubt the GPU can display anything properly). The voltage limits have to be set in the vBIOS; the control has to be done by the drivers. But you don't see anything like this done in the HD 7750's case... So it's a failed design, because it ignores basic electronics (voltage drops between two points on a wire...). You're going to say it costs too much. Ha ha. You're supposed to already have the voltage reading at the core; you only need one more from the VRM controller, and the data in the vBIOS costs you nothing - you already have it from the design and test phase of the core.
(I'm assuming here that they're capable of properly calibrating two voltage sensors...)
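A rough sketch of the control loop described above (the function names and the 12.5 mV step are hypothetical stand-ins; the whole complaint is that nothing like this exists in the actual HD 7750 driver stack):

```python
def regulate(read_core_v, set_vrm_v, vmin_for_freq, clocks, vrm_limit,
             step=0.0125):
    """Raise the VRM setpoint until enough voltage survives the drop to the
    core for the requested clock; otherwise fall back to the next-lower clock.
    Returns (clock, setpoint) on success, or None if even the lowest clock
    fails (at which point the driver should try to warn the user)."""
    for clock in clocks:                      # highest clock first
        v_needed = vmin_for_freq(clock)
        v_set = v_needed
        while v_set <= vrm_limit:
            set_vrm_v(v_set)
            if read_core_v() >= v_needed:     # enough voltage at the core
                return clock, round(v_set, 4)
            v_set += step                     # bump the setpoint, re-check
    return None

# Demo: a board losing 15% between VRM and core, with the VRM capped at 1.2 V.
state = {"vrm": 0.0}
set_vrm = lambda v: state.__setitem__("vrm", v)
read_core = lambda: state["vrm"] * 0.85       # 15% drop on the way to the core
vmin = {900: 1.05, 800: 0.95}.get             # minimum stable voltage per clock

print(regulate(read_core, set_vrm, vmin, [900, 800], vrm_limit=1.2))
# 900 MHz would need ~1.24 V at the VRM, beyond the cap, so it settles on 800.
```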

Now, if you ask me, both the HD 7750 and HD 7770 should have the same vcore.
The HD 7750 uses the same chip as the HD 7770, but it's a cut-down one - only 4/5 of an HD 7770. Even when clocked the same, the HD 7770 will still be more powerful. There's no point in crippling the HD 7750 even further with silly low voltages and a stupidly low TDP.
If both the HD 7750 and HD 7770 have the same vcore, you can happily clock the HD 7750 lower and there won't be any issue.
That vcore is there to make sure the transistors work properly...
Cape Verde was designed to be 640:40:16. The lower variants (512:32:16 / 384:24:8) exist because the chip could be cut down, and the cut-down came after the design and test phases were finished. The main vcore and frequency characteristics remain those of the full Cape Verde (with the cut-down versions you might get away with a slightly lower vcore, but the difference is so small it's a waste of time; the cores are not identical, and if you always disable core 5, there's no guarantee you disabled the core that needed the lowest voltage - in some cases you may have disabled the one that needed the highest, and then you have to keep a voltage similar to the full design).

I'll do some rough math now.
HD 7770 TDP: 80 W
HD 7750 TDP: 55 W

The HD 7750 is 4/5 of an HD 7770: (4/5) * 80 = 64 > 55.
If you ask me, the HD 7750's max TDP should be at least 64 W, not 55 W. (At least, because I think it should actually be even higher than 64.)
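That rough math as a two-liner, assuming TDP scales linearly with the enabled 4/5 of the chip (a simplification - TDP doesn't scale purely with shader count):

```python
# Scale the HD 7770's 80 W TDP by the HD 7750's enabled fraction of the chip.
full_tdp = 80        # HD 7770 (full Cape Verde), watts
fraction = 4 / 5     # HD 7750 keeps 512 of 640 shaders
scaled = full_tdp * fraction
print(scaled)        # 64.0 -> above the 55 W the HD 7750 actually gets
```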

Most HD 7750s have a target TDP of ~45 W. That's close to 50% of the max TDP of an HD 7770. The target TDP is not ignored by the driver.
The only ways to achieve it are to:
1) lower the voltage - limited, because the transistors won't work properly
2) lower the clock - alone it won't do much; combined with a lower voltage you'll get more
3) reduce the load on the GPU (basically skip processing frames...) - not acceptable from my point of view, but if you do it you'll see the best results...

You're going to ask what the max TDP has to do with it. Well, the target TDP depends on the max TDP: a higher max TDP allows a higher target TDP...
 
Last edited by a moderator:
Joined
May 2, 2017
Messages
7,762 (3.08/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Win 7 drivers installed on Win 8.1 have other issues.

It depends what exactly you do with it. The issues I see with the drivers can be triggered in particular ways (I can trigger the 15.7+ issue in Win 8.1 just by flipping through the menus in Dota 2 several times, or the 14.12 - 15.6 one with 6+ hardware-accelerated tabs in Chrome). It doesn't necessarily happen on the first try.
It also depends on your eyes and attention - you might simply miss the texture corruption. Plenty of people failed to see the texture corruption with the 13.1 and older drivers...
It also depends on which vBIOS you have (the version changes things a bit, for better or worse) and which VRM controller is on the card. If you don't have the uP1801 (commonly used, but not the only one), then most likely you have a proper TDP set in the vBIOS (55 W+).
Now the voltage issue... that depends on several things, like the board design (how long the path between the VRM and the core is) and the quality of the other components on the board. One card can happily work at 1.1 V while another just won't, and you have to bump it to 1.2 V. Why? Because of how much of that voltage is lost on the way to the core. On one board you can have a 15% voltage drop, so 1.1 V becomes 0.935 V, while 1.2 V would become 1.02 V (that 0.08 V can be enough to change a couple of things - 0.08 V is enough to make the transistors snappy; if you've ever undervolted a CPU, you know that even 0.025 V can change things, enough to lose a core...). On another board you might only have a 5% drop, so 1.1 V becomes 1.045 V (higher than the 1.02 V from the previous case) and you don't need to bump it to 1.2 V at all.
Every single component on your GPU can have a +/-5% variation around the value written on it. Those variations can cause issues in particular combinations.


You're going to say, OK, but this isn't AMD's fault. Of course it is. To fix it properly, you need to read both the voltage at the VRM output and the voltage that actually arrives at the core. Then you compare the core voltage with the minimum the core needs to run properly at that frequency: if the core voltage is too low, raise the VRM voltage until the core voltage is right; if you hit the VRM voltage limit and still can't get a proper core voltage, find the highest clock you can achieve and cap the card there; and if even that is impossible, try to display a message (try, because if there are issues even at the lowest clock, I doubt the GPU can display anything properly). The voltage limits have to be set in the vBIOS; the control has to be done by the drivers. But you don't see anything like this done in the HD 7750's case... So it's a failed design, because it ignores basic electronics (voltage drops between two points on a wire...). You're going to say it costs too much. Ha ha. You're supposed to already have the voltage reading at the core; you only need one more from the VRM controller, and the data in the vBIOS costs you nothing - you already have it from the design and test phase of the core.
(I'm assuming here that they're capable of properly calibrating two voltage sensors...)

Now, if you ask me, both the HD 7750 and HD 7770 should have the same vcore.
The HD 7750 uses the same chip as the HD 7770, but it's a cut-down one - only 4/5 of an HD 7770. Even when clocked the same, the HD 7770 will still be more powerful. There's no point in crippling the HD 7750 even further with silly low voltages and a stupidly low TDP.
If both the HD 7750 and HD 7770 have the same vcore, you can happily clock the HD 7750 lower and there won't be any issue.
That vcore is there to make sure the transistors work properly...
Cape Verde was designed to be 640:40:16. The lower variants (512:32:16 / 384:24:8) exist because the chip could be cut down, and the cut-down came after the design and test phases were finished. The main vcore and frequency characteristics remain those of the full Cape Verde (with the cut-down versions you might get away with a slightly lower vcore, but the difference is so small it's a waste of time; the cores are not identical, and if you always disable core 5, there's no guarantee you disabled the core that needed the lowest voltage - in some cases you may have disabled the one that needed the highest, and then you have to keep a voltage similar to the full design).

I'll do some rough math now.
HD 7770 TDP: 80 W
HD 7750 TDP: 55 W

The HD 7750 is 4/5 of an HD 7770: (4/5) * 80 = 64 > 55.
If you ask me, the HD 7750's max TDP should be at least 64 W, not 55 W. (At least, because I think it should actually be even higher than 64.)

Most HD 7750s have a target TDP of ~45 W. That's close to 50% of the max TDP of an HD 7770. The target TDP is not ignored by the driver.
The only ways to achieve it are to:
1) lower the voltage - limited, because the transistors won't work properly
2) lower the clock - alone it won't do much; combined with a lower voltage you'll get more
3) reduce the load on the GPU (basically skip processing frames...) - not acceptable from my point of view, but if you do it you'll see the best results...

You're going to ask what the max TDP has to do with it. Well, the target TDP depends on the max TDP: a higher max TDP allows a higher target TDP...
Wow. Let's see.

1: This is way off topic. Start your own thread?

2: The 7750 launched in early 2012 and is outperformed by 8% by the integrated Vega 8 in a Ryzen 5 2200G. Yes, it's a shame that it has (apparently) had issues for a long time, but 2012-era AMD was already struggling, and the card was too old and low-end by the time they launched the Catalyst drivers for it to be a focus for their renewed driver efforts. A shame, sure, but seven and a half years later it doesn't matter much. At least they've improved in the years since.

3: You don't seem to know how chip binning works. The "chopped down" parts you speak of aren't necessarily just cut down because they needed/wanted a lower-end SKU, but often also because they couldn't reach the power/performance targets of the high-end SKU due to silicon manufacturing variance. The 7770 obviously got the chips binned for the highest clocks at a given voltage, which means that any chips that didn't reach that bin either would have needed more power for the same clocks or couldn't reach them at all. Also, which cores are disabled is generally not chosen willy-nilly, but rather selected based on which cores (if any) have manufacturing defects or the worst characteristics. I have never heard of a cut-down GPU where the same cores are disabled across the entire production run.

4: Unless these are AMD reference boards, the VRM implementation is down to the AIB partner, not AMD. And you make a lot of noise about voltage drop - have you actually measured on-board voltages with a multimeter to see if this is in fact the cause of your problems? If not, then this is pure speculation on your part, and frankly rather absurd - PCB design engineers know how voltage drop works, and given that voltage drop is measurable and should be ~constant for a given design, it is also quite easy to correct for by increasing the output voltage of the VRM.

5: This overlaps with a couple of the previous points, but needs pointing out. You say: "One gpu model can happy work with 1.1V while another can just not work ok and you have to bump it to 1.2V. Why? Because of how much you lose from that voltage on the road to core." This is not the main reason for this phenomenon. Silicon quality - and thus binning - is the chief determinant here. Unless your VRM is designed by someone completely incompetent (which, again, would place the responsibility at the AIB partner designing the board, not the GPU chip maker) any voltage drop from VRM to die is well-known and adjusted for.


Now, it's a shame that you had such issues with your GPU (I never owned anything from that generation, so I can't speak to any design flaws there, but I had an HD 6950 (pre-GCN!) and was very happy with it until 2015), but your explanations of how and why these issues happened largely don't seem to match reality - unless, that is, you've done in-depth testing to verify your claims. If not, well, you likely got a bit of a lemon in terms of power/clock scaling, possibly with a poor board/cooler design from the AIB partner, and had that compounded by driver issues. I understand that this sucks, and AMD is by no means blameless (I'd say the first point is nobody's fault, as that's how silicon works; the second is down to the AIB partner; the third is solely AMD's fault), but you seem a bit myopic here.
 
D

Deleted member 163934

Guest
1: This is way off topic. Start your own thread?

You are free to report me. The staff here is free to warn me, ban me, or permaban me. In the case of a ban or permaban, I request that all information about me (posts, IP addresses, and more) stored on this site be wiped. I can tell you one thing: I just don't care if I'm banned from a forum or a game. I don't see clear evidence that anyone on the forums or in the games I play actually cares about me, so why should I care? I prefer to follow the "an eye for an eye" principle until someone proves it wrong, and that person will be added to my exception list :) .
Of course it's easier to blame the subject and not the world. But you know, the world can be to blame - after all, ignorance is what you find almost everywhere.
There are a couple of forums that of course lack the option to delete my account, and they refuse to delete it...

From my point of view I'm on topic. I'm talking about AMD's overall behaviour: abandoning things. It's not limited to the Fury X - that's the problem.

3: You don't seem to know how chip binning works. The "chopped down" parts you speak of aren't necessarily just cut down because they needed/wanted a lower-end SKU, but often also because they couldn't reach the power/performance targets of the high-end SKU due to silicon manufacturing variance. The 7770 obviously got the chips binned for the highest clocks at a given voltage, which means that any chips that didn't reach that bin either would have needed more power for the same clocks or couldn't reach them at all. Also, which cores are disabled is generally not chosen willy-nilly, but rather selected based on which cores (if any) have manufacturing defects or the worst characteristics. I have never heard of a cut-down GPU where the same cores are disabled across the entire production run.

You are assuming this is a binning problem. You're ignoring another reason that you yourself wrote: they just needed cut-down chips, and when there aren't enough partial failures from the process, you cut down fully working chips (the HD 7750 / HD 7730 are not the only examples; AMD did it with CPUs too - see the Sempron 140/145). I'm not saying they always did it, just whenever demand was higher than the supply of partial manufacturing failures.
The reasons to cut down a chip are not limited to the partial production failures that are normal in any process. You are free to provide evidence that only partially failed chips get cut down, because I can't find a single piece of it.
AMD is not the only one that did it; Nvidia did it too.
To actually judge the situation, we'd need the following data: the partial production failure rate, the fraction of partial failures that can be reused, the number of full chips, and the number of partial chips used. I strongly believe the number of full chips is no more than 1.5x the number of partial chips used, and that the partial failure rate is under 20% - in that situation, you have good chips being cut down because they're needed. You won't get that information. Even AMD reps won't be able to tell you those numbers, because they don't have access to that data.
(That 1.5x translates like this: for every 3 full chips you have 2 cut-down chips, so 60% full and 40% cut down.
If the partial failure rate is 20%, then you have (80% - unusable) full and 20% cut down; I doubt the unusable share is 50%, because that's when you'd reach balance.)
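The arithmetic behind that 1.5x claim, spelled out (these are the assumed rates from the paragraph above, not real yield data):

```python
# The 60/40 product mix implied by "3 full chips per 2 cut-down chips".
full, cut = 3, 2
total = full + cut
print(full / total, cut / total)   # 0.6 0.4

# If only 20% of dies are partial failures, a 40% cut-down mix would force
# good dies to be cut down to fill the gap.
partial_failure_rate = 0.20
cut_down_share = 0.40
good_dies_cut_down = cut_down_share - partial_failure_rate
print(good_dies_cut_down)          # 0.2
```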

If a manufacturing process has a 50% partial/complete failure rate, then something is really wrong there...

Not all silicon reacts the same way. You can easily find chips that refuse to run at the lowest clock at 1 V, yet happily run at all clocks at 1.1 V...

We can talk as much as you want here. I'd keep the cut-down version running at the same voltage as its big brother, for stability reasons. You are free to believe and do what you want.

You have voltage drops from the PSU to the motherboard, from the motherboard to the PCIe slot, from the GPU's PCIe connector to the VRM, from the VRM to the core pins, and inside the core itself (as I already wrote, if you undervolt a CPU, at some point you start losing one core - it's no longer stable while the others are).

It's not just one AMD GPU; it's all four of mine, to be exact. HD 4350: a totally broken PowerPlay table in the BIOS (it's corrupted) - you can even find this broken vBIOS in TechPowerUp's vBIOS collection... HD 4650: one day it started displaying artifacts and I never figured out what broke; the core and the RAM are fine, because all of them were reused on similar HD 4650 boards and work just fine (and that HD 4650 had RAM rated at 500 MHz while the BIOS only set it to 400 MHz for 3D - the fun part is that the boot RAM clock was 500 MHz; if there were issues at 500 MHz, wouldn't it be normal to set 400 everywhere?!). HD 5670: wrong UVD max clocks (good luck playing a game while watching something hardware-decoded - your GPU runs at considerably lower clocks and the performance sucks); it could have been fixed, but AMD's answer was "upgrade to HD 6xxx"; the easy solution would have been a vBIOS update, never done, so the actual fix is a vBIOS edit to correct the wrong clocks. HD 7750: the problems listed above.
The K10 CPU family: why would AMD release a microcode update in the last 8 years - it's not like there's any major bug affecting this particular family. The K8 Brisbane family: show me a microcode update for this "bug"-free CPU.

You can say what you want about the board manufacturers. In all those years, AMD failed to notice the mess the manufacturers make? And if they noticed, why didn't they react? (You can fix a lot of things in drivers if you want to...)

If you think I have something against AMD, you're wrong. This is simply what I expect when buying an AMD product, based on past experience.
It's kind of hard to blame me for staying away from AMD...

P.S.: A multimeter is useless here because it displays some sort of average. You need an oscilloscope, because you actually care about the min and max values, not the average (too low a min = instability, too high a max = you burn the chip). To exaggerate on purpose: it's pointless to have a 1 V average when the min is 0.5 V and the max is 1.5 V - it will look fine on the multimeter, because you'll see ~1 V.
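The multimeter-vs-oscilloscope point, with the deliberately exaggerated numbers from above:

```python
# A rail bouncing between 0.5 V and 1.5 V still averages out to ~1 V.
samples = [0.5, 1.5, 0.5, 1.5, 0.5, 1.5]

mean = sum(samples) / len(samples)  # roughly what a multimeter reports
print(mean)                         # 1.0 -> looks fine on the meter
print(min(samples), max(samples))   # 0.5 1.5 -> what actually matters
```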
 
Last edited by a moderator:
Joined
May 2, 2017
Messages
7,762 (3.08/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
You are free to report me. The staff here is free to warn me, ban me, or permaban me. In the case of a ban or permaban, I request that all information about me (posts, IP addresses, and more) stored on this site be wiped. I can tell you one thing: I just don't care if I'm banned from a forum or a game. I don't see clear evidence that anyone on the forums or in the games I play actually cares about me, so why should I care? I prefer to follow the "an eye for an eye" principle until someone proves it wrong, and that person will be added to my exception list :) .
Of course it's easier to blame the subject and not the world. But you know, the world can be to blame - after all, ignorance is what you find almost everywhere.
There are a couple of forums that of course lack the option to delete my account, and they refuse to delete it...

From my point of view I'm on topic. I'm talking about AMD's overall behaviour: abandoning things. It's not limited to the Fury X - that's the problem.
Whoa, Nelly. Talk about overblown response - I'm kind of surprised you didn't start talking about someone coming to your house and smashing your PC. I just said that you're quite far OT here. Chill out. And while I see your point, there is a dramatic difference between "abandoning" (as in not actively pursuing driver optimizations for) a 4-year-old GPU compared to a 7-year-old GPU. If you don't see that, then you need to adjust your expectation of the useful life of gaming hardware.

You are assuming this is a binning problem. You ignore another reason that you yourself wrote: they just needed cut-down chips, and when you don't have enough partial failures from the process, you cut down fully working chips (the HD 7750/HD 7730 are not the only examples; AMD did it with CPUs too - see the Sempron 140/145). I'm not saying they always did it, just that demand was sometimes higher than the supply of partially failed dies.
The reasons to cut down a chip are not limited to the partial production failures that are normal in any process. You are free to provide evidence that only partially failed dies get cut down, because I have failed to find a single piece of it.
AMD is not the only one that did this. Nvidia did it too.
To actually describe the situation we would need the following data: the rate of partial production failures, the share of those failures that can be reused, the number of full chips sold, and the number of partial chips used. I strongly believe the number of full chips is no more than 1.5x the number of partial chips used, and that the partial-failure rate is under 20%; in that situation, good chips are being cut down because they are needed. You won't get such information. Even AMD reps couldn't really tell you these numbers, because they don't have access to such data.
(That 1.5x ratio translates like this: for every 3 full chips you have 2 cut-down chips, so 60% full and 40% cut down.
If the partial-failure rate is 20%, then you have (80% minus the unusable dies) full and 20% cut down; I doubt the unusable share is 50%, because that's the point where the numbers would balance out.)
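The arithmetic above can be put in a toy die-harvesting model - note that the 20% failure rate, salvage share and demand figures are this poster's assumptions for the argument, not published yield data:

```python
# Toy die-harvesting model: given a batch of dies, how many cut-down SKUs
# must come from fully working silicon? All inputs are illustrative
# assumptions, not real yield figures.

def cut_down_from_good(total_dies, partial_failure_rate,
                       reusable_share, cut_down_demand):
    """Return how many fully working dies must be sold as cut-down parts."""
    # Dies with partial failures that can still be salvaged as cut-down SKUs
    harvested = total_dies * partial_failure_rate * reusable_share
    # Demand for cut-down SKUs not covered by harvested dies must be met
    # by fusing off good silicon
    return max(0.0, cut_down_demand - harvested)

# Example: 1000 dies, 20% partial failures, half of those salvageable,
# and the market wants 400 cut-down cards.
need = cut_down_from_good(1000, 0.20, 0.5, 400)
print(need)  # 300.0 - so 300 good dies get cut down to meet demand
```

With these made-up numbers, salvaged dies cover only a quarter of the cut-down demand, which is the poster's point: once demand outruns defect rates, perfectly good chips get fused off.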
I never said binning is only about defects. In fact, I explicitly said this:
The "chopped down" parts you speak of aren't necessarily just cut down because they needed/wanted a lower-end SKU, but often also because they couldn't reach the power/performance targets of the high-end SKU due to silicon manufacturing variance.
Which has nothing to do with defect rates whatsoever, but with the inherent variability in transistor performance in lithographically produced ICs.

Beyond that: of course chips are cut down to fulfill the need for lower-end SKUs if necessary. Ideally the chipmakers would like to avoid this (as they're effectively giving away the opportunity to sell the chip at a higher price), but at times demand for certain parts necessitates it. Every chipmaker in the world producing GPUs or similar chips has done this. The thing is, this is never the majority of chips, as voltage/clock variance is usually enough to exclude a significant portion of chips from the highest bin. In other words: most cut-down parts - even with all components working - couldn't actually be sold as the higher-tier card, as they would either run too hot at the rated clock or not run at all. On the other hand, there are usually rather wide margins left in the binning process - the more fine-grained you go, the more expensive and time-consuming binning is, after all - which is why overclocking can sometimes be very rewarding. But again: nothing beyond stock is guaranteed, and there's a reason for that.

If a manufacturing process has a 50% partial/complete failure rate, then something is really wrong there...
Same straw man as above.
Not all silicon reacts in the same way. You can easily find chips that refuse to run at the lowest clock and 1V while happily running at all clocks and 1.1V...
So now ... you're using what I said as an argument against what I said?

We can talk as much as you want here. I'd keep the cut-down version running at the same voltage as its big brother for stability reasons. You are free to believe and do what you want.
You're welcome to do whatever you want with your hardware, but you seem to assume that there is no such thing as clock/voltage target binning despite what you said literally one sentence before this. There is. In other words, at the same clocks there is zero guarantee that the lower-end part will be stable at the same voltage as the higher-end part, and there is a significant chance that it will require higher voltages. The exception from this is at lower clocks where the window of stability is wider.

You have voltage drops from the PSU to the motherboard, from the motherboard to the PCIe slot, from the GPU's PCIe connector to the VRM, from the VRM to the core pins, and inside the core itself (I already wrote that if you undervolt a CPU, at some point you start to lose one core - it's no longer stable while the others are).
Yes, and? Is it AMD's fault that you have voltage drops in your system outside of your GPU? Remember, your argument was that this was poor engineering on AMD's part, as if they couldn't possibly have accounted for voltage drops in their GPU's power supply. Also, if you have voltage drops that significant within your PC, you should get a better PSU. Most good-quality PSUs have voltage sense wires attached to the ends of the 24-pin cable to compensate for any voltage drop across it.

Beyond that, you don't seem to know how VRMs work. Their output is only partly dependent on input voltage - they accept a range of input voltages, and with the exception of rapid fluctuations of the input, they will produce the correct output voltage regardless of what the input voltage is, as long as it is within spec. A VRM fed a stable 11.8V input and one fed a stable 12.2V input will both supply 1.1V out if that's what the VRM controller tells them to do.
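The steady-state behaviour being described can be sketched with an idealized buck-converter model (lossless, no transients - the 1.1V target and 11.8-12.2V inputs are just example numbers):

```python
# Idealized buck regulator: the controller picks the duty cycle D so that
# Vout = D * Vin hits the target, which is why Vout stays constant across
# a range of input voltages. Real VRMs do this with closed-loop feedback;
# this shows only the steady-state relationship.

def buck_duty_cycle(v_in, v_target):
    """Duty cycle an ideal buck converter needs to output v_target."""
    return v_target / v_in

def buck_output(v_in, duty):
    """Steady-state output voltage of an ideal buck converter."""
    return v_in * duty

for v_in in (11.8, 12.0, 12.2):
    d = buck_duty_cycle(v_in, 1.1)
    print(f"Vin={v_in}V -> D={d:.4f} -> Vout={buck_output(v_in, d):.2f}V")
# Every line prints Vout=1.10V: the regulated output is the same
# for each in-spec input voltage.
```

The duty cycle simply shifts to absorb the input variation, which is the sense in which the VRM's output is decoupled from modest changes on the 12V rail.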

It's not only one AMD GPU. It's all of them - four, to be exact. The HD 4350: a totally broken (corrupted) PowerPlay table in the BIOS - you can even find this broken vBIOS in TechPowerUp's vBIOS collection. The HD 4650: one day it started to display artifacts and I never figured out what broke; the core and the RAM are fine, because all of them were reused on similar HD 4650 boards and work just fine (and the HD 4650 had RAM rated at 500MHz while the BIOS only set it to 400MHz for 3D - the fun part is that the boot RAM clock was 500MHz; if there were issues running at 500MHz, wouldn't it be normal to set it to 400MHz everywhere?!?). The HD 5670: wrong UVD max clocks (good luck playing a game while watching something that uses hardware decode - your GPU runs at considerably lower clocks and the performance sucks). It could have been fixed; AMD's answer was "upgrade to an HD 6xxx". The easy solution would have been a vBIOS update, never done; my solution was a vBIOS edit to fix the wrong clocks. And the HD 7750: the problems already listed.
The K10 CPU family: should AMD have released a microcode update in the last 8 years? It's not like there's any major bug affecting that particular family. The K8 Brisbane family: show me a single microcode update for this "bug-free" CPU.

You can say what you want about the manufacturers. In all those years, AMD failed to see the mess the manufacturers make? And if they saw it, why didn't they really react? (You can fix a lot of things in drivers if you want to...)
Sorry, but you're asking for microcode updates for CPU architectures released in 2003 and 2007? Not even Intel does that (they're skipping a lot of security patches even for Nehalem, which is far newer than both K10 and K8). How would you expect a company a fraction of their size to have the resources to do this, especially for products that have been EOL for a decade or more? And why would they want to?

As for your GPU experiences, I kind of feel bad for you, as you've obviously had some bad luck. The first sounds like it really should have been RMA'd - if the powerplay tables borked themselves, with no outside intervention, that's a warranty issue - while I can't quite understand what you mean with the second. Did you desolder the RAM and GPU package from the card and resolder them on other cards to test? If not, how on earth can you say that "they are fine?" Things break. Again: should've RMA'd it. The third is a bit odd, and pretty crap that they wouldn't fix it, but in those days AMD's driver support wasn't exactly stellar. Not surprised, sadly. But again, things have thankfully improved. And, on the other hand, I've had no significant issues with my HD 4850s (Crossfire, noisy as hell with the stock cooler, better when I replaced them with Arctic coolers), my HD 6950, or my Fury X. I might have been lucky, you might have been unlucky, or the truth might be somewhere in between. Nonetheless, things are a lot better these days than when AMD was at risk of going bankrupt.

P.S.: a multimeter is useless here because it displays some sort of average. You need an oscilloscope, because you actually care about the min and max values, not the average (too low a min = instability, too high a max = you burn the chip). To exaggerate on purpose: it's pointless to have a 1V average when the min is 0.5V and the max is 1.5V, yet it will look fine on the multimeter because you'll see ~1V.
So rather than respond to my question of whether you actually measured voltage drop between the VRM and your GPU die, you pull out this ... is it an excuse? First off, if you have that much ripple in the DC power in your PC, you desperately need a new PSU. Secondly, yes, multimeters have relatively low resolution and slow polling compared to an oscilloscope, but that's irrelevant when it comes to your story about voltage drop between a GPU's VRM and die. If there's actually significant voltage drop, a good multimeter would be able to measure it - and if a good multimeter can't measure it, it's not significant. What you're talking about here is voltage ripple, which needs to be kept in check, but is mainly down to the quality of your PSU. The ATX spec requires the 12V rail to have no more than 120mV of ripple, while most good-quality PSUs manage less than half of that (down to 10-20mV on the best units). You're right that a multimeter wouldn't be able to measure this (I've never seen a multimeter with that kind of resolution, at least), but then again the GPU must be ATX-compliant just like the rest of the PC, so it needs to handle ripple up to 120mV. And besides, the VRM filters out quite a bit of ripple as long as it's of somewhat decent quality. Even so, ripple and voltage drop aren't really related. If there is significant voltage drop, the "average" voltage measured by a low-resolution multimeter will also drop - again, unless the drop is so small that it's irrelevant.
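The ripple-versus-drop distinction can be sketched numerically with a synthetic waveform (the 20mV ripple and 400mV drop figures are made up for the example):

```python
import math

# Simulate a 12V rail with small ripple versus one with a genuine voltage
# drop under load. A multimeter effectively reports the mean; a scope
# shows the min/max envelope.

def rail(samples, v_mean, ripple_pp, freq_hz=100e3, fs=10e6):
    """Synthetic DC rail: mean voltage plus sinusoidal ripple (peak-to-peak)."""
    return [v_mean + (ripple_pp / 2) * math.sin(2 * math.pi * freq_hz * n / fs)
            for n in range(samples)]

clean = rail(1000, 12.0, 0.020)   # 20mV ripple, no drop
droopy = rail(1000, 11.6, 0.020)  # same ripple, but 400mV of drop under load

for name, v in (("clean", clean), ("droopy", droopy)):
    mean = sum(v) / len(v)
    print(f"{name}: DMM~{mean:.2f}V  scope min={min(v):.3f}V max={max(v):.3f}V")
# Ripple barely moves the min/max around the mean, but a real voltage
# drop shifts the mean itself - so a multimeter *does* catch any drop
# large enough to matter, even though it hides the ripple.
```

That's the argument in miniature: the scope is needed for ripple, not for drop.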
 
Joined
May 30, 2015
Messages
1,865 (0.58/day)
Location
Seattle, WA
Except the AIBs want or need to slap on a 3-slot heatsink, so the PCB might as well be humongous.
Or the heatsink is twice the length of the PCB - about what Sapphire's R9 Fury was:


AMD's reference design for the R9 Fury was a triple-fan, dual-slot cooler. AIBs just followed (or adopted, in XFX's case) AMD's spec.
 

Chloefile

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
10,879 (2.64/day)
Location
Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X
Motherboard Gigabyte B550M Aorus Elite
Cooling Custom loop (CPU+GPU, 240 & 120 rads)
Memory 32GB Kingston HyperX Fury @ DDR4-3466
Video Card(s) PowerColor RX 6700 XT Fighter
Storage ~4TB SSD + 6TB HDD
Display(s) Acer 27" 4K120 IPS + Lenovo 32" 4K60 IPS
Case Fractal Design Define Mini C
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis remastered at 4K
AMD's reference design for the R9 Fury was a triple-fan, dual-slot cooler. AIBs just followed (or adopted, in XFX's case) AMD's spec.
I know, saw that from the rare/unreleased GPU thread.
 
Joined
May 18, 2009
Messages
2,733 (0.50/day)
Location
MN
System Name Personal / HTPC
Processor Ryzen 5900x / i5-4460
Motherboard Asrock x570 Phantom Gaming 4 /ASRock Z87 Extreme4
Cooling Corsair H100i / stock HSF
Memory 32GB DDR4 3200 / 8GB DDR3 1600
Video Card(s) EVGA XC3 Ultra RTX 3080Ti / EVGA RTX 3060 XC
Storage 500GB Samsung Pro 970, 250 GB SSD, 1TB & 500GB Western Digital / 2x 4TB & 1x 8TB WD Red
Display(s) Dell - S3220DGF 32" LED Curved QHD FreeSync Monitor / 50" LCD TV
Case CoolerMaster HAF XB Evo / CM HAF XB Evo
Power Supply 850W SeaSonic X Series / 750W SeaSonic X Series
Mouse Logitech G502
Keyboard Black Microsoft Natural Elite Keyboard
Software Windows 10 Pro 64 / Windows 10 Home 64
I still like my 980Ti - it doesn't disappoint at all. Glad I went with this card when it came out - I've had her for just over 4 years and she's still running strong!

I can run any game I play on high at 1920x1080, or for a game I'd rather enjoy at 5760x1080, a mix of medium/high settings.

Still a helluva card, if you ask me. Got the clocks cranked - she boosts to around 1425MHz and the memory's OCed an extra 250MHz.
 
Joined
Nov 22, 2018
Messages
567 (0.29/day)
Location
PL, Krk (JPN, Tokyo)
System Name Nilin
Processor Ryzen 9 5800x
Motherboard Asus Rog Crosshair VIII Hero Wifi
Cooling Lian Li Galahad AIO 360
Memory G. Skill TridentZ Neo 32gb 3600Mhz CL16-16-16-36
Video Card(s) Asus TUF RTX 3080
Storage Samsung Evo 960 250gb (System), Samsung Evo 860 500gb (Misc), Samsung 990 Pro 1Tb (Games)
Display(s) LG 27" UHD IPS, LG Ultragear 27" WQHD Nano IPS
Case InWin 303 (7x Fractal Prisma 120mm)
Power Supply Tt Toughpower Grand RGB 850W
Mouse Razer Viper Ultimate
Keyboard SteelSeries Apex 3
Benchmark Scores https://www.3dmark.com/fs/21022952 (old system)
I just popped in my HD 7970 with its glorious 3GB of VRAM, and I'm pleasantly surprised.
It's marketing that keeps pushing us towards more.
Fury is safe ;)
Actually, if I ever found a used R9 290X Vapor-X or that hybrid Fury X, I'd buy the hell out of it...
 
D

Deleted member 163934

Guest
Sorry, but you're asking for microcode updates for CPU architectures released in 2003 and 2007? Not even Intel does that (they're skipping a lot of security patches even for Nehalem, which is far newer than both K10 and K8). How would you expect a company a fraction of their size to have the resources to do this, especially for products that have been EOL for a decade or more? And why would they want to?

As for your GPU experiences, I kind of feel bad for you, as you've obviously had some bad luck. The first sounds like it really should have been RMA'd - if the powerplay tables borked themselves, with no outside intervention, that's a warranty issue - while I can't quite understand what you mean with the second. Did you desolder the RAM and GPU package from the card and resolder them on other cards to test? If not, how on earth can you say that "they are fine?" Things break. Again: should've RMA'd it. The third is a bit odd, and pretty crap that they wouldn't fix it, but in those days AMD's driver support wasn't exactly stellar. Not surprised, sadly. But again, things have thankfully improved. And, on the other hand, I've had no significant issues with my HD 4850s (Crossfire, noisy as hell with the stock cooler, better when I replaced them with Arctic coolers), my HD 6950, or my Fury X. I might have been lucky, you might have been unlucky, or the truth might be somewhere in between. Nonetheless, things are a lot better these days than when AMD was at risk of going bankrupt.

Brisbane has had 0 microcode updates. I never saw one. Either this is a perfect CPU (doubt it), they released a CPU knowing it had an unsolvable microcode issue (worse than most things Intel has done), or they just didn't care.

Now, the Athlon X4 640 that I own was released around 10 May 2010. The latest microcode for it was released in March 2010. I can say this CPU saw no love after release. :p

Nobody asked for a microcode release for such old CPUs in 2019. But when you see that one received zero updates and the other none after it was pushed onto the market, you start to wonder how much they care about the products they sell.

The HD 4350 has a messed-up PowerPlay table; the only good thing in it is the boot clocks. For some reason only those were used (some older drivers - 9.x and early 10.x - actually ignore the messed-up structure and read other parts from it; 11.x just reads the boot clock and uses it for everything). A simple RMA wasn't going to fix it: this is not the result of a bad flash during manufacturing - this particular model was sold with a bad PowerPlay table (I asked the manufacturer to send me a vBIOS for it; they sent me exactly the same one I already had flashed). Also, explaining to the shop that you know the vBIOS is a mess wasn't really going to work. It was going to pass all their tests - that's the hilarious part.
How or why the tool used to compile the vBIOS didn't notice that the PowerPlay table is wrong, I have no clue.
Sure, you can say it was ATI's fault. But when you buy something, you don't buy only the good parts; you buy the bad parts too. :D (If you could buy only the good parts, I would have been the first to buy a wife :D .)

Sure, things have changed, but I still read about so many AMD issues that I don't really think things have changed enough for me to consider them again. :)

Regarding the HD 7750: well, it took me a couple of years to "fix it", and I'm still very limited in the drivers I can use with it on the Windows side. Also, I didn't have a proper reason to RMA it. I'm not going to make them accept an RMA just because I say it doesn't feel as smooth or snappy as an NVIDIA GPU...

Regarding the first part: I actually don't care anymore, to be honest, and I'm not joking. I've been banned on some game forums just because I posted videos of cheaters who weren't banned. They decided that banning me on the forum was better than logging into the game and banning the cheaters. That's how I ended up not caring whether I get banned on a forum or in a game; I just move on, simple as that. (When I actually care, I'm the one who ends up hurt, so to protect myself I had to learn to care as little as possible; when I don't understand grey and only see white or black, there is just no other way to survive in this world.)
And regarding games: I reported a particularly nasty bug abuse in a game. They did nothing for 6 months. I decided to force them to fix it and basically started abusing it myself. They banned me but still didn't fix the bug. When I'm in the mood to troll them, I make a new account and abuse it again, because I don't need to level up in any way to do it. They just don't want to fix it, simple as that. And I show up from time to time, wasting 15 minutes just to remind them. :D At some point they even made a topic about me and what I do (the fun part: they never wrote that I had reported the bug and only started abusing it after I saw they did nothing for 6 months). They wrote some silly things about how I do it, when in fact I just jump and move forward in their game (the fact that this is how you can climb walls you shouldn't be able to climb is not my fault)... What exactly am I banned for? Their incompetence in fixing a bug?!? They could even disable jumping; it wouldn't change anything in that game.
Yes, I agree that after you report a bug and they do nothing for 6 months, you can happily abuse it, because they clearly don't care, and abusing it is the only way to force them to act. (Trust me, if I find a bug in a router model that lets me take control of it, report it, and watch them do nothing for 6 months, I'll build myself a botnet from that router model only and set it to flood that manufacturer - "Fix the bugs in model xyz you made!" will be included in the flood packets so they know why it's happening. Maybe they'll understand when the routers they made and sold are used to take down their own websites... because clearly they need to see what can happen before they understand the problem. Of course they can start legal action against me; I'll involve the press and plead not guilty, because none of this would have happened in the first place if they had released a fix for the bug I reported. By doing nothing they allowed it to happen, so if anyone is guilty it's them; any losses are caused by them and they are fully responsible. It doesn't even matter if I win or lose - they lose either way.)

My expectations regarding AMD at the moment are:
1) having to mod the vBIOS to get it right
2) issues with the drivers and a limited set of drivers I can use
3) big issues at release
4) big issues with the first generation

I can bash NVIDIA too, don't worry. Expectations with NVIDIA:
1) if something is wrong in the vBIOS, pray that it's big enough to justify an RMA, because otherwise God have mercy - I won't easily figure out how to fix it
2) always worrying that some driver will get smoke out of the GPU (they did it once)
3) higher prices for the performance offered

Intel:
1) cool design bugs where you lose performance when you update the microcode
2) no real performance improvement in the last many years
3) keeping prices high
Intel, indirectly:
1) cool that you release microcodes; not cool that the manufacturers don't update the BIOS; not cool that there is no nice way to update the microcode like there is with the ME
2) BIOS locks that don't even let you flash a modded BIOS (mostly for the microcodes), so you need either some weird bugs or some weird ways to unlock it (assuming you figure out the addresses that need to be changed, because they are not the same...)

We kinda need at least a third competitor in the CPU space and a fourth in the GPU space. Sadly, I don't see it happening.

L.E.: Regarding the soldering part, yes. Some people come to me with problems with their PCs. They are all warned that I'm not responsible for any damage I do to their PC/laptop (if they don't like it, they are free to go to someone else and pay for the service). I prefer to replace the parts that I believe are broken rather than try to repair them (I'll come back to this). I also don't charge anything for my work (if parts are replaced, either the owner buys them or, if the owner agrees, I order the stuff and he gives me the money for it); I just do it because I want to and I like doing it. Usually the parts that I believe are broken stay with me (because the owner doesn't want them back). So I've ended up with various broken things.
From time to time I'm in the mood (read: I feel my hands are steady enough and my eyes are in good shape) to try to "fix" them. The best I can do at an acceptable level is a reflow. That didn't stop me from using the reflow procedure to actually try to solder a GPU chip. Usually when I try such things I get smoke as the result, but I've had some working results (I think 3). I'd never use such a "working" result for more than testing, because the way those chips end up soldered is a joke and won't hold for more than a few hours. The stuff doesn't work anyway, so for me it's between doing experiments or just trashing it; I prefer to experiment and, if it fails (usually), trash it. :D The stuff is broken and free, so the costs are a bit of the electricity bill and my time.
 
Last edited by a moderator:
Joined
Sep 17, 2014
Messages
20,782 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Wow. The stories and adventures in this topic :D Great reads.
 
Joined
Mar 18, 2008
Messages
5,717 (0.98/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
All I hear is complaining, and no one trying to find solutions for themselves.
 
Joined
Mar 18, 2008
Messages
5,717 (0.98/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
All I hear is complaining, and no one trying to find solutions for themselves.

Dude, I tried to be vocal. I got either silence or humiliation, mostly from other AMD users. I was constantly in contact with the Fury X BIOS modding community working on HBM timings after AMD blocked HBM overclocking. In the end it just wasn't going anywhere, so I quit using the card. That was a period when I still thought with emotion instead of my brain when buying hardware.
 
Joined
Mar 11, 2019
Messages
61 (0.03/day)
Location
Germany
System Name New Horyzen
Processor Ryzen 7 1700X
Motherboard ASRock Fatal1ty X370 Gaming K4
Cooling Noctua U-12S
Memory 2x 8GB Corsair Vengance LPX 3200MHz
Video Card(s) Sapphire 5700XT
Storage A lot
Case Modded Corsair Carbide 300R
Audio Device(s) ESI Maya44 eX, Focusrite Scarlett 2i4 2nd, Samson Meteor, Mixer and Headphone Amp
Power Supply SeaSonic M12II 750W
Mouse Logitech G502
Keyboard HyperX Alloy & Logitech G13
Software All of it
When I switched from my R9 Fury to the 5700 XT, I noticed a reduction in VRAM usage.
Games that were previously maxing out the 4GB were now only reaching 2.8GB (both at 2560x1080).

That said, the R9 Fury(X) was a nice proof of concept for HBM.
 
Joined
Nov 24, 2017
Messages
853 (0.37/day)
Location
Asia
Processor Intel Core i5 4590
Motherboard Gigabyte Z97x Gaming 3
Cooling Intel Stock Cooler
Memory 8GiB(2x4GiB) DDR3-1600 [800MHz]
Video Card(s) XFX RX 560D 4GiB
Storage Transcend SSD370S 128GB; Toshiba DT01ACA100 1TB HDD
Display(s) Samsung S20D300 20" 768p TN
Case Cooler Master MasterBox E501L
Audio Device(s) Realtek ALC1150
Power Supply Corsair VS450
Mouse A4Tech N-70FX
Software Windows 10 Pro
Benchmark Scores BaseMark GPU : 250 Point in HD 4600

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Dude, I tried to be vocal. I got either silence or humiliation, mostly from other AMD users. I was constantly in contact with the Fury X BIOS modding community working on HBM timings after AMD blocked HBM overclocking. In the end it just wasn't going anywhere, so I quit using the card. That was a period when I still thought with emotion instead of my brain when buying hardware.

Did you sell it off?
 
Joined
Mar 18, 2008
Messages
5,717 (0.98/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Did you sell it off?

handed it down to family.

When I switched from my R9 Fury to the 5700 XT, I noticed a reduction in VRAM usage.
Games that were previously maxing out the 4GB were now only reaching 2.8GB (both at 2560x1080).

That said, the R9 Fury(X) was a nice proof of concept for HBM.

Yeah, the new VRAM compression at work. I'm surprised you stayed on for Navi after Fury.
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
handed it down to family.



Yeah new vram compression at work. I am surprised you stayed on with Navi after Fury.

I probably would have taken it at a fair price to tool around with, lol.
 