
AMD Readies Radeon HD 7970 GHz Edition

Do you think HD 7970 GHz Edition can make HD 7970 attractive again?

  • Yes

    Votes: 12 14.3%
  • No

    Votes: 25 29.8%
  • For me it never lost attractiveness

    Votes: 47 56.0%

  • Total voters
    84

T4C Fantasy

CPU & GPU DB Maintainer
Staff member
Joined
May 7, 2012
Messages
2,561 (0.59/day)
Location
Rhode Island
So... in two years, what? The X970 at $700 and the X950 at $600?
The way I understand GPU pricing is this:
4 ranges
Dual-GPU: >$600
High-end: $300-500 (4870, 5850, 5870, 6970, GTX 570, GTX 260, GTX 280, GTX 480, GTX 470, X850, X1900 XT, X1800 XT, 7800 GTX, 6800 Ultra, 9800 XT, 8800 GTX, 8800 GTS)
Midrange: $180-250 (3850, 3870, 4850, 5770, 6850, 6870, GTX 560 Ti, GTX 560, GTX 460, 6800 GS, 5700 Ultra, X700, 9600 XT, 7600 GT, X1900 GTO, 8800 GT, 9500 Pro)
Low-end: $80-150 (...)

Right now there is a gigantic hole in the $150-250 range, which for most people is the sweet spot ($200, to be exact). The 7870 should have started at $300 at most, and that is still expensive, but understandable for a new product, with inflation and all...
...and the 7950 at $350. I have no doubt they will go down in price once NVIDIA gets its act together and has a complete lineup and a good supply of new-generation chips, but I very much doubt the 7950 will ever drop to $260 like the 6950 2GB did, because its starting price is so sky-high. Just like the 7870 will never be as low as the $155 HD 6870.

Intel CPUs hardly ever depreciate in price; if Newegg still had Pentium 4 EEs in stock from 2004, they would easily still cost $999.
 
Joined
Feb 13, 2012
Messages
522 (0.12/day)
The article says no such thing about the GPU being a revision. All it says is that as the process is refined there is less voltage leakage and a bit more performance headroom (i.e., the standard-deviation curve is moving to higher frequencies). If you're expecting Tahiti XTX to be a foundry re-spin, you're in a waking dream.
Less voltage leakage means better power consumption, i.e., higher clocks at the same voltage.
Whether it is revised or not, it will be a better and more efficient chip, and 1250 MHz capability is a good 70-80 MHz higher than the best overclocked Tahitis now.
GK104 is clocked at 1006 MHz, and its dynamic clock takes it to around 1110 MHz (http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4). Too close to its limit, I'd say, especially noting that Tahiti at 925 MHz is only 6% slower on average (according to Wizzard's review) than GK104 at 1006-1110 MHz.

So your idea of a comparison is to take the highest-efficiency mainstream (and lower) card and measure it against a card a considerable step up in market segment? What next? Comparing the power consumption of the GTX 680 against that of the HD 6450? The acoustics of the GTX 680 against a passively cooled card?

No, my idea of an efficiency comparison is to look at the capability of the GCN architecture vs. Kepler. Pitcairn, for example, is 40% smaller than Tahiti, has 40% fewer cores, and has less bandwidth, yet performs only 20-25% slower. That said, it is clear that the GCN architecture has much more potential than what Tahiti is bringing out.
So I'm putting performance/core, performance/die-area, and performance/watt all into perspective.
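As a rough sanity check, the perf-per-area and perf-per-core argument above can be put into numbers. This is a minimal Python sketch using the approximate figures from this discussion (a ~365mm² Tahiti die, a 212mm² Pitcairn die, and Pitcairn running ~22% slower); these are ballpark values, not official measurements.

```python
# Ballpark efficiency comparison: Tahiti (HD 7970) vs Pitcairn (HD 7870).
# Figures are approximations taken from the discussion above.
tahiti   = {"die_mm2": 365, "cores": 2048, "rel_perf": 1.00}
pitcairn = {"die_mm2": 212, "cores": 1280, "rel_perf": 0.78}  # ~20-25% slower

def per_mm2(chip):
    """Relative gaming performance per mm^2 of die area."""
    return chip["rel_perf"] / chip["die_mm2"]

def per_core(chip):
    """Relative gaming performance per shader core."""
    return chip["rel_perf"] / chip["cores"]

# Pitcairn comes out roughly a third ahead in performance per mm^2,
# which is the "untapped GCN potential" argument in a nutshell.
area_advantage = per_mm2(pitcairn) / per_mm2(tahiti) - 1
print(f"Pitcairn perf/mm^2 advantage over Tahiti: {area_advantage:.0%}")
```

With these inputs the smaller chip shows about a 34% perf/mm² advantage, which is why the post argues Tahiti is not extracting everything GCN can offer.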

Sorry, not convinced that a binned Tahiti is the next messiah. Don't save me a pew at the Church of Redfanboyism

Amazing how "computer" (I presume you mean compute function / GPGPU) has suddenly become of major importance to AMD followers. Where was all this concern when Fermi and Evergreen were doing the rounds?
On your second point, do you realise 1. that Quadro/Tesla will be based on GK110, since GK104 has no 72-bit ECC memory and is constrained in double-precision FP performance, and 2. that AMD have had capable workstation cards for generations? They just haven't put much effort into a software environment or drivers for the pro sector. Big engine: great. Not being able to figure out how to shift out of neutral: bad.
Since you're all for lopsided comparisons, are you willing to bet that Tahiti will be a GPGPU match for GK110? It sounds like Cray aren't.


The link you posted actually quotes 20-25%. Leaving aside your lowballing, the 20-25% is gaming performance, not compute. Since you have trouble distinguishing the two:
GTX 680 FLOPS: 1006 MHz core x 1536 shaders x 2 OPC = 3090.432 GFLOPS. Double precision is artificially capped at a 1:24 rate.
GK110 would need only an 800 MHz core clock to have a 20% FLOP advantage (800 x 2304 x 2 OPC = 3686 GFLOPS), but here's the kicker: Quadro DP runs at a full 1:2 rate. Now, according to this 3DCenter article (http://www.3dcenter.org/news/was-vom-nvidia-gk110-chip-zu-erwarten-ist), probably a bit more credible than Videocardz and OBR,
single precision is estimated at 4000+ GFLOPS (2000+ double precision), so:
GK110: 4000+ FP32 and 2000+ FP64
Tahiti XT: 3788 FP32 and 947 FP64... and that's making the huge assumption that an AMD pro card could be built around Tahiti XT at all. For AMD's last architecture, they used Cayman LE, a HD 6950 with 128 shaders fused off (FirePro V7900).
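The throughput figures traded above all come from the same simple formula: peak GFLOPS = core clock (MHz) x shader count x 2 ops/clock (fused multiply-add), divided by 1000. A quick Python check of the numbers in this post; note the GK110 shader count and clock are speculative, as the post itself says.

```python
def sp_gflops(clock_mhz, shaders, ops_per_clock=2):
    """Peak theoretical single-precision throughput in GFLOPS (FMA = 2 ops/clock)."""
    return clock_mhz * shaders * ops_per_clock / 1000.0

gtx680    = sp_gflops(1006, 1536)  # 3090.432 GFLOPS, as quoted above
gk110_est = sp_gflops(800, 2304)   # ~3686 GFLOPS at a speculative 800 MHz / 2304 shaders
tahiti_xt = sp_gflops(925, 2048)   # 3788.8 GFLOPS (HD 7970 at stock)

# Double precision scales down by the FP64 rate:
# 1:24 on the consumer GTX 680, 1:4 on Tahiti XT.
print(f"GTX 680 FP64:   {gtx680 / 24:7.1f} GFLOPS")
print(f"Tahiti XT FP64: {tahiti_xt / 4:7.1f} GFLOPS")
```

Tahiti XT at a 1:4 rate lands on 947.2 GFLOPS FP64, matching the "947 FP64" figure in the post, while the GTX 680's 1:24 cap leaves it near 129 GFLOPS, which is why the compute comparison is so lopsided.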


It isn't the next messiah; it's just an improvement over something that was already great, and no one can deny that. If you do, then bring your evidence. Telling me GK110 WILL be better is not a valid argument: it is yet to be released, and what you post is speculation about specs and even more speculation about release dates. As far as I remember, I read rumors saying Sep/Oct, but I'm not sure. Either way, even if it's August, that will be around 3-4 months before AMD releases the HD 8970 (one year from Tahiti), exactly the same gap as between Tahiti and Kepler, and who knows what they will bring by then with the enhanced GCN. But so far it's said to be 20% better than Tahiti in compute within the SAME power envelope.
http://videocardz.com/30786/amd-radeon-hd-8970-speculation-radeon-hd-7990-delayed
This slide states that Tenerife did 4500 GFLOPS in single precision as of MARCH 2012, meaning it can get even better. So if we take your speculation seriously, then 4000 GFLOPS for GK110 is already something AMD is achieving in-house. But does it matter? No, it doesn't, because until it's released there is no point in arguing.

As for now, does Kepler beat AMD in compute? No, it doesn't even come close. Does it beat it in gaming? Barely, and AMD seems to be closing the gap with the new binned Tahiti, not to mention they already trade blows depending on the titles.
So it's smart to stop bashing AMD, be fair, and give each camp its credit.
Oh, and for your info, I run a GTX 460. No one denied that Fermi was a badass architecture, except for the GTX 480 and 470, which pretty much weren't ready and had a bad start. But when Fermi was properly refined it was way better than VLIW5 in gaming and compute, and while VLIW4 was more efficient if you only look at gaming, performance/watt, and die size, it was light-years behind in compute. Not to mention that with NVIDIA releasing 500mm²+ chips, they sure held the performance crown.

And NVIDIA will take the same approach next time around, knowing that GK110 is speculated to be 550mm², a good 200mm² bigger than Tahiti. So will it be faster than Tahiti and GK104? Hell yes it will, but it will also cost more to manufacture and of course consume more power. And I can tell you AMD can't beat it with a 360mm² die; that would require an architecture roughly 40% more efficient than NVIDIA's Kepler, which I don't think will happen, since AMD and NVIDIA are on par in terms of architecture. If AMD released a big chip like that, they could beat whatever NVIDIA brings, but I doubt that will ever happen; it's just not AMD's methodology.
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
It isn't the next messiah; it's just an improvement over something that was already great, and no one can deny that. If you do, then bring your evidence. Telling me GK110 WILL be better is not a valid argument.
I'm telling no such thing. What I'm putting forward is the speculation of others, much the same as...
It is yet to be released, and what you post is speculation about specs and even more speculation about release dates. As far as I remember, I read rumors saying Sep/Oct, but I'm not sure. Either way, even if it's August, that will be around 3-4 months before AMD releases the HD 8970 (one year from Tahiti), exactly the same gap as between Tahiti and Kepler, and who knows what they will bring by then with the enhanced GCN. But so far it's said to be 20% better than Tahiti in compute within the SAME power envelope.
...the speculation you're passing off as fact (note the bolded part; feel free to post some factual links).
This slide states that Tenerife did 4500 GFLOPS in single precision as of MARCH 2012, meaning it can get even better.
You mean the slide that was found to be fake a few days after it showed up? The same slide that had the word "enabling" misspelled in the fine print?

So if we take your speculation seriously, then 4000 GFLOPS for GK110 is already something AMD is achieving in-house.
Supposition masquerading as fact ?
2. In the days of ATI, they never made bigger chips and would never compete with NVIDIA for the fastest GPU; they were all about efficiency. The HD 3870 was a tiny chip with 320 Radeon cores and a die size (192mm²) smaller than that of the HD 7870's Pitcairn (212mm²).
Sorry, that's either bullshit or a knowledge base that doesn't extend further back than RV670.
And do you know why ATi pursued a small-chip strategy? It's because ATi released a pig called R600, and R600 was 420mm². Prior to RV670 (before your time, I assume), ATi didn't have a small-die strategy; it had a win-at-all-costs strategy, ATi's R580 (352mm²) and R520 (288mm²) vs. Nvidia's G71 (196mm²) being a prime example.
 
Joined
Feb 13, 2012
Messages
522 (0.12/day)
I'm telling no such thing. What I'm putting forward is the speculation of others, much the same as...

...the speculation you're passing off as fact (note the bolded part- feel free to post some factual links)

You mean the slide that was found to be fake a few days after it showed up? The same slide that had the word "enabling" misspelled in the fine print?
http://www.dvhardware.net/news/2012/amd_tenerife_slide.jpg

Supposition masquerading as fact ?

Exactly why I posted the following:
"but does it matter? No, it doesn't, because until it's released there is no point in arguing"
But of course you decided to totally ignore that.
And a spelling mistake doesn't automatically mean it's fake; it's humans, who are subject to error, who design these things.


Sorry, that's either bullshit or a knowledge base that doesn't extend further back than RV670.
And do you know why ATi pursued a small-chip strategy? It's because ATi released a pig called R600, and R600 was 420mm². Prior to RV670 (before your time, I assume), ATi didn't have a small-die strategy; it had a win-at-all-costs strategy, ATi's R580 (352mm²) and R520 (288mm²) vs. Nvidia's G71 (196mm²) being a prime example.

I hope you are aware that the post I was replying to was comparing the HD 3000 and HD 4000 series to recent cards, right? I'm talking about those generations, which the earlier post mentioned, and about the strategy ATI had at the time, which has only recently changed.
It's just funny how you cherry-pick statements, or generalize my specific statements, and forget the complete picture just for the sake of arguing. That's what you have been doing so far: arguing for the sake of arguing. It would have been way more productive to respond to the statements you totally ignored, which are actually what this thread is about: as of today, the HD 7970 dominates any NVIDIA solution in compute, and in gaming they trade blows depending on the titles, and with Tahiti reaching a new level of efficiency things will get even more interesting. Whether you care about gaming or compute is up to the buyer. As for GCN, it remains a very solid architecture which isn't necessarily meant to compete with NVIDIA only; it is designed for future integration with CPU cores and HSA, so it is a step in the right direction. But again, remember "the big picture."
Either way, enough with the trolling and fruitless arguing; let's not kill this thread by going way off topic. I'm sure you can find a sentence or two here and there to argue about, but I don't intend to keep this going.
 