Friday, June 15th 2012

AMD Radeon HD 7970 GHz Edition "Tahiti XT2" Detailed

We've known since May of a new high-end, single-GPU graphics card SKU in the works at AMD. Called the Radeon HD 7970 GHz Edition, the SKU is designed to regain AMD's competitiveness against NVIDIA's GeForce GTX 680. We're hearing a few additional details about it. To begin with, AMD has worked with TSMC to refine the chip design. The Tahiti XT2 will be able to sustain significantly higher clock speeds, at significantly lower voltages, than the current breed of Tahiti XT chips.

Tahiti XT2, or Radeon HD 7970 GHz Edition, will ship with a core clock speed of 1100 MHz, 175 MHz faster than the HD 7970. The GPU core voltage of Tahiti XT2 will be lower, at 1.020V, compared to 1.175V of the Tahiti XT. It's unlikely that AMD will tinker with memory clock speed, since Tahiti already has a 384-bit wide GDDR5 memory interface, which gives it 264 GB/s memory bandwidth at 1375 MHz (5.50 GHz effective). According to the source, the new SKU enters mass-production next week. So best case, it should reach markets by late-June or early-July.
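The quoted bandwidth figure follows directly from the bus width and effective data rate; a quick illustrative check in Python using the numbers above:

```python
# Memory bandwidth = bus width (in bytes) x effective transfer rate.
bus_width_bits = 384
effective_rate = 5.50e9            # 1375 MHz GDDR5, effectively 5.50 GT/s

bandwidth_gb_s = (bus_width_bits / 8) * effective_rate / 1e9
print(bandwidth_gb_s)              # 264.0, matching the quoted 264 GB/s
```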

Source: OCaholic.ch

112 Comments on AMD Radeon HD 7970 GHz Edition "Tahiti XT2" Detailed

#1
sergionography
Now the question remains: will there be a new revision of the whole GCN lineup, or are they saving it for next gen? Bigger chips, same consumption?
As for XTX, I think they are saving that for the 2304-shader Tahiti XD
If the Tahiti XT that was released in January wasn't exactly what AMD had planned, then we can safely assume that the crossed-out 1000 MHz, 2304-core Tahiti was the original one XD
Let's all dream: GHz Edition 2304-core Tahiti for less than $500
Posted on Reply
#2
Xzibit
by: T4C Fantasy
So basically the HD 7970 is a professional gaming graphics card, since it's still on par with a 680 in gaming and has 2x the compute power.
This generation I believe AMD has hands down the superior card. People can argue until their eyes bleed about which is the better GPU company, but as far as overall value and performance go, AMD won this round.

Tahiti XT is what the Kepler 680 should have been relative to the Fermi 580.

Kepler can only handle double precision at a fraction of its single-precision rate. This is why their Tesla K10 features two GK104 chips to make up for the lack of compute power, and it's also being marketed to a narrower market tier than before, since it lacks strong DP.
"GK104 lacks the ECC and compute flexibility of the Fermi Tesla cards"
"NVIDIA’s goal for K10 is to go after the specific market segments that don’t need ECC and don’t need flexibility"

AMD didn't need to cut down compute power to achieve low power usage, nor did it have to introduce the problematic GPU Boost feature that's causing so many headaches keeping temps and power usage tamed.

I'm sure NVIDIA could have made something similar in size and performance, but either they couldn't shrink the die small enough with the compute hardware on it while keeping power usage low, or they simply cut out the compute power to compete with AMD at a similar die size and power usage.
Posted on Reply
#3
Disruptor4
Well.... time to sell my current Gigabyte Windforce 7970 for a Gigabyte Windforce Gigahertz edition 7970 :P
Higher clocks for less heat/power... that's what I'm talking about :P
Posted on Reply
#4
MrMilli
by: fullinfusion
Sounds like AMD doesn't have anything up its sleeve for a while if they're just binning for higher clocks.
This isn't binning. This is a new revision of Tahiti hence the name Tahiti XT2.
Posted on Reply
#6
Aquinus
Resident Wat-man
by: MrMilli
There, fixed!

http://parallelis.com/kepler-underperform-on-gpgpu-gtx680/
AMD video cards since the 4000 series have kept about a 4:1 ratio of single-precision to double-precision throughput, which is pretty fair. Most graphics work still uses single-precision vectors, which would be why the 680 shines in video games but not in tasks like folding.
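A rough sketch of what those rate ratios mean in peak numbers, using the publicly listed reference shader counts and clocks for both cards (illustrative figures, not measurements):

```python
def tflops(shaders, clock_ghz):
    # Peak single-precision throughput: shaders x 2 FLOPs (FMA) per clock.
    return shaders * 2 * clock_ghz / 1000.0

sp_7970 = tflops(2048, 0.925)   # ~3.79 TFLOPS single precision
dp_7970 = sp_7970 / 4           # Tahiti runs DP at 1/4 the SP rate: ~0.95
sp_680  = tflops(1536, 1.006)   # ~3.09 TFLOPS single precision
dp_680  = sp_680 / 24           # GK104 runs DP at 1/24 the SP rate: ~0.13

print(f"HD 7970 DP: {dp_7970:.2f} TFLOPS, GTX 680 DP: {dp_680:.2f} TFLOPS")
```

So while the two cards are close in single precision, the 7970's peak double-precision rate is several times the 680's, which is the gap the folding comparisons reflect.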
Posted on Reply
#7
T4C Fantasy
CPU & GPU DB Maintainer
by: Xzibit
This generation I believe AMD has hands down the superior card. People can argue until their eyes bleed about which is the better GPU company, but as far as overall value and performance go, AMD won this round.

Tahiti XT is what the Kepler 680 should have been relative to the Fermi 580.

Kepler can only handle double precision at a fraction of its single-precision rate. This is why their Tesla K10 features two GK104 chips to make up for the lack of compute power, and it's also being marketed to a narrower market tier than before, since it lacks strong DP.
"GK104 lacks the ECC and compute flexibility of the Fermi Tesla cards"
"NVIDIA’s goal for K10 is to go after the specific market segments that don’t need ECC and don’t need flexibility"

AMD didn't need to cut down compute power to achieve low power usage, nor did it have to introduce the problematic GPU Boost feature that's causing so many headaches keeping temps and power usage tamed.

I'm sure NVIDIA could have made something similar in size and performance, but either they couldn't shrink the die small enough with the compute hardware on it while keeping power usage low, or they simply cut out the compute power to compete with AMD at a similar die size and power usage.
It's all about timing: NVIDIA didn't expect AMD to release so early, so they rushed their design like they did with the 400 series. The 600 series is only slightly better at gaming than the 7000 series, so I will say the 600 is the superior card for gaming, even if only slightly... However, I am a folder and a gamer, so the 7000 series is for me.

Edit: I "was" a pure AMD fanboy, but now I'm just a hardware realist; I go for whatever does my tasks best for the price and efficiency. So if the NVIDIA Maxwell rumors are true about 12x (not 12%, but 12x!) more performance, the biggest leap between two generations since the Fermi architecture, then I'll be getting a GTX 780, but until then I'm content with my HD 7970.
Posted on Reply
#8
Scrizz
by: cadaveca
W...T...F...



XT2? Why doesn't everyone call it XTX, like they should?


:roll:
I miss the old nomenclature. :cry:
xD
Posted on Reply
#9
HellasVagabond
by: Xzibit
This generation I believe AMD has hands down the superior card. People can argue until their eyes bleed about which is the better GPU company, but as far as overall value and performance go, AMD won this round.

Tahiti XT is what the Kepler 680 should have been relative to the Fermi 580.

Kepler can only handle double precision at a fraction of its single-precision rate. This is why their Tesla K10 features two GK104 chips to make up for the lack of compute power, and it's also being marketed to a narrower market tier than before, since it lacks strong DP.
"GK104 lacks the ECC and compute flexibility of the Fermi Tesla cards"
"NVIDIA’s goal for K10 is to go after the specific market segments that don’t need ECC and don’t need flexibility"

AMD didn't need to cut down compute power to achieve low power usage, nor did it have to introduce the problematic GPU Boost feature that's causing so many headaches keeping temps and power usage tamed.

I'm sure NVIDIA could have made something similar in size and performance, but either they couldn't shrink the die small enough with the compute hardware on it while keeping power usage low, or they simply cut out the compute power to compete with AMD at a similar die size and power usage.
You are not serious, right? Friend, we compare graphics cards (technology-wise) the way they leave the factory, in reference designs, not partner designs. Find me a single reference AMD card (single GPU) that can go up against the reference GTX 680. The same goes for the GTX 690 (multi-GPU).
And then we have the various optimizations/cuts AMD has performed in their drivers over the last three years to gain more FPS, something not many people accept (until they run a game at default settings and start noticing various objects not where they should be, something which changes once you set every setting to high).
I could continue about this and talk about PhysX, GRID and VGX, but there's no point in doing that.

I support both teams because I have both, but calling the 79xx series better than the GTX 6xx series is... weird. AMD lost this round, plain and simple. Even if you just look at the fact that they still haven't released the 7990, that's more than enough.
Posted on Reply
#10
Xzibit
by: HellasVagabond
You are not serious, right? Friend, we compare graphics cards (technology-wise) the way they leave the factory, in reference designs, not partner designs. Find me a single reference AMD card (single GPU) that can go up against the reference GTX 680. The same goes for the GTX 690 (multi-GPU).
And then we have the various optimizations/cuts AMD has performed in their drivers over the last three years to gain more FPS, something not many people accept (until they run a game at default settings and start noticing various objects not where they should be, something which changes once you set every setting to high).
I could continue about this and talk about PhysX, GRID and VGX, but there's no point in doing that.

I support both teams because I have both, but calling the 79xx series better than the GTX 6xx series is... weird. AMD lost this round, plain and simple. Even if you just look at the fact that they still haven't released the 7990, that's more than enough.
I haven't owned an ATI card in five years. Still don't own one.

You're basing your premise on GAMING only. The GTX 680 is what, 2-5% better in certain games? I hardly think that justifies having 35%+ less compute power.

PhysX is the biggest marketing joke. You still take a 30-40% performance hit when it's active on a single card.

GRID is a nice theory on paper, but it's a service. They explained it at E3; look it up. Current testing is being compared to one-way console latency. The highest test population they had was 40 people. You could say GRID is NVIDIA's response to losing all three console contracts.

VGX is another nice thing on paper, but it has more to do with NVIDIA pouring almost $2 billion into the mobile market over the last six months than with its discrete graphics division. One of its prime focuses will be to upscale performance from tablets and mobile devices. That in and of itself will cause all kinds of backlash from their mobile hardware partners.
Posted on Reply
#11
Googoo24
And then we have the various optimizations/cuts AMD has performed in their drivers over the last three years to gain more FPS, something not many people accept (until they run a game at default settings and start noticing various objects not where they should be, something which changes once you set every setting to high).
Huh?
GRID
Is GRID really part of this?
Posted on Reply
#12
HellasVagabond
Friend, EVERYTHING is nice on paper at first, but what else do you expect? At least they are bringing something new, and up until today I have yet to see any driver optimizations by NV that make objects vanish from games (play, for example, World of Tanks with a Radeon at stock settings and you will see what I am talking about; one out of many games with glitches when CCC is on default) just to get a few more FPS.

And for whoever claims he's getting a card based on compute power so he can help SETI or Folding@Home, well, there are still previous-gen cards with crazy compute power.

Even a 5% increase in games (even if we don't take into consideration the various driver optimizations) is a 5% increase, and thus a better card for gaming. If the 7970 were 5% better, it would be the better gaming card. It's simple.

@Googoo, Google is your friend; it should clear this up for you.
Posted on Reply
#13
Googoo24
@Googoo, Google is your friend; it should clear this up for you.
Did that already, and I can find nothing on objects disappearing from games. Perhaps you can "help" me instead.

And I'm still not getting how GRID is an incentive to buy an NVIDIA card.
Posted on Reply
#14
theoneandonlymrk
by: HellasVagabond
And for whoever claims he's getting a card based on compute power so he can help SETI or Folding@Home, well, there are still previous-gen cards with crazy compute power.
That's right, my 5870 and 5850 piss on an NVIDIA 6xx for compute power. NV won what round?
It was they who started the compute-on-GPU revolution with PhysX, yet they're behind on my scorecard. AMD made a card that really was a beast in all scenarios; NVIDIA went for a bit more on gaming, not loads more, a bit, and for more money.

For ages you couldn't even buy NV if you wanted to, and in all that time AMD's been rolling them out, and then those early adopters got their egos spanked by the 670, which for a gamer (non-folder) is clearly the one to buy.
Posted on Reply
#15
HellasVagabond
@Googoo24 you will not see many talking about objects disappearing; many just call them glitches in general. You can certainly find many results about AMD driver optimizations.

@theoneandonlymrk However, the immediate opponent of the 5870 was the GTX 480, so no, NV didn't lose that round in terms of performance.
Posted on Reply
#16
eidairaman1
This topic is about the refresh of the 7970, not the 5870, not the 670, not the 680, not the 480. Stay on topic.
Posted on Reply
#17
HellasVagabond
Expanding a topic with sound arguments and civilized manners is never a bad thing.

In any case, if the "trend" of our times is for AMD to release a card, sell it, and then release basically the same card with higher clocks and sell it for even more, then I really hope NV doesn't follow the same path. I like to see technology progressing, not the same chips getting slightly updated and released again, that's all.
Posted on Reply
#18
eidairaman1
by: HellasVagabond
Expanding a topic with sound arguments and civilized manners is never a bad thing.

In any case, if the "trend" of our times is for AMD to release a card, sell it, and then release basically the same card with higher clocks and sell it for even more, then I really hope NV doesn't follow the same path. I like to see technology progressing, not the same chips getting slightly updated and released again, that's all.
Where have you been the last 10+ years?

AMD (ATI at the time) released the 9500/9700 and then released the 9600/9800 as a product refresh.

NV did the same thing back then.

Product refreshes have only just been reintroduced, since they dropped the idea after the HD 2000 series.
Posted on Reply
#19
theoneandonlymrk
by: HellasVagabond
Expanding a topic with sound arguments and civilized manners is never a bad thing.

In any case, if the "trend" of our times is for AMD to release a card, sell it, and then release basically the same card with higher clocks and sell it for even more, then I really hope NV doesn't follow the same path. I like to see technology progressing, not the same chips getting slightly updated and released again, that's all.
Thats what they all do.
Posted on Reply
#20
N3M3515
by: eidairaman1
Where have you been the last 10+ years?

AMD (ATI at the time) released the 9500/9700 and then released the 9600/9800 as a product refresh.

NV did the same thing back then.

Product refreshes have only just been reintroduced, since they dropped the idea after the HD 2000 series.
don't forget about
5900 ultra -> 5950 ultra
5600 -> 5700 ultra
4870 -> 4890
GTX 280 -> GTX 285
9700 pro -> 9800 pro -> 9800XT
X800XT -> X850XT
X1900XT -> X1950XTX
7800 GTX -> 7800 GTX 512
etc....

PS: now that I think about it, the jump from GTX 280 to GTX 285 was almost like GTX 480 -> GTX 580, hehe...
Posted on Reply
#21
eidairaman1
by: N3M3515
don't forget about
5900 ultra -> 5950 ultra
5600 -> 5700 ultra
4870 -> 4890
GTX 280 -> GTX 285
9700 pro -> 9800 pro -> 9800XT
X800XT -> X850XT
X1900XT -> X1950XTX
7800 GTX -> 7800 GTX 512
etc....

PS: now that I think about it, the jump from GTX 280 to GTX 285 was almost like GTX 480 -> GTX 580, hehe...
I know what you were getting at but thanks
Posted on Reply
#22
Casecutter
by: rvalencia
The 7970 consumes 210 watts: http://www.atomicmpc.com.au/Feature/296004,amd-radeon-hd-7970-reference-disassembly-guide.aspx

From http://www.guru3d.com/article/asus-radeon-hd-7970-directcu-ii-review/4 the Radeon HD 7970 has 210 watts:

"Believe it or not, the high end Radeon HD 7970 has a rated peak TDP (maximum power draw) of just 210 Watt, and that's really all right for a product of this caliber, features and performance."

From http://www.techspot.com/review/481-amd-radeon-7970/page2.html the Radeon HD 7970 has 210 watts:

"it still chugs up to 210 watts under load"
Not arguing; TDP is different from consumption under load. TDP is the design limit for the chip, PCB, power section, etc. of AMD's reference design. Can that card or other AIB designs get above that? Sure; with really good cooling (water blocks) some reference cards can make it above that, but then you take the risk of bricking. Not that you don't have that same risk with normal OC'ing on a regular card; it's just that most often there's some headroom left to bump it.

As you say, most often 210 W is what it pulls at normal gaming/stress loads, but there's something like a 20% buffer on max power in the design of a reference board. What this indicates is that AMD will probably hold to around that 210 W, but can now add up to 20% higher clocks while still staying within it. That's not something you get from binning chips, but from an improvement of the manufacturing process that controls gate leakage.
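The clocks-within-the-same-envelope argument can be sanity-checked with the first-order CMOS dynamic power relation P ∝ f·V²; plugging in the clocks and voltages reported in the article (a back-of-the-envelope sketch, not a power measurement):

```python
# Dynamic power scales roughly as frequency * voltage^2 (P ~ f * V^2).
f_old, v_old = 925, 1.175    # HD 7970 reference clock (MHz) and core voltage
f_new, v_new = 1100, 1.020   # GHz Edition figures reported in the article

ratio = (f_new / f_old) * (v_new / v_old) ** 2
print(f"relative dynamic power: {ratio:.2f}")  # ~0.90: higher clocks, less power
```

The voltage drop more than pays for the frequency increase, which is consistent with the new chip fitting the existing 210 W budget.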
Posted on Reply
#23
HellasVagabond
by: N3M3515
don't forget about
5900 ultra -> 5950 ultra
5600 -> 5700 ultra
4870 -> 4890
GTX 280 -> GTX 285
9700 pro -> 9800 pro -> 9800XT
X800XT -> X850XT
X1900XT -> X1950XTX
7800 GTX -> 7800 GTX 512
etc....

PS: now that I think about it, the jump from GTX 280 to GTX 285 was almost like GTX 480 -> GTX 580, hehe...
Actually, some of the products you refer to had other changes, not just higher/lower clocks. Some had more/fewer shader units, and others featured more RAM clocked differently. Plus, some were released within a few weeks of each other, so they weren't meant as product refreshes but as additions to that line of GPUs.

However, even if that was always the case, I don't think we should be happy about it or accept it with open arms.
Posted on Reply
#24
N3M3515
by: HellasVagabond
Actually some of the products you refer to had other changes and not just higher/lower clocks...Some had more/less shader units and others featured more ram clocked differently....Plus some where released with a few weeks in between so they weren't ment as a product refresh but as an addition to that line of GPUs.

However even if that was always the case i don't think we should be happy about it nor accept it with open arms.
Exactly, some.
Posted on Reply
#25
Googoo24
by: N3M3515
Exactly, some.
Personally, I also consider the 460 (despite the numbering) to be a replacement for the abysmal 465. It completely eclipsed that card and came two months later. Some might disagree, though, since it was GF104.
Posted on Reply