Thursday, September 26th 2013

Radeon R9 and Radeon R7 Graphics Cards Pictured Some More

Here's a quick recap of AMD's updated product stack, spread between the R9 and R7 series. This article can help you understand the new nomenclature. AMD's lineup begins with the Radeon R7 250 and Radeon R7 260X. The two are based on the 28 nm "Curacao" silicon, a variation of the "Pitcairn" silicon the previous-generation Radeon HD 7870 was based on. The R7 250 is expected to be priced around US $89, with 1 GB of RAM and performance rated at over 2,000 points in the 3DMark Fire Strike benchmark. The R7 260X features double the memory at 2 GB, higher clock speeds, possibly more number-crunching resources, a Fire Strike score of over 3,700 points, and pricing around $139. This card should turn up the heat against the likes of the GeForce GTX 650 Ti Boost.

Moving on, there's the $199 Radeon R9 270X. Based on a chip not unlike "Tahiti LE," it features 2 GB of memory and a 3DMark Fire Strike score of over 5,500 points. Then there's the Radeon R9 280X. This card, priced attractively at $299, is practically a rebrand of the Radeon HD 7970 GHz Edition. It features 3 GB of RAM and scores over 6,800 points in 3DMark Fire Strike. Then there are the R9 290 and R9 290X. AMD flew dozens of scribes thousands of miles over to Hawaii, and left them without an official announcement of the specifications of the two. From what AMD told us, the two feature 4 GB of memory, over 5 TFLOP/s of compute power, and over 300 GB/s of memory bandwidth. The cards we mentioned are pictured in that order below.
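For quick reference, the reported line-up can be summed up as plain data. The sketch below simply restates the figures above (prices and Fire Strike numbers are AMD's claims, not measured results); the R9 290 and 290X are noted in a comment since their pricing and scores are still unannounced.

```python
# Reported specs of AMD's refreshed stack, as recapped above.
# Prices in USD; Fire Strike values are AMD's "over X points" claims.
new_stack = {
    "R7 250":  {"price": 89,  "memory_gb": 1, "firestrike_min": 2000},
    "R7 260X": {"price": 139, "memory_gb": 2, "firestrike_min": 3700},
    "R9 270X": {"price": 199, "memory_gb": 2, "firestrike_min": 5500},
    "R9 280X": {"price": 299, "memory_gb": 3, "firestrike_min": 6800},
    # R9 290 / R9 290X: 4 GB of memory, over 5 TFLOP/s compute, and over
    # 300 GB/s memory bandwidth per AMD; pricing and scores not yet given.
}

for card, s in new_stack.items():
    print(f"{card}: ${s['price']}, {s['memory_gb']} GB, Fire Strike > {s['firestrike_min']}")
```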

More pictures follow.

Radeon R7 250
Radeon R7 260X
Radeon R9 270X
Radeon R9 280X
Radeon R9 290
Radeon R9 290X

77 Comments on Radeon R9 and Radeon R7 Graphics Cards Pictured Some More

#51
Casecutter
The Von Matrices: This seems to make no business sense, since any chips with defects can't be sold.
There's nothing confirming whether these are the only models they intend to release, or just a sprinkling of some spots. That said, it's pretty hard to fill in when they have such tight price steps... Idk. Just saying these might not be the only ones to release; or, as Nvidia did, you hold back binned parts and release them six months from now as R8s? Nvidia has shown that two derivative models per chip is no longer the rule, and that dies can be appropriated to completely different segments. Not saying it's a bad thing, just that the common way of looking at this may well be no more.
HumanSmoke: Either the charts AMD posted are correct, in which case information can be extrapolated from them, OR they are incorrect.
Or AMD just made the R9 290X bar fade out... After you used that chart to arrive at the 17-18% comment above, I noticed that the other bars all seem to end cleanly, while that one fades... IMO. It could perhaps be AMD trickery; that slide is not a clear measurement of data indicating performance. If it were, AMD would have to stipulate the system specs and other parameters to make it relevant engineering data... It's a PR slide, and as such not something we should deem as gospel.
#52
The Von Matrices
Casecutter: There's nothing confirming whether these are the only models they intend to release, or just a sprinkling of some spots. That said, it's pretty hard to fill in when they have such tight price steps... Idk. Just saying these might not be the only ones to release; or, as Nvidia did, you hold back binned parts and release them six months from now as R8s? Nvidia has shown that two derivative models per chip is no longer the rule, and that dies can be appropriated to completely different segments. Not saying it's a bad thing, just that the common way of looking at this may well be no more.
Nvidia makes a mind-boggling number of OEM card variations to efficiently allocate defective chips. Only the best chips get used for retail cards. This results in a lot of complaints from enthusiasts, because an Nvidia OEM card with the same model number as a retail card might have a completely different core, clock speeds, and memory configuration from the retail card, with the only similarity between the two being approximate performance. Maybe AMD is going this same route with its defective chips?
#53
TheoneandonlyMrK
HumanSmoke: Wow. That was a chore to read. Ever heard of punctuation and grammar, or is English your second language?
Please let me simplify:
Either the charts AMD posted are correct, in which case information can be extrapolated from them, OR they are incorrect and AMD is either lying or has some serious QC and/or internal communications problems.
It is either one or the other.

For someone who isn't "assed if the card blows a fart" you seem to be posting an awful lot. Personally I am interested in the new releases, both from an enthusiast standpoint, and as someone who advises and builds systems for others. If you don't like what is said then, by all means, use the ignore function in your CP.

YW
The reason I reply is because you post so much drivel... :shadedshu
You are misguiding others with your myopic view of the world, based on what most agree is a scant PR slide meant to churn the rumour mill.

But with you it's always essentially:

AMD are lying shit-nobs, OR they are lying shit-nobs (cross out one or the other each post)... yeah, your English makes it sound like more than that, but it isn't.
#54
Hilux SSRG
wolf: Thanks Hilux SSRG, you pulled a fanboy.
Not a fanboy comment. Just stating that both camps have put out rehashes, and not new chips, for the low to mid range for the last two generations. It's a shame that both companies are keeping prices artificially high on old product. And I bet Nvidia will no doubt pull a GTX 800-series rehash of the 600 series for the low to mid range.
#55
The Von Matrices
Hilux SSRG: Not a fanboy comment. Just stating that both camps have put out rehashes, and not new chips, for the low to mid range for the last two generations. It's a shame that both companies are keeping prices artificially high on old product. And I bet Nvidia will no doubt pull a GTX 800-series rehash of the 600 series for the low to mid range.
It all comes down to TSMC. They killed half-node processes and now are taking longer than 2 years for the full-node processes as well. Nvidia and AMD can only do so much without a process advancement. Without a process advancement, they can't increase performance without increasing die size and manufacturing cost; hence, no more performance in cards at the same price point.

Edit: this simplifies things a bit, because they could reduce margins or design a new chip. But reducing margins would bring them back to the HD 4000/GTX 200 series price wars, which are undesirable for them, and a redesigned low-end chip would only be viable for a short time before 20 nm came in and made it obsolete - they would never recoup their investment.
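To put the die-size/cost argument in rough numbers, here is a minimal sketch using the common dies-per-wafer approximation. The wafer price, die areas, and the assumption of perfect yield are all illustrative guesses, not AMD or TSMC figures.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Common first-order approximation (ignores defects and scribe lines).
    r = wafer_diameter_mm / 2.0
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

WAFER_COST = 5000.0  # illustrative 28 nm wafer price, not a quoted figure

for area in (212, 352):  # roughly Pitcairn-class vs. Tahiti-class die areas
    dies = gross_dies_per_wafer(area)
    print(f"{area} mm^2 die: ~{dies:.0f} per wafer, ~${WAFER_COST / dies:.0f} each")
# A bigger die on the same node means fewer dies per wafer and a higher cost
# per chip, which is why performance per dollar stalls without a node shrink.
```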
#56
Xzibit
Casecutter: Or AMD just made the R9 290X bar fade out... After you used that chart to arrive at the 17-18% comment above, I noticed that the other bars all seem to end cleanly, while that one fades... IMO. It could perhaps be AMD trickery; that slide is not a clear measurement of data indicating performance. If it were, AMD would have to stipulate the system specs and other parameters to make it relevant engineering data... It's a PR slide, and as such not something we should deem as gospel.
Holy ****

Someone with a brain. :respect:
#57
Casecutter
The Von Matrices: Maybe AMD is going this same route with its defective chips?
Good point, and it might well be a reason for the model matrix; let it rain confusion.
#58
Xzibit
Casecutter: Good point, and it might well be a reason for the model matrix; let it rain confusion.
More confusion...

If you go back and look at the presentation, when Raja is talking about "Over 5 TFLOPS Compute",

he's talking about the R9 290 series, not specifically the R9 290X.

GK110: from the 780 to the Titan, there is a 517 difference.

Tahiti: from the 7950 to the 7970 GHz, there is a 1,229 difference.

Add that to the speculation...
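Those look like single-precision GFLOPS deltas (shaders × clock × 2 FLOPs per clock). A rough check with commonly listed reference clocks reproduces the Tahiti figure exactly and lands near the GK110 one; the exact numbers shift slightly depending on which base/boost clock you pick.

```python
def sp_gflops(shaders, clock_mhz):
    # Single-precision throughput: 2 FLOPs (one FMA) per shader per clock.
    return shaders * clock_mhz * 2 / 1000.0

# Commonly listed reference clocks; boost states would change these numbers.
gtx_780    = sp_gflops(2304, 863)    # ~3977 GFLOPS
gtx_titan  = sp_gflops(2688, 837)    # ~4500 GFLOPS
hd_7950    = sp_gflops(1792, 800)    # ~2867 GFLOPS
hd_7970ghz = sp_gflops(2048, 1000)   # ~4096 GFLOPS

print(f"GK110:  Titan - 780     = {gtx_titan - gtx_780:.0f} GFLOPS")     # ~523
print(f"Tahiti: 7970 GHz - 7950 = {hd_7970ghz - hd_7950:.0f} GFLOPS")    # ~1229
```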
#59
Hilux SSRG
The Von Matrices: It all comes down to TSMC. They killed half-node processes and now are taking longer than 2 years for the full-node processes as well. Nvidia and AMD can only do so much without a process advancement. Without a process advancement, they can't increase performance without increasing die size and manufacturing cost; hence, no more performance in cards at the same price point.
True, but I've got to wonder if, in the process of designing chips, both AMD and Nvidia have reduced performance in order to prioritize low power. I do not know enough about die shrinks, but anyone can see Intel has quit on performance and only seeks low power for their CPUs. I wonder if GPUs are going, or have gone, that direction as well.
#60
Casecutter
Xzibit: More confusion...
I just meant that the new naming convention can provide new ways to perplex and obscure.
#61
The Von Matrices
Hilux SSRG: True, but I've got to wonder if, in the process of designing chips, both AMD and Nvidia have reduced performance in order to prioritize low power. I do not know enough about die shrinks, but anyone can see Intel has quit on performance and only seeks low power for their CPUs. I wonder if GPUs are going, or have gone, that direction as well.
I'm not sure that low power is AMD and NVidia's primary concern, particularly in the present. Mainstream laptops have already gotten rid of the discrete GPU, and the remaining mobile GPUs aren't particularly optimized for low power consumption. Both manufacturers have realized that they can't beat integrated graphics power consumption no matter how well they design a GPU. The low-power consumption graphics segment is now solely filled by integrated graphics. Think of NVidia's Optimus - they've essentially admitted that it's simply not possible to beat the power consumption of integrated graphics even by downclocking or shutting off most of the shaders of a discrete GPU. The discrete GPU has been relegated to situations where performance is more of a concern than power consumption. I don't see that relationship between discrete and integrated graphics changing anytime soon.
#62
Xzibit
btarunr: Unless AMD's PowerPoint skills suck, R9 290X Fire Strike is under 8,000.

Notice the top block (which ends at 8,000) is fading at the top. GTX TITAN's Fire Strike score ranges between 8,200 and 8,700.
They still suck at it, because the R9 280X doesn't match up to the 7970 GHz.

7970 @ stock (925/1375)
Fire Strike
Score: 6622
Graphics Score: 7402

7970 @ GHz/Boost clocks (1050/1500)
Fire Strike
Score: 7210
Graphics Score: 8235
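For context, the gap between those two runs works out as below (scores exactly as posted above):

```python
stock = {"overall": 6622, "graphics": 7402}   # 7970 @ 925/1375
ghz   = {"overall": 7210, "graphics": 8235}   # 7970 @ GHz/Boost clocks (1050/1500)

for key in ("overall", "graphics"):
    gain = ghz[key] / stock[key] - 1
    print(f"{key}: {stock[key]} -> {ghz[key]} (+{gain:.1%})")
# Roughly +8.9% overall and +11.3% graphics, which is why an "under 8,000"
# reading for the R9 280X looks low next to an actual 7970 GHz Edition run.
```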
#63
HumanSmoke
Casecutter: Or AMD just made the R9 290X bar fade out... After you used that chart to arrive at the 17-18% comment above, I noticed that the other bars all seem to end cleanly, while that one fades...
Yes, quite possible. Maybe the 18% is invalid... so the only other metric AMD released was the "over 5 TFLOPS" FP32 figure. Presumably if the number were closer to 6, then that is what they would have used, no? Assuming a 5.1-5.5 TFLOPS range conveniently gives you anywhere from an 18.6% increase over Tahiti's 4.313 TFLOPS number up to a maximum of ~27% (sketched at the end of this post).
Casecutter: It could perhaps be AMD trickery; that slide is not a clear measurement of data indicating performance. If it were, AMD would have to stipulate the system specs and other parameters to make it relevant engineering data... It's a PR slide, and as such not something we should deem as gospel.
Again, quite possible. It's not as though AMD hasn't got a prior record in this regard. BUT in the absence of any other data, what would YOU suggest be used as a comparison and speculation point? (This is not a rhetorical question.)

Or is it verboten to speculate? I seem to recall that many people here have speculated and hypothesised on much less information than that. Allow me to refresh your memory of your speculation on what is now called the 290X:
Casecutter: We're not going to see anything like 39% from just a re-spin on 28 nm. I figure 25% in hardware; what they find in a new release driver, who knows. If they gave us 7990 performance, which is 22% more than a Titan @ 2560x, while holding to the alleged 250 W... in a $550 card.
As far as the performance I speculated upon, I worked from the given information and came up with ~18% over the HD 7970 GE, based on the Fire Strike slide with preliminary driver support, and the FP32 numbers. The fact that the GTX 780 scores around the same:

led me to conclude that the cards are likely evenly matched... as other sources have intimated:
R9 290X: unfortunately I cannot release pricing or specification info yet, but price-wise it's similar to the GTX 780 but slightly faster, right now; of course AMD could change the launch price at any time.
The other speculation I indulged in was price, and which GPU sits under the various cards' skirts.
None of that speculation has been shown to be inordinately outside the likely expected parameters so far.
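For transparency, the FP32 extrapolation referred to above boils down to the sketch below. The 4.313 TFLOPS Tahiti figure and the assumed 5.1-5.5 TFLOPS reading of AMD's "over 5 TFLOPS" claim are the numbers quoted in this post, not confirmed specifications.

```python
TAHITI_GHZ_TFLOPS = 4.313            # HD 7970 GHz Edition FP32 figure used above

# Speculative readings of AMD's "over 5 TFLOPS" claim for the 290X.
for assumed_tflops in (5.1, 5.3, 5.5):
    gain = assumed_tflops / TAHITI_GHZ_TFLOPS - 1
    print(f"{assumed_tflops} TFLOPS -> +{gain:.1%} over the 7970 GHz Edition")
# Roughly +18% at the low end of the assumed range, up to ~+27% at 5.5 TFLOPS.
```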
#64
The Von Matrices
I want to make sure that everyone knows that Fire Strike is both a GPU and CPU benchmark. You can sway the scores by +/-1,000 with the same GPU just by changing the CPU. These AMD scores are worthless for comparing against competing cards unless they were both run on the same platform. But since AMD won't divulge this information, we just have to accept that any graphics card scoring within about 20% could be equal in performance to AMD's new graphics cards. It could swing either way.
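To illustrate how much the CPU side can move the headline number, here is a toy model of a combined score as a weighted harmonic mean of a graphics and a physics sub-score. The weights and sub-scores are illustrative assumptions, not Futuremark's published formula.

```python
def combined_score(graphics, physics, w_gfx=0.85, w_phys=0.15):
    # Toy weighted harmonic mean; weights are illustrative assumptions only.
    return (w_gfx + w_phys) / (w_gfx / graphics + w_phys / physics)

GRAPHICS = 8000                      # hypothetical GPU sub-score held constant
for physics in (6000, 9000, 12000):  # slow, mid-range, and fast CPU
    print(f"physics {physics}: overall ~{combined_score(GRAPHICS, physics):.0f}")
# The same GPU lands ~800 points apart (roughly 7600 to 8400 here) depending
# on the CPU, so platform-unspecified Fire Strike claims are hard to compare.
```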
#65
Serpent of Darkness
From the 6990 to the 7990, and including the 7970, they all average around a 25% to 35% increase in core frequency when they are overclocked. Let's assume this is true.

2,816 streaming processors × 2 FLOPs × 1,000 MHz boost = 5.632 TFLOPS.

If you consider the OC potential of the previous generations, the average is around 30%:

A single 6990's base clock is 830 MHz. Its max OC is roughly around 1060 to 1120 MHz: 27.71%.
A single 7970's base clock is 900 MHz, with a 1050 MHz boost. Its max OC is roughly 1321 MHz w/ LN2: 46.78%.
A single 7990's base clock is 950 MHz, with a 1000 MHz boost. Its max OC is roughly 1100 MHz: 10.00% (this may actually be higher)...

Averaging all of the above together = 28.16%...

What's my point?
10% OC headroom: 1.1 × 5.632 TFLOPS = 6.195 TFLOPS.
20% OC headroom: 1.2 × 5.632 TFLOPS = 6.758 TFLOPS.
28.16% OC headroom: 1.2816 × 5.632 TFLOPS = 7.218 TFLOPS.
30.00% OC headroom: 1.3 × 5.632 TFLOPS = 7.322 TFLOPS.

So, if plausible, it may not be that difficult to surpass the 5.0 to 6.0 TFLOPS boundaries with the upcoming R9-990... Sadly, only time will tell...
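Restated as a sketch (the 2,816-shader count and 1,000 MHz boost clock are the rumoured figures used in this post, and the headroom percentages are the averages estimated above):

```python
SHADERS, BOOST_MHZ = 2816, 1000
base_tflops = SHADERS * BOOST_MHZ * 2 / 1e6   # 5.632 TFLOPS at the rumoured clocks

# OC headroom scenarios taken from the averages estimated above.
for headroom in (0.10, 0.20, 0.2816, 0.30):
    print(f"+{headroom:.2%} OC -> {base_tflops * (1 + headroom):.3f} TFLOPS")
# 6.195, 6.758, 7.218, and 7.322 TFLOPS respectively.
```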
#66
Maban
Anyone else notice the DVI ports on the R9 280X, 290, and 290X are DVI-D instead of DVI-I?
#67
EarthDog
Maban: Anyone else notice the DVI ports on the R9 280X, 290, and 290X are DVI-D instead of DVI-I?
And? :confused:
#68
Maban
EarthDog: And? :confused:
And...that means it can't output to VGA. It's just an interesting fact.
#69
Casecutter
HumanSmoke: Or is it verboten to speculate?
Oh no, speculation is all we have, whether in late July in my words, or now with you working from a new slide. Just don't get all uppity when someone points out an alternate estimation. I just said it fades...

As to what you bolded in my words, you need to revisit the context of the question:
Prima.Vera: I would love to see the 9970 having the same performance level as two 7970 cards.
Casecutter: We're not going to see anything like 39% from just a re-spin on 28 nm. I figure 25% in hardware; what they find in a new release driver, who knows. If they gave us 7990 performance, which is 22% more than a Titan @ 2560x, while holding to the alleged 250 W... in a $550 card. Then where does that leave Volcanic Islands 4-5 months out?
I believe I used W1zzard's reference GTX 780 review, and that could've skewed the percentages a little.
www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780/26.html

All I was saying is that if they could deliver such performance (that of a 7990), it would be much more than I could envision... I didn't say I was expecting it. When I wrote "Volcanic"... I meant: what does that mean for the 20 nm process in a couple of months? Most still thought Q1 '14; even in July it was thought Volcanic Islands would be 20 nm.
I exhaled a while back... take a breath and wait for the 15th. ;)
#70
EarthDog
Maban: And... that means it can't output to VGA. It's just an interesting fact.
Who the hell is buying a $400+ card to run on a monitor with only VGA inputs? If you can afford such a card, you better have a monitor with DVI on it, LOL!
#71
The Von Matrices
EarthDog: Who the hell is buying a $400+ card to run on a monitor with only VGA inputs? If you can afford such a card, you better have a monitor with DVI on it, LOL!
I can see two (small) groups of people having problems with this:

1) The few die-hards still using 1600x1200/1920x1200 CRTs
2) Reviewers who test monitors for latency by comparing an LCD with a CRT.
#72
EarthDog
Forget the voiceless minority. :p
#73
The Von Matrices
EarthDog: Forget the voiceless minority. :p
I like compatibility, but if there is no longer a large enough userbase to justify the legacy features, then by all means remove them and use the savings toward features that more people will use. Right now DVI-D to VGA converters cost around $200, but if you want to mix legacy hardware with new hardware, you need to be willing to pay for adapters.
#74
a_ump
The Von Matrices: I like compatibility, but if there is no longer a large enough userbase to justify the legacy features, then by all means remove them and use the savings toward features that more people will use. Right now DVI-D to VGA converters cost around $200, but if you want to mix legacy hardware with new hardware, you need to be willing to pay for adapters.
Where did you get that they cost $200? A simple adapter would work, wouldn't it? Those are only $5-15, and converter boxes are $45-90. All this with a simple Google search.

Also, I use a VGA monitor with my GTX 560 because of budget; I couldn't get a new monitor. Just because people can get great hardware to go in the case doesn't mean they aren't stretching their limit. Hell, I don't plan to replace my current monitor until it breaks, so I may be part of that voiceless userbase ;) lol
#75
The Von Matrices
a_ump: Where did you get that they cost $200? A simple adapter would work, wouldn't it? Those are only $5-15, and converter boxes are $45-90. All this with a simple Google search.

Also, I use a VGA monitor with my GTX 560 because of budget; I couldn't get a new monitor. Just because people can get great hardware to go in the case doesn't mean they aren't stretching their limit. Hell, I don't plan to replace my current monitor until it breaks, so I may be part of that voiceless userbase ;) lol
The simple adapters only work for DVI-I/DVI-A ports (which means they already have native analog output; the adapter just rearranges the pins); these cards have only DVI-D ports which means no native analog output. You are right; I overestimated the active adapters' cost; I did find a Startech adapter for $45.99.

VGA has a resolution limitation of about 3MP (QXGA 2048x1536) @60Hz; if you are using a VGA monitor, you can't have a resolution above that, but even those types of monitors are rare. These high end cards are overkill for a single 2MP or 3MP 60Hz monitor. Rest assured, the mid range and lower end cards have DVI-I ports so that you can still use cheap VGA adapters, and they are a much better performance match for the VGA resolutions.

EDIT: Interestingly enough, DisplayPort to VGA adapters are much cheaper than DVI-D to VGA adapters. You can get a DisplayPort to VGA adapter for $24.71, although it is limited to 1920x1200 @60Hz (not QXGA 2048x1536 @60Hz).