Tuesday, October 1st 2013

Radeon R9 290X Clock Speeds Surface, Benchmarked

Radeon R9 290X is looking increasingly good on paper. Most of its rumored specifications and SEP pricing were reported late last week, but the one detail that eluded us was clock speeds. A source who goes by the name Grant Kim, with access to a Radeon R9 290X sample, disclosed its clock speeds and ran a few tests for us. To begin with, the GPU core is clocked at 800 MHz. There is no dynamic-overclocking feature, but the chip can lower its clocks, taking load and temperatures into account. The memory is clocked at 1125 MHz (4.50 GHz GDDR5-effective). At that speed, the chip churns out 288 GB/s of memory bandwidth over its 512-bit wide memory interface. Those clock speeds were reported to us by the GPU-Z client, so we give them the benefit of the doubt, even though they go against AMD's ">300 GB/s memory bandwidth" bullet point in its presentation.
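For reference, the 288 GB/s figure follows directly from those numbers; a quick sketch of the arithmetic (the function name is ours, not from any tool):

```python
def gddr5_bandwidth_gb_s(effective_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second times bus width in bytes."""
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

print(gddr5_bandwidth_gb_s(4500, 512))  # 288.0 -> the figure reported above
print(gddr5_bandwidth_gb_s(5000, 512))  # 320.0 -> what ">300 GB/s" would require
```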

The tests run on the card cover frame-rates and frame-latency for Aliens vs. Predator, Battlefield 3, Crysis 3, GRID 2, Tomb Raider (2013), RAGE, and TESV: Skyrim, in no-antialiasing, FXAA, and MSAA modes, at a resolution of 5760 x 1080 pixels. An NVIDIA GeForce GTX TITAN was pitted against it, running the latest WHQL driver. We must remind you that at that resolution, AMD and NVIDIA GPUs tend to behave a little differently due to the way they handle multi-display, so this may be an apples-to-coconuts comparison. In Tomb Raider (2013), the R9 290X romps ahead of the GTX TITAN, with higher average, maximum, and minimum frame rates in most tests.
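For readers unfamiliar with frame-latency testing: the benchmarking tool logs how long each frame takes to render, and both the FPS figures and the latency plots fall out of those timings. A minimal sketch of the idea (the data values are invented for illustration):

```python
# Frame times in milliseconds, as a FRAPS/FCAT-style log would record them.
frame_times_ms = [16.7, 16.9, 33.5, 16.8, 17.0, 45.2, 16.6]

fps_per_frame = [1000.0 / t for t in frame_times_ms]
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = min(fps_per_frame)   # the worst frame -> the "minimum FPS" figure
max_fps = max(fps_per_frame)

# The latency graphs plot frame_times_ms directly; spikes like 45.2 ms
# show up as stutter even when the average FPS looks healthy.
worst_latency_ms = max(frame_times_ms)
```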
RAGE
The OpenGL-based RAGE is a different beast. With AA turned off, the R9 290X puts out overall lower frame-rates and higher frame latency (lower is better). It gets even more inconsistent with AA cranked up to 4x MSAA. Without AA, the frame-latencies of both chips remain under 30 ms, with the GTX TITAN looking more consistent, and lower. At 4x MSAA, the R9 290X is all over the place with frame latency.
TESV: Skyrim
The tester somehow got the game to work at 5760 x 1080. With no AA, both chips put out similar frame-rates, with the GTX TITAN having a higher mean, and the R9 290X spiking more often. In the frame-latency graph, the R9 290X has a bigger skyline than the GTX TITAN, which is not something to be proud of. As an added bonus, the VRAM usage of the game was plotted throughout the test run.

GRID 2
GRID 2 is a surprise package for the R9 290X. The chip puts out significantly, and consistently higher frame-rates than the GTX TITAN at no-AA, and offers lower frame-latencies. Even with MSAA cranked all the way up to 8x, the R9 290X holds out pretty well on the frame-rate front, but not frame-latency.
Crysis 3
This CryEngine 3-based game offers both MSAA and FXAA anti-aliasing, so it was tested only with one of the two enabled. With 4x MSAA, both chips offer similar levels of frame-rates and frame-latencies. With FXAA enabled, the R9 290X offers higher frame-rates on average, and lower latencies.
Battlefield 3
That leaves us with Battlefield 3, which like Crysis 3, supports MSAA and FXAA. At 4x MSAA, the R9 290X offers higher frame-rates on average, and lower frame-latencies. It gets better for AMD's chip with FXAA on both fronts.
Overall, at 800 MHz (core) and 4.50 GHz (memory), it's advantage AMD, looking at these graphs. Then again, we must remind you that this is 5760 x 1080 we're talking about. Many thanks to Grant Kim.

100 Comments on Radeon R9 290X Clock Speeds Surface, Benchmarked

#1
jigar2speed
At 800 MHz (core) and 4.60 GHz (memory) there is a significant advantage to AMD - any heat from NVIDIA, and AMD can increase the speed of the chip, plus the RAM speed can be upped to ridiculous speeds.

Looks like AMD's card partners are going to enjoy their time tweaking this chip.
Posted on Reply
#2
Slacker
I can't wait for this card's NDA to be officially lifted so it gets benched. It looks really promising and may be a great upgrade from my 6970 :)
Posted on Reply
#3
HumanSmoke
The memory is clocked at 1125 MHz (4.20 GHz GDDR5-effective).
Should read 4.50 GHz.
Posted on Reply
#4
boogerlad
I hope this means high overclockability... Core clocks seem low, and memory clocks are low probably because of bus instability due to so many traces.
Posted on Reply
#5
Nordic
So Dynamic underclocking? Not the clear winner over titan I had hoped for.
Posted on Reply
#6
btarunr
Editor & Senior Moderator
james888: So Dynamic underclocking? Not the clear winner over titan I had hoped for.
At $599 (compared to TITAN's $999), I'm sure even you can't complain.
Posted on Reply
#7
Nordic
btarunr: At $599 (compared to TITAN's $999), I'm sure even you can't complain.
Compared to the Titan it is a great deal for some great hardware. That is not my point though. I am not interested in upgrading from my 7970. I still hoped it would do more, for very superficial reasons that involve popcorn. I just want to see AMD beat NVIDIA without question.

I also did not mean dynamic underclocking as a negative. That is just how the article described it, without the term I used. I was wondering if such a term would be correct.
Posted on Reply
#8
Steevo
I am guessing the review sample is an ES spin, so the final card could have higher clocks, as well as increased memory clocks, as AMD has mentioned.


Considering this is almost hand in hand with Titan at 800 MHz, 1 GHz would provide somewhere around 20% more performance, depending on achievable memory speeds.

A 27% increase in SP count, but W1zz's benchmarks of Titan vs. the 7970 show the AMD chip only 12% slower at that resolution, so either the card is crippled and someone got it for other testing, or they have failed to reach their target.

An efficiency improvement over the 7970, with higher core speeds and better memory, would have been a better investment for them if this were true. I doubt this to be true; instead it's most likely a crippled ES card.

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan/22.html



Or it's meant to show the validity of the core architecture at the same speed as Titan, while consuming less power, and possibly having more features.
Posted on Reply
#9
GoldenTiger
btarunr: At $599 (compared to TITAN's $999), I'm sure even you can't complain.
And against a $650 or less GTX 780 like the Superclocked ACX or similar, that are faster than Titans right out of the box? It gets murkier there ;). Personally I've sold my GTX 780 and am looking to spend $500-800 on a card or pair of cards to run faster. Right now it's not looking like this one's going to be anything particularly better, unfortunately.
Posted on Reply
#10
hilpi
Very fishy Tomb Raider benchmark: Min FPS higher than or the same as the AVG FPS...
Posted on Reply
#11
buggalugs
I don't think they are the final clock speeds, and I don't think AMD will stop using dynamic overclocking.
Posted on Reply
#12
1d10t
With the same core clock, fewer TMUs and ROPs, but a higher bus width, AMD can perform on par with or best the Titan. Clearly they mock NVIDIA by pricing this card at only $599. Sure, NVIDIA can release a Titan Ultra anytime, but when it does, AMD just bumps the clocks and renames it R9 290XGE² :laugh:
Steevo: Or it's meant to show the validity of the core architecture at the same speed as Titan, while consuming less power, and possibly having more features.
I second that, mate :)
Posted on Reply
#13
SIGSEGV
1d10t: With the same core clock, fewer TMUs and ROPs, but a higher bus width, AMD can perform on par with or best the Titan. Clearly they mock NVIDIA by pricing this card at only $599. Sure, NVIDIA can release a Titan Ultra anytime, but when it does, AMD just bumps the clocks and renames it R9 290XGE² :laugh:
Ultra enthusiast market vs Enthusiast market..
They (AMD) redefined it well bro... :laugh:


:toast:
Posted on Reply
#14
the54thvoid
Intoxicated Moderator
It's the same old story. NDA is 14 days away, by most accounts. Hopefully they're wrong. Until then all the benchmarks we see are dubious.

I'm thinking the Titan benches are dubious due to the huge latency spikes. It's one of the most consistent cards out there. Unless of course it's not that good at handling triple screen resolutions?

Either way, those clocks look far too low on the core for that same manufacturing process. You'd think AMD would learn from low clock speeds (the original 7970). I call 'meh' until it's official.
Posted on Reply
#15
TRWOV
Maybe drivers aren't there yet? That would explain why the dynamic overclocking isn't working. Steevo's suggestion that this isn't final silicon could be true too.
Posted on Reply
#16
RCoon
All I see is this

These benchmarks mean nothing, and I highly doubt the core or memory clock is correct. Like way off. I certainly don't see these benchmarks as being very open and clear.
Posted on Reply
#17
esrever
At 800 MHz GPU and 4500 MHz VRAM, the GPU does not hit AMD's specs, which claim 4 billion triangles per second and over 300 GB/s of VRAM bandwidth. You need a 1 GHz GPU and 5000 MHz VRAM to hit those specs. This leak is either bullshit or an ES that won't reflect final performance; more likely bullshit.
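For what it's worth, the arithmetic behind that comment checks out if Hawaii's rumored front end has four geometry engines at one triangle per clock (an assumption from the rumored specs, not confirmed):

```python
GEOMETRY_ENGINES = 4   # assumed: rumored Hawaii front end, 1 triangle/clock each
BUS_WIDTH_BITS = 512

def triangles_per_sec(core_mhz):
    return GEOMETRY_ENGINES * core_mhz * 1e6

def mem_bandwidth_gb_s(effective_mhz):
    return effective_mhz * 1e6 * BUS_WIDTH_BITS / 8 / 1e9

print(triangles_per_sec(800))      # 3.2e9 -> short of the claimed 4 billion/s
print(triangles_per_sec(1000))     # 4.0e9 -> needs a 1 GHz core
print(mem_bandwidth_gb_s(4500))    # 288.0 -> short of ">300 GB/s"
print(mem_bandwidth_gb_s(5000))    # 320.0 -> needs 5 GHz effective
```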
Posted on Reply
#18
HumanSmoke
the54thvoid: I'm thinking the Titan benches are dubious due to the huge latency spikes. It's one of the most consistent cards out there. Unless of course it's not that good at handling triple screen resolutions?
Without any horizontal axis (time) in the graphs, there's only so much you can infer, I suppose. As for the Titan at 5760x1080, PCPer and others have tested it, and the graphs look a bit smoother (horizontal axis notwithstanding).
the54thvoid: Either way, those clocks look far too low on the core for that same manufacturing process. You'd think AMD would learn from low clock speeds (the original 7970). I call 'meh' until it's official.
Me too. 800 MHz seems absurdly low for a GPU that is ~30% smaller than a GK110, which is conservatively clocked 10% higher.
Posted on Reply
#19
net2007
I think you guys are missing the point here. If these benchmarks are real, the 290x hangs with a stock titan. It's obvious the clocks are low.
Posted on Reply
#20
sunweb
Holy shader O_O Is this true? Cores at 800 MHz + RAM at 4.5 GHz + no overclocking, and it still stomps Titan in some games while going toe to toe in the others? Damn. I need more benchmarks; if it's true then wow.
Posted on Reply
#21
unholythree
RCoon: All I see is this
emotibot.net/pix/6233.gif
These benchmarks mean nothing, and I highly doubt the core or memory clock is correct. Like way off. I certainly don't see these benchmarks as being very open and clear.
Agreed, I'm pretty close to an AMD fanboy; I want them to succeed badly but hype is a dangerous thing. I know why I'm still running a 1090T.
Posted on Reply
#22
Hayder_Master
It doesn't beat Titan; it's just the same or 2 FPS more, and maybe less in some games.
Posted on Reply
#23
Recus
From slightly faster than GTX 780 to faster than Titan. Seems legit :wtf:
Posted on Reply
#24
Aquinus
Resident Wat-man
I think I'll continue to sit tight so our own W1zzard can give us numbers that we can trust. ;)

Either way, if the benchmarks are legit, it's a great price point for the kind of performance it will put out. I can't see what people are complaining about, except that it's a benchmark before the end of the NDA.
Posted on Reply
#25
HumanSmoke
Recus: From slightly faster than GTX 780 to faster than Titan. Seems legit :wtf:
Ah, but it's only the Titan Paradox Edition... it's a special one-off card that has an average framerate 6 frames per second lower than its minimum framerate. No doubt when you overclock it, the card travels backwards in time and becomes its own inventor :eek:
Posted on Reply