Thursday, October 3rd 2013

Radeon R9 290X Priced at $729.99 on Newegg.com

US retailer Newegg.com leaked pricing of its reference-design Radeon R9 290X graphics cards. An MSI-branded reference-design card got its own store page with its pricing redacted, but Newegg did a sloppy job of it. On close inspection of the page's HTML code, we discovered that the price intended for that page is US $729.99 (excl. taxes). Given that Newegg.com tends to add $10 to $30 to MSRP, the R9 290X's MSRP was likely set at $699 (excl. taxes).
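For the curious, here is a minimal sketch of how a "redacted" price can still be recovered from a page's source, assuming the value was left behind in a hidden element; the markup and field names below are invented for illustration, and the real Newegg page may differ:

import re

# Invented markup for illustration only; not the actual Newegg page.
html = """
<div class="product-price">
  <span class="price-display">See price in cart</span>
  <!-- hidden field left behind by the page template -->
  <input type="hidden" id="rawPrice" value="729.99">
</div>
"""

# Pull any hidden value that looks like a price out of the raw HTML.
match = re.search(r'type="hidden"[^>]*value="(\d+\.\d{2})"', html)
if match:
    print(f"Leaked price: ${match.group(1)}")  # -> Leaked price: $729.99

The point is simply that hiding a value in the rendered page does not remove it from the markup the server sends.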

126 Comments on Radeon R9 290X Priced at $729.99 on Newegg.com

#101
Casecutter
HalfAHertzI think that anything over 650$ for the retail card is just greed on AMD's side unless the 290x really stands out vs the 780.
It's greed on either side, and "greed is good" for the shareholders. Get too greedy, though, and folks can simply not purchase, which is bad for the shareholders. That's how it works; I'm not saying it's right, but it's the mentality of companies beholden to shareholders. Run the price up as high as your marketing feels it will bear for the volume you have to sell.
Posted on Reply
#102
BiggieShady
LemmingOverlordFunny. Peeps were under the impression it would cost $599, not $729.
So they are actually releasing a Hawaii GPU for $599, just not Hawaii Pro :laugh: The R9 290X will probably come with a premium at $699
Posted on Reply
#103
andresgriego
CasecutterIt's greed on either side, and "greed is good" for the shareholders. Get too greedy, though, and folks can simply not purchase, which is bad for the shareholders. That's how it works; I'm not saying it's right, but it's the mentality of companies beholden to shareholders. Run the price up as high as your marketing feels it will bear for the volume you have to sell.
At this price point, one sale = three sales with a healthy profit. You could argue that there are more 'parts' that make up a higher-end card, but those are cheap, and so is 'better' silicon. Add on some $20 bling and you're sustainable.
Posted on Reply
#104
HumanSmoke
andresgriegoAt this price point, one sale = three sales with a healthy profit. You could argue that there are more 'parts' that make up a higher-end card, but those are cheap, and so is 'better' silicon. Add on some $20 bling and you're sustainable.
True enough, to an extent. The markup on higher-priced cards is proportionally higher than on 2-3 lower-tier cards.
2-3 lower-tier cards might sell for substantially less than one high-priced card, but their component fit-out - 2-3 x PCBs, I/O, GDDR5 chips, power delivery and logic components, cooling, packaging and shipping - will still be proportionally higher, and if the larger die yields well, you also likely have a minimal overall fabrication cost penalty, if any.
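As a back-of-the-envelope illustration of that point, here is a toy margin comparison; every figure below is invented for illustration and is not an actual AMD or board-partner cost:

# Hypothetical bill-of-materials comparison: one high-end card versus
# three lower-tier cards at the same combined revenue. All numbers invented.

def margin(price, die, board_kit):
    # board_kit covers PCB, I/O, GDDR5, VRM/logic, cooling, packaging, shipping.
    cost = die + board_kit
    return price - cost, (price - cost) / price

high_end = margin(price=699.0, die=120.0, board_kit=140.0)  # one big-die card
low_tier = margin(price=233.0, die=60.0, board_kit=110.0)   # one of three cheaper cards

print(f"1x high end : ${high_end[0]:.0f} profit ({high_end[1]:.0%})")
print(f"3x low tier : ${3 * low_tier[0]:.0f} profit ({low_tier[1]:.0%} each)")

With these made-up numbers the single card clears more profit than the three cheaper ones combined, because the three cheaper cards pay for three full component fit-outs while the bigger die adds comparatively little cost.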
Posted on Reply
#105
Nirutbs
Way too expensive :banghead:
Posted on Reply
#106
NeoXF
To AMD: Get Mantle into UE3.5, UE4, Unity3D, Source (1 & "2") and maybe whatever Ubisoft's using, and you've got the performance lock-down.

Frostbite 3 & whatever Eidos is using are a great start.


And here's a little joke: if I've got a quad @ 4GHz, 8GB of RAM & an R7 260X GPU... will I be able to run any XBO-era game up to its EOL @ 900p or such? :p
Posted on Reply
#107
GSquadron
This is fake news
It says $9999.99 now, and anyone can modify that price

What I found out, though, is that if you type R9 290X into Amazon, you get a wonderful Sapphire Radeon HD 7950 3GB DDR5
Posted on Reply
#108
EpicShweetness
HalfAHertzI think that anything over 650$ for the retail card is just greed on AMD's side unless the 290x really stands out vs the 780.
Especially when you think about the fact that the R9 280X (7970) can be had for $300. Two of those cards would surely and handily beat the R9 290X/780/TITAN. At $500 the single-card solution makes sense, and at $600 you're just paying for the simplicity of one card, but at $700 :wtf:.
Posted on Reply
#109
AsRock
TPU addict
AleksanderThis is fake news
It says $9999.99 now, and anyone can modify that price

What I found out, though, is that if you type R9 290X into Amazon, you get a wonderful Sapphire Radeon HD 7950 3GB DDR5
LOL, of course it is. I just found it funny to find it in the content of their website at the time.
Posted on Reply
#110
eidairaman1
The Exiled Airman
Youch, that's too hefty for my tastes; 2x 280s or 270s will suffice
Posted on Reply
#111
iKhan
I think I'll stick with my 7870 XT for a while. I'll probably pick up a 280X or two at some point, but not any time soon.

Any word of a GTX 770 price cut?
Posted on Reply
#113
NeoXF
^ Looks like about the same graphics score a Titan gets... Not that I'm fully convinced of the validity of it or anything.
Posted on Reply
#114
15th Warlock
NeoXF^ Looks like about the same graphics score a Titan gets... Not that I'm fully convinced of the validity of it or anything.
Not quite; it's short by about 700 points in the graphics department. But you have to consider that these are probably beta drivers they're using for that run; you can even see the discrepancies in frame pacing compared to Titan, as the lines are much flatter when you compare the two. Chances are the driver used for that run is not optimized for the 290X; that is, like you said, if this is a real benchmark.
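For anyone unfamiliar with reading those graphs, here is a rough sketch of what "flatter lines" means in frame-pacing terms; the frame times below are invented for illustration:

from statistics import mean, pstdev

def pacing_report(frame_times_ms):
    # A flat frame-time line means low deviation around the average.
    return f"avg {mean(frame_times_ms):.1f} ms, jitter {pstdev(frame_times_ms):.2f} ms"

well_paced = [16.6, 16.8, 16.5, 16.7, 16.6]   # invented: flat line
uneven     = [14.0, 22.0, 13.5, 21.0, 15.5]   # invented: spiky line

print("flat :", pacing_report(well_paced))
print("spiky:", pacing_report(uneven))

Both runs average roughly the same frame time, but the spiky run's jitter is vastly higher, which is what an unoptimized driver tends to show on a frame-pacing plot.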

This is what a single Titan scores:

Posted on Reply
#115
jihadjoe
Not at all surprised at the pricing.
AMD isn't here to run a charity.
Posted on Reply
#116
Ja.KooLit
Hope AMD reduces the price soon. A lot of people must be disappointed at the launch price
Posted on Reply
#117
NeoXF
15th WarlockNot quite; it's short by about 700 points in the graphics department. But you have to consider that these are probably beta drivers they're using for that run; you can even see the discrepancies in frame pacing compared to Titan, as the lines are much flatter when you compare the two. Chances are the driver used for that run is not optimized for the 290X; that is, like you said, if this is a real benchmark.

This is what a single Titan scores:

img.techpowerup.org/131006/Singletitan.jpg
Um, no. Well, maybe your WC/OC'd ones. :D
But we're talking about stock here... well, w/ each brand's implementation of turbo/self-OC I guess...




Either way, I expect AT LEAST a little improvement w/ newer drivers, maybe not how the R7000s did from early 2012 till now, but still...
Posted on Reply
#118
the54thvoid
Intoxicated Moderator
NeoXFUm, no. Well, maybe your WC/OC'd ones. :D
But we're talking about stock here... well, w/ each brand's implementation of turbo/self-OC I guess...

Either way, I expect AT LEAST a little improvement w/ newer drivers, maybe not how the R7000s did from early 2012 till now, but still...
GTX Titan at 993 core, memory at stock, 1.15 volts (custom BIOS with boost eliminated; this is about the standard stock boost speed, with the fan on higher settings to keep thermals down). Almost every Titan goes to 993 as its default top boost and stays there. Reviewers that don't touch fan profiles will eventually get lower scores.

Titan at stock (boost) is only 9% faster than the reported 290X run.

Note GPU-Z reported 1006MHz but AB reported 993MHz.



And this is overclocked... :laugh:

25% faster than stock. :p



I should point out that in the initial leaks of 290X performance, it was noted the AMD card lost in every benchmark, even though it won in most of the games. It might not be very well optimised for benchmarks at this stage, so I wouldn't be concerned with them at all. After all, benchmarks are just for fun, and you don't buy a gfx card for that (well, most folk don't). It's the games performance that counts, and I think the 290X will work out very well.
Posted on Reply
#120
HumanSmoke
XzibitI see two different Titans, base clock difference.
I see comprehension fail...of the heavily overclocked variety.

the54thvoid has already explained that his Titan is running at 993MHz (AB reported).

A stock Titan, while nominally 834MHz base and 876MHz boost, will in stock trim and with adequate cooling actually run at pretty much its highest speed bin in 3DMark Firestrike.

The highest speed bin for a stock Titan without voltage modification is in fact 992MHz.

As you can plainly see from the reported clock speeds on this page, a stock Titan will indeed peg the maximum boost state in games, let alone in an intensive graphics benchmark. Since Firestrike pushes GPU usage close to 100%, it soon becomes apparent that the Titan will sit at the highest frequency it can unless thermally limited.
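To put the bin arithmetic in one place, here's a small sketch; the 13MHz bin step and 837MHz base used are assumptions based on commonly reported GK110 boost behaviour rather than official spec, and readings like 992 vs 993 in this thread differ only by rounding:

# Hypothetical model of Kepler's discrete boost bins (13 MHz steps assumed).
BASE_MHZ = 837
BIN_STEP_MHZ = 13

def boost_bins(base=BASE_MHZ, step=BIN_STEP_MHZ, count=14):
    # The ladder of clock states the card can settle into under boost.
    return [base + i * step for i in range(count)]

bins = boost_bins()
print(bins)  # ..., 967, 980, 993, 1006

# 993 (AB) and 1006 (GPU-Z) sit exactly one bin apart, which would explain
# the reporting discrepancy noted above.
print(993 in bins, 1006 in bins, 1006 - 993 == BIN_STEP_MHZ)  # True True True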
Posted on Reply
#121
15th Warlock
NeoXFUm, no. Well, maybe your WC/OC'd ones. :D
But we're talking about stock here... well, w/ each brand's implementation of turbo/self-OC I guess...

hwbot.org/image/931600.jpg
legitreviews.com/images/reviews/2144/firestrike-extreme.png

Either way, I expect AT LEAST a little improvement w/ newer drivers, maybe not how the R7000s did from early 2012 till now, but still...
Yes, I must admit this is the SC model, which runs a little faster out of the box than the base model :) I didn't OC the card for that run per se, but my WC does provide higher thermal headroom for more turbo boost. However, the CPU was running at 4.3GHz compared to that 2600 at 5GHz, and the physics score was actually lower than that supposed 290X run. It's probably not a perfect comparison, but it should give you a ballpark figure for reference :)

But like you said, chances are that run, if real, was made using early drivers; just check the frame pacing graph, it's all over the place compared to more recent AMD drivers.

Thing is, if AMD has its way, that probably won't even matter, as developers will most likely choose to support Mantle if the architecture is shared by all the next-gen consoles and AMD PC cards. Like I said, it could represent a paradigm shift in PC video performance, so people should look at this card as a future-proof investment. In my case I already have a rig waiting for these cards once they're released, but at $729 a pop for this limited edition, I'd rather wait for the vanilla version; I don't really care about BF4 Premium. If AMD releases the non-limited edition of this card at $599, it'll have a winner on its hands :)
Posted on Reply
#122
HumanSmoke
15th WarlockThing is, if AMD has its way, that probably won't even matter, as developers will most likely choose to support Mantle if the architecture is shared by all the next-gen consoles and AMD PC cards...
Game developers won't "choose" Mantle; they will code for it if AMD throw money their way, in the same way that game developers have always added features.
Not so sure about the common-API reasoning either. Sony have their own console APIs (two plus a wrapper, I believe, neither of which is Mantle), and I don't see Microsoft ditching D3D any time soon. Mantle is pretty much PC- and architecture-specific (and game-engine-specific at this stage), and once the hyperbole is stripped away it looks like a welcome addition to PC gaming, but not the second coming of Christ. The fact that Steam looks linked to Nvidia and Intel won't help propagate Mantle in its current guise.
AMD claims that they can do 9x the draw calls with Mantle... Before everyone gets too excited, we will not see a 9x improvement in overall performance in every application. A single HD 7790 running Mantle is not going to power 3 x 1080p monitors in Eyefinity faster than an HD 7970 or GTX 780 (in Surround) running DirectX. Mantle shifts the bottleneck elsewhere.

What kind of improvements can we expect? Performance will increase when things are not shader bound. We could probably see an improvement in geometry handling (higher-polygon scenes). Other than that, though, do not expect miracles. In applications that require multiple shading passes, complex lighting, tessellation and deformation, we might see improvements in the single-digit percentages, perhaps as high as the low teens.
[Source]
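That caveat is easy to see with a toy frame-time model; all the numbers here are invented, and this only illustrates the "bottleneck shifts elsewhere" point rather than any real Mantle measurement:

def frame_time_ms(draw_calls, cpu_us_per_call, gpu_shading_ms):
    # The frame is bounded by whichever side finishes last.
    cpu_ms = draw_calls * cpu_us_per_call / 1000.0
    return max(cpu_ms, gpu_shading_ms)

CALLS = 7000        # invented per-frame draw-call count
GPU_MS = 12.0       # invented shader-bound GPU cost per frame

dx_like     = frame_time_ms(CALLS, cpu_us_per_call=3.0, gpu_shading_ms=GPU_MS)
mantle_like = frame_time_ms(CALLS, cpu_us_per_call=3.0 / 9, gpu_shading_ms=GPU_MS)

print(f"high-overhead API: {1000 / dx_like:.0f} fps")    # ~48 fps (CPU bound at 21 ms)
print(f"low-overhead API : {1000 / mantle_like:.0f} fps")  # ~83 fps (GPU bound at 12 ms)

Cutting per-call overhead 9x helps a lot while the CPU is the bottleneck, but once the frame is shader bound, the GPU's 12 ms floor caps any further gain, which is exactly the single-digit to low-teens scenario the quoted source describes.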
Posted on Reply
#123
15th Warlock
HumanSmokeGame developers won't "choose" Mantle; they will code for it if AMD throw money their way, in the same way that game developers have always added features.
Not so sure about the common-API reasoning either. Sony have their own console APIs (two plus a wrapper, I believe, neither of which is Mantle), and I don't see Microsoft ditching D3D any time soon. Mantle is pretty much PC- and architecture-specific (and game-engine-specific at this stage), and once the hyperbole is stripped away it looks like a welcome addition to PC gaming, but not the second coming of Christ. The fact that Steam looks linked to Nvidia and Intel won't help propagate Mantle in its current guise.

[Source]
I don't think anyone in their right mind is expecting a 9x performance leap (particularly on older software and cards) out of this Mantle initiative. However, I think it's reasonable to expect a few developers to jump ship to AMD if they play their cards just right.

There's a precedent: remember back in 2003, when some HL2 benchmarks leaked running much faster on ATI hardware than on Nvidia's cards? When the game was finally released, the Source engine made full use of the R300's potential, and even mid-level cards like the Radeon 9500 would outperform the green team's flagship at the time (the FX 5800) by quite a measurable margin.

Sure, the particulars of that debacle were very different from what we have today, but the truth is that AMD is in an enviable position ahead of the release of these new cards. As the sole provider for both major next-gen consoles, they'll use that leverage to help developers get their rendering engines much "closer to the metal" on the gaming platforms that share this architecture, and to reduce costs by making it much easier to port games between the Xbone, PS4 and PC without loss of performance in the process.

Sure, Mantle is a PC initiative, and it may be in MS's and Sony's interest not to provide the tools needed to make porting that easy, but developers working on GCN-based GPUs will find it much easier to program for these particular hardware targets and to make such optimizations available when releasing a game on multiple platforms.

Look, I might be playing devil's advocate here, but the potential is there, and AMD has made everyone fully aware on multiple occasions that they intend to take advantage of their position, so it's not a stretch to think we may see history repeat itself, like the glory days of the 9700 Pro.
Posted on Reply
#124
HumanSmoke
15th WarlockLook, I might be playing devil's advocate here, but the potential is there, and AMD has made everyone fully aware on multiple occasions that they intend to take advantage of their position, so it's not a stretch to think we may see history repeat itself, like the glory days of the 9700 Pro.
Well, ideally you'd hope that the Mantle initiative gives MS some cause for thought about overhauling DirectX, but knowing how agile MS are, I wouldn't hold my breath.
A lot will depend on how committed (cash-wise) AMD are to making it work. Activision would seem to be a natural bandwagon jumper, so I could see them adopting it, but I can't really see much benefit in easing the workload of an engine that produces stratospheric framerates in any case.
Ideally the API needs to raise in-game image quality substantially over DX/OGL to make the software anything other than a bullet point... if it doesn't, then it becomes just another feature like TressFX or PhysX, and of course if it does, then we end up coming full circle from the advent of the gaming GPU, because dollars to donuts the other two vendors will follow suit... the good old, bad old days of Glide, Matrox Simple Interface, RRedline, SGL etc.
Posted on Reply
#125
BiggieShady
HumanSmokeWell, ideally you'd hope that the Mantle initiative gives MS some cause for thought about overhauling DirectX, but knowing how agile MS are, I wouldn't hold my breath.
It was good for them that the lowly Xbox had some kind of advantage... and now, potentially, the very cheapest Steam Box with an AMD APU will be able to run all games built on Mantle-enabled engines as well as the Xbone does.
A lot will depend on how committed (cash-wise) AMD are to making it work. Activision would seem to be a natural bandwagon jumper, so I could see them adopting it, but I can't really see much benefit in easing the workload of an engine that produces stratospheric framerates in any case.
Not so much Activision; they traditionally don't make games that need a massive number of draw calls each frame... DICE, on the other hand, with their huge visibility distances, already hit 7k draw calls in BF3 while developing Frostbite 2. CryEngine is often used for games with huge draw distances, so Crytek could profit from implementing a Mantle renderer.
Ideally the API needs to raise in-game image quality substantially over DX/OGL to make the software anything other than a bullet point... if it doesn't, then it becomes just another feature like TressFX or PhysX,
The thing is, with an increased maximum number of draw calls per frame, you should get many more dynamic (interactive) objects on screen, so it's about scene/game complexity, not directly about image quality.
and of course if it does, then we end up coming full circle from the advent of the gaming GPU, because dollars to donuts the other two vendors will follow suit... the good old, bad old days of Glide, Matrox Simple Interface, RRedline, SGL etc.
Of course, unless history repeats itself ... and we all know that never happened before :rolleyes:
Posted on Reply