
Radeon R9 290X Could Strike the $599.99 Price-point

The Titan can barely survive on its own VRMs at stock, let alone when trying to suck 10-15% more performance out of it on air for the masses. If I were NVIDIA or a board partner, I wouldn't release a Titan with much more than it already has, for fear of inordinately high RMA rates dramatically cutting into profits.
The other alternative is to re-release the Titan with the two missing power phases populated, as some vendors have done with the reference PCB for their own GTX 780s.
[Image: Gigabyte GTX 780 WindForce x3 OC VRM]


Reducing the VRAM from 6GB to 3GB, and neutering the FP64 capability, could serve as a differentiator between the two models.

There are a lot of permutations possible. It likely depends on AMD's final pricing of the 290X and 290, how long it would take to put into action, and whether Nvidia sees the effort as viable versus the lifespan of the cards and their sales potential. Allowing AIBs to raise voltage limits on cards built to handle the increased power (the 8+2, 12+2, and 16+2 configs) helps Nvidia and the AIBs, but there still needs to be a reference card if Nvidia wants widespread and continuing review-site benchmark PR.
 
It's quite mind-boggling. I asked myself the same question. AMD said they were unable to clock the bus at 6GHz stable. Is it that much of a problem to hire an engineer who can design a stable bus? I don't care if it's super wide or just standard wide, but if Nvidia can make theirs stable, AMD shouldn't lag behind as they do with everything else. Let's call Tron to make a little inspection and report on the AMD-created environment for bits, bytes, and little helpless shaders.
Why do you say it's some problem with engineering or the ability to run stable? Who's your source... yourself?

It conceivably has more to do with the memory controllers within the chip and how each side implemented them. AMD may figure the cost of a wide bus offsets the cost of memory fast enough to hit that spec. Or perhaps the supplier volume wasn't there for both Nvidia and AMD (Nvidia got there first, and AMD realized they were going to need boatloads), and it was smarter to go wider and offer 4GB with more bandwidth.
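
For what it's worth, the wide-but-slow versus narrow-but-fast trade-off is simple arithmetic; a quick Python sketch (the 290X bus width and data rate here are the rumored figures, not confirmed specs):

# Peak theoretical bandwidth (GB/s) = bus width in bits / 8 * data rate in Gbps.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(384, 6.0))  # GTX Titan: 384-bit @ 6 Gbps    -> 288.0 GB/s
print(bandwidth_gbs(512, 5.0))  # rumored 290X: 512-bit @ 5 Gbps -> 320.0 GB/s
print(bandwidth_gbs(384, 7.0))  # GK110 at 7 Gbps (hypothetical) -> 336.0 GB/s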

There are a lot of permutations possible.
But can they do those quickly and then sell it for less...?
 
The Titan can barely survive on its own VRMs at stock, let alone when trying to suck 10-15% more performance out of it on air for the masses

That's pretty untrue, mate. We've had good dialogue over how crippled Titan is, but it's not "barely" surviving at stock; that's just BS.

In your favour though, adequate cooling is required on the card to maintain clocks and keep the VRMs cool. Titans need better coolers (ACX, etc.) for consistently high clocks. I do forget sometimes that my card is under water. I figure if you buy a Titan, you buy water cooling too :laugh:

Anyway, isn't this about the fabled, mystical R9 290X card? I really would like to see the bare PCB and see what AMD have built. No point having an awesome new chip and deviously good APIs coming if the reference card is a piece of crap. Let's have some robust, solid chokes and voltage circuitry that can take a beating.
And not that bloody blower fan......
 
And not that bloody blower fan
The Titan and 780 reference coolers were fine with the blower in terms of noise (you can't truly equate that to airflow, though), and they sounded good. Hopefully AMD is using them as the benchmark to better.
 
But can they do those quickly and then sell it for less...?
Are you asking a question or just paraphrasing what I wrote?
There are a lot of permutations possible. It likely depends on AMD's final pricing of the 290X and 290, how long it would take to put into action, and whether Nvidia sees the effort as viable versus the lifespan of the cards and their sales potential.
 
Reducing the VRAM from 6GB to 3GB, and neutering the FP64 capability, could serve as a differentiator between the two models.

Umm, didn't that already happen? (GTX 780)

I really like 6GB of VRAM.
I really like the presence of FP64 capability.
I wouldn't be interested in nerfed hardware. I never was.
We need to move forward, not backward.
 
Umm, didn't that already happen? (GTX 780)
Not really the same thing on the spec sheet, even if the practical realities are somewhat closer. And I think the possible part we're discussing is aimed more at marketing bullet points than at any performance part missing from the inventory.
GTX 780 = 2304 shaders, 3GB GDDR5, 1:24 FP64
Titan = 2688 shaders, 6GB GDDR5, 1:3 FP64

Other possible combinations are therefore
2688 shaders, 3GB GDDR5, 1:3 FP64 and 2688 shaders, 6GB GDDR5, 1:24 FP64
You could also add 7Gbps effective memory if GK110's memory controller could be QA'ed for that speed. Running out of spec on OC'ed cards is generally a whole lot different from reference validation.
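
To illustrate how many spec-sheet SKUs those three knobs allow, a throwaway Python sketch (the combinations beyond the two shipping cards are hypothetical):

from itertools import product

shader_counts = (2304, 2688)   # GTX 780 / Titan
vram_gb = (3, 6)
fp64_rates = ("1:24", "1:3")

for shaders, vram, fp64 in product(shader_counts, vram_gb, fp64_rates):
    print(f"{shaders} shaders, {vram}GB GDDR5, {fp64} FP64")
# Eight permutations; the GTX 780 and Titan occupy only two of them.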
I really like 6GB of VRAM.
So buy the 6GB version. Just as you like 6GB, isn't it conceivable that someone else would be happy to sacrifice 3GB of it for a 30-40% reduction in price?
I really like the presence of FP64 capability.
Same argument. See above.
I wouldn't be interested in nerfed hardware. I never was.
Titan: 2688 shaders.... K6000: 2880 shaders. Titan is a salvage part... so you're lusting over a nerfed part already.
We need to move forward, not backward.
Tell that to Jen-Hsun and Rory, and provide an alternative income stream for them to recoup their lost ROI.
It's a nice idea... but basically an idealized scenario totally divorced from reality.
HD 7870XT (Tahiti LE) 75% enabled die (shaders). Introduced 5 months after the fully enabled part.
GTX 660Ti (GK104-300) 88% enabled die (shaders). Introduced 5 months after the fully enabled part.
HD 6930 (Cayman CE) 83% enabled die (shaders). Introduced 12 months after the fully enabled part.
GTX 560Ti 448SP (GF110-270) 88% enabled die (shaders). Introduced 13 months after the fully enabled part.
HD 5830 (Cypress LE) 70% enabled die (shaders). Introduced 5 months after the fully enabled part.
 
I'm lusting for a fully operational GK110 with Samsung memory, a custom PCB, and a clock of at least 1GHz, for $300 like in the days of 3dfx. That's all, I'm humble!
 
Saw a leaked pre-order page with a price showing ~$735 for the BF4 edition of this card.
http://www.overclock.net/t/1429858/taobao-asus-radeon-r9-290x-bundled-with-bf4-735
Probably not very credible, but still something to look at.

The price might not mean a great deal in itself, and maybe not even the comparison with other cards, if there's an "early adopter tax" applied, which is likely.

According to that site the R9 290X is $839.83 but, as a comparison, the Gigabyte GTX 780 WF3 OC is $814 and the Asus DC2OC is $821.

If nothing else, it says that the etailer isn't one that will lure customers from Newegg!
 
Apparently the NDA lifts on October 2. That seems reasonable, since pre-orders should start on the 3rd.
 
The price might not mean a great deal in itself, and maybe not even the comparison with other cards, if there's an "early adopter tax" applied, which is likely.

According to that site the R9 290X is $839.83 but, as a comparison, the Gigabyte GTX 780 WF3 OC is $814 and the Asus DC2OC is $821.

If nothing else, it says that the etailer isn't one that will lure customers from Newegg!

Retailers are always going to add an early adopter tax if supply is a bit light versus demand. It's already shaking up the price structure even now... joy :D
 
I'm thinking this will be about the same in performance as a 7990, as the price on the 7990 has dropped to about the same as what the 290X will be sold for.
 
I'm thinking this will be about the same in performance as a 7990, as the price on the 7990 has dropped to about the same as what the 290X will be sold for.

Not a chance. I wish it was though. I'd def buy one.
 
Hmmm, 44 ROPs instead of 48? If that isn't a mistake in the news post, I smell a cut-down GPU and this 290X isn't fully enabled. 44 ROPs is a weird number.

Most cards have 8, 16, 24, 32, 40, or 48: usually a multiple of 8.

44 doesn't really fit in with that.
 
Hmmm, 44 ROPs instead of 48? If that isn't a mistake in the news post, I smell a cut-down GPU and this 290X isn't fully enabled. 44 ROPs is a weird number.

Most cards have 8, 16, 24, 32, 40, or 48: usually a multiple of 8.

If AMD kept one ROP per CU it would make sense. Tahiti has 32 ROPs and 32 CUs; Hawaii, with a rumored 44 CUs (2816 shaders at 64 per CU), would then get 44 ROPs.
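
A quick back-of-the-envelope check of that ratio in Python (the Hawaii shader count is still a rumor at this point):

# GCN packs 64 stream processors into each CU, so the CU count
# follows from the shader count. Hawaii's 2816 shaders are rumored.
def cu_count(shaders, sp_per_cu=64):
    return shaders // sp_per_cu

print(cu_count(2048))  # Tahiti: 32 CUs, shipping 32 ROPs -> 1 ROP per CU
print(cu_count(2816))  # Hawaii: 44 CUs -> 44 ROPs at the same 1:1 ratio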
 
CUs and ROPs are completely decoupled nowadays, so you could have 1 ROP and 50 CUs (which would make no sense, of course).

Also, ROPs and CUs are not the same thing: ROPs are the render output units that blend and write finished pixels, while CUs are the blocks that contain the shaders.
 
CUs and ROPs are completely decoupled nowadays, so you could have 1 ROP and 50 CUs (which would make no sense, of course).

Also, ROPs and CUs are not the same thing: ROPs are the render output units that blend and write finished pixels, while CUs are the blocks that contain the shaders.

I wasn't implying any direct link between the ROPs and CUs, just that AMD might have wanted to keep the same ratio as Tahiti.
 
This thing is gonna crush Nvidia's offerings. Glad to see AMD (or what's left of ATI) still putting out some nice hardware, even if the CPU offerings (don't kill me here, guys) aren't on par.
 
This thing is gonna crush Nvidia's offerings. Glad to see AMD (or what's left of ATI) still putting out some nice hardware, even if the CPU offerings (don't kill me here, guys) aren't on par.

I think the biggest unknown of this launch at this point is not what the R9 290X is or how it performs, but how Nvidia chooses to respond. That is what I am most curious about.
 
OK, bta, I get that you now know something you didn't earlier.

A 384-bit controller was mentioned here, and you were kinda angry about how AMD could not explain this at the conference. Now I see a bunch of comments got deleted and it mentions 512-bit. ?
 
OK, bta, I get that you now know something you didn't earlier.

A 384-bit controller was mentioned here, and you were kinda angry about how AMD could not explain this at the conference. Now I see a bunch of comments got deleted and it mentions 512-bit. ?

I have my doubts that AMD even knows how many bits the memory bus is.
 
Straight from the side of my neck...... but I bet it's still true.

This thing is gonna crush nvidia's offerings. Glad to see AMD (or whats left of ATI) still putting out some nice hardware even if the CPU offerings (don't kill me here guys) aren't on par.

.....Uh.... no.... It's been ages since we've seen any true crushing..... except of our hopes and dreams about new hardware's performance.

I expect 80% of Nvidia's performance for a lesser price, or 5% better for the same price.
 
.....Uh.... no.... It's been ages since we've seen any true crushing..... except of our hopes and dreams about new hardware's performance.

I expect 80% of Nvidia's performance for a lesser price, or 5% better for the same price.

We already have something with about 80% of the performance of a GTX Titan: it's called the 7970 GHz Edition, and it costs a third of the price. The 290X is essentially 1.375 7970s, so it should be as fast as or faster than Titan.
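
Spelling that estimate out in Python (it assumes performance scales linearly with shader count, which it never quite does):

# Naive scaling estimate from the figures above; the Hawaii shader
# count is rumored, and linear scaling with shaders is optimistic.
hd7970_ghz_vs_titan = 0.80          # ~80% of Titan, per the post above
shader_scale = 2816 / 2048          # rumored Hawaii vs Tahiti = 1.375
print(f"~{hd7970_ghz_vs_titan * shader_scale:.2f}x Titan")  # ~1.10x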
 
If that's the case, then the 290X would have the same issue as Tahiti, where the ROP count relative to the shader count makes no sense:

640 shaders, 16 ROPs = 40 shaders per ROP
1280 shaders, 32 ROPs = 40 shaders per ROP
2048 shaders, 32 ROPs = 64 shaders per ROP
2816 shaders, 44 ROPs = 64 shaders per ROP

So in terms of shader-to-ROP efficiency and its relation to performance, the new GPU will likely hit the same wall as Tahiti does.

At 2816 shaders and 48 ROPs it drops to 58.7 shaders per ROP, still not quite where it needs to be.
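
Those ratios are just shader count divided by ROP count; a throwaway Python sketch checking them (the 290X figures are rumors, and the Cape Verde/Pitcairn codenames are added here for reference):

# Shader-to-ROP ratios for the configs discussed above.
configs = {
    "HD 7770 (Cape Verde)":     (640, 16),
    "HD 7870 (Pitcairn)":       (1280, 32),
    "HD 7970 (Tahiti)":         (2048, 32),
    "R9 290X (rumored)":        (2816, 44),
    "290X with 48 ROPs (hypo)": (2816, 48),
}
for name, (shaders, rops) in configs.items():
    print(f"{name}: {shaders / rops:.1f} shaders per ROP")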

The way AMD usually designed a GPU was to start in the middle, then scale up and scale down.

So the 7870 was the starting point: 1280 shaders, 80 TMUs, 32 ROPs.
Half a 7870 gave the 7770: 640 / 40 / 16.
Scaling up would have given 1920 / 120 / 48; what we got was 2048 / 128 / 32.

When it comes to GPUs there are of course diminishing returns; however, a balanced design tends to be better overall.

Just look at the 7770 to 7870 to 7970:
128-bit > 256-bit > 384-bit
1GB > 2GB > 3GB
640 > 1280 > 2048
16 ROPs > 32 ROPs > 32 ROPs
40 TMUs > 80 TMUs > 128 TMUs

AMD's approach in the past would have been:
128-bit > 256-bit > 384-bit > 512-bit
1GB > 2GB > 3GB > 4GB
640 > 1280 > 1920 > 2560
16 ROPs > 32 ROPs > 48 ROPs > 64 ROPs (even cut back by 8, 56 ROPs with 2560 shaders is still only 45.7 shaders per ROP, and it would allow for a GPU with 3200 shaders, 200 TMUs, and 64 ROPs = 50 shaders per ROP)
40 TMUs > 80 TMUs > 120 TMUs > 160 TMUs

You can see where things don't quite make sense. Granted, wafer size, die size, and getting perfectly working chips all come into play, but you get the idea in terms of AMD's own designs and efficiencies.

Increasing the shader count without a proportional ROP count tends to result in issues.

The 5850 vs 5870 comes to mind: back then, with 1440 shaders vs 1600 shaders at the same clock speeds, the performance difference was about 2% due to the ROP limitation.

Normally with a die shrink each ROP can do more work, so it's been alright, but since we are still stuck at 28nm I would rather have seen 48 ROPs for a better shader-to-ROP ratio. I am also rambling like mad and don't give a fuck. The long story short seems to be that 64 shaders per ROP is not nearly as efficient as 40 shaders per ROP.
 