
Radeon HD 7900 Series to Use XDR2 Memory?

I'm more excited about the architecture than the actual memory... memory traditionally doesn't have as big an impact on performance as the GPU specs do.
 
Yes! More copper please.

Since the price of copper went up, are you happy paying more if they were to put more copper on?
 

That's a bit of a problem here, actually. We have a lot of coins still using copper, but they're rarely in circulation (i.e. not being used, just being kept somewhere). So the coins still have the same value as currency, but they've slightly increased in value as a commodity. Yet they're not being used, simply because they're coins. :laugh: And IIRC, making those coins is now more expensive than their equivalent value as money. LOL
 

Melt the copper down into blocks and sell it to scrap metal merchants!!! :rockout:
 
Make it with a 512-bit bus and let me forget about upgrading the GPU for 3 years.

If memory performance was enough then people would still be using 2900xt cards.
 
AMD's next gen is shaping up to be something pretty damn beefy indeed. The CPUs might be a little while behind Intel's, but their GPUs are doing a damn good job of going blow for blow with Nvidia; in fact, this next round is anybody's guess.
 
The drop in power consumption and heat output is pretty darn impressive. I wasn't expecting that big a leap while also increasing performance dramatically, compared to the 6xxx vs. 5xxx series...
:respect::respect:
 
You all must not understand the charts. Data rate per pin is the win here. Why waste real estate on a huge bus when using better memory will allow for a smaller controller footprint, and thus reduce latencies, improve throughput, and give fewer chances of die flaws in critical areas per chip...


Good design wins out every time. It might not be the funny car that goes the fastest, but it also doesn't cost as much or take as much maintenance.
 
Why waste real estate on a huge bus when using better memory will allow for a smaller controller footprint, and thus reduce latencies, improve throughput, and give fewer chances of die flaws in critical areas per chip...

Except that's not completely true (or true at all). The memory controller in Barts is half the size of the one in Cypress or Cayman despite all of them being 256-bit, and it's not much bigger than the one in Juniper, despite the latter being 128-bit, because attaining higher memory clocks also increases the number of transistors required. In the end they need to find a balance, and if XDR needs a beefier MC to attain those speeds, requiring fewer pins would be of very little use (in the context of overall power consumption and die space).

EDIT: And maybe I'm completely wrong, but didn't IBM have a lot of trouble getting Cell past certain clocks due to the memory controller? I think I remember reading something about that.
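The bus-width vs. data-rate-per-pin trade-off being argued here comes down to simple arithmetic: peak bandwidth is bus width times per-pin data rate. A quick sketch, where the GDDR5 figures match shipping cards and the XDR2 per-pin rate is only the rumored number, not a confirmed spec:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

configs = [
    ("GDDR5, 256-bit @ 5.5 Gbps/pin (HD 6970-class)", 256, 5.5),
    ("GDDR5, 512-bit @ 5.5 Gbps/pin", 512, 5.5),
    ("XDR2, 256-bit @ 12.8 Gbps/pin (rumored)", 256, 12.8),
]

for name, width, rate in configs:
    print(f"{name}: {bandwidth_gb_s(width, rate):.1f} GB/s")
```

If the rumored per-pin rate held up, a 256-bit XDR2 bus would out-run even a 512-bit GDDR5 bus, which is the point about spending per-pin speed instead of pins.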
 
The question for this to be sensible is: who is AMD partnering with to produce such memory... Samsung? But if this is true, it had better offer better TDP and thermals, which would make cooling the memory less of a concern while offering a significant bandwidth improvement. This is interesting; I need to turn over some more rocks and see if there's any collaboration with folks who could be ramping up production, for this to be really legitimate information.
 
I think AMD should test GDDR5 with a 512-bit bus, and then test XDR2 with 256-bit and 512-bit buses, to see which performs better.
 
I call FUD

Well.. it's about that time of year again. New cards should be out before Christmas, so let the information/misinformation extravaganza begin!

I think AMD should test GDDR5 with a 512-bit bus, and then test XDR2 with 256-bit and 512-bit buses, to see which performs better.

A 512-bit bus just isn't necessary with GDDR5. There's plenty of bandwidth with a 256-bit bus. This is why, when overclocking GDDR5, the results are minimal. I would love to see a Cayman chip paired with a 128-bit bus to see what happens!

I have no idea what kind of bus is needed for XDR2 RAM. It seems like some pretty nice stuff though.
 
190 W TDP for the fastest single-GPU card? :wtf: PSU makers ain't gonna like this.


Rambus licenses XDR memory chip manufacture to notable high-volume vendors.

So "assuming" these specs are correct, wouldn't that limit the sub-manufacturers' ability to throw in more vRAM? I highly doubt most would go through the hassle of licensing.
 




I have no idea what kind of bus is needed for XDR2 RAM. It seems like some pretty nice stuff though.

I just know that back in the day with RIMMs, DDR was slower clock-wise but performed the same as, if not faster than, RIMMs because DDR's bus width was larger, plus its latencies were tighter.
 
Well.. it's about that time of year again. New cards should be out before Christmas, so let the information/misinformation extravaganza begin!

True enough, but that doesn't make it any less irritating, despite knowing it's coming.
 
I don't care if they use Rambus or Rambo, or the toilet paper and toothpicks that get used to construct houses here instead of brick and mortar. As long as there's a big performance increase, the price remains decent, and there's actual quality control and testing to make sure it isn't Apple-anything, then it's fine by me.

Price + performance = sale or no sale for me.

The 7970 looks acceptable; more performance would definitely be nicer. They just need to think "build for laptop and unlock extras for desktop" = one GPU solution with multiple applications. :toast:

It is nice that GPUs are now starting to enter the 21st century, where they should have been 5+ years ago (the equivalent of 1000 years in PC time).
Seems like the new leadership at AMD is finally paying off for ATI as well. :rockout:
 
IDK, the R300 core was the one that literally did this :nutkick: to the graphics industry by being just that powerful and supporting newer features. It took NV until the 6000 series to have an answer, and even then the lower-end 6000 models were still underpowered compared to the R300 cores. It wasn't until R520 that they finally got CrossFire done right without bulky cables. The Radeon 2000 series was a burden at the top end, and the 3000 series that followed was the same with minor improvements; then the 4000 series came about and performed very well, and the 5000 took it higher.

The R9700 still handles some games and fully supports the Win 7 and Vista Aero desktops to their max potential.
 
The GTX 285 is also 512-bit, and it is fucking slow as hell now. ;)

That's because it's GDDR3, or about the same speed as 256-bit GDDR5, which is still plenty fast. I wouldn't go so far as to say that GPU is slow either; it's on par with a lot of current GPUs in terms of memory bandwidth.

Heh... Rambus. They'd better be careful lest they get sued later.
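The "about the same speed" claim checks out with a quick back-of-the-envelope calculation; the effective per-pin rates below are approximate:

```python
# GTX 285: 512-bit bus, GDDR3 at roughly 2.5 Gbps effective per pin.
# Compare a 256-bit GDDR5 setup at 5.5 Gbps effective per pin.
gtx285_gb_s = 512 * 2.5 / 8   # ~160 GB/s
gddr5_gb_s = 256 * 5.5 / 8    # 176 GB/s
print(gtx285_gb_s, gddr5_gb_s)
```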
 
The bottleneck of AMD

I'm not a deep expert on technology, but taking an overview, I've noticed that AMD processors and AMD graphics have a slight disadvantage in memory bandwidth. If you dig into all the benchmarks that many people on this page and others have put so much effort into, you'll note that Nvidia and Intel use a wider memory bus in their architectures. Projecting performance (in theory), AMD processors and AMD graphics could deliver roughly 20% to 30% more than their current performance, tilting the balance in their favor, if they used more memory bandwidth. It's not a deep analysis, but I think the bottleneck in AMD's units is the memory bus width: Nvidia currently uses lower memory clock frequencies with a 384-bit bus, and Intel uses a 3000 MHz bus clock, whereas AMD uses a 256-bit bus with higher memory clock speeds on its high-end GPUs and a 2000 MHz bus clock on its processors. If AMD doesn't address this bottleneck in its new architecture, it will always be at a disadvantage against its competitors. It's like trying to pump a certain amount of water through a one-square-inch pipe versus a half-square-inch pipe: obviously the flow is faster through the bigger one. Hope that example is clear.
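The same arithmetic actually cuts against the bus-width argument above: a narrower bus at a higher per-pin clock can deliver bandwidth comparable to a wider, slower one. Using the commonly cited specs for the then-current flagships:

```python
# Nvidia GTX 580: 384-bit bus, GDDR5 at ~4.0 Gbps effective per pin.
# AMD HD 6970:    256-bit bus, GDDR5 at ~5.5 Gbps effective per pin.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

print(f"GTX 580: {bandwidth_gb_s(384, 4.0):.0f} GB/s")
print(f"HD 6970: {bandwidth_gb_s(256, 5.5):.0f} GB/s")
```

The resulting gap is roughly 9%, well short of the 20-30% projection, so bus width alone doesn't tell the whole story.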
 
This will be good for mobile graphics.

GDDR5 is power-hungry to operate, which is why GDDR3 is a better fit for Optimus-type setups where the memory is always active.
 
When PCs used Rambus, the licensing fees etc. made it cost at least 4 times more than DDR at the time.

Rambus in PCs was short-lived due to the high price and the high licensing fees; plus it had high latency compared to SDR and generated lots of heat.

So going from past products XDR may or may not be the same.

And they're patent whores.
 
HD7970 seems to be da bomb, 190w plus ~40% performance increase over HD6970? Yes, please! :roll: And the 7670 also looks sweet. Who wants a 60w HD5770? :rockout:
 

Sounds good to me!!! I'll take 1 for my HTPC I'm going to make....and a 7950 to replace my 5850 :D
 