Monday, September 12th 2011

Radeon HD 7900 Series to Use XDR2 Memory?

AMD's next-generation enthusiast graphics processor (GPU) is shaping up to be something more distinctive than expected. The GPU codenamed "Tahiti" will be bleeding-edge in terms of its feature set. To begin with, there's talk that it will use the PCI-Express Generation 3 (Gen 3) system bus, giving it a mammoth 32 GB/s of system interface bandwidth. Next, Tahiti will use a number-crunching architecture that's a generation ahead of even the VLIW4 AMD introduced with Cayman. VLIW4 will power most of the HD 7000 series, but not the top-end Tahiti GPU; it will use what AMD is referring to as the "CoreNext Architecture", which is expected to boost performance per square millimeter of die area beyond even what VLIW4 manages.

The most recent piece of information is bound to shock and awe: Tahiti, it appears, will use the XDR2 memory interface. XDR2 is an ultra-high-bandwidth, power-efficient memory bus maintained by Rambus, which claims it is a generation ahead of GDDR5. Nor will XDR2 be entirely exotic to AIBs: the original XDR architecture is used in game consoles, where its high bandwidth offsets low memory capacity by allowing quick streaming of texture data. Rambus licenses XDR memory chip manufacture to notable high-volume vendors. NordicHardware compiled data from various unverified sources to sketch out what the Radeon HD 7900 series could look like.
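As a back-of-the-envelope illustration of why a faster per-pin signaling rate matters, peak memory bandwidth is simply bus width times per-pin data rate. The per-pin figures below are assumptions, not confirmed Tahiti specs: roughly 5.5 Gbps for HD 6970-class GDDR5, and the 12.8 Gbps ceiling Rambus advertises for XDR2.

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits / 8 gives bytes transferred per cycle per data line group;
    multiplying by the per-pin data rate (in Gbps) yields GB/s.
    """
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Assumed per-pin rates (illustrative, not official Tahiti numbers):
gddr5 = peak_bandwidth_gbs(256, 5.5)    # HD 6970-class GDDR5 -> 176.0 GB/s
xdr2 = peak_bandwidth_gbs(256, 12.8)    # Rambus's claimed XDR2 ceiling
print(f"256-bit GDDR5: {gddr5:.1f} GB/s, 256-bit XDR2: {xdr2:.1f} GB/s")
```

On these assumptions, the same 256-bit bus would deliver over twice the bandwidth with XDR2, which is the crux of the rumor: more bandwidth without paying for a wider, more pad-hungry memory interface.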


Source: NordicHardware

54 Comments on Radeon HD 7900 Series to Use XDR2 Memory?

#1
FreedomEclipse
~Technological Technocrat~
Bundy said:
Yes! More copper please.
Since the price of copper went up, are you happy paying more if they were to put more copper on?
Posted on Reply
#2
entropy13
FreedomEclipse said:
Since the price of copper went up, are you happy paying more if they were to put more copper on?
That's a bit of a problem here, actually. We have a lot of coins still using copper, but they're rarely in circulation (i.e. not being used, just being kept somewhere). So the coins still have the same value as currency, but have slightly increased in value as a commodity. Yet they're not being used, simply because they're coins. :laugh: And IIRC, making those coins is now more expensive than their equivalent value as money. LOL
Posted on Reply
#3
FreedomEclipse
~Technological Technocrat~
entropy13 said:
That's a bit of a problem here, actually. We have a lot of coins still using copper, but they're rarely in circulation (i.e. not being used, just being kept somewhere). So the coins still have the same value as currency, but have slightly increased in value as a commodity. Yet they're not being used, simply because they're coins. :laugh: And IIRC, making those coins is now more expensive than their equivalent value as money. LOL
Melt the copper down into blocks and sell it to scrap metal merchants!!! :rockout:
Posted on Reply
#4
Kaleid
Hayder_Master said:
Make it 512-bit and let me forget about upgrading the GPU for 3 years.
If memory performance were enough, people would still be using 2900 XT cards.
Posted on Reply
#5
wolf
Performance Enthusiast
AMD's next gen is shaping up to be something pretty damn beefy indeed. The CPUs might be a little while behind Intel's, but their GPUs are doing a damn good job of going blow for blow with NVIDIA; in fact, this next round is anybody's guess.
Posted on Reply
#6
Jegergrim
The drop in power consumption and heat loss is pretty darn impressive; I wasn't expecting that huge a leap while increasing performance dramatically, compared to the 6xxx-vs-5xxx series jump...
:respect::respect:
Posted on Reply
#7
Steevo
You all must not understand the charts; data rate per pin is the win here. Why waste real estate on a huge bus when better memory allows for a smaller controller footprint, reducing latencies, improving throughput, and leaving fewer chances of die flaws in critical areas per chip....

Good design wins out every time. It might not be the funny car that goes the fastest, but it also doesn't cost as much or take as much in maintenance.
Posted on Reply
#8
Benetanegia
Steevo said:
Why waste real estate on a huge bus when using better memory will allow for a smaller controller footprint, and thus reduce latencies, improve throughput, give less chances of die flaws in critical areas per chip....
Except that's not completely true (or true at all). The memory controller in Barts is half the size of the one in Cypress or Cayman despite all of them being 256-bit, and it's not much bigger than the one in Juniper, despite the latter being 128-bit, because attaining higher memory clocks also increases the number of transistors required. In the end they need to find a balance, and if XDR needs a beefier MC to attain those speeds, requiring fewer pins would be of very little use (in the context of overall power consumption and die space).

EDIT: And maybe I'm completely wrong, but didn't IBM have a lot of trouble getting Cell past certain clocks due to the memory controller? I think I remember reading something about that.
Posted on Reply
#9
Casecutter
For this to be sensible, the question is: who is AMD partnering with to produce such memory... Samsung? If this is true, it had better deliver better TDP and thermals, making cooling the memory less of a concern while offering a significant bandwidth improvement. This is interesting; I need to turn over some more rocks to see if there's any collaboration with the folks who could be ramping up production, for this to be really legitimate information.
Posted on Reply
#11
eidairaman1
I think AMD should test GDDR5 with a 512-bit bus, then test XDR2 with 256-bit and then 512-bit buses, to see which performs better.
Posted on Reply
#12
erocker
[H]@RD5TUFF said:
I call FUD
Well.. it's about that time of year again. New cards should be out before Christmas, so let the information/misinformation extravaganza begin!

eidairaman1 said:
I think AMD should test GDDR5 with a 512-bit bus, then test XDR2 with 256-bit and then 512-bit buses, to see which performs better.
A 512-bit bus just isn't necessary with GDDR5; there's plenty of bandwidth with a 256-bit bus. This is why, when overclocking GDDR5, results are minimal. I would love to see a Cayman chip paired with a 128-bit bus to see what happens!

I have no idea what kind of bus is needed for XDR2 RAM. It seems like some pretty nice stuff though.
Posted on Reply
#13
Shihabyooo
A 190 W TDP for the fastest single-GPU card? :wtf: PSU makers ain't gonna like this.


btarunr said:
Rambus licenses XDR memory chip manufacture to notable high-volume vendors.
So, "assuming" these specs are correct, wouldn't that limit the sub-manufacturers' ability to throw in more VRAM? I highly doubt most would go through the hassle of licensing.
Posted on Reply
#14
eidairaman1
erocker said:
Well.. it's about that time of year again. New cards should be out before Christmas, so let the information/misinformation extravaganza begin!



A 512-bit bus just isn't necessary with GDDR5; there's plenty of bandwidth with a 256-bit bus. This is why, when overclocking GDDR5, results are minimal. I would love to see a Cayman chip paired with a 128-bit bus to see what happens!

I have no idea what kind of bus is needed for XDR2 RAM. It seems like some pretty nice stuff though.
I just know that back in the day with RIMMs, DDR was slower clock-wise but performed the same as, if not faster than, RIMMs, because the bus width was larger on DDR, plus latencies were tighter.
Posted on Reply
#15
[H]@RD5TUFF
erocker said:
Well.. it's about that time of year again. New cards should be out before Christmas, so let the information/misinformation extravaganza begin!
True enough, doesn't make it less irritating despite knowing it's coming.
Posted on Reply
#16
WarraWarra
Don't care if they use Rambus, or Rambo, or the toilet paper and toothpicks that houses here are built with instead of brick and mortar. As long as there's a big performance increase, the price remains decent, and there's actual quality control and testing (to make sure it's not Apple anything), then fine by me.

Price + performance = sale or no sale for me.

The 7970 looks acceptable; more performance would definitely be nicer. They just need to think "build for laptop and unlock extras for desktop" = one GPU solution with multiple applications. :toast:

It's nice that GPUs are finally entering the 21st century, where they should have been 5+ years ago (the equivalent of 1000 years in PC time).
Seems like the new leadership at AMD is finally paying off for ATI as well. :rockout:
Posted on Reply
#17
eidairaman1
IDK, the R300 core was the one that literally did this :nutkick: to the graphics industry, by being just that powerful and supporting newer features. It took NV until the 6000 series to have an answer, and even then the lower-end 6000s were still underpowered compared to R300 cores. It wasn't until R520 that they finally got CrossFire done right, without bulky cables. The Radeon 2000 series was a burden at the top end, and the 3000 series that followed was the same with minor improvements; then the 4000 series came along and performed very well, and the 5000 took it higher.

The R9700 still handles some games, and fully supports the Win 7 and Vista Aero desktops to their max potential.
Posted on Reply
#18
xBruce88x
KooKKiK said:
GTX285 is also 512 bit and it is fucking slow like hell now. ;)
That's because it's GDDR3, or about the same speed as 256-bit GDDR5, which is still plenty fast. I wouldn't go so far as to say that GPU is slow, either; it's on par with a lot of current GPUs in terms of memory bandwidth.

heh... rambus. they'd better be careful lest they get sued later.
Posted on Reply
#19
Xtro
The Bottleneck of AMD

I'm not a deep expert on technology, but taking an overview, I've noticed that AMD processors and AMD graphics have a slight disadvantage in memory bandwidth. If you look a little deeper at all the benchmarks that many people on this page and others have put so much effort into, you'll note that NVIDIA and Intel use wider memory buses in their architectures. Projecting from that, AMD processors and AMD graphics could (in theory) deliver about 20% to 30% more performance than they do now, tilting the balance in their favor, if they used a wider memory bus. It's not a deep comment, but I think the bottleneck in AMD's parts is the memory bus bandwidth: NVIDIA currently uses lower clock frequencies with a 384-bit memory bus, and Intel uses a 3000 MHz bus clock, whereas AMD uses 256-bit buses with higher memory clocks on its high-end GPUs, and a 2000 MHz bus clock on its processors. If AMD doesn't address this bottleneck in its new architecture, it will always be at a disadvantage against its competitors. It's like trying to pump a certain amount of water through a one-square-inch pipe versus a half-square-inch pipe: obviously the flow is faster through the bigger one. Hope that example is clear.
Posted on Reply
#21
MikeMurphy
This will be good for mobile graphics.

GDDR5 is power-hungry to operate, which is why GDDR3 is a better fit for Optimus-type setups, since the memory is always active.
Posted on Reply
#22
TheGuruStud
Syborfical said:
When PCs used Rambus, the licensing fees etc. made it cost at least four times more than DDR at the time.

Rambus in PCs was short-lived due to the high price and the high licensing fees; plus, it had high latency compared to SDR and generated lots of heat.

So going from past products, XDR may or may not be the same.
And they're patent whores.
Posted on Reply
#23
TRWOV
The HD 7970 seems to be da bomb: 190 W plus a ~40% performance increase over the HD 6970? Yes, please! :roll: And the 7670 also looks sweet. Who wants a 60 W HD 5770? :rockout:
Posted on Reply
#24
happita
TRWOV said:
The HD 7970 seems to be da bomb: 190 W plus a ~40% performance increase over the HD 6970? Yes, please! :roll: And the 7670 also looks sweet. Who wants a 60 W HD 5770? :rockout:
Sounds good to me!!! I'll take one for the HTPC I'm going to build... and a 7950 to replace my 5850 :D
Posted on Reply
#25
KooKKiK
xBruce88x said:
That's because it's GDDR3, or about the same speed as 256-bit GDDR5, which is still plenty fast. I wouldn't go so far as to say that GPU is slow, either; it's on par with a lot of current GPUs in terms of memory bandwidth.

heh... rambus. they'd better be careful lest they get sued later.
512-bit GDDR3 = 256-bit GDDR5,

so there's no need for 512-bit XDR2 either. ;)
Posted on Reply