
Radeon HD 7900 Series to Use XDR2 Memory?

That's because it's GDDR3, or about the same speed as GDDR5 on a 256-bit bus, which is still plenty fast. I wouldn't go as far as to say that GPU is slow either. It's on par with a lot of current GPUs in terms of memory bandwidth.

Heh... Rambus. They'd better be careful lest they get sued later.

512-bit GDDR3 = 256-bit GDDR5 (GDDR5 moves roughly twice the data per pin, so half the bus width gives the same bandwidth)

so there's no need for 512-bit XDR2 either. ;)
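A quick back-of-the-envelope check of that equivalence, assuming typical effective per-pin rates of the era (roughly 2 Gbps for GDDR3, 4 Gbps for GDDR5; the figures are illustrative, not from this thread):

# Peak bandwidth = bus width (bits) / 8 * effective per-pin rate (Gbps).
# GDDR5 moves about twice the data per pin, so half the bus matches GDDR3.
gddr3_512 = 512 / 8 * 2.0  # 512-bit bus at ~2 Gbps/pin -> 128.0 GB/s
gddr5_256 = 256 / 8 * 4.0  # 256-bit bus at ~4 Gbps/pin -> 128.0 GB/s
print(gddr3_512, gddr5_256)  # 128.0 128.0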
 
Except that is not completely true (or true at all). The memory controller in Barts is half the size of the one in Cypress or Cayman despite all of them being 256-bit, and it's not much bigger than Juniper's, despite the latter being 128-bit, because attaining higher memory clocks also increases the number of transistors required. In the end they need to find a balance, and if XDR needs a beefier MC to attain those speeds, requiring fewer pins would be of very little use (in the context of overall power consumption and die space).

EDIT: And maybe I'm completely wrong, but didn't IBM have a lot of trouble getting Cell past certain clocks because of the memory controller? I think I remember reading something about that.

The larger die size is generally due to being compatible with more than one type of memory: GDDR3, GDDR5, and/or overkill. I'm sure they have considered the die size of the memory controller and the type of memory. I would almost bet they are using their own cards to run simulations on GPU cores and all sorts of other fun stuff we don't get to see. They wouldn't have made it this long or far in the world of tech without some idea of what they are doing.

And yes, memory controller performance can have a large effect on the actual GPU. So a smaller IMC closer to the heart of the GPU core will improve performance as long as its design and implementation are good. Modern design also allows for memory retraining and a separate clock, and I believe ATI learned a lesson with the decoupling capacitors.
 

I never said they don't know what they are doing. I'm just saying that faster memory requires more transistors, and it's not something I'm making up; it's something AMD said when explaining why Barts was so much smaller than Cypress and Cayman.

Faster memory does not automatically mean a reduction in silicon space, nor does it mean better thermals, etc. GPU designers make a choice: for Cayman they decided to use faster memory and a big MC compared to Barts, twice as big in fact, so they could have gone with a 512-bit interface instead, and considering the chip alone it would have been better.

i.e.

512-bit × 4000 MT/s = 256 GB/s

256-bit × 5000 MT/s = 160 GB/s

both requiring about the same size MC. They could have gone with

384-bit × 4000 MT/s = 192 GB/s

and it would have required less die space than Cypress' or Cayman's MC. They didn't follow that route because of the extra traces on the PCB.
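For anyone checking the arithmetic, a minimal sketch (the helper name is mine; the figures are the ones above):

def peak_bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    # Width/8 bytes per transfer * millions of transfers/s, scaled to GB/s.
    return bus_width_bits / 8 * data_rate_mt_s / 1000

print(peak_bandwidth_gb_s(512, 4000))  # 256.0 GB/s
print(peak_bandwidth_gb_s(256, 5000))  # 160.0 GB/s
print(peak_bandwidth_gb_s(384, 4000))  # 192.0 GB/s, the middle route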

So to sum it up, you said:

Why waste real estate on a huge bus when using better memory will allow for a smaller controller footprint, and thus reduce latencies, improve throughput, give fewer chances of die flaws in critical areas per chip...

All I'm saying is that that is not necessarily true, since an MC designed for very high frequencies can actually be much bigger than one designed for lower frequencies but twice the width. And sometimes it's not even a trade-off: sometimes a 50% wider MC can win on both metrics (pure performance and efficiency), as would have been the case if AMD had gone the 384-bit route.

If it were fact and not rumor that AMD is using XDR2 memory, it would make sense to think that's what AMD believed would help their architecture more, despite whatever downsides it could have. But they are rumors, and they don't really make any sense at all. Not even Charlie D bites on this, and it would have made for a very "good" (CD-style) article, where he could find a way to bash Nvidia for being behind and all. Instead he is bluntly denying it, which makes me suspect it's false.
 
I wonder, are we now going back to the '90s with XDR? We're already almost there with GPUs.

What is the use of fast on-device memory, other than raw processing speed on that device, when you can't move the data any faster between devices?

2 cents
 
Like the hype, boo on the Rambus.

Never met a Rambus product that was good.

(TBH: how are they still around? Are patent lawsuits a business model?)
 