Discussion in 'News' started by btarunr, Dec 8, 2011.
Fab it out at home, then send the blueprints to TSMC.
Yeah. In the past, I think that might have been the reason why rumours had ATI cards doing really well... then they didn't. Perhaps AMD's own line had great results, but when they moved over to TSMC, things changed. Makes a little bit of sense, but who knows.
All I know is that I have no idea what to expect out of these cards any more. The good thing about that, though, is that it makes them all that much more exciting. I can't wait to get one or two!
Well, supposedly the new highest-end cards use "Graphics Core Next", whatever that means.
So I expect an IPC improvement, especially if you consider the stream processor count hasn't increased by the usual amount you would see from a smaller fab process.
That, or they've got some other goodies planned. They always wanted to put SidePort memory on the GPU, for example, but could never justify the die space requirement on larger fab processes; maybe these are the chips that finally have it.
I might have got the name mixed up with something else AMD do, but it's essentially on-die memory, which would really help with math performance amongst other things.
(Or so AMD think.)
By the by, just to +1 all the ES comments: I doubt the PCB will be red, and I doubt it will need 2 x 8-pin. They just do that so they can try a wide variety of voltages etc. with one ES, to find the sweet spot.
I vote we hype the shit out of the 7xxx series so the internet explodes into rage again when the product fails to live up to the hype.
Or, we can wait for reviews. Shouldn't be too long now.
I don't know what to expect either; I was truly convinced that it would have taken them until at least the end of the 2nd quarter to have these cards with the new architecture. I have no clue how they will perform.
I thought XDR2 was on hold
I understand where you're coming from. I was mostly thinking about the added Tessellation power and better AA in these newer cards.
Please do explain, I am ignorant of this.
I hope AMD does a good job with these.
They have a small foundry with PCB printers and placers, but they're very small-scale. ES GPUs are still sourced from TSMC, and other components are still bought from Asia. Those cards you see could very well have cost AMD $20,000~$50,000 a piece to make (I'm obviously not including the R&D costs of Tahiti).
Once the ES card designs are tested stable in some of the most atrocious conditions (overheating, high humidity, a sucky PSU, Furmark, etc.), they become qualification samples and are produced in slightly larger numbers to send to ODMs. PCPartner and TUL are the main upstream ODMs; AIB partners buy from them, place their stickers, handle all the shipping/regulation/warranty stuff, and resell.
Think of AMD and component makers as crude oil sourcing and shipping companies, ODMs as oil refineries, and AIBs as oil marketing companies.
"Faildozer" is still faster than what you run. It is not a bad chip; it is just not the "fastest." It is no more of a fail than Phenom II, Athlon II, Core i5, or anything else that isn't the fastest on the market. As for running it on a dozer-based system: it's on a Crosshair board. Look at the pictures, it's kind of hard to miss, not to mention the OEM AMD heatpipe cooler. Way to flamebait completely off topic, though; thanks for the vast insight you have given us towards AMD's new series of GPUs.
Not to mention that if you look at DX11 gaming reviews (gaming future as opposed to gaming present), the extra multithreading makes Bulldozer quite powerful, especially in multi-GPU setups.
Some people see just green, blue, or red though, eh, and not all three. Does 12 memory chips definitely define 1.5 or 3 gig, people?
I'm thinking: could they not use 256 bits of it for GPU vram use, and 128 for the IOMMU, as in 128 bits just for a direct link to memory or something? Dunno, just thinking out loud.
I have no idea what the IOMMU is, but current memory design for motherboards and GPUs definitely shows that bus width and memory amounts are tied together.
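For what it's worth, the tie-up comes down to simple arithmetic: each GDDR5 chip has a 32-bit interface, so the chip count fixes the bus width, while the per-chip density (1 Gbit vs 2 Gbit were the common options at the time; which one AMD used here is pure speculation) fixes the capacity. A quick sketch:

```python
# Rough arithmetic on how GDDR5 chip count ties bus width to capacity.
# Assumes the standard 32-bit interface per GDDR5 chip; chip densities
# are guesses, not confirmed specs for any particular card.

def board_config(num_chips, chip_density_mbit):
    bus_width_bits = num_chips * 32            # 32-bit interface per chip
    capacity_mb = num_chips * chip_density_mbit // 8  # Mbit -> MB
    return bus_width_bits, capacity_mb

# 12 chips -> 384-bit bus either way; only density changes the capacity
print(board_config(12, 1024))  # (384, 1536) -> 1.5 GB with 1 Gbit chips
print(board_config(12, 2048))  # (384, 3072) -> 3 GB with 2 Gbit chips
```

So 12 chips gets you either 1.5 GB or 3 GB, never 2 GB.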
Yeah, can someone explain this IOMMU business?
I don't remember much of what I've read right now, but IMHO it's increased memory compatibility between the gfx card and the mobo/OS, with the gfx card able to control memory as the CPU does, to some extent. And as far as I'm thinking, if they already have dual DMA buses on Cayman GPUs, then adding a third might increase the bus width without increasing the memory footprint. I may be getting confused though, eh.
I posted the question in that thread, but maybe it is more appropriate here, come to think of it:
What's the (new) IOMMU gonna do for us?
It can provide the ability for a large shared system cache for GPGPU and gaming, once drivers are sorted out, as well as for the OS.
Many users, myself included, have found AMD inadequate for multi-GPU use. Investigating the issue reveals that AMD CPUs lack enough PCIe-to-system-RAM bandwidth via AMD's onboard memory controller. At the same time, AMD's CPUs only use about 65% of the bandwidth the DIMMs they use support. The IOMMU could perhaps take advantage of the 35% of bandwidth left over, and make 3D performance much better on AMD platforms with multiple VGAs.
As well, it could allow VGAs to share pooled resources in local data caches. In other words, you could have one GPU access data in the other card's memory space, making the onboard VGA RAM on multiple cards a total space rather than a duplicated space. For example, the HD 6990 has 4 GB of RAM, but only 2 GB effectively usable; the IOMMU could make it have 4 GB of usable space.
Those are possibilities. Until the cards come out, and AMD start talking more about them, we'll not know for sure what the IOMMU will truly offer. What I can say is that no "consumer" OS other than Linux actually supports IOMMUs, so I remain hesitant to guess what will happen.
OK now that is clearer and indeed this new IOMMU seems to hold much promise. Exciting times.
To the extent of my (limited) knowledge, GART is a common space for both GPUs, so nothing changes in that regard (other than allowing the GPUs to access more memory); it's the vram that needs to be replicated in a multi-GPU situation, not system RAM, whether in GART space or otherwise. Vram access is a lot faster than main memory access (plus PCIe access), so relying on a common pool like that in main memory could possibly degrade performance rather than improve it.
Then the answer is right there. New IOMMU or GART, both will communicate through PCIe so that's a dead end, like I said above.
And of course the whole thing becomes even more irrelevant for graphics when you consider that the new cards will have 3 GB of memory. With so much memory, and memory bandwidth to boot, the last thing you want to do is move data to and from main memory.
Where did they say that one GPU can read the vram of the other one? Plus, why would you want to do that in the first place? It would NOT help graphics performance at all (GPGPU, that's another thing). Graphics performance entirely depends on the bandwidth/availability/lag between the GPU and its own vram, controlled by its own memory controller. As long as you move anything from vram to any other memory pool, performance can, and most probably will, degrade.
That's not exactly the issue with AMD CPUs, though. It's not really the PCIe that is the problem, nor is it HTT (what goes from the PCIe controller on the chipset to the CPU NB).
Although, you may be right, just not on where the bottleneck occurs (probably due to my poor explanation). It depends on how the IOMMU interfaces with the CPU memory controller. The bottleneck could simply be occurring because of how GART is dealt with. You only have 256 MB of GART space in system RAM, which means the controller is constantly writing to the GART space from system RAM due to its limited size. Allowing for a larger buffer size would mean fewer writes to GART, which can boost performance, as the CPU doesn't have to copy from system RAM to GART. Same thing with sharing VGA RAM... maybe you should check last year's Fusion Summit presentation; it might give you a better idea.
Anyway, we should be discussing this in the thread created for it.
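To make the IOMMU talk above a bit more concrete: conceptually, an IOMMU sits between a device's DMA engine and physical memory and remaps device-visible addresses through a page table, much like the CPU's MMU does for processes. This is a deliberately toy model for illustration only, not AMD's (or anyone's) actual implementation:

```python
# Toy model of IOMMU address translation for a device's DMA accesses.
# Purely conceptual: real IOMMUs use multi-level hardware page tables,
# TLBs, and per-device domains. All names here are made up.

PAGE_SIZE = 4096

class ToyIOMMU:
    def __init__(self):
        self.page_table = {}  # device virtual page number -> physical page number

    def map_page(self, dev_vpage, phys_page):
        self.page_table[dev_vpage] = phys_page

    def translate(self, dev_addr):
        vpage, offset = divmod(dev_addr, PAGE_SIZE)
        phys_page = self.page_table.get(vpage)
        if phys_page is None:
            # A real IOMMU raises a fault instead of letting stray DMA through
            raise MemoryError("IOMMU fault: unmapped device address")
        return phys_page * PAGE_SIZE + offset

iommu = ToyIOMMU()
iommu.map_page(dev_vpage=0, phys_page=42)  # expose one physical page to the device
print(hex(iommu.translate(0x10)))          # 42 * 4096 + 0x10 = 0x2a010
```

The point being: the translation gives the GPU a bigger, safer virtual view of system memory (e.g. more than a fixed 256 MB GART window), but every access still travels the same physical PCIe/memory path, which is why it doesn't magically fix the vram replication issue.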
12 x 128 MB chips would equal 2 gig. I'm only speculating from rumours I heard, and only saying it 'cos no one else is.
Samsung's new GDDR5 is 7 GHz capable.
And with rumours, IMHO I see a 256-bit memory bus between the GPU and memory, plus a separate sideband sort of 128-bit IOMMU bus, making what they will call a 384-bit bus, with more system-to-GPU bandwidth utilised.
Like I say, a 3rd on-die DMA, maybe 4 in total.
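On that 7 GHz figure: "7 GHz" GDDR5 means 7 Gbps effective per pin, so peak bandwidth is just data rate times bus width over 8. A back-of-envelope sketch (the bus widths are the rumoured/speculated ones from this thread, nothing confirmed):

```python
# Back-of-envelope GDDR5 bandwidth: effective per-pin data rate (Gbps)
# times bus width (bits), divided by 8 bits per byte. Bus widths below
# are speculation from the thread, not confirmed specs.

def gddr5_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(gddr5_bandwidth_gb_s(7.0, 384))  # 336.0 GB/s on a full 384-bit bus
print(gddr5_bandwidth_gb_s(7.0, 256))  # 224.0 GB/s on a 256-bit bus
```

So whether the card has a true 384-bit bus or a 256-bit bus plus a sideband link makes a sizeable difference to peak memory bandwidth.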
AMD have not been shy about making drastic, marked hardware changes lately; take APUs, BD, and GCN, for example, all marked changes from what went before.
No facts support this; I'm just speculating.
You're probably right there, but the benefit would not make the vram replication issue disappear, and the benefit would be there for a single GPU too. Also, that wouldn't make the multi-GPU situation you described any more appealing either. You want everything in local vram, as much as possible. Remember, just because the GPU can access any virtual address thanks to the IOMMU, the physical memory path still exists, and you don't want anything graphics-related to be read "directly" from main memory, or, in case it's possible, from the other GPU's vram.
12x128 == 1536
Damn, too much. I'm still pondering the other bit though.