
AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family

Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Whew... that was a three-breath sentence.

But it's that last one that surprises: all of a sudden, rumors say first-gen HBM is constrained, even though SK Hynix indicated client shipments started in January 2015. Meanwhile, this says SK Hynix is "ready" for HBM2; surely nowhere near production, but it appears on track.

What's more in question is where TSMC stands with 16 nm FinFET. Going by some of the rumors, others have been "investigating options" or "keeping an open mind" for their next shrink. Some speculate TSMC might not have full production for large-power-budget ICs until Q3 2016. Such a lapse might give AMD the window to get Arctic Islands parts solidly vetted at GloFo and still be ready by this time next year.

I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.

First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB). Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it might hold up production and raise the price over $700. Then we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack, with two connected stacks or 2x chips in a stack)...something nobody really saw coming (HBM1 was going to be 4-Hi 1GB, HBM2 up to 8-Hi 4GB). I have little doubt this was a last-second addition/decision as they noticed people's concerns with 4GB per GPU (especially in CrossFire) for such an expensive investment. This can be noticed by the frantic 'DX12 can combine RAM from multiple GPUs into a single pool' coming across the AMD PR bow.
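For anyone wanting to sanity-check those configurations, here's a rough sketch of the stack math; the per-pin rates are the published HBM1 spec plus the rumored bump, and the helper function is just my own illustration:

Code:
# Back-of-envelope HBM stack math (illustrative only).
# Each HBM1 stack has a 1024-bit interface; bandwidth = pins * rate / 8.
def stack_config(stacks, gb_per_stack, gbps_per_pin, pins=1024):
    bw_per_stack = pins * gbps_per_pin / 8  # Gbit/s per pin -> GB/s per stack
    return stacks * gb_per_stack, stacks * bw_per_stack

print(stack_config(4, 1, 1.0))   # (4, 512.0)  -> 4GB, 512GB/s at the spec 1Gbps/pin
print(stack_config(4, 1, 1.25))  # (4, 640.0)  -> 4GB, 640GB/s at the rumored 1.25Gbps/pin
print(stack_config(4, 2, 1.25))  # (8, 640.0)  -> the '2x1GB' reading: double capacity, same bandwidth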

AMD really seems in a tough place with that. 4GB is likely (optimally) not enough for the 390X, especially with multi-GPU in the current landscape, but 8GB is likely a little too much (and expensive) for a single card (and I bet the 390 non-X will be perfectly fine with 4GB aimed at 1440p)...it's the reason a 6GB similar-performance design from nvidia makes sense...that's just about the peak performance we can realistically expect from a single GPU on 28nm.

One more time with gusto: 28nm will get us ~3/4 of the way to 4K/8GB making sense on the whole. 14nm will pick up the slack...the rest is just gravy (in performance or power savings).

While I want 4K playability as much as anyone in demanding titles (I'm thinking a dual-GPU config on 14nm is in my future, depending on how single cards + DX12 handle the situation), I can't help but wonder if the cards built for 1440p60+ will be the big winners this go-round, as the value gap is so large. That is to say, the 390 (non-X, 4GB), perhaps a cheaper GTX 980, and/or a similarly-priced salvage GM200.
 
Joined Mar 10, 2010
Messages 11,878 (2.30/day)
Location Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Interesting that they are breaking news on this while the 390X isn't out yet; they must be sure of its performance, imho.

I'd take the implied constraint on HBM memory at face value. I mean, was it possible for them to make enough? Not in one plant. That shit's gonna be a hot potato for a few years yet, and pricing will confirm this.
 
Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Interesting that they are breaking news on this while the 390X isn't out yet; they must be sure of its performance, imho.

I'd take the implied constraint on HBM memory at face value. I mean, was it possible for them to make enough? Not in one plant. That shit's gonna be a hot potato for a few years yet, and pricing will confirm this.


It's surely a weird situation with HBM1. Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release. With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).
 
Joined Mar 10, 2010
Messages 11,878 (2.30/day)
Location Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Where are you getting your info that Hynix has one customer? That's just odd. I have no proof to the contrary, but no business I ever heard of bet all its eggs in one basket.
 
Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Where are you getting your info that Hynix has one customer? That's just odd. I have no proof to the contrary, but no business I ever heard of bet all its eggs in one basket.

Perhaps that is over-reaching in assumption...point taken, but it seems pretty obvious they are the first, and every other product coming later appears to use HBM2. It's not unheard of (Samsung's GDDR4 says 'hi'), especially given the technology will evolve in a very obvious way (essentially going from stacking currently-common low-density DDR3-class dies to more-recent, higher-density DDR4-class dies as manufacturing of that style of memory proliferates).

AFAIK the main customers will be AMD (GPUs, APUs) and nVIDIA (at least GPUs). We know nvidia isn't jumping on until HBM2 (Pascal), and it can be assumed from the approximate dates on roadmaps that APUs will also use HBM2. We know Arctic Islands will use HBM2.

There may be others, but afaict HBM1 is more-or-less a trial product...a risk version of the technology...developed not only by Hynix but also by AMD, for a very specific purpose: AMD needed bandwidth while keeping their die size and power consumption in check for a 28nm GPU product. The realistic advantages over GDDR5 with a GPU on a smaller process that can accommodate it (say 8GHz GDDR5 on 14nm) aren't gigantic for HBM1, but it truly blooms with HBM2. The fact is they needed that high level of efficient bandwidth now to be competitive given their core technology...hence it seems HBM1 is essentially stacking 2Gb DDR3-class dies, while the mass commercial product will be stacking more-relevant (and by then cheaper) 4-8Gb DDR4-class dies.
 
Joined Mar 10, 2010
Messages 11,878 (2.30/day)
Location Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
So largely just your opinion and outlook on it then, fair enough.

I personally don't think that AMD and Hynix co-operating on this tech precludes its use in other markets for Hynix, with imaging sensors, FPGAs, and some lesser-known networking and instrumentation chips being candidates for its use (while not hindering AMD's use of it).

Is Nvidia going to use this on Pascal, or could that be some other variant like Micron/Intel's? The point is, with 3D/HBM/3DS we're going to be seeing the same high-bandwidth memory standards (JEDEC) used in various different propositions over the next few years, so I don't think any co-op tie-ins are going to last that long, if they do at all, and exclusivity won't last but a year at best.
 
Joined Apr 1, 2014
Messages 502 (0.14/day)
System Name Personal Rig
Processor Intel i5 3570K
Motherboard Asus P8Z77-V
Cooling Noctua NH-U12P Push/Pull
Memory 8GB 1600Mhz Vengeance
Video Card(s) Intel HD4000
Storage Seagate 1TB & 180GB Intel 330
Display(s) AOC I2360P
Case Enermax Vostok
Audio Device(s) Onboard realtek
Power Supply Corsair TX650
Mouse Microsoft OEM 2.0
Keyboard Logitech Internet Pro White
Software Legal ;)
Benchmark Scores Very big
Re-read the OP... there was no mention of AMD using NAND flash memory

That's beside the point. The article got it wrong, as there is no such thing as 14nm flash memory.
 
Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
So largely just your opinion and outlook on it then, fair enough.

I personally don't think that AMD and Hynix co-operating on this tech precludes its use in other markets for Hynix, with imaging sensors, FPGAs, and some lesser-known networking and instrumentation chips being candidates for its use (while not hindering AMD's use of it).

Is Nvidia going to use this on Pascal, or could that be some other variant like Micron/Intel's? The point is, with 3D/HBM/3DS we're going to be seeing the same high-bandwidth memory standards (JEDEC) used in various different propositions over the next few years, so I don't think any co-op tie-ins are going to last that long, if they do at all, and exclusivity won't last but a year at best.


You're right, and it's certainly possible. That said, other companies seem set in their ways of wider buses, proprietary cache (as you mentioned), and/or denser or cheaper alternatives to HBM1. HBM2 certainly could, and likely will, be widely adopted.

It's my understanding nvidia will use HBM2 in Pascal. Their latest roadmap essentially gave their plan away: the biggest chip will use 12GB / 768GB/s of RAM iirc. That means 3x 4GB HBM2 stacks.
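That arithmetic is easy to check against the published HBM2 figures (256GB/s per 4GB stack); a one-liner, purely as a sketch:

Code:
stacks = 3
print(stacks * 4, "GB |", stacks * 256, "GB/s")  # 12 GB | 768 GB/s, matching the roadmap figures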

I think an interesting way for nvidia to prove a point about HBM1 is to simply do the following:


GM204 shrunk to ~1/2 its size on 14/16nm (so essentially 200-some mm²), with 4/8GB (4-8Gb dies) of 8GHz GDDR5 running at something like 1850/8000

vs

FijiXT

Hypothetically...who wins?
 
Joined Apr 19, 2011
Messages 2,198 (0.46/day)
Location So. Cal.
I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.

First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB). Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it might hold up production and raise the price over $700. Then we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack, with two connected stacks or 2x chips in a stack)...something nobody really saw coming (HBM1 was going to be 4-Hi 1GB, HBM2 up to 8-Hi 4GB). I have little doubt this was a last-second addition/decision as they noticed people's concerns with 4GB per GPU (especially in CrossFire) for such an expensive investment. This can be noticed by the frantic 'DX12 can combine RAM from multiple GPUs into a single pool' coming across the AMD PR bow.

AMD really seems in a tough place with that. 4GB is likely (optimally) not enough for the 390X, especially with multi-GPU in the current landscape, but 8GB is likely a little too much (and expensive) for a single card (and I bet the 390 non-X will be perfectly fine with 4GB aimed at 1440p)...it's the reason a 6GB similar-performance design from nvidia makes sense...that's just about the peak performance we can realistically expect from a single GPU on 28nm.

One more time with gusto: 28nm will get us ~3/4 of the way to 4K/8GB making sense on the whole. 14nm will pick up the slack...the rest is just gravy (in performance or power savings).

While I want 4K playability as much as anyone in demanding titles (I'm thinking a dual-GPU config on 14nm is in my future, depending on how single cards + DX12 handle the situation), I can't help but wonder if the cards built for 1440p60+ will be the big winners this go-round, as the value gap is so large. That is to say, the 390 (non-X, 4GB), perhaps a cheaper GTX 980, and/or a similarly-priced salvage GM200.

Always good info, and I homed in on your saying, "just about the peak performance we can realistically expect from a single GPU on 28nm".

As to the issue of 4GB not being enough or needing 8GB... Isn't it more that the amount of memory is almost meaningless if you don't have the processing power to support it? I thought I read 8GB of HBM will offer up to 1 TB/s of bandwidth, so given that, wouldn't it be a waste for AMD to add extra memory if GPU designs on 28nm physically prevent a die size that could exploit all that? Wouldn't Fiji, with 4096 SPs, lack the oomph and watch 50% of that 1 TB/s of bandwidth go unused?

You made a good point when saying, "This can be noticed by the frantic 'DX12 can combine RAM from multiple GPUs into a single pool' coming across the AMD PR bow." But isn't that a good thing, since a single 390X is not going to offer excellent 4K, while a CrossFire pair with all 8GB (2x 4GB) would act as one? Also, can any of Tonga's (memory) color compression be factored into what Fiji might exploit? I mean, Tonga was made for Apple's 5K Retina display; could that provide an advantage for 4K panels?
 
Joined Mar 10, 2010
Messages 11,878 (2.30/day)
Location Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360 EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Be nice to find out, eh? I'd obviously vote AMD there, jk.

I have not a clue; the maths is easy, but imho it's too hypothetical, too clean, too easy, and chips don't bin that way. Not many nodes have panned out exactly how they were scripted to, and it's that which makes this cat-and-mouse chip game so worthy of debate.
 
Joined Sep 7, 2011
Messages 2,785 (0.60/day)
Location New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
It's surely a weird situation with HBM1. Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release. With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).
HBM, by the accounts I've seen, is not being aggressively ramped, probably due to manufacturing costs and defect rates needing to be passed on in the product's end price. Manufacturing a GPU+HBM on an interposer has its own yield/manufacturing-defect and tolerance issues (complexity and overall size, which could well top out at larger than 800mm²). Xilinx has been shipping 2.5D for a couple of years or more, and has just started production of 3D FPGAs. They are neither small, nor cheap, nor easy to manufacture, as this article on the original Virtex-7 concludes. On the plus side, the price for these 3D chips drops rapidly once the yield/manufacturing issues are under control (the $8K price is roughly half what it was a year ago).
 
Joined Jun 13, 2012
Messages 1,327 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts. 180-190watt draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
It's surely a weird situation with HBM1. Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release. With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).
I doubt the case is that they refuse to launch because of other products in the channel. AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they are working out on the product.

Part of the reason I believe AMD is competitive with nvidia is the higher memory bandwidth that keeps their GPUs there. AMD likely fears the day nvidia switches to HBM.
 
Joined Sep 7, 2011
Messages 2,785 (0.60/day)
Location New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I doubt the case is that they refuse to launch because of other products in the channel.
Most definitely not. HBM is a catalogue product for any vendor to integrate, but I suspect, like all 2.5D/3D stacked ICs, the manufacturing cost needs to be justified by the end product's return on investment.
AMD fulfils the launch-customer requirement, but I suspect that many other vendors are waiting to see how 2.5D pricing aligns with product maturity, and how 3D pricing/licensing and standards shake out. AFAIA, the HBM spec, while ratified by JEDEC, is still part of an ongoing (and not currently resolved) test/validation ratification/spec-finalization process, such as the IEEE P1838 spec that (I assume) will provide a common test/validation platform for 3D heterogeneous die stacking across HBM, HMC, Wide I/O 2, etc.
AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they are working out on the product.
Would seem logical. AMD's R&D is probably stretched pretty thin considering the number of projects they have on their books. I'm also guessing that a huge GPU (by AMD's standards) incorporating a new memory technology that needs a much more sophisticated assembly process than just slapping BGA chips onto a PCB presents its own problems.
 
Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Be nice to find out, eh? I'd obviously vote AMD there, jk.

I have not a clue; the maths is easy, but imho it's too hypothetical, too clean, too easy, and chips don't bin that way. Not many nodes have panned out exactly how they were scripted to, and it's that which makes this cat-and-mouse chip game so worthy of debate.

It certainly is in jest...but here's my theoretical:

The 980 needs 7GHz/256-bit at 1620MHz (yes, it's that over-specced). At 8GHz it could support up to 1850MHz. Samsung's tech *should* give around a ~30% performance boost (my brain seems to think it'll be 29.7%). Currently, Maxwell clocks around 22.5% better than other 28nm designs...which run at around, to slightly less than, 1V = 1GHz. The extra bandwidth of the 980 gives it roughly a 4% performance boost going on a typical clock of 1291.5MHz (according to W1zzard's review), if you wish to do scaling that way. Since I matched them, we don't need that.


1850 * (2048 SP + 512 SFU) / 4096 = 1156.25MHz for Fiji at matched bandwidth/clock....

...but Fiji has 33% more bandwidth than it needs (or should get ~5.33% performance from the extra bandwidth), so...

1050 * 1.0533 ≈ 1106MHz 'real' performance

If you want to get SUPER technical, Fiji's voltage could be 1.14V, matching the lowest voltage of HBM (which operates at 1.14-1.26V), and it should overclock some. Theoretically, that GM214 would need to be around 1.164V (1850/1.297/1.225 = 1.164)...which just so happens to be the accepted best scaling point for voltage/power consumption on 28nm. Weird, that.

You could go even further, assuming Fiji could take up to 1.26V, as could the HBM...and that HBM is going to at least be proportional to 1600MHz DDR3 at 1.35V...squaring those averages all away (and assuming Fiji scales like most chips, not Hawaii), you could end up with something like a 1240MHz/1493MHz Fiji comparing to a ~2100/9074 (yes, that could actually happen) GM214. It wouldn't be much different than how the 770 was set up and clocked, proportionally (similar to GK204 at around 1300MHz; a small design at high voltage, if not pipeline-adjusted to do so at a lower voltage). Given that nvidia clearly took their pipeline/clockspeed cues from ARM designs (which are 2GHz+ on 14nm), and their current memory controllers are over-volted (1.6 vs 1.5V spec)...it's possible (if totally unlikely)!
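If anyone wants to poke at the arithmetic above, here it is as a tiny script (every input is my own assumption from this post, not measured data):

Code:
# Speculative GM204-shrink vs. Fiji estimate, reproducing the numbers above.
gm204_clock = 1850           # MHz, assumed attainable on 14/16nm with 8GHz GDDR5
maxwell_units = 2048 + 512   # shaders + SFUs, counted as above
fiji_units = 4096

# Fiji clock needed to match the shrunk GM204 unit-for-unit:
print(gm204_clock * maxwell_units / fiji_units)   # 1156.25 MHz

# Credit Fiji's ~33% surplus bandwidth as ~5.33% effective performance:
fiji_clock = 1050            # MHz, assumed stock
print(round(fiji_clock * 1.0533))                 # ~1106 MHz 'real' performance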


TLDR: Depending on how you look at it, they would be really, really damn close...and it would be interesting to see just for kicks. That's not to say they won't just go straight to Pascal...which I have got to assume will be something like 32/64/96-ROP designs scaled to 1/2/3 HBM stacks, similar to the setup of Maxwell (.5/1/1.5).

Yeah, yeah...it's all just speculation...but I find the similarities in the possibilities of design scaling (versus the previous gen) quite uncanny. There are really only so many ways to correlatively skin a cat (between units, clockspeeds, and bandwidth), and these companies plan their way forward years ahead of time (hoping nodes will somewhat fit what they designed)...and one such as that makes a lot of sense.

I'm getting into the crazy talk and writing a sentence every ten minutes between doing other stuff....must be time to sleep. :)
 
Joined May 13, 2008
Messages 669 (0.11/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
I doubt the case is that they refuse to launch because of other products in the channel. AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they are working out on the product.


Except....didn't they say exactly that in their last earnings call?

I am in no disagreement that putting 4/8 (16? How does 2x1GB work?) stacked RAM dies + 1/2 GPUs on a (probably) 832 or 1214mm² interposer is likely a huge pain in the ass...it just seemed that was at least *part* of the issue.

Always good info, and I homed in on your saying, "just about the peak performance we can realistically expect from a single GPU on 28nm".

As to the issue of 4GB not being enough or needing 8GB... Isn't it more that the amount of memory is almost meaningless if you don't have the processing power to support it? I thought I read 8GB of HBM will offer up to 1 TB/s of bandwidth, so given that, wouldn't it be a waste for AMD to add extra memory if GPU designs on 28nm physically prevent a die size that could exploit all that? Wouldn't Fiji, with 4096 SPs, lack the oomph and watch 50% of that 1 TB/s of bandwidth go unused?

You made a good point when saying, "This can be noticed by the frantic 'DX12 can combine RAM from multiple GPUs into a single pool' coming across the AMD PR bow." But isn't that a good thing, since a single 390X is not going to offer excellent 4K, while a CrossFire pair with all 8GB (2x 4GB) would act as one? Also, can any of Tonga's (memory) color compression be factored into what Fiji might exploit? I mean, Tonga was made for Apple's 5K Retina display; could that provide an advantage for 4K panels?

Lot of Q's there.

Buffer size and bandwidth are two different things. Sure, they could swap things in and out of the buffer with faster bandwidth, but that's generally impractical (and why extra bandwidth doesn't give much more performance). A larger tangible buffer for higher-res textures is absolutely necessary if you have the processing power to support it, which I think Fiji does (greater than 60fps at 1440p, requiring ~4GB).

I do not believe AMD's (single-card) 8GB setup will be 1280GB/s; I think that is the distinction made by '2x1GB'. I believe it will be 640GB/s, just like the 4GB model. I would love to be wrong, as that would provide a fairly healthy boost to performance just based on the scale.

I personally believe scaling between 1440p and 2160p is where the processing power of the 390X will lie. Surely some games will run great at 30-60fps at 4K, but on the whole I think we're just starting to nudge over 30 at 4K...it's generally a correlation to the consoles (720p Xbox, 900p PS4). I personally don't think 4K60 will be a consistent ultra-setting reality until 14nm and dual GPUs...hopefully totalling 16GB in DX12. Buffer requirements could even go higher, and if there's room, PC versions can always use more effects to offset whatever scaling differences remain.

I'm not at all saying the improvements in DX12 don't matter; they absolutely do. Only that for the lifespan of this card they cannot be depended upon (yet)...and in the future, worst-case, they may still not be. How many DX9 (ports) titles do we still see?

When you smush everything together into a box, I personally believe these cards average out to making sense around ~3200x1800 and 6GB. Obviously RAM amount will play a larger factor later on, as it becomes feasible to scale textures from consoles making the most of their capabilities. That means more Xbox games will be 720p and more PS4 games slightly higher, rather than the current 1080p. Currently the most important scaling factor is raw performance (from those inflated resolutions on the consoles).

There are certainly a lot of factors to consider, and obviously even more unknowns. I can only go on the patterns we've seen.

For instance, I use a 60fps metric. Just like the performance DX12 may bring, perhaps we will all quickly adopt some form of adaptive sync, making that moot. As it currently sits, though, I personally can only draw from the worst-case/lowest common denominator, as nothing else is currently widely applicable.
 
Joined Jul 13, 2008
Messages 306 (0.05/day)
Location EU
Not an AMD fan, but I have to say that's a smart move, AMD.

When they first moved to 28nm, it turned out the fabs had huge issues with it and many chips on the wafer failed, leading to high prices, as I recall. So it's a risky move: if 14nm production technology is not going well, you are in deep shit.
 
Joined Jul 13, 2012
Messages 71 (0.02/day)
I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.

First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB). Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it might hold up production and raise the price over $700. Then we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack, with two connected stacks or 2x chips in a stack)...something nobody really saw coming (HBM1 was going to be 4-Hi 1GB, HBM2 up to 8-Hi 4GB). I have little doubt this was a last-second addition/decision as they noticed people's concerns with 4GB per GPU (especially in CrossFire) for such an expensive investment. This can be noticed by the frantic 'DX12 can combine RAM from multiple GPUs into a single pool' coming across the AMD PR bow.

AMD really seems in a tough place with that. 4GB is likely (optimally) not enough for the 390X, especially with multi-GPU in the current landscape, but 8GB is likely a little too much (and expensive) for a single card (and I bet the 390 non-X will be perfectly fine with 4GB aimed at 1440p)...it's the reason a 6GB similar-performance design from nvidia makes sense...that's just about the peak performance we can realistically expect from a single GPU on 28nm.

One more time with gusto: 28nm will get us ~3/4 of the way to 4K/8GB making sense on the whole. 14nm will pick up the slack...the rest is just gravy (in performance or power savings).

While I want 4K playability as much as anyone in demanding titles (I'm thinking a dual-GPU config on 14nm is in my future, depending on how single cards + DX12 handle the situation), I can't help but wonder if the cards built for 1440p60+ will be the big winners this go-round, as the value gap is so large. That is to say, the 390 (non-X, 4GB), perhaps a cheaper GTX 980, and/or a similarly-priced salvage GM200.



If it's true about the DX12 stacking... man... 970s ftw. 290X ftw.
 
Joined Jun 13, 2012
Messages 1,327 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts. 180-190watt draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
If it's true about the DX12 stacking... man... 970s ftw. 290X ftw.

Don't know how that stacking will really work, if it does. It might not work as well as people hope if it has to go through the PCI-E bus for one card to talk to the other card's memory, or whatever it will do.
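For a sense of scale on why the bus is the worry, compare theoretical peaks (rough numbers; the PCIe figure assumes 3.0 x16 with 128b/130b encoding):

Code:
pcie3_x16 = 15.75   # GB/s, theoretical peak for PCIe 3.0 x16
local_vram = 320    # GB/s, e.g. a 290X's local GDDR5
print(round(local_vram / pcie3_x16))  # ~20x slower to reach the other card's memory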
 
Joined Apr 2, 2011
Messages 2,660 (0.56/day)
I'm seeing plenty of people talking about DX12, and I don't get it. There is no plan out there which states DX12 will only appear on these new cards, and in fact Nvidia has stated that their current line-up is DX12 capable (though what this means in real terms is anyone's guess). Basing wild assumptions on incomplete and inconsistent data is foolish in the extreme.

"Arctic Islands" is a fun name, but why exactly does everyone think the cards will be so much cooler? Heat transfer from a surface is a function of the area, when looking at a simplistic model of a chip. When you decrease the manufacturing size by half, you lose 75% of the surface area. Yes, you'll also have to decrease voltage inside the chip, but if you look at a transistor as a very poor resistor you'll see that power = amperage * voltage = amperage^2 * resistance. To decrease the power flowing through the transistor, just to match the same thermal limits of the old design, you need to either half the amperage or quarter the resistance. While this is possible, AMD has had the tendency to not do this.


HBM is interesting as a concept, but we're still more than 8 months from seeing anything using it. Will AMD or Nvidia use the technology better? I cannot say. I'm willing to simply remain silent until actual numbers come out. Any speculation about a completely unproven technology is just foolish.



TL;DR:
All of this discussion is random speculation. People are arguing about things that they've got no business arguing about. Perhaps, just once, we can wait and see the actual performance, rather than being disappointed when our wild speculations don't match what we actually get. I'm looking forward to whatever AMD offers, because it generally competes with Nvidia on some level and makes sure GPU prices aren't ridiculous.
 
Joined Apr 29, 2014
Messages 4,180 (1.15/day)
Location Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Well, I think it became pretty clear at some point that the delay in this top card was not so much because of 20nm, as rumored, but because they were waiting on HBM to be up to par in higher quantities. I mean, they realized 4GB will only satisfy people's hunger for a short time, especially with the amount of leaked/rumored/hinted-at performance from these GPUs. One thing AMD has had going for it for a long while has been memory size, which has always helped it get ahead in the higher-resolutions category, and they need to at least keep that to be competitive in the high-end market, where most people come for high-end needs (I mean higher refresh rates, resolutions, etc.).

At this point, skipping it was inevitable, as it was not good for high-end performance. Let's just hope 14nm is a great success when it arrives.
 

64K

Joined Mar 13, 2014
Messages 6,104 (1.65/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) MSI RTX 2070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Dell 27 inch 1440p 144 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
"Arctic Islands" is a fun name, but why exactly does everyone think the cards will be so much cooler?

I'm guessing AMD chose that code name because they have found a way to not only take advantage of the improved efficiency of the 14nm process, but also a more efficient architecture on top of that, like Nvidia did with Maxwell: same 28nm process as Kepler, but more efficient, so it used fewer watts.

AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation but it does exist. Over and over I see people citing those two reasons as why they won't buy an AMD card. As far as the extra watts used, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), going by W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.
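The bill math checks out; a quick sketch with the numbers from this post:

Code:
def monthly_cost(extra_watts, hours_per_week, usd_per_kwh=0.10):
    kwh_per_month = extra_watts / 1000 * hours_per_week * 52 / 12
    return kwh_per_month * usd_per_kwh

print(monthly_cost(282 - 269, 15))  # ~$0.08/month, 290X vs. 780 Ti
print(monthly_cost(100, 15))        # ~$0.65/month for a 100-watt gap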

AMD is already the brunt of many jokes about heat/power issues. I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.
 
Joined Apr 19, 2011
Messages 2,198 (0.46/day)
Location So. Cal.
AMD knows that they currently have a reputation for designing GPUs that run too hot... I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.

I'd just remind those people: it wasn't until AMD did GCN and/or 28nm that being poor on power/heat became the narrative, and even then they weren't out of bounds versus Kepler.

Maxwell is good, and saving while gaming is commendable, but the "vampire" load during sleep, compared to AMD's ZeroCore, is noteworthy over a month's time.

I ask: why did Apple go with AMD's Tonga for their iMac with 5K Retina display? Sure, it could be that Apple/Nvidia just didn't care to or need to "partner up". It might have been a timing thing, or more that the specs for GM206 didn't provide the oomph, while a GTX 970M (GM204) wasn't the right fit in specs/price for Apple.

Still, business is business, and keeping the competition from any win enhances one's "cred". Interestingly, we don't see an MXM version of the GM206 from Nvidia, do we?
 
Joined Apr 29, 2014
Messages 4,180 (1.15/day)
Location Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtec ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation but it does exist.
The irony still baffles me on that: when Nvidia does it, it's OK, but if AMD does it, it's the greatest sin in all of the computing world.

I ask: why did Apple go with AMD's Tonga for their iMac with 5K Retina display? Sure, it could be that Apple/Nvidia just didn't care to or need to "partner up". It might have been a timing thing, or more that the specs for GM206 didn't provide the oomph, while a GTX 970M (GM204) wasn't the right fit in specs/price for Apple.

Still, business is business, and keeping the competition from any win enhances one's "cred". Interestingly, we don't see an MXM version of the GM206 from Nvidia, do we?
AMD was chosen by Apple most times because they are more flexible than Nvidia. They allow Apple to make more modifications as necessary to their designs to fit within their spectrum. Not to mention, I am sure there are areas where AMD cuts them some slack, especially in pricing, to make it more appealing.
 
Joined Apr 19, 2011
Messages 2,198 (0.46/day)
Location So. Cal.
AMD was chosen by Apple most times because they are more flexible than Nvidia. They allow Apple to make more modifications as necessary to their designs to fit within their spectrum. Not to mention, I am sure there are areas where AMD cuts them some slack, especially in pricing, to make it more appealing.
Sure, it probably was AMD's "willingness/flexibility" to design and tape out a custom design like Tonga to hit Apple's requirements and suitably drive their 5K Retina display. Providing appropriate graphics while upholding a power envelope for such an AIO construction was paramount; the energy saving during sleep was a "feather in the cap" for both total efficiency and thermal management when idle.

For the R9 285, being a "gelding" from such a design-constrained process, it came away fairly respectable.
 