
AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family

Joined
Jun 13, 2012
Messages
1,316 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual GeForce RTX 4070 Super (2800 MHz @ 1.0 V, ~60 MHz overclock with -0.1 V undervolt, 180-190 W draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Apple has chosen AMD most of the time because they are more flexible than Nvidia. They allow Apple to make whatever modifications are necessary to their designs to fit Apple's requirements. Not to mention I am sure there are areas where AMD cuts them some slack, especially in pricing, to make the deal more appealing.

I am sure a lot also had to do with Apple being able to get the chip super cheap to keep their insanely high margins on all the products they slap their logo on. In some of the non-butt-kissing reviews of that 5K iMac, that GPU has a hard time pushing that resolution even in normal desktop work; you can see stuttering when desktop animations are running. Even a 290X/980 would be hard pressed to push that many pixels.
 
Joined
Apr 2, 2011
Messages
2,645 (0.56/day)
I'm guessing AMD chose that code name because they have found a way to not only take advantage of the improved efficiency of the 14 nm process but also to build a more efficient architecture on top of it, like Nvidia did with Maxwell: same 28 nm process as Kepler, but more efficient, so it used fewer watts.

AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation, but it does exist; over and over I see people citing those two reasons as why they won't buy an AMD card. As far as the extra watts go, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), going by W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.
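If anyone wants to check that arithmetic, here is a minimal sketch; the 15 hours a week and 10 cents per kWh are the assumptions behind the figures above, and the function name is just illustrative.

```python
# Rough check of the figures above: extra electricity cost from a GPU power
# delta, assuming ~15 gaming hours per week and $0.10/kWh (both assumptions).

def monthly_cost_delta(watts_delta, hours_per_week=15, price_per_kwh=0.10):
    """Extra cost per month for `watts_delta` more draw during gaming hours."""
    hours_per_month = hours_per_week * 52 / 12        # ~65 hours
    kwh = watts_delta * hours_per_month / 1000.0      # watt-hours -> kWh
    return kwh * price_per_kwh

print(f"13 W delta (290X vs 780 Ti): ${monthly_cost_delta(13):.2f}/month")   # ~$0.08
print(f"100 W delta (hypothetical):  ${monthly_cost_delta(100):.2f}/month")  # ~$0.65
```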

AMD is already the brunt of many jokes about heat/power issues. I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.

You really haven't answered the question posed here.

Yes, GCN has a bit of a negative image due to heat production. Do you also propose that the reason they called the last generation Volcanic Islands was because they generated heat? If I were in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.

We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock. We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad). AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that just met those base requirements. This isn't a design driven by the name of the project; if it were, the next CPU core would be called "Intel killer." All of this funnels back into my statement that any conclusions drawn now are useless. No facts and no knowledge mean any conclusion can be as easily dismissed as stated.
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I'd just remind everyone, it wasn't until AMD did GCN and/or 28 nm that being poor on power/heat became the narrative
You were in a coma during the whole Fermi/Thermi frenzy? There were 34 pages of it on the GTX 480 review alone. Even AMD were falling over themselves pointing out that heat + noise = bad... although I guess AMD now have second thoughts about publicizing that sort of thing
I ask why Apple went with the AMD Tonga for their iMac 5K Retina display. Sure, it could be that either Apple or Nvidia just didn't care to or need to "partner up". It might have been a timing thing, or the specs for GM206 didn't provide the oomph, while a GTX 970M (GM204) wasn't the right fit in specs/price for Apple.
Probably application, timing and pricing. Nvidia provide the only discrete graphics for Apple's MBP, which is a power/heat-sensitive application. GM206 probably wasn't a good fit for Apple's timeline, and Nvidia probably weren't prepared to price the parts at break-even margins. As I have noted before, AMD supply FirePros to Apple. The D500 (a W7000/W8000 hybrid) to D700 (W9000) upgrade for the Mac Pro costs $300 per card. The difference between those retail FirePro SKUs is ~$1,800 per card - $1,500 more than Apple's upgrade price. If Apple can afford to offer a rebranded W9000 for $300 over the cost of a cut-down W8000, and still apply their margins for profit and amortized warranty, how favourable is the contract pricing for Apple?
Maxwell is good, and saving power while gaming is commendable, but the "vampire" load during sleep compared to AMD ZeroCore is noteworthy over a month's time.
8-10 W is noteworthy??? What does that make 3D, GPGPU, and HTPC video usage scenarios, then?
Still business is business and keeping the competition from any win enhances one's "cred".
A win + a decent contract price might matter more in a business environment. People have a habit of seeing through purchased "design wins". Intel and Nvidia's SoC programs don't look that great when the financials are taken into account - you don't see many people lauding the hardware precisely because many of the wins are bought and paid for.
Interestingly, we don’t see that Nvidia has MXM version of the GM206?
It wouldn't make any kind of sense to use GM206 for mobile unless the company plans on moving GM107 down one tier in the hierarchy, and given the number of "design wins" that the 850M/860M/950M/960M are racking up, that doesn't look likely.
From an engineering/ROI viewpoint, what makes sense? Using a full-die GM206 for mobile parts, or using a 50% salvage GM204 (the GM204-based GTX 965M SKU has the same logic enabled as the GM206) that has the same, or slightly better, performance per watt and a larger heatsink for heat dissipation?
 
Joined
Dec 17, 2011
Messages
359 (0.08/day)
I'm seeing plenty of people talking about DX12, and I don't get it. There is no plan out there which states DX12 will only appear on these new cards, and in fact Nvidia has stated that their current line-up is DX12 capable (though what this means in real terms is anyone's guess).
I think they are talking about what DX12 software i.e. games will bring to the table. It is just as exciting a prospect as a new GPU coming in.
 
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
I am sure a lot also had to do with Apple being able to get the chip super cheap to keep their insanely high margins on all the products they slap their logo on. In some of the non-butt-kissing reviews of that 5K iMac, that GPU has a hard time pushing that resolution even in normal desktop work; you can see stuttering when desktop animations are running. Even a 290X/980 would be hard pressed to push that many pixels.
Well, that is what I said: it comes down to money and the OEM being flexible, which is why Apple chooses them. But as far as pushing 5K goes, it can handle the basics but was never meant to deliver ultimate performance, as nothing we have could offer decent performance at 5K without using multiple GPUs.

You really haven't answered the question posed here.

Yes, GCN has a bit of a negative image due to heat production. Do you also propose that the reason they called the last generation Volcanic Islands was because they generated heat? If I were in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.

We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock. We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad). AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that just met those base requirements. This isn't a design driven by the name of the project; if it were, the next CPU core would be called "Intel killer." All of this funnels back into my statement that any conclusions drawn now are useless. No facts and no knowledge mean any conclusion can be as easily dismissed as stated.
AMD got more flak for this than Nvidia did for the same thing... The problem also wasn't that they designed a bad heatsink; it's more that they didn't make a better one, since it was really just meant to be inexpensive, on the theory that most people at the high end want something better anyway and will probably handle cooling it themselves. Obviously they realized this was a mistake, hence why we are getting something different this time.

As far as DX12 is concerned, I think all we hear at this point is conjecture, filled with a lot of what-ifs and I-thinks instead of pure fact. Until we see it in the open we will not know what being DX12-ready actually means.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,378 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX3.0)
Software W10
Well, that is what I said: it comes down to money and the OEM being flexible, which is why Apple chooses them. But as far as pushing 5K goes, it can handle the basics but was never meant to deliver ultimate performance, as nothing we have could offer decent performance at 5K without using multiple GPUs.


AMD got more flak for this than Nvidia did for the same thing...

No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners -see what term I didn't use there!) to their advantage.
The problem is that when you mock someone's failing and then do it yourself, it's a marketing and PR disaster. The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is that the 390X will be a furnace.

I bloody hope it isn't.
 
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it.
As they were with the FX 5800U...
ATI used it (as did their brand owners -see what term I didn't use there!) to their advantage.
The problem is that when you mock someone's failing and then do it yourself, it's a marketing and PR disaster.
At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.

Not something many companies would actually put together to announce their mea culpa. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.

Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.
 
Joined
Dec 29, 2014
Messages
861 (0.25/day)
As far as the extra watts go, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), going by W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.

Compare a reference GTX 970 to an R9 290 at idle (7 W more), playing a video (60 W more), or gaming on average (76 W more). Any way you slice it, the FPS/$ advantage of the AMD card disappears pretty fast if you actually use it. If it's on all the time, and you spend 6 hours per week watching video and 20 hours a week gaming, you will spend ~$20/yr more on electricity in the US.

http://www.techpowerup.com/reviews/Colorful/iGame_GTX_970/25.html
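A back-of-the-envelope check of that ~$20/yr figure, using the idle/video/gaming deltas above; the 13 cents per kWh rate is an assumed ballpark US residential price, not a number from the review.

```python
# Check of the ~$20/yr figure: GTX 970 vs R9 290 power deltas from above
# (idle +7 W, video +60 W, gaming +76 W), card powered on all week.
# The $0.13/kWh rate is an assumed ballpark US residential price.

PRICE_PER_KWH = 0.13

weekly_hours = {"gaming": 20, "video": 6}
weekly_hours["idle"] = 24 * 7 - sum(weekly_hours.values())   # rest of the week

delta_watts = {"idle": 7, "video": 60, "gaming": 76}

weekly_kwh = sum(delta_watts[k] * h for k, h in weekly_hours.items()) / 1000.0
print(f"extra energy: {weekly_kwh:.2f} kWh/week")                     # ~2.87
print(f"extra cost:   ${weekly_kwh * 52 * PRICE_PER_KWH:.0f}/year")   # ~$19
```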
 
Joined
Jun 13, 2012
Messages
1,316 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual GeForce RTX 4070 Super (2800 MHz @ 1.0 V, ~60 MHz overclock with -0.1 V undervolt, 180-190 W draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.
Not something many companies would actually put together to announce their mea culpa. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.
Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.

Those AMD "fixer" videos are pretty sad. In one of the first ones they used what was at the time an Nvidia GTX 650 (easy to tell by the design of the reference cooler). The guy says it doesn't run his game well, then the fixer hands him a 7970, so they were comparing a low-to-midrange card against their top-of-the-line card at the time. It was a pretty sad marketing attempt. I know some people wouldn't have looked into which card they were claiming couldn't run his game well, but when you did, it was a pretty sad comparison. It would be like comparing a GTX 980 to an R7 260X now.
 
Joined
Apr 2, 2011
Messages
2,645 (0.56/day)
...
AMD got more flak for this than Nvidia did for the same thing... The problem also wasn't that they designed a bad heatsink; it's more that they didn't make a better one, since it was really just meant to be inexpensive, on the theory that most people at the high end want something better anyway and will probably handle cooling it themselves. Obviously they realized this was a mistake, hence why we are getting something different this time.

As far as DX12 is concerned, I think all we hear at this point is conjecture, filled with a lot of what-ifs and I-thinks instead of pure fact. Until we see it in the open we will not know what being DX12-ready actually means.

Did you read my entire post? Perhaps if you did you wouldn't have restated what I said.

AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower-priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD-based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once, you have to admit the initial cooler was a twice-baked turd.

Additionally, GPUs are sold with a cooler, and removing it voids any warranties related to that card. Do you really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings? That's insane.



I'm not saying that Nvidia can do no wrong. Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with AMD's performance at the time. I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip. Arguing that the name, history, or anything else ensures that is foolish. We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best. Arguing over wild speculation is pointless.
 
Joined
Jun 13, 2012
Messages
1,316 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual GeForce RTX 4070 Super (2800 MHz @ 1.0 V, ~60 MHz overclock with -0.1 V undervolt, 180-190 W draw)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower-priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD-based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once, you have to admit the initial cooler was a twice-baked turd.

They didn't even design that cooler; they just tossed on the cooler from the last-gen cards and shipped it.

As the person you quoted touched on, DX12 allows more use of the hardware's full power. I kind of wonder, if AMD pulls the cheap-cooler move again, how much the heat issue will be amplified with DX12 letting the GPU run closer to 100% than was allowed before. It could be the same on the Nvidia side, but their reference cooler isn't half bad.

AMD got more flak for this than Nvidia did for the same thing... The problem also wasn't that they designed a bad heatsink; it's more that they didn't make a better one, since it was really just meant to be inexpensive, on the theory that most people at the high end want something better anyway and will probably handle cooling it themselves.

On the Nvidia card, did that heat cripple performance by 20%? Or did the Nvidia card still run pretty much as it was meant to? Really, the reason AMD took the most heat is that they sold the cards as "up to #### MHz". When you use wording like that, it usually means you won't hit that top clock most of the time.
 
Last edited:
Joined
Apr 29, 2014
Messages
4,180 (1.15/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners -see what term I didn't use there!) to their advantage.
The problem is that when you mock someone's failing and then do it yourself, it's a marketing and PR disaster. The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is that the 390X will be a furnace.
I bloody hope it isn't.
Mocked for it, maybe, but not nearly as badly as some people (including some of the people on this forum) do now, at least from what I saw during those times on other sites. I ran into more people who still said it was a great card and pointed out the many different ways to alleviate the problem, as there were plenty, same as with the R9 290/X. The problem is that I have seen many of those same people then ridicule the same traits on the AMD side, claiming it should have been better... Personally, it does not matter at the end of the day; it's easy to alleviate and something most of us could find a way around on any of the coolers. But AMD mocking it back in those days was a little idiotic, though no matter what AMD says they are always wrong in some people's eyes...
Those AMD "fixer" videos are pretty sad. In one of the first ones they used what was at the time an Nvidia GTX 650 (easy to tell by the design of the reference cooler). The guy says it doesn't run his game well, then the fixer hands him a 7970, so they were comparing a low-to-midrange card against their top-of-the-line card at the time. It was a pretty sad marketing attempt. I know some people wouldn't have looked into which card they were claiming couldn't run his game well, but when you did, it was a pretty sad comparison. It would be like comparing a GTX 980 to an R7 260X now.
Was it stupid? Yes, but it's just a mocking video with an attempt at humor. I doubt they put any more thought into which Nvidia card it was than that it was an Nvidia card; they were more focused on it being a quick bit of humor.
Did you read my entire post? Perhaps if you did you wouldn't have restated what I said.
AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower-priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD-based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once, you have to admit the initial cooler was a twice-baked turd.
Additionally, GPUs are sold with a cooler, and removing it voids any warranties related to that card. Do you really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings? That's insane.
I'm not saying that Nvidia can do no wrong. Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with AMD's performance at the time. I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip. Arguing that the name, history, or anything else ensures that is foolish. We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best. Arguing over wild speculation is pointless.
I was agreeing with you, not making a retort to your post... Sorry if it came off wrong.
On the Nvidia card, did that heat cripple performance by 20%? Or did the Nvidia card still run pretty much as it was meant to? Really, the reason AMD took the most heat is that they sold the cards as "up to #### MHz". When you use wording like that, it usually means you won't hit that top clock most of the time.
It went up to 105°C and could just as easily cause issues. The solution is the same as with the AMD card: have a better-cooled case or use some form of airflow to keep the heat from stagnating inside the card. AMD's driver update, which changed how the fan profile was handled, helped the issue, and putting some nice airflow through the case helped keep the temps down easily in both cases.
Either way, both NVidia and AMD heard the cries and have decided to alleviate the issue on both ends.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
Yes, you'll also have to decrease the voltage inside the chip, but if you look at a transistor as a very poor resistor you'll see that power = amperage × voltage = amperage² × resistance. To decrease the power flowing through the transistor, just to match the thermal limits of the old design, you need to either halve the amperage or quarter the resistance. While this is possible, AMD has had a tendency not to do this.
That's not how resistors or circuits in a CPU work with respect to the parts that are operating as logic. Since we're talking about clock signals, not constant voltage, we're talking about impedance, not resistance, because technically a clock signal can be described as an AC circuit. As a result, it's not as simple as you think it is. On top of that, reducing the size of the die can very well impact the gap in a transistor. A smaller gap means a smaller electric potential is required to open or close it, and less gap means less impedance, even if the voltage stays about as high (maybe a little lower, 0.1 volts?). So while you're correct that resistance increases in the regular circuitry because the wires are smaller, it does not mean a transistor's impedance to a digital signal is higher. In fact, transistor impedance has continued to go down as smaller manufacturing nodes are used.

Lastly, impedance on a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor versus grounding the base for PNP transistors to open them up.

Also, you made a false equivalence. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that rate; it depends on a lot of factors.
 
Joined
Nov 28, 2012
Messages
413 (0.10/day)
Imagine the Irony if the stock versions of these cards get to 90°C XD
 

64K

Joined
Mar 13, 2014
Messages
6,104 (1.66/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) MSI RTX 2070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Dell 27 inch 1440p 144 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
Imagine the Irony if the stock versions of these cards get to 90°C XD

AMD haters would be merciless in ridiculing that card if it did happen. The joke is so obvious that I can't believe AMD would have chosen Arctic Islands if it's going to be a hot, inefficient GPU. AMD needs to be getting everything right for the near future; their stock has fallen 20% in the last week and a half. I wish them the best, but mistakes are a luxury they can't afford right now.
 
Joined
Sep 22, 2012
Messages
1,010 (0.24/day)
Location
Belgrade, Serbia
System Name Intel® X99 Wellsburg
Processor Intel® Core™ i7-5820K - 4.5GHz
Motherboard ASUS Rampage V E10 (1801)
Cooling EK RGB Monoblock + EK XRES D5 Revo Glass PWM
Memory CMD16GX4M4A2666C15
Video Card(s) ASUS GTX1080Ti Poseidon
Storage Samsung 970 EVO PLUS 1TB /850 EVO 1TB / WD Black 2TB
Display(s) Samsung P2450H
Case Lian Li PC-O11 WXC
Audio Device(s) CREATIVE Sound Blaster ZxR
Power Supply EVGA 1200 P2 Platinum
Mouse Logitech G900 / SS QCK
Keyboard Deck 87 Francium Pro
Software Windows 10 Pro x64
The next piece of news will be that AMD delays the R9 390X and goes straight to 14 nm...
Five or six months ago I was convinced they would launch on 20 nm: 20% stronger than GM200, 3D HBM memory, extreme bandwidth, incredible fps,
a card made for 4K resolution... typical AMD. I don't read their news any more, only the headline and maybe a few words...
I don't want to read anything before the R9 390X shows up, because they put out news only to draw attention away from the main questions:
Specification and performance of the R9 390X,
Distance from the GTX 980 and TITAN X,
Temperatures, noise, power consumption.
The last time customers waited this long, AMD pulled off a miracle: they almost beat the TITAN.
They didn't beat it; the TITAN was the better card, with less heat, better OC, a better gaming experience, and more video memory. But it was still a miracle, because nobody expected the same performance as Nvidia's premium card. The main problem is that AMD still has no better card than that Hawaii model, which is almost the same as a crippled GK110. But now it is the middle of 2015, and the TITAN launched at the beginning of 2013. Since then Nvidia has released four stronger models (TITAN Black, GTX 780 Ti, GTX 980, TITAN X), and a fifth, the GTX 980 Ti, is finished and only needs a few weeks to get chips installed on boards and sent to vendors when the time comes.
The gap between Nvidia and AMD is huge now, and it's time for AMD to make something good and force the price of the GTX 980 Ti down.
 
Last edited:
Joined
Apr 19, 2011
Messages
2,198 (0.46/day)
Location
So. Cal.
You were in a coma during the whole Fermi/Thermi frenzy?
8-10 W is noteworthy???

I meant that the power/heat "narrative" being directed toward AMD is new, not the topic in general.

While it's not specifically a cost issue for an individual computer, when you have three machines sleeping, as I do, it is worth being aware of. We should be looking at all such "non-beneficial" loads, or "vampire usage", on everything. That waste should be just as disquieting to your household, and regarded as almost as wasteful (if not more so, since nothing is happening), as the up-front efficiency claims these products are marketed around; the same goes for its effect on a community-wide basis and on the regional power grid.

I'm astounded by your need to deliver "point by point" discord... I didn't mean to rile you personally. o_O
 
Last edited:
Joined
Apr 2, 2011
Messages
2,645 (0.56/day)
That's not how resistors or circuits in a CPU work with respect to the parts that are operating as logic. Since we're talking about clock signals, not constant voltage, we're talking about impedance, not resistance, because technically a clock signal can be described as an AC circuit. As a result, it's not as simple as you think it is. On top of that, reducing the size of the die can very well impact the gap in a transistor. A smaller gap means a smaller electric potential is required to open or close it, and less gap means less impedance, even if the voltage stays about as high (maybe a little lower, 0.1 volts?). So while you're correct that resistance increases in the regular circuitry because the wires are smaller, it does not mean a transistor's impedance to a digital signal is higher. In fact, transistor impedance has continued to go down as smaller manufacturing nodes are used.

Lastly, impedance on a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor versus grounding the base for PNP transistors to open them up.

Also, you made a false equivalence. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that rate; it depends on a lot of factors.

One, I started by stating that a transistor can be approximated as a poor resistor. While incorrect, this is the only way I know of to figure out bled-off energy (electrical to thermal) without resorting to immensely complicated mathematics that are beyond my ken. It also makes calculation of heat transference a heck of a lot easier.

Two, I said exactly that. In a simple circuit, voltage can be expressed as amperage multiplied by resistance, and power can be expressed as amperage multiplied by voltage. I took the extra step and removed the voltage term from the equation because transistors generally have a fixed operational voltage depending upon size. As that is difficult, at best, to determine, I didn't want it to muddy the water.

Third, where exactly did I suggest resistance doubles? I cannot find it in any of my posts. What I did find was a reference to circuit size being halved, which quarters the available surface area to conduct heat. Perhaps this is what you are referring to? I'd like clarification, because if I did say this I'd like to correct the error.


All of this is complicated by a simplistic model, but it doesn't take away from my point. None of the math, or the assumed changes, means that the Arctic Islands chips will run cool, or even cooler than the current Volcanic Islands silicon. Yes, AMD may be using a 75% space-saving process to increase the transistor count by only 50%; yes, the decreased transistor size could well allow a much smaller gate voltage; and yes, the architecture may have been altered to be substantially more efficient (thus requiring fewer clock cycles to perform the same work). All of this is speculation. Until I can buy a card, or see some plausibly factual test results, anything said is wild speculation.
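For reference, the first-order relation usually used for this sort of back-of-the-envelope comparison is dynamic power ≈ activity × capacitance × voltage² × frequency, rather than a plain resistor model. A minimal sketch follows; every number in it is an illustrative placeholder, not a real 28 nm or 14 nm figure, and leakage is ignored entirely.

```python
# First-order CMOS dynamic power: P ~ activity * capacitance * V^2 * frequency.
# All numbers below are illustrative placeholders, NOT real 28 nm / 14 nm data;
# the sketch only shows how capacitance and voltage scaling dominate the result.

def dynamic_power(alpha, cap_farads, volts, freq_hz):
    """Switching power of a block of logic, ignoring leakage."""
    return alpha * cap_farads * volts**2 * freq_hz

old = dynamic_power(alpha=0.2, cap_farads=1.0e-9, volts=1.20, freq_hz=1.0e9)
# Hypothetical shrink: ~35% less switched capacitance, ~0.15 V lower Vdd,
# same clock. Leakage (which a shrink can make worse) is ignored entirely.
new = dynamic_power(alpha=0.2, cap_farads=0.65e-9, volts=1.05, freq_hz=1.0e9)

print(f"relative power after shrink: {new / old:.2f}x")   # ~0.50x in this example
```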
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I meant that the power/heat "narrative" being directed toward AMD is new, not the topic in general.
Isn't that always the case with tech as emotive as the core components of a system? The failings - perceived or real - of any particular architecture/SKU are always held up in comparison with their contemporaries and historical precedent. When one company drops the ball, people are only too eager to fall upon it like a pack of wolves. Regarding heat/power, the narrative shifts every time an architecture falls outside the norm. The HD 2900XT (an AMD product) was pilloried in 2007, the GTX 480/470/465 received the attention three years later, and GCN, with its large-die compute-orientated architecture, comes in for attention now. The primary difference between the present and the past is that in previous years excessive heat and power were just a negative point that could be ameliorated by outright performance - and there are plenty of examples I can think of, from the 3dfx Voodoo 3 to the aforementioned FX 5800U and GeForce 6800 Ultra/Ultra Extreme. The present day sees temps and input power limit performance due to throttling, which makes the trade-off less acceptable for many.
I'm astounded by your need to deliver "point by point" discord... I didn't mean to rile you personally. o_O
Well, I'm not riled. You presented a number of points and I commented upon them individually for the sake of clarity, and to lessen the chances that anyone here might take my comments out of context. I also had three questions regarding your observations. Loading them into a single paragraph lessens their chances of being answered - although I note that splitting them up as individual points fared no better in that regard :laugh: ....so there's that rationale mythbusted.
 

crazyeyesreaper

Not a Moderator
Staff member
Joined
Mar 25, 2009
Messages
9,752 (1.78/day)
Location
04578
System Name Old reliable
Processor Intel 8700K @ 4.8 GHz
Motherboard MSI Z370 Gaming Pro Carbon AC
Cooling Custom Water
Memory 32 GB Crucial Ballistix 3666 MHz
Video Card(s) MSI GTX 1080 Ti Gaming X
Storage 3x SSDs 2x HDDs
Display(s) Dell U2412M + Samsung TA350
Case Thermaltake Core P3 TG
Audio Device(s) Samson Meteor Mic / Generic 2.1 / KRK KNS 6400 headset
Power Supply Zalman EBT-1000
Mouse Mionix NAOS 7000
Keyboard Mionix
So is my GTX 780... a two-year-old card... and the 980 is not too far above it, so even a 780 Ti can keep it in check.

AMD is lagging so much that they needed to skip 20 nm just to be competitive.

A heavily overclocked GTX 780 keeps up with the 970 just fine, and meanwhile an overclocked 780 Ti can get close to the 980.

As such, Nvidia did a lot of R&D to push performance up just enough to counter the overclocked previous generation by a few percentage points. The Titan X and 980 Ti offer what Fury from AMD will offer, so they are relatively similar for now in terms of performance. Nothing's really changed that much.

W1zz managed to get a 17% performance boost on the GTX 780 with overclocking; on the 780 Ti he got an 18% performance boost.

So if we assume a 10% performance gain across the board via overclocking, then yes, the GTX 780 compares to the 970 while the 780 Ti compares to a 980.




Add 10% to the 780 and 10% to the 780 Ti and they have no issues keeping up with the 970 and 980 for the most part. It is game-dependent, but even in the averaged scenario across a multitude of games the result remains the same.
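To make that arithmetic explicit, here is a tiny sketch; the baseline performance indices are made-up placeholders rather than review data, and only the 17%/18% overclocking gains come from the numbers quoted above.

```python
# Illustration of the overclock arithmetic above. The baseline indices are
# made-up placeholders (100 = stock GTX 780), NOT review data; only the
# 17%/18% overclock gains come from the post.

baseline = {"GTX 780": 100, "GTX 780 Ti": 115}        # hypothetical stock indices
rivals   = {"GTX 780": ("GTX 970", 110),              # hypothetical stock indices
            "GTX 780 Ti": ("GTX 980", 128)}
oc_gain  = {"GTX 780": 0.17, "GTX 780 Ti": 0.18}      # W1zzard's quoted OC gains

for card, score in baseline.items():
    rival, rival_score = rivals[card]
    print(f"{card}: {score} stock -> {score * (1 + oc_gain[card]):.0f} OC "
          f"(vs {rival} at {rival_score})")
```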
 
Last edited:
Joined
Jan 2, 2015
Messages
1,099 (0.33/day)
Processor FX6350@4.2ghz-i54670k@4ghz
Video Card(s) HD7850-R9290
AMD should have dumped TSMC long ago, although there aren't that many choices. AMD should try to do a deal with Samsung.
They have been working with Samsung for years, and they are both founding members of the HSA Foundation. They have also been in talks for some time about using 14 nm for AMD's new GPUs and CPUs.
 