Next AMD Flagship Single-GPU Card to Feature HBM

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,012 (2.39/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX3.0)
Software W10
You can't base the findings on the fact that GPUs are like 1 year apart...

Yes you can. Sales are based on what each company has on offer right now, not on generation or release date. The 290X is AMD's best card right now; the GTX 980 is Nvidia's best card (sort of). It's not an issue of which is newer.

It is AMD's problem that they don't have a performance competitor (on perf/watt), not the market's. FWIW, I think their next card should hit the mark based on rumours so far. I think it may be as fast as GM200, but it will consume more power. If it's a faster card and better at 4K, though, power draw be damned. All that said, it's only my opinion.
 
  • Like
Reactions: 64K

64K

Joined
Mar 13, 2014
Messages
6,749 (1.73/day)
Processor i7 7700k
Motherboard MSI Z270 SLI Plus
Cooling CM Hyper 212 EVO
Memory 2 x 8 GB Corsair Vengeance
Video Card(s) Temporary MSI RTX 4070 Super
Storage Samsung 850 EVO 250 GB and WD Black 4TB
Display(s) Temporary Viewsonic 4K 60 Hz
Case Corsair Obsidian 750D Airflow Edition
Audio Device(s) Onboard
Power Supply EVGA SuperNova 850 W Gold
Mouse Logitech G502
Keyboard Logitech G105
Software Windows 10
Yes you can. Sales are based on what each company has on offer right now, not on generation or release date. The 290X is AMD's best card right now; the GTX 980 is Nvidia's best card (sort of). It's not an issue of which is newer.

It is AMD's problem that they don't have a performance competitor (on perf/watt), not the market's. FWIW, I think their next card should hit the mark based on rumours so far. I think it may be as fast as GM200, but it will consume more power. If it's a faster card and better at 4K, though, power draw be damned. All that said, it's only my opinion.

Power draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.
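For anyone who wants to rerun that estimate with their own numbers, here's a minimal sketch of the arithmetic; the wattage, hours, and rate are just the figures from this post:

```python
# Extra monthly electricity cost of a higher-draw card.
extra_watts = 150        # additional draw vs. the current card
hours_per_week = 15      # average gaming time
rate_per_kwh = 0.10      # electricity price, $/kWh

extra_kwh_per_month = extra_watts / 1000 * hours_per_week * 52 / 12
cost = extra_kwh_per_month * rate_per_kwh
print(f"~${cost:.2f} per month")  # a little under $1 with these inputs
```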
 
Joined
Jan 31, 2012
Messages
2,630 (0.56/day)
Location
East Europe
System Name PLAHI
Processor I5-10400
Motherboard MSI MPG Z490 GAMING PLUS
Cooling 120 AIO
Memory 32GB Corsair LPX 2400 Mhz DDR4 CL14
Video Card(s) PNY QUADRO RTX A2000
Storage Intel 670P 512GB
Display(s) Philips 288E2A 28" 4K + 22" LG 1080p
Case Thermaltake URBAN R31
Audio Device(s) Creative Soundblaster Z SE
Power Supply Fractal Design IntegraM 650W
Mouse Logitech Triathlon
Keyboard REDRAGON MITRA
Software Windows 11 Home x 64
Power draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.

+1, excellent argument. Love it when numbers speak.
 
Joined
Apr 29, 2014
Messages
4,286 (1.11/day)
Location
Texas
System Name SnowFire / The Reinforcer
Processor i7 10700K 5.1ghz (24/7) / 2x Xeon E52650v2
Motherboard Asus Strix Z490 / Dell Dual Socket (R720)
Cooling RX 360mm + 140mm Custom Loop / Dell Stock
Memory Corsair RGB 16gb DDR4 3000 CL 16 / DDR3 128gb 16 x 8gb
Video Card(s) GTX Titan XP (2025mhz) / Asus GTX 950 (No Power Connector)
Storage Samsung 970 1tb NVME and 2tb HDD x4 RAID 5 / 300gb x8 RAID 5
Display(s) Acer XG270HU, Samsung G7 Odyssey (1440p 240hz)
Case Thermaltake Cube / Dell Poweredge R720 Rack Mount Case
Audio Device(s) Realtek ALC1150 (On board)
Power Supply Rosewill Lightning 1300Watt / Dell Stock 750 / Brick
Mouse Logitech G5
Keyboard Logitech G19S
Software Windows 11 Pro / Windows Server 2016
Well, you KNOW there is always an extreme fanboy who joins just to troll and dump on a thread, completely unaware that bta is not biased. Happens on both sides of the fence, sadly, depending on whether the news is about the green side or the red side.
Yeah, and it gets quite old, especially when you get people intentionally joking around trying to cook up an argument. It makes the thread cluttered and hard to pull good information from.

Yes, but that's a dual-GPU card...

The issue is that if you put a single GPU past the 300W mark, many people will run into issues with power supplies, for example. It hurts sales; many systems will be incompatible.
The TDP of the R9 290X is 290W according to the TechPowerUp database, so 10W more is not much of a difference. On top of that, TDP is usually not representative of actual power usage anyway: the reference card draws less than 290 watts under load depending on fan speed (though the fans themselves account for little of the power draw), and the GPGPU workloads where I see figures climbing toward ~290W are not representative of real-world use most of the time. Either way, I do not think we will be sweating much over a slightly higher figure.

I think focusing on performance is better in many ways, as long as you do not hit a point that is beyond crazy; otherwise you nullify part of your market by forcing people to invest in more expensive parts to run already expensive parts. Power draw is important to me, but more for the mobile class than anything, since the higher class of cards is aimed at the extreme end of setups (higher resolution, Surround/Eyefinity, 3D, etc.).

Power draw is irrelevant to me as well. Even if my next card drew 300 watts, which is about 150 watts more than my present card, it wouldn't amount to anything. I game an average of about 15 hours a week and my electricity costs 10 cents per kWh, so the difference would be a little less than $1 a month on my bill. What can you buy with $1 these days? A pack of crackers at a convenience store, I guess.
Bingo, though to be fair some places do have very high electricity costs compared to you or me, so I can see it somewhat. But even then, unless you're stressing your computer 24/7 it will not amount to much.
 
Joined
Jan 14, 2015
Messages
8 (0.00/day)
System Name My PC
Processor Ryzen 7 5700G
Motherboard MSI MPG B550 GAMING CARBON WIFI
Cooling Cooler CPU Noctua NH-D15 Chromax
Memory Kingston Fury Renegade 32GB(2x16GB), DDR4-3600Mhz, CL16
Video Card(s) Asus Dual Radeon 6700 XT
All I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps mean more heat, which means less OC headroom for my CPU. All on air, btw.
80 is the maximum acceptable, imo.
 
Joined
Aug 11, 2011
Messages
4,357 (0.90/day)
Location
Mexico
System Name Dell-y Driver
Processor Core i5-10400
Motherboard Asrock H410M-HVS
Cooling Intel 95w stock cooler
Memory 2x8 A-DATA 2999MHz DDR4
Video Card(s) UHD 630
Storage 1TB WD Green M.2 - 4TB Seagate Barracuda
Display(s) Asus PA248 1920x1200 IPS
Case Dell Vostro 270S case
Audio Device(s) Onboard
Power Supply Dell 220w
Software Windows 10 64bit
These AMD hyper-space-nextgen-dxlevel13123-gcn technologies look so good on paper, but somehow always fail to show their strength when it comes to real-world games after release...

If history repeats itself, HBM will become the new GPU memory standard, just like GDDR3 and GDDR5 did in the past (GDDR4 was kind of a misstep for ATi, and only they used it). I would say those are more than proven successes for ATi's R&D. Unified shaders and tessellation also caught on, although ATi's tessellation engine (TruForm) didn't become the standard.
 
Joined
Dec 6, 2005
Messages
10,885 (1.57/day)
Location
Manchester, NH
System Name Senile
Processor I7-4790K@4.8 GHz 24/7
Motherboard MSI Z97-G45 Gaming
Cooling Be Quiet Pure Rock Air
Memory 16GB 4x4 G.Skill CAS9 2133 Sniper
Video Card(s) GIGABYTE Vega 64
Storage Samsung EVO 500GB / 8 Different WDs / QNAP TS-253 8GB NAS with 2x10Tb WD Blue
Display(s) 34" LG 34CB88-P 21:9 Curved UltraWide QHD (3440*1440) *FREE_SYNC*
Case Rosewill
Audio Device(s) Onboard + HD HDMI
Power Supply Corsair HX750
Mouse Logitech G5
Keyboard Corsair Strafe RGB & G610 Orion Red
Software Win 10
All I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps mean more heat, which means less OC headroom for my CPU. All on air, btw.
80 is the maximum acceptable, imo.

The 290X reference cooler is/was a complete piece of crap, both in design and manufacturing.
 
Joined
Mar 29, 2012
Messages
414 (0.09/day)
Location
Brčko DC, Bosnia and Herzegovina
System Name windows 10 pro 64bit
Processor i5 6600k 4.4ghz 1.25v
Motherboard asus maximus viii gene
Cooling BeQuiet Dark Rock pro
Memory 2x8 (16GB) 2860MHz
Video Card(s) gtx 1070 EVGA
Storage ssd x2 128gb raid0/ ssd480gb
Display(s) AOC 1440p 75hz
Case Aerocool DS Cube
Audio Device(s) Asus motherboard integrated
Power Supply be Quiet pure power L8 600w
Mouse Corsair Ironclaw Wireless
Keyboard Logitech G213
Software my favorite World of Tanks :) is that a software?? :)
I'm beginning to believe in conspiracy theories!! Maybe Intel and/or Nvidia is paying those shitlords not to give AMD a die shrink!! I mean, everyone does it..., but not AMD...
Samsung went to freaking 14nm..., Intel shrank as well, Nvidia..., but not AMD..., there is something really weird in that picture!!
But don't judge me..., it's just a theory..., something that crossed my mind :)
 
Joined
Nov 10, 2006
Messages
4,666 (0.71/day)
Location
Washington, US
System Name Rainbow
Processor Intel Core i7 8700k
Motherboard MSI MPG Z390M GAMING EDGE AC
Cooling Corsair H115i, 2x Noctua NF-A14 industrialPPC-3000 PWM
Memory G. Skill TridentZ RGB 4x8GB (F4-3600C16Q-32GTZR)
Video Card(s) ZOTAC GeForce RTX 3090 Trinity
Storage 2x Samsung 950 Pro 256GB | 2xHGST Deskstar 4TB 7.2K
Display(s) Samsung C27HG70
Case Xigmatek Aquila
Power Supply Seasonic 760W SS-760XP
Mouse Razer Deathadder 2013
Keyboard Corsair Vengeance K95
Software Windows 10 Pro
Benchmark Scores 4 trillion points in GmailMark, over 144 FPS 2K Facebook Scrolling (Extreme Quality preset)
All I want to say is, I think they should bring the temps down. I don't really care if they designed the chip to run at 100 degrees Celsius. Nvidia did the same with the 5xx series, which ran really hot. They forget I have other components in my system which may be affected by the high temperatures of the GPU.
So, high temps mean more heat, which means less OC headroom for my CPU. All on air, btw.
80 is the maximum acceptable, imo.
Eeehhh.. If I recall correctly, someone (I think Intel) has been doing research on high-temperature computing, with the theory that it may be cost-effective to design products that can run safely at rather high temperatures, the intended benefit being that the components become easier (or rather, cheaper) to cool. Imagine a CPU that throttled at, say, 150°C and could run quite happily at 120°C. The amount of thermal energy a heatsink dissipates increases with the thermal delta, so what if we increased that delta by making the hot side hotter? If AMD's FX chips and Intel's 46xx/47xx chips could run at those temps, we could probably use the stock cooler to achieve the same overclocks we see on high-end air, and high-end air in turn could push into new territory.

The problem with products that run hot isn't that they were designed to run hot, but more accurately that they were designed to run so close to their thermal limits. If those Nvidia cards could run at 150°C, they'd just turn down the fan speed and most everyone would be happy.
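To make the thermal-delta point concrete, here's a back-of-the-envelope sketch; the 0.35 °C/W thermal resistance is a made-up placeholder for a decent air cooler, not a measured figure:

```python
# Heat a cooler can move scales with the delta between die and ambient:
# Q = delta_T / R_th, with R_th the cooler's thermal resistance (°C/W).
R_TH = 0.35      # hypothetical thermal resistance, °C per watt
AMBIENT = 25.0   # room temperature, °C

for die_temp in (80.0, 120.0, 150.0):
    delta_t = die_temp - AMBIENT
    print(f"die at {die_temp:.0f} °C -> delta {delta_t:.0f} °C "
          f"-> ~{delta_t / R_TH:.0f} W dissipated")
```

Same cooler, hotter die, more than twice the heat moved: that's the whole argument for high-temperature silicon.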
I was thinking of just ordering a GTX 970, but now I'm hesitating again. Argh.
Exact same situation for me. I've been considering a 970, but only because it does really well in the one game I want to play and it plays nicely with some backlighting hardware I have. I'd prefer AMD, but even if the new card performs below expectations, at the very least, it should bump the GTX 970's price down.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,747 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
If you have a 200W heat load, the heat output to your system/room is the same (200W), whether the card is running cool with a high fan speed or warm with a low fan speed.

my electricity costs
You still have heat dumped into your room / high fan noise
 
Joined
Oct 2, 2004
Messages
13,791 (1.88/day)
I have a ridiculously low custom fan speed profile on my HD 7950, so it's absolutely silent. It runs hot, but it's silent. So I frankly don't really care what TDP it has, as long as the cooler can deal with it at low RPM. Which means my next card will be a WindForce 3X again for sure.
 

FordGT90Concept

"I go fast!1!11!1!"
Joined
Oct 13, 2008
Messages
26,259 (4.47/day)
Location
IA, USA
System Name BY-2021
Processor AMD Ryzen 7 5800X (65w eco profile)
Motherboard MSI B550 Gaming Plus
Cooling Scythe Mugen (rev 5)
Memory 2 x Kingston HyperX DDR4-3200 32 GiB
Video Card(s) AMD Radeon RX 7900 XT
Storage Samsung 980 Pro, Seagate Exos X20 TB 7200 RPM
Display(s) Nixeus NX-EDG274K (3840x2160@144 DP) + Samsung SyncMaster 906BW (1440x900@60 HDMI-DVI)
Case Coolermaster HAF 932 w/ USB 3.0 5.25" bay + USB 3.2 (A+C) 3.5" bay
Audio Device(s) Realtek ALC1150, Micca OriGen+
Power Supply Enermax Platimax 850w
Mouse Nixeus REVEL-X
Keyboard Tesoro Excalibur
Software Windows 10 Home 64-bit
Benchmark Scores Faster than the tortoise; slower than the hare.
We need to know more about the performance before we can judge whether 300W is a bad thing or not. If it has 3-5 times the performance of a 290X, I'd argue that it isn't going to waste. When you can get one 300W card that replaces two 200W cards, I'd call that a win.
 
Joined
Dec 6, 2005
Messages
10,885 (1.57/day)
Location
Manchester, NH
If it has 3-5 times the performance of a 290X,

Very doubtful it's anywhere close to that magnitude (it only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10-25% improvement. Just guessin'.
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
You can't base the findings on the fact that GPUs are like 1 year apart...
They both compete in the same market at the same time, and both are current (non-EOL), therefore they can.

By your reasoning, Intel's latest 2-3 platform offerings shouldn't have reviews including AMD FX and 990X chipsets for comparison, since the AMD platform is over 2 (Vishera) and 3 (900 series chipset) years old.
Yes, but can it play at 4K?
Most likely yes. With such memory it will have tons of bandwidth to support it. It's just up to the GPU design to utilize it now...
Bandwidth is only half the equation. HBM is limited to 4GB of DRAM in its first generation. Are you confident that 4GB is capable of holding the textures in every scenario for 4K gaming?
Very doubtful it's anywhere close to that magnitude (it only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10-25% improvement. Just guessin'.
That is likely a fairly low estimate IMO. If the quoted numbers are right, Fiji has 4096 cores, which is a 45% increase over Hawaii. The wider memory I/O afforded by HBM, in addition to colour compression, should also add further, as would any refinement in the caching structure - as was the case between Kepler and Maxwell - assuming it was accorded the priority that Nvidia's architects gave their project.
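A quick sanity check on that 45% figure; Hawaii's 2816 stream processors are a known spec, while the 4096 is the rumoured Fiji count quoted above:

```python
# Core-count scaling: rumoured Fiji vs. known Hawaii.
hawaii_sps = 2816   # R9 290X stream processors (known spec)
fiji_sps = 4096     # rumoured Fiji stream processors

print(f"Increase: {fiji_sps / hawaii_sps - 1:.1%}")  # -> 45.5%
```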
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
Very doubtful it's anywhere close to that magnitude (it only implies higher memory bandwidth)... unless they are talking about an architecture change or a seriously higher clock on the GPU, it'll probably be on the order of a 10-25% improvement. Just guessin'.
HBM should mean a smaller die is required to connect the memory, which translates to a lower TDP. The TDP growth is not coming from the HBM; it is coming from elsewhere.

If the leaked information is to be believed, it has double the stream processors of the 280X, a 17% higher clock speed, and more than double the memory bandwidth.
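For rough scale, a sketch of what those leaked figures imply for theoretical FP32 throughput; the 280X numbers are its known specs, and the Fiji values simply apply the rumoured multipliers:

```python
# Theoretical FP32 throughput: SPs x clock x 2 FLOPs/clock (FMA).
def tflops(sps: int, clock_ghz: float) -> float:
    return sps * clock_ghz * 2 / 1000

r280x_sps, r280x_clk = 2048, 1.00      # known R9 280X specs (boost, GHz)
fiji_sps = r280x_sps * 2               # "double the stream processors"
fiji_clk = round(r280x_clk * 1.17, 2)  # "17% higher clock speed"

print(f"280X: {tflops(r280x_sps, r280x_clk):.1f} TFLOPS")           # ~4.1
print(f"Fiji (rumoured): {tflops(fiji_sps, fiji_clk):.1f} TFLOPS")  # ~9.6
```

On paper that's well over 2x the 280X, which is why the "10-25% over Hawaii" and "3-5x" estimates above bracket such a wide range.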
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
Correct me if I'm wrong: the flagship graphics card, codenamed "Fiji", would vie with the GM200 as the 390/390X, then "Bermuda" is said to become the 380/380X and vie with the 970/980, correct?

First, how does btarunr come up with, "Despite this, "Fiji" could feature TDP hovering the 300W mark..."? The article said "the world's first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer." Honestly, that doesn't sound like anything more than an internal engineering project rather than a reference to "Fiji". It appears to be pure speculation/assumption, not grounded in any evidence that it is an imminent consumer product release.

I also would like btarunr to expound on "despite slow progress from foundry partner TSMC to introduce newer silicon fabs", as that doesn't seem to come from the linked article. It seems a slam on AMD for not having something for what is now 4 months since the 970/980.
We know that TSMC, "as normal", affected both companies' abilities; Nvidia basically had to hold to 28nm for the mainstream, and possibly so will AMD for "Bermuda". Is saying what he does a hint that there's some use of 16nm FinFET for "flagship graphics" cards from both or either? (I don't think that's going out on a limb.) With 16nm FinFET, it would be strange for either side to really need to push the 300W envelope. I believe AMD learned that approaching 300W is just too much for the thermal effectiveness of most reference rear-exhaust coolers (Hawaii).

Despite many rumors about that mocked-up housing, I don't see AMD releasing a single-card "reference water" type cooler for their initial "Fiji" release; reference air cooling will remain. I don't discount that they could provide a "Gaming Special" to gauge the market reaction to a "reference water" cooler as things progress, but not primarily.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
HBM should mean a smaller die is required to connect the memory, which translates to a lower TDP.
I think you missed the point of HBM. The lower power comes about due to the lower speed of the I/O (which is more than offset by the increased width). GDDR5 presently operates at 5-7 Gbps/pin. HBM as shipped now by Hynix operates at 1 Gbps/pin.
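To see how the extra width more than offsets the slower pins, a minimal sketch; the 512-bit GDDR5 bus is Hawaii's known configuration, and the 4 x 1024-bit total reflects the four-stack first-gen HBM setup discussed in this thread:

```python
# Aggregate bandwidth = per-pin rate x bus width / 8 bits-per-byte.
def bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(5.0, 512))       # GDDR5, 512-bit (Hawaii): 320.0 GB/s
print(bandwidth_gbs(1.0, 1024 * 4))  # HBM, 4 stacks x 1024-bit: 512.0 GB/s
```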




The TDP growth is not coming from the HBM; it is coming from elsewhere.
Maybe the 45% increase in core count over Hawaii?
Correct me if I'm wrong: the flagship graphics card, codenamed "Fiji", would vie with the GM200 as the 390/390X, then "Bermuda" is said to become the 380/380X and vie with the 970/980, correct?
There seem to be two schools of thought on that. Original roadmaps point to Bermuda being the second-tier GPU, but some sources are now saying that Bermuda is some future top-tier GPU on a smaller process. The latter raises the question: if this is so, what will be the second tier when Fiji arrives? Iceland is seen as entry level, and Tonga/Maui will barely be mainstream. There is a gap unless AMD is content to sell Hawaii in the $200 market.
So where does btarunr come up with, "Despite this, "Fiji" could feature TDP hovering the 300W mark..."? It appears to be pure speculation/assumption, not grounded in any evidence?
I would have thought the answer was pretty obvious. btarunr's article is based on a Tech Report article (which is referenced as the source). The Tech Report article is based upon a 3DC article (which they linked to), which does reference the 300W number along with other salient pieces of information.
I also would like btarunr to expound on "despite slow progress from foundry partner TSMC to introduce newer silicon fabs". With 16nm FinFET, it would be strange for either side to really need to push the 300W envelope?
TSMC aren't anywhere close to volume production of 16nmFF required for large GPUs (i.e. high wafer count per order). TSMC are on record themselves as saying that 16nmFF / 16nmFF+ will account for 1% of manufacturing by Q3 2015.
I believe AMD learned that approaching 300W is just too much for the thermal effectiveness of most reference rear-exhaust coolers (Hawaii).
IDK about that. The HD 7990 was pilloried by review sites, the general public and, most importantly, OEMs for power/noise/heat issues. It didn't stop AMD from going one better with Hawaii/Vesuvius. If AMD cared anything for heat/noise, why saddle the reference 290/290X with a pig of a reference blower design that was destined to follow the HD 7970 as the biggest example of GPU marketing suicide in recent times?
Why would you release graphics cards with little or no inherent downsides from a performance perspective with cheap-ass blowers that previously invited ridicule? Nvidia proved that a blower design doesn't have to be some Wal-Mart-looking, Pratt & Whitney-sounding abomination as far back as the GTX 690, yet AMD hamstrung their own otherwise excellent product with a cooler guaranteed to cause a negative impression.
Despite many rumors of that mocked up housing, I don’t see AMD releasing a single card "reference water" type cooler for their initial "Fiji" release, reference air cooling will maintain. I don't discount they could provide a "Gaming Special" to find the market reaction as things progress for a "reference water" cooler, but not primarily.
That reference design AIO contract Asetek recently signed was for $2-4m. That's a lot of AIOs for a run of "gaming special" boards, don't you think?
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
I think you missed the point of HBM. The lower power comes about due to the lower speed of the I/O (which is more than offset by the increased width). GDDR5 presently operates at 5-7 Gbps/pin. HBM as shipped now by Hynix operates at 1 Gbps/pin.
It's not per-pin. It's 128 GiB/s per HBM chip with up to 1 GiB density.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
How exactly was HD7970 a GPU marketing suicide?
As I said:
Why would you release graphics cards with little or no inherent downsides from a performance perspective with cheap-ass blowers that previously invited ridicule?
Of the "cons" outlined in the reference card review, price was what AMD could charge, perf/watt was a necessary trade off for compute functionality, and PowerTune/ZeroCore weren't a big influence which leaves...


Now, are you going to tell me that the largest negative gleaned from reviews, users, and tech site/forum feedback WASN'T due to the reference blower shroud?
Do you not think that if AMD had put more resources into putting together a better reference cooling solution, the overall impression of the reference board - THE ONLY OPTION AT LAUNCH - might have been better from a marketing and PR standpoint? How many people stated that they would only consider the HD 7970 once the card was available with non-reference cooling, whether air or water?
 
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
That's backwards. 128 GiB/s per 1 Gb (128 MiB) chip. 4 of them stacked up gets 4 Gb (512 MiB) and 512 GiB/s effective rate. Stick 8 of those on the card and you still have 512 GiB/s and 4 GiB of RAM or be ridiculous and stick 16 of them on the card on two memory controllers for 1 TiB/s and 8 GiB of RAM.
FFS. First-generation HBM is limited to four 1GB stacks (256MB x 4 layers).
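The arithmetic behind those first-gen limits, as a tiny sketch using the figures above:

```python
# First-gen HBM: capacity and bandwidth per the figures above.
STACKS = 4           # interposer limit for generation one
LAYERS = 4           # 4-Hi stacks
GB_PER_LAYER = 0.25  # 256 MB per DRAM layer
GBS_PER_STACK = 128  # 1 Gbps/pin x 1024-bit interface / 8

print(f"{STACKS * LAYERS * GB_PER_LAYER:.0f} GB total")  # 4 GB
print(f"{STACKS * GBS_PER_STACK} GB/s aggregate")        # 512 GB/s
```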



It's not per-pin. It's 128 GiB/s per HBM chip with up to 1 GiB density.
I was referring to the effective data rate (also see the slide above). Lower effective memory speed = lower voltage = lower power envelope, as SK Hynix's own slides show.


EDIT: Sorry about the double post. Thought I was editing the one above.
 
Last edited:
Joined
Apr 30, 2012
Messages
3,881 (0.85/day)
That reference design AIO contract Asetek recently signed was for $2-4m. That's a lot of AIOs for a run of "gaming special" boards, don't you think?

Maybe they plan on AIO-cooling everything from now on... It does look like an Asetek unit with sleeves on the tubes.


EDIT:


The release of this card also lines up with the Asetek announcement. I'm not saying AMD won't have an AIO cooler, but at least with EVGA we have proof in a product.
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
Maybe they plan on AIO-cooling everything from now on... It does look like an Asetek unit with sleeves on the tubes.
Asetek cooling does seem like the new black. I think you're right in thinking that the EVGA card is using an Asetek cooler, judging by comments on the EVGA forum and views of the card without the shroud in place.
If Asetek's cooling becomes the de facto standard for AMD's reference cards, it stands to reason that others will follow suit. To my eyes it certainly looks cleaner than Arctic's hybrid solution - but then, I'm not a big fan of Transformers movies either.
The release of this card also lines up with the Asetek announcement. I'm not saying AMD won't have an AIO cooler, but at least with EVGA we have proof in a product.
Well, the Asetek announcement for the $2-4m contract specifies an OEM (Nvidia or AMD), not an AIB/AIC, so chances are the EVGA contract isn't directly related, any more than Sycom or any of the other outfits adding Asetek units to their range. The fact that the card pictured is an Nvidia OEM reference GTX 980 rather than an EVGA-designed product would also tend to work against the possibility.
Having said that, I'm sure EVGA would love to have sales that warrant committing to a seven-figure contract for cooling units for a single SKU.
 
Last edited:

FordGT90Concept

"I go fast!1!11!1!"
FFS. First-generation HBM is limited to four 1GB stacks (256MB x 4 layers).




I was referring to the effective data rate. Lower effective memory speed = lower voltage = lower power envelope, as SK Hynix's own slide shows.


EDIT: Sorry about the double post. Thought I was editing the one above.
Better document:
https://hpcuserforum.com/presentations/seattle2014/IDC_AMD_EmergingTech_Panel.pdf
819.2 Mb/s to 1,228.8 Mb/s

The fourth slide shows 30W for HBM vs 85W for GDDR5.

Edit: From what I gather, the power savings come from the logic being on the chip rather than off-chip. Nothing has to travel as far to get what it needs, and that substantially cuts power requirements in addition to improving performance by way of reduced latency.
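Putting the slide's power figures against the bandwidth numbers from earlier in the thread gives a rough efficiency comparison; treat the pairings as illustrative, not vendor-blessed:

```python
# Bandwidth per watt, using the 85 W / 30 W figures from the slide and
# the 320 / 512 GB/s configurations discussed earlier in the thread.
setups = {"GDDR5 (512-bit)": (320, 85), "HBM (4 stacks)": (512, 30)}

for name, (bw_gbs, power_w) in setups.items():
    print(f"{name}: {bw_gbs / power_w:.1f} GB/s per watt")
# GDDR5 ~3.8 vs. HBM ~17.1 -> roughly 4.5x the bandwidth per watt
```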
 
Last edited:
Joined
Sep 7, 2011
Messages
2,785 (0.58/day)
Location
New Zealand
Edit: From what I gather, the power savings come from the logic being on the chip rather than off-chip. Nothing has to travel as far to get what it needs, and that substantially cuts power requirements in addition to improving performance by way of reduced latency.
The power savings are certainly helped by moving off-die to the interposer, as is latency (although the trace-distance component of latency is minor compared to the larger decrease due to the slower data rate). Latency in cycles increases with data rate - for example, CAS 3 or 4 is common for DDR2, while DDR3 (the basis for GDDR5) is closer to 8-10 cycles.
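A quick illustration of that cycles-versus-time distinction; these clock/CAS pairings are typical examples, not figures from the thread:

```python
# Absolute CAS latency = cycles / I/O clock. Faster DRAM needs more
# cycles, so latency in nanoseconds stays in the same ballpark.
configs = [
    ("DDR2-800  CL4", 400e6, 4),   # 400 MHz clock, CAS 4
    ("DDR3-1600 CL9", 800e6, 9),   # 800 MHz clock, CAS 9
]
for name, clock_hz, cas in configs:
    print(f"{name}: {cas / clock_hz * 1e9:.1f} ns")
# DDR2-800 CL4 -> 10.0 ns; DDR3-1600 CL9 -> 11.3 ns
```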
The large power savings are also data-rate related (as Hynix themselves highlight). It is no coincidence that vRAM started to take a significant portion of the total board power budget (~30%) with the advent of GDDR5 running in excess of 5 Gbps, or that LPDDR3 and LPDDR4 rose correspondingly for system RAM as data rates increased and the need to reduce voltage became more acute.
I think you'll find that SK Hynix's own presentation (PDF) is somewhat more comprehensive.
 