
AMD's next-gen RDNA 2 rumor: 40-50% faster than GeForce RTX 2080 Ti

Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
Yeah it's interesting. One of the common myths peddled was that AMD would pull ahead once both major consoles used their hardware: all games would run great on AMD hardware, and slowly but surely Nvidia would suffer until it eventually died. The world would then be a better place, and gamers and children alike would play with gumdrop smiles.


I have come to accept this constant recurrence of "underdog wins" for RTG. If RTG does get on top and remains on top for a while, I bet most folks will be singing a different tune.
 
Joined
Mar 10, 2015
Messages
3,984 (1.19/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
I have come to accept this constant recurrence of "underdog wins" for RTG. If RTG does get on top and remains on top for a while, I bet most folks will be singing a different tune.

I probably shouldn't say this, to prevent a shit show... but it's a mirror image of American politics, just Red vs Green instead. I don't get it. How did something like the Red Green Show devolve into this?



Edit: Sorry for the huge image... it doesn't seem to want to let me edit it from my phone.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
I probably shouldn't say this, to prevent a shit show... but it's a mirror image of American politics, just Red vs Green instead. I don't get it. How did something like the Red Green Show devolve into this?



Edit: Sorry for the huge image... it doesn't seem to want to let me edit it from my phone.


Humans LOVE picking an easy side and fighting. It rewards our primate brains with that delicious dopamine. Not to mention the sense of superiority folks get because these are "high-tech gadgets", while in reality we're doing no more than advanced Lego building.
 
Joined
Mar 18, 2015
Messages
2,960 (0.89/day)
Location
Long Island
AMD GFX card rumors are replacing "The Boy Who Cried Wolf" as the most popular nursery rhyme. "AMD is gonna ..." has been the rallying cry for 8 years now.

Mantle is going to change everything ....
HBM2 is going to change everything ....
2xx series is going to change everything ....
4xx is going to change everything ....
5xxe is going to change everything ....
Fury is going to change everything ....
Vega is going to change everything ....
5xxx is going to change everything ....

For a while, with each generation ... AMD lost a tier. With all cards OC'd, nVidia had the top 2 spots with 7xx / 2xx ... then they took a 3rd with the 950 ... then a 4th with the 1060. And while AMD narrowed the gap significantly performance-wise, they also caught up on efficiency, temps and noise. The 5600 XT wins at all these things in its performance tier.

It's like the ever-cheating spouse who says "this time it's going to be different." The only shining light keeping some hope alive in my head over all this time is the 5600 XT. Not interested in rumors, not interested in die size or technology ... the only things that affect our purchase decisions / recommendations are a) performance in applications we or our "customer" actually use ... on a daily basis, b) noise, c) temps, d) power consumption, e) total system cost, f) total ownership cost, including PSU size, power costs, and additional fans if there's a heat disparity. Everything else is just noise.
 
Joined
Mar 10, 2015
Messages
3,984 (1.19/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Mantle is going to change everything ....

This wasn't really false.

AMD lost a tier. With all cards OC'd, nVidia had the top 2 spots with 7xx / 2xx ... then they took a 3rd with the 950 ... then a 4th with the 1060.

Lost a tier in what? I don't recall the 950 beating AMD's best.
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
AMD GFX card rumors are replacing "The Boy Who Cried Wolf" as the most popular nursery rhyme. "AMD is gonna ..." has been the rallying cry for 8 years now.

Mantle is going to change everything ....
HBM2 is going to change everything ....
2xx series is going to change everything ....
4xx is going to change everything ....
5xxe is going to change everything ....
Fury is going to change everything ....
Vega is going to change everything ....
5xxx is going to change everything ....

For a while, with each generation ... AMD lost a tier. With all cards OC'd, nVidia had the top 2 spots with 7xx / 2xx ... then they took a 3rd with the 950 ... then a 4th with the 1060. And while AMD narrowed the gap significantly performance-wise, they also caught up on efficiency, temps and noise. The 5600 XT wins at all these things in its performance tier.

It's like the ever-cheating spouse who says "this time it's going to be different." The only shining light keeping some hope alive in my head over all this time is the 5600 XT. Not interested in rumors, not interested in die size or technology ... the only things that affect our purchase decisions / recommendations are a) performance in applications we or our "customer" actually use ... on a daily basis, b) noise, c) temps, d) power consumption, e) total system cost, f) total ownership cost, including PSU size, power costs, and additional fans if there's a heat disparity. Everything else is just noise.
What are you on about? Which of the millions of SKUs of the 1060 was better than Polaris? Because I didn't see it.

And you're mentioning TCO like we all run servers; less than 1% give a shit about TCO, or power use for that matter.

And cost comes first: initial cost for at least 80% of buyers, then brand by all accounts, then performance, and that's about as far as 99% go.
 
Joined
Apr 30, 2011
Messages
2,652 (0.56/day)
Location
Greece
Processor AMD Ryzen 5 5600@80W
Motherboard MSI B550 Tomahawk
Cooling ZALMAN CNPS9X OPTIMA
Memory 2*8GB PATRIOT PVS416G400C9K@3733MT_C16
Video Card(s) Sapphire Radeon RX 6750 XT Pulse 12GB
Storage Sandisk SSD 128GB, Kingston A2000 NVMe 1TB, Samsung F1 1TB, WD Black 10TB
Display(s) AOC 27G2U/BK IPS 144Hz
Case SHARKOON M25-W 7.1 BLACK
Audio Device(s) Realtek 7.1 onboard
Power Supply Seasonic Core GC 500W
Mouse Sharkoon SHARK Force Black
Keyboard Trust GXT280
Software Win 7 Ultimate 64bit/Win 10 pro 64bit/Manjaro Linux

ARF

Joined
Jan 28, 2020
Messages
3,959 (2.55/day)
Location
Ex-usa
Joined
Mar 10, 2015
Messages
3,984 (1.19/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
What does 80 CUs actually represent?

NVidia and AMD GPUs are composed of "compute units". For those familiar with CPUs, a "CU" is similar to an AVX512 unit, except NVidia / AMD GPUs are much wider. I forget if AMD CUs were composed of 32 SIMD cores, or 64 SIMD cores. For this post, I'm assuming 64 SIMD-cores per CU.

NVidia calls them "SMs" (Streaming Multiprocessors), but it's basically the same concept. NVidia SMs change grossly between generations, and AMD's CU for Navi is extremely different from Vega (and earlier chips), so I don't quite recall RDNA's details right now.

--------

From a programming perspective, a CU controls the instruction pointer, so a CU is close to a CPU core. If we pretended it were a CPU, we'd call it an 80-core CPU. AMD / NVidia market based on the number of shaders, however, so we'd probably call it a 5120-shader GPU. (64 shaders per CU x 80 CUs == 5120 shaders, or CUDA cores.)
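That arithmetic is easy to sanity-check. A minimal sketch, assuming 64 shaders (SIMD lanes) per CU as in the post above; the real per-CU width depends on the architecture:

```python
# Marketing "shader count" is just CUs times SIMD lanes per CU.
# 64 lanes/CU is the assumption from this post, not a confirmed spec.
def total_shaders(compute_units: int, shaders_per_cu: int = 64) -> int:
    return compute_units * shaders_per_cu

print(total_shaders(80))  # 80 CUs x 64 -> 5120 shaders / "CUDA core" equivalents
```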
 
Joined
Mar 10, 2010
Messages
11,878 (2.30/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
NVidia and AMD GPUs are composed of "compute units". For those familiar with CPUs, a "CU" is similar to an AVX512 unit, except NVidia / AMD GPUs are much wider. I forget if AMD CUs were composed of 32 SIMD cores, or 64 SIMD cores. For this post, I'm assuming 64 SIMD-cores per CU.

NVidia calls them "SMs" (Streaming Multiprocessors), but it's basically the same concept. NVidia SMs change grossly between generations, and AMD's CU for Navi is extremely different from Vega (and earlier chips), so I don't quite recall RDNA's details right now.

--------

From a programming perspective, a CU controls the instruction pointer, so a CU is close to a CPU core. If we pretended it were a CPU, we'd call it an 80-core CPU. AMD / NVidia market based on the number of shaders, however, so we'd probably call it a 5120-shader GPU. (64 shaders per CU x 80 CUs == 5120 shaders, or CUDA cores.)
On GCN, AMD used four 16-lane SIMD units to execute a 64-wide wavefront; RDNA uses a pair of 32-lane SIMD units, which can either pair up to run wave64 or run two wave32 wavefronts.
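For what it's worth, the issue-rate consequence of those widths can be sketched like this. It's a simplification that ignores scheduling and dual-issue details; the lane widths are the ones described above:

```python
import math

def cycles_per_wavefront(wave_size: int, simd_lanes: int) -> int:
    """Cycles for one SIMD to step through every lane of a wavefront."""
    return math.ceil(wave_size / simd_lanes)

print(cycles_per_wavefront(64, 16))  # GCN: wave64 over a 16-lane SIMD -> 4 cycles
print(cycles_per_wavefront(32, 32))  # RDNA: wave32 on a 32-lane SIMD -> 1 cycle
print(cycles_per_wavefront(64, 32))  # RDNA in wave64 mode -> 2 cycles
```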
 
Joined
Jul 25, 2020
Messages
19 (0.01/day)
Location
Washington
System Name Crusher
Processor 3900x
Motherboard Crosshair VII Hero (wifi)
Cooling Chiller + waterblocks on everything
Memory Team group Dark Pro B-die 3810 14-14-13-13-28-1t 32GB 4x8
Video Card(s) EVGA Black 2080ti
Storage 2x 1TB NVME M.2, PNY xcelrate and WD SN550
Display(s) Nixio X34
Case Lianli Lancool II Mesh
Power Supply AX1200i
Nope.

Xx% faster WHEN YOU'RE NOT FIGHTING DRIVER ISSUES.

Faster, but non-functional, hardware is slower. Step 1: write good drivers.

AMD cannot; see the tech support areas on the web for examples. It's easy to find a horde of people with driver issues on 5xxx cards STILL.
 
Joined
Feb 20, 2019
Messages
7,309 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
I haven't read back through all nine pages yet but one thing I will add to this "AMD hasn't taken the performance crown" discussion is that they haven't even been trying.

Navi 10 is a small, cheap, profitable chip by Nvidia standards. The 5700XT has fewer transistors than the RTX 2060S and obviously at 7nm is a much smaller die that is cheaper to produce (despite the higher cost of TSMC's 7nm process).

Yes, the 2070S/2080S and 2080Ti are faster, but they're also eye-wateringly expensive. Big Navi is also going to be expensive - both to produce and for you and me as consumers.

I somehow doubt Big Navi is it, but I really like the idea of GPUs following the Zen2 design of gluing chiplets together to make a big chip. A "Threadripper" or "EPYC" GPU based on 4 or 8 Navi cores could give us linear performance/$ increases, compared to traditional single-die solutions that become exponentially more expensive as the dies get larger. Imagine the performance of 8 5700XTs working in tandem. Clockspeeds would need to be dialled back to keep power consumption in check, but it'd still probably be 5-6x faster than a 5700XT....
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
I somehow doubt it, but I really like the idea of GPUs following the Zen2 design of gluing chiplets together to make a big chip. A "Threadripper" or "EPYC" GPU based on 4 or 8 Navi cores could give us linear performance/$ increases, compared to traditional single-die solutions that become exponentially more expensive as the dies get larger.

It's strange, isn't it? Even though GPUs are super-parallel machines, it's far harder to make a chiplet GPU design than a chiplet CPU design. It's probably because of the stupid-high memory bandwidth of GPUs: Zen2 Infinity Fabric is only 40 GBps or 50 GBps per IFOP, while GPUs have 500 GBps or 1000 GBps of bandwidth to GDDR6 or HBM2.

GPU programmers need another "tier" of RAM to represent something slower than VRAM (~500GBps on higher-end GPUs right now) but faster than PCIe x16 (15GBps) to support chiplets. Or some new architecture with a lot of thought put into memory channels needs to be made.

EDIT: NVidia's NVLink and DGX-2 computers are the right kind of thinking for this problem.
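Putting the numbers quoted in this post side by side shows the size of the gap. These are the post's rough figures, not datasheet values:

```python
# Bandwidth tiers a chiplet GPU would have to bridge (GB/s, figures from
# this post; real values vary by product and generation).
if_bw = 50     # Zen2 Infinity Fabric, per IFOP link
pcie_bw = 15   # PCIe 3.0 x16
vram_bw = 500  # higher-end GDDR6 GPU; HBM2 parts reach ~1000

# Any chip-to-chip link has to land somewhere between these tiers:
print(f"VRAM vs IFOP link: {vram_bw / if_bw:.0f}x faster")    # 10x
print(f"VRAM vs PCIe x16:  {vram_bw / pcie_bw:.0f}x faster")  # 33x
```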
 
Joined
Dec 22, 2011
Messages
3,890 (0.86/day)
Processor AMD Ryzen 7 3700X
Motherboard MSI MAG B550 TOMAHAWK
Cooling AMD Wraith Prism
Memory Team Group Dark Pro 8Pack Edition 3600Mhz CL16
Video Card(s) NVIDIA GeForce RTX 3080 FE
Storage Kingston A2000 1TB + Seagate HDD workhorse
Display(s) Samsung 50" QN94A Neo QLED
Case Antec 1200
Power Supply Seasonic Focus GX-850
Mouse Razer Deathadder Chroma
Keyboard Logitech UltraX
Software Windows 11
I have come to accept this constant recurrence of "underdog wins" for RTG. If RTG does get on top and remains on top for a while, I bet most folks will be singing a different tune.

Hey, the underdog apparently might beat your two-year-old graphics card. Hahaha, take that, Nvidia!
 
Joined
Nov 11, 2016
Messages
3,067 (1.13/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
LOL, a lot of "industry experts" here keep comparing the Navi die size to Turing, but they are quick to forget the Vega 56 die size.
With a die size of 495mm2 + 8GB HBM2 (GamersNexus estimated the HBM2 modules and interposer alone cost 175usd), AMD was still happy to sell Vega 56 for 400usd in 2017 and 300usd in 2018-2019.
It probably cost Nvidia less to produce the 2080 and 2080 Super compared to Vega 56, but Nvidia can charge 700usd for them just because there is no competition. But hey, at least having a healthy profit margin means you can fund a proper driver development team that makes proper drivers :).
 

Dux

Joined
May 17, 2016
Messages
511 (0.18/day)
I'm sure AMD will make something that competes with the 2018 RTX 2080 Ti. But Nvidia will again have the fastest cards, and probably ones a lot more capable at ray tracing. It would be great if AMD caught up to Nvidia's highest-end RTX 3000 series, but it's probably not gonna happen.
 
Joined
Feb 20, 2019
Messages
7,309 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
LOL, a lot of "industry experts" here keep comparing the Navi die size to Turing, but they are quick to forget the Vega 56 die size.
With a die size of 495mm2 + 8GB HBM2 (GamersNexus estimated the HBM2 modules and interposer alone cost 175usd), AMD was still happy to sell Vega 56 for 400usd in 2017 and 300usd in 2018-2019.
It probably cost Nvidia less to produce the 2080 and 2080 Super compared to Vega 56, but Nvidia can charge 700usd for them just because there is no competition. But hey, at least having a healthy profit margin means you can fund a proper driver development team that makes proper drivers :).
Exactly. AMD were making less profit on Vega than Nvidia have seen in a long time: simple cost to produce vs sale price.

Anyone using die sizes in mm² to compare chips fabricated at different facilities on different process nodes needs a slap in the face and a reality check. I use transistor count, but even that is only half of the picture, because it's a very generalised figure that can't be directly compared between architectures:

AMD's stream processors are more capable vector units with scalar calculations offloaded to dedicated scalar-only logic (2x INT32 scalar units per 64-SP CU), whilst Nvidia's 'CUDA cores' are simple scalar-based units and not much else.

Meanwhile, Nvidia dedicates a good chunk of the transistor budget to Tensor cores and DXR calculations, something RDNA1 doesn't have. "But what about that Crytek raytracing demo running on AMD?" That's because it uses the far more powerful vector capabilities of GCN to do raytracing with shader compute power instead of dedicated DXR-specific hardware like RTX does. One is not inherently better than the other; it's just that GCN's (and RDNA's) vector 'cores' were around long before DXR or RTX, so they're obviously not optimised specifically for raytracing. If anything, the tensor cores in Turing are actually more GCN-like than anyone with green-tinted spectacles is willing to admit.

Regardless of the underlying tech or how well it's implemented by drivers and software developers, the simple fact is that one 10bn-transistor GPU is going to cost approximately the same to make as another 10bn-transistor GPU, provided they are on the same node at the same foundry.

I don't actually know exactly how competitive the relative costs of each process are. All things being equal, smaller nodes are cheaper to make chips on; however, the market is subject to supply and demand, and there's a lot of demand for the smallest nodes and only so much supply. I think it was Ian Cutress talking about how, in the first two quarters of TSMC's 7nm, almost all of the cost savings from die shrinkage were offset by the extra cost of that process node. I guess that's what inspired the "chiplet" design of Zen2: mix and match cheap and expensive silicon on the same package. Hopefully, now that TSMC has improved and scaled up the 7FF node, it's closer to being back in line with the generally accepted 'smaller = cheaper' rule that's been pushing Moore's Law forward for almost three decades now....
 
Joined
Nov 11, 2016
Messages
3,067 (1.13/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
Exactly. AMD were making less profit on Vega than Nvidia have seen in a long time: simple cost to produce vs sale price.

Anyone using die sizes in mm² to compare chips fabricated at different facilities on different process nodes needs a slap in the face and a reality check. I use transistor count, but even that is only half of the picture, because it's a very generalised figure that can't be directly compared between architectures:

Pretty bold words for someone who doesn't know that TSMC charges Nvidia and AMD per wafer, not per working die, smh...
Let's say that for each 12nm wafer, TSMC charges 300usd more than GF, according to IC Insights



Spread that 300usd difference across the roughly 60 dies per 300mm wafer, and the difference is only 5usd per die (~500mm2 die).
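Spelled out, that last division looks like this (numbers straight from the post):

```python
wafer_premium_usd = 300  # extra cost per 12nm wafer at TSMC vs GF, per the post
dies_per_wafer = 60      # ~500 mm^2 dies on a 300 mm wafer

premium_per_die = wafer_premium_usd / dies_per_wafer
print(f"${premium_per_die:.0f} extra per die")  # $5
```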
 

ARF

Joined
Jan 28, 2020
Messages
3,959 (2.55/day)
Location
Ex-usa
LOL, a lot of "industry experts" here keep comparing the Navi die size to Turing, but they are quick to forget the Vega 56 die size.
With a die size of 495mm2 + 8GB HBM2 (GamersNexus estimated the HBM2 modules and interposer alone cost 175usd), AMD was still happy to sell Vega 56 for 400usd in 2017 and 300usd in 2018-2019.
It probably cost Nvidia less to produce the 2080 and 2080 Super compared to Vega 56, but Nvidia can charge 700usd for them just because there is no competition. But hey, at least having a healthy profit margin means you can fund a proper driver development team that makes proper drivers :).

Vega is a compute-oriented architecture which behaves badly if you put it into gaming, which is not its primary application area to begin with.

Probably several years ago, under the threat of bankruptcy, AMD cancelled most of its meaningful GPU projects, including a larger-die Navi 1st-gen chip, and renamed the remaining 251 sq. mm chip to Navi 10. Navi 10 should have been at least 495 sq. mm, but hey, no...

A totally faulty strategy in the graphics department.
 
Joined
Nov 7, 2017
Messages
1,470 (0.62/day)
Location
Ibiza, Spain.
System Name Main
Processor R7 5950x
Motherboard MSI x570S Unify-X Max
Cooling D5 clone, 280 rad, two F14 + three F12S bottom/intake, two P14S + F14S (Rad) + two F14 (top)
Memory 2x8 GB Corsair Vengeance bdie 3600@CL16 1.35v
Video Card(s) GB 2080S WaterForce WB
Storage six M.2 pcie gen 4
Display(s) Sony 50X90J
Case Tt Level 20 HT
Audio Device(s) Asus Xonar AE, modded Sennheiser HD 558, Klipsch 2.1 THX
Power Supply Corsair RMx 750w
Mouse Logitech G903
Keyboard GSKILL Ripjaws
VR HMD NA
Software win 10 pro x64
Benchmark Scores TimeSpy score Fire Strike Ultra SuperPosition CB20
@Vya Domus
that was exactly my question, not the opposite, as you try to make it.
 
Joined
Feb 20, 2019
Messages
7,309 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
Pretty bold words for someone who doesn't know that TSMC charges Nvidia and AMD per wafer, not per working die, smh...
Let's say that for each 12nm wafer, TSMC charges 300usd more than GF, according to IC Insights



Spread that 300usd difference across the roughly 60 dies per 300mm wafer, and the difference is only 5usd per die (~500mm2 die).
It's not that simple, though. If a wafer contains 20 defects and the process node is large enough that you only get 40 chips out of it, then you get 20 working parts.

With the same 20 defects on a wafer and the same GPU design, but on a smaller node that lets you get 80 chips out of it, you still get 20 defects, but now you have 60 working parts instead of 20.

All else being equal, smaller nodes are cheaper because they double-dip into the beneficial attributes: more parts per wafer means a lower cost per part, and smaller parts mean a smaller percentage of them will be defective, so better yields.
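That double-dip can be sketched with a deliberately crude model where every defect is assumed to kill exactly one die (real yield models are statistical, e.g. Poisson-based):

```python
def working_parts(gross_dies: int, defects: int) -> int:
    # Crude worst-case model: each defect lands on a distinct die and kills it.
    return max(gross_dies - defects, 0)

# Same 20 defects per wafer, two die sizes (numbers from the post):
large_die = working_parts(40, 20)  # 20 good parts of 40 -> 50% yield
small_die = working_parts(80, 20)  # 60 good parts of 80 -> 75% yield
print(large_die, small_die)  # 20 60
```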
 
Joined
Nov 11, 2016
Messages
3,067 (1.13/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
It's not that simple, though. If a wafer contains 20 defects and the process node is large enough that you only get 40 chips out of it, then you get 20 working parts.

With the same 20 defects on a wafer and the same GPU design, but on a smaller node that lets you get 80 chips out of it, you still get 20 defects, but now you have 60 working parts instead of 20.

All else being equal, smaller nodes are cheaper because they double-dip into the beneficial attributes: more parts per wafer means a lower cost per part, and smaller parts mean a smaller percentage of them will be defective, so better yields.

You can get a rough idea of how much a silicon die costs from this calculator (learned about it from AdoredTV)

Here is how much a wafer costs (from Wccftech)


Enter the die size, wafer price (6000usd for 16nm and 10000usd for 7nm) and yield to get the cost per silicon die.
Now the only unknown value is yield. With TSMC 16/12nm being a much more mature node, I would think the TU104 (545mm2) and TU106 (445mm2) get around 80% yield, while TSMC 7nm would be in the 60% range (AdoredTV estimated Navi 10's yield was around 50% at launch).

So from my calculation:
TU104: 75usd per die
TU106: 60usd per die
Navi 10: 70usd per die

As you can see, AMD doesn't get any cost benefit from going with TSMC 7nm, but rather the performance and efficiency associated with the superior node (which AMD sorely needs).

Note: there are also defective dies that can be laser-cut and salvaged into a lower variant, like the RTX 2060 for TU106, the 2070 Super for TU104, and the 5700 and 5600 XT for Navi 10. But for the sake of simplicity I only take into account the perfect dies that make it into the RTX 2070 (TU106), 2080 Super (TU104) and 5700 XT (Navi 10).
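For the curious, per-die numbers like these can be roughly reproduced with the standard gross-dies-per-wafer approximation. This is a sketch of the kind of formula such calculators typically use, not the linked calculator itself:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Common gross-die approximation: wafer area over die area, minus an
    edge-loss term. A sketch; real layouts depend on die aspect ratio too."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2: float, wafer_cost_usd: float,
                      yield_rate: float) -> float:
    return wafer_cost_usd / (dies_per_wafer(die_area_mm2) * yield_rate)

# Inputs from the post: $6000 per 16/12nm wafer, $10000 per 7nm wafer.
print(f"TU104:   ${cost_per_good_die(545, 6000, 0.80):.0f}")   # ~$74
print(f"TU106:   ${cost_per_good_die(445, 6000, 0.80):.0f}")   # ~$59
print(f"Navi 10: ${cost_per_good_die(251, 10000, 0.60):.0f}")  # ~$70
```

These land within a few dollars of the $75 / $60 / $70 estimates above, so a similar gross-die formula is plausibly what the calculator uses.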
 
Joined
Feb 20, 2019
Messages
7,309 (3.86/day)
System Name Bragging Rights
Processor Atom Z3735F 1.33GHz
Motherboard It has no markings but it's green
Cooling No, it's a 2.2W processor
Memory 2GB DDR3L-1333
Video Card(s) Gen7 Intel HD (4EU @ 311MHz)
Storage 32GB eMMC and 128GB Sandisk Extreme U3
Display(s) 10" IPS 1280x800 60Hz
Case Veddha T2
Audio Device(s) Apparently, yes
Power Supply Samsung 18W 5V fast-charger
Mouse MX Anywhere 2
Keyboard Logitech MX Keys (not Cherry MX at all)
VR HMD Samsung Oddyssey, not that I'd plug it into this though....
Software W10 21H1, barely
Benchmark Scores I once clocked a Celeron-300A to 564MHz on an Abit BE6 and it scored over 9000.
You can get a rough idea of how much a silicon die costs from this calculator (learned about it from AdoredTV)

Here is how much a wafer costs (from Wccftech)


Enter the die size, wafer price (6000usd for 16nm and 10000usd for 7nm) and yield to get the cost per silicon die.
Now the only unknown value is yield. With TSMC 16/12nm being a much more mature node, I would think the TU104 (545mm2) and TU106 (445mm2) get around 80% yield, while TSMC 7nm would be in the 60% range (AdoredTV estimated Navi 10's yield was around 50% at launch).

So from my calculation:
TU104: 75usd per die
TU106: 60usd per die
Navi 10: 70usd per die

As you can see, AMD doesn't get any cost benefit from going with TSMC 7nm, but rather the performance and efficiency associated with the superior node (which AMD sorely needs).

Note: there are also defective dies that can be laser-cut and salvaged into a lower variant, like the RTX 2060 for TU106, the 2070 Super for TU104, and the 5700 and 5600 XT for Navi 10. But for the sake of simplicity I only take into account the perfect dies that make it into the RTX 2070 (TU106), 2080 Super (TU104) and 5700 XT (Navi 10).
Nice calculator link, but your argument only holds up if you use production estimates from around 16 months ago, near the start of the 7nm process.

Zen2 chiplet yields were 92% flawless in December (8 months ago), and feeding that 8-month-old data into the calculator at a 250mm2 die size gives me 47 defective dies out of 240, for a Navi 10 yield estimate of 80.4%, making each Navi 10 die $51.64 from the calculator you just linked.

You can see from posts like this that 7FF yielded better and sooner than its predecessors too, so the lower yields of TSMC's 7FF that you're talking about and basing your reasoning on were pretty much untrue by the time Navi AIB cards were on store shelves a year ago.



Your logic is sound, but you're using data so old that it's giving you useless, incorrect results.
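Redoing the division with those newer numbers is straightforward (240 gross dies and 47 defective from this post, the $10,000 7nm wafer price from earlier in the thread):

```python
gross_dies = 240  # calculator output for a ~250 mm^2 die on a 300 mm wafer
defective = 47    # at the newer (December) 7nm defect-density figures
good_dies = gross_dies - defective

yield_rate = good_dies / gross_dies
cost_per_die = 10000 / good_dies  # $10000 per 7nm wafer, from upthread
print(f"yield {yield_rate:.1%}, about ${cost_per_die:.0f} per good die")
# yield 80.4%, about $52 per good die
```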
 
Joined
Apr 14, 2019
Messages
221 (0.12/day)
System Name Violet
Processor AMD Ryzen 5800X
Motherboard ASRock x570 Phantom Gaming X
Cooling Be quiet! Dark Rock Pro 4
Memory G.Skill Flare x 32GB 3400Mhz
Video Card(s) MSI 6900XT Gaming X Trio
Storage Western Digital WD Black SN750 1TB
Display(s) 3440x1440
Case Lian Li LANCOOL II MESH Performance
Power Supply Corsair RM850x
Mouse EVGA X15
Keyboard Corsair K95 RGB
Software Windows 10 64bit
RDNA 2 is 50% faster than the 2080 Ti, just like the 3080 Ti.
But the problem is the drivers....

I'll still buy the RTX 3070.
 