
CHOO CHOOOOO!!!!1! Navi Hype Train be rollin'

Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
I think a lot of people equate GCN to only being a fixed micro architecture, when in fact it's also an ISA. A new GCN core does not mean it's the same compute unit design or arrangement, but it does mean that it uses and supports the GCN ISA. AMD has done an extremely poor job of distinguishing the two, and it's led to the majority of people just glossing over any improvements at the micro architectural level because "it's still just GCN".
It is a bit of both. ISA literally means Instruction Set Architecture. There are some things at different levels that the ISA does set in stone, but many others can be improved on. Whether the things that need improvement are among those set in stone is not easy to know.
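As a loose analogy (plain C++, invented type names, nothing AMD-specific): the ISA is the contract that software targets, while a microarchitecture is one concrete implementation of it, so two different core designs can both honor the same GCN ISA:

```cpp
#include <cstdio>

// The ISA as a contract: software only sees this interface.
struct GcnIsa {
    virtual float v_add_f32(float a, float b) const = 0;  // real GCN mnemonic, toy semantics
    virtual ~GcnIsa() = default;
};

// Two hypothetical microarchitectures implementing the same contract;
// in hardware they could differ in CU layout, caches, clocks, etc.
struct PolarisLikeCore : GcnIsa {
    float v_add_f32(float a, float b) const override { return a + b; }
};
struct VegaLikeCore : GcnIsa {
    float v_add_f32(float a, float b) const override { return a + b; }
};

int main()
{
    PolarisLikeCore p;
    VegaLikeCore v;
    // Same ISA-level result from two different "implementations".
    std::printf("%g %g\n", p.v_add_f32(1.0f, 2.0f), v.v_add_f32(1.0f, 2.0f));
    return 0;
}
```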

"Enhanced GCN Graphics Architecture"
I smell something familiar, it smells like... *sniff* Polaris
Polaris will not work. Navi will have to be based on Vega and hopefully improve upon it. Navi will have RPM (or some other form of 2*FP16), probably some form of variable rate shading, and other bits of new tech. As of Turing, Nvidia is at least at feature parity with Vega. Intel's Gen11 seems to get to the same point as well. AMD has no choice, and I am sure they are way ahead of this.
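For reference, RPM (Rapid Packed Math) just means executing two FP16 operations in a single 32-bit ALU lane. A minimal sketch of the packed-FP16 idea, shown with CUDA's __half2 intrinsics (an Nvidia-side analogue used purely for illustration, not AMD's implementation):

```cuda
#include <cuda_fp16.h>

// Two FP16 values live packed in one 32-bit register (__half2); a single
// __hfma2 issues two fused multiply-adds, doubling FP16 throughput on
// hardware whose ALUs support packed math (Vega's RPM works on the same idea).
__global__ void axpy_fp16x2(int n2, __half2 a, const __half2 *x, __half2 *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2)                          // n2 = number of packed pairs
        y[i] = __hfma2(a, x[i], y[i]);   // y = a*x + y, two halves at once
}
```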
 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
Since no one actually bothered to read it, as usual... I did not know there were potentially two chips, as I had only heard rumblings of it competing only with the 2060. I don't think that is realistic, considering they already have something 'competitive' with the 2070.

Now, the details mentioned are broken into two parts: one is for the initial AMD Navi cards that utilize the Navi 10 GPU, and the second is for the high-end, enthusiast-grade parts that would feature the Navi 20 GPU. According to RedGamingTech, the details were acquired from sources who, per their claim, have been very accurate in the past.

The details say that before Raja Koduri, AMD's ex-head of the Radeon Technologies Group, left the company, one of his major tasks was to fix many of the weaknesses in the GCN architecture. The reason for this was to let AMD RTG focus on both producing a next-gen architecture and working on GCN iterations to remain competitive against the NVIDIA GeForce and Quadro lineups. Now, we have seen that this strategy worked well for AMD in the mainstream market, but their flagship products weren't necessarily the best, or to put it simply, the king-of-the-hill products that AMD wanted them to be, but rather side options to NVIDIA's enthusiast offerings.

The reason why Vega didn't live up to the hype was that when Raja joined RTG, the design of the Vega GPU was very much completed and there was little he could do. The actual goal for Raja was to work on Navi GPUs, which would still be based on the existing GCN architecture but further refined through fixes to, let's say, the geometry engine, as reported by RedGamingTech. Now, it is possible and very likely that AMD had finished the design for Navi well before Raja left RTG. But what happens to Navi when it goes into the development phase is something we are really close to finding out now, as rumors are alleging a launch of the first Navi-based Radeon RX cards in mid-2019.

It's a "Polaris 10" successor and will probably beat RTX 2060.

Well, there could be two. So yes, the 10 may target the midrange but they may actually be targeting a higher end as well.

Edit: Also, in case it wasn't obvious, the title is a joke. More to the point, I just had never heard of two Navi chips before.
 
Joined
Sep 17, 2014
Messages
20,891 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
We're looking at Navi 20 by 2020, the WCCFtech article says. And by then it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.

So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
There are likely to be at least two chips in the Navi series, possibly more depending on what range of performance AMD wants to cover. Nvidia already has two at and below the RTX 2060 (TU106, TU116) and is likely to have a third (TU117?) soon.

Architecturally, I do not think AMD is likely to continue having semi-different architectures (like Polaris and Vega). There has been a lot of talk about AMD (and especially RTG) running on a low budget, and GPU architectures are expensive. Even Nvidia primarily uses one architecture at a time, plus maybe a high-end compute part that simultaneously works as a research vehicle, like V100.

AMD's roadmap, plus rumors from WCCFtech and other sites, currently pegs Navi as the swan song of GCN in 2019/2020, with Arcturus as a new architecture after that.
 

cdawall

where the hell are my stars
Joined
Jul 23, 2006
Messages
27,680 (4.27/day)
Location
Houston
System Name All the cores
Processor 2990WX
Motherboard Asrock X399M
Cooling CPU-XSPC RayStorm Neo, 2x240mm+360mm, D5PWM+140mL, GPU-2x360mm, 2xbyski, D4+D5+100mL
Memory 4x16GB G.Skill 3600
Video Card(s) (2) EVGA SC BLACK 1080Ti's
Storage 2x Samsung SM951 512GB, Samsung PM961 512GB
Display(s) Dell UP2414Q 3840X2160@60hz
Case Caselabs Mercury S5+pedestal
Audio Device(s) Fischer HA-02->Fischer FA-002W High edition/FA-003/Jubilate/FA-011 depending on my mood
Power Supply Seasonic Prime 1200w
Mouse Thermaltake Theron, Steam controller
Keyboard Keychron K8
Software W10P
Hope not, just bought an RTX 2060 and it's flying with a little more OC on top of MSI's factory OC.

Why do you care? By the time AMD releases it and has a driver that allows it to perform better, the NV4060 will be out.
 

Deleted member 158293

Guest
Whatever Navi will be, it will literally define what the gaming industry will be for years to come, from Microsoft to Apple Arcade to Sony to Google Stadia to PC.

Navi needs no hype... :respect:
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Even Nvidia primarily uses one architecture at a time, plus maybe a high-end compute part that simultaneously works as a research vehicle, like V100.

V100 is a distinct standalone product that Nvidia is selling alongside their other parts, and they made that very clear; it's obvious Turing and Volta were designed concurrently. There is no maybe in this: Nvidia without doubt has separate designs/architectures for different markets.
 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
We're looking at Navi 20 by 2020, the WCCFtech article says. And by then it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.

So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.

That is true. Personally, I would be ecstatic if they could hit 2080 Ti performance AND get power draw comparable. If they can hit 2080 Ti performance it will at least force NV to use full chips in their cards again. I won't hold my breath, but I think one or the other could be a reality. In either case, it at least raises my hope that they haven't completely dropped the 'high end' for Navi, in theory. We'll see how it plays out in practice.
 
Joined
Feb 3, 2017
Messages
3,481 (1.32/day)
Processor R5 5600X
Motherboard ASUS ROG STRIX B550-I GAMING
Cooling Alpenföhn Black Ridge
Memory 2*16GB DDR4-2666 VLP @3800
Video Card(s) EVGA Geforce RTX 3080 XC3
Storage 1TB Samsung 970 Pro, 2TB Intel 660p
Display(s) ASUS PG279Q, Eizo EV2736W
Case Dan Cases A4-SFX
Power Supply Corsair SF600
Mouse Corsair Ironclaw Wireless RGB
Keyboard Corsair K60
VR HMD HTC Vive
V100 is a distinct standalone product that Nvidia is selling alongside their other parts, and they made that very clear; it's obvious Turing and Volta were designed concurrently. There is no maybe in this: Nvidia without doubt has separate designs/architectures for different markets.
Turing evolved from Volta. The changes are minor compared to what changed from Pascal to Volta.
 
Joined
Sep 17, 2014
Messages
20,891 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
V100 is a distinct standalone product that Nvidia is selling alongside their other parts, and they made that very clear; it's obvious Turing and Volta were designed concurrently. There is no maybe in this: Nvidia without doubt has separate designs/architectures for different markets.

I would rather say they deploy different iterations of it for each segment, much like Intel has done with its HEDT releases.
 
Joined
Jun 28, 2016
Messages
3,595 (1.26/day)
Whatever Navi will be, it will literally define what the gaming industry will be for years to come, from Microsoft to Apple Arcade to Sony to Google Stadia to PC.
Yeah... I don't really understand what you wanted to say here.
Game streaming simply means putting game rendering into the cloud, just like we already did with databases, scientific/industrial computing, and media.
Even in the most optimistic plans Google and Sony have shown, it'll be just a tiny part of the datacenter market.

Today the GPU-accelerated cloud is dominated by Nvidia, and this is not going to change.

To be honest, I don't know why Google Stadia decided to get GPUs from AMD; I'd imagine they were simply cheaper.
Microsoft and Sony may go for AMD to stay compatible with consoles, price still being the more probable reason.

But don't put your hopes too high. It won't be hard for any service to change GPU providers. Each of the companies mentioned must have prepared for this already, in case AMD stops making GPUs.


And, clearly, you don't even know what Apple Arcade is (there was a news piece lately, read it). :)
 
Joined
Aug 6, 2017
Messages
7,412 (3.03/day)
Location
Poland
System Name Purple rain
Processor 10.5 thousand 4.2G 1.1v
Motherboard Zee 490 Aorus Elite
Cooling Noctua D15S
Memory 16GB 4133 CL16-16-16-31 Viper Steel
Video Card(s) RTX 2070 Super Gaming X Trio
Storage SU900 128,8200Pro 1TB,850 Pro 512+256+256,860 Evo 500,XPG950 480, Skyhawk 2TB
Display(s) Acer XB241YU+Dell S2716DG
Case P600S Silent w. Alpenfohn wing boost 3 ARGBT+ fans
Audio Device(s) K612 Pro w. FiiO E10k DAC,W830BT wireless
Power Supply Superflower Leadex Gold 850W
Mouse G903 lightspeed+powerplay,G403 wireless + Steelseries DeX + Roccat rest
Keyboard HyperX Alloy SilverSpeed (w.HyperX wrist rest),Razer Deathstalker
Software Windows 10
Benchmark Scores A LOT
That is true. Personally, I would be ecstatic if they could hit 2080 Ti performance AND get power draw comparable. If they can hit 2080 Ti performance it will at least force NV to use full chips in their cards again. I won't hold my breath, but I think one or the other could be a reality. In either case, it at least raises my hope that they haven't completely dropped the 'high end' for Navi, in theory. We'll see how it plays out in practice.
AMD-favorable YT channels have been starving lately, and they gotta eat too.
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
AMD-favorable YT channels have been starving lately, and they gotta eat too.

Yeah, need more click-bait videos farting out of their ass to get that sweet AdSense money from Daddy Google.
 
Joined
Oct 10, 2018
Messages
140 (0.07/day)
I believe that RX 660 = GTX 1650, RX 670 = GTX 1660, RX 680 = GTX 1660 Ti. This technology uses 7nm, but most probably it's equal to Nvidia's 12nm.

We're looking at Navi 20 by 2020, the WCCFtech article says. And by then it should compete with an RTX 2080 Ti, the grossly overpriced and underperforming 'upgrade' to Pascal.

So basically we're once again looking at yesteryear's performance by then, and Nvidia will have comfortably moved to 7nm. It's a 2080 vs VII all over again in 2020, and that is the best-case scenario. I suppose we should count our blessings and pray this will remain relevant until Intel shows some benchmarks.

I agree. Navi 20 will release in 2020-2021. If Nvidia uses 7nm, which is Ampere, it will be faster than Navi 20. Also, Nvidia doesn't want to use 7nm due to transistor conductivity. Transistors allow a max of 1nm. Maybe it will change in the future, but not now.
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
This technology uses 7nm, but most probably it's equal to Nvidia's 12nm.

"Nvidia's 12nm" is TSMC's 16nm, an almost three-year-old node by this point. Will TSMC's 7nm be equal to its own 16nm node? How do you people come up with this stuff?
 
Joined
Nov 4, 2005
Messages
11,674 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I am hopeful that AMD has cleared up their architecture issues. The unacceptably low cache hit rates that also plagued their CPUs have been the issue with GCN and how it handles resources. They keep trying for a one-size-fits-all design when they clearly need two architectures if they want to compete in both compute and graphics, and the overburden they saddled themselves with is what's hurting the most. Get a new architecture for graphics, and then one for compute.

I believe that RX 660 = GTX 1650, RX 670 = GTX 1660, RX 680 = GTX 1660 Ti. This technology uses 7nm, but most probably it's equal to Nvidia's 12nm.



I agree. Navi 20 will release in 2020-2021. If Nvidia uses 7nm, which is Ampere, it will be faster than Navi 20. Also, Nvidia doesn't want to use 7nm due to transistor conductivity. Transistors allow a max of 1nm. Maybe it will change in the future, but not now.


Process/node size doesn't matter for "conductivity", as they are still using the same base metals. 7nm allows for a 25% increase in performance per watt and/or higher frequencies. Neither Nvidia nor AMD has their own fabrication plant, so there is no Nvidia/AMD nm size; it's whatever they get or negotiate with the fab plants. We will not see 1nm transistors for many years, and by the time we reach that point we should be using more stacked 3D designs, or some other new tech will emerge to pick up where silicon stops.
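For a rough sense of where a number like "25% more performance per watt" comes from, here is a generic CMOS dynamic-power scaling sketch (illustrative figures, not TSMC's actual numbers):

```latex
% Dynamic power: activity factor, switched capacitance, voltage, frequency
P_{\mathrm{dyn}} \approx \alpha C V^{2} f
% Assume the shrink cuts C V^2 by 20% (illustrative assumption):
%   at fixed frequency:  P' = 0.8 P  \Rightarrow  \text{perf/W gain} = 1/0.8 = 1.25\ (+25\%)
%   at fixed power:      f' \approx f / 0.8 = 1.25 f
```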
 
Joined
Oct 10, 2018
Messages
140 (0.07/day)
"Nvidia's 12nm" is TSMC's 16nm, an almost three-year-old node by this point. Will TSMC's 7nm be equal to its own 16nm node? How do you people come up with this stuff?
I was speaking about Navi. Also, the 28nm R9 390X is equal to the 14nm RX 580.

We will not see 1nm transistors for many years
7nm, 5nm, 3nm, 1nm.
[Attached chart: nano3.png, costs per process node. Source: IBS]
 
Joined
Mar 10, 2015
Messages
3,984 (1.20/day)
System Name Wut?
Processor 3900X
Motherboard ASRock Taichi X570
Cooling Water
Memory 32GB GSkill CL16 3600mhz
Video Card(s) Vega 56
Storage 2 x AData XPG 8200 Pro 1TB
Display(s) 3440 x 1440
Case Thermaltake Tower 900
Power Supply Seasonic Prime Ultra Platinum
"Nvidia's 12nm" is TSMC's 16nm, an almost three-year-old node by this point. Will TSMC's 7nm be equal to its own 16nm node? How do you people come up with this stuff?

I think he was referring to performance, not lithography.

AMD-favorable YT channels have been starving lately, and they gotta eat too.

I don't watch YouTube... Really though, the point of this thread was that there hasn't really been any interesting 'news', just press releases. It's fun time.
 
Joined
Nov 4, 2005
Messages
11,674 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I was speaking about Navi. Also, the 28nm R9 390X is equal to the 14nm RX 580.

7nm, 5nm, 3nm, 1nm.
[Attached chart: costs per process node (attachment 119805). Source: IBS]
Irritable Bowel Syndrome or not, do you see the price jump from 7nm to 5nm? The timeline to a 1nm transistor is exponential in both cost and time, based on historical data from the last few node shrinks.


About the 28nm being equal to 14nm: correlation is NOT causation. What you are claiming is akin to saying a large SUV is as good as a turbo 4-cylinder sports car because they both go the same speed on the highway. AMD's architecture was better at graphics on the 28nm process. For a better comparison, let's look at what Nvidia is doing on 16nm vs AMD on 7nm: Nvidia has a superior design, so it performs better, uses less power, and runs cooler... if Nvidia put that design on 7nm it would be at least 25% faster still, and use less power doing it. AMD has sucked at GPU design for a while, aiming for a compute-heavy card with an excess of bandwidth to mask the cache issues that cannot keep the shaders full, and their lack of turning off shaders while data is being fetched, plus increasing the cache sizes, means they use more power.
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
with an excess of bandwidth to mask the cache issues that cannot keep the shaders full, and their lack of turning off shaders while data is being fetched, plus increasing the cache sizes, means they use more power.

I have talked about this in another thread, and basically this isn't true, not just in AMD's case but for GPU designs in general. Caches are not a critical factor for achieving high performance/utilization, unlike in the case of CPUs. You don't even have to believe me; just look at similarly sized dies for GPUs and CPUs and see how much the cache/core/shader ratio differs.

For instance, on GP104 there is theoretically a grand total of 0.8 KB of L2 cache that you can expect per shader. This is an abysmally small amount, yet these GPUs operate just fine. Cache misses are mostly irrelevant because their latency is hidden by the fact that there are already other instructions scheduled; therefore there is no need for the quick, frequent memory access that would require large, fast caches with very good hit ratios.

Instead, what you actually need is a lot of memory bandwidth, and AMD designs their GPUs just fine from this point of view; there is literally no other way of doing it. The reason GCN-based cards have had more memory bandwidth and cache than their Nvidia equivalents is that they incidentally also tend to have more ALUs. There is no mystery to any of this; it's all quite simple. I don't know why people have the impression that these guys could make such huge glaring oversights in their designs. They aren't idiots; they know what they are doing very well.
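To make that concrete, here is a minimal sketch of the latency-hiding argument (a generic CUDA streaming kernel, nothing vendor-specific; the 0.8 KB figure above falls out of GP104's 2048 KB of L2 spread over 2560 shaders):

```cuda
// Streaming kernel: when a warp stalls on a DRAM access, the SM's scheduler
// simply issues instructions from one of the many other resident warps.
// With enough threads in flight, cache hit rate barely matters; sustained
// memory bandwidth is what bounds throughput.
__global__ void stream_scale(int n, float a, const float *in, float *out)
{
    // Grid-stride loop: thousands of independent iterations per SM give the
    // scheduler enough work to cover hundreds of cycles of memory latency.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        out[i] = a * in[i];
}
```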
 
Joined
Oct 10, 2018
Messages
140 (0.07/day)
You spoke what exactly?

Also, the 28nm GTX 980 is equal to the 16nm GTX 1060 or the 14nm RX 580; all of this means... absolutely nothing.
I said for performance. Yes, it is nothing but 28nm to 14nm. The RTX 2060 is 12nm and it performs between the GTX 1070 Ti and the GTX 1080.

Irritable Bowel Syndrome or not, do you see the price jump from 7nm to 5nm? The timeline to a 1nm transistor is exponential in both cost and time, based on historical data from the last few node shrinks.

About the 28nm being equal to 14nm: correlation is NOT causation. What you are claiming is akin to saying a large SUV is as good as a turbo 4-cylinder sports car because they both go the same speed on the highway. AMD's architecture was better at graphics on the 28nm process. For a better comparison, let's look at what Nvidia is doing on 16nm vs AMD on 7nm: Nvidia has a superior design, so it performs better, uses less power, and runs cooler... if Nvidia put that design on 7nm it would be at least 25% faster still, and use less power doing it. AMD has sucked at GPU design for a while, aiming for a compute-heavy card with an excess of bandwidth to mask the cache issues that cannot keep the shaders full, and their lack of turning off shaders while data is being fetched, plus increasing the cache sizes, means they use more power.

I agree. For 1nm to become a reality we would first need a new material to etch it onto. 1nm isn't impossible, but with our current rate of development it will take us approximately 15-20 years to see any sort of viability.

Well, what will Nvidia and AMD do in the future? Will they use refresh cards or new cores, such as Tensor cores? What do you think?
 
Joined
Nov 4, 2005
Messages
11,674 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
I have talked about this in another thread, and basically this isn't true, not just in AMD's case but for GPU designs in general. Caches are not a critical factor for achieving high performance/utilization, unlike in the case of CPUs. You don't even have to believe me; just look at similarly sized dies for GPUs and CPUs and see how much the cache/core/shader ratio differs.

For instance, on GP104 there is theoretically a grand total of 0.8 KB of L2 cache that you can expect per shader. This is an abysmally small amount, yet these GPUs operate just fine. Cache misses are mostly irrelevant because their latency is hidden by the fact that there are already other instructions scheduled; therefore there is no need for the quick, frequent memory access that would require large, fast caches with very good hit ratios.

Instead, what you actually need is a lot of memory bandwidth, and AMD designs their GPUs just fine from this point of view; there is literally no other way of doing it. The reason GCN-based cards have had more memory bandwidth and cache than their Nvidia equivalents is that they incidentally also tend to have more ALUs. There is no mystery to any of this; it's all quite simple. I don't know why people have the impression that these guys could make such huge glaring oversights in their designs. They aren't idiots; they know what they are doing very well.


Bulldozer. Hawaii. Sure, better than VIA (CPU) and Intel (GPU) or a kid with a stick, but is that what we do here, compare to failures to feel better?

We can do the math together. The GTX 680 (GK104) vs Tahiti: same everything, except AMD had 25% more shaders, used 25% more power, and had 700,000 more transistors, for equal performance. 0.8 KB is still a lot of information if it can be kept full, but alas, when it CAN'T, you usually have two choices, because you then have a shader using power, making heat, and not doing work. One is to improve the cache hit rate, but that takes a lot of tuning and tweaking; the other is to just add more cache to increase the chances the data will be loaded, but that takes more power to run and makes more heat. Can you guess which AMD has/had been doing for years? Couple that with the fact that AMD kept their shaders at full precision for all operations while Nvidia used half or partial precision for some of the same calculations (later tests show the effect of forced full precision and the performance decrease: https://www.extremetech.com/gaming/273897-nvidia-gpus-take-a-heavy-hit-with-hdr-enabled ), all of which adds up to more efficient use of cache, and thus increased performance.
 
Joined
Mar 10, 2010
Messages
11,878 (2.31/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Bulldozer. Hawaii. Sure, better than VIA (CPU) and Intel (GPU) or a kid with a stick, but is that what we do here, compare to failures to feel better?

We can do the math together. The GTX 680 (GK104) vs Tahiti: same everything, except AMD had 25% more shaders, used 25% more power, and had 700,000 more transistors, for equal performance. 0.8 KB is still a lot of information if it can be kept full, but alas, when it CAN'T, you usually have two choices, because you then have a shader using power, making heat, and not doing work. One is to improve the cache hit rate, but that takes a lot of tuning and tweaking; the other is to just add more cache to increase the chances the data will be loaded, but that takes more power to run and makes more heat. Can you guess which AMD has/had been doing for years? Couple that with the fact that AMD kept their shaders at full precision for all operations while Nvidia used half or partial precision for some of the same calculations (later tests show the effect of forced full precision and the performance decrease: https://www.extremetech.com/gaming/273897-nvidia-gpus-take-a-heavy-hit-with-hdr-enabled ), all of which adds up to more efficient use of cache, and thus increased performance.
So you say a 680 is better than a 7970? Then prove it; it depends on use case, and there's proof that said Nvidia GPU didn't age well.
Depending on use case the 7970 was always better; it depends on perspective, and I use compute.
But anyway, would it not be better to actually discuss the OP than regurgitate the same arguable points about dead tech?
If AMD does ray tracing on Navi 10 I'll be surprised, tbh.
Navi 20 I expect to have a go; we'll see how that goes in time.

Oh, and he's right: GPUs are designed for streams of data, not OoO data streams, so the cache isn't used for possible hits, only expected ones, and is quite small in footprint terms compared to CPU caches.
That's why GPU memory bandwidth matters more to GPUs than system memory bandwidth matters to CPUs; they can't store many instructions and don't have tiered caches like CPUs do to buffer poor memory bandwidth.
 