
AMD Radeon RX 3080 XT "Navi" to Challenge RTX 2070 at $330

It's true, Henry, AMD has made dumb decisions many times :x
If it's DUV, it will be like you and many others say.
But if it's EUV, there's a chance it may be good.
 
I'm here to tell you it ain't: GTX 960 vs R9 380, GTX 1060 vs RX 480, GTX 1080 vs Vega 64, RTX 2080 vs Radeon VII. The proof is there: GCN doesn't scale with teraflops, and it hits a bandwidth wall quickly.

The biggest bottleneck in any and every GPU is bandwidth.

And if you look at benchmarks, HBM-equipped cards don't catch up until high resolutions. That's really not a good thing when most people game at 1080p or 2560x1440, and half the cards you listed aren't more than 1080p or 1440p cards anyway. Those cards only catch up at higher resolutions. They would have been better served using GDDR5.
 
Isn't the RTX 2070 pretty much a GTX 1080 performance-wise? If so, then god damn, AMD... a node shrink, and you still can't beat a 1080 Ti with something that doesn't use HBM and whose power consumption isn't trash. Sad.
AMD seems to be interested in tiny dies these days to maximize profits.
This sounds like the Bulldozer-to-Excavator improvements xD
Excavator didn't have enough cores nor cache nor clockspeed potential (due to low-grade 28nm process) to impress. It was designed to be cheap to produce. At the very least we're looking at a process improvement. The 28nm bulk Excavator used was actually inferior to GF 32nm SOI in terms of high performance.

AMD didn't develop its Bulldozer architecture the way it could have, had it chosen to go for high performance. We have no idea what a Keller-level talent could have done with it, let alone what more ordinary engineers could have done had AMD followed Piledriver with a successor on a high-performance node (e.g. 22nm IBM, or even 32nm GF) designed with the things Piledriver was missing: better micro-op caching, more capable individual cores, better AVX performance (e.g. fixing the regression from Bulldozer) plus AVX2 support, and an L3 cache with decent performance. I have also heard anecdotally that Linux runs Piledriver much more efficiently than Windows when tuned for the architecture, so there may also have been a Windows performance obstacle that could have been overcome.

People praised SMT and condemned CMT, but we've seen enough recent examples of Intel not even enabling SMT in CPUs that offer good performance. I therefore think it's dubious to assume that SMT is needed for high performance, which makes the "SMT is vastly superior to CMT" argument questionable. I wonder if it's possible or worthwhile to do the opposite of what AMD did and have two FPUs for every integer unit.

One of the worst things about Bulldozer is that we'll never know what the architecture could have been had it been developed more effectively. It should have never been released in its original state ("Bulldozer") and Piledriver wasn't enough of an improvement either. 8 core consumer CPUs were also premature considering the primitiveness of Windows and most software.
 
Looks like until Navi comes out, the mining craze will be back. Miners would love the new 7nm parts :-(.
 
GCN is always bandwidth starved, and regardless of being 7nm it's GDDR, not HBM; AMD will struggle to do any better.
256bit GDDR6-14000 would yield about 448 GB/s memory bandwidth.

Vega 56 has 410 GB/s memory bandwidth.

Vega 64 LC, OC+UV at ~1750 MHz, yields results similar to the VII's, despite the VII having 2X the Vega 64 LC's memory bandwidth.
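The figures above can be sanity-checked with the usual peak-bandwidth formula: bus width in bytes times per-pin data rate. A minimal sketch (Vega 56's 1.6 Gbps HBM2 rate is the reference spec; actual boards vary):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(256, 14.0))   # 256-bit GDDR6-14000 -> 448.0 GB/s
print(mem_bandwidth_gbs(2048, 1.6))   # Vega 56's 2048-bit HBM2 -> 409.6 GB/s
```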


https://www.reddit.com/r/Amd/comments/9du2w4 NAVI has memory compression improvements.

The fact remains that the RTX 2080 Ti's 88 ROPs and six GPC blocks (each GPC has at least one raster engine) are superior to the VII's 64 ROPs and four raster engines.

TFLOPS are nothing without raster engines and ROPs (graphics read/write units). Note why AMD is pushing the compute shader path, i.e. using TMUs as read/write units.
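To put rough numbers on the ROP argument: peak pixel fillrate is ROP count times core clock. A hedged sketch with approximate boost clocks (assumed here for illustration; real clocks vary per board and workload):

```python
def pixel_fillrate_gpix_s(rops, core_clock_mhz):
    """Peak pixel fillrate in Gpixel/s: ROPs times core clock."""
    return rops * core_clock_mhz / 1000

print(pixel_fillrate_gpix_s(88, 1545))  # RTX 2080 Ti at ~1545 MHz -> 135.96
print(pixel_fillrate_gpix_s(64, 1750))  # Radeon VII at ~1750 MHz -> 112.0
```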
 
I'll stay skeptical until release; AMD hype has always fallen short of the truth (ever since Fiji tried to take on the Titan X but couldn't even match the 980 Ti).
 
This was the first instance of AMD very aggressively clocking cards out of the box. While the 290X was faster than the 780 out of the box, the tiny OC headroom left it unable to compete once both cards were overclocked... it was all 780. Even Linus figured that out.

See 8:40 mark

It gets worse under water...4:30 mark

Aside from these pre-release announcements never living up to the hype, ever since AMD's 2xx series, there's one thing here that gives me great pause about this announcement... the name. If you want to distinguish your product from the competition because you have a better one, well, Marketing 101 teaches "distinguish your product". The "RX 3080 XT"... they copied the RX, they went from 2 to 3 and from 70 to 80, and threw in an XT for "extra", I guess. We saw the same thing with motherboards mimicking Intel's naming conventions, switching Z to an X. When you mimic the competition, it says "I wanna make mine sound like theirs, so buyers will see RX 3 versus their RTX 2, note that 80 is bigger than 70, and infer that it's like theirs but newer, bigger, badder, faster". That was NVIDIA's whole goal with the partnering idea... "we will loosen up restrictions on our cards if you agree to lock down the naming so this type of thing won't cut into our sales". Regardless of what the new card line actually does, I wish they'd stake out their own naming conventions.

I do hope that AMD can actually deliver on this kind of performance... But if they're going to push the value claim, let's compare apples to apples for a change. Right now the 2060 is faster at 100 watts less... 100 watts at 30 hours a week costs me $44.20 a year. So if the new RX 3080 XT draws 100 watts more, from a cost PoV...

+100 watts would add +$20 to PSU Cost (Focus Gold Plus)
+100 watts would warrant an extra $15 case fan
+$44.20 a year over four years is $176.80... $211.80 total... I'd rather pay the extra $170 for the 1070.

Now, my cost for electricity is way higher than most folks' in the USA, comparable to many European countries and a lot cheaper than many of those. I pay 24 cents per kWh, versus the average US person's $0.11... for those folks the cost would be $81.03 over 4 years.
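For anyone wanting to redo this math with their own rate and hours, yearly running cost is just kWh per year times the electricity price; a minimal sketch using the rates quoted above (exact totals depend on the weekly hours you assume):

```python
def yearly_energy_cost_usd(extra_watts, hours_per_week, usd_per_kwh):
    """Yearly cost of extra power draw: kWh per year times electricity rate."""
    kwh_per_year = extra_watts / 1000 * hours_per_week * 52
    return kwh_per_year * usd_per_kwh

print(round(yearly_energy_cost_usd(100, 30, 0.24), 2))      # at 24 c/kWh
print(round(yearly_energy_cost_usd(100, 30, 0.11) * 4, 2))  # 4 years at the US average 11 c/kWh
```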

The reality is that most folks won't consider electricity cost, and if that's the case, the "value" argument is no longer apples to apples. Many live in apartments where it's in the rent, some live at their parents' house... but if you're going to make the "well, it's not as fast but it's the best value" claim, it isn't valid without including all associated costs. Those would be mine; others may not mind the extra heat and the extra load/inefficiency on the PSU, but whatever they are in each instance, all impacts should be considered.

Now, with "apples and apples" having been considered, I would very much welcome a card that was comparable in performance, power usage, sound, and heat generated... but in each instance I'm only interested in comparisons with both cards at max overclock. I hope against hope that AMD can deliver one, but I'm wary of pre-release fanfare that consistently fails to deliver. I hope that this time they can manage to put out something that fulfills the promise, but I'm wary of following pre-release news for 6 months only to be disappointed.

You used lots of words to say very little there, dude.

You compared cards which are not the same as the ones I mentioned, and then went on to waffle about things which matter less to most people.
 
I agree. Bulldozer was a major issue because AMD relied more on automation for the core design of this interesting CPU. In the past, AMD's CPU architects were a lot more intimate with the designs, such as the Athlon and Athlon 64 for example. Several years before Bulldozer was designed and launched, there were internal struggles at AMD and changes in upper management, which ultimately allowed "a Bulldozer-type decision". Of course, most of what I just said is from memory, but I remember reading multiple articles about this. I won't put the entire blame on Rory Read, as he became CEO just as Bulldozer launched; CEO Dirk Meyer was a computer engineer and was the decision maker behind Bulldozer. And after Lisa Su was appointed CEO (again, she's an electrical engineer), things turned for the better. Bulldozer failed on Rory Read's watch, but it did not SINK the company. Lisa Su was quick to hire Jim Keller to start the Zen project. And so on, bla bla bla, all from memory lol

Piledriver was a much more efficient version of Bulldozer, which significantly increased overall performance. AMD had no choice but to do this, at least for the desktop gaming segment.

Bulldozer -Piledriver -Steamroller -Excavator -ZEN -ZEN+ & ZEN2......

EDITED: I got my CEOs confused and made corrections.
 
The problem is not ROP performance, it's management of resources.
GCN has changed very little over the years, while Kepler -> Maxwell -> Pascal -> Turing have continued to advance and achieve more performance per core and per GFLOP, to the point where they have about twice the performance per watt and 30-50% more performance per GFLOP.

Sorry, I missed this earlier.

Where did you see me mentioning RBE/ROP performance? Fermi was performant not simply because the GS yielded 50% greater perf/clock, but because of the follow-on uarch benefits of the PolyMorph engines allowing decoupling of the front end, resulting in far greater extraction of parallelism. This gave better utilization and fewer bubbles/stalls in the pipeline. GF's silicon implementation didn't match the expected RTL, but each iteration since has led to improvements.

More is usually better, except when it comes at a great cost.

Does that also extend to die area? ;)

16 GB of 1 TB/s HBM2 is just pointless for gaming purposes. AMD could have used 8 or even 12 GB, and priced it lower.

It's a repurposed MI50, whattayagonnado? As a low-volume gaming SKU, it's probably the bottom of the barrel of working 7nm chips, ones that might be marginal under thermal load. The cost to package it as a lower frame-buffer/bandwidth SKU might be marginal, and the full spec can be exploited by marketing against the competition.

Facts remains RTX 2080 Ti has 88 ROPS with six GPC blocks (with each GPC has at least a raster engine) superiority over VII's 64 ROPS and four raster engines.

There's a simple metric really: TU102's 18b transistors outperform Vega 20's 13b transistors because the silicon is deployed in a much better uarch, e.g. Vega's 3.3 TFLOPS of FP64 is no benefit to gamers.

TFLOPS is nothing without raster engines and ROPS (graphics read/write units). Note why AMD is pushing for compute shader path i.e. using TMUs for read/write units

The traditional GS/HS/DS geometry stages may well be deprecated in favor of more flexible and performant primitive/mesh shaders, but don't conflate GF->TU with GCN 1->9. It's not just the ROPs/TMUs in NV's favor; it's the decoupling of the front end and the ability to extract much more parallelism that allows higher utilization from lower peak FLOPS. We also need to consider better bandwidth utilization, data reuse (registers/cache), etc.
 
They've never undercut their competitor by such a significant amount



unless they want to go all-out on trying to regain marketshare.
Ah, ok then.
 
Hehe, well the prices will probably crash to those kinds of levels soon enough anyway, provided that they actually want to sell any, lol
 
Nope. Maybe btarunr wants to start thinking about not writing headlines that declare leaks as if they are factual. Just a thought.

Yeah, good luck with that.
 