
GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS

Joined
Sep 4, 2020
Messages
14 (0.01/day)
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need the extra VRAM, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games suddenly start to use VRAM in a completely different manner.


Exactly.

There are, however, reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.

Wrong, wrong, wrong. AMD put 8 gigs of VRAM in their cards because they performed better, even when there was VRAM to spare. I had a 290X (Tri-X), and if it had only had 4 gigs of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but with 4 gigs of VRAM? Nope.

So the card was not past its usefulness by the time the memory was needed. I have more examples, but point made.

Also, PC gamers that are REALLY into high-end gaming tend to upgrade their GPU every year unless the performance uplift is only around 15%. I have a 2080 Ti and will likely upgrade to a 3090, although I'll wait for game benchmarks to see how the 3080 performs.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
Wrong, wrong, wrong. AMD put 8 gigs of VRAM in their cards because they performed better, even when there was VRAM to spare. I had a 290X (Tri-X), and if it had only had 4 gigs of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but with 4 gigs of VRAM? Nope.
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 have been carefully tested and have the appropriate amount of VRAM for current games and games in development.
 
Joined
Apr 21, 2020
Messages
24 (0.02/day)
Processor Intel I9-9900K
Motherboard Gigabyte Z390 Aorus Master
Cooling NZXT Kraken x62v2
Memory 4x8GB G.Skill Flare X 3200mhz CL14
Video Card(s) Zotac RTX 2080Ti AMP + Alphacool Eiswolf 240 GPX Pro
Storage ADATA 1TB XPG SX8200 Pro + Intel 16GB Optane Memory 16GB M.2 + WD Blue 4TB
Display(s) Samsung CRG9 32:9 + Samsung S34J55W 21:9 + Samsung S24B300 16:9
Case Fractal Design Define R6 USB-C
Power Supply Corsair RM850x 850W Gold
Mouse Razer Viper Mini
Keyboard Razer Ornata Chroma
Software Windows 10 Pro
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 have been carefully tested and have the appropriate amount of VRAM for current games and games in development.

Exactly, I agree. Death Stranding only uses around 4 GB of VRAM at 4K and the detail is awesome.
I'm not an expert, but from what I read some games still use megatextures, like Middle-earth: Shadow of Mordor, an old game but VRAM hungry. The present and future, though, are physically based rendering and variable rate shading. Death Stranding, Doom Eternal and Forza Horizon 4 use them, and they are graphically superior with faster framerates. Even Star Citizen alpha in its current state, with tons of textures and already huge in scale, uses physically based rendering.

I'm curios to see is DLSS 2.0 in action, despite so few games use it, if the current boost is alot higher then the 2080ti
 
Joined
Jun 11, 2017
Messages
214 (0.09/day)
Location
Montreal Canada
I think Nvidia is just doing this to stop SLI. They know there are gamers out there that wanted SLI on lower cards, because in the heyday you could buy two lower cards, SLI them, and get better performance than forking over a high price for a single card. Now Nvidia is doing their best to stop all that, making you buy the high-end card only. I remember when Nvidia used to love gamers; now they just love to make you pay. I might upgrade my 1070 Tis in SLI to maybe a couple of 2070 Supers, or just wait till Nvidia makes the 4080s and finally makes some 3070s with SLI support.

My two 1070s in SLI outperform a single 1080 any day. I'm still running two 660 Tis in SLI on my 1080p work/gaming computer.

It's just a shame, since Nvidia made more sales in the gamer market when they allowed SLI on lower-end cards. I mean, sure, if I had tons of money I might fork out 1,800.00 for a 3090, but it kind of seems like a waste of money for just one card.
 
Joined
May 5, 2016
Messages
94 (0.03/day)
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need the extra VRAM, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games suddenly start to use VRAM in a completely different manner.


Exactly.

There are, however, reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.

High-res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8 GB I have available on my Vega 56.

The 3080 has a lot more horsepower but is comparatively light on VRAM. It will be an issue.
 

Near

New Member
Joined
Sep 4, 2020
Messages
1 (0.00/day)
It feels like a lot of people are forgetting that the 3090 and the 3080 will be using GDDR6X, which "effectively doubles the number of signal states in the GDDR6X memory bus". That increases memory bandwidth to 84 GB/s for each component, which translates to rates of up to 1 TB/s according to Micron. So this should mean that even a 3070 Ti with 16 GB of RAM is still going to be slower than a 3080 if it still uses GDDR6.
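As a rough sanity check of those numbers (a minimal sketch; the per-pin data rates are assumptions taken from Micron's public GDDR6/GDDR6X figures, and one 32-bit device per channel is assumed):

```python
# GDDR6/GDDR6X peak bandwidth arithmetic. The per-pin data rates below are
# assumptions based on public 2020 spec sheets, not measured values.
def bandwidth_gb_s(per_pin_gbit_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin rate (Gb/s) x bus width (bits) / 8."""
    return per_pin_gbit_s * bus_width_bits / 8

print(bandwidth_gb_s(21, 32))    # one 32-bit GDDR6X device at 21 Gb/s -> 84.0 GB/s
print(bandwidth_gb_s(19, 320))   # RTX 3080, 320-bit bus at 19 Gb/s -> 760.0 GB/s
print(bandwidth_gb_s(21, 384))   # 384-bit bus at 21 Gb/s -> 1008.0 GB/s, Micron's "up to 1 TB/s"
print(bandwidth_gb_s(14, 256))   # hypothetical 256-bit GDDR6 card at 14 Gb/s -> 448.0 GB/s
```

So even with more capacity, a GDDR6 card on a narrower bus sits well below a GDDR6X 3080 in raw bandwidth.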
 
Joined
May 19, 2009
Messages
1,821 (0.33/day)
Location
Latvia
System Name Personal \\ Work - HP EliteBook 840 G6
Processor 7700X \\ i7-8565U
Motherboard Asrock X670E PG Lightning
Cooling Noctua DH-15
Memory G.SKILL Trident Z5 RGB Black 32GB 6000MHz CL36 \\ 16GB DDR4-2400
Video Card(s) ASUS RoG Strix 1070 Ti \\ Intel UHD Graphics 620
Storage 2x KC3000 2TB, Samsung 970 EVO 512GB \\ OEM 256GB NVMe SSD
Display(s) BenQ XL2411Z \\ FullHD + 2x HP Z24i external screens via docking station
Case Fractal Design Define Arc Midi R2 with window
Audio Device(s) Realtek ALC1150 with Logitech Z533
Power Supply Corsair AX860i
Mouse Logitech G502
Keyboard Corsair K55 RGB PRO
Software Windows 11 \\ Windows 10
One should note that Doom Eternal already runs well on various hardware configurations, including much lower-end rigs. The game is well optimised, as was its predecessor.
 
Joined
Dec 12, 2012
Messages
717 (0.17/day)
Location
Poland
System Name THU
Processor Intel Core i5-13600KF
Motherboard ASUS PRIME Z790-P D4
Cooling SilentiumPC Fortis 3 v2 + Arctic Cooling MX-2
Memory Crucial Ballistix 2x16 GB DDR4-3600 CL16 (dual rank)
Video Card(s) MSI GeForce RTX 4070 Ventus 3X OC 12 GB GDDR6X (2610/21000 @ 0.91 V)
Storage Lexar NM790 2 TB + Corsair MP510 960 GB + PNY XLR8 CS3030 500 GB + Toshiba E300 3 TB
Display(s) LG OLED C8 55" + ASUS VP229Q
Case Fractal Design Define R6
Audio Device(s) Yamaha RX-V381 + Monitor Audio Bronze 6 + Bronze FX | FiiO E10K-TC + Sony MDR-7506
Power Supply Corsair RM650
Mouse Logitech M705 Marathon
Keyboard Corsair K55 RGB PRO
Software Windows 10 Home
Benchmark Scores Benchmarks in 2024?
High-res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8 GB I have available on my Vega 56.

No. Unoptimized textures take up a lot of space. There are games with beautiful textures that use 4-5 GiB of VRAM at 4K. The games that fill up the VRAM completely usually have ugly textures, and lowering their quality does not make much difference anyway.
 
Joined
Mar 6, 2011
Messages
155 (0.03/day)
I seem to be seeing something different about the 8 nm Samsung/Nvidia process. Volume should be good, as it isn't shared with anything else the way 7 nm is. Nvidia's prices alone should be a good indication that they can make the card efficiently; otherwise they wouldn't be at these price points when there isn't competition yet. From my understanding, this Samsung/Nvidia process should turn out better than Turing's 12 nm. Guess we'll see. I expect demand for the 30 series to be the biggest issue, especially the 3070.

Every single indicator I've seen is that there'll be virtually no stock, and that yields are awful. All the analysis I've seen agrees.

There's no way they'd have such a high percentage of disabled cores otherwise. 20% would seem to indicate it's at the limits of the process. Power consumption is another giveaway, and will compound the issues with defect rates.

This node was chosen because there was spare line capacity on Samsung's 8 nm process; it's a modified version of what was available. There was no time for a fully custom process like the SHP 12 nm node that the last couple of NVIDIA generations were on. This was thrown together very quickly when Jensen failed to strong-arm TSMC. It being Samsung's least popular node of recent times does nothing for the maturity of the node, or for the quality of early Ampere dies.

They're at the price they are because of NVIDIA's expectations for desktop RDNA2, and the strength of the consoles.

They probably aren't paying that much for wafers, simply because this was capacity on 8 nm (/10 nm) that would otherwise have gone unused, which doesn't benefit Samsung. But some of the talk of NVIDIA only paying for working dies is likely nonsense, certainly on the 3080/3090 wafers; Samsung would most likely lose their shirt on those. Their engineers aren't idiots ... they'd have known as soon as they saw what NVIDIA wanted that decent yields were likely unachievable anywhere near launch (maybe at all). NVIDIA were likely offered an iteration of 7 nm EUV HP(ish), but it would have cost a lot more, they wouldn't have had as many wafers guaranteed, and the launch would likely have been pushed back 1-2 quarters, maybe more. They've gambled on 8 nm, and judging by the power consumption and disabled CU count, they have not exactly 'won'.
 
Joined
May 3, 2018
Messages
2,281 (1.05/day)
Couldn't agree more; better to get what suits you now and worry less about the future. A 1080 Ti with the best CPU of that period can't play a 4K HDR video on YouTube, let alone 8K.
My issue with the 3080 is that I'm not sure the VRAM is enough right now; 12 GB would have been better. But yeah, the price is also an issue. Maybe AMD can undercut them with similar performance (minus some features), more VRAM, and a bit cheaper.

You forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much.
 
Joined
Apr 6, 2015
Messages
246 (0.07/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need the extra VRAM, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games suddenly start to use VRAM in a completely different manner.

I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to worry about the VRAM becoming a bottleneck before the processor does.
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so they may use more VRAM than we are used to. We're still not sure how efficient NVIDIA's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
What's that :confused:
I believe it was mentioned some time ago. With each generation, NVIDIA has been improving their memory compression algorithm, and this time around they would supposedly utilize AI to compress VRAM contents. Gotta make more use of those 3rd-gen tensor cores.
 
Joined
Apr 12, 2013
Messages
6,743 (1.68/day)
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to worry about the VRAM becoming a bottleneck before the processor does.
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell if there are any bottlenecks or not.

With every generation for the past 10+ years, people have raised concerns about Nvidia's GPUs having too little memory, yet time after time they've been shown to do just fine. Never forget that both Nvidia and AMD collaborate closely with game developers; they have a good idea of where the game engines will be in a couple of years.

I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so they may use more VRAM than we are used to. We're still not sure how efficient NVIDIA's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
With the consoles having 16 GB of total memory, split between the OS, the software running on the CPU, and the GPU, it's highly unlikely that those games will dedicate more than 10 GB of that to graphics.
If anything, this should mean that few games will use more than ~8 GB of VRAM for the foreseeable future at these kinds of detail levels.
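As a back-of-envelope illustration of that split (the OS reservation and CPU-side budget below are rough assumptions for illustration, not published platform numbers):

```python
# Back-of-envelope split of a console's 16 GB unified memory.
# Both deductions are assumptions of this sketch.
total_memory_gb = 16.0
os_reserved_gb = 2.5   # assumed OS / system reservation
cpu_side_gb = 3.5      # assumed game logic, audio, streaming buffers
gpu_budget_gb = total_memory_gb - os_reserved_gb - cpu_side_gb
print(f"~{gpu_budget_gb:.0f} GB left for graphics")  # -> ~10 GB
```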

Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
Memory compression has improved with every recent architecture from Nvidia up until now. There are rumors of "tensor compression", but I haven't looked into that yet.

You forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much.
Compression certainly helps, but it doesn't work quite that way.
Memory compression in GPUs is lossless compression that is transparent to the user. As with any kind of data, the compression rate of lossless compression is tied to information density. While memory compression has become more sophisticated with every generation, it is still limited to compressing mostly "empty" data.

Render buffers with mostly sparse data compress very well, while textures are generally only compressed in "empty" sections. Depending on the game, the compression rate can vary a lot. Games with many render passes in particular can see substantial gains, sometimes over 50% I believe, while others see <10%. So please don't think of memory compression as something that expands memory by xx%.
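You can see the information-density point with any lossless compressor (a minimal sketch using zlib; real GPU delta color compression runs per tile in fixed-function hardware, but the principle is the same):

```python
import random
import zlib

# Lossless compression rates depend on information density: an "empty"
# (all-zero) render target compresses massively, while dense,
# texture-like data barely compresses at all (it may even grow).
random.seed(0)
sparse_buffer = bytes(4096)                                        # mostly-empty render target
noisy_texture = bytes(random.randrange(256) for _ in range(4096))  # high-entropy data

for name, data in (("sparse buffer", sparse_buffer), ("noisy texture", noisy_texture)):
    packed = zlib.compress(data)
    saved = 100 * (1 - len(packed) / len(data))
    print(f"{name}: {len(data)} -> {len(packed)} bytes ({saved:.0f}% saved)")
```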
 
Joined
Apr 6, 2015
Messages
246 (0.07/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell if there are any bottlenecks or not.

You are most likely correct, but it still gives me a reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold out for AMD).
 
Joined
Jun 10, 2014
Messages
2,900 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
You are most likely correct, but then it still gives me reason to wait and see if AMD can push a card at similar performance while providing more RAM.
If 3080 had maybe 12-14 GB RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we agree to hold on for AMD)
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
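The arithmetic behind those "balanced" configurations is simple (a minimal sketch; it assumes one memory device per 32-bit channel and the 1 GB / 2 GB GDDR6/6X chip densities available in 2020):

```python
# "Balanced" VRAM capacities from bus width: one memory device per 32-bit
# channel, times the per-chip density. Both figures are assumptions of
# this sketch based on 2020-era GDDR6/6X parts.
def balanced_configs_gb(bus_width_bits: int, densities_gb=(1, 2)):
    channels = bus_width_bits // 32  # one device per 32-bit channel
    return [channels * density for density in densities_gb]

for bus in (256, 320, 384):
    print(f"{bus}-bit bus -> {balanced_configs_gb(bus)} GB")
# 256-bit -> [8, 16] GB, 320-bit -> [10, 20] GB, 384-bit -> [12, 24] GB
```

Anything in between (like 12-14 GB on a 320-bit bus) would need mismatched channel widths or densities, which is the GTX 970 situation all over again.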
 
Joined
Jan 31, 2011
Messages
2,202 (0.46/day)
System Name Ultima
Processor AMD Ryzen 7 5800X
Motherboard MSI Mag B550M Mortar
Cooling Arctic Liquid Freezer II 240 rev4 w/ Ryzen offset mount
Memory G.SKill Ripjaws V 2x16GB DDR4 3600
Video Card(s) Palit GeForce RTX 4070 12GB Dual
Storage WD Black SN850X 2TB Gen4, Samsung 970 Evo Plus 500GB , 1TB Crucial MX500 SSD sata,
Display(s) ASUS TUF VG249Q3A 24" 1080p 165-180Hz VRR
Case DarkFlash DLM21 Mesh
Audio Device(s) Onboard Realtek ALC1200 Audio/Nvidia HD Audio
Power Supply Corsair RM650
Mouse Steelseries Rival 3 Wireless | Wacom Intuos CTH-480
Keyboard A4Tech B314 Keyboard
Software Windows 10 Pro
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
It was a rumour a while back that they would tap into it, but there's not much info on it now, or on whether it's really true. And yeah, like efikkan mentioned, memory compression improves with every generation regardless of whether it's tensor-assisted or not.
 
Joined
Nov 3, 2011
Messages
690 (0.15/day)
Location
Australia
System Name Eula
Processor AMD Ryzen 9 7900X PBO
Motherboard ASUS TUF Gaming X670E Plus Wifi
Cooling Corsair H115i Elite Capellix XT
Memory Trident Z5 Neo RGB DDR5-6000 64GB (4x16GB F5-6000J3038F16GX2-TZ5NR) EXPO II, OCCT Tested
Video Card(s) Gigabyte GeForce RTX 4080 GAMING OC
Storage Corsair MP600 XT NVMe 2TB, Samsung 980 Pro NVMe 2TB and Toshiba N300 NAS 10TB HDD
Display(s) 2X LG 27UL600 27in 4K HDR FreeSync/G-Sync DP
Case Phanteks Eclipse P500A D-RGB White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair HX1000 Platinum 1000W
Mouse SteelSeries Prime Pro Gaming Mouse
Keyboard SteelSeries Apex 5
Software MS Windows 11 Pro
Results nearly match RTX 2080 Super and RTX 3080 memory bandwidth scaling.
 
Joined
Jul 5, 2008
Messages
336 (0.06/day)
System Name Roxy
Processor i7 5930K @ 4.5GHz (167x27 1.35V)
Motherboard X99-A/USB3.1
Cooling Barrow Infinity Mirror, EK 45x420mm, EK X-Res w 10W DDC
Memory 2x16GB Patriot Viper 3600 @3333 16-20-20-38
Video Card(s) XFX 5700 XT Thicc III Ultra
Storage Sabrent Rocket 2TB, 4TB WD Mechanical
Display(s) Acer XZ321Q (144Mhz Freesync Curved 32" 1080p)
Case Modded Cosmos-S Red, Tempered Glass Window, Full Frontal Mesh, Black interior
Audio Device(s) Soundblaster Z
Power Supply Corsair RM 850x White
Mouse Logitech G403
Keyboard CM Storm QuickFire TK
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/e5uz5f
So where's the 3090 8k footage?
 
D

Deleted member 185088

Guest
You forget that with NVCache and tensor compression (~20% compression) this card is effectively 12 GB, so I wouldn't worry too much.
I'm not convinced by this. In a recent Hardware Unboxed video, the 1080 Ti having more VRAM than the 2080 does seem to matter. I believe the only reason is the price.
Let's wait for reviews.
 
Joined
Apr 6, 2015
Messages
246 (0.07/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
Oh I missed this, it makes sense. That’s also unfortunate though.
 
Joined
Jul 9, 2015
Messages
3,413 (1.06/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
More VRAM doesn't give you more performance.
It depends. Note the dive the 2080 is taking at 4K (which also tells you why those nice DF guys ran it that way):

[attached chart: RTX 2080 frame rates dropping off at 4K]


"speed up":

[attached chart: relative "speed up"]
 