
GeForce RTX 3080 Rips and Tears Through DOOM Eternal at 4K, Over 100 FPS

d0x360

New Member
Joined
Sep 4, 2020
Messages
4 (0.09/day)
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this much memory, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.


Exactly.

There are however reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.
Wrong wrong wrong. AMD put 8 GB of VRAM in their cards because they performed better even when there was VRAM to spare. I had a 290X (Tri-X), and if it had only had 4 GB of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but with only 4 GB of VRAM? Nope.

So the card was not past its usefulness by the time the memory was needed. I have more examples, but point made.

Also, PC gamers that are REALLY into high-end gaming tend to upgrade their GPU every year unless the performance uplift is only around 15%. I have a 2080 Ti and will likely upgrade to a 3090, although I'll wait for game benchmarks to see how the 3080 performs.
 
Joined
Jun 10, 2014
Messages
2,210 (0.95/day)
Wrong wrong wrong. AMD put 8 GB of VRAM in their cards because they performed better even when there was VRAM to spare. I had a 290X (Tri-X), and if it had only had 4 GB of VRAM, there is zero chance I would have been able to run The Witcher 3 at 4K30 or 1440p60 on that card. Same goes for Doom 2016: at 4K in Vulkan on max settings I could get over 100 FPS, but with only 4 GB of VRAM? Nope.
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
 
Joined
Apr 21, 2020
Messages
13 (0.07/day)
Processor Intel I9-9900K
Motherboard Gigabyte Z390 Aorus Master
Cooling NZXT Kraken x62v2
Memory 4x8GB G.Skill Flare X 3200mhz CL14
Video Card(s) Zotac RTX 2080Ti AMP + Alphacool Eiswolf 240 GPX Pro
Storage ADATA 1TB XPG SX8200 Pro + Intel Optane Memory 16GB M.2 + WD Blue 4TB
Display(s) Samsung CRG9 32:9 + Samsung S34J55W 21:9 + Samsung S24B300 16:9
Case Fractal Design Define R6 USB-C
Power Supply Corsair RM850x 850W Gold
Mouse Razer Viper Mini
Keyboard Razer Ornata Chroma
Software Windows 10 Pro
More VRAM doesn't give you more performance.

How on earth would a 290X run Doom (2016) in 4K at ultra at 100 FPS?

The RTX 3070/3080 are carefully tested and have the appropriate amount of VRAM for current games and games in development.
Exactly, I agree. Death Stranding only uses around 4 GB of VRAM at 4K and the detail is awesome.
I'm not an expert, but from what I've read some games still use megatextures, like Middle-earth: Shadow of Mordor, an old game but a VRAM-hungry one. The present and future, though, are physically based rendering and variable rate shading: Death Stranding, Doom Eternal and Forza Horizon 4 use them, and they are graphically superior with faster framerates. Even the Star Citizen alpha in its current state, with tons of textures and already huge in scale, uses physically based rendering.

I'm curious to see DLSS 2.0 in action, despite so few games using it, and whether the performance boost is a lot higher than on the 2080 Ti.
 
Joined
Jun 11, 2017
Messages
28 (0.02/day)
Location
Montreal Canada
I think Nvidia is just doing this to stop SLI. They know gamers wanted SLI on lower cards because in the heyday you could buy two lower cards, SLI them, and get better performance than forking over a high price for a single card. Now Nvidia is doing their best to stop all that, making you buy the high-end card only. I remember when Nvidia used to love gamers; now they just love to make you pay. I might upgrade my 1070 Tis in SLI to maybe a couple of 2070 Supers, or just wait till Nvidia makes the 4080s and finally makes some 3070s with SLI support.

My two 1070s in SLI outperform a single 1080 any day. I'm still running two 660 Tis in SLI on my 1080p work/gaming computer.

It's just a shame; Nvidia made more sales in the gamer market when they allowed SLI on lower-end cards. I mean, sure, if I had tons of money I might fork out $1,800 for a 3090, but it kinda seems like a waste of money for just one card.
 
Joined
May 5, 2016
Messages
93 (0.06/day)
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this much memory, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.


Exactly.

There are however reasons to buy extra VRAM, like various (semi-)professional uses. But for gaming it's a waste of money. Anyone who is into high-end gaming will be looking at a new card in 3-4 years anyway.
High res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.

The 3080 has a lot more horsepower but is comparatively light on VRAM. It will be an issue.
 

Near

New Member
Joined
Sep 4, 2020
Messages
1 (0.02/day)
It feels like a lot of people are forgetting that the 3090 and the 3080 will be using GDDR6X, which will "effectively double the number of signal states in the GDDR6X memory bus", increasing memory bandwidth to 84 GB/s for each component, which translates to rates of up to 1 TB/s according to Micron. So this should mean that even a 3070 Ti with 16 GB of RAM is still going to be slower than a 3080 if it sticks with GDDR6.
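For what it's worth, those headline numbers fall straight out of per-pin data rate times bus width. A rough sketch (the per-pin rates are the publicly quoted figures for GDDR6X and GDDR6, used here as illustrative assumptions, not measurements):

Code:
# Aggregate memory bandwidth = per-pin data rate * bus width / 8 bits per byte.
# Per-pin rates: ~19-21 Gbps quoted for GDDR6X, ~14-16 Gbps for plain GDDR6.

def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(21, 32))     # ~84 GB/s per 32-bit GDDR6X device
print(bandwidth_gb_s(19, 320))    # ~760 GB/s for a 3080-style 320-bit bus
print(bandwidth_gb_s(21, 384))    # ~1008 GB/s, Micron's "up to 1 TB/s" figure
print(bandwidth_gb_s(14, 256))    # ~448 GB/s for a hypothetical 256-bit GDDR6 card

The 84 GB/s per component and the ~1 TB/s aggregate are the same figure, viewed per device and per full bus.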
 
Joined
May 19, 2009
Messages
1,432 (0.34/day)
Location
Latvia
System Name Personal \\ Work - HP EliteBook 840 G3
Processor i7-4790K \\ i7-6500U
Motherboard MSI Z97 Gaming 7
Cooling Noctua DH-15
Memory Corsair Vengeance Pro 32GB 2400 MHz \\ 16GB DDR4-2133
Video Card(s) ASUS RoG Strix 1070 Ti\\ Intel 520 HD
Storage Samsung 850 Pro 512GB, WD Black 2 TB, Samsung 970 Pro 512GB \\ Samsung 256GB SSD
Display(s) BenQ XL2411Z \\ FullHD + 2x HP Z24i external screens via docking station
Case Fractal Design Define Arc Midi R2 with window
Audio Device(s) Realtek ALC1150 with Logitech Z323
Power Supply Corsair AX860i
Software Windows 10
One should note that Doom Eternal already runs well on various hardware configurations, including much lower-end rigs; the game is well optimised, as was its predecessor.
 
Joined
Dec 12, 2012
Messages
99 (0.03/day)
High res textures take up a lot of space in VRAM. Borderlands 3 at 4K nearly maxes out the 8GB I have available on my Vega 56.
No. Unoptimized textures take up a lot of space. There are games with beautiful textures that use 4-5 GiB of VRAM in 4K. The games that fill up the VRAM completely usually have ugly textures and lowering their quality does not make much difference anyway.
 
Joined
Mar 6, 2011
Messages
155 (0.04/day)
I seem to be seeing something different about the 8nm Samsung/Nvidia process. Volume should be good, as this isn't shared with anything else like 7nm is. Nvidia's prices alone should be a good indication that they can make the card efficiently; otherwise they wouldn't be at these price points when there isn't competition yet. From my understanding, this Samsung/Nvidia process should turn out better than Turing's 12nm. Guess we'll see. I expect demand for the 30 series to be the biggest issue, especially the 3070.
Every single indicator I've seen is that there'll be virtually no stock, and yields are awful. All the analysis I've seen agrees.

There's no way they'd have such a high % of disabled cores otherwise. 20% would seem to indicate it's at the limits of the process. Power consumption is another giveaway, and will compound issues with defectiveness.

This was chosen because there was spare line capacity on their 8nm process. It's a modified version of what was available. There was no time for a fully custom process like the SHP 12nm node that the last couple of NVIDIA generations were on. This was thrown together very quickly when Jensen failed to strong-arm TSMC. It being Samsung's least popular node of recent times does not at all benefit the maturity of the node, or the quality of early dies for Ampere.

They're at the price they are because of NVIDIA's expectations for RDNA2 desktop, and the strength of the consoles.

They probably aren't paying that much for wafers, simply because this was capacity on 8nm (/10nm) that would have otherwise gone unused, which doesn't benefit Samsung. But some of the talk of NVIDIA only paying for working dies is likely nonsense. Certainly on the 3080/3090 wafers. Samsung would most likely lose their shirt on those. Their engineers aren't idiots ... they'd have known as soon as they saw what NVIDIA wanted that decent yields were likely unachievable anywhere near launch (maybe at all). NVIDIA were likely offered an iteration of 7nm EUV HP(ish), but it would have cost a lot more, they wouldn't have had as many wafers guaranteed, and launch likely would have been pushed 1 - 2 quarters. Maybe more. They've gambled on the 8nm ... judging by power consumption and disabled CU count, they have not exactly 'won'.
 
Joined
May 3, 2018
Messages
199 (0.22/day)
Couldn't agree more; better to get what suits you now and worry less about the future. A 1080 Ti with the best CPU of that period can't play a 4K HDR video on YouTube, let alone 8K.
My issue with the 3080 is that I'm not sure the VRAM is enough right now; 12 GB would've been better. But yeah, the price is also an issue. Maybe AMD can undercut them with similar performance (minus some features), more VRAM, and a bit cheaper price.
You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
 
Joined
Apr 6, 2015
Messages
114 (0.06/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
You shouldn't worry.
In order to use more VRAM in a single frame, you also need more bandwidth and computational performance, which means that by the time you need this much memory, the card will be too slow anyway. Resources need to be balanced, and there is no reason to think you will "future proof" the card by having loads of extra VRAM. It has not panned out well in the past, and it will not in the future, unless games start to use VRAM in a completely different manner all of a sudden.
I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to worry about the VRAM becoming a bottleneck sooner than the processor in this situation.
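One rough way to put numbers on the bandwidth side of this trade-off is a per-frame bandwidth budget. A back-of-the-envelope sketch with illustrative figures, not benchmarks:

Code:
# Upper bound on how much unique data a GPU can even read from VRAM in one
# frame: bandwidth / framerate. Ignores caches, data reuse and overlap with
# compute, so real frames touch far less than this.

def max_gb_touched_per_frame(bandwidth_gb_s, fps):
    return bandwidth_gb_s / fps

print(max_gb_touched_per_frame(760, 60))    # ~12.7 GB of unique data per frame at 60 FPS
print(max_gb_touched_per_frame(760, 144))   # ~5.3 GB per frame at 144 FPS

Of course assets persist across frames, so capacity beyond that bound still acts as a cache against streaming over PCIe; the sketch only bounds how much unique data a single frame can actually consume.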
 
Joined
Jan 31, 2011
Messages
1,806 (0.51/day)
System Name TeraUltima 7
Processor Intel Core i7 3770K @4.2Ghz
Motherboard ASRock Z68 Pro3-M
Cooling Cryorig H7
Memory Kingston Hyper X 16GB DDR3 1866
Video Card(s) Palit GeForce GTX 1070 Super JetStream 2Ghz
Storage Crucial MX100 256GB SSD + 1TB Seagate HDD
Display(s) LG 23MP68VQ-P IPS 75HZ 23" 1080p MONITOR
Case Aerocool Dead Silence Black Edition Cube
Audio Device(s) Onboard Realtek ALC892 7.1 HD Audio/Nvidia HD Audio
Power Supply OCZ Stealth X Stream II 600W
Mouse Logitech G400S | Wacom Intuos CTH-480
Keyboard A4Tech G800V Gaming Keyboard
Software Windows 7 Ultimate 64bit SP1
Benchmark Scores http://www.3dmark.com/fs/9470441
I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so games may use more VRAM than we are used to. We're also still not sure how efficient Nvidia's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
 
Joined
Jan 31, 2011
Messages
1,806 (0.51/day)
System Name TeraUltima 7
Processor Intel Core i7 3770K @4.2Ghz
Motherboard ASRock Z68 Pro3-M
Cooling Cryorig H7
Memory Kingston Hyper X 16GB DDR3 1866
Video Card(s) Palit GeForce GTX 1070 Super JetStream 2Ghz
Storage Crucial MX100 256GB SSD + 1TB Seagate HDD
Display(s) LG 23MP68VQ-P IPS 75HZ 23" 1080p MONITOR
Case Aerocool Dead Silence Black Edition Cube
Audio Device(s) Onboard Realtek ALC892 7.1 HD Audio/Nvidia HD Audio
Power Supply OCZ Stealth X Stream II 600W
Mouse Logitech G400S | Wacom Intuos CTH-480
Keyboard A4Tech G800V Gaming Keyboard
Software Windows 7 Ultimate 64bit SP1
Benchmark Scores http://www.3dmark.com/fs/9470441
What's that :confused:
I believe it was mentioned some time ago that, with each generation, Nvidia has been improving their memory compression algorithm, and this time around they would utilize AI to compress VRAM contents. Gotta make more use of those 3rd-gen tensor cores.
 
Joined
Apr 12, 2013
Messages
3,637 (1.32/day)
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
 
Joined
Jun 10, 2014
Messages
2,210 (0.95/day)
I would have agreed if this card were on the level of the 2080 or Radeon VII, but we are talking about a card 30-40% faster than the 2080 Ti, and I believe we kind of expect the 3080 to work well for 4K gaming.
While it is true that the bottleneck due to computational speed will come at some point, with even less VRAM than the 2080 Ti I have to worry about the VRAM becoming a bottleneck sooner than the processor in this situation.
GPU memory isn't directly managed by the games, and each generation has improved memory management and compression. Nvidia and AMD also manage memory differently, so you can't just rely on specs. Benchmarks will tell if there are any bottlenecks or not.

With every generation for the past 10+ years people have raised concerns about Nvidia's GPUs having too little memory, yet time after time they've been shown to do just fine. Never forget that both Nvidia and AMD collaborate closely with game developers; they have a good idea of where the game engines will be in a couple of years.

I think at this point it's still too early to tell how much VRAM games are going to use. Future games will be developed with the PS5 and Series X architectures in mind, so games may use more VRAM than we are used to. We're also still not sure how efficient Nvidia's new tensor-core-assisted memory compression is, or how RTX IO will perform in future games.
With the consoles having 16 GB of total memory, split between the OS, the software running on the CPU, and the GPU, it's highly unlikely that those games will dedicate more than 10 GB of that to graphics.
If anything, this should mean that few games will use more than ~8 GB of VRAM for the foreseeable future at these kinds of detail levels.
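A rough budget behind that ~10 GB estimate (the reserved and CPU-side figures below are assumptions based on public reporting, not official numbers):

Code:
# 16 GB of unified memory on the new consoles, shared by everything.
total_gb    = 16.0
os_reserved = 2.5   # assumption: OS + background services
cpu_side    = 3.5   # assumption: game logic, audio, streaming buffers, etc.

graphics_budget = total_gb - os_reserved - cpu_side
print(graphics_budget)   # ~10 GB left for GPU resources
# The Series X even splits its pool physically: 10 GB of GPU-optimal memory
# plus 6 GB of slower memory, which points to the same ballpark.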

Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
Memory compression has improved with every recent architecture from Nvidia up until now. There are rumors of "tensor compression", but I haven't looked into that yet.

You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
Compression certainly helps, but it doesn't work quite that way.
Memory compression in GPUs is lossless compression, transparent to the user. As with any kind of data, the compression rate of lossless compression is tied to information density. While memory compression has become more sophisticated with every generation, it's still mostly limited to compressing "empty" data.

Render buffers with mostly sparse data are compressed very well, while textures are generally only compressed in their "empty" sections. Depending on the game, the compression rate can vary a lot: games with many render passes especially can see substantial gains, sometimes over 50% I believe, while others see <10%. So please don't think of memory compression as something that expands memory by xx %.
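To see the principle, here is a tiny sketch using zlib as a stand-in for the proprietary compression GPUs actually use; it's not the real algorithm, but it shows how lossless compression ratios depend entirely on how "empty" the data is:

Code:
# Lossless compression ratio vs. information density: an all-zero buffer
# (like a sparse render target) vs. high-entropy data (like a detailed texture).
import os
import zlib

size = 1 << 20  # 1 MiB test buffers

sparse_buffer = bytes(size)       # all zeroes: a mostly "empty" render target
noisy_texture = os.urandom(size)  # high-entropy data, stands in for dense texture content

for name, buf in (("sparse", sparse_buffer), ("noisy", noisy_texture)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")

The all-zero buffer shrinks to a fraction of a percent, while the high-entropy buffer barely compresses at all. That is roughly the gap between a sparse render target and a detailed texture, and why a single "xx % more effective memory" figure doesn't exist.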
 
Joined
Oct 24, 2004
Messages
1,237 (0.21/day)
System Name Seriously ?
Processor Core i7 4790K @ 4.4Ghz
Motherboard MSI Z97 Gaming 5
Cooling Alpenföhne Broken 120 + 2 huge coolermaster chassis fans
Memory 2x8GB DDR3 2133Mhz Crucial Ballistix
Video Card(s) Gigabyte Geforce RTX 2070
Storage 1x Kingston SA400 128GB / 1x Crucial CT1050MX300 1TB / 1x Sandisk SSD Plus 1TB
Display(s) 40" Samsung UE40ES5500
Case Cooler Master HAF932
Audio Device(s) onboard realtek audio
Power Supply Corsair AX1200
Mouse Microsoft intellimouse optical
Keyboard Logitech K270
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/vmb641
So please don't think of memory compression as something that expands memory by xx %.
But it's so much more fun to throw numbers out of nowhere, compared to your explanation.
And don't forget you can always download more RAM, if need be.
 
Joined
Apr 6, 2015
Messages
114 (0.06/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
GPU memory isn't directly managed by the games, and each generation have improved memory management and compression. Nvidia and AMD also manages memory differently, so you can't just rely on specs. Benchmarks will tell if there are any bottlenecks or not.
You are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we've agreed to hold out for AMD).
 
Joined
Jun 10, 2014
Messages
2,210 (0.95/day)
You are most likely correct, but it still gives me reason to wait and see if AMD can push a card with similar performance while providing more RAM.
If the 3080 had maybe 12-14 GB of RAM, I would have bought it on launch day (I promised it as a gift to my brother, but now we've agreed to hold out for AMD).
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
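For anyone wondering where those pairs come from, it's just chip count times per-chip density. A minimal sketch, assuming today's common GDDR6/6X densities of 1 GB or 2 GB per chip:

Code:
# Each GDDR6/GDDR6X device has a 32-bit interface, so the bus width fixes
# the number of chips; capacity then comes in multiples of per-chip density.

def balanced_configs(bus_width_bits, densities_gb=(1, 2)):
    chips = bus_width_bits // 32
    return [chips * d for d in densities_gb]

for bus in (256, 320, 384):
    print(bus, "bit ->", balanced_configs(bus), "GB")

# 256 bit -> [8, 16] GB
# 320 bit -> [10, 20] GB
# 384 bit -> [12, 24] GB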
 
Joined
Jan 31, 2011
Messages
1,806 (0.51/day)
System Name TeraUltima 7
Processor Intel Core i7 3770K @4.2Ghz
Motherboard ASRock Z68 Pro3-M
Cooling Cryorig H7
Memory Kingston Hyper X 16GB DDR3 1866
Video Card(s) Palit GeForce GTX 1070 Super JetStream 2Ghz
Storage Crucial MX100 256GB SSD + 1TB Seagate HDD
Display(s) LG 23MP68VQ-P IPS 75HZ 23" 1080p MONITOR
Case Aerocool Dead Silence Black Edition Cube
Audio Device(s) Onboard Realtek ALC892 7.1 HD Audio/Nvidia HD Audio
Power Supply OCZ Stealth X Stream II 600W
Mouse Logitech G400S | Wacom Intuos CTH-480
Keyboard A4Tech G800V Gaming Keyboard
Software Windows 7 Ultimate 64bit SP1
Benchmark Scores http://www.3dmark.com/fs/9470441
Memory compression is independent of Tensor cores, & Turing was IIRC the last time they improved upon it. Tensor cores don't help memory compression & Ampere hasn't improved upon that aspect of Turing.
It was a rumour a while back that they would tap into it, but there's not much info on it now or whether it's really true. And yeah, like efikkan mentioned, memory compression improves with every generation, tensor-assisted or not.
 
Joined
Nov 3, 2011
Messages
431 (0.13/day)
System Name Fractal Define R5 | Fractal Define R6
Processor AMD Ryzen 9 3900X | Intel Core i7-9900K @ 5 Ghz all cores
Motherboard ASUS ROG Strix X570 Gaming | MSI Z390 Gaming Pro Carbon AC
Cooling CORSAIR Hydro H115i, RGB | CORSAIR Hydro H150i RGB
Memory G.Skill Trident 32GB 3200 Mhz RGB| HyperX 32GB 3600 Mhz RGB
Video Card(s) Gigabyte RTX 2080 Windforce 8G OC| MSI RTX 2080 Ti Gaming X TRIO
Display(s) 3X Samsung 23 in LED | LG 32UL950-W 32in 4K HDR FreeSync
Case Fractal R5 tempered glass | Fractal R6 tempered glass
Audio Device(s) Creative Sound Blaster Z | Creative Sound Blaster AE-7
Power Supply Seasonic 750 watts| Seasonic 1000 watts
Mouse Bloody P95s
Keyboard Logitech G810s
Software MS Windows 10 Pro version 2004
Benchmark Scores Intel Core i7-7820X 4.5 Ghz and ASUS ROG Strix X299-E Gaming parts in storage.
Results nearly match RTX 2080 Super and RTX 3080 memory bandwidth scaling.
 
Joined
Jul 5, 2008
Messages
299 (0.07/day)
System Name Red Mist
Processor i7 5930K @ 4.5GHz
Motherboard X99-A/USB3.1
Cooling CPU:Barrow Infinity Mirror;420mm;DP-1200. GPU: Alphacool Eisblock GPX;280mm Alphacool;EK x3 DDC
Memory 4x8GB 2133 Corsair LPX @ 2666MHz 14,15,15,32
Video Card(s) XFX 5700 XT Thicc III Alphacool Eisblock GPX
Storage Sabrent Rocket 2TB, 512GB Micron (1GB read 4.3GB Write), 4TB WD Mechanical
Display(s) Acer XZ321Q (144Mhz Freesync Curved 32" 1080p)
Case Modded Cosmos 1000 Red, Tempered Glass Window, Full Frontal Mesh
Audio Device(s) Soundblaster Z
Power Supply Corsair RM 850x White
Mouse Logitech G403
Keyboard CM Storm QuickFire TK
Software Windows 10 Pro
Benchmark Scores World First 4890 Crossfire Data. Highest scoring E2160 & E4600 on CPC Leaderboard.
So where's the 3090 8k footage?
 
Joined
Jan 31, 2019
Messages
225 (0.36/day)
You forget with Nvcache and tensor compression (~20% compression) this card is effectively 12GB, so I wouldn’t worry too much
I'm not convinced by this. In a recent Hardware Unboxed video, the 1080 Ti having more VRAM than the 2080 did seem to matter. I believe the only reason is the price.
Let's wait for reviews.
 
Joined
Apr 6, 2015
Messages
114 (0.06/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
RTX 3080 "can't" have 12-14 GB. It has a 320-bit memory bus, which means the only balanced configurations are 10 GB and 20 GB. Doing something unbalanced is technically possible, but it created a lot of noise when they last did it on GTX 970.

The same goes for AMD and "big Navi™". If it has a 256-bit memory bus it will have 8/16 GB, for 320-bit: 10/20 GB, or 384-bit: 12/24 GB, etc., unless it uses HBM of course.
Oh I missed this, it makes sense. That’s also unfortunate though.
 
Joined
Jul 9, 2015
Messages
2,464 (1.28/day)
System Name My all round PC
Processor i5 750
Motherboard ASUS P7P55D-E
Memory 8GB
Video Card(s) Sapphire 380 OC... sold, waiting for Navi
Storage 256GB Samsung SSD + 2Tb + 1.5Tb
Display(s) Samsung 40" A650 TV
Case Thermaltake Chaser mk-I Tower
Power Supply 425w Enermax MODU 82+
Software Windows 10
More VRAM doesn't give you more performance.
It depends. Note the dive the 2080 takes at 4K (which also tells you why those nice DF guys ran it that way):

[attached chart]

"speed up":

[attached chart]
 