
Micron Confirms Next-Gen NVIDIA Ampere Memory Specifications - 12 GB GDDR6X, 1 TB/s Bandwidth

Joined
Sep 11, 2015
Messages
624 (0.20/day)
There aren't any games on the market that benefit from more than 8-9 gigs even in 4K. It's been tested to death already. I'm almost 100% sure that 12 will be plenty for as long as these cards are actually usable for comfortable 4K gaming (the next 4 years at most, if we're lucky).
But of course people will fool themselves into "futureproofing" argument again, compounded by AMD once again loading cards unsuitable for 4K-60 with VRAM nobody needs. We're doing this thing every launch cycle!
Nice jab at AMD. Since when are we doing 4K-60 every launch cycle? Really, really weird argument.
 
Joined
Jun 10, 2014
Messages
2,889 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
That's a silly question, just for fun, but those VRAM levels really make me wonder. Could the main system THEORETICALLY access an unused chunk of VRAM to use as main RAM? I know the opposite (using system RAM as an extension of GPU RAM) is hampered by the fact that system RAM is in general bog slow compared to VRAM, plus the bandwidth bottleneck of PCIe. But VRAM is fast. Could VRAM be used as system RAM, in theory? Other than reasons of common sense, it being much more expensive and the whole thing being impractical and, well, stupid, is it technically doable?
You could, in theory, use pretty much any storage medium as an extension of RAM.
On a dedicated graphics card, VRAM is a separate address space, but with a modified driver you should be able to have full access to it.

The reason why swapping VRAM to RAM is slow is not bandwidth, but latency. Data has to travel both the PCIe bus and the system memory bus, with a lot of syncing etc.
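To put rough numbers on the latency point (ballpark assumed figures, not measurements), here's a quick back-of-envelope comparing strictly dependent cache-line accesses to local RAM versus VRAM reached over PCIe:

```python
# Back-of-envelope: why latency, not bandwidth, makes VRAM-as-system-RAM slow.
# The latencies below are assumed order-of-magnitude values for illustration.

CACHE_LINE_BYTES = 64
DRAM_LATENCY_NS = 80       # CPU -> local system RAM, typical ballpark
PCIE_ROUNDTRIP_NS = 1000   # CPU -> PCIe -> VRAM -> back, incl. syncing overhead

def dependent_access_gbs(latency_ns: float) -> float:
    """Effective GB/s when each 64-byte access must wait for the previous one."""
    return CACHE_LINE_BYTES / latency_ns  # bytes per ns == GB/s

print(f"local RAM:      {dependent_access_gbs(DRAM_LATENCY_NS):.2f} GB/s")    # ~0.80
print(f"VRAM over PCIe: {dependent_access_gbs(PCIE_ROUNDTRIP_NS):.2f} GB/s")  # ~0.06
```

Note that the raw link bandwidth never even enters the picture; the round trip dominates.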
 

iO

Joined
Jul 18, 2012
Messages
526 (0.12/day)
Location
Germany
Processor R7 5700x
Motherboard MSI B450i Gaming
Cooling Accelero Mono CPU Edition
Memory 16 GB VLP
Video Card(s) AMD RX 6700 XT
Storage P34A80 512GB
Display(s) LG 27UM67 UHD
Case none
Power Supply SS G-650
1300 MHz memory clock with PAM4 encoding, wide interface and even clamshell mode. That surely will be a major pita for the PCB design...
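As a rough sanity check of how those leaked numbers land on the ~1 TB/s headline (the clock-to-data-rate factor below is the usual GDDR6-style assumption, not a confirmed spec):

```python
# How ~1300 MHz + PAM4 + a 384-bit bus lands on ~1 TB/s.
# The symbols-per-clock factor is an assumed GDDR6-style relationship.

mem_clock_mhz = 1300     # memory (CK) clock from the leak
symbols_per_clock = 8    # WCK at 4x CK, double data rate -> 8 symbols per CK
bits_per_symbol = 2      # PAM4: four voltage levels encode 2 bits per symbol
bus_width_bits = 384     # e.g. 12 GDDR6X chips on 32-bit channels

gbps_per_pin = mem_clock_mhz / 1000 * symbols_per_clock * bits_per_symbol
total_gbs = gbps_per_pin * bus_width_bits / 8

print(f"{gbps_per_pin:.1f} Gbps per pin")  # ~20.8 Gbps
print(f"{total_gbs:.0f} GB/s total")       # ~998 GB/s, i.e. the ~1 TB/s figure
```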
 
Joined
Feb 11, 2009
Messages
5,389 (0.98/day)
System Name Cyberline
Processor Intel Core i7 2600k -> 12600k
Motherboard Asus P8P67 LE Rev 3.0 -> Gigabyte Z690 Aorus Elite DDR4
Cooling Tuniq Tower 120 -> Custom watercooling loop
Memory Corsair (4x2) 8gb 1600mhz -> Crucial (8x2) 16gb 3600mhz
Video Card(s) AMD RX480 -> ... nope still the same :'(
Storage Samsung 750 Evo 250gb SSD + WD 1tb x 2 + WD 2tb -> 2tb NVMe SSD
Display(s) Philips 32inch LPF5605H (television) -> Dell S3220DGF
Case antec 600 -> Thermaltake Tenor HTPC case
Audio Device(s) Focusrite 2i4 (USB)
Power Supply Seasonic 620watt 80+ Platinum
Mouse Elecom EX-G
Keyboard Rapoo V700
Software Windows 10 Pro 64bit
What benefits would that have? How would that benefit card makers?


I know it can be used for more things, but that doesn't change my point. At all. :)

More than 12-16GB is really just wasting cash right now. It's like buying an AMD processor with 12c/24t... if gaming only uses at most 6c/12t (99.9% of the time), why buy more NOW?

Again, devs have headroom to play with already... they aren't even using that.

Yeah, because you can't use what isn't there; the hardware needs to exist for clients to be able to make use of the software they create.
Why would a company make a game now that requires DirectX 14, which no hardware supports? Nobody would be able to buy it.
 

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
26,956 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,001 (2.26/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/5za05v
That's a silly question, just for fun, but those VRAM levels really make me wonder. Could the main system THEORETICALLY access an unused chunk of VRAM to use as main RAM? I know the opposite (using system RAM as an extension of GPU RAM) is hampered by the fact that system RAM is in general bog slow compared to VRAM, plus the bandwidth bottleneck of PCIe. But VRAM is fast. Could VRAM be used as system RAM, in theory? Other than reasons of common sense, it being much more expensive and the whole thing being impractical and, well, stupid, is it technically doable?
Consoles are doing this, but current PC design doesn't allow for it. Also keep in mind that without a memory controller in the CPU that can recognise the memory type used by the GPU, it's unlikely it would work, except perhaps as some kind of virtual cache or something, but not as system memory.

Not relevant for PCB design? Honest question
Have a look here; it's not related to GPUs, but it explains the PCB design-related issues.
 

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
Consoles are doing this, but current PC design doesn't allow for it. Also keep in mind that without a memory controller in the CPU that can recognise the memory type used by the GPU, it's unlikely it would work, except perhaps as some kind of virtual cache or something, but not as system memory.


Have a look here; it's not related to GPUs, but it explains the PCB design-related issues.


Can a modern CPU actually utilise 1 TB/s of throughput from its memory subsystem efficiently?
 

iO

Joined
Jul 18, 2012
Messages
526 (0.12/day)
Location
Germany
Processor R7 5700x
Motherboard MSI B450i Gaming
Cooling Accelero Mono CPU Edition
Memory 16 GB VLP
Video Card(s) AMD RX 6700 XT
Storage P34A80 512GB
Display(s) LG 27UM67 UHD
Case none
Power Supply SS G-650
Not relevant for PCB design? Honest question
A 384-bit interface needs ~400 data pins, and that leaked PCB suggests a really cramped layout with the memory ICs very close together. The crosstalk and interference you get at GHz speeds is immense, and then having to differentiate between 4 voltage levels instead of just 2 is quite challenging if you want a nice clean eye.

[Attachment: Eye.png – eye diagram]
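For anyone wondering what differentiating between four voltage levels actually means, here's a toy PAM4 mapping (the Gray-coded level assignment is purely illustrative; real GDDR6X signalling is more involved):

```python
# Toy PAM4 encoder: 2 bits per symbol mapped to 4 voltage levels.
# Gray coding means adjacent levels differ by one bit, limiting the damage
# of a misread level. With 4 levels, each eye opening is roughly a third
# of the NRZ swing, which is why the eye diagram gets so much harder.

LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}  # Gray-coded (illustrative)

def pam4_encode(data: bytes) -> list[int]:
    """Split each byte into four 2-bit symbols, MSB first."""
    return [LEVELS[(byte >> shift) & 0b11]
            for byte in data
            for shift in (6, 4, 2, 0)]

print(pam4_encode(b"\xC5"))  # 0b11_00_01_01 -> [+1, -3, -1, -1]
```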
 
Joined
Apr 29, 2018
Messages
127 (0.06/day)
Or maybe 8GB on a 480 was more epeen than useful? That thing was a 1080p card, barely 1440, so 8GB on such a thing wasn't warranted IMO.
I see you still fall for the ignorant myth that resolution is the main determining factor for VRAM. In reality there's actually only a small difference in most games, even going from 1080p to 4K. And textures have almost zero impact on performance if you have enough VRAM, so yes, 8 gigs was perfectly usable on that card in plenty of games, as 4 gigs would have been a limitation and caused some stutters.
 
Joined
May 15, 2014
Messages
235 (0.07/day)
1300 MHz memory clock with PAM4 encoding, wide interface and even clamshell mode. That surely will be a major pita for the PCB design...
Don't forget the additional cost of back drilling. Cheaper than blind vias, though. Looks like they used some Mellanox tech/ideas. I wonder if the PAM4 transceiver logic is in the memory controllers or in a separate PHY...

Not relevant for PCB design? Honest question
Very relevant, given the trade-off of SNR (hence the very close placement and orientation of the GDDR6X to the GPU, with the shortest possible traces) for the gain in bandwidth. You need a very clean board (electrically: RF-grade PCB layout and components).

Edit:
Going back to 1 GB per chip explains why the presumed picture of the RTX 30x0 (21x0?) card has so many memory chips. Even if it's "only" 12, they're going to take up a lot of physical space until 2 GB chips arrive at some point. I guess the next Super cards might have half the number of memory chips...

Huh, a 12GB Titan RTX? Never saw anything like that. The Titan RTX was 24GB VRAM. Guess the VRAM capacity is just a placeholder.
Higher tier, higher VRAM capacity SKUs will use slower GDDR6 with higher density modules.
 
Joined
May 19, 2016
Messages
67 (0.02/day)
I'm having nightmares thinking about buying an NVIDIA GPU with Micron memory; my first 2080 Ti had it and it never OC'd even +1 MHz on core/memory.


I hope Samsung is doing the 3090 VRAM chips.
 

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
16,001 (2.26/day)
Location
Sweden
System Name Overlord Mk MLI
Processor AMD Ryzen 7 7800X3D
Motherboard Gigabyte X670E Aorus Master
Cooling Noctua NH-D15 SE with offsets
Memory 32GB Team T-Create Expert DDR5 6000 MHz @ CL30-34-34-68
Video Card(s) Gainward GeForce RTX 4080 Phantom GS
Storage 1TB Solidigm P44 Pro, 2 TB Corsair MP600 Pro, 2TB Kingston KC3000
Display(s) Acer XV272K LVbmiipruzx 4K@160Hz
Case Fractal Design Torrent Compact
Audio Device(s) Corsair Virtuoso SE
Power Supply be quiet! Pure Power 12 M 850 W
Mouse Logitech G502 Lightspeed
Keyboard Corsair K70 Max
Software Windows 10 Pro
Benchmark Scores https://valid.x86.fr/5za05v
I'm having nightmares thinking about buying an NVIDIA GPU with Micron memory; my first 2080 Ti had it and it never OC'd even +1 MHz on core/memory.


I hope Samsung is doing the 3090 VRAM chips.
Don't expect GDDR6X to overclock even 1 MHz either, as they've seemingly already pushed the current technology to its limit.

Higher tier, higher VRAM capacity SKUs will use slower GDDR6 with higher density modules.
Speculation or do you know this for a fact? I mean, if the GPU has a wider bus it might not make a huge difference, but then why push for super high-speed memory on the lower tier cards? This doesn't quite add up.
 
Joined
Mar 10, 2010
Messages
11,878 (2.31/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
You do realize that you have absolutely no access to such data? Unless you consider Reddit drama data.

You could look at the current 5% return rate for the 2080 Ti that Mindfactory published last week, the highest of any single SKU.

And that's now that the memory issues are sorted.

Time will tell though.
 
Joined
Dec 31, 2009
Messages
19,366 (3.72/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
I see you still fall for the ignorant myth that resolution is the main determining factor for VRAM. In reality there's actually only a small difference in most games, even going from 1080p to 4K. And textures have almost zero impact on performance if you have enough VRAM, so yes, 8 gigs was perfectly usable on that card in plenty of games, as 4 gigs would have been a limitation and caused some stutters.
Welp... I was talking performance in what you quoted. I also recall conceding your point earlier in this thread (before your post) anyway. ;)

I know it can be used for more things, but that doesn't change my point. At all. :)
 
Joined
May 15, 2014
Messages
235 (0.07/day)
Speculation or do you know this for a fact?
Everything is speculation until launch.

why push for super high-speed memory on the lower tier cards?
You have no choice given competition, product stack & GDDR6X availability. A lot also depends on yields.

This doesn't quite add up.
Let's speculate. If GDDR6X is currently only available in 1GB×32 modules, you need 12 for an appropriately sized (384-bit) top-tier SKU, so 12GB, or 24GB back-to-back. If the competition has a 16GB frame buffer (even if inferior), you look bad. So 24GB it is. Is 24GB enough for a next-gen pro/ws/halo card? Halo maybe, but not pro/ws. GDDR6 is available as 2GB×32 → 48GB @ 17 Gbps = >800 GB/s, which is plenty for full-die, lower-clocked, primarily non-gaming cards, which can't have less frame buffer than their predecessors or lower-tier pleb cards... If I were to speculate further, I might suggest only the 3080/90 and perhaps the top SKU GA104 get GDDR6X. Everything above and below gets standard GDDR6 (for obviously different reasons). That leaves plenty of scope for a Super refresh in 2021, as per your earlier post.
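The arithmetic behind that speculation, made explicit (module densities and speeds are the assumed values from the post, not confirmed specs):

```python
# Capacity/bandwidth arithmetic for the speculated memory configurations.
# Module densities and per-pin speeds are the post's assumed values.

def config(bus_bits: int, gb_per_chip: int, gbps: float, clamshell: bool = False):
    chips = (bus_bits // 32) * (2 if clamshell else 1)  # 32-bit channels per chip
    capacity_gb = chips * gb_per_chip
    bandwidth_gbs = gbps * bus_bits / 8  # clamshell shares channels: no BW gain
    return capacity_gb, bandwidth_gbs

print(config(384, 1, 21))        # (12, 1008.0) - 12 GB GDDR6X, ~1 TB/s
print(config(384, 1, 21, True))  # (24, 1008.0) - 24 GB clamshell GDDR6X
print(config(384, 2, 17, True))  # (48, 816.0)  - 48 GB GDDR6 @ 17 Gbps, >800 GB/s
```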
 
Joined
Oct 12, 2005
Messages
681 (0.10/day)
The main thing with memory is that you always want to have enough. Unlike, say, a GPU or CPU that is a bit too slow and will give you low but mostly steady fps, not having enough RAM can be disastrous for performance.


It's indeed way worse if you don't have enough system RAM, as you have to swap, but it's still bad on a GPU too, as the latency of system RAM (even if you have enough) is too high for smooth gameplay.

You want your GPU to have everything it needs for the next 5 or so seconds.

Still today, one of the low-hanging fruits for better visual quality without too much extra computation is better and more textures. RAM can also be used to store temporary data that will be reused later in the rendering or in future frames.

CGI can use way more than 12 GB of textures.

But quantity is not the only thing to consider. Speed is just as important. There's no point in having a 2 TB SSD as GPU memory.

But in the future, GPUs might have an SSD slot on board to be used as an asset cache (like some Radeon Pro cards already have).

The key here is the memory bus width. I think 16 GB would have been perfect, but without a hack like the 970's, that would be impossible, and I'm not sure they want to do that on a flagship GPU.
 
Joined
Jun 10, 2014
Messages
2,889 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
The main thing with memory is that you always want to have enough. Unlike, say, a GPU or CPU that is a bit too slow and will give you low but mostly steady fps, not having enough RAM can be disastrous for performance.

It's indeed way worse if you don't have enough system RAM, as you have to swap, but it's still bad on a GPU too, as the latency of system RAM (even if you have enough) is too high for smooth gameplay.

You want your GPU to have everything it needs for the next 5 or so seconds.

Still today, one of the low-hanging fruits for better visual quality without too much extra computation is better and more textures. RAM can also be used to store temporary data that will be reused later in the rendering or in future frames.

But quantity is not the only thing to consider. Speed is just as important. There's no point in having a 2 TB SSD as GPU memory.
If by "speed" you mean bandwidth, then no bandwidth is ever going to be enough to use a different storage medium as VRAM. The problem is latency, and to handle latency, a specialized caching algorithm needs to be implemented into the game engine.
Most implementations of resource streaming so far has relied on fetching textures based in GPU feedback, which results in "texture popping" and stutter. This approach is a dead end, and will never work well. The proper way to solve it is to prefetch resources into a cache, which is not a hard task for most games, but still something which needs to be tailored to each game. The problem here is that most game developers uses off-the-shelf universal game engines and write no low-level engine code at all.
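A minimal sketch of what such prefetch-based streaming could look like; every name here (load_texture, assets_near, and so on) is hypothetical, but it shows the principle of fetching ahead of use with a fixed VRAM budget:

```python
# Minimal sketch of prefetch-based texture streaming with an LRU budget.
# All engine-facing names are hypothetical; the principle is to fetch what
# the player is *about* to need, instead of reacting to GPU feedback.

from collections import OrderedDict
from typing import Callable

class TextureCache:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.used_mb = 0
        self.resident = OrderedDict()  # tex_id -> size_mb, in LRU order

    def prefetch(self, tex_id: str, size_mb: int,
                 load: Callable[[str], None]) -> None:
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)   # refresh LRU position
            return
        while self.used_mb + size_mb > self.budget_mb:
            _, evicted = self.resident.popitem(last=False)  # drop least recent
            self.used_mb -= evicted
        load(tex_id)                            # async disk read + VRAM upload
        self.resident[tex_id] = size_mb
        self.used_mb += size_mb

# Per frame, warm everything the camera is about to reach, e.g.:
#   for tex in assets_near(camera.pos + camera.vel * LOOKAHEAD_SECONDS):
#       cache.prefetch(tex.id, tex.size_mb, engine.load_texture)
```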

But in the future, GPUs might have an SSD slot on board to be used as an asset cache (like some Radeon Pro cards already have).
It may, but I'm very skeptical about the usefulness of this. It will be yet another special feature which practically no game engines will implement well, and yet another "gimmick" for game developers to develop for and a QA nightmare.

But most importantly, it's hardware which serves no need. Resource streaming is not hard to implement at the render engine level. As long as you prefetch well, it's no problem to stream even from an HDD, decompress, and then load into VRAM.
 
Joined
Oct 12, 2005
Messages
681 (0.10/day)
Hmm, I never said to replace VRAM with other stuff (quite the opposite, actually...)

As for an SSD texture cache, time will tell. It's already a game changer on pro cards for offline rendering. SSDs are becoming cheap, and adding 1 TB to a $600 video card might end up being a minor portion of the total price.

There are advantages to having it on the GPU: lower latency, and the GPU could handle the I/O instead of the CPU, so the bandwidth between the CPU and the GPU could be used for something else. The GPU could also use it to store many things, like more mipmaps, more levels of detail for geometry, etc. The Unreal Engine 5 demo showed what you can do with a more detailed working set.

And I think the fact that most game studios use third-party engines is better for technology adoption than the opposite. The big engine makers have to work on getting the technology into the engine to stay competitive, and the studios only have to use it while they create their games.

Time will tell.
 
Joined
Oct 3, 2019
Messages
136 (0.08/day)
Processor Ryzen 3600
Motherboard MSI X470 Gaming Plus Max
Cooling stock crap AMD wraith cooler
Memory Corsair Vengeance RGB Pro 16GB DDR4-3200MHz
Video Card(s) Sapphire Nitro RX580 8GBs
Storage Adata Gammix S11 Pro 1TB nvme
Case Corsair Caribide Air 540
Hmm, I never said to replace VRAM with other stuff (quite the opposite, actually...)

As for an SSD texture cache, time will tell. It's already a game changer on pro cards for offline rendering. SSDs are becoming cheap, and adding 1 TB to a $600 video card might end up being a minor portion of the total price.

There are advantages to having it on the GPU: lower latency, and the GPU could handle the I/O instead of the CPU, so the bandwidth between the CPU and the GPU could be used for something else. The GPU could also use it to store many things, like more mipmaps, more levels of detail for geometry, etc. The Unreal Engine 5 demo showed what you can do with a more detailed working set.

And I think the fact that most game studios use third-party engines is better for technology adoption than the opposite. The big engine makers have to work on getting the technology into the engine to stay competitive, and the studios only have to use it while they create their games.

Time will tell.

Could that be how the PC ends up competing with PS5's hyped SSD system?
 
Joined
Jun 10, 2014
Messages
2,889 (0.81/day)
Processor AMD Ryzen 9 5900X ||| Intel Core i7-3930K
Motherboard ASUS ProArt B550-CREATOR ||| Asus P9X79 WS
Cooling Noctua NH-U14S ||| Be Quiet Pure Rock
Memory Crucial 2 x 16 GB 3200 MHz ||| Corsair 8 x 8 GB 1333 MHz
Video Card(s) MSI GTX 1060 3GB ||| MSI GTX 680 4GB
Storage Samsung 970 PRO 512 GB + 1 TB ||| Intel 545s 512 GB + 256 GB
Display(s) Asus ROG Swift PG278QR 27" ||| Eizo EV2416W 24"
Case Fractal Design Define 7 XL x 2
Audio Device(s) Cambridge Audio DacMagic Plus
Power Supply Seasonic Focus PX-850 x 2
Mouse Razer Abyssus
Keyboard CM Storm QuickFire XT
Software Ubuntu
As for an SSD texture cache, time will tell. It's already a game changer on pro cards for offline rendering. SSDs are becoming cheap, and adding 1 TB to a $600 video card might end up being a minor portion of the total price.

There are advantages to having it on the GPU: lower latency, and the GPU could handle the I/O instead of the CPU, so the bandwidth between the CPU and the GPU could be used for something else. The GPU could also use it to store many things, like more mipmaps, more levels of detail for geometry, etc. The Unreal Engine 5 demo showed what you can do with a more detailed working set.
While having an SSD locally on the graphics card will certainly be faster than fetching from the SSD storing the game, it will still have a couple of orders of magnitude more latency than VRAM, so data still needs to be prefetched carefully. In reality, it doesn't solve any problem.

While SSDs of varying quality are cheap these days*, having one on the graphics card introduces a whole host of new problems. Firstly, if this is going to be some kind of "cache" for games, then it needs to be managed somehow, and the user would probably have to go in and delete or prioritize it for various games. Secondly, having the data on a normal SSD and going through RAM offers a huge benefit: you can apply lossy compression (probably ~10-20x), decompress it on the CPU and send the decompressed data to the GPU. This way, each large game wouldn't take up >200 GB. To do the same on the graphics card would require even more unnecessary dedicated hardware. A graphics card should do graphics, not everything else.

*) Be aware that most "fast" SSDs only have a tiny SLC cache that's actually fast, backed by large TLC/QLC storage which is much slower.
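Rough numbers for that compression argument, using the assumed ~200 GB asset set and the 10-20x ratios from the post above:

```python
# Why shipping compressed assets and decompressing on the CPU matters.
# The 200 GB working set and 10-20x ratios are the post's assumed figures.

uncompressed_assets_gb = 200
for ratio in (10, 20):
    print(f"{ratio:>2}x lossy compression -> "
          f"{uncompressed_assets_gb / ratio:.0f} GB on disk")
# Only the small compressed stream crosses the SSD -> RAM path; the CPU
# decompresses it and the GPU receives ready-to-use texture data.
```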

And I think the fact that most game studios use third-party engines is better for technology adoption than the opposite. The big engine makers have to work on getting the technology into the engine to stay competitive, and the studios only have to use it while they create their games.
These big universal engines leave very little room to utilize hardware well. There are barely any games properly written for DirectX 12 yet; how do you imagine more exotic gimmicks will end up?
Every year that passes, these engines get more bloated. More and more generalized rendering code means less efficient rendering. The software is really lagging behind the hardware.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Samsung-equipped cards died too... it wasn't just Micron. ;)

Micron does stack launch issues with their new GDDR stuff. Pascal also had a mandatory update due to instability on GDDR5X. Additionally, it's well known that Samsung chips are better clockers for the past gen(s).
 
Joined
Dec 31, 2009
Messages
19,366 (3.72/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Micron does stack launch issues with their new GDDR stuff. Pascal also had a mandatory update due to instability on GDDR5X. Additionally, it's well known that Samsung chips are better clockers for the past gen(s).
Sorry... what does "stack launch issues" mean?

I don't recall issues with GDDR5X... link me so I can see? :)

Overclocking is also beyond my purview, but that is true. My point was simply to clarify that both Micron- and Samsung-equipped cards had the same issue.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64