
GOD BOX 2.0

Joined
Apr 28, 2011
Messages
320 (0.06/day)
System Name VENTURI
Processor 2x AMD EPYC 9684X (192 cores / 384 threads)
Motherboard Gigabyte MZ73-LM0 Dual socket motherboard
Cooling Air, noctua, heatsinks, silent/low noise
Memory 1.5 TB 2 LRDIMM ECC REG
Video Card(s) 2x 4090 FE RTX
Storage Raid 0 Micron 9300 Max (15.4TB each / 77TB array - overprovisioned to 64TB) & 8TB OS nvme
Display(s) Asus ProArt PA32UCG-K
Case Modified P3 Thermaltake
Audio Device(s) Harman Kardon speakers / Apple
Power Supply 2050w 2050r
Mouse Mad Catz pro X
Keyboard KeyChron Q6 Pro
Software MS 2022/ 2025 Data Center Server, Ubuntu
Benchmark Scores GravityMark (high score)
The specs:
2x RTX 4090 Founders Edition
2x Intel Xeon Platinum 8280L (56 cores / 112 threads), Asus C621 Sage dual-socket motherboard
1.5 TB RAM, DDR4 ECC LRDIMMs
1600 W digital power supply
(Data drive) 4x Micron 9300 Max (12.8 TB each / 51.2 TB array) in VROC RAID 0 with the VROC Premium key
(OS drive) Sabrent Rocket 4 Plus (8 TB); backup: 4x Samsung 860 Pro (4 TB each / 16 TB array)
Asus PA32UCG-K monitor, Thermaltake SFF case, Windows Server 2022 Datacenter & Ubuntu


The build:
The CPUs run at about 41C under load, and the RAM hovers at 55C under load.
The NVMe drives run at 28C idle and 49C at the end of a full-drive copy.

I'm using 1.5 TB of RAM (LRDIMMs),
upgraded to 2x Intel Xeon 8280L processors; the L SKUs support larger memory capacities.
Dual RTX 4090 FE.
The motherboard is revision 2.1x.
BIOS 9904 with Resizable BAR.
The C drive is a Sabrent Rocket 4 Plus 8 TB (TLC).
The D drive is 4x Micron 9300 Max, 12.8 TB each (51.2 TB volume), using the U.2 connectors and the VROC Premium key.
There is a built-in backup hidden in the case (4x Samsung 860 Pro, 4 TB each), as well as a network backup to a NAS in the house.
With the VROC Premium key the RAID is controlled directly by the CPU - please see the pics of the WRITE speed of the internal array.

Running Windows Server 2022 Datacenter and Ubuntu as the operating systems.

Making it all work took a complete redesign; the result looks similar to what I had before, but a lot of the base design was changed.
The dual 4090 FEs sit several centimeters higher above the board than cards normally rest, which meant changing all the mounting points and supports; a custom bracket holds the cards stable at that height.
That extra clearance is what lets the 4x U.2 connections for the Micron 9300 NVMe RAID drives reach the board's four U.2 plugs.

The C drive now has a large pure-copper heatsink on it as well; under load it only reaches 40C, and idle is 26C.

The 4090 FEs also idle at about 29C. Under load they really haven't gone past 51C (so far);
most gameplay hovers around 41-47C, and DL/ML applications may run the cards at a max of around 51C.

The decrease in temperature comes from the additional air channel (3.3 cm) between the video cards and the motherboard. Other temperature decreases were in the CPUs, RAM, and chipset.

The machine runs very quietly.
Most fans (including the GPUs') turn off at stable temps; auto settings from the motherboard control the other fans. The CPUs idle at about 23C (73F) in a 21C (69-70F) house. Under load they may reach as high as 44C, but not often.
To accommodate the four 12.8 TB 9300 NVMe drives, I had to build two cooling fans into the design to get quiet airflow over the drives (see pics). This keeps the 51.2 TB NVMe array at around 39-41C under load and 33C at idle. The fans are quiet and invisible from the outside, but I'll include a picture of that part of the design as well.

There is a picture of a Macrium backup report showing the C drive (Sabrent Rocket 4 Plus) being READ and the backup being WRITTEN to the NVMe RAID (4x Micron 9300) at 50.6 Gb/s.
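For anyone who wants to sanity-check a number like that outside of Macrium, here's a minimal Python sketch that times a large sequential write and reports the result in both GB/s and Gb/s. The target path and file size are placeholders (not my actual layout), so point it at whatever volume you want to test and make sure there's room for the test file.

```python
import os
import time

# Placeholder path on the RAID-0 volume; adjust to your own mount point.
TARGET = r"D:\throughput_test.bin"
BLOCK = 64 * 1024 * 1024            # 64 MiB per write call
TOTAL = 32 * 1024 * 1024 * 1024     # 32 GiB test file

buf = os.urandom(BLOCK)             # incompressible data, so the SSDs can't cheat
written = 0

start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())            # make sure the data actually hit the drives
elapsed = time.perf_counter() - start

gbytes = written / 1e9
print(f"Wrote {gbytes:.0f} GB in {elapsed:.1f} s -> "
      f"{gbytes / elapsed:.2f} GB/s ({8 * gbytes / elapsed:.1f} Gb/s)")

os.remove(TARGET)
```

This only measures raw sequential throughput; a real backup job also depends on the source drive's read speed, compression, and filesystem overhead, so Macrium's number and a synthetic test won't match exactly.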

The PC runs very quietly and cool; on average, components under load are basically at body temperature (which feels organic when described that way).
I hope this answers the core questions. Here is a benchmark post to show some of the capability - still first on the leaderboard:

https://gravitymark.tellusim.com/leaderboard/

and

https://gravitymark.tellusim.com/leaderboard/?size=4k

IMG_4083.jpg

IMG_4084.jpg

IMG_4106.jpg

IMG_4094.jpg

backup.jpg

fan1.jpg

IMG_4068.jpg

10000IMG_4086.jpg


grav.jpg
 
Woo I gotta get me one of those :D

Except for the case, I don't like dust.
 
Gotta go fast
 
I wish I had a job... :roll:

I approve of the cable management... and everything :eek:
 
Very nice rig. I'm sure it both works and plays very hard. Looks sweet too... sure beats my closet blower mess lol.

Gotta go fast
If you use this rig to exclusively play Sonic games, I'm pretty sure that's hardware abuse.
 
Very nice rig. I'm sure it both works and plays very hard. Looks sweet too... sure beats my closet blower mess lol.


If you use this rig to exclusively play Sonic games, I'm pretty sure that's hardware abuse.
I do play some games but I also use it for school


To answer what I do with all these builds:

In my dissertation I am authoring algorithms to "clean up" radiology images.



In a nutshell: if a 60-minute exam acquires 100% of the image data, then acquiring less data shortens the time in the scanner, say to 30 minutes (a generic example). But less data means anomalies and visual quality drop below the standard of care (SOC). Entities use shortened exams so they can see more patients (more slots) for the millions they spent on a scanner. (Yes, that's not right.) This is driven by the US healthcare model known as fee-for-service (FFS).

So vendors sell FDA-approved DL software that compensates for the lost data, returning images to SOC while keeping utilization high with shorter exam times. The software (I loathe the term AI - loathe it) can also enhance images beyond SOC (to about 120%, depending on protocol). There isn't any higher reimbursement rate for higher-definition imaging.


However, a happier medium is to run about 60-80% of the original scan-time protocol and data, and get 115-155% of what would have been a 100% SOC image set, through better DL CNN reconstruction (involving k-space and signal-to-noise ratio (SNR) enhancements). Hopefully this will enhance patient care by delivering higher diagnostic-quality images and aiding the eventual outcome - and that's my goal.
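To make the idea a bit more concrete (this is not my actual pipeline, just a bare-bones illustration), here's a PyTorch sketch of the kind of DL CNN reconstruction step I'm describing: simulate a shortened scan by dropping a fraction of k-space rows, then train a small residual CNN to map the undersampled reconstruction back toward the fully sampled reference. The network size, the random stand-in "slices", and the 70% sampling fraction are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Tiny residual CNN: learns the detail lost to undersampling."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)

def undersample(image, keep=0.7):
    """Simulate a shortened scan: keep only a fraction of k-space rows."""
    kspace = torch.fft.fft2(image)
    mask = (torch.rand(image.shape[-2], 1, device=image.device) < keep).float()
    return torch.fft.ifft2(kspace * mask).real

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DenoiseCNN().to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    full = torch.rand(8, 1, 256, 256, device=device)  # stand-in for SOC slices
    fast = undersample(full, keep=0.7)                 # ~70% of the scan data
    loss = nn.functional.l1_loss(model(fast), full)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 20 == 0:
        print(f"step {step}: L1 loss {loss.item():.4f}")
```

Real work would use actual scanner k-space, proper data-consistency terms, and a much larger model, but the structure is the same: feed an undersampled reconstruction in, compare against the fully sampled reference, and let the CNN learn what's missing.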

This is not unique to me; vendors have been on this bandwagon for three years. I am but a student trying a few different things.

Unfortunately I am not very smart and this has been quite a learning curve for me. The industry brings new technology out and I have to recalibrate what I do. There are so many advancements and drivers in the industry. Constantly changing.

I wanted to include examples of my work (all anonymized and generic), but these are strange times, so I opted not to post a pre/post reconstruction bank of 1,300 slices / an enhanced image set in this forum - even just to show what one can do with video cards.

I love video games, but it's pretty impressive what can be done with GPUs - beyond gaming.

I hope that helps

If you are really interested in this, may I suggest looking at what nVidia is doing in this space:

https://developer.nvidia.com/indust...Clara?,delivery and accelerate drug discovery.
 
Do you live in a clean room?

I don't know why anyone would use an open case like this. All I can think about is how dusty it would be, how my dog would lick my ram, and how my kid would stick his finger in a fan.
 
Do you live in a clean room?

I don't know why anyone would use an open case like this. All I can think about is how dusty it would be, how my dog would lick my ram, and how my kid would stick his finger in a fan.
He probably lives in a cleaner room than yours.

(and mine, no shame)
 
I do play some games but I also use it for school

To answer what I do with all these builds: in my dissertation I am authoring algorithms to "clean up" radiology images. [...]

If you are really interested in this, may I suggest looking at what nVidia is doing in this space:

https://developer.nvidia.com/indust...Clara?,delivery and accelerate drug discovery.
This is similar to my summer work at the data science building at Swansea medical school: we use deep learning to predict health outcomes from medical history and scans, improving access to appropriate care, using the SAIL anonymised medical databank and IBM software. The objective is to have DL techniques and algorithms do 80% of the work, with analysts cleaning up the outputs.
 
Do you live in a clean room?

I don't know why anyone would use an open case like this. All I can think about is how dusty it would be, how my dog would lick my ram, and how my kid would stick his finger in a fan.
Well... I try to keep it clean.

But it's also the only way I could have a dense build that runs cool and quiet.

The former rig's dual 3090 Tis have found a home in my wife's PC (of similar design), also in an SFF P1 case:
wifePC.jpg



It is an obvious mini-me of mine:
IMG_4083.jpg
 
He probably lives in a cleaner room than yours.

(and mine, no shame)

I have carpets and a dog. Dust is unavoidable lol.
 
Looks like you have connections and knowledge in the related fields.

What are the current industry pain points and bottlenecks for getting to a correct diagnosis faster, and, let's say, for new drugs and treatments, including CRISPR-delivered ones?

Is the bottleneck the GPUs, or the researchers making the algorithms? What could be done better with bioinformatics, computational power, and machine learning?
 
it's covered in dust again after 5 minutes. F.

I use a HEPA-filter vacuum cleaner to sweep my room. I even sweep the ceiling with the hose attachment. It takes like an extra 3 minutes?

I then use a wet cloth to wipe down surface areas,

and I make sure I wash my curtains every 3-5 months and my blankets every month. Dust is really not an issue for me.

I also run a 99.97% HEPA air purifier in my room; that helps a lot because it runs 24/7.

I agree it can be annoying, though.

Sorry - I started this in the video section, but like the prior build, it belongs in builds/mods.

Ashes of the Singularity never ran this fast:
View attachment 268232

I really want to see your points per day in Folding@home. :D

Fold for the TPU team!
 
By Jupiter, can I join your church? That thing deserves worship. :) Well done, bud.
 
Ladies and gentlemen, there are more transistors in this GOD BOX than neurons in your entire human brain :clap:
 
Yeah but they are limited by software he he he he. So my brain still wins:toast:
Ladies and gentlemen, there are more transistors in this GOD BOX than neurons in your entire human brain :clap:
 
He probably lives in a cleaner room than yours.

(and mine, no shame)
You, frog, live in a lagoon :D:love: As for mine, I think kangaroos and buffaloes would feel quite at home if they were in my room.

Kid Fail GIF


Yeah but they are limited by software he he he he. So my brain still wins:toast:
Can you look at 1,000 brain tumor images, learn what they look like, and then tell me which of 5 different tumors is which? No, you can't...
 
Since SLI is no longer supported, what do you use the second GPU for?

Nice build.
 
Since SLI is no longer supported, what do you use the second GPU for?

Nice build.
Hint: it's not for gaming.

You can pool GPU resources in professional software.
 
Since SLI is no longer supported, what do you use the second GPU for?

Nice build.
While I can play some games with both GPUs using my custom (and painful) auto-diff SLI workaround, generally speaking it's not worth it; however, even with ONLY one 4090 (sigh) it's no slouch.
It also helps to have 112 threads, 50+ Gb/s drive speed, and OS customizations to optimize RAM, nodes, etc. So in general, it just "feels" fast.

The rig is used for deep learning and my dissertation.

In deep learning I use 100% of both cards and still need more power, so that's the real answer:

it comes down to the use case.
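For anyone wondering what "using 100% of both cards" looks like without SLI: frameworks like PyTorch shard the work across the GPUs themselves. Below is a minimal sketch using DataParallel (the simplest option; DistributedDataParallel scales better); the toy model and batch sizes are illustrative, not my actual training code.

```python
import torch
import torch.nn as nn

# Toy model; in practice this would be the reconstruction network.
model = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

# Wrap the model so each forward pass splits the batch across both cards.
if torch.cuda.device_count() >= 2:
    model = nn.DataParallel(model, device_ids=[0, 1])
model = model.cuda()

opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(10):
    x = torch.rand(32, 1, 256, 256, device="cuda:0")   # batch is split 16/16
    target = torch.rand_like(x)
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"Training used {torch.cuda.device_count()} GPU(s)")
```

No SLI bridge or driver profile is involved; the framework copies the model to each card, runs part of the batch on each, and combines the gradients, which is why both GPUs sit at 100% during training.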
 