
GIGABYTE Expands AMD EPYC Family with New Density Optimized Server

GIGABYTE continues our active development of new AMD EPYC platforms with the release of the 2U 4-node H261-Z60, the first AMD EPYC variant of our Density Optimized Server Series. The H261-Z60 combines 4 individual hot-pluggable sliding node trays into a 2U server chassis. The node trays slide in and out easily from the rear of the unit.

EPYC Performance
Each node supports dual AMD EPYC 7000 series processors with up to 32 cores, 64 threads and 8 channels of memory per CPU, so each node can feature up to 64 cores and 128 threads of compute power. Memory-wise, each socket utilizes EPYC's 8 memory channels with 1 DIMM per channel (8 DIMMs per socket), for a total of 16 DIMMs per node (over 2 TB of memory supported per node).
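Those spec figures multiply out straightforwardly; a quick sanity-check sketch of the per-node and per-chassis totals (the 128 GB DIMM size is an assumption chosen to reach the "over 2 TB per node" figure, not a number from GIGABYTE):

```python
# Per-node and per-chassis math for the H261-Z60 (2U, 4 nodes).
SOCKETS_PER_NODE = 2
CORES_PER_CPU = 32             # top EPYC 7000-series SKU
THREADS_PER_CORE = 2
CHANNELS_PER_SOCKET = 8        # 1 DIMM per channel
DIMM_SIZE_GB = 128             # assumption: 128 GB DIMMs
NODES = 4

cores_per_node = SOCKETS_PER_NODE * CORES_PER_CPU              # 64
threads_per_node = cores_per_node * THREADS_PER_CORE           # 128
dimms_per_node = SOCKETS_PER_NODE * CHANNELS_PER_SOCKET        # 16
memory_per_node_gb = dimms_per_node * DIMM_SIZE_GB             # 2048

print(f"Per 2U chassis: {NODES * cores_per_node} cores, "
      f"{NODES * threads_per_node} threads, "
      f"{NODES * memory_per_node_gb // 1024} TB RAM")
# → Per 2U chassis: 256 cores, 512 threads, 8 TB RAM
```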

This maximum compute density can shrink data center footprints by up to 50% compared with standard 1U dual-socket servers. And GIGABYTE has recently demonstrated that our server design is perfectly optimized for AMD EPYC by achieving some of the top scores of the SPEC CPU 2017 benchmark for AMD EPYC single-socket and dual-socket systems.

R151-Z30 achieved highest SPEC CPU 2017 performance benchmark for single-socket AMD Naples platform vs other vendors as of May 2018

R181-Z91 achieved second highest SPEC CPU 2017 performance benchmark for dual-socket AMD Naples platform vs other vendors as of May 2018

Ultra-Fast Storage Support
In the front of the unit are 24 x 2.5" hot-swappable drive bays, offering a capacity of 6 x HDD or SSD SATA / SAS storage drives per node. In addition, each node features dual M.2 ports (PCIe Gen3 x 4) to support ultra-fast, ultra-dense NVMe flash storage devices. Dual M.2 support is double the capacity of competing products on the market.

Best-In-Class Expansion Flexibility
Dual 1GbE LAN ports are integrated into each node as a standard networking option. In addition, each node features 2 x half-length low profile PCIe Gen3 x 16 slots and 1 x OCP Gen3 x 16 mezzanine slot for adding expansion options such as high-speed networking or RAID storage cards. GIGABYTE delivers best-in-class expansion slot options for this form factor.

Easy & Efficient Multi-Node Management
The H261-Z60 features a system-wide Aspeed CMC (Central Management Controller) and LAN module switch, connecting internally to the Aspeed BMC integrated on each node. As a result, only one MLAN connection is required to manage all four nodes, meaning less ToR (Top of Rack) cabling and fewer ports needed on your top-of-rack switch (one port instead of four for remote management of all nodes).

Ring Topology Feature for Multi-Server Management
Going a step further, the H261-Z60 also features the ability to create a "ring" connection for management of all servers in a rack. Only two switch connections are needed, while the servers are connected to one another in a chain. The ring will not be broken even if one server in the chain is shut down. This further reduces cabling and switch port usage for even greater cost savings and management efficiency.

Optional Ring Topology Kit must be added
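As a rough illustration of why the ring survives a single failure, here is a toy connectivity check: the rack is modeled as a daisy chain of server management ports with both ends of the chain uplinked to the switch, then one server is powered off. The topology and node names are hypothetical, purely for illustration, not GIGABYTE's actual firmware behavior:

```python
from collections import deque

def reachable(adjacency, start):
    """BFS: return the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in adjacency[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def ring_rack(n_servers):
    """Daisy chain of server BMCs with both chain ends uplinked to the switch."""
    adjacency = {"switch": []}
    chain = [f"srv{i}" for i in range(n_servers)]
    for a, b in zip(["switch"] + chain, chain + ["switch"]):
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)
    return adjacency

adj = ring_rack(8)
adj["srv3"] = []                 # power off one server in the chain
for neighbors in adj.values():   # drop links into the dead node too
    if "srv3" in neighbors:
        neighbors.remove("srv3")

alive = reachable(adj, "switch")
print(sorted(alive - {"switch"}))  # every other server is still reachable
```

Because the switch sits at both ends of the chain, cutting any single link (or node) leaves two segments that each still reach the switch.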

Efficient Power & Cooling
GIGABYTE's H261-Z60 is designed not only for greater compute density but also with better power and cost efficiency in mind. The system architecture features shared cooling and power for the nodes, with a dual fan wall of 8 (4 x 2) easy-swap fans and 2 x 2200W redundant PSUs. In addition, the nodes connect directly to the system backplane with GIGABYTE's Direct Board Connection Technology, resulting in less cabling and improved airflow for better cooling efficiency.

GIGABYTE's unrivalled expertise and experience in system design leverages AMD EPYC's strengths to offer our customers a product precisely targeted at their need for maximum compute resources in a limited footprint, with excellent expansion choices, management functionality, and power and cooling efficiency.

Please visit http://b2b.gigabyte.com for more information on our complete product range.

 
That's some serious density with respect to x86 cores... so 256 cores/512 threads per 2U enclosure? Crazy, and 16TB of RAM per 2U enclosure? Isn't Epyc 2 going to have 64-core processors? So then it'd be 512 cores per 2U?
 
Wow, that is indeed EPIC. I wonder if there's a quad-socket board out there for EPYC? I know it goes a bit against the grain of the marketing push against Intel. But curious as to why two boards instead of one board for density. I'm guessing cost and complexity?
 

Frick

I had no idea Gigabyte made server stuff as well. How big are they?
 

HTC

Stupid question: how do they cool the CPUs in these?
 
Stupid question: how do they cool the CPUs in these?
well already answered, right?

"Efficient Power & Cooling
GIGABYTE's H261-Z60 is designed for not only greater compute density but also with better power and cost efficiency in mind. The system architecture features shared cooling and power for the nodes, with a dual fan wall of 8 (4 x 2) easy swap fans and 2 x 2200W redundant PSUs. In addition, the nodes connect directly to the system backplane with GIGABYTE's Direct Board Connection Technology, resulting in less cabling and improved airflow for better cooling efficiency. "

and with a standard chunk-of-metal heatsink, as they always have in low-U server racks
 
Stupid question: how do they cool the CPUs in these?
Similar to BTX cooling. Fans on the front of the machine draw in cool air and push it through the machine; you can cool quite high TDPs in that manner. Look up Dell Optiplex 620 SFF desktops: they were only about 3 inches tall but could cool 140-watt Pentium D ovens with a single 80mm fan. The processors in the server have simple copper heatsinks that rise almost to the top of the chassis, forcing the cold air to flow over them before exiting the server.

These will also typically be put into server rooms, and server rooms are designed specifically for cooling. The front of the server faces the "cold row", where powerful air conditioners keep the temperature around 60F and pump tons of air in, creating positive air pressure into the server. Ours at work have 3 AC units per 5 rows of servers, and each of the 3 units pumps out over 16,000 CFM. The forced cold air keeps heat from building up. Heat is exhausted into the other side of the server rack, where the AC system pulls its return air from to keep air moving. Some low-power servers can run without their fans without overheating due to the sheer amount of cold air being pumped in (found that out the hard way when a server's fan controller blew up over the weekend and we couldn't get parts until the following Wednesday. It ran a little warmer, but never got over 60C. Not bad for dual 6-core Xeons and 24 HDDs.)
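For a rough sense of what that airflow is worth, the standard HVAC sensible-heat approximation Q[BTU/hr] ≈ 1.08 × CFM × ΔT[°F] puts numbers on it. The 20 °F cold/hot aisle delta below is an assumed figure for illustration, not one from the post:

```python
def cooling_capacity_kw(cfm, delta_t_f):
    """Sensible heat removed by an airstream: Q[BTU/hr] = 1.08 * CFM * dT[F]."""
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412.142  # 1 kW ~= 3412.142 BTU/hr

# One 16,000 CFM AC unit with an assumed 20 F cold/hot aisle delta:
per_unit = cooling_capacity_kw(16_000, 20)
print(f"~{per_unit:.0f} kW per unit, ~{3 * per_unit:.0f} kW across all three")
# → ~101 kW per unit, ~304 kW across all three
```

Even with a modest aisle delta, three such units can absorb hundreds of kilowatts of IT load, which is why a few fanless servers barely warm up.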

I had no idea Gigabyte made server stuff as well. How big are they?

Not very big. They don't go for the same market as Dell/HP/Lenovo. Gigabyte servers are more common in smaller businesses where there may be 2 or 3 servers for an entire company.
 
Intel's wet dream
 

uberknob1

Stupid question: how do they cool the CPUs in these?
Ever heard a server fan? Like a jet engine taking off. Servers are usually stored out of the way from users and people in general. With their high CPU and core counts, they usually require a shedload of air being moved to cool them, which generates a lot of noise, but this is an afterthought compared to a high-spec PC, where performance and noise might be paramount. That's not to mention that they probably also have air con pumped into the room where they sit. Performance and reliability being king, noise is an afterthought in a server environment.
 
Wow, that is indeed EPIC. I wonder if there's a quad socket board out there for EPYC? I know it goes a bit against the grain from the marketing push again Intel. But curious as to why two boards instead of one board for density. I'm guessing cost and complexity?
Current-generation Zen-based EPYC CPUs support 1 and 2 sockets per board. That's why two separate boards.
 
Wow, that is indeed EPIC. I wonder if there's a quad socket board out there for EPYC? I know it goes a bit against the grain from the marketing push again Intel. But curious as to why two boards instead of one board for density. I'm guessing cost and complexity?

4P servers are not offered by AMD because the niche is so niche that it doesn't make sense.
I doubt even Intel finds it worth doing, but they currently still do.

If software licensing moved to per-core instead of per-CPU, that would change.
 