Wednesday, May 30th 2012

Super Micro Intros X9 5X GPU Workstation with NVIDIA Maximus Certification

Super Micro Computer, Inc., a global leader in high-performance, high-efficiency server technology and green computing, now offers NVIDIA Maximus technology in its latest high-end, enterprise-class X9 SuperWorkstation (7047GR-TRF), allowing users to simultaneously design, render and simulate on the same workstation, avoiding traditional, time-consuming and costly processing downtime. Supermicro's NVIDIA Maximus certified solution integrates an NVIDIA Quadro series graphics processing unit (GPU) dedicated for design and visualization tasks with four NVIDIA Tesla C2075 co-processors dedicated to handling compute-intensive tasks like simulation—an industry-first configuration of NVIDIA Maximus technology.

This powerful GPU duo delivers scientists, engineers and designers the specialized compute capacity to interact with 3D models in CAD/CAM applications, while simultaneously rendering or outputting complex CAE simulations. This ability to multitask with both compute and graphics-heavy applications together, in real time, on a single workstation dramatically accelerates productivity and allows more opportunities for creative exploration.

"Supermicro's NVIDIA Maximus certified 7047GR-TRF SuperWorkstation opens the door to personal supercomputing for scientific, engineering and entertainment fields, and closes the gap between design and realization," said Wally Liaw, Vice President of Sales, International at Supermicro. "Our solution allows users to free themselves from compute limitations and to challenge their creativity with an unprecedented four Tesla GPUs plus one Quadro GPU in a 4U Tower, more than any other system in this class on the market. With this incredible performance at the desktop, designers can spend more time interacting with complex models and sophisticated simulations and less time waiting, allowing them to deliver results faster to market."

"Supermicro's professional-level SuperWorkstations harness the power of NVIDIA Maximus technology to sharply improve productivity," said Jeff Brown, general manager of the Professional Solutions Group at NVIDIA. "Supermicro is outstanding at integrating NVIDIA GPU technology into their workstations, and the 7047GR-TRF marks only the start of their efforts to incorporate NVIDIA Maximus's power and flexibility."

What sets the 7047GR-TRF apart as an outstanding enterprise-class system and earns it the SuperWorkstation brand is its multitude of high-value features. Fully configured with 4 double-width NVIDIA Tesla GPUs and a Quadro graphics card, the 7047GR-TRF still has single PCI-E 3.0 x8 and PCI-E 2.0 x4 (in x8) slots available for additional high-bandwidth network and high-performance storage expansions. The 7047GR-TRF is built on Supermicro's high-end X9DRG-QF serverboard supporting dual Intel Xeon E5-2600 family processors for ultimate CPU performance. PCI-E 3.0 support offers future-proof expansion and a cost-effective upgrade path to next generation NVIDIA GPUs. For memory intensive applications, this solution accommodates up to 512GB of DDR3 1600MHz Reg. ECC memory in 16x DIMM sockets and massive internal storage capacity that supports up to 8x hot-swap 3.5" HDDs utilizing onboard 2x SATA3 and 8x SATA2 ports. Supporting this advanced technology and maintaining mission-critical uptime are redundant 1620W power supplies with the industry's highest efficiency Platinum Level (94%+) rating, along with multi-zone thermal controlled fans for optimal cooling and additional energy efficiency.
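To put the compute side of that configuration in perspective, the aggregate theoretical peak of the four Tesla co-processors can be sketched with a little arithmetic. This is a rough back-of-the-envelope illustration assuming NVIDIA's published per-card peaks for the Tesla C2075 (roughly 515 GFLOPS double precision, 1030 GFLOPS single precision), figures from the card's public spec sheet rather than from this announcement:

```python
# Back-of-the-envelope aggregate peak for the 7047GR-TRF's four Tesla C2075s.
# Per-card peaks are assumptions from NVIDIA's published C2075 specifications.
TESLA_DP_GFLOPS = 515    # double-precision peak per card
TESLA_SP_GFLOPS = 1030   # single-precision peak per card
NUM_TESLAS = 4

peak_dp = NUM_TESLAS * TESLA_DP_GFLOPS  # aggregate double-precision peak
peak_sp = NUM_TESLAS * TESLA_SP_GFLOPS  # aggregate single-precision peak

print(f"Peak DP: {peak_dp} GFLOPS (~{peak_dp / 1000:.2f} TFLOPS)")
print(f"Peak SP: {peak_sp} GFLOPS (~{peak_sp / 1000:.2f} TFLOPS)")
```

Real-world throughput depends heavily on the workload, but even the double-precision figure puts a small cluster's worth of compute under a desk.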

Supermicro's GPU SuperWorkstations and SuperServers are defining the future of supercomputing for the intersecting fields of science, engineering and digital content creation. The 7047GR-TRF is the first of a line of SuperWorkstations to support NVIDIA Maximus technology. For a complete look at Supermicro's total line of high-performance, high-efficiency server and storage solutions, visit www.supermicro.com or go to www.supermicro.com/Maximus to select a Supermicro NVIDIA Maximus-powered supercomputer.

14 Comments on Super Micro Intros X9 5X GPU Workstation with NVIDIA Maximus Certification

#1
FreedomEclipse
~Technological Technocrat~
I want one of these to power my house!! I want everything to be automated and work off RFID or voice recognition!!
#2
Jizzler
For those interested, the board:

#3
JMccovery
I wonder if Nvidia's Maximus technology allows up to 9-11 GPUs (1 or 2 Quadro + 9 or 10 Tesla) to be used on one of those quad-socket 2011 boards that make use of 160 PCIe lanes...
#4
wishgranter
The problem is that Win7 (and 8) cannot use more than 8 GPUs; it's a problem with addressing memory. Under Linux, though, you can use up to 16 GPUs. I need a "PC" with as many GPUs and as much RAM as possible for photogrammetry calculations of ruined buildings and maps...


by: JMccovery
I wonder if Nvidia's Maximus technology allows up to 9-11 GPUs (1 or 2 Quadro + 9 or 10 Tesla) to be used on one of those quad-socket 2011 boards that make use of 160 PCIe lanes...
#5
sergionography
I wonder why Tahiti isn't in one of these when it has like twice the compute performance of those Teslas; I wonder what AMD is waiting for.
And come to think of it, AMD now owns SeaMicro.
Interesting.
#6
Jizzler
by: sergionography
I wonder why Tahiti isn't in one of these when it has like twice the compute performance of those Teslas; I wonder what AMD is waiting for.
And come to think of it, AMD now owns SeaMicro.
Interesting.
Go for it :D

With 7970s at a quarter the price of those Teslas, you can deck out a 7047GR-TRF for under $10K ;)
#7
Xzibit
by: sergionography
I wonder why Tahiti isn't in one of these when it has like twice the compute performance of those Teslas; I wonder what AMD is waiting for.
And come to think of it, AMD now owns SeaMicro.
Interesting.
I think because, like Nvidia's Quadro and Tesla, it wouldn't be Tahiti but rather a Southern Islands AMD FirePro or FireStream.

Nvidia has left the door wide open for AMD to walk right into the sub-entry-level graphics accelerator market.
#8
St.Alia-Of-The-Knife
by: Xzibit
I think because, like Nvidia's Quadro and Tesla, it wouldn't be Tahiti but rather a Southern Islands AMD FirePro or FireStream.

Nvidia has left the door wide open for AMD to walk right into the sub-entry-level graphics accelerator market.
maybe because AMD lacks CUDA computing
#9
Xzibit
by: St.Alia-Of-The-Knife
maybe because AMD lacks CUDA computing
Remind me when CUDA was the only way to compute on a GPU? :rolleyes:

CUDA is just Nvidia's API that they have been pushing, and it's not smart that the 600 series is slower than their 500 series cards.

If you're in the sub-entry segment, you probably don't have loads of cash or funds to be buying $1K-$5K GPGPUs when you can buy entire systems for that money with multiple GPUs and modest computing power.

With Kepler GeForce they not only stood still, they took a small step back. Why get a 680 or 690 when you can get better compute performance from a 570, and even better performance from the 7800s or 7900s?

As it is right now you can look at it two ways in the sub-entry segment: Nvidia is charging more for less compute performance than the last series, or AMD is charging about $50-$70 for the improved GPU computing performance.
#10
Dippyskoodlez
by: FreedomEclipse
I want one of these to power my house!! I want everything to automated and work off RFID or voice recognition!!
RFID you could do with a small army of Netduinos for $500 :toast:
#11
sergionography
by: Xzibit
I think because, like Nvidia's Quadro and Tesla, it wouldn't be Tahiti but rather a Southern Islands AMD FirePro or FireStream.

Nvidia has left the door wide open for AMD to walk right into the sub-entry-level graphics accelerator market.
Good point, but it still makes me wonder what AMD is waiting for with Tahiti -__-
They keep showing off how good they are at compute when it means nothing as long as it's being sold as a gaming card.

by: St.Alia-Of-The-Knife
maybe because AMD lacks CUDA computing
CUDA seems to be slowly dying, or at least losing ground, as Nvidia has always restricted it to its own GPUs, which made developers hesitant to use it. Now, however, I hear that Nvidia made it open source or something, but I doubt that will change anything as it's too late; OpenCL and OpenGL seem to be covering much more ground.

by: Xzibit
Remind me when CUDA was the only way to compute on a GPU? :rolleyes:

CUDA is just Nvidia's API that they have been pushing, and it's not smart that the 600 series is slower than their 500 series cards.

If you're in the sub-entry segment, you probably don't have loads of cash or funds to be buying $1K-$5K GPGPUs when you can buy entire systems for that money with multiple GPUs and modest computing power.

With Kepler GeForce they not only stood still, they took a small step back. Why get a 680 or 690 when you can get better compute performance from a 570, and even better performance from the 7800s or 7900s?

As it is right now you can look at it two ways in the sub-entry segment: Nvidia is charging more for less compute performance than the last series, or AMD is charging about $50-$70 for the improved GPU computing performance.
Well, that's because GK104 is a gaming card. GK110 will be solely for Tesla as far as I know, and it will have a huge die (550 mm²+) with twice the shaders of GK104 plus double-precision floating-point support, so it will be pretty darn fast.
But it's good to note that AMD has the best of both worlds and succeeded where Nvidia sorta failed: making an architecture good for both compute and graphics (gaming) without being inefficient and power hungry.
Looking at last-gen cards, the GTX 560 Ti had a 360 mm² die and the HD 6970 had a 389 mm² die but performed much better in games; even an HD 6950 either matched or surpassed Nvidia's GTX 560 Ti.
So in a way it looks like GCN is a much better and more balanced Fermi, while Kepler is a much better VLIW4 in terms of what each does best.
But who knows, maybe GK110 will use a modified Kepler architecture that is more compute-capable per core than GK104's Kepler.
Either way, good job AMD for giving Nvidia all the sweet time to catch up XD
#12
Dippyskoodlez
by: sergionography

CUDA seems to be slowly dying, or at least losing ground, as Nvidia has always restricted it to its own GPUs, which made developers hesitant to use it. Now, however, I hear that Nvidia made it open source or something, but I doubt that will change anything as it's too late; OpenCL and OpenGL seem to be covering much more ground.
Not really "dying"; it's just not being added to things left and right anymore, because the stuff that can make use of it already has it. Market saturation.
#13
sergionography
by: Dippyskoodlez
Not really "dying"; it's just not being added to things left and right anymore, because the stuff that can make use of it already has it. Market saturation.
Yes, but now there are other alternatives too. Look at Adobe Photoshop: it always had Nvidia acceleration (I'm assuming CUDA acceleration), but in CS6 they switched to OpenCL.
#14
Dippyskoodlez
by: sergionography
Yes, but now there are other alternatives too. Look at Adobe Photoshop: it always had Nvidia acceleration (I'm assuming CUDA acceleration), but in CS6 they switched to OpenCL.
Because maintaining both a CUDA and an OpenCL codebase doesn't make sense if it isn't really relied upon for speed. F@H, on the other hand, would see a benefit from maintaining both.