
AMD Introduces the FirePro S10000 Server Graphics Card

HumanSmoke

Joined
Sep 7, 2011
Messages
2,785 (0.61/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
Nah, didn't think so.

Hey, smarty pants, all those cards still use 6-pin and/or 8-pin aux connectors.
Considering the PCI-SIG rates the PCI-E slot for a nominal 75W power delivery, where the hell else do you think the board draws its power from?

You think an SC cluster or data centre has ATX PSUs?

Maybe you should watch this and point out where the PSUs are, or maybe tell these guys they're doing it wrong.
My point is that 225W is an implied specification
Which is already what I've said... and much earlier than you did, so why the bleating? Oh, I know why... you just need to troll.
Nothing stopping someone from putting a higher TDP card there other than dated hardware
Nothing at all, except possibly changing the cooling and power cabling - and no, I don't mean just the individual 6- and 8-pin PCI-E connectors; I mean the main power conduits from the cabinets to the power source. Then of course, if a cabinet is being refitted for the S10000, you would have to re-cable all 42 rack units in a cabinet for 2 x 8-pin instead of the nominal 6-pin + 8-pin, at four cables per unit multiplied by the number of boards per unit, as well as the main power conduits... then of course you'd have to upgrade the cooling system, which for most big iron is water cooling and refrigeration.
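Back-of-envelope, the re-cabling scale works out roughly as follows (a minimal sketch; the 42 comes from a standard full-height 42U cabinet, while the boards-per-unit and connectors-per-board defaults are illustrative assumptions, not figures from the post):

```python
# Count the aux power cables to re-run for one cabinet refit.
# Two aux connectors per board either way (6+8-pin before, 2x8-pin after),
# so the cable *count* stays the same; it's the connector type that changes.
def refit_cable_count(rack_units=42, boards_per_unit=1, connectors_per_board=2):
    """Total aux power cables for one cabinet under the given assumptions."""
    return rack_units * boards_per_unit * connectors_per_board

print(refit_cable_count())                   # 84 cables at 1 board per U
print(refit_cable_count(boards_per_unit=2))  # 168
```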
 
Joined
Apr 30, 2012
Messages
3,881 (0.89/day)
Considering the PCI-SIG rates the PCI-E slot for a nominal 75W power delivery, where the hell else do you think the board draws its power from?

That's a PCIe Gen 2.0 slot, in case you haven't noticed

What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility. So yes, if you get more recent parts you get more options. I'm sure you'll see them in the G8 series of that HP server you linked. The lower numerical versions already have updated motherboards with Gen 3 slots added. So there is one possibility.

The only ones that can currently take advantage of it are Intel and AMD cards, since they are PCIe Gen 3.0 spec. All of Nvidia's K20X & K20 are PCIe Gen 2.0 spec.

Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.

I see plural "specifications". I'd like to see the information you're referring to for myself, that's all.

HumanSmoke said:
Nothing at all, except possibly changing the cooling and power cabling - and no, I don't mean just the individual 6- and 8-pin PCI-E connectors; I mean the main power conduits from the cabinets to the power source. Then of course, if a cabinet is being refitted for the S10000, you would have to re-cable all 42 rack units in a cabinet for 2 x 8-pin instead of the nominal 6-pin + 8-pin, at four cables per unit multiplied by the number of boards per unit, as well as the main power conduits... then of course you'd have to upgrade the cooling system, which for most big iron is water cooling and refrigeration.

Obviously something taken into consideration when these machines were built.

So how about that specification link ? ;)
 
Joined
Apr 7, 2011
Messages
1,380 (0.29/day)
System Name Desktop
Processor Intel Xeon E5-1680v2
Motherboard ASUS Sabertooth X79
Cooling Intel AIO
Memory 8x4GB DDR3 1866MHz
Video Card(s) EVGA GTX 970 SC
Storage Crucial MX500 1TB + 2x WD RE 4TB HDD
Display(s) HP ZR24w
Case Fractal Define XL Black
Audio Device(s) Schiit Modi Uber/Sony CDP-XA20ES/Pioneer CT-656>Sony TA-F630ESD>Sennheiser HD600
Power Supply Corsair HX850
Mouse Logitech G603
Keyboard Logitech G613
Software Windows 10 Pro x64
What the above sentence actually says is that server racks in general are designed with a 225W board in mind.

One thing to consider here is that these cards go into custom-designed HPC systems, where the standard "server" design is less common.
You have custom cooling, custom power delivery, etc. You can see that if you look at Cray's HPCs...
 
HumanSmoke
One thing to consider here is that these cards go into custom-designed HPC systems, where the standard "server" design is less common.
You have custom cooling, custom power delivery, etc. You can see that if you look at Cray's HPCs...

Yeah, I figured that SANAM, for instance, is a new build from Adtech (the S10000 supercomputer), and all new builds would be pretty straightforward to put together (once you know the requirements) regardless of fit-out - they all seem based on a modular approach, whether compute cluster or data centre. My thinking was more along the lines of refitting older systems with newer, more capable components - there are still a lot of big clusters running older GPGPU hardware, for instance - and I would assume a refit presents its own problems, different from a ground-up new build.
Refitting in general would be a considerable initial expenditure. Titan, for instance, retained the bulk of the hardware from Jaguar, but the upgrade still took a year (Oct 2011-Nov 2012) and cost $96 million. The principal difference seems to be an upgrade of power delivery and swapping out the 225W TDP Fermi boards for the K20X (235W); the CPU side of the compute node remains untouched.
 
Joined
Apr 7, 2011
Titan, for instance, retained the bulk of the hardware from Jaguar, but the upgrade still took a year (Oct 2011-Nov 2012) and cost $96 million. The principal difference seems to be an upgrade of power delivery and swapping out the 225W TDP Fermi boards for the K20X (235W); the CPU side of the compute node remains untouched.

The first phase was CPU upgrades (new Opterons), interconnects, and memory (600TB). After that they had to wait for the GPUs.
And IIRC Jaguar didn't have any GPUs before.
 
HumanSmoke
The first phase was CPU upgrades (new Opterons), interconnects, and memory (600TB). After that they had to wait for the GPUs.
Thanks. I'd forgotten about the 16GB RAM increase per node. Weren't the "old" CPUs (Opteron 2435) reallocated to what was ORNL's old XT4 partition to upgrade it to XT5 specification (Jaguar being an 18,688-node XT5 + 7,832-node XT4... the XT5 being upgraded to Titan (XK7) and the XT4 to XT5) and Kraken's upgrade (ORNL + University of Tennessee)? The partition is mentioned in the Jaguar wiki page, but not Titan's. With the reallocation, I was under the impression that ORNL's Opteron 6274s were basically overall additions to capacity at ORNL.
And IIRC Jaguar didn't have any GPUs before.
Actually a physical impossibility, I would have thought. CPU-only clusters still need GPUs for visualization*, although the Fermis were added when the CPU upgrade took place.
Phase I of this upgrade also populated 960 of these XK6 nodes with NVIDIA Fermi GPUs.
[source]

*IIRC, the Intel Xeon + Xeon Phi Stampede also uses Tesla K20X for the same reason
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
40,435 (6.61/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
Learn to be respectful to members of these forums.

Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.

If a power limiter affected stated performance, you'd have an argument, but as the case stands, you are making excuses, not a valid point. And just for the record, the gaming charts don't have a direct bearing on server/WS/HPC parts - as I mentioned before, you can't get a true apples-to-apples comparison between gaming and pro parts; all they can do is provide an inkling into the efficiency of the GPU. If you want to use a gaming-environment argument, why don't you take it to a gaming card thread, because it is nonsensical to apply it to co-processors.



Because volt modding is (of course) the first requirement for server co-processors [/sarcasm]
Take your bs to a gaming thread.
 
Joined
Apr 7, 2011
Phase I of this upgrade also populated 960 of these XK6 nodes with NVIDIA Fermi GPUs.

Yeah, but that was already the phase 1 upgrade to Titan; Jaguar itself didn't have them (maybe I didn't word my post very well, sorry).
 
HumanSmoke
Learn to be respectful to members of these forums.

Stay on topic and it shouldn't be a problem. If you can tell me how moaning about a lack of volt-modding opportunity on Nvidia cards has any relevance to pro graphics - workstation or GPGPU - I'll gladly issue an apology... until that happens, I view it as a cheap trolling attempt, not particularly apropos of anything regarding the hardware being discussed.
Yeah, but that was already the phase 1 upgrade to Titan; Jaguar itself didn't have them (maybe I didn't word my post very well, sorry).

That's probably my confusion, I think. I tend to think of Jaguar and Titan as the same beast, and didn't make the distinction regarding timeline. My bad.
 

eidairaman1
Stay on topic and it shouldn't be a problem. If you can tell me how moaning about a lack of volt-modding opportunity on Nvidia cards has any relevance to pro graphics - workstation or GPGPU - I'll gladly issue an apology.


That's probably my confusion, I think. I tend to think of Jaguar and Titan as the same beast, and didn't make the distinction regarding timeline. My bad.

I was stating that they have tighter control of voltages across the board, is all.
 
HumanSmoke
I was stating that they have tighter control of voltages across the board, is all.
Not quite...
Ya, NV forced any voltage mods out (EVBot being the biggest example of this)

When have volt mods ever been an issue with server co-processors? How does Nvidia locking down voltages on desktop Kepler have any relevance to Tesla or Quadro boards?
Have you ever heard of people who overclock a math co-processor? Kind of defeats the purpose of using ECC RAM and placing an emphasis on FP64, don't you think?
Learn to be respectful to members of these forums.
Taking your lead?...
don't be a jackass
:shadedshu
 

eidairaman1
Not quite...


When have volt mods ever been an issue with server co-processors? How does Nvidia locking down voltages on desktop Kepler have any relevance to Tesla or Quadro boards?
Have you ever heard of people who overclock a math co-processor? Kind of defeats the purpose of using ECC RAM and placing an emphasis on FP64, don't you think?

:shadedshu:rolleyes:

I find it funny you keep on arguing, but anyway, it was in relation to how those parts can't reach the maximum voltage level because of precautions. I know certain models of Quadro and FirePro are for mission-critical use, just as much as Itanium/SPARC etc. are. I do realize that OC can cause ECC to corrupt the data. But anyway, I'm just saying be respectful of the users here, dude.
 
Joined
Nov 27, 2007
Messages
28,754 (4.82/day)
Location
Miami, Florida
Processor AMD Ryzen 7 7800X3D
Motherboard ASUS ROG Crosshair X670E Hero
Cooling EK FLT 240 DDC, x2 Black Ice Nemesis 360GTX, x1 EK-Quantum Surface P240, Phanteks D30-120 fans x9
Memory G.SKILL Trident Z5 Neo RGB Series (AMD Expo) DDR5 RAM 32GB (2x16GB) 6000MT/s CL30
Video Card(s) ZOTAC Gaming GeForce RTX 4090 AMP Extreme AIRO
Storage Samsung Pro 980 2TB NVMe (OS and Games) // WD Black 10TB HDD (Storage)
Display(s) Samsung 49" Ultrawide Gaming Monitor
Case Phanteks NV7
Power Supply ASUS Rog Thor 1200 Certified 1200W
Software Windows 11 64 Bit Home Edition
Back on track fellas, let's keep this thread rolling clean.
 
HumanSmoke
TDP doesn't stand for the 'REAL' power consumption,
and both companies do not measure TDP in the same way.
That is my point. Hope you understand.
I understand what you're saying, which is basically that the printed specification doesn't match real-world power usage. A fact that I think we are in agreement on. My point is that the printed specification for professional graphics and arithmetic co-processors is a guideline only, and that regardless of the stated number, I believe that one architecture is favoured over another with regard to performance/watt.

HPCWire is of the same opinion- that is to say that Nvidia's GK110 has superior efficiency to that of the S10000 and Xeon Phi when judged on their own performance. Moreover, they believe that Beacon (Xeon Phi) and SANAM (S10000) only sit at the top of the Green500 list because of their asymmetrical configuration (very low CPU to GPU ratio)- something I also noted earlier.
(Source: HPCWire podcast link)
That's a PCIe Gen 2.0 slot, in case you haven't noticed
What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility.
225W through a PCI-E slot? Whatever. :roll: (150W is max for a PCI-E slot. Join up and learn something.)
All of Nvidia's K20X & K20 are PCIe Gen 2.0 spec.
Incorrect. K20/K20X are at present limited to PCI-E 2.0 because of the AMD Opteron CPUs they are paired with (which of course are PCIe 2.0 limited). Validation for Xeon E5 (which is PCIe 3.0 capable) means GK110 is a PCIe 3.0 board... in much the same way that all the other Kepler parts are (K5000 and K10, for example). In much the same vein, you can't validate an HD 7970 or GTX 680 for PCI-E 3.0 operation on an AMD motherboard/CPU - all validation for AMD's HD 7000 series and Kepler was accomplished on Intel hardware.
 
Joined
Apr 30, 2012
225W through a PCI-E slot? Whatever. :roll: (150W is max for a PCI-E slot. Join up and learn something.)

Wow, you are grasping at straws. I didn't specify power output, but if it makes you feel good, go right ahead. :laugh:

Incorrect. K20/K20X are at present limited to PCI-E 2.0 because of the AMD Opteron CPUs they are paired with (which of course are PCIe 2.0 limited). Validation for Xeon E5 (which is PCIe 3.0 capable) means GK110 is a PCIe 3.0 board... in much the same way that all the other Kepler parts are (K5000 and K10, for example). In much the same vein, you can't validate an HD 7970 or GTX 680 for PCI-E 3.0 operation on an AMD motherboard/CPU - all validation for AMD's HD 7000 series and Kepler was accomplished on Intel hardware.

Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot. :laugh:

Nvidia GPU Accelerator Board Specifications
Tesla K20X
Tesla K20

PCI Express Gen2 ×16 system interface

How many times is it now?
It seems you'll do and make up anything to cheerlead for Nvidia, even when their own website proves you wrong. I hope they are paying you, because if they aren't, it's sad.
:shadedshu


Who's the troll now? :D

HumanSmoke said:
Verifiable numbers or STFU.

:laugh:

P.S.
-Still waiting on that 225w server specification link. ;)
 
HumanSmoke
Wow, you are grasping at straws. I didn't specify power output, but if it makes you feel good, go right ahead. :laugh:
Hey, you're the one who thinks a 225W card can draw all its power from the PCIe slot :slap:
Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot. :laugh:
I'm pretty sure GK110 will be validated for PCI-E 3.0, just as every other Kepler GPU before it was. The validation process is (like X79) an Intel issue. Pity you can't get PCI-E 3.0 validation on an AMD chipset; it would make life simpler. Heise have already clarified the validation process for K20/K20X:
Nvidia has conservatively specified both compute cards for PCIe 2.0 only, because some boards still had problems with Xeon E5. Nvidia told heise online that the hardware supports PCIe 3.0, but the card BIOS sets the cards to PCIe 2.0. OEMs are, however, free to ship K20 cards with PCIe 3.0 enabled in OEM systems. (via Google Translate)
-Still waiting on that 225w server specification link. ;)
And I've already explained to you what I previously wrote:
or in the case of servers/HPC, staying within the rack specification (more often than not*) of 225W per board... What I mean, and Anand for that matter, is that server racks are more often than not optimized for 225W per PCIe unit - for cooling, power delivery, and cabling. What's so hard to understand?
Ryan Smith-Anandtech said:
K20X will be NVIDIA’s leading Tesla K20 product, offering the best performance at the highest power consumption (235W). K20 meanwhile will be cheaper, a bit slower, and perhaps most importantly lower power at 225W. On that note, despite the fact that the difference is all of 10W, 225W is a very important cutoff in the HPC space – many servers and chassis are designed around that being their maximum TDP for PCIe cards
Now, if you still plan on baiting, I'll see what I can do about reporting your posting. You've already been told exactly what the posting meant, and you still persevere in posting juvenile rejoinders based on faulty semantics (* how can "more often than not" be construed as a descriptor for an absolute specification for the industry??? :shadedshu ) and an inability to parse a simple compound sentence.

Now, if you don't think that server racks largely cater for a 225W TDP-specced board, I suggest you furnish some proof to the contrary (hey, you could find all the vendors who spec their blades for 375W TDP boards for extra credit)... c'mon, make a name for yourself, prove Ryan Smith at Anandtech wrong. :rolleyes: While you're at it, try to find where I made any reference to 225W being a server specification for add-in boards. The only mention I made was of boards with a 225W specification being generally standardized for server racks.

Y'know, never mind. You made my ignore list.
 
Joined
Apr 30, 2012
You're something else, for sure :)

Now, if you still plan on baiting, I'll see what I can do about reporting your posting. You've already been told exactly what the posting meant, and you still persevere in posting juvenile rejoinders based on faulty semantics (* how can "more often than not" be construed as a descriptor for an absolute specification for the industry??? :shadedshu ) and an inability to parse a simple compound sentence.

Be careful what you wish for. Moderators might find out that the majority of your posts outside of Nvidia-based threads are spent defaming the competition and others with different views than yours.

Do you only read what you want?

Servers and HPC racks in general are built around a 225W per board specification. Example HP , and from Anandtech...

Makes no difference. The point is performance/watt, or in the case of servers/HPC, staying within the rack specification (more often than not) of 225W per board.

You just can't own up to the fact that there is no specification, and you implied that there is one.

I was just asking you to provide a link to such a specification, since if there was one it would be available to reference from various credible sources.

No link, no such thing.

Hey, you're the one who thinks a 225W card can draw all its power from the PCIe slot :slap:

Really? Still? Even after you included this in the same post?

you still persevere in posting juvenile rejoinders based on faulty semantics
an inability to parse a simple compound sentence.

Let me remind you of previous posts I have made in this thread, just to enlighten you, since it seems you only read what you want :D

PCIe Gen 2 = 75W
(2) 6-pin = 150W (75W each)
Total = 225W
Not a server specification
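The arithmetic above can be sketched as a quick calculation (a minimal sketch; the per-source wattages are the figures being argued over in this thread: 75W from the slot, 75W per 6-pin, 150W per 8-pin aux connector):

```python
# PCIe board power budget: slot limit plus auxiliary connector limits.
SLOT_WATTS = 75
AUX_WATTS = {"6-pin": 75, "8-pin": 150}

def board_power_budget(aux_connectors):
    """Maximum spec power (watts) for a board with the given aux connectors."""
    return SLOT_WATTS + sum(AUX_WATTS[c] for c in aux_connectors)

print(board_power_budget(["6-pin", "6-pin"]))  # 225 -> the 225W cutoff
print(board_power_budget(["6-pin", "8-pin"]))  # 300
print(board_power_budget(["8-pin", "8-pin"]))  # 375 -> e.g. the S10000
```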

Hey, smarty pants, all those cards still use 6-pin and/or 8-pin aux connectors.

What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility.

Wow again. You might as well have said, look, a PCIe 2.0 card can fit in a PCIe 3.0 slot.

Nvidia GPU Accelerator Board Specifications
Tesla K20X
Tesla K20

Hmm... I reference PCIe Gen 2 power output + 6-pin power, and mention there is a power difference from PCIe 2.0 to 2.1 & 3.0. Oh yeah, I'm also linking to Nvidia's own website, with specifications of two cards and diagrams of aux connectors and how they should be used.

And your conclusion is that I thought the PCIe slot was the sole source of power :laugh:

Like I said several times before, follow your own advice, because you're something else.

I'm pretty sure GK110 will be validated for PCI-E 3.0, just as every other Kepler GPU before it was. The validation process is (like X79) an Intel issue. Pity you can't get PCI-E 3.0 validation on an AMD chipset; it would make life simpler. Heise have already clarified the validation process for K20/K20X.

Speculation is fine, but if I have to choose between your speculation and what Nvidia has posted on their specification sheets,

I'll believe Nvidia :laugh:

Now, if you don't think that server racks largely cater for a 225W TDP-specced board, I suggest you furnish some proof to the contrary (hey, you could find all the vendors who spec their blades for 375W TDP boards for extra credit)... c'mon, make a name for yourself, prove Ryan Smith at Anandtech wrong. :rolleyes: While you're at it, try to find where I made any reference to 225W being a server specification for add-in boards. The only mention I made was of boards with a 225W specification being generally standardized for server racks.

Classic troll move :toast: I can't provide proof of what I say, so why don't you disprove it. :laugh:

There is more than just one company. It's a shame you spend all your time just trolling for Nvidia.

You shouldn't get mad when you're wrong. When you're wrong, you're wrong. Move on; don't make up stuff or lash out at people who pointed out something you didn't like. Provide credible links to back up your views.

Being hostile towards others with a different view than yours is no way to enhance the community in this forum. There's no reason to jump into non-Nvidia threads and start disparaging them or their posters because you didn't like the content, or because someone doesn't like the same company you do as much as you do.



Think I'll go have me some hot cocoa. :toast:
 

KooKKiK

New Member
Joined
Sep 12, 2011
Messages
31 (0.01/day)
I understand what you're saying, which is basically that the printed specification doesn't match real-world power usage. A fact that I think we are in agreement on. My point is that the printed specification for professional graphics and arithmetic co-processors is a guideline only, and that regardless of the stated number, I believe that one architecture is favoured over another with regard to performance/watt.

HPCWire is of the same opinion- that is to say that Nvidia's GK110 has superior efficiency to that of the S10000 and Xeon Phi when judged on their own performance. Moreover, they believe that Beacon (Xeon Phi) and SANAM (S10000) only sit at the top of the Green500 list because of their asymmetrical configuration (very low CPU to GPU ratio)- something I also noted earlier.

OK, show me the real power consumption test and I will believe you.


Not that old and completely wrong argument repeating again. :banghead:

Board (TDP)......Single precision................Double precision
S9000 (225W).....3.23 TFLOPS...14.36 GFLOPS/W...0.81 TFLOPS...3.58 GFLOPS/W
S10000 (375W)....5.91 TFLOPS...15.76 GFLOPS/W...1.48 TFLOPS...3.95 GFLOPS/W
K10 (225W).......4.85 TFLOPS...21.56 GFLOPS/W...0.19 TFLOPS...0.84 GFLOPS/W (negligible)
K20 (225W).......3.52 TFLOPS...15.64 GFLOPS/W...1.17 TFLOPS...5.20 GFLOPS/W
K20X (235W)......3.95 TFLOPS...16.81 GFLOPS/W...1.31 TFLOPS...5.57 GFLOPS/W
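The perf-per-watt columns follow directly from the quoted TDP and peak-throughput figures (spec-sheet numbers, not measured draw); a quick sanity check:

```python
# Recompute GFLOPS-per-watt from board TDP and peak throughput.
# name: (TDP watts, SP peak TFLOPS, DP peak TFLOPS) -- figures quoted above.
boards = {
    "S9000":  (225, 3.23, 0.81),
    "S10000": (375, 5.91, 1.48),
    "K10":    (225, 4.85, 0.19),
    "K20":    (225, 3.52, 1.17),
    "K20X":   (235, 3.95, 1.31),
}

for name, (tdp, sp_tflops, dp_tflops) in boards.items():
    sp_eff = sp_tflops * 1000 / tdp  # single-precision GFLOPS per watt
    dp_eff = dp_tflops * 1000 / tdp  # double-precision GFLOPS per watt
    print(f"{name:7s} SP {sp_eff:6.2f} GFLOPS/W   DP {dp_eff:5.2f} GFLOPS/W")
```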
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
18,914 (2.86/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + some headphones, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
VR HMD Acer Mixed Reality Headset
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
I've actually read the entire thread and it feels like you're not talking (typing? tylking?) to each other but over each other. It's quite funny actually. :laugh:
 
HumanSmoke
OK, show me the real power consumption test and I will believe you.
Sure - here you go. Southern Islands FirePro vs Kepler Quadro.
 