
AMD Introduces the FirePro S10000 Server Graphics Card

Discussion in 'News' started by Cristian_25H, Nov 12, 2012.

  1. HumanSmoke
    Nah, didn't think so.

    Considering the PCI-SIG rates the PCI-E slot for a nominal 75W power delivery, where the hell else do you think the board draws its power from?

    Do you think an SC cluster or data centre has ATX PSUs?

    Maybe you should watch this and point out where the PSUs are, or maybe tell these guys they're doing it wrong.
    Which is what I've already said... and much earlier than you did, so why the bleating? Oh, I know why... you just need to troll.
    Nothing at all, except possibly changing the cooling and power cabling - and no, I don't mean just the individual 6- and 8-pin PCI-E connectors; I mean the main power conduits from the cabinets to the power source. Then, if a cabinet is being refitted for the S10000, you would have to re-cable all 42 racks in a cabinet for 2 x 8-pin instead of the nominal 6-pin + 8-pin, at four cables per rack multiplied by the number of boards per rack, as well as the main power conduits. Then of course you'd have to upgrade the cooling system, which for most big iron is water cooling and refrigeration.
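
    A rough power-budget sketch (Python) puts those connector figures in context; the 75 W slot, 75 W 6-pin and 150 W 8-pin values are the commonly cited PCIe CEM limits, assumed here for illustration rather than quoted from this post:

        # Rough PCIe board power budget from slot plus auxiliary connectors.
        # All limits below are assumed, commonly cited CEM values.
        SLOT_W      = 75    # x16 slot delivery
        SIX_PIN_W   = 75    # 6-pin auxiliary connector
        EIGHT_PIN_W = 150   # 8-pin auxiliary connector

        def board_budget(aux_connectors):
            """Total power available to a board given its auxiliary connectors."""
            return SLOT_W + sum(aux_connectors)

        print(board_budget([SIX_PIN_W, EIGHT_PIN_W]))    # 300 W - nominal 6-pin + 8-pin board
        print(board_budget([EIGHT_PIN_W, EIGHT_PIN_W]))  # 375 W - the S10000's 2 x 8-pin layout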
     
    Last edited: Nov 17, 2012
  2. Xzibit
    That's a PCIe Gen 2.0 slot, in case you haven't noticed.

    What's one of the differences between PCIe Gen 2.0 and 2.1/3.0? More power flexibility. So yes, if you get more recent parts you get more options. I'm sure you'll see them in the G8 series of that HP server you linked. The lower-numbered models already have updated motherboards with Gen 3 slots added. So there is one possibility.

    The only ones that can currently take advantage of it are Intel and AMD cards, since they are PCIe Gen 3.0 spec. Nvidia's K20X and K20 are both PCIe Gen 2.0 spec.

    I see plurals and "specifications". I'd like to see the information you're referring to for myself, that's all.

    Obviously something taken into consideration when these machines were built.

    So how about that specification link? ;)
     
    Last edited: Nov 17, 2012
  3. repman244
    One thing to consider here is that these cards go into custom-designed HPC systems where the standard "server" design is less common.
    You have custom cooling, custom power delivery, etc. You can see that if you look at Cray's HPCs...
     
  4. HumanSmoke
    Yeah, I figured that SANAM, for instance, is a new build from Adtech (the S10000 supercomputer), and all new builds would be pretty straightforward to put together (once you know the requirements) regardless of fit-out - they all seem based on a modular approach, whether they be compute cluster or data center. My thinking was more along the lines of refitting older systems with newer, more competent components - there are still a lot of big clusters running older GPGPU hardware, for instance - and I would assume a refit presents its own problems, different from a ground-up new build.
    Refitting in general would be a considerable initial expenditure. Titan, for instance, retained the bulk of the hardware from Jaguar, but the upgrade still took a year (Oct 2011 - Nov 2012) and cost $96 million. The principal difference seems to be an upgrade of power delivery and swapping out Fermi 225W TDP boards for K20X (235W); the CPU side of the compute node remains untouched.
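
    As a rough back-of-the-envelope check of that swap (a sketch only; the 225 W and 235 W TDPs are the figures quoted above, the 18,688-node XK7 count is mentioned later in the thread, and one GPU per node is assumed):

        # Rough extra power draw from swapping Fermi boards for K20X across the machine.
        # TDPs as quoted above; node count assumed at 18,688 XK7 nodes, one GPU each.
        fermi_tdp_w = 225
        k20x_tdp_w = 235
        nodes = 18688

        delta_per_node_w = k20x_tdp_w - fermi_tdp_w       # 10 W per node
        total_delta_kw = delta_per_node_w * nodes / 1000  # ~187 kW machine-wide

        print(f"{delta_per_node_w} W per node, ~{total_delta_kw:.0f} kW total (before cooling overhead)")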
     
  5. repman244
    The first phase was CPU upgrades (new Opterons), interconnects, and memory (600 TB). After that they had to wait for the GPUs.
    And IIRC Jaguar didn't have any GPUs before.
     
  6. HumanSmoke
    Thanks. I'd forgotten about the 16GB RAM increase per node. Weren't the "old" CPUs (Opteron 2435) reallocated to what was ORNL's old XT4 partition to upgrade it to XT5 specification (Jaguar being an 18,688-node XT5 + 7,832-node XT4, with the XT5 upgraded to Titan (XK7) and the XT4 to XT5) and to Kraken's upgrade (ORNL + University of Tennessee)? The partition is mentioned on the Jaguar wiki page, but not Titan's. With the reallocation, I was under the impression that ORNL's Opteron 6274s were basically an overall addition to capacity at ORNL.
    Actually a physical impossibility, I would have thought. CPU-only clusters still need GPUs for visualization*, although the Fermis were added when the CPU upgrade took place.
    [source]

    *IIRC, the Intel Xeon + Xeon Phi Stampede also uses Tesla K20X for the same reason.
     
  7. eidairaman1
    Learn to be respectful to members of these forums.

     
  8. repman244
    Yeah, but that was already the phase 1 upgrade to Titan; Jaguar itself didn't have them (maybe I didn't word my post very well, sorry).
     
    HumanSmoke says thanks.
  9. HumanSmoke
    Stay on topic and it shouldn't be a problem. If you can tell me how moaning about a lack of volt-modding opportunity on Nvidia cards has any relevance to pro graphics (workstation or GPGPU), I'll gladly issue an apology... until that happens I view it as a cheap trolling attempt, not particularly apropos of anything regarding the hardware being discussed.
    That's probably my confusion, I think. I tend to think of Jaguar and Titan as the same beast, and didn't make the differentiation regarding timeline. My bad.
     
    Last edited: Nov 17, 2012
  10. eidairaman1
    I was stating that they have tighter control of voltages across the board, is all.
     
  11. HumanSmoke
    Not quite...
    When have volt mods ever been an issue with server co-processors? How does Nvidia locking down voltages on desktop Kepler have any relevance to Tesla or Quadro boards?
    Have you ever heard of people who overclock a math co-processor? Kind of defeats the purpose of using ECC RAM and placing an emphasis on FP64, don't ya think?
    Taking your lead?...
    :shadedshu
     
    Last edited: Nov 17, 2012
  12. eidairaman1
    :shadedshu:rolleyes:

    I find it funny that you keep on arguing, but anyway, it was in relation to how those parts can't reach the maximum voltage level because of precautions. I know certain models of Quadro and FirePro are for mission-critical use, just as much as Itanium/SPARC etc. are. I do realize that overclocking can cause ECC to corrupt the data. But anyway, I'm just saying be respectful of the users here, dude.
     
  13. Chicken Patty (WCG Moderator, Staff Member)
    Back on track fellas, let's keep this thread rolling clean.
     
  14. HumanSmoke
    I understand what you're saying, which is basically that the printed specification doesn't match real-world power usage - a fact I think we are in agreement on. My point is that the printed specification for professional graphics and arithmetic co-processors is a guideline only, and that regardless of the stated number, I believe one architecture is favoured over another with regard to performance per watt.

    HPCWire is of the same opinion - that is to say, Nvidia's GK110 has superior efficiency to the S10000 and Xeon Phi when judged on their own performance. Moreover, they believe that Beacon (Xeon Phi) and SANAM (S10000) only sit at the top of the Green500 list because of their asymmetrical configuration (a very low CPU-to-GPU ratio) - something I also noted earlier.
    (Source: HPCWire podcast)
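
    A minimal sketch of how that asymmetry shows up in a Green500-style figure (efficiency = Linpack Rmax / total power); all per-device numbers below are illustrative assumptions, not measured values from either system:

        # Illustrative only: why a low CPU-to-GPU ratio lifts a system's FLOPS/W figure.
        # Per-device GFLOPS and watts are assumed values, not taken from the Green500 list.
        def system_efficiency(gpus, cpus,
                              gpu_gflops=1000.0, cpu_gflops=100.0,
                              gpu_watts=235.0, cpu_watts=115.0):
            rmax = gpus * gpu_gflops + cpus * cpu_gflops
            power = gpus * gpu_watts + cpus * cpu_watts
            return rmax / power  # GFLOPS per watt

        print(system_efficiency(gpus=2, cpus=2))  # balanced node: ~3.1 GFLOPS/W
        print(system_efficiency(gpus=4, cpus=1))  # GPU-heavy node: ~3.9 GFLOPS/W
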
    225W through a PCI-E slot? Whatever. :roll: (150W is the max for a PCI-E slot. Join up and learn something.)
    Incorrect. K20/K20X are at present limited to PCI-E 2.0 because of the AMD Opteron CPUs they are paired with (which of course are PCIe 2.0 limited). Validation for Xeon E5 (which is PCIe 3.0 capable) means GK110 is a PCIe 3.0 board... in much the same way that all the other Kepler parts are (K5000 and K10, for example). In much the same vein, you can't validate an HD 7970 or GTX 680 for PCI-E 3.0 operation on an AMD motherboard/CPU - all validation for AMD's HD 7000 series and Kepler was accomplished on Intel hardware.
     
  15. Xzibit
    Wow, you are grasping at straws. I didn't specify power output, but if it makes you feel good, go right ahead. :laugh:

    Wow again. You might as well have said "look, a PCIe 2.0 card can fit in a PCIe 3.0 slot". :laugh:

    Nvidia GPU Accelerator Board Specifications
    Tesla K20X
    Tesla K20

    How many times is it now?
    It seems you'll do and make up anything to cheerlead for Nvidia, even when it's their own website proving you wrong. I hope they are paying you, because if they aren't, it's sad.
    :shadedshu


    Who's the troll now? :D

    :laugh:

    P.S.
    - Still waiting on that 225W server specification link. ;)
     
    eidairaman1 says thanks.
  16. HumanSmoke
    Hey, you're the one who thinks a 225W card can draw all its power from the PCIe slot. :slap:
    I'm pretty sure GK110 will be validated for PCI-E 3.0, just as every other Kepler GPU before it was. The validation process is (like X79) an Intel issue. Pity you can't get PCI-E 3.0 validation on an AMD chipset; it would make life simpler. Heise have already clarified the validation process for K20/K20X.
    And I've already explained to you what I previously wrote.
    Now, if you still plan on baiting, I'll see what I can do about reporting your posting. You've already been told exactly what the posting meant, and you still persevere in posting juvenile rejoinders based on faulty semantics (how can "more often than not" be construed as a descriptor for an absolute industry specification? :shadedshu) and an inability to parse a simple compound sentence.

    Now, if you don't think that server racks largely cater to 225W-TDP-specced boards, I suggest you furnish some proof to the contrary (hey, you could find all the vendors who spec their blades for 375W TDP boards, for extra credit)... c'mon, make a name for yourself, prove Ryan Smith at Anandtech wrong. :rolleyes: While you're at it, try to find where I made any reference to 225W being a server specification for add-in boards. The only mention I made was regarding boards with a 225W specification being generally standardized for server racks.

    Y'know, never mind. You've made my ignore list.
     
    Last edited: Nov 18, 2012
  17. Xzibit
    You're something else, for sure. :)

    Be careful what you wish for. The moderators might find out that the majority of your posts outside of Nvidia-based threads are spent defaming the competition and others with different views than yours.

    Do you only read what you want?

    You just can't own up to the fact that there is no such specification, even though you implied there is one.

    I was just asking you to provide a link to such a specification, since if there were one it would be available to reference from various credible sources.

    No link, no such thing.

    Really? Still? Even after you included this in the same post?

    Let me remind you of previous posts I have made in this thread, just to enlighten you, since it seems you only read what you want. :D

    Hmm... I reference PCIe Gen 2 power output + 6-pin power and mention there is a power difference from PCIe 2.0 to 2.1 & 3.0. Oh yeah, I'm also linking to Nvidia's own website, with specifications for two cards and diagrams of aux connectors and how they should be used.

    And your conclusion is that I thought the PCIe slot was the sole source of power. :laugh:

    Like I said several times before: follow your own advice, because you're something else.

    Speculation is fine, but if I have to choose between your speculation and what Nvidia has posted on their specification sheets...

    I'll believe Nvidia. :laugh:

    Classic troll move :toast: "I can't provide proof of what I say, so why don't you disprove it." :laugh:

    There is more than just one company. It's a shame you spend all your time just trolling for Nvidia.

    You shouldn't get mad when you're wrong. When you're wrong, you're wrong. Move on; don't make stuff up or lash out at people who pointed out something you didn't like. Provide credible links to back up your views.

    Being hostile towards others with a different view than yours is no way to enhance the community in this forum. There's no reason to jump into non-Nvidia threads and start disparaging them or their posters because you didn't like the content, or because someone doesn't like the same company as much as you do.


    Think I'll go have me some hot cocoa. :toast:
     
    Last edited: Nov 18, 2012
    eidairaman1 says thanks.
  18. KooKKiK (New Member)
    OK, show me the real power consumption test and I will believe you.


    Not that old and completely wrong argument repeating again. :banghead:

     
  19. Frick (Fishfaced Nincompoop)
    I've actually read the entire thread and it feels like you're not talking (typing? tylking?) to each other but over each other. It's quite funny actually. :laugh:
     
  20. HumanSmoke
    Last edited: Sep 4, 2013
