
AMD Debuts New 12- and 16-Core Opteron 6300 Series Processors

Discussion in 'News' started by Cristian_25H, Jan 22, 2014.

  1. Cristian_25H

    AMD today announced the immediate availability of its new 12- and 16-core AMD Opteron 6300 Series server processors, code named "Warsaw." Designed for enterprise workloads, the new AMD Opteron 6300 Series processors feature the "Piledriver" core and are fully socket and software compatible with the existing AMD Opteron 6300 Series. The power efficiency and cost effectiveness of the new products are ideal for the AMD Open 3.0 Open Compute Platform - the industry's most cost effective Open Compute platform.

    Driven by customers' requests, the new AMD Opteron 6338P (12 core) and 6370P (16 core) processors are optimized to handle the heavily virtualized workloads found in enterprise environments, including the more complex compute needs of data analysis, xSQL and traditional databases, at optimal performance per-watt, per-dollar.


    "With the continued move to virtualized environments for more efficient server utilization, more and more workloads are limited by memory capacity and I/O bandwidth," said Suresh Gopalakrishnan, corporate vice president and general manager, Server Business Unit, AMD. "The Opteron 6338P and 6370P processors are server CPUs optimized to deliver improved performance per-watt for virtualized private cloud deployments with less power and at lower cost points."

The new AMD Opteron 6338P and 6370P processors are available today through Penguin and Avnet system integrators and have been qualified for servers from Sugon and Supermicro, at starting prices of $377 and $598, respectively. More information can be found on AMD's website.

  2. fullinfusion

Wow, nothing wrong with its price!

    16 real cores :twitch:
  3. buildzoid

    If there were desktop boards for these I'd be all over the 12 core variant.
  4. Pap1er

I would also like to see desktop boards for these meat grinders.
  5. ZetZet

    Not all that real.
  6. buildzoid

More real than Intel's 8 cores / 16 threads. The 8 extra threads only appear in specific scenarios and in others they don't exist, whereas AMD's 16 cores are always capable of doing 16 tasks simultaneously. It just doesn't scale perfectly: 1 core does 100% single-core performance, but 16 cores only do around 1260%, unlike Intel's near-perfect scaling, where 1 core does 100%, 8 cores do 799%, and with Hyper-Threading it maxes out at 1038%. So in some scenarios (3D graphics rendering) the $2000 8-core Intel will beat the $600 16-core AMD, but the AMD will win in video encoding and similar dumb workloads like searching for stuff. So the AMD is a better server CPU than the Intel.
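Those scaling numbers are rough, but taking them at face value, a quick perf-per-dollar sketch (prices and percentages as quoted above, ignoring per-core speed differences):

```python
# Throughput figures quoted above, expressed relative to one core = 1.0.
amd_throughput = 12.60    # 16 AMD cores scaling to ~1260%
intel_throughput = 10.38  # 8 Intel cores + Hyper-Threading at ~1038%

amd_price = 600.0         # quoted price for the 16-core AMD
intel_price = 2000.0      # quoted price for the 8-core Intel

amd_per_dollar = amd_throughput / amd_price
intel_per_dollar = intel_throughput / intel_price

# Under these assumptions the AMD delivers roughly 4x the throughput
# per dollar, whichever chip wins a given workload outright.
print(round(amd_per_dollar / intel_per_dollar, 2))
```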
  7. Assimilator

    And, sadly, the Xeons will still beat the ever living crap out of these.
  8. NC37

I dunno. In multithreading AMD was beating Intel. Xeons are another story, but when it comes to price for the performance, that's what I'd be interested to see.
  9. SIGSEGV

Sadly, there is no Opteron-based server available in my country,
so I have no choice but to use (buy) Xeon servers and workstations for my lab, which is very expensive.
It's very frustrating.
  10. techy1

Soon there will be AMD marketing slides about a +400% performance increase over "other competitors'" 4-core CPUs :D
  11. ensabrenoir

...cool and at a great price..... but once again the lemming approach: a bunch of little... adequate cores. The best result would be a price reduction at Intel... bah hhhaaa hhhhaaaa :roll: yeah right. Maybe some day, but not because of this. Nevertheless, AMD is still moving in the right direction.
  12. Aquinus

I would like everyone to remember what the equivalent Xeon costs at that price point. I'm willing to bet the Opteron is more cost effective: considering a 10-core Xeon starts at 1600 USD, I think everything needs to be put into perspective. I would rather take two 16c Opterons than a single 10c Xeon, but that's just me.
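To put rough numbers on that (a sketch using the $598 16-core Opteron 6370P launch price from the article and the ~$1600 10-core Xeon figure quoted here; actual street prices vary):

```python
# Cost-per-core sketch from the prices quoted in this thread.
opteron_price, opteron_cores = 598.0, 16   # Opteron 6370P (16 cores)
xeon_price, xeon_cores = 1600.0, 10        # 10-core Xeon, rough figure

print(2 * opteron_price)                # 1196.0 -- two Opterons (32 cores)
                                        # still undercut one Xeon
print(opteron_price / opteron_cores)    # 37.375 USD per core
print(xeon_price / xeon_cores)          # 160.0 USD per core
```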
  13. buildzoid

It'd be true for the integer math capability, but not much else.
  14. Fragman

You're either too stupid or don't know anything about AMD CPUs. They are all independent cores with their own multiplier and voltage control, and if 1 core goes up in speed, all the others stay down until used.
That makes for better power usage.
  15. Breit

I don't get what the power characteristics have to do with the debate about what counts as a "real" core and what does not?!
The fact is that with the Bulldozer architecture, AMD chose to implement CMT in the form of modules rather than Hyper-Threading as implemented by Intel (there called SMT). A module on an AMD CPU acts as 2 independent cores, but they nonetheless share certain functional units. So technically they are NOT 2 independent cores. It's more or less the same as with Intel's Hyper-Threading, where a core can run 2 threads simultaneously and is seen by the OS as 2, but is actually only one core.
So maybe AMD's implementation of CMT/SMT in the form of modules is a step further in the direction of independent cores than Intel is with Hyper-Threading. But all that doesn't really matter at all. At the end of the day, what counts is the performance you get out of the CPU (or performance per dollar or performance per watt, whatever matters most to you).

As far as I'm concerned, they should advertise these as 6 modules / 12 threads and 8 modules / 16 threads, like Intel does with, for instance, the 8 cores / 16 threads (8c/16t) nomenclature...
  16. Prima.Vera

    Wow. You must be very smart for insulting and flaming users. Please, go on...
  17. Aquinus

The problem with that statement is that a module has enough shared hardware to run two threads in tandem, where Hyper-Threading won't always, because HT depends on parts of the CPU that are not already in use.

Intel uses unused resources in the CPU to get extra multi-threaded performance. AMD added extra hardware for multi-threaded performance instead of relying on just the spare resources available. The performance of a module vs. the performance of a single core with HT has costs and benefits of its own. With an Intel CPU, the second thread doesn't have nearly as much processing power as the first thread, whereas with AMD, the second "thread" or "core", if you will, brings much more tangible gains than the HT thread does.

It's worth mentioning that the integer units do have enough hardware to run two full threads side by side. It's the floating-point unit that doesn't, but even so, FMA is supposed to give some ability to decouple the 256-bit FP unit to do two 128-bit ops at once.

    I think AMD's goal is to emphasize what CPUs do best, integer math, and let GPUs do what they do best, FP math. Not to say that a CPU shouldn't do any FP math, but if there is a lot of FP math to be done, a GPU is better optimized to do those kinds of operations.

Also, I should add that I'm pretty sure AMD clocks are controlled on a per-module basis, but parts of each module can be power-gated to improve power usage. One of the biggest benefits of having a module is that you save die space to add that second thread without too much of a hit on single-threaded performance (relatively speaking).

Please don't feed the trolls.
  18. Prima.Vera

Aq, I agree with you.
However, I have a question. Don't you think this approach is somehow not ideal for AMD, because this way a core has a lot fewer transistors than Intel's, hence the bad performance in single-threaded applications, like games for example?
I don't understand why AMD is still going for strong GPU performance, even on the so-called top CPUs, instead of having a GPU with only the basic stuff to run the Win 7 desktop and then using the available space to increase the transistor count of each of the cores? That way I think they would finally have a CPU to compete with the i7. Just some thoughts.
  19. Aquinus

    Well, AMD has always pushed the "future is fusion" motto. HSA has always been a constant theme of theirs. I will be thrilled when AMD has an APU where CPU and iGPU compute units are shared, further blurring the distinction between massively parallel workloads on GPUs and fast serial workloads on CPUs.

Either way, CPUs are fast enough that there definitely is a point of diminishing returns. A CPU will only go so fast, and you can only cram so many transistors into any given area. Also, in games that can utilize multi-core systems well, AMD isn't trailing behind all that much. Considering the upcoming consoles have 8-core CPUs in them, there will be more of a push to utilize that kind of hardware. It's completely realistic for a machine to have at least 4 logical threads now, and as many as 8 for a consumer CPU. This wasn't the case several years ago.
  20. Breit

    I guess that's because it's technically very challenging and AMD might simply not be able to come up with something better? Just a guess... ;)
  21. Steevo

Dual socket with 16 cores each can run 32 VMs in one rackmount tray. Company X has 320 employees running thin clients, so that's 10 trays plus one spare, and assuming the same drive/memory/board cost, the AMD will win for $$$ reasons alone. Data entry jobs don't need Xeon core performance for 10-key and typing.
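That sizing works out as a quick sketch (assuming 32 VMs per dual-socket tray, one seat per VM, and one spare tray on top):

```python
import math

vms_per_tray = 32   # dual socket x 16 cores -> 32 single-core VMs per tray
employees = 320     # thin-client seats to host

trays = math.ceil(employees / vms_per_tray)
print(trays)        # 10 trays cover all 320 seats
print(trays + 1)    # plus one spare for redundancy
```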
  22. Breit

Sure? In theory you might be right, but most consumer-grade hardware, at least, is not that great at FP math (I'm talking about DP-FP, of course).
An ordinary Core i7-4770K quad-core has a DP performance of about 177 GFLOPS. That's for an 84W CPU (talking TDP). NVIDIA's 780 Ti, though, is rated at 210 GFLOPS DP performance (DP is crippled on consumer chips, I know), but this comes at the cost of a whopping 250W TDP, which is about 3x the power draw! So simple math tells me that the Haswell i7 is about 2.5x as efficient in DP-FP calculations as current-gen GPU hardware is...
Single precision might be a totally different story though. :)
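The perf-per-watt arithmetic from those quoted figures, as a sketch (TDP used as a stand-in for actual power draw):

```python
# DP GFLOPS and TDP as quoted above.
i7_gflops, i7_watts = 177.0, 84.0      # Core i7-4770K
gpu_gflops, gpu_watts = 210.0, 250.0   # GTX 780 Ti (DP-crippled)

i7_eff = i7_gflops / i7_watts          # ~2.11 GFLOPS per watt
gpu_eff = gpu_gflops / gpu_watts       # 0.84 GFLOPS per watt

# The CPU comes out roughly 2.5x more efficient at DP under these numbers.
print(round(i7_eff / gpu_eff, 2))
```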
  23. james888

An AMD 7970 has ~1060 GFLOPS DP performance at 225W TDP. AMD GPUs are pretty darn great at compute, and AMD APUs will use AMD GPUs, not NVIDIA GPUs. So your comparison with a 780 Ti is silly.
  24. Breit

Even if it's way off topic:
An NVIDIA Titan has ~1300 GFLOPS DP at 250W TDP, but that was not the point.
All that compute power on your GPU is pretty useless unless you have a task where you have to crunch numbers for an extended period of time AND your task can be scheduled in parallel, but I guess you know that. The latencies for copying data to the GPU and, after processing there, from the GPU back to main memory / the CPU are way too high for any mixed workload to perform well, so strong single-threaded FP performance will always be important in some way.
  25. Aquinus

You might want to read up on APUs again. There are benefits to be had from having hUMA on an APU, which solves the memory-copying problem. The simple point is that CPUs are good at serial processing and GPUs are good at massively parallel ops. Depending on your workload, one may be better than the other. More often than not, though, CPUs are doing integer math and GPUs are doing floating-point math (single or double).

    Basically CPUs are good at working with data that changes a lot (relatively small amounts of data that change a lot). GPUs are good at processing (or transforming if you will) a lot of data in a relatively fixed way.

So a simple example of what GPUs do best would be something like:
    Code:
    add 9 and multiply by 2 to every element of [1 2 3 4 5 6 7 8 9 ... 1000]
Where a CPU would excel at something like adding all of those elements, or doing something that reduces those values, as opposed to transforming them into a set of the same size as the input.
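A runnable version of that distinction, with plain Python standing in for the two styles of work:

```python
from functools import reduce

data = list(range(1, 1001))   # [1, 2, 3, ..., 1000]

# GPU-style stream work: the same fixed transform applied independently to
# every element; the output has the same size as the input.
transformed = [(x + 9) * 2 for x in data]

# CPU-style work: a reduction that collapses the whole set to one value.
total = reduce(lambda acc, x: acc + x, data, 0)

print(transformed[:3])   # [20, 22, 24]
print(total)             # 500500
```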

    See Stream Processing on Wikipedia.
