Tuesday, November 14th 2017

China Pulls Ahead of U.S. in Latest TOP500 List

The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list's inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US still give it a solid second-place finish, with Japan in third place with 35, followed by Germany with 20, France with 18, and the UK with 15.

China has also overtaken the US in aggregate performance. The Asian superpower now claims 35.4 percent of the TOP500 flops, with the US in second place at 29.6 percent.

The top 10 systems remain largely unchanged since the June 2017 list, with a couple of notable exceptions.

Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC), and installed at the National Supercomputing Center in Wuxi, maintains its number one ranking for the fourth time, with a High Performance Linpack (HPL) mark of 93.01 petaflops.
Tianhe-2 (Milky Way-2), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is still the number two system at 33.86 petaflops.
Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, maintains its number three position with 19.59 petaflops, reaffirming its status as the most powerful supercomputer in Europe. Piz Daint was upgraded last year with NVIDIA Tesla P100 GPUs, which more than doubled its previous HPL performance of 9.77 petaflops.
The new number four system is the upgraded Gyoukou supercomputer, a ZettaScaler-2.2 system deployed at Japan's Agency for Marine-Earth Science and Technology, which was the home of the Earth Simulator. Gyoukou was able to achieve an HPL result of 19.14 petaflops using PEZY-SC2 accelerators alongside conventional Intel Xeon processors. The system's 19,860,000 cores represent the highest level of concurrency ever recorded on the TOP500 rankings of supercomputers.

Titan, a five-year-old Cray XK7 system installed at the Department of Energy's (DOE) Oak Ridge National Laboratory, and still the largest system in the US, slips down to number five. Its 17.59 petaflops are mainly the result of its NVIDIA K20x GPU accelerators.
Sequoia, an IBM BlueGene/Q system installed at DOE's Lawrence Livermore National Laboratory, is the number six system on the list with a mark of 17.17 petaflops. It was deployed in 2011.

The new number seven system is Trinity, a Cray XC40 supercomputer operated by Los Alamos National Laboratory and Sandia National Laboratories. It was recently upgraded with Intel "Knights Landing" Xeon Phi processors, which propelled it from 8.10 petaflops six months ago to its current high-water mark of 14.14 petaflops.

Cori, a Cray XC40 supercomputer, installed at the National Energy Research Scientific Computing Center (NERSC), is now the eighth fastest supercomputer in the world. Its 1,630 Intel Xeon "Haswell" processor nodes and 9,300 Intel Xeon Phi 7250 nodes yielded an HPL result of 14.01 petaflops.

At 13.55 petaflops, Oakforest-PACS, a Fujitsu PRIMERGY CX1640 M1 installed at the Joint Center for Advanced High Performance Computing in Japan, is the number nine system. It too is powered by Intel "Knights Landing" Xeon Phi processors.

Fujitsu's K computer installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, is now the number 10 system at 10.51 petaflops. Its performance is derived from its 88 thousand SPARC64 processors linked by Fujitsu's Tofu interconnect. Despite its tenth-place showing on HPL, the K computer is the top-ranked system on the High-Performance Conjugate Gradients (HPCG) benchmark.

For the first time, each of the top 10 supercomputers delivered more than 10 petaflops on HPL. There are also 181 systems with performance greater than a petaflop - up from 138 on the June 2017 list. Taking a broader look, the combined performance of all 500 systems has grown to 845 petaflops, compared to 749 petaflops six months ago and 672 petaflops one year ago. Even though aggregate performance grew by nearly 100 petaflops, the relative increase is well below the list's long-term historical trend.
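The slowdown is easy to verify from the aggregate figures above. In the list's earlier years, aggregate performance roughly doubled annually, well above the rates computed below (a back-of-the-envelope check, not TOP500's own analysis):

```python
# Aggregate HPL performance of all 500 systems, from the last
# three TOP500 lists (petaflops), as quoted in the article.
nov_2017 = 845
jun_2017 = 749
nov_2016 = 672

half_year_growth = (nov_2017 - jun_2017) / jun_2017  # about 12.8%
yearly_growth = (nov_2017 - nov_2016) / nov_2016     # about 25.7%

print(f"six-month growth: {half_year_growth:.1%}")
print(f"one-year growth:  {yearly_growth:.1%}")
```

About 26 percent year-over-year growth sounds healthy in most industries, but it is far short of the near-doubling the list averaged historically.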

A further reflection of this slowdown is the list turnover. The entry point in the latest rankings moved up to 548 teraflops, compared to 432 teraflops in June. The 548-teraflop system was in position 370 in the previous TOP500 list. The turnover is in line with what has been observed over the last four years, but much lower than in earlier years of the list.

A total of 102 systems employ accelerator/coprocessor technology, up from 91 six months ago. Of these, 86 use NVIDIA GPUs, 12 use Intel Xeon Phi coprocessors, and five use PEZY Computing accelerators. Two systems use a combination of NVIDIA GPUs and Intel Xeon Phi coprocessors. An additional 14 systems now use Xeon Phi chips as the main processing unit.

Green500 Highlights

Turning to the new Green500 rankings, the top three positions are taken by newly installed systems in Japan, all of which are based on the ZettaScaler-2.2 architecture and the PEZY-SC2 accelerator. The SC2 is a second-generation 2048-core chip that provides a peak performance of 8.192 teraflops in single-precision.
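The quoted 8.192-teraflop peak is consistent with the standard peak-performance formula, assuming a 1 GHz clock and 4 single-precision operations per core per cycle; those two per-core figures are my assumptions for illustration, not numbers given in the article:

```python
# Peak performance = cores x clock x flops-per-core-per-cycle.
cores = 2048            # PEZY-SC2 core count (from the article)
clock_hz = 1.0e9        # assumed: 1 GHz clock
flops_per_cycle = 4     # assumed: 4 single-precision ops/core/cycle

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(peak_tflops)  # 8.192
```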

The most efficient of these ZettaScaler supercomputers is the Shoubu system B installed at RIKEN's Advanced Center for Computing and Communication. It achieved a power efficiency of 17.0 gigaflops/watt.
The number two Green500 system is the Suiren2 cluster at the High Energy Accelerator Research Organization/KEK, which managed to reach 16.8 gigaflops/watt.

The number three Green500 slot was captured by PEZY Computing's own Sakura system, which achieved 16.7 gigaflops/watt. All three of these top systems sit in the bottom half of the TOP500 rankings: Shoubu system B at position 258, Suiren2 at 306, and Sakura at 275.

The fourth greenest supercomputer is a DGX SaturnV Volta system, which is installed at NVIDIA headquarters in San Jose, California. It achieved 15.1 gigaflops/watt, and comes in at number 149 on the TOP500 list. The number five system is Gyoukou, yet another ZettaScaler-2.2 machine. It achieved an efficiency of 14.2 gigaflops/watt and it currently ranks as the fourth most powerful supercomputer in the world.
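Gigaflops-per-watt figures make it easy to estimate a system's power draw from its HPL score. Taking Gyoukou as an example (a rough estimate only; Green500 power is measured under its own run rules, and the HPL and efficiency figures need not come from the same run):

```python
hpl_gflops = 19.14e6    # Gyoukou's 19.14 petaflops, in gigaflops
efficiency = 14.2       # gigaflops per watt (Green500 figure)

# Power = performance / efficiency; convert watts to megawatts.
power_mw = hpl_gflops / efficiency / 1e6
print(round(power_mw, 2))  # 1.35
```

Roughly 1.35 megawatts is remarkably low for a near-20-petaflop machine; Titan, by comparison, draws several times that for less performance.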

Vendor Trends

A total of 471 systems, representing 94.2 percent of the total, are now using Intel processors, which is slightly up from 92.8 percent six months ago. The share of IBM Power processors is at 14 systems, down from 21 systems in June.

The number of systems using Gigabit Ethernet is unchanged at 228 systems, in large part thanks to 204 systems now using 10G Ethernet. InfiniBand technology is found in 163 systems, down from 178 systems in the previous list, and remains the second most-used system interconnect technology in the list. Intel Omni-Path technology is now in 35 systems, down from 38 six months ago.

HPE has the lead in the number of installed supercomputers at 122, which represents nearly a quarter of all TOP500 systems. This includes several systems originally installed by SGI, which is now owned by HPE. HPE accounted for 144 systems six months ago.

Lenovo follows HPE with 81 systems, down from 88 on the June list. Inspur rose further in the rankings and now has 56 systems, up from only 20 six months ago. Cray now has 53 systems, down from 57 six months ago. Sugon features 51 systems on the list, up from 44 in June. IBM follows with only 19 systems remaining under its label. These are mostly BlueGene/Q supercomputers, reflecting an aging install base; the average age of IBM systems on the list is now five years.

Cray continues to be the clear performance leader, claiming 19.5 percent of the list's aggregate performance. HPE is second with 15.2 percent of the TOP500 flops. Thanks to the number one Sunway TaihuLight system, NRCPC retains the third spot with 11.1 percent of the total performance. Lenovo is fourth with 9.1 percent of performance, followed by Inspur at 6.3 percent, IBM at 6.1 percent and Sugon at 5.2 percent. All top vendors, with the exception of Inspur and Sugon, lost performance share compared to six months ago.

HPCG Results

The TOP500 list now incorporates High-Performance Conjugate Gradients (HPCG) benchmark results to provide a more balanced look at system performance. The benchmark incorporates calculations in sparse matrix multiplication, global collectives, and vector updates, which more closely represent the mix of computation and data access patterns found in many supercomputing codes.

As previously mentioned, the fastest system using the HPCG benchmark remains Fujitsu's K computer, which is ranked number 10 in the overall TOP500 rankings. It achieved 602.7 teraflops on HPCG, followed closely by Tianhe-2 with a score of 580.0 teraflops. The upgraded Trinity supercomputer comes in at number three at 546.1 teraflops, followed by Piz Daint at number four with 486.4 teraflops, and Sunway TaihuLight at number five at 480.8 teraflops.
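The operation mix HPCG stresses can be illustrated with a toy conjugate-gradient solver. This is a minimal sketch for intuition only, not the benchmark itself, which runs a large sparse 3D problem with preconditioning across the whole machine:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A.

    Each iteration performs one matrix-vector product, two dot
    products (global reductions at scale), and a few vector
    updates: the operation mix HPCG measures.
    """
    x = np.zeros_like(b)
    r = b - A @ x           # initial residual
    p = r.copy()            # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # the (sparse) matvec
        alpha = rs_old / (p @ Ap)       # step length
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new search direction
        rs_old = rs_new
    return x

# Tiny symmetric positive-definite test system (illustrative only)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

Because the matrix-vector products are memory-bound and the dot products require global communication, machines tuned for dense HPL arithmetic often score a small fraction of their HPL number on HPCG, which is why the K computer leads here despite ranking tenth on HPL.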

The International Space Station computer, built by HPE, is now listed in the HPCG results, making it the "highest" computer on the list.

Source: Top500.org

14 Comments on China Pulls Ahead of U.S. in Latest TOP500 List

#1
StrayKAT
Most of it is made in Taiwan anyways :p

And I can't tell if that's actually a win for China or the US... and neither do they probably.
#3
bug
It doesn't matter how big it is, it matters what you do with it :D

On a more serious note, China also has the largest radiotelescope in the world, but apparently of the handful of people capable of running it, no one is willing to move to China to do it.
Still, it's interesting to see where things are headed.
#4
StrayKAT
bug said:
It doesn't matter how big it is, it matters what you do with it :D

On a more serious note, China also has the largest radiotelescope in the world, but apparently of the handful of people capable of running it, no one is willing to move to China to do it.
Still, it's interesting to see where things are headed.
Considering the average mentality there, I wonder how well they take care of these particular things. Because they let a lot of their buildings go to waste (or rather, they just tear them down and build new ones). Even normal people don't even take care of little things, like their cars and bikes. I'm sure it's all better in computers and sciences, but I wonder just how much.
#5
bug
StrayKAT said:
Considering the average mentality there, I wonder how well they take care of these particular things. Because they let a lot of their buildings go to waste (or rather, they just tear them down and build new ones). Even normal people don't even take care of little things, like their cars and bikes. I'm sure it's all better in computers and sciences, but I wonder just how much.
Who cares? It's their stuff. Though if we're talking about building and demolishing, that can't be good for their health.
But yes, it is a different world. I haven't seen/experienced it myself, but I know people that have.
#6
FordGT90Concept
"I go fast!1!11!1!"
Top500 isn't really a good metric going forward anymore. USA has a lot of quantum and neural network processors that can solve a lot more complex problems than super computer clusters can but they suck at brute force FLOPS.
#7
Prima.Vera
I have read some interviews a long time ago, that in order for an A.I. to become self aware, an equivalent performance of 100 Zetta flops is needed, so we are still looong way until there. Probably 100 more years? ;)
#8
Vya Domus
FordGT90Concept said:
USA has a lot of quantum and neural network processors that can solve a lot more complex problems than super computer clusters can but they suck at brute force FLOPS.
Last time I checked quantum processors were still nowhere near as fast or robust as traditional computing but maybe I am wrong. In addition to that their practicality is still under a big question mark.

Prima.Vera said:
I have read some interviews a long time ago, that in order for an A.I. to become self aware
No one has a damn clue how to even quantify self awareness/consciousnesses , let alone make a prediction on how much computing power it would take. Hell , it might be that it's something completely independent from that.
#9
bug
Prima.Vera said:
I have read some interviews a long time ago, that in order for an A.I. to become self aware, an equivalent performance of 100 Zetta flops is needed, so we are still looong way until there. Probably 100 more years? ;)
Neah, that's not how it works. You only need enough to make an AI that can make a better version of itself. Then it can take it from there ;)
It also matters a lot how you define AI. Will a simple neural network fit the description? Something that passes the Turing test? Does it need to know everything from the start? Does it need to actively learn?
#10
FordGT90Concept
"I go fast!1!11!1!"
Vya Domus said:
Last time I checked quantum processors were still nowhere near as fast or robust as traditional computing but maybe I am wrong. In addition to that their practicality is still under a question mark.
They're far better for solving specific types of problems.

bug said:
Does it need to actively learn?
This. And Google already made one.
#11
Vya Domus
FordGT90Concept said:
They're far better for solving specific types of problems.
Exactly my point , limited robustness. I have no doubt they will find their usefulness in some areas but as to surpass or at the very least become comparable in all aspects to traditional computing , I have my doubts.
#12
FordGT90Concept
"I go fast!1!11!1!"
These TOP500 supercomputer clusters would take years to solve problems the quantum processors can solve in hours. FLOPS are floating-point calculations. Quantum are deep-branching logic. The former is good for physics calculations, the latter is good for finding viable routes through complex datasets.
#13
OSdevr
Vya Domus said:
No one has a damn clue how to even quantify self awareness/consciousnesses , let alone make a prediction on how much computing power it would take. Hell , it might be that it's something completely independent from that.
Indeed, they don't call it the hard problem for nothing. It's one of the only questions we don't even know how to ask.
#14
StrayKAT
bug said:
Who cares? It's their stuff. Though if we're talking about building and demolishing, that can't be good for their health.
But yes, it is a different world. I haven't seen/experienced it myself, but I know people that have.
It's not that I care. I'm just making an observation. I wonder if the tech world over there fares any better. I honestly don't know.