China Dominates the World TOP500 Supercomputers

China Navigating The Homegrown Waters For Exascale
Jeffrey Burt, 2 days ago


A major part of China’s initiative to build an exascale-class supercomputer has been the country’s determination to rely mostly on homegrown technologies – from processors and accelerators to interconnects and software – rather than turn to vendors outside its borders, particularly those from the United States. The drive is part of a larger effort by China’s leaders to grow key industries inside the country, including its technology sector, to the point where they can compete with others across the globe.

Some of the fruits of the push for China-made technology components can be seen in the country’s Sunway TaihuLight supercomputer, the massive system that sat atop the Top500 list of the world’s fastest systems – as measured by the Linpack benchmark – until it was toppled in June by the Summit system, which is based on technologies from IBM, Nvidia, and Mellanox and is housed at Oak Ridge National Laboratory in the United States. The Sunway TaihuLight, which delivers a Linpack performance of 93 petaflops, is powered by Sunway’s SW26010 processors, uses an interconnect technology from Sunway, and runs the Sunway RaiseOS 2.0.5 operating system.

But as the country works toward its exascale system, engineers have to weigh such factors as how the systems will be used and the budgets available for developing the various components, and the reliance on homegrown technologies raises its own challenges, including the need to develop an ecosystem to support them, according to Qian Depei, a professor at Beihang University and dean of the School of Data and Computer Science at Sun Yat-sen University, who spoke at this week’s SC18 supercomputing conference in Dallas.

Such discussions about the ongoing competition between the United States and China in supercomputing and HPC tend to crop up around the times of the ISC and SC supercomputing shows, and it’s no different this week. Even with the latest version of the Top500 list, much of the focus was not only on the fact that Sierra, another IBM-based supercomputer at the Lawrence Livermore National Lab, muscled its way into the number-two spot and dropped TaihuLight into third place, but also that China grew its share of the 500 systems on the list to 227 – accounting for 45 percent – while the United States saw its numbers fall to 109 supercomputers, or 22 percent. However, those U.S. systems are on average more powerful, giving the country 38 percent of the aggregate system performance on the list. China had 31 percent.

The competition is not only about national pride. The leaders in supercomputing, HPC, and particularly exascale computing – which is needed to run increasingly complex HPC workloads that more and more include big data analytics and artificial intelligence – will have an edge in everything from scientific research and the military to healthcare and the economy. The United States and China appear to be in a race to see which will get there first, though readers of The Next Platform know that the European Union is aggressively pursuing its own exascale initiatives, as is Japan.

During his address, Qian told attendees that China has made high-performance computing a focus since 2002 and has now turned its efforts to building an exascale system.

“HPC has been identified as one of the priority areas in China since the early 1990s,” Qian said. “In the last 15 years or so we have implemented three key projects. It was quite unusual [for a country] to continually support key projects in one area under the national high-performance program. That reflects the importance of the high-performance program. The result of the projects was some petascale machines.”

The most well-known of those systems are TaihuLight and Tianhe-2; the latter went online in 2013 and held the top spot on the Top500 until being knocked off by TaihuLight two years ago. The country’s supercomputing infrastructure – called the China National Grid – now includes 200 petaflops of shared computing power and more than 160 PB of shared storage, running 400 applications and services that serve about 1,900 user groups. It includes two main sites, six national supercomputing centers, 10 ordinary sites, and one operations center.

Now the country is in the midst of its exascale project, which is built around three prototype systems – Sugon, Tianhe, and Sunway. Sugon will use traditional technologies such as x86 processors and accelerators made by Chinese chip maker Hygon, a multi-level interconnect design, and immersive cooling that does away with the need for fans. The Tianhe prototype will use the new 16-nanometer Matrix-2000+ (MT-2000+) many-core processor and a 3D butterfly network with a maximum of four hops across the whole system.

The Sunway prototype will use the SW26010 chips, a high-bandwidth, high-throughput network powered by a self-developed network chip, and a water-cooling system with an enhanced copper cold plate. A node will include two processors and four-way DDR4 memory, while a supernode will comprise 256 nodes with a full 256×256 connection.


Qian said the challenges that need to be overcome include power consumption, application performance, programmability, and resilience.

“The energy efficiency is the most challenging part of the project,” he said. “Without that limitation, I think it’s relatively easier to build an exascale system. So how can we balance power consumption, performance, and programmability? How can we support a wide range of applications while keeping high application efficiency, and how do we improve the resilience for long-term, nonstop applications?”

The engineers are weighing such questions as whether to develop a heterogeneous, accelerated system or one that leverages a many-core architecture. They’re focusing on hybrid memory that includes DRAM and non-volatile memory (NVM) and on putting the memory closer to the processor. They’re also considering an optical interconnect and placing it closer to the chips by shrinking the size of the optical devices. As far as compute goes, the question is whether to go with a special-purpose or general-purpose processor.

“The number of exascale computing applications is small, so should we use a very efficient special-purpose architecture to support those applications?” he asked. “On the other hand, Chinese machines will be installed at general-purpose computing centers, so it’s impossible to support only a small number of applications. Our solution is combining general purpose plus special purpose.”

Work also is being done outside of the system itself. The country has upgraded the China National Grid, creating a service environment that includes a portal for users, growing it to 19 sites and improving the bandwidth. They’re creating an application development platform and another platform to drive HPC education and increase the country’s talent pool, as well as working to build an application ecosystem for its exascale system.

“Because the future exascale system will be implemented with our homegrown processor, the ecosystem has become a very crucial issue,” Qian said. “We need the libraries, the compilers, the OS, the runtime to support the new processor, and we also need some binary dynamic translation to execute commercial software on our system. We need the tools to improve the performance and energy efficiency, and we know we also need the application development support. This is a very long-term job. We need the cooperation of the industry and also the end users.”


https://www.nextplatform.com/2018/11/15/china-navigating-the-homegrown-waters-for-exascale/amp/
 
China Spills Details on Exascale Prototypes
Michael Feldman | November 19, 2018 21:24 CET

At SC18, Depei Qian delivered a talk revealing some of the beefier details of the three Chinese exascale prototype systems installed in 2018. The 45-minute session confirmed some of the speculation about these machines that we have reported on, but also offered a deeper dive into their design and underlying hardware elements.

Before he got into the prototype particulars, Qian, who is the chief scientist of China’s national R&D project on high performance computing, presented an overview of the country’s exascale effort, specifically its goals and challenges. With regard to the former, he reiterated China’s commitment to making sure the technologies that would be used for these machines would be “self-controllable,” with the implication that most if not all of the hardware and software elements would be developed domestically. The nature of the three prototypes certainly reflects this strategy.

Qian also talked about more specific goals for these supercomputers. Specifically, a Chinese exascale system will provide a peak performance of one exaflop (apparently ignoring the Linpack requirement that most other nations are adhering to); a minimum system memory capacity of 10 PB; an interconnect that offers HPC-style latency and scalability and delivers 500 Gbps of node-to-node bandwidth, although most of these systems seem to be topping out at 400 Gbps; and a system-level energy efficiency of at least 30 gigaflops per watt.

That 30 gigaflops/watt figure works out to about 33 megawatts for an exaflop, which is slightly higher than the 20 MW to 30 MW being envisioned in the exascale programs of the US, Japan, and the EU – and those are for Linpack exaflops. In fact, Qian said energy efficiency is their number one challenge, the lesser ones being application performance, programmability, and resilience.
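That power figure is simple arithmetic, and it checks out:

```python
# Back-of-the-envelope check of the exascale power target discussed above.
# An exaflop is 1e18 flops; 30 gigaflops/watt is 30e9 flops per watt.
PEAK_FLOPS = 1e18   # one peak exaflop
EFFICIENCY = 30e9   # 30 gigaflops per watt

power_megawatts = PEAK_FLOPS / EFFICIENCY / 1e6
print(f"{power_megawatts:.1f} MW")  # 33.3 MW
```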

As far as the prototypes go, Qian’s talk at SC18 was the first instance of a public presentation that revealed the hardware makeup of these systems. A fair amount of this was provided in a slide deck he presented last year in Japan, but since this was prior to the installation of the prototypes, some of that information is no longer accurate.

All three prototypes – Sugon, Tianhe, and Sunway (ShenWei) – were deployed over the last 10 months, with the last one being unveiled just a month ago. From Qian’s description of their design and components, we now have a fairly good understanding of what the full exascale systems will look like, although some critical details are still missing.

Sugon prototype


As we speculated in October, the Sugon prototype is indeed equipped with the AMD-licensed Hygon x86 processors. The advantage of this design for the supercomputing community in China is that it maintains compatibility with HPC software that’s already in production today.

The more interesting tidbit here is that the prototype also uses something called a “DCU” as an accelerator. Apparently, these chips are provided by Hygon as well and, according to Qian’s 2017 presentation, will deliver 15 teraflops per chip in the full-blown exascale system. However, their performance to date appears to be just a fraction of that.

In the 512-node Sugon prototype, there are two Hygon x86 CPUs plus two Hygon DCUs per node, but in the current test configuration, only half the DCUs are being used. And since the peak performance of the whole machine is 3.18 petaflops, that means each DCU in the prototype is delivering something in the neighborhood of 6 teraflops – not bad, but they will need to more than double that over the next couple of years if they intend to meet their goals.
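That per-chip estimate follows from simple division, under the simplifying assumption that the DCUs account for essentially all of the machine’s peak (the CPUs’ contribution is ignored here):

```python
# Rough per-DCU estimate for the Sugon prototype, assuming the accelerators
# account for essentially all of the machine's 3.18-petaflop peak
# (the Hygon x86 CPUs' contribution is ignored for simplicity).
NODES = 512
DCUS_PER_NODE = 2
FRACTION_ACTIVE = 0.5          # only half the DCUs run in the test config
SYSTEM_PEAK_TERAFLOPS = 3180   # 3.18 petaflops

active_dcus = NODES * DCUS_PER_NODE * FRACTION_ACTIVE
teraflops_per_dcu = SYSTEM_PEAK_TERAFLOPS / active_dcus
print(f"~{teraflops_per_dcu:.1f} teraflops per DCU")  # ~6.2 teraflops
```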

Sugon is aiming for the x86 CPU to deliver about a teraflop per chip in the exascale system, which means either that Hygon has to bump up the performance of its implementation of the first-generation Zen CPU or that it plans to license the Zen 2 or Zen 3 IP from AMD, either of which could easily supply the needed teraflop.

The Sugon prototype interconnect is a 6D torus, based on 200 Gbps technology of undetermined origin. It looks like they are aiming for about twice that bandwidth at some point, although that would still be 100 Gbps short of the generic 500 Gbps exascale goal. Whatever it is, the interconnect relies on optical technology as part of its implementation.

The other interesting design feature of the Sugon machine is the use of an immersive cooling system. The prototype is employing something called Imm058, a coolant that boils at the relatively low temperature of 50C (122F). That makes it a good deal more effective than liquid cooling based on water, which boils at 100C (212F).

Tianhe prototype


Qian provided the least amount of detail for the Tianhe prototype, including which processor will power it. As we have speculated in the past, we think this system will be based on a Chinese-designed Arm chip, which will likely be some version of Phytium’s Xiaomi platform.

In Qian’s SC18 presentation, as well as the one in 2017, the chip is only characterized as a new manycore processor that balances compute and memory, which frankly could be anything. But since China intends to build an Arm-based exascale supercomputer as one of its three options, by the process of elimination, this has to be it. Unless, of course, they have changed their minds.

As with the Sugon prototype, the Tianhe system is made up of 512 nodes and delivers a nearly identical amount of performance: 3.14 petaflops. That suggests quite a powerful processor, something akin to the ShenWei manycore chip (see below), or perhaps a more modest processor suitable for a four-socket-per-node setup.

The network is a 3D butterfly design with a maximum of four hops. It is based on a high radix router chip that draws less than 200 watts of power. Optoelectrical technology will be used for the interconnect fabric, which in the final exascale system will provide 400 Gbps of bandwidth per node.

The design also emphasizes fault tolerance as a key feature. This is implemented in the interconnect, as well as in a new but unspecified storage medium.

Bottom line: This machine is still largely a mystery.

Sunway (Shenwei) prototype


This one uses the ShenWei 26010 (SW26010) processor, the 260-core chip that currently powers the third-ranked TaihuLight supercomputer. Each prototype node has two of these processors, which together deliver about 6 peak teraflops. The entire 512-node machine offers 3.13 petaflops.

In its current configuration, each node provides 11 gigaflops per watt. Sunway engineers will have to nearly triple that to meet the stated exascale energy-efficiency target. Needless to say, that’s a lot of innovation that needs to occur in the two to three years remaining before the final system is expected to be deployed.
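The “nearly triple” claim can be checked directly from the two figures involved:

```python
# The efficiency gap between the Sunway prototype as measured and the
# 30 gigaflops/watt exascale target cited earlier in the article.
current_gflops_per_watt = 11.0
target_gflops_per_watt = 30.0

improvement_needed = target_gflops_per_watt / current_gflops_per_watt
print(f"{improvement_needed:.2f}x improvement needed")  # 2.73x
```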

Unlike the Sunway TaihuLight supercomputer, which uses Mellanox InfiniBand as the basis of its interconnect fabric, the exascale prototype employs a home-grown network chip that provides 200 Gbps of point-to-point bandwidth. Again, this is part of China’s strategy to bring all the exascale technology in-country. Along the same lines, this prototype’s storage subsystem is based on a ShenWei storage box.

As with the other prototypes, the Sunway system uses a liquid cooling system, but in this case a more conventional one based on a copper cold plate design.

Final thoughts

It’s probably no accident that each of these prototypes was deployed with 512 nodes. The standard size will make it easier to evaluate these systems on a level playing field, while providing at least petascale performance for developing and running software. Despite that, these are not pre-exascale machines in the sense that they will serve as direct stepping stones to full-up exascale supercomputers.

These 3-petaflop prototypes are more like technology testbeds, and it will be a challenge to scale these designs over a single generation without an intervening pre-exascale platform. We may yet see such systems deployed in China over the next two or three years (in fact, it is plausible to consider TaihuLight as such a machine), but time is not on their side. The stated goal of bringing up the first exascale system in 2020 seems less likely than it did two years ago, and even a 2021 deployment would be a significant accomplishment.

Furthermore, although China has made noteworthy strides in designing and developing high performance processors like ShenWei, as Qian admitted, the country is playing catchup in semiconductor manufacturing and packaging. That will slow development of a next generation of processors, network chips, and memory devices needed for their exascale machinery.

That said, China’s exascale efforts are poised to change the global supercomputing landscape, not just for these extreme-scale systems, but for everyday HPC. At a time when Moore’s Law is slowing down, and high performance computing is being redefined by applications in data analytics and machine learning, the global community will benefit from a greater diversity of designs and approaches. The emergence of these first exascale supercomputers may turn out to be the least interesting part of all of this.


China Spills Details on Exascale Prototypes | TOP500 Supercomputer Sites
 
Zettascale by 2035? China Thinks So | HPCWire
By Tiffany Trader
December 6, 2018

Exascale machines (of at least 1 exaflops peak) are anticipated to arrive around 2020, a few years behind original predictions; and given that extreme-scale performance challenges are not getting any easier, it makes sense that researchers are already looking ahead to the next big 1,000x performance goal post: zettascale computing. In a recently published paper, a team from the National University of Defense Technology in China, responsible for the Tianhe series of supercomputers, suggests that it will be possible to build a zettascale machine by 2035. The paper outlines six major challenges with respect to hardware and software, concluding with recommendations to support zettascale computing.


China’s zettascale strawman

The perspective piece gives an interesting peek into China’s post-exascale intentions (the project is supported by the National Key Technology R&D Program of China), but the challenges presented will be familiar to anyone engaged in pushing the boundary on leadership supercomputing.

The article “Moving from exascale to zettascale computing: challenges and techniques,” published in Frontiers of Information Technology & Electronic Engineering (as part of a special issue organized by the Chinese Academy of Engineering on post-exascale computing), works as a high-level survey of focus areas for breaching the next big performance horizon. And when might that be? The research team, even while pointing to slowdowns in performance gains, has set an ambitious goal: 2035. For the purposes of having a consistent metric, they’ve defined zettascale as a system capable of 10^21 double-precision 64-bit floating-point operations per second peak performance.
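To put that 10^21 figure in perspective, here is a small illustration; note that the 100 MW power budget below is an assumed number chosen for illustration, not a figure from the paper:

```python
# Scale of the zettascale target: 1e21 flops is a 1,000-fold jump over an
# exaflop. The 100 MW power budget is a hypothetical figure for
# illustration only, not a number from the NUDT paper.
ZETTAFLOP = 1e21
EXAFLOP = 1e18
assumed_power_watts = 100e6  # hypothetical 100 MW facility

scale_factor = ZETTAFLOP / EXAFLOP
required_gflops_per_watt = ZETTAFLOP / assumed_power_watts / 1e9
print(f"{scale_factor:.0f}x exascale")                          # 1000x
print(f"{required_gflops_per_watt:.0f} gigaflops/watt needed")  # 10000
```

Even under that generous power budget, the implied efficiency is more than 300 times the 30 gigaflops/watt exascale target, which is why the paper leans so heavily on new device and architecture technologies.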

The potential impact of mixed-precision arithmetic and AI-type algorithms on performance metrics (already in motion) was not a focus topic, but the authors did note, “With the continuous expansion of application types and scales, we expect that the conventional scientific computing and the new intelligent computing will further enrich the application layer. Techniques (such as machine learning) will be used to auto-tune various workloads during runtime (Zhang et al., 2018).”

The likely impact on architectures was also noted:

"Since conventional HPC applications and emerging intelligent computing applications (such as deep learning) will both exist in the future, the processor design should take mixed precision arithmetic into consideration to support a large variety of application workloads."

The paper is organized thusly:

1 Introduction
2 Future technical challenges in high performance computing
2.2 Challenges in power consumption
2.3 Challenges in interconnection
2.4 Challenges in the storage system
2.5 Challenges in reliability
2.6 Challenges in programming
3 Future high-performance computing technology evolution and revolution
3.1 Architecture
3.2 High-performance interconnecting technology
3.3 Emerging storage technology
3.4 New manufacturing process
3.5 Programming models and environments
4 Suggestions for zettascale computing
The 9-page paper is accessible and best read in full. This excerpt from the final section gives a sense of the directions under consideration:

“To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators (including new computing paradigms like quantum computing) are likely to co-exist to boost high performance computing in a joint effort. Enabled by new interconnect materials such as photonic crystals, fully optical-interconnecting systems may come into use, leading to more scalable, high-speed, and low-cost interconnection.

“The storage system will be more hierarchical to increase data access bandwidth and to reduce latency. The 2.5D/3D stack memory and the NVM technology will be more mature. With the development of material science, the memristor may be put into practice to close the gap between storage and computing, and the traditional DRAM may end life. To reduce power consumption, cooling will be achieved at multiple levels, from the cabinet/board level to the chip level.

“The programming model and software stack will also evolve to suit the new hardware models. Except for the MPI+X programming model, new programming models for new computing paradigms and new computing devices will be developed, with the balance of performance, portability, and productivity in mind. Conventional HPC applications and emerging intelligent computing applications will co-exist in the future, and both hardware and software layers need to adapt to this application workload evolution (Asch et al., 2018).”

Link to Journal article: https://link.springer.com/article/10.1631/FITEE.1800494

Link to paper: https://link.springer.com/content/pdf/10.1631/FITEE.1800494.pdf

Special issue on post-exascale supercomputing: https://link.springer.com/journal/11714/19/10/page/1
 
China tests new-generation exascale supercomputer prototype
Source: Xinhua| 2019-01-17 21:07:29|Editor: ZX

TIANJIN, Jan. 17 (Xinhua) -- The prototype of China's new-generation exascale supercomputer Tianhe-3 has been tested by over 30 organizations in China, and it is expected to provide computing services to users in China and overseas, the National Supercomputer Center in Tianjin said.

The prototype was operated to meet simultaneous demands from 30 organizations including the Chinese Academy of Sciences and the China Aerodynamics Research and Development Center, said Meng Xiangfei, head of the center's applied research and development department.

It has provided computing services for over 50 apps in fields of large aircraft, spacecraft, new generation reactors, electromagnetic simulation and pharmaceuticals, he said.

The sample machine passed tests in July last year and is ready for application. It is a first-phase result of the research into an exascale supercomputer capable of a quintillion calculations per second.

The new supercomputer Tianhe-3 will be 200 times faster and have 100 times more storage capacity than the Tianhe-1 supercomputer, China's first petaflop supercomputer launched in 2010.

Zhang Ting, an engineer with the center, said the supercomputer prototype will provide high-quality computing and technical service to clients in high-performance computing, artificial intelligence and big data. It is expected to help boost computing capabilities for technological institutions.
 
China plans multibillion-dollar investment to knock US from top spot in fastest supercomputer ranking | South China Morning Post
  • China and the US dominate when it comes to the world’s fastest supercomputers, owning 45.4 per cent and 21.8 per cent of the top systems globally respectively
  • Multibillion-dollar investment aimed at upgrading three existing supercomputer labs to the latest exascale computing technology over three-year period

Li Tao

Published: 12:18pm, 18 Mar, 2019

China is planning a multibillion-dollar investment to upgrade its supercomputer infrastructure to regain leadership after the US took top spot for the fastest supercomputer in 2018, ending China’s five-year dominance, according to people familiar with the matter.

China is aiming for its newest Shuguang supercomputers to operate about 50 per cent faster than the current best US machines, which, assuming all goes to plan, should help China wrest the title back from the US in this year’s rankings of the world’s fastest machines, according to the people, who asked not to be named discussing private information.

These next-generation Chinese supercomputers will be delivered to the Computer Network Information Centre of the Chinese Academy of Sciences (CAS) in Beijing for the global Top500 rankings of the world’s fastest computers, the people said.

The ability to produce state-of-the-art supercomputers is an important metric of any nation’s technical prowess as they are widely deployed for tasks ranging from weather predictions and modelling ocean currents to energy technology and simulating nuclear explosions. Demand for supercomputing in commercial applications is also on the rise, driven by developments in artificial intelligence.

In 2015, US President Barack Obama signed an executive order to authorise the creation of the National Strategic Computing Initiative (NSCI) to accelerate the development of technologies for exascale supercomputers and to fund research into post-semiconductor-based computing.

Exascale computing refers to machines capable of at least a quintillion (or a billion billion) calculations per second.

Calls to the computer network information centre of CAS seeking confirmation of the plan were not answered and the centre did not immediately reply to an email seeking comment. Phone calls made to China’s Ministry of Science and Technology, which coordinates the country’s science and technology activities, went unanswered. The National Networking and Information Technology Research and Development (NITRD) Program that oversees the NSCI did not immediately respond to an email asking for comment on China’s plan.

China and the US dominate when it comes to the world’s fastest supercomputers, owning 45.4 per cent and 21.8 per cent of the top systems globally respectively, followed by 6.2 per cent for Japan and 4 per cent in the United Kingdom, according to the Top500 list released in November. Supercomputer rivalry between the US and China has also been reflected in trade friction between the two countries, especially since China’s rapid rise in the field.

China began to build supercomputers without US semiconductors after the Obama administration banned the sale of high-end Intel, Nvidia and AMD chips for Chinese supercomputers in 2015. The following year, China launched its Sunway TaihuLight supercomputer, powered by a Linux-based Chinese operating system and a locally developed chip called the SW26010. This machine became the fastest supercomputer on the Top500 list in June 2016.

“Huge information processing capability is the foundation of artificial intelligence, the industrial internet, 5G and other future industries,” said Cao Zhongxiong, executive director of new technology studies at Shenzhen-based think tank China Development Institute. “Although the US is a major competitor and it has tried to rein in China’s progress, the enormous internal demand for supercomputing capacity has forced China to solve the problems through its independent development.”

China’s planned investment, funded by the central government and respective local governments, will help the country lay out a bigger blueprint for the future development of Chinese supercomputers.

Specifically, funding will be used to upgrade three existing facilities to the latest exascale computing machines over the next three years.

The Qingdao National Laboratory for Marine Science and Technology, the National Supercomputing Centre of Tianjin and National Supercomputing Centre in Shenzhen are expected to complete their upgrade to exascale computing machines in 2020, 2021 and 2022, respectively, as part of efforts by China for “continuous leadership” in supercomputing, said the people, adding that the exascale computers in these centres should be able to perform calculations several times faster than Summit, the top US machine.

The US has its Exascale Computing Project with the goal of launching an exascale computing ecosystem by 2021.

The four other national supercomputer centres in China are located in Wuxi, Jiangsu province, Ji’nan, Shandong province, Changsha, Hunan province, and Guangzhou, Guangdong province.

Although the US has dominated supercomputing for many years, China has been No 1 on the global Top500 list since the launch of Tianhe-2 in 2013. Located in the National Supercomputer Centre in Guangzhou, Tianhe-2 was built by China’s National University of Defence Technology.

China was able to maintain No 1 spot until 2017. However, in June 2018 the US Summit supercomputer operated by the US Department of Energy became No 1 in the Top500 list, pushing Sunway TaihuLight at the National Supercomputing Centre in Wuxi into second place.

In the most recent semi-annual global contest in November last year, the Summit and Sierra US supercomputers led in the charts, while China’s Sunway TaihuLight and Tianhe-2 were in third and fourth positions.

Leading supercomputer manufacturers in China include the National Research Centre of Parallel Computer Engineering and Technology, Dawning Information Industry, and the National University of Defence Technology.
 
China races to regain first place in world of supercomputers

Source:Global Times Published: 2019/4/1



Chinese supercomputer Sunway TaihuLight in Wuxi, East China's Jiangsu Province. Photo: VCG

A key national supercomputing project in Jinan, East China's Shandong Province, which will house some of the world's fastest supercomputers, finished the construction of its main building on Sunday.

China is rapidly upgrading its supercomputer infrastructure to regain leadership after the US took top spot for the fastest supercomputer in 2018.

It took only 108 days to build and fit out the major areas of the main building in the Science and Technology Park project of the National Supercomputing Center, according to a report by cnr.cn on Monday.

The building will house an exascale-class supercomputer, the report said.

China is expecting to win back first place on the Top500 list of the world's fastest supercomputers this year, with the ongoing development of three exascale-class supercomputers - the Sunway, the Shuguang and the Tianhe-3.

These supercomputers will be able to carry out more than 1 billion billion calculations a second, which would beat the US Summit supercomputer. The Summit took first place in 2018.

A Sunway prototype exascale-class computer that started operating at the national supercomputing center in August 2018 has supported more than 30 applications in 12 fields, and its key devices, such as processors, network chips, and storage and management systems, are all domestically developed, according to the National Supercomputing Center in Jinan, Xinhua reported in March.

"The research and application of Sunway prototype computers have fully validated the core technology of exascale-class computing, which has paved the way for the next generation of supercomputer development," said Pan Jingshan, deputy director of the National Supercomputing Center in Jinan, according to Xinhua.

http://www.globaltimes.cn/content/1144291.shtml
 
ASC Accelerates AI Talents Training with Supercomputer Challenge
April 23, 2019

SPONSORED CONTENT BY INSPUR
The ASC19 Student Supercomputer Challenge has entered its final week. From April 21-25, teams from 20 renowned universities around the world are taking part in the final round of the competition at Dalian University of Technology. They must design and build a system combining high-performance computing (HPC) and artificial intelligence (AI) to respond flexibly to the challenges posed by traditional scientific computing and emerging AI workloads.

...
 
ASC HPC Challenge@aschpc

Congratulations to National Tsing Hua University (NTHU) on winning the overall champion at #ASC19 Finals!

A ROUND OF APPLAUSE FOR TEAM NTHU!


3:55 PM - Apr 25, 2019


FYI, National Tsing Hua University is NOT the same as Tsing Hua University.

National Tsing Hua University - Wikipedia
National Tsing Hua University (NTHU; Chinese: 國立清華大學) is a research university located in Hsinchu City, Taiwan, R.O.C.

The university was first founded in Beijing. After the Kuomintang retreated to Taiwan in 1949 following defeat by the Communist Party of China in the Chinese Civil War, NTHU was re-established in Hsinchu City in 1956.

Today, there are 7 colleges, 17 departments and 22 research institutes affiliated with the university. NTHU's College of Nuclear Science is the sole educational and research institution in Taiwan focusing on the peaceful applications of nuclear power.
 
CHINA FLESHES OUT EXASCALE DESIGN FOR TIANHE-3 SUPERCOMPUTER
May 2, 2019 Michael Feldman

One reason China has a good chance of hitting its ambitious goal to reach exascale computing in 2020 is that the government is funding three separate architectural paths to attain that milestone. This internal competition will pit the National University of Defense Technology (NUDT), the National Research Center of Parallel Computer Engineering and Technology (NRCPC), and Sugon (formerly Dawning) against one another to come up with the country’s (and perhaps the world’s) first exascale supercomputer.

As it stands today, each vendor has developed and deployed a 512-node prototype system based on what appears to be primarily pre-exascale componentry. Transforming these very modest prototypes into 100,000-node-plus exascale supercomputers is going to be quite a challenge, not only because it represents a huge leap in scale, but also because China is committed to powering these systems using relatively immature domestic processors. In a recent presentation, NUDT’s Ruibo Wang recapped the three prototypes that were deployed in 2018 and filled in some of the specifics of his organization’s plans for its exascale machine: Tianhe-3.

Let’s start with the NRCPC prototype, which, as a CPU-only machine, is probably the most conventional of the bunch. In fact, it’s the only non-accelerated architecture currently vying for exascale honors in China. Each of its nodes is equipped with two ShenWei 26010 (SW26010) processors, the same chip that is powering Sunway’s TaihuLight supercomputer. The 26010 has 260 cores and delivers about 3 teraflops of 64-bit floating point performance. Presumably, Sunway has a more powerful ShenWei chip in the works for NRCPC’s future exascale system, although it hasn’t offered any indication of what that might look like. We would expect it to deliver something on the order of 10 teraflops.


The Sugon prototype is a heterogeneous machine whose nodes are each outfitted with two Hygon x86 CPUs and two DCUs, hooked together by a 6D torus network. The CPU is a licensed clone of AMD’s first-generation EPYC processor, while the DCU is an accelerator built by Hygon. In a 2017 presentation by Depei Qian, he said the DCU in the full exascale system will deliver 15 teraflops, which certainly is not the case for the prototype system. One interesting facet of the Sugon machine is that it’s being cooled by a liquid immersion system, which might indicate that the DCU chip dissipates an enormous amount of heat.


The NUDT prototype is another heterogeneous architecture, in this case using CPUs of unknown parentage, plus the Matrix-2000+, a 128-core general-purpose DSP chip. The Matrix-2000+ is presumably the successor to the Matrix-2000, the accelerator used in the 100-petaflop Tianhe-2A supercomputer, which is currently the number four system on the TOP500 list. At peak, the Matrix-2000+ delivers two teraflops of performance and burns about 130 watts. If they were to be used to power an exaflop machine on their own, the DSP chips alone would draw about 65 megawatts.
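The 65 megawatt figure follows directly from the chip specs quoted above; a quick back-of-the-envelope sketch, using only those numbers:

```python
# Power draw if Matrix-2000+ DSPs alone had to supply an exaflop.
EXAFLOP = 1e18          # 1 exaflop = 10^18 floating-point operations per second
chip_flops = 2e12       # Matrix-2000+ peak: 2 teraflops
chip_watts = 130        # quoted power draw per chip

chips_needed = EXAFLOP / chip_flops                 # 500,000 chips
total_megawatts = chips_needed * chip_watts / 1e6   # watts -> megawatts

print(f"{chips_needed:,.0f} chips, {total_megawatts:.0f} MW")  # 500,000 chips, 65 MW
```

For comparison, most exascale programs target a power envelope of 20 to 40 megawatts for the entire system, which is why a more efficient successor chip is needed.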


However, for NUDT’s Tianhe-3 exascale system, the plan is to use the upcoming Matrix-3000 DSP and some future CPU. The DSP is expected to sport at least 96 cores and deliver more than 10 teraflops of performance, while the 64-core CPU will provide 2 teraflops. Each blade will be equipped with eight of these DSPs paired with eight CPUs, providing 96 teraflops per blade.


The entire system will be comprised of 100 cabinets, each containing 128 blades, which works out to 1.29 exaflops (peak). Everything will be hooked together with a homegrown 400Gbps network, using a 3D butterfly topology. That will provide a maximum of five hops between any two nodes. Cooling will be provided by a hybrid air/water system, which is expected to deliver a PUE of less than 1.1.
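Taking the quoted blade and cabinet counts at face value, the peak figure can be checked in a few lines; a sketch using the ">10 teraflops" floor for the DSP:

```python
# Tianhe-3 peak estimate from the per-part numbers quoted in the article.
dsp_tf, cpu_tf = 10, 2                      # Matrix-3000 DSP (floor), 64-core CPU
per_blade_tf = 8 * dsp_tf + 8 * cpu_tf      # 8 DSPs + 8 CPUs = 96 TF per blade
blades = 100 * 128                          # 100 cabinets x 128 blades each
peak_exaflops = per_blade_tf * blades / 1e6 # teraflops -> exaflops

print(f"{per_blade_tf} TF/blade, {peak_exaflops:.4f} EF peak")
# ~1.23 EF with the 10 TF floor; the quoted 1.29 EF peak is consistent with
# the DSPs actually delivering somewhat more than 10 TF each.
```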


The only big mystery remaining is the nature of Tianhe-3’s CPU. As we’ve speculated before, we’re guessing that it’s going to be some sort of Arm processor. That still makes a lot of sense, especially because China has hinted for some time that one of its exascale systems will be using this architecture. Given the processor’s two teraflop performance goal, it may even end up being an Armv8-A implementation with the Scalable Vector Extension (SVE).

If they decide to go down that route, one possible avenue for NUDT is that they could license Fujitsu’s A64FX design, the Arm SVE technology behind Japan’s Post-K exascale supercomputer. Not only do these processors deliver 2.7 teraflops of performance today, but Fujitsu has already developed a set of HPC libraries for them. As we reported just a couple of weeks ago, Fujitsu is looking to sell some of the technology it developed for Post-K, and the intellectual property behind its HPC Arm chip might be its most bankable product.

In any case, if the Tianhe-3 developers are on schedule, we’ll find out soon enough what they chose for their CPU design.


https://www.nextplatform.com/2019/05/02/china-fleshes-out-exascale-design-for-tianhe-3/
 
Sugon plans high-tech base in Fuzhou
By Fan Feifei and Hu Meidong in Fuzhou | China Daily | Updated: 2019-05-09 10:12


Visitors leave the venue of the 2nd Digital China Summit that concluded on Wednesday in Fuzhou, Fujian province. [Photo by Zhu Xingxin/China Daily]

China's leading supercomputer manufacturer Dawning Information Industry Co Ltd, also known as Sugon, hopes to further integrate its advanced computing technology with more emerging industries, such as biologicals, new energy vehicles, high-end manufacturing and environmental protection, all of which have huge demands for such cutting-edge technology.

The company plans to establish a high-end machine manufacturing base in Fuzhou, Fujian province. It signed strategic cooperation agreements with Fuzhou municipal government, Fujian Electronics & Information (Group) Co Ltd and CITIC Network Co Ltd during the Second Digital China Summit.

Under the agreement, a digitalized, information-based and intelligent manufacturing plant will be built to boost the growth of supercomputing, cloud computing, big data and artificial intelligence, and foster new growth drivers for the local economy.

Ren Jingyang, senior vice-president of Sugon, said the company is stepping up efforts to build a national-level advanced computing innovation center, considering the burgeoning demand for computing power.

Ren said the center will gather other companies engaged in software, algorithms, applications and research institutes to solve the problems in the advanced computing sector and to make breakthroughs in related core technologies.

"In addition, we will promote the integration of advanced computing with industry applications," he said, adding that such computing technology, which is developing fast and iterating rapidly, will have broad application prospects in emerging sectors.

According to Ren, the company now has a 40 percent share of the domestic market. Moreover, Sugon is dedicated to developing its server, storage, urban and industrial cloud computing, and big data businesses, and to building a cloud data service network covering hundreds of cities and sectors, Ren said.

"More enterprises and organizations should enter the cloud computing field as the country's overall computing power is insufficient," Ren said.

According to the China Academy of Information and Communications Technology, the global cloud computing market could be worth as much as $143.5 billion by 2020, and China is one of the world's fastest-growing markets.

Backed by the Chinese Academy of Sciences, Sugon is one of China's earliest and largest high-performance computing vendors. It had 57 supercomputers in the latest TOP500 rankings.

Zhang Yunquan, a researcher with the institute of computing technology at the Chinese Academy of Sciences, said the country's supercomputer sector is booming, with applications expanding from internet, big data and AI to gene sequencing and finance segments.

Sugon posted a year-on-year growth of 43.89 percent in revenue last year, with its operating income amounting to 9.06 billion yuan ($1.34 billion). It also injected 724 million yuan into the research and development fields last year, a significant increase of over 68 percent year-on-year, the company said.
 
The city of Zhengzhou gets the nation's 7th supercomputing centre

100Pflops peak, completion in H1 2020.

The 7th national supercomputing centre lands in Zhengzhou, to be completed in the first half of next year

2019-05-14 13:01:43 | Source: Guancha.cn (Observer)

According to the Henan Provincial People's Government portal on May 14, the National Supercomputing Zhengzhou Center has received approval from the Ministry of Science and Technology to begin construction, becoming the seventh national supercomputing center approved in China, and the first to be approved since the ministry issued its accreditation and administration measures.

Supercomputing capability is an important measure of a country's core scientific and technological competitiveness and comprehensive national strength. National supercomputing centers, with supercomputers as their core instruments, are strategic national information infrastructure and strategic platforms for scientific and technological innovation. To date, six national supercomputing centers have been established nationwide, in Tianjin, Jinan, Changsha, Shenzhen, Guangzhou and Wuxi.

According to the preparatory plan, the National Supercomputing Zhengzhou Center will be built and operated by Zhengzhou University and is scheduled for completion in the first half of 2020. It will be equipped with a technologically advanced, independently controllable new-generation supercomputer system with a peak computing capability of 100 Pflops and a storage capacity of 100P, aiming to rank among the world's top 10 in computing power at the time of completion.

(Guancha.cn note: Flops is short for floating point operations per second, a standard measure of a computer's computing capability. One Pflops (petaflops) equals one quadrillion floating-point operations per second. 100P = 100,000T, and 1T = 1,024GB. For example, if one user needs 10GB of storage space, 10 million users would need 10 million times 10GB, i.e. 100,000T.)
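The unit arithmetic in the note above can be checked in a few lines; a minimal sketch, using decimal prefixes for flops and showing both the decimal convention and the note's binary 1T = 1,024GB convention for storage:

```python
# Flops use decimal prefixes: 1 Pflops = 10^15 floating-point operations per second.
PFLOPS = 10**15
assert 100 * PFLOPS == 10**17   # the Zhengzhou machine's 100 Pflops target

# Storage example from the note: 10 million users at 10GB each.
users, gb_per_user = 10_000_000, 10
total_gb = users * gb_per_user         # 100,000,000 GB
tb_decimal = total_gb // 1000          # 100,000 TB with 1T = 1,000GB
tb_binary = total_gb / 1024            # ~97,656 TB with the note's 1T = 1,024GB

print(tb_decimal, round(tb_binary))    # 100000 97656
```

The note's round figure of 100,000T implicitly uses the decimal convention, even though it defines 1T as 1,024GB.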

The National Supercomputing Zhengzhou Center will develop a series of key applications in fields such as artificial intelligence, equipment manufacturing, precision medicine and biological breeding, striving to become strategic infrastructure and a major research facility of national influence, a cradle for training high-end information talent, and a strong source of computing power for Henan's industrial transformation and development.

"In recent years, China has developed rapidly in supercomputing and now occupies an important position in the international supercomputing field," Chen Guoliang, an academician of the Chinese Academy of Sciences, said on May 9 at the launch ceremony of the Science and Technology Park of the National Supercomputing Center in Jinan. Supercomputers, he said, are a fundamental national strategic resource used to solve computing problems in cutting-edge technology R&D; they are heavyweight national instruments for high-performance computing and important information infrastructure for promoting scientific and technological innovation.
 
TOP500 Becomes a Petaflop Club for Supercomputers | TOP500 Supercomputer Sites
TOP500 News Team | June 17, 2019 03:00 CEST

BERKELEY, Calif.; FRANKFURT, Germany; and KNOXVILLE, Tenn.— The 53rd edition of the TOP500 marks a milestone in the 26-year history of the list. For the first time, all 500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.

Top 10 rundown

The top of the list remains largely unchanged, with only two new entries in the top 10, one of which was an existing system that was upgraded with additional capacity.

Two IBM-built supercomputers, Summit and Sierra, installed at the Department of Energy’s Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, respectively, retain the first two positions on the list. Both derive their computational power from Power 9 CPUs and NVIDIA V100 GPUs. The Summit system slightly improved its HPL result from six months ago, delivering a record 148.6 petaflops, while the number two Sierra system remains unchanged at 94.6 petaflops.

The Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, holds the number three position with 93.0 petaflops. It’s powered by more than 10 million SW26010 processor cores.

At number four is the Tianhe-2A (Milky Way-2A) supercomputer, developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou. It used a combination of Intel Xeon and Matrix-2000 processors to achieve an HPL result of 61.4 petaflops.

Frontera, the only new supercomputer in the top 10, attained its number five ranking by delivering 23.5 petaflops on HPL. The Dell C6420 system, powered by Intel Xeon Platinum 8280 processors, is installed at the Texas Advanced Computing Center of the University of Texas.

At number six is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. It’s equipped with Intel Xeon CPUs and NVIDIA P100 GPUs. Piz Daint remains the most powerful system in Europe.

Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories improves its performance to 20.2 petaflops, which earns it the number seven position. It’s powered by Intel Xeon and Xeon Phi processors.

The AI Bridging Cloud Infrastructure (ABCI) is installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) and is listed at number eight, delivering 19.9 petaflops. The Fujitsu-built system is equipped with Intel Xeon Gold processors and NVIDIA Tesla V100 GPUs.

SuperMUC-NG is in the number nine position with 19.5 petaflops. It’s installed at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching, near Munich. The Lenovo-built machine is equipped with Intel Xeon Platinum processors, as well as the company’s Omni-Path interconnect.

The upgraded Lassen supercomputer captures the number 10 spot, with an upgrade that boosted its original 15.4 petaflops result on HPL to 18.2 petaflops. Installed at Lawrence Livermore National Laboratory, Lassen is the unclassified counterpart to the classified Sierra system and shares the same IBM Power9/NVIDIA V100 GPU architecture.

China leads by sheer numbers, US by performance

China claims the most TOP500 systems, with 219, followed by the United States, with 116. Japan is in third place with 29 systems, followed by France, with 19, the United Kingdom, with 18, and Germany with 14.

Despite the US being a distant second in total number of systems, it claims a large number of systems near the top of the list. That enables it to maintain its lead in overall HPL capacity, with 38.4 percent of the aggregate list performance. (Summit and Sierra, alone, represent 15.6 percent of the list’s HPL flops.) China, with its comparatively smaller systems, takes second place, with 29.9 percent of the performance total.

Chinese vendors lead the way

China’s dominance in system numbers is also reflected in vendor shares. Lenovo claims the greatest number of systems on the list, with 173, followed by Inspur with 71, and Sugon, with 63. All three improved on their system share from six months ago.

HPE, with 40 systems, and Cray, with 39 systems, occupy fourth and fifth place, respectively. Bull, as the only European-based system vendor on the list, claims 21 systems, followed by Fujitsu, with 13, and IBM, with 12. However, since IBM is the vendor of Summit, Sierra, and a number of other large systems, the company’s aggregate TOP500 performance is 207 petaflops, a number only exceeded by Lenovo, with 14 times as many systems.

Intel and NVIDIA set the pace in silicon

From a processor perspective, Intel continues to dominate the TOP500 list, with the company’s chips appearing in 95.6 percent of all systems. IBM Power CPUs are in seven systems, followed by AMD processors, which are present in three systems. A single supercomputer on the list, Astra, is powered by Arm processors.

A total of 133 systems on the TOP500 list employ accelerator or coprocessor technology, down slightly from 138 six months ago. Of these, 125 systems use NVIDIA GPUs. About half of those (62) use the latest Volta-generation processors, with the remainder (60) based on Pascal and Kepler technology.

Interconnects – a mixed bag

From an interconnect perspective, Ethernet continues to dominate the list overall, laying claim to 54.2 percent of TOP500 systems. InfiniBand is the second most popular interconnect, appearing in 25 percent of systems, followed by custom and proprietary interconnects at 10.8 percent, and Omni-Path at 9.8 percent.

However, when looking at the 50 fastest supercomputers on the list, those numbers change dramatically, with custom interconnects being used in 40 percent of the top systems, followed by InfiniBand at 38 percent, Omni-Path at 10 percent, and Ethernet at 2 percent (a single system).

Green500 results

Turning to the related Green500 list, energy efficiency hasn’t moved much since the previous list was released in November 2018. The Shoubu system B maintains its number one position with an efficiency of 17.6 gigaflops/watt. Nvidia’s DGX SaturnV Volta system holds on to second place with 15.1 gigaflops/watt, followed by Summit at 14.7 gigaflops/watt and the AI Bridging Cloud Infrastructure (ABCI) at 14.4 gigaflops/watt. The MareNostrum P9 CTE cluster improved its result from six months ago to capture the fifth position with 14.1 gigaflops/watt. Overall, the average energy efficiency of systems on the Green500 list has improved from 3.0 gigaflops/watt, six months ago, to 3.2 gigaflops/watt today.
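Since Green500 efficiency is HPL performance divided by power, the figures above imply a rough power draw for each system; a minimal sketch (approximate only, since Green500 power may be measured on a reduced, power-optimized HPL run rather than the record run):

```python
# Rough power draw implied by the HPL and Green500 numbers quoted above.
def implied_megawatts(hpl_petaflops: float, gflops_per_watt: float) -> float:
    """Divide HPL performance (petaflops) by efficiency (gigaflops/watt) -> MW."""
    watts = hpl_petaflops * 1e15 / (gflops_per_watt * 1e9)
    return watts / 1e6

# Summit: 148.6 petaflops on HPL at 14.7 gigaflops/watt.
summit_mw = implied_megawatts(148.6, 14.7)
print(f"Summit: ~{summit_mw:.1f} MW")   # roughly 10 MW
```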

HPCG results

The benchmark results for High-Performance Conjugate Gradient (HPCG) Benchmark were largely unchanged from last November, with the top five entries of Summit, Sierra, the K computer, Trinity, and the AI Bridging Cloud Infrastructure maintaining their previous rankings from November 2018. Summit and Sierra remain the only two systems to exceed a petaflop on the HPCG benchmark, delivering 2.9 petaflops and 1.8 petaflops, respectively. The average HPCG result on the current list is 213.3 teraflops, a marginal increase from 211.2 six months ago.
 
US block spurs tech independence drive by Chinese companies
By Huang Ge | Source: Global Times | Published: 2019/6/23 21:48:39


A worker checks the TH-2 High Performance Computer System in a lab in Guangzhou, capital of South China's Guangdong Province. File photo: VCG

The latest US blacklisting of Chinese supercomputing companies will not weaken domestic technology companies' resolve to pursue innovation and research and development (R&D), as they strive to make up for shortcomings in certain segments and pursue further growth despite "irrational assaults" by Washington, industry insiders said.

The US Commerce Department said on Friday it was adding five Chinese companies to an "entity list," citing so-called national security concerns.

The entities are some of China's leading supercomputer makers - Sugon, the Wuxi Jiangnan Institute of Computing Technology, semiconductor company Higon, Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology, according to a filing released by the US department.

The list would ban the Chinese companies from buying US components without US government approval. The relevant decision was scheduled to take effect on Monday, media reports said.

The move represents another unilateral sanction on Chinese companies after the US put Huawei Technologies Co and 70 of its affiliates on an entity list in May.

It also comes before a scheduled meeting between President Xi Jinping and US President Donald Trump later this week at the G20 summit in Japan to discuss bilateral trade differences.

Coming at this time, the US crackdown is likely aimed at increasing pressure on China to gain more bargaining chips for the upcoming trade and economic talks, Chinese experts said.

They also noted that like its ongoing curbs on Huawei, the US block on the Chinese supercomputing industry is intended to cut the supply chain of Chinese supercomputer makers to further weaken the nation's technological and economic development.

"The US-initiated trade war and the country's continuous crackdown on China's technology sector showed America's obvious ill intention - to keep its hegemony in the world market," said Zhuang Rui, deputy dean of the University of International Business and Economics' Institute of International Economics in Beijing.

The rise of China's advanced technology sector in recent years, including segments such as 5G and supercomputing, has made the US feel rising pressure, experts said.

China continues to claim the largest number of supercomputers on a global Top 500 list, with 219 systems, or 43.8 percent of the total, followed by the US with 116, and Japan third with 29, according to an industry ranking released on June 17 in Frankfurt, Germany.

The US has long been curbing the growth of China's supercomputing sector by restricting sales of products to China, which drove Chinese companies to develop their own technology and seek growth on a large scale, Fu Liang, a Beijing-based independent industry analyst, told the Global Times on Sunday.

After the US banned Intel from exporting Xeon Phi chips to China's supercomputing industry in April 2015, China's self-developed supercomputers surpassed US systems in computing speed in 2016.

The latest Chinese additions to the US blacklist are likely to be affected in the short term as their supply chain will be interrupted, Fu said.

But it will also push them to find alternatives and pursue independent R&D to overcome shortcomings in the sector, he said.

According to a statement Sugon filed with the Shanghai Stock Exchange late on Sunday, the company is verifying relevant content, comprehensively evaluating the impact of the ban and making preparations.

Independent growth

Competition between the Chinese and US advanced manufacturing and technology sectors has intensified in recent years, but the US has taken the wrong path, one intended to "maintain its competitiveness by attacking its competitors," Zhuang told the Global Times on Sunday.

"To confront the continuous US pressure, Chinese technology companies have realized that only by focusing on independent innovation and R&D, can they gain autonomy in the industry," she said.

Now the US is targeting Chinese industries that need support from US suppliers, and such restrictions will likely extend to more sectors in which China has performed well globally, such as biotechnology, Fu said.

But because China has a large market and is building up resources in areas such as capital and talent in industries like high-speed internet, he said, "the country will be able to find other solutions independent of the US."

"Education matters in the process," Zhuang said, noting that China is expected to put more focus on nurturing technology specialists.

Experts also said that US companies that threaten the interests of Chinese companies could possibly be added to China's "unreliable entity list."

China announced on May 31 that it would establish an "Unreliable Entity List" of foreign companies and individuals that block or cut supplies to Chinese companies for non-commercial purposes, and seriously damage the legitimate rights and interests of Chinese enterprises.

Shares of US semiconductor companies tumbled on Friday after the Commerce Department announced its latest attack targets. Advanced Micro Devices fell 3.03 percent, Xilinx dropped 2.28 percent and Nvidia was down 1.52 percent.
 
China ‘has decided not to fan the flames on supercomputing rivalry’ amid US tensions | South China Morning Post
  • According to the Top500 list published last week, the US has retained its top position as the producer of the fastest supercomputers in the world
Li Tao, Bien Perez
Published: 12:00am, 26 Jun, 2019

Employees are reflected on glass as they work in front of supercomputers at The National Supercomputer Center in Jinan, Shandong province, China, 17 October 2018. Photo: EPA-EFE

China chose not to confront the US directly in the field of supercomputing before the Trump administration’s recent decision to add five Chinese high-performance computing companies to its trade blacklist, according to people familiar with the matter.

Chinese decision makers decided to withhold the country’s newest Shuguang supercomputers from the latest supercomputing contest, even though they operate more than 50 per cent faster than the best current US machines, as China does not want to fan the flames of existing trade tensions, said the sources, who declined to be named as the information is private.

According to the Top500 list published last week, the US has retained its top position as the producer of the fastest supercomputers in the world. China, which has not introduced any new machines in recent months, is in second place. The Top500 list is released twice a year, once in June and again in November.

The newest Shuguang supercomputers, currently located at the Computer Network Information Center of the Chinese Academy of Sciences (CAS) in Beijing, are capable of performing more than 200 petaflops. A petaflop refers to one quadrillion (or a million billion) calculations per second.

The Shuguang supercomputers' abilities far exceed those of the US leaders on the list, Summit and Sierra, two IBM-built supercomputers that delivered a record 148.6 petaflops and 94.6 petaflops respectively in the June contest, said the people.

“China is finding itself with no choice but to create its own alternatives to US technology,” said Paul Haswell, a partner who advises technology companies at international law firm Pinsent Masons.

“This inevitably takes time, and will have a corresponding impact on China’s development in all fields requiring state-of-the-art supercomputing tech.”

However, China’s strategic concession to play a low-key game on supercomputing rivalry did not stop the US Commerce Department last Friday from adding five Chinese top supercomputing developers to its Entity List, which effectively bars them from purchasing American technology.


Supercomputers have become an emblem of technological might, and apart from bragging rights, they can be applied to sensitive areas such as nuclear weapons development, encryption and missile defence, among others.

They are also used for weather prediction, modelling ocean currents, in energy technology and for simulating nuclear explosions. Demand for supercomputing in commercial applications is also on the rise, driven by developments in artificial intelligence.

The US Commerce Department said it was adding Sugon, the Wuxi Jiangnan Institute of Computing Technology, Higon, Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology – along with numerous aliases of the five entities – to the list over concerns about military applications of the supercomputers they are developing.

The move comes ahead of US President Donald Trump and Chinese President Xi Jinping’s meeting during the G20 summit in Japan this week, and as China and the US vie to produce the first exascale computer, a next-generation machine capable of one quintillion – or a billion billion – calculations a second.

In 2015, US President Barack Obama signed an executive order to authorise the creation of the National Strategic Computing Initiative (NSCI) to accelerate the development of technologies for exascale supercomputers and to fund research into post-semiconductor-based computing.


China began to build supercomputers without US semiconductors after the Obama administration banned the sale of high-end Intel, Nvidia and AMD chips for Chinese supercomputers in 2015.

The US decision to block the five Chinese supercomputing companies will not have a "decisive impact" on domestic players, according to a report on Tuesday by the Science and Technology Daily, the official newspaper of China's Ministry of Science and Technology.

That’s because they are already capable of producing key components, including CPUs for the supercomputers, unassisted, the report said.

Sugon, officially known as Dawning Information Industry Co, is a leading Chinese company in the field of high-performance computing (HPC), servers, storage, cloud-computing and big data, and is also a key developer of the Shuguang supercomputers.

Backed by the Chinese Academy of Sciences, Sugon was also the first company to bring China into the global top 3 for supercomputing, and held pole position in China’s Top 100 rankings for HPC for eight consecutive years, from 2009 to 2016, according to its website.

China is planning multibillion-dollar investments to upgrade its supercomputer infrastructure to regain leadership after the US took top spot for the fastest supercomputer in 2018, ending China’s five-year dominance, the South China Morning Post reported in March, citing people familiar with the matter.

The US has its Exascale Computing Project, which has the goal of launching an exascale computing ecosystem by 2021.

But in China, the Qingdao National Laboratory for Marine Science and Technology, the National Supercomputing Centre of Tianjin and the National Supercomputing Centre in Shenzhen are expected to complete their upgrade to exascale computing machines in 2020, 2021 and 2022, respectively, as part of China’s drive for “continuous leadership” in the field, the Post also reported in March.

Additional reporting by Bien Perez
 
