Cloud Giants Will Drive Supercomputing Demand

Summary Bullets:

C. Drake

• Major cloud service providers, including Amazon Web Services, Google Cloud, and Microsoft Azure, are intent on capturing a larger share of the lucrative market for supercomputing.

• Competition between IT infrastructure vendors and cloud service providers will make supercomputing more accessible, while also helping to drive use case development.

Supercomputers have traditionally been major fixed infrastructure investments designed for specific tasks, built on hardware from IT infrastructure providers such as Lenovo, Dell Technologies, and Hewlett Packard Enterprise (HPE). However, the world’s largest cloud service providers, including Amazon Web Services, Google Cloud, and Microsoft Azure, are increasingly intent on capturing a larger share of this lucrative market.

Microsoft Azure recently announced the general availability of its most powerful public cloud supercomputer service, which is powered by NVIDIA A100 Tensor Core GPUs. Microsoft joins AWS, Google, and Oracle, which have all recently launched similar supercomputing-as-a-service offerings based on NVIDIA’s A100 platform. Microsoft reports that pre-release performance testing yielded processing speeds of 16.59 petaflops (a petaflop is one thousand million million, or 10^15, floating-point operations per second). This result would position Microsoft’s new cloud offering within the top 20 of the November 2020 TOP500 list of the world’s fastest supercomputers.
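To put that headline figure in perspective, the short sketch below converts the reported result into raw operations per second and gives a very rough sense of how many A100-class GPUs it implies. The per-GPU peak figure is an assumption taken from NVIDIA’s published A100 specifications, not from Microsoft’s announcement, and real systems deliver well below theoretical peak.

```python
# Back-of-the-envelope check of the reported 16.59-petaflop result (illustrative only).
# Assumption: roughly 9.7 teraflops of peak FP64 throughput per NVIDIA A100 GPU,
# taken from NVIDIA's public spec sheet rather than from the Azure announcement.

PETA = 1e15   # 1 petaflop = 10**15 floating-point operations per second
TERA = 1e12   # 1 teraflop = 10**12 floating-point operations per second

reported_pflops = 16.59
reported_flops = reported_pflops * PETA          # about 1.66e16 operations per second

a100_peak_fp64_tflops = 9.7                      # assumed per-GPU peak (FP64)
gpus_at_theoretical_peak = reported_flops / (a100_peak_fp64_tflops * TERA)

print(f"{reported_pflops} petaflops = {reported_flops:.2e} FLOPS")
print(f"Equivalent to roughly {gpus_at_theoretical_peak:,.0f} A100 GPUs at theoretical peak")
# Because sustained benchmark performance falls well short of peak, the actual
# GPU count behind such a result would be noticeably higher than this estimate.
```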

Supercomputing is similar to high performance computing (HPC) but delivers the most powerful processing capabilities available. Unlike an HPC server cluster, which can be used to support multiple, diverse applications, a supercomputer is typically a single system customized for a specific task, such as medical research. During the COVID-19 pandemic, powerful supercomputer clusters played an important role in helping governments and other organisations with pandemic-related initiatives, including running models to help scientists understand the virus and how it spreads, supporting advances in therapeutics, and even aiding vaccine development.

Recent months have seen the announcement of several new supercomputer initiatives, including a EUR20 million, Lenovo-supplied supercomputer for SURF, the ICT cooperative for education and research in the Netherlands; a new, HPE-supplied national supercomputer for Singapore; and a new 10-petaflop, Fujitsu-supplied supercomputer for Portugal and the European Union.

Although demand for supercomputing appears to be on the rise, it is worth asking how much longer-term growth potential the market has. This is a difficult question to answer, as much depends on how future supercomputing use cases evolve. Apart from medical and scientific research, supercomputing applications include climate change modelling, oil and gas exploration, and computer-aided engineering. These are use cases that involve large volumes of data, machine learning algorithms, and complex modelling and simulation, which is why the future of supercomputing is closely linked to the development of AI, big data analytics, and IoT.

In future, it seems likely that a growing proportion of both HPC and supercomputing capacity will be consumed as a service rather than as a fixed IT investment. The IT infrastructure providers recognise the disruptive power of the cloud service providers within this market and are responding with their own as-a-service options for HPC and supercomputing, offered through flexible consumption businesses such as HPE GreenLake, Dell Flex on Demand, and Lenovo TruScale Infrastructure Services. In December 2020, HPE announced the availability of HPC solutions via its GreenLake flexible consumption business, in a move designed to encourage mainstream enterprise adoption of HPC. HPE plans to extend as-a-service delivery to the rest of its HPC portfolio, including its Cray-based supercomputing platforms.

In addition to increasing competition among infrastructure and service providers, the expanded focus on flexible consumption models has the potential to make HPC and supercomputing more accessible to organisations for which these solutions have traditionally been prohibitively expensive. This could, in turn, help to open up new use cases and drive their development.
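As a purely hypothetical illustration of why consumption-based pricing changes the calculus for smaller organisations, the sketch below compares a fixed infrastructure purchase with an equivalent as-a-service arrangement. Every number in it is invented for illustration and does not come from any vendor’s price list.

```python
# Hypothetical comparison of fixed versus consumption-based HPC spend.
# All figures are invented for illustration; none are actual vendor prices.

fixed_capex = 5_000_000        # up-front cost of an owned HPC cluster (hypothetical)
fixed_annual_opex = 500_000    # power, cooling, and staff per year (hypothetical)
years = 4

on_demand_rate_per_hour = 400  # hourly rate for an equivalent as-a-service cluster (hypothetical)
hours_used_per_year = 2_000    # a bursty workload that needs capacity only part-time

fixed_total = fixed_capex + fixed_annual_opex * years
on_demand_total = on_demand_rate_per_hour * hours_used_per_year * years

print(f"Owned cluster over {years} years:  ${fixed_total:,}")
print(f"As-a-service over {years} years:   ${on_demand_total:,}")
# At part-time utilisation the consumption model comes out far cheaper, which is
# what makes HPC reachable for organisations that cannot justify the fixed outlay;
# at sustained, near-constant utilisation an owned cluster typically pulls ahead.
```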
