Jon Thor Kristinsson
on 12 April 2022

What is High-performance computing (HPC)? [part 1]


In this blog, we will introduce the concept of high-performance computing (HPC) and HPC clusters, along with a few categories of practical workloads that are important in the HPC space.

This blog is part of a series of posts introducing you to the world of HPC.

What is HPC?

High-performance computing is the practice of combining computational resources so they can be used as a single resource. The combined resources are often referred to as a supercomputer or a compute cluster. This is done to deliver the computational intensity needed to process complex workloads and applications at high speed and in parallel. Such workloads require computing power and performance that is often beyond the capabilities of a typical desktop computer or workstation.

What are HPC clusters?

HPC clusters are usually made up of a group of servers, generally referred to as compute nodes. Some clusters are as small as a few nodes, while others have hundreds or even thousands of compute nodes, all connected over a network so they can work together on these advanced computational workloads. These networks often use high-speed interconnects or other low-latency networking solutions to reduce communication latency. The workloads frequently have significant storage requirements, either in terms of data size or throughput, so it is common to deploy both a high-performance storage solution, often referred to as scratch storage and used for in-flight computation data, and a general-purpose solution for user data, applications, and archival. To use all of these resources in parallel and as effectively as possible, workloads often rely on the Message Passing Interface (MPI) to coordinate work and exchange data between processes.
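As a rough illustration of what MPI-style parallelism looks like, here is a minimal sketch using the mpi4py Python bindings. The library choice, array sizes and the simple sum reduction are illustrative only and not taken from this post: each process works on its own slice of the data, and the partial results are combined over the network.

```python
# Minimal sketch of parallel work with MPI, using the mpi4py bindings.
# Assumes an MPI implementation (e.g. Open MPI) plus mpi4py and numpy are installed.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes across the nodes

# Each rank works on its own slice of the problem...
local_values = np.arange(rank * 1000, (rank + 1) * 1000, dtype="f8")
local_sum = local_values.sum()

# ...and the partial results are combined across the network.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks computed a total of {total}")
```

A script like this would typically be launched across nodes with something like `mpirun -n 4 python sum.py`, with the batch scheduler deciding which nodes the processes land on.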

The workloads generally run in batches and are managed by a batch scheduler. The scheduler is a vital component of an HPC cluster: it keeps track of the cluster's available resources, queues workloads when there are not enough computational resources to run them, and assigns workloads to resources as they become available.
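The toy sketch below illustrates that queuing behaviour in Python. It is only a conceptual model, not how production schedulers such as Slurm or PBS are implemented; those add priorities, fair-share policies, accounting and much more. It simply tracks free cores, queues jobs that do not fit, and dispatches them as resources are released.

```python
# Toy sketch of a batch scheduler's queuing behaviour (illustrative only).
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores_needed: int

class ToyScheduler:
    def __init__(self, total_cores: int):
        self.free_cores = total_cores
        self.queue = deque()  # pending jobs, in submission order

    def submit(self, job: Job) -> None:
        self.queue.append(job)   # workloads are queued on submission
        self.dispatch()

    def finish(self, job: Job) -> None:
        self.free_cores += job.cores_needed   # resources are returned
        self.dispatch()

    def dispatch(self) -> None:
        # Assign queued jobs for as long as enough resources are available.
        while self.queue and self.queue[0].cores_needed <= self.free_cores:
            job = self.queue.popleft()
            self.free_cores -= job.cores_needed
            print(f"running {job.name} on {job.cores_needed} cores")

cfd = Job("cfd-run", cores_needed=48)
genome = Job("genome-batch", cores_needed=32)
sched = ToyScheduler(total_cores=64)
sched.submit(cfd)       # starts immediately: 48 of 64 cores in use
sched.submit(genome)    # queued: only 16 cores are free
sched.finish(cfd)       # cores are returned, so the queued job is dispatched
```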

HPC clusters and solutions can be deployed anywhere: on premises, in the cloud, at the edge, or as a hybrid solution combining on-premises and off-premises resources.

What are the main use cases of HPC?

HPC is used to solve some of the most advanced and toughest computational problems we have today. These problems exist in all kinds of settings, such as science, engineering, and business. Thanks to HPC, computational fluid dynamics (CFD) workloads can simulate the flow of fluids to solve problems in numerous fields such as aerodynamics, industrial system design, weather simulation, and many more. High-performance data analytics (HPDA) is the combination of HPC and data analytics, where parallel processing and powerful analytics are used to analyse huge data sets at incredible speed. This is used for anything from real-time data analysis and high-frequency stock trading to some of the highly complex analytics problems found in scientific research. Large computational clusters are also used to render whole movies or create visual effects for certain scenes. Genome processing and sequencing is another field that needs HPC, due to the huge data sets that are analysed and interpreted to identify hereditary conditions or other medical anomalies. HPC can even be used to work out large-scale logistics and supply problems such as those found in retail. Whatever the need, HPC has the ability to solve it.

Summary

This blog has introduced you to high-performance computing, the components that make up HPC clusters, and how they are used to solve the toughest computational problems we have today. We have also covered some of the many ways high-performance computing is used in industry, with a brief overview of computational fluid dynamics and high-performance data analytics.

If you are interested in more information, take a look at how Scania is Mastering multi cloud for HPC systems with Juju, dive into our blog on Data Centre Automation for HPC, or explore some of our other HPC content.

In the next blog, we will highlight some of the many ways an HPC cluster can be deployed.
