
High Performance Computing (HPC)

What is HPC?

High-Performance Computing (HPC) serves researchers whose work involves large data sets that must be analyzed quickly and efficiently. Typical work computers simply cannot provide the massive computational power needed to solve large-scale research problems. The high computational power offered by HPC at Illinois State University facilitates large-scale research for ISU faculty members.

Simply put, HPC is the ability to process data and perform complex calculations at high speeds. To put it into perspective, a laptop or desktop with a 3 GHz processor can perform around 3 billion calculations per second. While that is much faster than any human can achieve, it pales in comparison to HPC solutions, which aggregate many processors to perform trillions of calculations per second.
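The gap between a single computer and a cluster can be illustrated with some back-of-envelope arithmetic. The node and core counts below are assumptions chosen only for the sake of the calculation, not ISU's actual specifications:

```python
# Back-of-envelope comparison: one 3 GHz core vs. a small cluster.
# All figures here are illustrative assumptions, not measured specs.

laptop_ops_per_sec = 3e9            # one 3 GHz core: ~3 billion ops/sec

nodes = 24                          # hypothetical compute nodes
cores_per_node = 24                 # hypothetical cores per node
cluster_ops_per_sec = nodes * cores_per_node * laptop_ops_per_sec

speedup = cluster_ops_per_sec / laptop_ops_per_sec
print(f"Cluster is roughly {speedup:.0f}x one core")  # roughly 576x
```

Even this modest hypothetical cluster is hundreds of times faster than a single core, and real HPC installations scale far beyond that.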

Mission

The main purpose of the HPC projects is to find the most effective and efficient way for faculty to process large amounts of data, and to find solutions and methods that organize and summarize that data so faculty can easily access and understand it.

Intercollegiate Biomathematics Alliance

The Intercollegiate Biomathematics Alliance (IBA) initiated a preliminary scientific computing facility for a limited group of researchers using a high-performance computer (HPC), the “CLOUD for Layering, Organizing, and Utilizing Data” (IBA-CLOUD).

Each node sports a 32-core/64-thread AMD EPYC™ 7551P processor and a RAID array with full-disk encryption for maximum performance and reliability. IBA currently offers members access for their research-level computing needs.

More information about resources for members can be found here: Resources for Members – Center for Collaborative Studies in Mathematical Biology (illinoisstate.edu), and information about the PEER program created by the IBA can be found here: PEER Program – Center for Collaborative Studies in Mathematical Biology (illinoisstate.edu).

How Does HPC work?

[Image: how an HPC cluster is formed. Multicore processors working together form a node; groups of nodes make up an HPC cluster.]

To build a high-performance computing architecture, compute servers are networked together into a “cluster”. Software programs and algorithms are run simultaneously on the servers in the cluster. The cluster is networked to the data storage to capture the output. Together, these components operate seamlessly to complete a diverse set of tasks.

To operate at maximum performance, each component must keep pace with the others. For example, the storage component must be able to feed and ingest data to and from the compute servers as quickly as it is processed. Likewise, the networking components must be able to support the high-speed transportation of data between compute servers and the data storage. If one component cannot keep up with the rest, the performance of the entire HPC infrastructure suffers.

HPC – Calculation of Data

HPC calculates large amounts of data using nodes. These nodes contain many CPUs/GPUs, ample RAM, and very fast drives to crunch tremendous amounts of data. Because the HPC is a cluster of computers, instead of a faculty member using only one CPU with a few cores, the HPC nodes provide access to thousands of cores to focus on the problem at hand.
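The core idea of splitting one problem across many processors can be sketched in a few lines of standard-library Python. This is a generic illustration using the multiprocessing module, not ISU-specific tooling; on an HPC node the worker count would be far larger:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the squares of the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    workers = 4  # on an HPC node this could be dozens of cores
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    # Each worker computes its chunk in parallel; the results are combined.
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    # Same answer as the serial computation, but split across cores.
    assert total == sum(i * i for i in range(n))
    print(total)
```

A cluster applies the same divide-and-combine pattern at a much larger scale, spreading chunks across many nodes rather than across the cores of one machine.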

How HPC is Related to Infrastructure, AR/VR, & Cybersecurity

HPC is related to infrastructure because (hopefully) IoT, sensors, and edge computing will allow others to use the HPC to do heavy computational work with all that data. For instance, it would be interesting if HPC could simulate exploding stars and that data could be used to create an AR/VR experience of what it would be like to witness those explosions. Wouldn't it be pretty cool to observe stars colliding, or to experience a star transforming into a supernova? For AR/VR, HPC would make teaching and learning about physics a whole lot different, since it would allow the teacher to work through equations and then experience those equations live.

Thus, the role of HPC for AR/VR is to make the computed equation live (you can see it happen), though of course that depends on AR/VR headsets being available. HPC also plays an important role in cybersecurity, as it can process large amounts of data. For cybersecurity, this data could describe what was stolen or compromised, which could lead to extra preventive measures.

Role of HPC in Infrastructure, AR/VR, & Cybersecurity

HPC efforts will be interdisciplinary, which (hopefully) allows for cross-functional work among the four subcommittees, so that all the subcommittees can work together and support each other when and where appropriate.

HPC For Humanities

The HPC, DV (Data Visualization), SC (Statistical Consulting), and DA (Data Analysis) subcommittees have all sorts of opportunities to work with faculty in the Humanities, as their efforts go beyond the HPC hardware. It's about Data Visualization: more specifically, how to use data to tell a story, and how to choose the correct procedure (e.g., linear regression) and the correct tool (e.g., SAS or SPSS) to get the job done.

For example, Jamie Johnston, whose research involves counting lunar craters, will be using HPC to conduct image analysis with GPU nodes.

Aspects of HPC Helping Non-Business Users

That is one of the objectives of the HPC, DV (Data Visualization), SC (Statistical Consulting), and DA (Data Analysis) subcommittees: to work with faculty who may not be considering large data sets. Instead of using GPUs to simulate exploding stars, how could we use the CPUs for qualitative analysis or some type of video analysis/processing? That is what the subcommittees are meant to address.

HPC Software

For HPC, we have some software that runs well on the Linux nodes. One such package is Ansys/Fluent, which is simulation software. Other software includes pyCuda, R, OpenMP, and C++. A request for MATLAB is also being worked on, in addition to SPSS.

We are also open to learning about other ways to run Windows applications on larger machines, which could even mean one large Windows server instead of the Linux HPC cluster. For instance, some software applications take a long time to run on a faculty member's computer; we want to find a way to off-load that work to a larger compute resource (e.g., a server), leaving the faculty member free to use their computer for other work. One of those applications is NVivo, and we are working on how best to provide access to that software via more resources (e.g., a server).

For the GPUs, we use CUDA. CUDA is a platform and programming model for CUDA-enabled GPUs. The platform exposes GPUs for general-purpose computing, and CUDA provides C/C++ language extensions and APIs for programming and managing GPUs.
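As an illustrative sketch of what this looks like through pyCuda (one of the packages listed above), the snippet below doubles an array on the GPU. It requires an NVIDIA GPU, the CUDA toolkit, and the pycuda package, so it is a sketch of the programming model rather than something to run on a login node; the array contents and kernel name are arbitrary examples:

```
# Illustrative pyCuda sketch: double every element of an array on the GPU.
# Requires an NVIDIA GPU and the pycuda package.
import numpy as np
import pycuda.autoinit            # initializes a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# The CUDA C/C++ language extension mentioned above:
# a __global__ kernel runs once per GPU thread.
mod = SourceModule("""
__global__ void double_it(float *a, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        a[idx] *= 2.0f;
}
""")
double_it = mod.get_function("double_it")

a = np.arange(8, dtype=np.float32)
a_gpu = gpuarray.to_gpu(a)                 # copy data to GPU memory
double_it(a_gpu.gpudata, np.int32(a.size),
          block=(8, 1, 1), grid=(1, 1))    # launch 8 threads
print(a_gpu.get())                         # copy the result back
```

The key idea is that the same kernel body executes across many GPU threads at once, which is what makes GPUs effective for data-parallel workloads like image analysis.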

ISU’s HPC and Its Parts

ISU’s HPC is made up of clusters (which are made up of nodes). A node in an HPC cluster is basically a server, and a cluster can consist of hundreds of such servers working together to deliver high computational power. The high-performance computing cluster here at ISU is equipped with 30 powerful nodes.

The HPC at Illinois State has 1 Login Node & 1 Head Node (User/Schedule), 24 Compute Nodes, 1 Big Mem Node & 2 GPU Nodes (Compute Cluster), and 1 Storage Node (Data Storage), for a total of 30 Nodes. The High-Performance Compute Cluster is managed using Bright Cluster Manager for HPC. Researchers can schedule workloads on the cluster using SLURM, where they can determine the amount of computing power they need for their workload. Upper bounds for what researchers can request can be limited using Bright.
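Scheduling work through SLURM is typically done with a batch script submitted via `sbatch`. The script below is a generic, hedged example: the module name, program, and resource numbers are placeholders, and the actual limits and queue names on ISU's cluster may differ:

```shell
#!/bin/bash
#SBATCH --job-name=example-job      # name shown in the queue
#SBATCH --nodes=1                   # number of compute nodes
#SBATCH --ntasks=1                  # number of tasks (processes)
#SBATCH --cpus-per-task=24          # cores per task (one full node here)
#SBATCH --mem=16G                   # memory per node
#SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out          # log file: jobname-jobid.out
# For GPU workloads one would additionally request a GPU,
# e.g. "#SBATCH --gres=gpu:1", subject to approval for the GPU nodes.

# Load whatever environment the workload needs, then run it.
module load R                       # placeholder module name
srun Rscript analysis.R             # placeholder program
```

Submitting this with `sbatch job.sh` places the job in the queue, and SLURM runs it when the requested resources become available.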

The common configuration of ISU’s HPC is:

Processor: 24 Core / 48 Thread

RAM: 16 GB

  • 2933MT/s, Dual Rank

GPU: NVIDIA Tesla V100S (optional)

  • 32 GB VRAM
  • More information here.
  • GPUs are available on a case-by-case basis for specific workloads that can utilize, and would benefit from, GPU processing power

More information regarding NVIDIA for High-Performance Computing.

Contact For Further Questions…

Name: Craig Jackson

Email: cejack2@IllinoisState.edu

Phone Number: 309-438-9525
