About

A brief history of the HPC

In 2003, an ICTS technician assisting a professor with his computer realised the need for an HPC at the University of the Free State (UFS). The professor had spent over five months attempting to install scientific software on another computer, a daunting task because, as with many scientific software packages, the documentation was neglected. Within a week, the technician was able to install the software successfully. However, the computer had to remain on and uninterrupted whilst the simulation was running, which exposed the next potential pitfall: power outages and system crashes.

The first UFS High-Performance Computing (HPC) Proof-of-Concept (PoC) cluster was built in 2005 from decommissioned computer laboratory equipment. In 2006, the first generation (22 nodes) was procured. It was decided to replace each generation at roughly three-year intervals, keeping one or two generations of machines in production at any time.

Generation | Year Procured | Year Decommissioned | Number of Nodes | Number of Users
1st        | 2006          | 2009                | 22              | 5 - 16
2nd        | 2009          | 2012                | 18              | 16 - 22
3rd        | 2011          | 2014                | 18              | 22 - 25
4th        | 2013          | 2017                | 22              | 20 - 54
5th        | 2017          | 2024                | 32              | 54 - 152
6th        | 2024          | -                   | 8               | 152

The table above shows each iteration of technology used in the HPC, the size of the cluster, and its user base. The number of nodes fluctuates even though the user base keeps growing; because of technological advances between replacement cycles, a generation with fewer nodes can deliver similar or better performance than its predecessor.

The 5th generation of machines is still in production (in 2024) but will be phased out during the year. The 6th generation is much smaller than its predecessor but includes GPUs. Three additional GPU servers have been procured, each with five or six NVIDIA H200 GPUs and two terabytes of system memory (RAM). For more information on the production servers, see the technical specifications.

The eResearch Network (eRN)

In 2022, ICTS also introduced the eResearch Network (eRN). This is not just a physical computer network but a cohort of research groups using computational resources to conduct research. On the hardware side, services such as private cloud services and data repositories are safeguarded behind a Virtual Private Network (VPN) and firewalls configured explicitly to give users freedom with minimal compromise to security.

On the human side, research support groups such as the Directorate Research Development (DRD), the Interdisciplinary Centre for Digital Futures (ICDF), and the Digital Scholarship Centre (DSC) work closely with research groups, among others the Unit of Engineering Sciences, Physics, Virology, Geology, and the Department of Sociology. These collaborations strengthen research initiatives in One Health, the Complex Systems Hub, and various other research collaborations throughout the UFS.

About our Researchers

See the following topics to explore who uses the HPC and how to acknowledge the HPC in publications.

  • NRF Ratings


    Researchers affiliated with the HPC who are NRF-rated.

    read more ...

  • Publications


    Publications such as articles, degrees, and proceedings completed with the assistance of the HPC or its services.

    read more ...

  • Acknowledge the HPC Unit


    How to acknowledge the HPC unit in publications.

    See recommendation