In addition to computing time, support from a High-Level Support Team (HLST) may be assigned to selected research projects.
HLSTs will help projects of outstanding scientific value to further utilise the capabilities of PRACE Tier-0 systems through code optimisation.
HLSTs are available in combination with Tier-0 systems of the following PRACE hosting members:
- Grand Équipement National de Calcul Intensif GENCI, France
- GAUSS Centre for Supercomputing GCS, Germany
- CINECA – Consorzio Interuniversitario, Italy
- Barcelona Supercomputing Center BSC, Spain
- Swiss National Supercomputing Centre CSCS at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), Switzerland
Besides their specialised user support, the HLSTs located at the five hosting sites jointly coordinate the continued testing and improvement of the libraries and applications that run on their Tier-0 systems, and ensure early transitions to new technologies.
HLSTs provide dedicated application support to assigned projects through benchmarking, code optimisation and scaling-out of applications. Code is optimised by porting it to the allocated Tier-0 system and/or by optimising I/O for large data volumes.
HLST support ranges from 1 to 6 months and is considered a collaboration: researchers of awarded projects and HLST members work together, contributing scientific expertise and HPC expertise respectively.
Collaborations from 6 to 12 months are also possible for projects that require refactoring of codes. In such cases a significant amount of time is needed to rewrite code optimally using new programming models and languages.
HLST support has been in place since the start of the PRACE 2 optional programme in 2017, and has so far supported 11 research projects.
HLST Success Stories
Improving an embarrassingly parallel task
Carmen Domene of the University of Bath studies the biochemical and biophysical processes of bacterial and eukaryotic sodium channels. These membrane proteins are vital for a variety of physiological processes and are consequently involved in many diseases, among them Alzheimer’s disease and Parkinson’s disease. To fully understand the molecular processes and the selectivity of such ion channels, thousands of simulations, each on the order of microseconds, are needed. Christopher Bignamini of the High-Level Support Team for Piz Daint – the flagship computing system at the Swiss National Supercomputing Centre CSCS – helped with this. He assisted the Domene research group mainly in the initial phase of the project by optimising the way batch jobs were submitted. The problem addressed was an “embarrassingly parallel” one, meaning that there was little or no need for communication between individual parallel tasks. Bignamini therefore helped the researchers implement a job submission workflow that optimised the use of computing resources.
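The idea of an embarrassingly parallel workload can be sketched with a minimal Python example: many independent tasks (a stand-in for the individual channel simulations) are distributed over a pool of worker processes with no communication between them. The `simulate` function and its inputs below are purely illustrative assumptions, not the Domene group’s actual code.

```python
from multiprocessing import Pool


def simulate(seed):
    """Stand-in for one independent simulation run.

    Each call depends only on its own input (here a seed) and
    exchanges no data with other tasks, which is what makes the
    workload embarrassingly parallel.
    """
    # Toy "result": a deterministic function of the seed.
    return seed * seed


if __name__ == "__main__":
    seeds = range(8)  # in practice: thousands of simulation inputs
    with Pool(processes=4) as pool:
        # map() spreads the independent tasks over the workers;
        # no task waits on or talks to any other task.
        results = pool.map(simulate, seeds)
    print(results)
```

Because the tasks are fully independent, speed-up comes almost entirely from how efficiently the jobs are packed onto the machine, which is why the submission workflow itself was the main optimisation target.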
Best use of computer time in the search for new antibiotics
It is a known problem: existing antibiotics lose effectiveness because pathogenic bacteria develop resistance. A promising target family for new antibiotics is the proteins regulating the activity of the ribosome – the large molecular machine in charge of protein production. Maria J. Ramos at the University of Porto uses molecular dynamics to characterise such ribosomal proteins in order to develop new compounds that kill the bacteria. The problem, however, was scalability: the code used, Amber, showed good scalability up to 265 cores but not beyond. That is why the High-Level Support Team for MareNostrum at BSC in Barcelona, Spain, helped Ramos and her co-worker Ana Oliveira test the code for better parallel performance. In addition, the team helped with scripting to enable the submission of all required simulations at once and to distribute the work in parallel across the different nodes of the machine. In this way, within a limited time, the HLST helped the researchers identify the best possible way to use the system and the project’s allocated computing resources.
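One common way to submit many independent simulations at once and spread them across a cluster’s nodes is a batch-scheduler job array. The sketch below uses SLURM syntax with Amber’s MPI engine `pmemd.MPI`; the array size, resource values, and file names are hypothetical placeholders, not the project’s actual setup.

```shell
#!/bin/bash
#SBATCH --job-name=amber_scan
#SBATCH --array=1-100          # one array task per independent simulation (hypothetical count)
#SBATCH --ntasks=48            # hypothetical number of MPI ranks per simulation
#SBATCH --time=12:00:00

# Each array task picks up its own input via SLURM_ARRAY_TASK_ID,
# so a single sbatch call submits all simulations, and the scheduler
# distributes them over the available nodes.
# (Input/output file names below are illustrative placeholders.)
srun pmemd.MPI -O -i md.in -p system.prmtop \
    -c input_${SLURM_ARRAY_TASK_ID}.rst \
    -o output_${SLURM_ARRAY_TASK_ID}.out
```

This is a scheduler configuration fragment rather than a standalone program: submitted with `sbatch`, it launches every simulation in one step instead of one manual submission per run.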