HPC Systems

The PRACE RI provides access to a distributed, persistent set of pan-European, world-class HPC computing and data management resources and services. Expertise in the efficient use of these resources is available through participating centers throughout Europe.
Available resources are announced for each Call for Proposals.

PRACE production systems (in alphabetical order of country):

Joliot Curie Supercomputer

Joliot-Curie, GENCI@CEA, France

Joliot-Curie, owned by GENCI, is located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. It is an Atos/BULL Sequana X1000 system with a balanced architecture (compute, memory, network and I/O) and three compute partitions:

  • SKL (standard x86)
    • 1 656 compute nodes, each with two 24-core Intel Skylake 8168 processors at 2.7 GHz, for a total of 79 488 compute cores and 6.86 PFlop/s peak performance
    • 192 GB of DDR4 memory per node (4 GB/core)
    • InfiniBand EDR 100 Gb/s interconnect
  • KNL (manycore x86)
    • 828 Intel KNL 7250 nodes, each with one 68-core processor at 1.4 GHz, for a total of 56 304 compute cores and 2.52 PFlop/s peak performance
    • 96 GB of DDR4 memory per node and 16 GB MCDRAM memory per node
    • BULL BXI high speed interconnect
  • Rome (standard x86)
    • 2 292 compute nodes, each with two 64-core 2nd-generation AMD EPYC (Rome) processors at 2.5 GHz, for a total of 293 376 compute cores and 11.75 PFlop/s peak performance
    • 256 GB of DDR4 memory per node (2 GB/core)
    • InfiniBand HDR100 interconnect (100 Gb/s)
In addition, 25 nodes are available for post-processing and remote visualisation, with access to a 500 GB/s multi-level Lustre filesystem. Please find further information here.
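As a rough cross-check of the figures above, each partition's peak performance follows from nodes × cores per node × clock rate × double-precision FLOPs per cycle. The short Python sketch below redoes this arithmetic; the FLOPs-per-cycle values (32 for Skylake and KNL with two AVX-512 FMA units, 16 for AMD Rome with two AVX2 FMA units) are common assumptions and are not stated in the text above.

    # Rough peak-performance check for the three Joliot-Curie partitions.
    # The FLOPs-per-cycle values are assumptions (see above), not values
    # taken from the system description itself.
    partitions = {
        # name:  (nodes, cores/node, clock in GHz, DP FLOPs per cycle)
        "SKL":  (1656,  48, 2.7, 32),
        "KNL":  (828,   68, 1.4, 32),
        "Rome": (2292, 128, 2.5, 16),
    }

    for name, (nodes, cores, ghz, flops_per_cycle) in partitions.items():
        peak_pflops = nodes * cores * ghz * flops_per_cycle / 1e6  # GFlop/s -> PFlop/s
        print(f"{name:>4}: {nodes * cores:>7} cores, ~{peak_pflops:.2f} PFlop/s peak")

    # Prints roughly 6.87, 2.52 and 11.74 PFlop/s, matching the quoted
    # 6.86, 2.52 and 11.75 PFlop/s within rounding.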

For technical assistance: hotline.tgcc@cea.fr

JUWELS supercomputer

JUWELS, GCS@FZJ, Germany

The successor to JUQUEEN, the Jülich Wizard for European Leadership Science (JUWELS), is a milestone on the road to a new generation of ultraflexible modular supercomputers targeting a broad range of tasks, from big-data applications right up to compute-intensive simulations.

The Cluster module, which was supplied in spring 2018 by the French IT company Atos in cooperation with software specialists at the German enterprise ParTec, is equipped with 24-core Intel Xeon Skylake CPUs and excels in versatility and ease of use. It has a theoretical peak performance of 12 petaflop/s. The nodes are connected by a Mellanox InfiniBand high-speed network. A further distinctive feature of the module is its novel, ultra-energy-efficient warm-water cooling system.

In 2020, JUWELS was extended with a Booster module designed for extreme computing power and artificial intelligence tasks. Atos again built the Booster in cooperation with ParTec, with NVIDIA and Mellanox involved through a co-design process. At its launch in 2020 it was the fastest system in Europe and ranked seventh among all systems in the November 2020 TOP500 list, with a theoretical peak performance of 70 PFlop/s. The Booster is equipped with the latest generation of NVIDIA A100 GPUs and AMD EPYC host CPUs. Both modules, Cluster and Booster, can be used individually but can also be tightly coupled to form a modular supercomputer system.
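To give a flavour of what a tightly coupled modular job can look like from the application side, the sketch below splits an MPI program into a CPU-side group and a GPU-side group, in the spirit of the Cluster/Booster pairing. This is a generic mpi4py illustration, not JUWELS-specific code: the even/odd rank split merely stands in for whatever mechanism the real launcher uses to place ranks on the two modules, and the workloads are placeholders.

    # Generic sketch of a modular (Cluster/Booster-style) MPI application:
    # ranks are divided into a CPU group and a GPU group that do different
    # work but remain part of one job and can exchange data.
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    # Placeholder: in a real modular job the scheduler decides which module
    # a rank runs on; here an even/odd split stands in for that decision.
    on_booster = (rank % 2 == 1)

    # One communicator per module, for module-internal communication.
    module_comm = world.Split(color=int(on_booster), key=rank)

    if on_booster:
        # The compute-intensive, GPU-accelerated part of the workload
        # would run here; a trivial reduction stands in for it.
        local = module_comm.allreduce(1)
    else:
        # Pre-/post-processing, I/O or other CPU-side work would run here.
        local = module_comm.allreduce(0)

    # Cross-module data exchange still goes through COMM_WORLD.
    results = world.gather(local, root=0)
    if rank == 0:
        print("per-rank results:", results)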

For further information please read here.

For technical assistance: sc[at]fz-juelich.de


HLRS Hawk supercomputer

HAWK, GCS@HLRS, Germany

The High-Performance Computing Center Stuttgart (HLRS) hosts an HPE Apollo system named Hawk. The system officially came online in 2020. The machine features 720 896 compute cores and has a theoretical peak performance of 26 petaflops. The system is designed to serve a wide range of sciences, including the life sciences, energy and environmental sciences, high-energy physics, and astrophysics, but places a special emphasis on supporting the computational and scientific engineering communities in academia and industry.


LRZ SuperMUC supercomputer

SuperMUC-NG, GCS@LRZ, Germany

SuperMUC-NG is the Tier-0 supercomputer at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) of the Bavarian Academy of Sciences and Humanities in Garching near Munich, Germany. It provides resources to PRACE via the German Gauss Centre for Supercomputing (GCS).
SuperMUC-NG consists of 6 336 thin nodes (96 GB memory each) and 144 fat nodes (768 GB memory each), equipped with Intel Skylake processors, each node having 48 cores. All 311 040 compute cores together, connected by an Intel Omni-Path interconnect with a fat-tree network topology, deliver a peak performance of 26.9 PFlop/s.
The parallel filesystem (IBM Spectrum Scale, GPFS) has a capacity of 50 PByte with 500 GByte/s I/O bandwidth.
For long-term data storage, 20 PByte of capacity with 70 GByte/s bandwidth are available. The programming environment comprises Linux (SLES 12 SP3), Intel Parallel Studio and OpenHPC. An OpenStack compute cloud is attached to SuperMUC-NG.
SuperMUC-NG is cooled with hot water at up to 50 °C; the heat-removal efficiency is 97%.
An Energy Aware Scheduling system further assists in saving energy. Adsorption chillers reuse the waste heat to generate cooling for other components.
The LINPACK performance of SuperMUC-NG was measured at 19.5 PFlop/s, placing it at number 8 on the November 2018 TOP500 list of the world's supercomputers.
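The headline figures are mutually consistent, as the small check below shows; it reuses only the numbers quoted in this section.

    # Consistency check of the SuperMUC-NG figures quoted above.
    thin_nodes, fat_nodes, cores_per_node = 6336, 144, 48
    peak_pflops, linpack_pflops = 26.9, 19.5

    total_cores = (thin_nodes + fat_nodes) * cores_per_node
    print(total_cores)                            # 311040, as quoted
    print(f"{linpack_pflops / peak_pflops:.0%}")  # ~72% LINPACK efficiency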

For technical assistance: lrzpost[at]lrz.de or https://servicedesk.lrz.de/?lang=en.

MARCONI supercomputer

MARCONI, CINECA, Italy

CINECA’s tier-0 system Marconi100 is equipped with IBM POWER9 processors and NVIDIA V100 GPUs.
It consists of 55 racks for a total of 980 compute nodes. Each node (an IBM Power AC922) contains two 16-core IBM POWER9 processors running at 2.6 (3.1) GHz, four NVIDIA Volta V100 GPUs and 256 GB of RAM.
The entire system is connected via a Mellanox InfiniBand EDR network with a DragonFly++ topology.
The global peak performance of the Marconi100 system is about 32 PFlop/s.
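Most of that peak comes from the GPUs. Assuming the commonly quoted ≈7.8 TFlop/s FP64 peak per V100 (an assumption, not a figure from the text above), the GPU contribution alone accounts for the bulk of the roughly 32 PFlop/s:

    # Rough estimate of the GPU share of Marconi100's peak performance.
    nodes, gpus_per_node = 980, 4
    v100_fp64_tflops = 7.8   # assumed per-GPU FP64 peak (SXM2 variant)

    gpu_peak_pflops = nodes * gpus_per_node * v100_fp64_tflops / 1000
    print(f"~{gpu_peak_pflops:.1f} PFlop/s from the GPUs alone")  # ~30.6 PFlop/s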


MARENOSTRUM supercomputer

MareNostrum 4, BSC, Spain

The MareNostrum 4 supercomputer is hosted by BSC in Barcelona, Spain.
MareNostrum 4 is based on Intel Xeon Platinum (Skylake) general-purpose processors running at 2.1 GHz, with two 24-core CPUs per node (48 cores/node), 2 GB of memory per core and 240 GB of local SSD acting as local /tmp. The machine comprises 48 racks of 72 compute nodes each, for a total of 3 456 nodes; slightly more than 200 of these nodes have 8 GB of memory per core. All nodes are interconnected through an Intel Omni-Path 100 Gbit/s network with a non-blocking fat-tree topology. MareNostrum 4 has a peak performance of 11.14 PFlop/s.
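These numbers also fit together: 48 racks × 72 nodes gives the 3 456 nodes, and the quoted peak corresponds to 48 cores per node at 2.1 GHz with 32 double-precision FLOPs per cycle (the FLOPs-per-cycle value, typical for Skylake with two AVX-512 FMA units, is an assumption and is not stated above).

    # Consistency check of the MareNostrum 4 figures quoted above.
    racks, nodes_per_rack, cores_per_node, ghz = 48, 72, 48, 2.1
    flops_per_cycle = 32   # assumed: two AVX-512 FMA units per core

    nodes = racks * nodes_per_rack
    cores = nodes * cores_per_node
    peak_pflops = cores * ghz * flops_per_cycle / 1e6
    print(nodes, cores, f"~{peak_pflops:.2f} PFlop/s")  # 3456 165888 ~11.15 PFlop/s (quoted: 11.14)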

For technical assistance: support[at]bsc.es

Piz Daint Supercomputer

Piz Daint, ETH Zurich/CSCS, Switzerland

Piz Daint is a hybrid Cray XC50 system and the flagship machine of CSCS, the Swiss National Supercomputing Centre in Lugano, with 4 400 nodes available to the User Lab. Each compute node is equipped with an Intel Xeon E5-2690 v3 processor at 2.60 GHz (12 cores, 64 GB RAM) and an NVIDIA Tesla P100 16 GB GPU. The nodes are connected by Cray’s proprietary “Aries” interconnect with a dragonfly network topology. For further information visit the CSCS website.

For technical questions: help[at]cscs.ch
