EuroHPC JU Benchmark and Development Access Calls

The Calls for Proposals for the EuroHPC JU Benchmark and Development Access modes are continuously open, with a maximum time to resource access (start date) of two weeks after the date of submission.
The next cut-off dates for proposals are:

  • 1 November 2021 – 11:00 AM CET
  • 1 December 2021 – 11:00 AM CET
  • 1 January 2022 – 11:00 AM CET
  • 1 February 2022 – 11:00 AM CET
  • 1 March 2022 – 11:00 AM CET

The following table shows EuroHPC JU Petascale systems and their current availability for Benchmark and Development Access.

System                  | Architecture                              | Site (Country)     | Benchmark | Development
Vega CPU Standard       | BullSequana XH2000                        | IZUM Maribor (SI)  |           |
Vega CPU Large Memory   | BullSequana XH2000                        | IZUM Maribor (SI)  |           |
Vega GPU                | BullSequana XH2000                        | IZUM Maribor (SI)  |           |
Karolina CPU            | HPE Apollo 2000 Gen10 Plus, x86_64        | IT4I VSB-TUO (CZ)  |           |
Karolina GPU            | HPE Apollo 6500 Gen10 Plus, NVIDIA GPGPU  | IT4I VSB-TUO (CZ)  |           |
Karolina analytics      | HPE Superdome Flex, x86_64                | IT4I VSB-TUO (CZ)  |           |

The indicative schedule of the EuroHPC JU Calls for Proposals for Benchmark and Development Access is as follows.

Action                         | Types A & B
Proposal submission            | Continuously open for submission; cut-off date for review on the 1st of every month
Administrative validation      | 2 working days after submission
Review of proposals & decision | Up to 2 weeks after the cut-off date
Access start date              | Up to 2-3 weeks after the cut-off date

Once the decision on allocation has been communicated, access to the resources proceeds in two steps:

  • Awardees are asked to confirm their availability and readiness to use the awarded resources during the stated period
  • Access is provided after receipt of a positive reply

The allocation decision, including the type of access, start/end dates, duration, and the resources awarded, is not open to negotiation.
Obligations and restrictions on awardees apply as defined in the EuroHPC Access Policy.

If a particular system is oversubscribed, resources will be awarded to suitable proposals on a first-come, first-served basis, by submission date, until the resources are exhausted.
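The first-come, first-served rule can be sketched as follows. This is an illustrative sketch only; the `Proposal` fields and the `allocate` helper are hypothetical names, not the JU's actual allocation tooling:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Proposal:
    name: str
    submitted: date   # submission date
    node_hours: int   # resources requested (illustrative unit)
    suitable: bool    # passed administrative validation and review

def allocate(proposals, budget):
    """Award suitable proposals in submission order until the
    resource budget is exhausted (first come, first served)."""
    awarded = []
    for p in sorted(proposals, key=lambda p: p.submitted):
        if p.suitable and p.node_hours <= budget:
            awarded.append(p.name)
            budget -= p.node_hours
    return awarded

proposals = [
    Proposal("A", date(2021, 11, 2), 40_000, True),
    Proposal("B", date(2021, 11, 1), 80_000, True),
    Proposal("C", date(2021, 11, 3), 50_000, True),
]
print(allocate(proposals, 130_000))  # ['B', 'A'] -- C no longer fits
```

B is served first because it was submitted earliest, regardless of the order in which proposals are listed.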

For more details regarding EuroHPC calls, please contact us.

Systems Description

Vega

Vega is an Atos BullSequana XH2000 system able to deliver more than 6.9 petaflops of aggregated sustained performance. It comprises compute partitions with different computing characteristics and two high-performance storage systems, one based on Lustre and the other on Ceph. EuroHPC owns 35% of the available resources. Unless indicated otherwise, the figures in the table below refer to the fraction of resources offered by EuroHPC.

  
Sustained performance (whole system, aggregated): 6.9 petaflops
Peak performance (whole system, aggregated): 10.1 petaflops
Compute partitions:

  • CPU Standard: 798 nodes, each with 2x AMD EPYC 7H12, 256 GB DDR4, 1x HDR100, 1x 1.92 TB M.2 SSD
  • CPU Large Memory: 192 nodes, each with 2x AMD EPYC 7H12, 1 TB DDR4, 1x HDR100, 1x 1.92 TB M.2 SSD
  • GPU: 60 nodes, each with 4x NVIDIA A100 (NVLink), 2x AMD EPYC 7H12, 512 GB DDR4, 2x HDR100, 1x 1.92 TB M.2 SSD

Central Processing Unit (CPU): AMD EPYC 7H12 (64 cores, 2.6 GHz)
Graphics Processing Unit (GPU): NVIDIA A100 (40 GB HBM2 memory)
Storage capacity: high-performance NVMe Lustre (1 PB system total); large-capacity Ceph (23 PB system total)
Target applications: traditional computational workloads, AI, Big Data/HPDA, large-scale data processing
Other details: wide bandwidth for data transfers to other national and international computing centres (up to 500 Gbit/s); data-processing throughput of 400 GB/s from the high-performance storage and 200 Gb/s from the large-capacity storage
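As a sanity check, the quoted peak figure can be roughly reproduced from the partition counts above. The per-device rates used below are assumptions based on public architecture specifications (AVX2 FMA throughput for the EPYC 7H12, FP64 tensor-core throughput for the A100), not figures stated on this page:

```python
# Rough reconstruction of Vega's theoretical FP64 peak.
# Assumed per-device rates (not from this page):
#   AMD EPYC 7H12: 64 cores x 2.6 GHz x 16 FLOP/cycle (AVX2 FMA)
#   NVIDIA A100:   ~19.5 TFLOPS FP64 via tensor cores
EPYC_7H12 = 64 * 2.6e9 * 16 / 1e12   # ~2.66 TFLOPS per CPU
A100_FP64 = 19.5                     # TFLOPS per GPU

cpu_standard = 798 * 2 * EPYC_7H12                    # CPU Standard partition
cpu_large    = 192 * 2 * EPYC_7H12                    # CPU Large Memory partition
gpu          = 60 * (4 * A100_FP64 + 2 * EPYC_7H12)   # GPU partition

peak_pflops = (cpu_standard + cpu_large + gpu) / 1000
print(f"{peak_pflops:.1f} PFLOPS")   # ~10.3, within a few percent of the quoted 10.1
```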

For details regarding the Vega system and the user support provided, please contact us.

Karolina

Karolina is an HPE Apollo system able to deliver more than 9.4 petaflops of aggregated sustained performance. It comprises compute partitions with different computing characteristics and a high-performance storage system based on Lustre. EuroHPC owns 35% of the available resources.

  
Sustained performance (whole system, aggregated): 9.4 petaflops
Peak performance (whole system, aggregated): 15.4 petaflops
Compute partitions:

  • CPU: 720 nodes, each with 2x AMD EPYC 7H12, 256 GB DDR4, 1x HDR100
  • GPU: 72 nodes, each with 8x NVIDIA A100 (switched NVLink), 2x AMD EPYC 7763, 1024 GB DDR4, 4x HDR100
  • Data analytics: 1 node, NUMA SMP, with 32x Intel Xeon Platinum 8268, 24.5 TB DDR4, 2x HDR100

Central Processing Unit (CPU): AMD EPYC 7H12 (64 cores, 2.6 GHz)
Graphics Processing Unit (GPU): NVIDIA A100 (40 GB HBM2 memory)
Storage capacity: high-performance NVMe Lustre (1.2 PB system total)
Target applications: traditional computational workloads, AI, Big Data/HPDA, large-scale data processing
Other details: wide bandwidth for data transfers to other national and international computing centres (up to 2x 100 Gbit/s); data-processing throughput of 1200 GB/s from the high-performance storage
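Karolina's quoted peak can likewise be roughly reproduced from the partition counts above. The per-device rates used below are assumptions based on public architecture specifications (AVX2/AVX-512 FMA throughput for the CPUs, FP64 tensor-core throughput for the A100), not figures stated on this page:

```python
# Rough reconstruction of Karolina's theoretical FP64 peak.
# Assumed per-device rates (not from this page):
#   AMD EPYC 7H12:            64 cores x 2.6 GHz  x 16 FLOP/cycle (AVX2 FMA)
#   AMD EPYC 7763:            64 cores x 2.45 GHz x 16 FLOP/cycle (AVX2 FMA)
#   Intel Xeon Platinum 8268: 24 cores x 2.9 GHz  x 32 FLOP/cycle (AVX-512 FMA)
#   NVIDIA A100:              ~19.5 TFLOPS FP64 via tensor cores
EPYC_7H12 = 64 * 2.6e9 * 16 / 1e12    # TFLOPS per CPU
EPYC_7763 = 64 * 2.45e9 * 16 / 1e12
XEON_8268 = 24 * 2.9e9 * 32 / 1e12
A100_FP64 = 19.5                      # TFLOPS per GPU

cpu       = 720 * 2 * EPYC_7H12                     # CPU partition
gpu       = 72 * (8 * A100_FP64 + 2 * EPYC_7763)    # GPU partition
analytics = 32 * XEON_8268                          # data-analytics node

peak_pflops = (cpu + gpu + analytics) / 1000
print(f"{peak_pflops:.1f} PFLOPS")    # ~15.5, within a percent of the quoted 15.4
```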

For details regarding the Karolina system and the user support provided, please visit https://docs.it4i.cz/karolina/introduction/ or contact us at eurohpc-call@it4i.cz.