PRACE-ICEI Calls For Proposals – Call #7

The resources in these calls are from the Fenix Research Infrastructure, funded by the European ICEI project (grant agreement No. 800858).

These calls for proposals are aimed at European researchers from academia and research institutes who need scalable computing resources, interactive computing services, VM services, and data storage to carry out their research.

Submission Dates

Calls are organised quarterly according to the following timeline (calls will remain open as long as resources are available):

        | Call opening | Call closure | Allocation
Call #7 | 24/09/2021   | 29/10/2021   | From 01/01/2022

Dates for calls in 2022 will be announced soon.

The allocations are limited to 12 months.

Who can Apply

The PRACE ICEI programme is open to all European researchers and research organizations needing resource allocations, regardless of funding source.

You are eligible to apply for the call for proposals only if you need one or more of the following services:

  • Scalable computing services
  • Interactive computing services
  • Virtual Machine services
  • Archival data repository
  • Active data repository

The minimum request depends on the system of interest as reported in the table of available resources below.

With respect to the maximum request, applicants are encouraged not to exceed 20% of the available resources.

Applicants may apply for the present resources in addition to resources offered on Tier-0 systems (PRACE Project Access) or on Tier-1 systems, which mainly target large scalable computing services. Applicants are hence strongly encouraged to apply for multiple services offered by the present call (scalable computing, interactive computing, and data) and to make their case for needing these complementary services.
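As a rough guide, the 20% ceiling mentioned above can be checked against the Call #7 totals. A minimal sketch (treating the guideline as applying per component is an assumption; the figures are the Call #7 node-hour totals for three of the listed components):

```python
# Sketch: suggested maximum requests under the ~20% guideline.
# Assumption: the guideline is applied per component; figures are
# the Call #7 totals (node hours) from the tables in this call.
available_node_hours = {
    "Piz Daint Multicore (CSCS)": 125_272,
    "JUSUF-HPC scalable (JSC)": 57_124,
    "Piz Daint Hybrid (CSCS)": 404_860,
}

for component, total in available_node_hours.items():
    cap = round(0.20 * total)  # 20% of the available resources
    print(f"{component}: suggested maximum request <= {cap} node hours")
```

For example, a request on Piz Daint Multicore would ideally stay at or below roughly 25 000 node hours.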

Available Resources

The following tables summarise the amount of resources planned to be available during the allocation period of Call #7.

Scalable Computing Services

Component                  | Site (Country) | Minimum request | Total Resources for Call 7 | Unit
Piz Daint Multicore        | CSCS (CH)      | 36 000          | 125 272                    | node hours
JUSUF-HPC                  | JSC (DE)       | 36 000          | 57 124                     | node hours
Scalable computing cluster | CINECA (IT)    |                 |                            | node hours

Interactive computing services

Component                     | Site (Country) | Minimum request | Total Resources for Call 7 | Unit
JUSUF-HPC                     | JSC (DE)       |                 |                            | node hours
Interactive Computing Cluster | CEA (FR)       | 288             | 29 960                     | node hours
Piz Daint Hybrid              | CSCS (CH)      | 36 000          | 404 860                    | node hours
Interactive computing cluster | CINECA (IT)    |                 |                            | node hours
Interactive computing cluster | BSC (ES)       |                 |                            | node hours

VM services

Component                     | Site (Country) | Minimum request | Total Resources for Call 7 | Unit
JUSUF-HPC                     | JSC (DE)       | 0.012           | 1                          | nodes
Openstack compute node        | CEA (FR)       | 1               | 4                          | nodes
Pollux Openstack compute node | CSCS (CH)      | 1               | 3.25                       | nodes
Nord3                         | BSC (ES)       | 1               | 3                          | nodes
Meucci Openstack cluster      | CINECA (IT)    | 1               | 1                          | nodes


Archival data repositories

Component                | Site (Country) | Minimum request | Total Resources for Call 7 | Unit
Archival                 | CEA (FR)       | 1               | 1 128                      | TByte
Archival Data Repository | CSCS (CH)      | 1               | 380                        | TByte
Archival Data Repository | CINECA (IT)    | 1               | 1 095                      | TByte
Active Archive 2         | BSC (ES)       | 90              | 400                        | TByte

Active data repositories

Component            | Site (Country) | Minimum request | Total Resources for Call 7 | Unit
HPST@JUELICH         | JSC (DE)       | 10              | 85 795                     | TByte*day
Lustre Flash         | CEA (FR)       | 10              | 144                        | TByte
Data Warp            | CSCS (CH)      | 1               | 5.5                        | TByte
HPC Storage @ CINECA | CINECA (IT)    | 1               | 16                         | TByte*day
HPC Storage @ BSC    | BSC (ES)       |                 | 2.5                        | TByte

Further details on the components are provided here:

Components by France (CEA)

  • Interactive Computing: Linux servers with GPU, 36 CPU cores, and 384 GB of memory, plus 2 large-memory nodes (3 000 GB of RAM) with GPU
  • VM services: Cloud computing facility providing virtual machines with up to 36 cores and 192 GB of memory
  • Archival
    • Object store accessible through the SWIFT protocol
    • Capacitive Lustre filesystem with automated migration to tapes
  • Active data storage:
    • Lustre Flash: POSIX Lustre filesystem based on SSD technologies, 950TB+ of usable capacity
    • Work: POSIX Lustre filesystem for active data

Components by Germany (JSC)

  • JUSUF-HPC: Compute cluster for interactive workloads and VM hosting. The nodes are equipped with two AMD EPYC Rome 7742 64-core CPUs and feature 256 GB DDR4 memory. Nodes with an additional Nvidia Volta V100 GPU with 16 GB high-bandwidth memory are available. The nodes in the cluster will be interconnected with a 100 Gb/s HDR100 high-speed interconnect in a full fat-tree topology and will access JSC’s central storage infrastructure. The cluster and VM hosting partition share the same underlying hardware. The VMs can host long-running services. Security restrictions regarding access to other facility services apply. The hosted VMs do not have access to the high-speed interconnect but can natively access selected subsets of JSC’s storage offerings, with limitations regarding access and modification permissions. Access to a GPU from VMs will be offered at a later point in time, depending on the technology roadmap of the cloud infrastructure components.
  • HPST: NVM-based I/O acceleration platform with up to 2 TB/s bandwidth. The HPST will enable applications to obtain high read and write performance (streaming and small-block size I/O) with moderate workload scalability. It functions as a cache for the scratch file system. The HPST will be accessible from the cluster part of the ICCP system as well as the JUWELS PRACE Tier-0 resource. A compute-time allocation on one of these systems is a prerequisite for the use of the HPST.

Components by Italy (CINECA)

  • Interactive Computing: GNU/Linux nodes based on Intel technology (Intel Broadwell, 2x Intel Xeon E5-2697 v4), 256 GB RAM, partially equipped with GPU cards (NVIDIA P100 or V100)
  • Active Repository: GPFS based high performance file system
  • Archive Repository: Object Store accessible through SWIFT protocol using IBM Spectrum Scale technology
  • VM: Cloud Computing facility providing VMs up to 128GB, 40 vCPU each. Access to data repositories will be provided either through NFS or SWIFT protocols

Components by Spain (BSC)

  • Nord3: Intel Sandy Bridge cluster that can be used as a VM host or as a scalable cluster; 8 iDataPlex racks, each with 84 dx360m4 nodes. Each node has the following configuration:
    • 2x Intel SandyBridge-EP E5-2670/1600 20M 8-core at 2.6 GHz and 32 GB RAM
  • Interactive computing cluster: Nodes for interactive access with dense memory
  • Active Storage: GPFS Storage accessed from HPC clusters
  • Archive Storage: HSM system with Object storage interface

Components by Switzerland (ETH Zurich/CSCS)

  • Piz Daint is a Cray XC40/XC50 system
  • Interactive Computing: XC50 Compute Nodes Intel® Xeon® E5-2690 v3 @ 2.60GHz (12 cores, 64GB RAM) and NVIDIA® Tesla® P100 16GB
  • Scalable Computing: XC40 Compute Nodes Two Intel® Xeon® E5-2695 v4 @ 2.10GHz (2 x 18 cores, 64/128 GB RAM)
  • VM: Dual-socket Linux server with 512 GB RAM
  • Active Repository: Data Warp plus the Scratch file system (with no quota). DataWarp is an early access technology. User workflows need to be adapted/augmented
  • Archive Repository: Object store accessible through the SWIFT protocol

Proposal Requirements

PRACE ICEI Proposal Requirements

  • Short description of the scientific goals and objectives including progress beyond state of the art and scientific impact
  • Type of resources required, i.e. scalable computing resource, interactive computing resources, VM services, archival and active data repositories
  • Resources requests based on limits defined above (including information on scalability for scalable compute resources requests)
  • Description of the software and services needed to successfully complete the project
  • Description of the research methods (including a project workplan), algorithms, and code parallelization approach (including memory requirements)
  • Information on Data Management
  • Description of special needs (if any)

Reviewing Process

A Review Panel established by PRACE, composed of up to 8–10 well-known scientists (the panel may differ from call to call), discusses the scientific merit of the proposals with the support of a technical team and decides which proposals should be granted, based on the criteria listed below.

Review Criteria

Soundness of the Methodology and tools
Does the request describe appropriate tools, methods and approaches for addressing the research objectives?

Appropriateness of the Research Plan
Is the research plan appropriate to achieve the scientific goals described in the proposal? Has the applicant properly estimated the resources?

Appropriateness of the Project Timeline
Has the applicant estimated the right number of simulations to achieve the goals in the given time?

Significance of the Proposed Research
Is the proposed work supported by a grant or grants that have undergone review for intellectual merit and/or broader impact? What is the scientific impact?

Proposals explicitly requesting multiple services (scalable, interactive, storage) will be favoured.

Technical assessment
Software availability on the requested resource: the codes necessary for the project must already be available on the requested system or, in the case of codes developed by the applicants, they must have been sufficiently tested for efficiency, scalability (if applicable), and suitability.
Feasibility of the requested resource: the requested system(s) must be suitable for the proposed project.

How to Submit

To submit a proposal, fill in the application form available here and send it by email to

The Peer Review team will confirm receipt of your application within a couple of days. If you do not hear back from us, please write an email to check whether the proposal has been received successfully.

For technical questions or support before the application submission, you can contact the ICEI team at

Terms of Access

The Principal Investigator (PI) shall lead the project and is expected to be an essential participant in its implementation. The PI will have the overall responsibility for the management of the project and interactions with PRACE. Please make sure that the contact details for the PI are correct and that e-mail addresses used are professional e-mail addresses.

The usage of PRACE-ICEI resources needs to be acknowledged for all data produced through PRACE-ICEI allocations, both in publications and when depositing the data to other infrastructures.

The PI commits to:

  • Provide PRACE with a final report (max 1 page) within 3 months of the completion of an allocation. The report shall describe in particular whether and how the objectives of the project have been achieved and, if not, what challenges were encountered. The report will be kept for internal PRACE use and will not be published unless agreed in advance with the PI.
  • Acknowledge the role of the HPC Centre and PRACE in all publications that include the above-mentioned results. Users shall include the following wording in the acknowledgement of all such papers and other publications:

“We acknowledge PRACE for awarding access to the Fenix Infrastructure resources at [hosting site], which are partially funded from the European Union’s Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.”

  • Allow PRACE to publish his/her name together with the title of the project, scientific goals and objectives, and awarded resources on the PRACE website.

Awardees will be expected to reply favourably when asked to be interviewed for PRACE publications and/or send visualisations or other materials for promotional purposes.