PRACE-ICEI Calls for Proposals – Call #2

The resources in these calls are from the Fenix Research Infrastructure, funded by the European ICEI project.

These calls for proposals are for researchers from academia and research institutes who need scalable computing resources, interactive computing services, VM services, and data storage to carry out their research.

Submission Dates

Quarterly calls (if resources are still available):

  • Call #2: 13 June-13 July 2020
    • To start computing 1st September 2020
  • Call #3: 1-30 September 2020
    • To start computing 1st December 2020

Dates for calls in 2021 will be announced later in 2020.

The allocations are limited to 12 months.

Who can Apply

The PRACE ICEI programme is open to all researchers and research organisations needing resource allocations, regardless of funding source.

You are eligible to apply for the call for proposals only if you need:

  • Scalable computing services
  • Interactive computing services
  • Virtual Machine services
  • Archival data repository
  • Active data repository

The minimum request depends on the system of interest as reported in the table of available resources below.

With respect to the maximum request, applicants are encouraged not to exceed 20% of the available resources.
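As a rough guide, the 20% ceiling can be computed per component from the "Total Resources for Call 2" figures in the tables below. A minimal sketch in Python (component names abbreviated for illustration; the 20% figure is a recommendation, not a hard limit):

```python
# Suggested maximum request per component, using the 20% guideline.
# Totals are taken from the resource tables in this call.
CAP_FRACTION = 0.20

totals = {
    "Piz Daint Multicore (node hours)": 240_424,
    "ICCP@JUELICH (node hours)": 253_344,
    "CEA Archival (TByte)": 2_025,
}

for component, total in totals.items():
    cap = CAP_FRACTION * total
    print(f"{component}: suggested maximum request ~ {cap:,.0f}")
```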

Applicants may apply for the present resources in addition to resources offered on Tier-0 systems (PRACE Project Access) or on Tier-1 systems, which mainly target large scalable computing services. Applicants are hence strongly encouraged to apply for multiple services offered by the present call (scalable computing, interactive computing, and data) and to make the case for needing these complementary services.

Available Resources

The following tables summarise the amount of resources planned to be available during the allocation period of Call #2.

As some hardware is still under installation at the sites, some of the information might be slightly updated in the future.

Scalable Computing Services

| Component | Site (Country) | Minimum request | Total Resources for Call 2 | Unit |
|---|---|---|---|---|
| Piz Daint Multicore | CSCS (CH) | 36 000 | 240 424 | node hours |
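A node-hour request is simply nodes × wall-time × number of runs. The sketch below uses hypothetical job parameters (the node count, wall-time, and run count are assumptions for illustration) and checks the total against the Piz Daint Multicore minimum from the table above:

```python
# Estimate a node-hour request and check it against the system minimum.
MIN_REQUEST = 36_000   # node hours, Piz Daint Multicore (from the table above)

nodes = 100            # nodes per job (assumed)
hours_per_run = 12     # wall-time per run, in hours (assumed)
runs = 40              # number of production runs (assumed)

request = nodes * hours_per_run * runs
assert request >= MIN_REQUEST, "request is below the minimum for this system"
print(f"Requesting {request:,} node hours")
```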

Interactive computing services

| Component | Site (Country) | Minimum request | Total Resources for Call 2 | Unit |
|---|---|---|---|---|
| ICCP@JUELICH | JSC (DE) | | 253 344 | node hours |
| Interactive Computing Cluster | CEA (FR) | 28 | 87 489 | node hours |
| Piz Daint Hybrid | CSCS (CH) | 36 000 | 81 960 | node hours |
| Interactive computing cluster | CINECA (IT) | 25 000 | 155 672 | node hours |
| Interactive computing cluster | BSC (ES) | 8 | 103 240 | node hours |

VM services

| Component | Site (Country) | Minimum request | Total Resources for Call 2 | Unit |
|---|---|---|---|---|
| Openstack compute node | CEA (FR) | 1 | 3 | nodes |
| Pollux Openstack compute node | CSCS (CH) | 1 | 5.25 | nodes |
| Nord3 | BSC (ES) | 4 | 18.6 | nodes |
| Meucci Openstack cluster | CINECA (IT) | 1 | 24 | nodes |


Archival data repositories

| Component | Site (Country) | Minimum request | Total Resources for Call 2 | Unit |
|---|---|---|---|---|
| Archival | CEA (FR) | 1 | 2 025 | TByte |
| Archival Data Repository | CSCS (CH) | 1 | 492 | TByte |
| Archival Data Repository | CINECA (IT) | 1 | 1 250 | TByte |
| Active Archive 2 | BSC (ES) | 90 | 900 | TByte |

Active data repositories

| Component | Site (Country) | Minimum request | Total Resources for Call 2 | Unit |
|---|---|---|---|---|
| HPST@JUELICH | JSC (DE) | 10 | 87 360 | TByte*day |
| Lustre Flash | CEA (FR) | 10 | 90 129 | TByte*day |
| Data Warp | CSCS (CH) | 1 | 9.6 | TByte |
| HPC Storage @ CINECA | CINECA (IT) | 1 | 40 | TByte*day |
| HPC Storage @ BSC | BSC (ES) | 2.5 | 10.5 | TByte |
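Several active-data allocations are expressed in TByte*day, i.e. storage capacity integrated over residence time: a request of C TByte held for D days costs C × D TByte*day. A minimal sketch with illustrative numbers (the capacity and duration below are assumptions, not figures from this call):

```python
# Convert a storage request into TByte*day units.
capacity_tbyte = 50    # working-set size in TByte (assumed)
duration_days = 180    # how long the data stays resident (assumed)

request_tbyte_day = capacity_tbyte * duration_days
print(f"Request: {request_tbyte_day:,} TByte*day")
```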

Further details on the components are provided below:

Components by France (CEA)

  • Interactive Computing: Linux servers with GPU, 36 CPU cores, and 384 GB of memory, plus 2 large-memory nodes (3 000 GB of RAM) with GPU
  • VM services: Cloud computing facility providing Virtual machines up to 36 cores and 192GB of memory
  • Archival
    • Object store accessible through the SWIFT protocol
    • Capacitive Lustre filesystem with automated migration to tapes
  • Active data storage:
    • Lustre Flash: POSIX Lustre filesystem based on SSD technologies, 950TB+ of usable capacity
    • Work: POSIX Lustre filesystem for active data

Components by Germany (JSC)

  • ICEI-ICCP: Compute cluster for interactive workloads and VM hosting. The nodes are equipped with two AMD EPYC Rome 7742 64-core CPUs and feature 256 GB DDR4 memory. Nodes with an additional Nvidia Volta V100 GPU with 16 GB high-bandwidth memory are available. The nodes in the cluster will be interconnected with a 100 Gb/s HDR100 high-speed interconnect in a full fat-tree topology and will access JSC’s central storage infrastructure. The cluster and VM hosting partition share the same underlying hardware. The VMs can host long-running services. Security restrictions regarding access to other facility services apply. The hosted VMs do not have access to the high-speed interconnect but can natively access selected subsets of JSC’s storage offerings, with limitations regarding access and modification permissions. Access to a GPU from VMs will be offered at a later point in 2020, depending on the technology roadmap of the cloud infrastructure components.
  • HPST: NVM-based I/O acceleration platform with up to 2 TB/s bandwidth. The HPST will enable applications to obtain high read and write performance (streaming and small-block size I/O) with moderate workload scalability. It functions as a cache for the scratch file system. The HPST will be accessible from the cluster part of the ICCP system as well as the JUWELS PRACE Tier-0 resource. A compute-time allocation on one of these systems is a prerequisite for the use of the HPST.

Components by Italy (CINECA)

  • Interactive Computing: GNU/Linux nodes based on Intel Broadwell technology (2x Intel Xeon E5-2697 v4), 256 GB RAM, partially equipped with GPU cards (NVIDIA P100 or V100)
  • Active Repository: GPFS based high performance file system
  • Archive Repository: Object Store accessible through SWIFT protocol using IBM Spectrum Scale technology
  • VM: Cloud Computing facility providing VMs with up to 40 vCPUs and 128 GB of memory each. Access to data repositories will be provided through either NFS or SWIFT protocols

Components by Spain (BSC)

  • Nord3: Intel Sandy Bridge cluster that can be used as a VM host or as a scalable cluster
  • 8 iDataPlex racks, each with 84 nodes dx360m4. Each node has the following configuration:
    • 2x Intel SandyBridge-EP E5-2670/1600 20M 8-core at 2.6 GHz and 32 GB RAM
  • Interactive computing cluster: Nodes for interactive access with dense memory
  • Active Storage: GPFS Storage accessed from HPC clusters
  • Archive Storage: HSM system with Object storage interface

Components by Switzerland (ETH Zurich/CSCS)

  • Piz Daint is a Cray System XC40/XC50
  • Interactive Computing: XC50 Compute Nodes Intel® Xeon® E5-2690 v3 @ 2.60GHz (12 cores, 64GB RAM) and NVIDIA® Tesla® P100 16GB
  • Scalable Computing: XC40 Compute Nodes Two Intel® Xeon® E5-2695 v4 @ 2.10GHz (2 x 18 cores, 64/128 GB RAM)
  • VM: Dual-socket Linux server with 512 GB RAM
  • Active Repository: Data Warp plus the Scratch file system (with no quota). DataWarp is an early-access technology; user workflows need to be adapted or augmented accordingly
  • Archive Repository: Object store SWIFT

Proposal Requirements

PRACE ICEI Proposal Requirements

  • Short description of the scientific goals and objectives
  • Type of resources required, i.e. scalable computing resource, interactive computing resources, VM services, archival and active data repositories
  • Resource requests based on the limits defined above (including information on scalability for scalable computing resource requests)
  • Description of the software and services needed to successfully complete the project
  • Description of the research methods (including a project workplan), algorithms, and code parallelization approach (including memory requirements)
  • Description of special needs (if any)

Reviewing Process

A Review Panel established by PRACE, composed of 8 to 10 well-known scientists (possibly different for each call), discusses the scientific merit of the proposals with the support of a technical team and decides which proposals should be granted, based on the criteria listed below.

Review Criteria

Soundness of the Methodology and Tools
Does the request describe appropriate tools, methods and approaches for addressing the research objectives?

Appropriateness of the Research Plan
Is the research plan appropriate to achieve the scientific goals described in the proposal? Has the applicant properly estimated the resources?

Appropriateness of the Project Timeline
Has the applicant estimated the right number of simulations to achieve the goals in the given time?

Significance of the Proposed Research
Is the proposed work supported by a grant or grants that have undergone review for intellectual merit and/or broader impact? What is the scientific impact?

Proposals explicitly requesting multiple services (scalable, interactive, storage) will be favoured.

Technical assessment
Software availability on the requested resource: the codes necessary for the project must already be available on the requested system or, in the case of codes developed by the applicants, must have been sufficiently tested for efficiency, scalability (if applicable), and suitability. Feasibility of the requested resource: the requested system(s) must be suitable for the proposed project.

How to Submit

To submit proposals, fill in the application form available here and send it by email to

Call for Proposals


Applicants must acknowledge PRACE in all publications that describe results obtained using PRACE resources. Users shall use the following wording in such acknowledgement in all such papers and other publications:
We acknowledge PRACE for awarding access to the Fenix Infrastructure resources, which are partially funded from the European Union’s Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.