PRACE-ICEI Calls for Proposals – Call #5

The resources in these calls are from the Fenix Research Infrastructure, funded by the European ICEI project (https://fenix-ri.eu/).

These calls for proposals are aimed at European researchers from academia and research institutes who need scalable computing resources, interactive computing services, VM services, or data storage to carry out their research.


Submission Dates

Calls are organised quarterly according to the following timeline (calls will remain open as long as resources are available):
Call    | Call opening | Call closure | Allocation
Call #5 | 12/03/2021   | 16/04/2021   | From 01/06/2021
Call #6 | 31/05/2021   | 09/07/2021   | From 01/10/2021
Call #7 | 20/09/2021   | 29/10/2021   | From 01/01/2022
Dates for calls in 2022 will be announced later in 2021. The allocations are limited to 12 months.

Who can Apply

The PRACE ICEI programme is open to all European researchers and research organisations needing resource allocations, regardless of their funding sources.

You are eligible to apply for the call for proposals only if you need one or more of the following services:

  • Scalable computing services
  • Interactive computing services
  • Virtual Machine services
  • Archival data repository
  • Active data repository

The minimum request depends on the system of interest as reported in the table of available resources below.

With respect to the maximum request, applicants are encouraged not to exceed 20% of the available resources.

Applicants may apply for the present resources in addition to resources offered on Tier-0 systems (PRACE Project Access) or on Tier-1 systems, which mainly target large scalable computing services. Applicants are therefore strongly encouraged to apply for multiple services offered by the present call (scalable computing, interactive computing, and data) and to make their case for the need for these complementary services.

Available Resources

The following tables summarise the amount of resources planned to be available during the allocation period of Call #5.

As some hardware is still under installation at the sites, some of the information might be slightly updated in the future.

Scalable Computing Services

Component                  | Site (Country) | Minimum request | Total Resources for Call 5 | Unit
Piz Daint Multicore        | CSCS (CH)      | 36 000          | 118 424                    | node hours
JUSUF-HPC                  | JSC (DE)       | 36 000          | 57 124                     | node hours
Scalable computing cluster | CINECA (IT)    | 36 000          | 40 548                     | node hours
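
Requests for computing services are expressed in node hours, i.e. the number of nodes used multiplied by the wall-clock time for which they are occupied: a job running on 50 nodes for 24 hours, for example, consumes 1 200 node hours.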

Interactive computing services

Component                     | Site (Country) | Minimum request | Total Resources for Call 5 | Unit
JUSUF-HPC                     | JSC (DE)       | 1 000           |                            | node hours
Interactive Computing Cluster | CEA (FR)       | 288             | 29 960                     | node hours
Piz Daint Hybrid              | CSCS (CH)      | 36 000          | 45 960                     | node hours
Interactive computing cluster | CINECA (IT)    | 25 000          | 178 252                    | node hours
Interactive computing cluster | BSC (ES)       | 810             | 1 741                      | node hours

VM services [1]

Component                     | Site (Country) | Minimum request | Total Resources for Call 5 | Unit
JUSUF-HPC                     | JSC (DE)       | 0.01            | 21                         | nodes
Openstack compute node        | CEA (FR)       | 1               | 4                          | nodes
Pollux Openstack compute node | CSCS (CH)      | 1               | 3.25                       | nodes
Nord3                         | BSC (ES)       | 1               | 8.6                        | nodes
Meucci Openstack cluster      | CINECA (IT)    | 1               | 1                          | nodes

[1] https://fenix-ri.eu/infrastructure/resources/planned-resources

Archival data repositories

Component                | Site (Country) | Minimum request | Total Resources for Call 5 | Unit
Swift storage            | CEA (FR)       | 50              | 11 128                     | TByte
Archival Data Repository | CSCS (CH)      | 1               | 452                        | TByte
Archival Data Repository | CINECA (IT)    | 1               | 1 105                      | TByte
Active Archive 2         | BSC (ES)       | 90              | 900                        | TByte

Active data repositories

Component            | Site (Country) | Minimum request | Total Resources for Call 5 | Unit
HPST@JUELICH         | JSC (DE)       | 10              | 85 795                     | TByte*day
Lustre work flash    | CEA (FR)       | 10              | 144                        | TByte
Data Warp            | CSCS (CH)      | 1               | 5.5                        | TByte
HPC Storage @ CINECA | CINECA (IT)    | 1               | 16                         | TByte*day
HPC Storage @ BSC    | BSC (ES)       | 2.5             | 10.5                       | TByte
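
The TByte*day unit expresses storage capacity integrated over time, i.e. the volume held in the repository multiplied by the number of days it remains resident: keeping 100 TByte of data for 60 days, for example, corresponds to 6 000 TByte*day.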

Further details on the components are provided here:

Components by France (CEA)

  • Interactive Computing: Linux servers with GPU, 36 CPU cores, and 384 GB of memory, plus 2 large-memory nodes (3000 GB of RAM) with GPU
  • VM services: Cloud computing facility providing Virtual machines up to 36 cores and 192GB of memory
  • Archival data storage:
    • Object store accessible through the SWIFT protocol (a minimal access sketch is given after this list)
    • Capacitive Lustre filesystem with automated migration to tapes
  • Active data storage:
    • Lustre Flash: POSIX Lustre filesystem based on SSD technologies, 950 TB+ of usable capacity
    • Work: POSIX Lustre filesystem for active data
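
Several sites expose their archival repositories as object stores through the SWIFT protocol. As a rough illustration only, the sketch below uploads and retrieves a file with the python-swiftclient library; the authentication endpoint, credentials, and container name are placeholders, and the exact access method (Keystone version, tokens, storage URL) depends on the hosting site.

    # Minimal sketch: store and fetch an object in a SWIFT archival repository.
    # The endpoint, credentials, and container below are placeholders, not site values.
    from swiftclient.client import Connection

    conn = Connection(
        authurl="https://swift.example-site.eu/auth/v1.0",  # placeholder auth endpoint
        user="project:username",                            # placeholder credentials
        key="secret",
        auth_version="1",
    )

    container = "my-archive"       # hypothetical container name
    conn.put_container(container)  # create the container if it does not already exist

    # Upload a local result file as an object.
    with open("results.tar.gz", "rb") as f:
        conn.put_object(container, "results.tar.gz", contents=f)

    # Retrieve it later: get_object returns (headers, body_bytes).
    headers, body = conn.get_object(container, "results.tar.gz")
    with open("results_copy.tar.gz", "wb") as f:
        f.write(body)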

Components by Germany (JSC)

  • JUSUF-HPC: Compute cluster for interactive workloads and VM hosting. The nodes are equipped with two AMD EPYC Rome 7742 64-core CPUs and feature 256 GB DDR4 memory. Nodes with an additional Nvidia Volta V100 GPU with 16 GB high-bandwidth memory are available. The nodes in the cluster will be interconnected with a 100 Gb/s HDR100 high-speed interconnect in a full fat-tree topology and will access JSC’s central storage infrastructure. The cluster and VM hosting partition share the same underlying hardware. The VMs can host long-running services. Security restrictions regarding access to other facility services apply. The hosted VMs do not have access to the high-speed interconnect but can natively access selected subsets of JSC’s storage offerings with limitations regarding access and modification permissions. Access to a GPU from VMs will be offered at a later point in time, depending on the technology roadmap of the cloud infrastructure components.
  • HPST: NVM-based I/O acceleration platform with up to 2 TB/s bandwidth. The HPST will enable applications to obtain high read and write performance (streaming and small-block size I/O) with moderate workload scalability. It functions as a cache for the scratch file system. The HPST will be accessible from the cluster part of the ICCP system as well as the JUWELS PRACE Tier-0 resource. A compute-time allocation on one of these systems is a prerequisite for the use of the HPST.

Components by Italy (CINECA)

  • Interactive Computing: GNU/Linux nodes based on Intel technology (Intel Broadwell, 2x Intel Xeon E5-2697 v4), 256 GB RAM, partially equipped with GPU cards (NVIDIA P100 or V100)
  • Active Repository: GPFS based high performance file system
  • Archive Repository: Object Store accessible through SWIFT protocol using IBM Spectrum Scale technology
  • VM: Cloud Computing facility providing VMs with up to 128 GB of RAM and 40 vCPUs each. Access to data repositories will be provided through either the NFS or SWIFT protocols

Components by Spain (BSC)

  • Nord3: Intel Sandy Bridge cluster that can be used as a VM host or as a scalable cluster
  • 8 iDataPlex racks, each with 84 dx360m4 nodes. Each node has the following configuration:
    • 2x Intel SandyBridge-EP E5-2670/1600 20M 8-core at 2.6 GHz and 32 GB RAM
  • Interactive computing cluster: Nodes for interactive access with dense memory
  • Active Storage: GPFS Storage accessed from HPC clusters
  • Archive Storage: HSM system with Object storage interface

Components by Switzerland (ETH Zurich/CSCS)

  • Piz Daint is a Cray XC40/XC50 system
  • Interactive Computing: XC50 Compute Nodes Intel® Xeon® E5-2690 v3 @ 2.60GHz (12 cores, 64GB RAM) and NVIDIA® Tesla® P100 16GB
  • Scalable Computing: XC40 Compute Nodes Two Intel® Xeon® E5-2695 v4 @ 2.10GHz (2 x 18 cores, 64/128 GB RAM)
  • VM: Dual-socket Linux server with 512 GB RAM
  • Active Repository: Data Warp plus the Scratch file system (with no quota). DataWarp is an early access technology, so user workflows need to be adapted or augmented accordingly
  • Archive Repository: Object store SWIFT

Proposal Requirements

PRACE ICEI Proposal Requirements

  • Short description of the scientific goals and objectives including progress beyond state of the art and scientific impact
  • Type of resources required, i.e. scalable computing resource, interactive computing resources, VM services, archival and active data repositories
  • Resource requests based on the limits defined above (including information on scalability for scalable computing resource requests)
  • Description of the software and services needed to successfully complete the project
  • Description of the research methods (including a project workplan), algorithms, and code parallelization approach (including memory requirements)
  • Information on Data Management
  • Description of special needs (if any)

Reviewing Process

A Review Panel established by PRACE, composed of up to 8-10 well-known scientists (the panel membership may differ from call to call), discusses the scientific merit of the proposals with the support of a technical team and decides which proposals should be granted, based on the criteria listed below.

Review Criteria

Soundness of the Methodology and tools
Does the request describe appropriate tools, methods and approaches for addressing the research objectives?

Appropriateness of the Research Plan
Is the research plan appropriate to achieve the scientific goals described in the proposal? Has the applicant properly estimated the resources?

Appropriateness of the Project Timeline
Has the applicant estimated the right number of simulations to achieve the goals in the given time?

Significance of the Proposed Research
Is the proposed work supported by a grant or grants that have undergone review for intellectual merit and/or broader impact? What is the scientific impact?

Proposals explicitly requesting multiple services (scalable, interactive, storage) will be favoured.

Technical assessment
Software availability on the requested resource: the codes necessary for the project must already be available on the requested system or, in the case of codes developed by the applicants, they must have been sufficiently tested for efficiency, scalability (if applicable), and suitability. Feasibility of the requested resource: the requested system(s) must be suitable for the proposed project.

How to Submit

To submit a proposal, fill in the application form available here and send it by email to icei-calls@prace-ri.eu.

The Peer Review team will confirm receipt of the application within a couple of days; if you do not hear back from us, please write an email to check whether the proposal has been received successfully.


Acknowledgement

Applicants must acknowledge PRACE in all publications that describe results obtained using PRACE resources. Users shall use the following wording in such acknowledgement in all such papers and other publications:

We acknowledge PRACE for awarding access to the Fenix Infrastructure resources, which are partially funded from the European Union’s Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.