Cineca offers cloud computing and HPC resources, together with the related access and support services, to accommodate a variety of customer needs.
At present, Cineca hosts three main HPC systems, namely MARCONI, MARCONI100 and GALILEO, integrated into a common working environment to enable easy access and portability of data across the different platforms.
MARCONI is a Tier-0 system comprising 5900 nodes, each containing 2 x Intel Xeon E5-2697 v4 processors with a clock of 2.30 GHz. The nodes are connected via an Intel Omni-Path network. The total peak performance is 20 PFlops.
MARCONI100 is based on the IBM POWER9 architecture with NVIDIA Volta GPUs. Each node hosts 2 x 16-core IBM POWER9 AC922 processors at 3.1 GHz, 256 GB of RAM and 4 x NVIDIA Volta V100 GPUs (16 GB each) connected via NVLink 2.0. The system comprises 980 nodes, totalling 31360 cores. Internal network: Mellanox InfiniBand EDR DragonFly+.
GALILEO is a Tier-1 system comprising 1022 nodes, each with 2 x 18-core Intel Xeon E5-2697 v4 (Broadwell) processors at 2.30 GHz. The nodes are connected via an InfiniBand network, and the system has a peak performance of 2.3 PFlops. A small fraction of GALILEO (~60 nodes) is equipped with NVIDIA K80 and V100 GPUs.
The following data storage facilities are available:
Scratch: each system has its own local scratch area (pointed to by the $CINECA_SCRATCH environment variable)
Work: a working storage area is mounted on all three systems (pointed to by the $WORK environment variable)
DRES: a shared storage area is mounted on the login nodes of all machines (pointed to by the $DRES environment variable)
Tape: a tape library (12 PB, expandable to 16 PB) is connected to the DRES storage area as a multi-level archive (via LTFS)
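A typical job-script fragment uses these areas roughly as sketched below. This is a minimal illustration, not an official template: the directory name "myrun" is hypothetical, and the fallback to /tmp only serves to make the snippet runnable outside the Cineca environment, where $CINECA_SCRATCH is unset.

```shell
#!/bin/bash
# Sketch of using the storage areas from a job script.
# "myrun" is a hypothetical run-directory name; /tmp is a fallback for
# hosts where the Cineca variables are not defined.
SCRATCH_DIR="${CINECA_SCRATCH:-/tmp}"   # fast local scratch, cleaned periodically
RUN_DIR="$SCRATCH_DIR/myrun"
mkdir -p "$RUN_DIR"
echo "running in: $RUN_DIR"
# After the run, move results to $WORK, which is shared across the systems:
# mv "$RUN_DIR"/output.dat "${WORK:?}/"
```

Keeping active I/O on $CINECA_SCRATCH and moving only final results to $WORK is the usual pattern, since scratch areas are local and subject to periodic cleaning.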
Cineca offers a variety of third-party applications and community codes installed on its HPC systems. A list is available at https://www.hpc.cineca.it/content/application-software-science. Most of the software is available through application modules (sets of instructions and environment-variable settings specific to each application).
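Accessing an installed application through the module system looks roughly like this. The module name "gromacs" is an assumption used for illustration; check `module avail` on each system for what is actually installed.

```shell
#!/bin/bash
# Sketch of accessing installed software through environment modules.
# "gromacs" is an assumed module name; verify with `module avail`.
if command -v module >/dev/null 2>&1; then
    module avail 2>&1 | head -n 20   # browse the installed software
    module load autoload gromacs     # load the application and its dependencies
    module list                      # show what is currently loaded
else
    echo "module command not found: run this on a Cineca login node"
fi
```

Loading a module adjusts $PATH and related variables for that application only in the current shell or job script, which is why job scripts typically repeat the `module load` lines.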
CINECA CLOUD Infrastructure
The HPC cloud infrastructure provides 80 state-of-the-art server nodes for high-performance computation, each equipped with 2 x 18-core Broadwell processors (E5-2697 v4) and 250 GB of DDR4 RAM, interconnected via a 25 Gb/s Mellanox Ethernet network. The infrastructure is completed by a dedicated 200 TB CEPH storage system in high availability (RAID6). The cloud infrastructure is tightly connected to a 6 PB GSS storage system visible to all other infrastructure. This setup enables the use of all available HPC systems, addressing HPC workloads in conjunction with cloud resources.
To use the cloud, users are provided with a set of resources available through the official OpenStack dashboard at https://cloud.hpc.cineca.it. In the cloud environment, it is also possible to request the creation of a shared DRES storage area accessible both from the HPC systems and from virtual machines via the NFS, OpenStack Swift or FUSE protocols.
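Besides the dashboard, an OpenStack deployment of this kind can usually also be driven from the command-line client. The sketch below is a generic illustration: the flavor, image and keypair names are assumptions that depend on the project quota, and the client needs the credentials file downloaded from the dashboard before any command will succeed.

```shell
#!/bin/bash
# Sketch of managing cloud resources with the OpenStack CLI instead of the
# dashboard. Flavor, image and keypair names below are assumptions: list
# the ones available to your project first. Requires sourcing the
# credentials (RC) file downloaded from the dashboard.
if command -v openstack >/dev/null 2>&1; then
    openstack flavor list                 # VM sizes available to the project
    openstack image list                  # OS images available
    openstack server create \
        --flavor m1.medium \
        --image ubuntu-20.04 \
        --key-name mykey \
        myvm                              # hypothetical instance name
else
    echo "openstack client not installed: pip install python-openstackclient"
fi
```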
SPECIAL ACCESS CONDITIONS
users have to register in the UserDB, Cineca's user database
Private companies use HPC resources mainly to run industrial simulations and parallel AI workloads, when high computational effort is required. In all other data-processing cases, cloud computing and hybrid solutions are used.