# New Cascade Lake Nodes

The new Cascade Lake upgrade to CSD3 is currently available to Tier2, DiRAC and IRIS projects. Access will be widened to Cambridge users on Tuesday 17th November.

The cluster is brand new, so issues can be expected. Please email support@hpc.cam.ac.uk about these as usual.

## Comparison with Skylake nodes

Please note that although the new Cascade Lake CPU cores are very similar to Skylake, there are some important differences between the Skylake and Cascade Lake nodes that it is necessary to understand:

- The new nodes are in a new cclake Slurm partition. Your existing -CPU projects will be able to submit jobs to this.
- The new nodes have 56 cpus (1 cpu = 1 core) and 192 GB of RAM. This means that they have 3420 MiB per cpu, compared to 5980 MiB per cpu in the skylake partition.
- The new nodes are interconnected by HDR InfiniBand, rather than Omni-Path. This means that MPI jobs in particular may need to change which MPI module they use for best performance.

## Recommendations for running on cclake

The per-job wallclock time limits are currently unchanged compared to skylake/pascal/knl at 36 hours and 12 hours for SL1/2 and SL3 respectively.

The per-job, per-user cpu limits are now 2240 and 448 cpus for SL1/2 and SL3 respectively (based on the lowest common multiple of 32 and 56 and so that these limits represent whole numbers of both skylake and cclake nodes).

These limits should be regarded as provisional and may be revised.
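As a quick arithmetic sanity check (nothing cluster-specific): 32 = 2^5 and 56 = 2^3 × 7, so their lowest common multiple is 2^5 × 7 = 224, and both limits are multiples of 224:

```shell
# Each limit divides evenly into whole nodes of either type.
echo $(( 2240 / 32 ))   # SL1/2 limit in skylake nodes: 70
echo $(( 2240 / 56 ))   # SL1/2 limit in cclake nodes:  40
echo $((  448 / 32 ))   # SL3 limit in skylake nodes:   14
echo $((  448 / 56 ))   # SL3 limit in cclake nodes:     8
```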

### Default submission script for cclake

You should find a symbolic link to a new default job submission script modified for the cclake nodes in your home directory, called:

```
slurm_submit.peta4-cclake
```

This is set up for MPI jobs using cclake, but can be modified for other types of job. If you prefer to modify your existing job scripts, please see the following sections for guidance.
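To give an idea of the shape of such a script, here is a minimal sketch of an MPI job on cclake. This is not the contents of the actual slurm_submit.peta4-cclake file; the job name, project account and executable are placeholders you must replace with your own:

```shell
#!/bin/bash
#SBATCH -J cclake-mpi            # job name (placeholder)
#SBATCH -A yourproject-cpu       # your -CPU project (placeholder)
#SBATCH -p cclake                # the new Cascade Lake partition
#SBATCH --nodes=2
#SBATCH --ntasks=112             # 56 cpus per cclake node, fully packed
#SBATCH --time=01:00:00

# Launch one MPI rank per allocated cpu; ./myapp is a placeholder.
srun ./myapp
```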

### Jobs not using MPI and requiring no more than 3420 MiB per cpu

In this case you should be able to simply add the cclake partition to your sbatch directive, e.g.:

```
#SBATCH -p skylake,cclake
```

will submit a job able to run on the first nodes available in either the skylake or cclake partitions.
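For example, a complete script for a small non-MPI job of this kind might look like the following sketch (the job name, project account and executable are placeholders):

```shell
#!/bin/bash
#SBATCH -J serial-job            # job name (placeholder)
#SBATCH -A yourproject-cpu       # your -CPU project (placeholder)
#SBATCH -p skylake,cclake        # run wherever nodes free up first
#SBATCH --ntasks=1               # single task, default memory per cpu
#SBATCH --time=01:00:00

./myapp                          # placeholder serial executable
```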

### Jobs requiring more than 3420 MiB per cpu

You will need to specify either the --mem= or --cpus-per-task= directive, in addition to the -p directive, in order to make sure you have enough memory at run time. Note that in this case, Slurm will satisfy the memory requirement by allocating (and charging for) more cpus if necessary:

```
#SBATCH -p skylake,cclake
#SBATCH --nodes=1
#SBATCH --mem=5900
```


In the above example we are ensuring that the single-task job will run on either skylake or cclake; in the former case it will be allocated 1 cpu (because 5900 MiB fits into 5980 MiB per cpu), whereas in the latter case Slurm needs to allocate 2 cpus (since 5900 MiB exceeds the 3420 MiB per cpu available in cclake).
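In general, the number of cpus allocated is the requested memory divided by the partition's per-cpu memory, rounded up. A small shell illustration of that arithmetic (cpus_needed is our own helper for this page, not a Slurm command):

```shell
# cpus needed = ceil(mem_mib / mem_per_cpu_mib), done in integer arithmetic
cpus_needed() {
    echo $(( ($1 + $2 - 1) / $2 ))
}

cpus_needed 5900 5980   # skylake: prints 1
cpus_needed 5900 3420   # cclake:  prints 2
```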

### Jobs requiring MPI

Please note that in this case, it makes no sense to specify both the skylake and cclake partitions because they have independent interconnects which cannot communicate with each other. Therefore specify only the cclake partition:

```
#SBATCH -p cclake
```

We recommend using Intel MPI 2020, which is a newer version of Intel MPI than is currently provided in the skylake environment. There are other, related changes to the default environment seen by jobs running on the cclake nodes. If you wish to recompile or test against this new environment, either request an interactive cclake node with e.g.:

```
sintr -A yourproject-cpu -N1 -n4 -t 1:00:00 -p cclake
```

and work on a cclake node directly, or replace your current login node environment as follows:

```
ssh yourid@login-cpu.hpc.cam.ac.uk
module purge
```