Cascade Lake Nodes¶
The Cascade Lake upgrade to CSD3 entered general service in November 2020. All users are encouraged to make use of these nodes.
Comparison with skylake nodes¶
Please note that although the newer Cascade Lake cpu cores are very similar to Skylake cores, there are some important differences between the Skylake and Cascade Lake nodes which it is necessary to understand:
- The Cascade Lake nodes are named according to the scheme cpu-p-[1-672].
- The Cascade Lake nodes are in separate cclake and cclake-himem Slurm partitions (see the sinfo example after this list). Your existing -CPU projects will be able to submit jobs to these.
- The cclake nodes have 56 cpus (1 cpu = 1 core), and 192 GiB of RAM. This means that they have 3420 MiB per cpu, compared to 5980 MiB per cpu in the skylake partition.
- The cclake-himem nodes have 56 cpus (1 cpu = 1 core), and 384 GiB of RAM. This means that they have 6840 MiB per cpu, compared to 12030 MiB per cpu in the skylake-himem partition.
- The Cascade Lake nodes are interconnected by HDR Infiniband, rather than Omni-Path. This means that MPI jobs in particular may need to change which MPI module they use for best performance.
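If you wish to confirm the cpu counts and per-node memory of these partitions for yourself, they can be queried from a login node with sinfo; the exact figures reported depend on the current Slurm configuration, so treat the command below as illustrative:

sinfo -p cclake,cclake-himem -o "%P %D %c %m"

This prints, for each of the two partitions, the number of nodes together with the cpus and memory (in MiB) per node.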
Recommendations for running on cclake¶
The per-job wallclock time limits are currently unchanged compared to skylake/pascal at 36 hours and 12 hours for SL1/2 and SL3 respectively.
The per-job, per-user cpu limits are now 4256 and 448 cpus for SL1/2 and SL3 respectively (both are multiples of 224, the lowest common multiple of 32 and 56, so that each limit corresponds to a whole number of skylake nodes and to a whole number of cclake nodes).
These limits should be regarded as provisional and may be revised.
Default submission script for cclake¶
You should find in your home directory a symbolic link to a default job submission script modified for the cclake nodes, called:
slurm_submit.peta4-cclake
This is set up for MPI jobs using cclake, but can be modified for other types of job. If you prefer to modify your existing job scripts, please see the following sections for guidance.
Jobs not using MPI and requiring no more than 3420 MiB per cpu¶
In this case you should be able to simply add the cclake partition to your sbatch directive, e.g.:
#SBATCH -p cclake
will submit a job able to run on the first nodes available in the cclake partition.
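For illustration only, a complete serial (non-MPI) submission script along these lines might look like the following; the job name, time limit and application are placeholders rather than prescribed values, and the project name follows the yourproject-cpu pattern used elsewhere on this page:

#!/bin/bash
# Illustrative serial job on the cclake partition (placeholders throughout).
#SBATCH -J serial-example
#SBATCH -A yourproject-cpu
#SBATCH -p cclake
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Enable the module command if it is not already initialised in batch shells,
# then load the cclake environment described later on this page.
. /etc/profile.d/modules.sh
module purge
module load rhel7/default-ccl

./your_serial_program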
Jobs requiring more than 3420 MiB per cpu¶
In the case of larger memory requirements, it is most efficient to submit instead to the cclake-himem partition which will allocate 6840 MiB per cpu:
#SBATCH -p cclake-himem
If this amount of memory per cpu is insufficient, you will need to specify either the --mem= or the --cpus-per-task= directive, in addition to the -p directive, in order to make sure you have enough memory at run time. Note that in this case, Slurm will satisfy the memory requirement by allocating (and charging for) more cpus if necessary. E.g.:
#SBATCH -p cclake-himem
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=8000
In the above example we are requesting 8000 MiB of memory per node, but only one task. Slurm will usually allocate one cpu per task, but here, because it enforces 6840 MiB per cpu for cclake-himem, it will allocate 2 cpus to the single task so that the job has the 8000 MiB it claims to require. Note that this increases the number of cpu core hours consumed by the job, and hence the charge. Also note that since the 2 allocated cpus come with 2 x 6840 = 13680 MiB between them anyway, the user would lose nothing by requesting up to 13680 MiB (e.g. 13000 MiB) instead of 8000 MiB here.
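Alternatively, the extra cpu can be requested explicitly with the --cpus-per-task= directive mentioned above, rather than letting Slurm infer it from --mem; a minimal sketch, equivalent to the example above:

#SBATCH -p cclake-himem
#SBATCH --nodes=1
#SBATCH --ntasks=1
# Ask for 2 cpus for the single task; on cclake-himem this comes with
# 2 x 6840 = 13680 MiB of memory, comfortably covering the 8000 MiB needed.
#SBATCH --cpus-per-task=2

Both approaches result in the same 2-cpu allocation and hence the same charge; the explicit form simply makes the number of cpus being charged for more obvious.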
Jobs requiring MPI¶
Please note that in this case, applications may need to be recompiled for use on cclake/cclake-himem since the skylake and cclake nodes have independent interconnects of different hardware types (Omni-Path and Infiniband respectively) and a different choice of MPI library is probably necessary.
We recommend using Intel MPI 2020, which is a newer version of Intel MPI than that currently provided in the skylake environment. There are other, related changes to the default environment seen by jobs running on the cclake nodes. If you wish to recompile or test against this new environment, either request an interactive cclake node with, e.g.:
sintr -A yourproject-cpu -N1 -n4 -t 1:00:00 -p cclake
and work on a cclake node directly, or replace your current login node environment as follows:
ssh yourid@login-cpu.hpc.cam.ac.uk
module purge
module load rhel7/default-ccl
Ultimately we expect the skylake and cclake environments to converge, but initially it is necessary to load the rhel7/default-ccl module to access the cclake environment on the login-cpu nodes, as shown above.
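For completeness, a minimal sketch of an MPI submission script for cclake is shown below. The slurm_submit.peta4-cclake template in your home directory remains the recommended starting point; the job name, node and task counts, time limit and application here are placeholders:

#!/bin/bash
# Illustrative MPI job on the cclake partition (placeholders throughout).
#SBATCH -J mpi-example
#SBATCH -A yourproject-cpu
#SBATCH -p cclake
#SBATCH --nodes=2
# cclake nodes have 56 cpus each, so 112 tasks fills two nodes.
#SBATCH --ntasks=112
#SBATCH --time=01:00:00

# Enable the module command if it is not already initialised, then load
# the cclake environment, which provides the recommended Intel MPI.
. /etc/profile.d/modules.sh
module purge
module load rhel7/default-ccl

mpirun -np $SLURM_NTASKS ./your_mpi_program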