Dawn - Intel GPU (PVC) Nodes
============================

*These new nodes entered Early Access service in January 2024*

.. image:: ../images/dawn-banner.jpg
   :width: 600
   :alt: Banner image for the Dawn supercomputer - The word Dawn in orange lettering over a nighttime satellite image of the UK and Europe with the sun beginning to rise.

Hardware
--------

The Dawn (PVC) nodes are:

- 256 Dell PowerEdge XE9640 servers, each consisting of:

  - 2x Intel(R) Xeon(R) Platinum 8468 (formerly codenamed Sapphire Rapids) (96 cores in total)
  - 1024 GiB RAM
  - 4x Intel(R) Data Center GPU Max 1550 GPUs (formerly codenamed Ponte Vecchio), each with 128 GiB of GPU RAM
  - Xe-Link 4-way GPU interconnect within the node
  - Quad-rail NVIDIA (Mellanox) HDR200 InfiniBand interconnect

Each PVC GPU contains two stacks (previously known as tiles) and 1024 compute units.

Software
--------

At the time of writing, we recommend logging in initially to the CSD3 icelake login nodes (login-icelake.hpc.cam.ac.uk). To ensure your environment is clean and set up correctly for Dawn, please purge your modules and load the base Dawn environment:

.. code-block:: bash

   module purge
   module load default-dawn

The PVC nodes run `Rocky Linux 8`_, which is a rebuild of Red Hat Enterprise Linux 8 (RHEL8). This is in contrast to some of the older CSD3 partitions (cclake), which at the time of writing run CentOS7_, a rebuild of Red Hat Enterprise Linux 7 (RHEL7). The Sapphire Rapids CPUs on these nodes are also more modern and support newer instructions than most other CSD3 partitions.

Because we provide a separate set of modules specifically for Dawn nodes, we do not in general support running software built for other CSD3 partitions on the Dawn nodes. You are therefore strongly recommended to rebuild your software on the Dawn nodes rather than try to run binaries previously compiled on CSD3. Be aware that the software environment for Dawn is optimised for its hardware, and the resulting binaries may fail to run on other CSD3 nodes, including the cpu (*login-p*) and icelake (*login-q*) login nodes.

If you wish to recompile or test against this new environment, we recommend requesting an interactive node with the command::

   sintr -t 01:00:00 --exclusive -A YOURPROJECT-DAWN-GPU -p pvc -n 1 -c 24 --gres=gpu:1

The nodes are named according to the scheme *pvc-s-[1-256]*.

.. _`Rocky Linux 8`: https://rockylinux.org/
.. _CentOS7: https://www.centos.org/
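Once you have an interactive session on a PVC node (see the sintr command above), a quick way to confirm that the GPUs are visible to the oneAPI runtime is the *sycl-ls* utility that ships with the Intel oneAPI compilers. This is only a sketch: the exact module name and version to load may differ, so check the output of *module avail* first.

.. code-block:: bash

   # On a pvc node, e.g. inside an interactive sintr session
   module load intel-oneapi-compilers   # exact module name/version: check `module avail intel-oneapi-compilers`
   sycl-ls                              # lists the SYCL platforms and devices, including the PVC GPUs

Note that, depending on how the runtime is configured, each Data Center GPU Max 1550 card may be reported either as a single device or as its two stacks.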
Slurm partition
---------------

The PVC (pvc-s) nodes are in a new **pvc** Slurm partition. Dawn Slurm projects follow the CSD3 naming convention for GPU projects and contain units of GPU hours. Additionally, Dawn project names follow the pattern NAME-DAWN-GPU.

The pvc-s nodes have **96 cpus** (1 cpu = 1 core) and 1024 GiB of RAM. This means that Slurm will allocate **24 cpus per GPU**.

Recommendations for running on Dawn
-----------------------------------

The resource limits are currently set to a maximum of 64 GPUs per user, with a maximum wallclock time of 36 hours per job. These limits should be regarded as provisional and may be revised.

Default submission script for Dawn
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A template submission script will be provided soon. To submit a job to the Dawn PVC partition, your batch script should look similar to the following example:

.. code-block:: bash

   #!/bin/bash -l
   #SBATCH --job-name=my-batch-job
   #SBATCH --account=YOURPROJECT-DAWN-GPU   # your Dawn project
   #SBATCH --partition=pvc                  # Dawn PVC partition
   #SBATCH -n 4                             # Number of tasks (usually number of MPI ranks)
   #SBATCH -c 24                            # Number of cores per task
   #SBATCH --gres=gpu:4                     # Number of requested GPUs per node (one GPU per 24-core task)

   module purge
   module load default-dawn
   # Set up your environment below, for example by loading further modules

   srun your_application

Jobs requiring N GPUs where N < 4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Although there are 4 GPUs in each node, it is possible to request fewer than this; e.g. to request 3 GPUs use::

   #SBATCH --nodes=1
   #SBATCH --gres=gpu:3
   #SBATCH -p pvc

Slurm will enforce allocation of a proportional number of CPUs (24) per GPU. Note that if you either do not specify a number of GPUs per node with *--gres*, or request more than one node with fewer than 4 GPUs per node, you will receive an error on submission.

Jobs requiring multiple nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Multi-node jobs need to request 4 GPUs per node, i.e.::

   #SBATCH --gres=gpu:4

Jobs requiring MPI
^^^^^^^^^^^^^^^^^^

We currently recommend using the Intel MPI provided by the oneAPI toolkit:

.. code-block:: bash

   module av intel-oneapi-mpi

Performance considerations for MPI jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On systems with multiple GPUs and multiple NICs, such as Dawn with 4x HDR NICs and 4x PVC GPUs per node, care should be taken to ensure that each GPU communicates with its closest NIC, in order to achieve maximum GPU-NIC throughput. Furthermore, each GPU should be assigned to its closest set of CPU cores (NUMA domain). This can be achieved by querying the topology of the machine you are running on and then instrumenting your MPI launch and/or run script to ensure correct placement. (On the NVIDIA-based Wilkes3 system, for example, the topology can be inspected with *nvidia-smi topo -m*; there, each pair of GPUs shares a NIC, so the NIC local to each pair must be used for all non-peer-to-peer communication.)

A binding script specific to Dawn is still to be added here (TODO). Given such a binding script (assume its name is run.sh), the corresponding MPI launch command can be modified to::

   mpirun -npernode $mpi_tasks_per_node -np $np --bind-to none ./run.sh $application $options

Note that this approach requires exclusive access to a node.
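Until a Dawn-specific binding script is provided, the following is a minimal sketch of what such a run.sh wrapper could look like. It assumes one MPI rank per GPU and uses the Level Zero *ZE_AFFINITY_MASK* environment variable keyed on the rank's local ID (*MPI_LOCALRANKID* is set by Intel MPI, *SLURM_LOCALID* by srun). NIC selection and NUMA placement, and the exact mapping appropriate for Dawn, are left out here and should be treated as site-specific details.

.. code-block:: bash

   #!/bin/bash
   # run.sh -- illustrative per-rank wrapper, NOT an official Dawn binding script

   # Local rank of this process on its node (Intel MPI sets MPI_LOCALRANKID,
   # srun sets SLURM_LOCALID); fall back to 0 for a single-process run
   local_rank=${MPI_LOCALRANKID:-${SLURM_LOCALID:-0}}

   # 4 PVC GPUs per node: give each local rank its own GPU via Level Zero affinity
   export ZE_AFFINITY_MASK=$(( local_rank % 4 ))

   # NIC selection and NUMA/core binding would be added here once the
   # GPU/NIC/NUMA topology mapping for Dawn is confirmed

   exec "$@"

With a wrapper like this, each rank launched by the mpirun command above sees only its assigned GPU, re-enumerated as device 0.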
Multithreading jobs
^^^^^^^^^^^^^^^^^^^

If your code uses multithreading (e.g. host-based OpenMP), you will need to specify the number of threads per process in your Slurm batch script using the `--cpus-per-task` (`-c`) parameter. For example, to run a hybrid MPI-OpenMP application using 24 processes and 4 threads per task:

.. code-block:: bash

   #SBATCH -n 24   # or --ntasks
   #SBATCH -c 4    # or --cpus-per-task

If you do *not* specify the `--cpus-per-task` parameter, Slurm will pin all of a task's threads to the same core, reducing performance.

Recommended Compilers
^^^^^^^^^^^^^^^^^^^^^

We recommend using the Intel oneAPI compilers for C, C++ and Fortran:

.. code-block:: bash

   module avail intel-oneapi-compilers

These compilers support standard, host-based code as well as SYCL for C++ codes, and OpenMP offload in C, C++ and Fortran. Please note that the 'classic' Intel compilers (icc, icpc and ifort) have been deprecated or removed; only the 'new' compilers (icx, icpx and ifx) are supported, and they are the only ones that support the GPUs.

To enable SYCL support:

.. code-block:: bash

   icpx -fsycl

For OpenMP offload (note `-fiopenmp`, not `-fopenmp`):

.. code-block:: bash

   # C
   icx -fiopenmp -fopenmp-targets=spir64
   # Fortran
   ifx -fiopenmp -fopenmp-targets=spir64

Both Intel MPI and the oneMKL performance libraries support both the CPUs and the PVC GPUs, and can be found as follows:

.. code-block:: bash

   module av intel-oneapi-mpi
   module av intel-oneapi-mkl

Machine Learning & Data Science frameworks
------------------------------------------

We provide a set of pre-populated Conda environments based on the Intel Distribution for Python:

.. code-block:: bash

   module av intelpython-conda
   conda info -e

This module provides environments for PyTorch and TensorFlow. Please note that Intel code and documentation sometimes refer to 'XPUs', a more generic term for accelerators, GPU or otherwise. For Dawn, 'XPU' and 'GPU' can usually be considered interchangeable.

PyTorch
^^^^^^^

PyTorch on Intel GPUs is supported by the `Intel Extension for PyTorch <https://github.com/intel/intel-extension-for-pytorch>`_. On Dawn this version of PyTorch is accessible as a conda environment named pytorch-gpu:

.. code-block:: bash

   module load intelpython-conda
   conda activate pytorch-gpu

Adapting your code to run on the PVCs is straightforward and only takes a few lines of code. For details, see the official documentation, but as a quick example:

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex

   ...

   # Enable GPU
   model = model.to('xpu')
   data = data.to('xpu')
   model = ipex.optimize(model, dtype=torch.float32)

TensorFlow
^^^^^^^^^^

Intel supports optimised TensorFlow on both CPU and GPU, using the `Intel Extension for TensorFlow <https://github.com/intel/intel-extension-for-tensorflow>`_. On Dawn this version of TensorFlow is accessible as a conda environment named tensorflow-gpu:

.. code-block:: bash

   module load intelpython-conda
   conda activate tensorflow-gpu

To run on the PVCs, there should be no need to modify your code - the Intel optimised implementation will run automatically on the GPU, assuming it has been installed as `intel-extension-for-tensorflow[xpu]`.

Jax/OpenXLA
^^^^^^^^^^^

Documentation can be found on GitHub: `Intel OpenXLA <https://github.com/intel/intel-extension-for-openxla>`_

Julia
^^^^^

Julia is currently known not to work correctly on the PVC GPUs. (March 2024)
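Finally, as a quick sanity check of the Conda environments described above, the one-liner below can be run on a pvc node (for example inside an interactive session). The *torch.xpu* namespace is registered by the Intel Extension for PyTorch; this is a sketch, so adjust the environment name if yours differs.

.. code-block:: bash

   module load intelpython-conda
   conda activate pytorch-gpu
   # should print True followed by the number of visible XPU (PVC GPU) devices
   python -c "import torch, intel_extension_for_pytorch; print(torch.xpu.is_available(), torch.xpu.device_count())"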