OpenMM¶
From the OpenMM website, http://openmm.org/:
OpenMM is a high performance toolkit for molecular simulation. Use it as a library, or as an application. We include extensive language bindings for Python, C, C++, and even Fortran. The code is open source and actively maintained on Github, licensed under MIT and LGPL. Part of the Omnia suite of tools for predictive biomolecular simulation.
Installing OpenMM on CSD3¶
Below are instructions for installing OpenMM in a Miniconda virtual environment. Part of the installation instructions can be found at http://docs.openmm.org/latest/userguide/application.html#installing-openmm.
Connect to a login node with:
ssh your-username@login.hpc.cam.ac.uk
module purge
module load rhel8/default-amp
# Install miniconda 3 (skip the following 4 lines if you already have it)
mkdir -p miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda3/miniconda.sh
bash miniconda3/miniconda.sh -b -u -p miniconda3
rm miniconda3/miniconda.sh
# Install and test OpenMM. NB: this must be done on a GPU node (interactive session) for the tests to succeed - see the sketch after this block.
# Check for cuda version
if [ -z "$CUDA_PATH" ]; then
    module load cuda
fi
if [ -z "$CUDA_PATH" ]; then
    echo "Module load failed, aborting"
    return 1
fi
# Extract the CUDA version from the last path component of $CUDA_PATH
CUDA_VERSION=$(basename "$CUDA_PATH")
if [ -z "$CUDA_VERSION" ]; then
    echo "Error, could not extract CUDA version, aborting"
    return 1
fi
source miniconda3/bin/activate
conda create -p openmm-env
conda activate openmm-env/
# Install openmm build matching the CUDA version loaded on the GPU node
conda install -c conda-forge openmm cuda-version=$CUDA_VERSION
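As the comment above notes, the installation tests need a GPU to exercise the CUDA platform, so run them from an interactive session on a GPU node. A minimal sketch using standard SLURM options (the account name is a placeholder and the requested resources and time are only examples):
srun -A CHANGE_TO_YOUR_ACCOUNT-GPU -p ampere --nodes=1 --ntasks=1 \
     --gres=gpu:1 --time=01:00:00 --pty bash
# On the GPU node, load the environment again and run OpenMM's installation tests
module purge
module load rhel8/default-amp
source miniconda3/bin/activate
conda activate openmm-env/
python -m openmm.testInstallation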
Running OpenMM on CSD3¶
To submit an OpenMM job please use the following submission script:
#!/bin/bash
#! Which project should be charged (NB GPU projects end in '-GPU'):
#SBATCH -A CHANGE_TO_YOUR_ACCOUNT-GPU
#! Change to relevant GPU cluster
#SBATCH -p ampere
#SBATCH -o openmm.out
#SBATCH -e openmm.err
#! How many whole nodes should be allocated?
#SBATCH --nodes=1
#! How many (MPI) tasks will there be in total?
#! Note probably this should not exceed the total number of GPUs in use.
#SBATCH --ntasks=1
#! Specify the number of GPUs per node (between 1 and 4; must be 4 if nodes>1).
#SBATCH --gres=gpu:1
#! How much wallclock time will be required?
#SBATCH --time=00:10:00
#! What types of email messages do you wish to receive?
#SBATCH --mail-type=FAIL
#! Uncomment this to prevent the job from being requeued (e.g. if
#! interrupted by node failure or system downtime):
##SBATCH --no-requeue
#! sbatch directives end here (put any additional directives above this line)
#! Notes:
#! Charging is determined by GPU number*walltime.
#! Number of nodes and tasks per node allocated by SLURM (do not change):
numnodes=$SLURM_JOB_NUM_NODES
numtasks=$SLURM_NTASKS
mpi_tasks_per_node=$(echo "$SLURM_TASKS_PER_NODE" | sed -e 's/^\([0-9][0-9]*\).*$/\1/')
#! ############################################################
#! Modify the settings below to specify the application's environment, location
#! and launch method:
#! Optionally modify the environment seen by the application
#! (note that SLURM reproduces the environment at submission irrespective of ~/.bashrc):
. /etc/profile.d/modules.sh # Leave this line (enables the module command)
module purge # Removes all modules still loaded
module load rhel8/default-amp # REQUIRED - loads the basic environment. Change this to match the partition selected with -p above.
# Edit the next two lines to point to your own miniconda3 installation and OpenMM environment (see the installation section above)
source miniconda3/bin/activate
conda activate openmm-env/
python -m openmm.testInstallation # Replace with your own OpenMM Python script; this line runs OpenMM's built-in installation tests.
conda deactivate
Here, we are using OpenMM’s own test suite to check that the application is running properly. The output should be similar to the following (the reported force differences may vary slightly):
There are 4 Platforms available:
1 Reference - Successfully computed forces
2 CPU - Successfully computed forces
3 CUDA - Successfully computed forces
4 OpenCL - Successfully computed forces
Median difference in forces between platforms:
Reference vs. CPU: 6.29882e-06
Reference vs. CUDA: 6.73039e-06
CPU vs. CUDA: 7.76279e-07
Reference vs. OpenCL: 6.75426e-06
CPU vs. OpenCL: 8.19109e-07
CUDA vs. OpenCL: 2.19595e-07
The SLURM submission script can be submitted to the queue with sbatch. As written it will run on one node with one task (a single core) and one GPU, up to a time limit of 10 minutes. To run with more tasks adjust the -n (--ntasks) option, and change the time limit with -t (--time).
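For example, assuming the submission script above has been saved as openmm_job.sh (the filename here is only an illustration), it could be submitted as follows; options given on the sbatch command line override the corresponding #SBATCH directives in the script:
sbatch openmm_job.sh                                 # submit with the settings in the script
sbatch -n 4 --gres=gpu:4 -t 01:00:00 openmm_job.sh   # e.g. 4 tasks, 4 GPUs, 1 hour limit
squeue -u $USER                                      # check the job's status in the queue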