GROMACS¶
GROMACS describes itself at http://www.gromacs.org as a versatile package to perform molecular dynamics, i.e. to simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
Running GROMACS on CSD3¶
GROMACS is supported on each of the skylake, pascal and KNL hardware partitions on CSD3. As GROMACS is carefully designed to make the best use of the available hardware, we provide separate builds of the key mdrun component for each type of hardware.
To load the most recent build of GROMACS use:
module load gromacs/2019.3
which will make available the gmx front-end as well as various mdrun binaries. There are mdrun variants for skylake, pascal and KNL (named mdrun_skylake etc.), each provided without and with a _d suffix for single and double precision respectively.
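To check which mdrun variants a module provides, you can list the commands it puts on your PATH; a minimal sketch using the bash compgen builtin:

module load gromacs/2019.3
# List all available commands whose names start with "mdrun"
compgen -c mdrun | sort -u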
A sample job script to run GROMACS:
#!/bin/bash
#SBATCH --account MYACCOUNT-CPU
#SBATCH --partition skylake
#SBATCH --nodes 2
#SBATCH --ntasks 64
#SBATCH --time 02:00:00
module purge
module load rhel7/default-peta4 gromacs/2019.3
MDRUN=mdrun_$SLURM_JOB_PARTITION
mpirun $MDRUN -v -deffnm em
where the correct mdrun binary for the Slurm partition is determined automatically from the SLURM_JOB_PARTITION environment variable. This will read from an input file em.tpr, which can be prepared with the other GROMACS tools.
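For completeness, a run input such as em.tpr is normally generated beforehand with gmx grompp; a minimal sketch, where em.mdp, conf.gro and topol.top are hypothetical names for your run parameters, starting coordinates and topology:

# Preprocess parameters, coordinates and topology into a portable run input file
gmx grompp -f em.mdp -c conf.gro -p topol.top -o em.tpr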
To run on GPU the job script is almost identical:
#!/bin/bash
#SBATCH --account MYACCOUNT-GPU
#SBATCH --partition pascal
#SBATCH --nodes 2
#SBATCH --ntasks 8
#SBATCH --gres=gpu:4
#SBATCH --time 02:00:00
module purge
module load rhel7/default-gpu gromacs/2019.3
MDRUN=mdrun_$SLURM_JOB_PARTITION
mpirun $MDRUN -v -deffnm em
where we have modified the directives to Slurm to ask for 4 GPUs per node and a single MPI task per GPU, and have loaded the default GPU environment (rhel7/default-gpu) rather than the default Intel environment (rhel7/default-peta4).
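If you want to be explicit about what mdrun offloads to the GPUs, it accepts assignment options on the command line; a minimal sketch assuming the 2019-series option names, with the thread count a placeholder you should match to the cores on your node:

# Put short-range non-bonded work on the GPUs; -ntomp sets OpenMP threads per MPI task
mpirun $MDRUN -v -deffnm em -nb gpu -ntomp 3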
Checkpointing GROMACS jobs¶
If a simulation requires more time than the job time limits allowed by our policies, GROMACS supports checkpointing. Please refer to the GROMACS documentation at https://manual.gromacs.org/current/user-guide/managing-simulations.html.
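A common pattern is to have mdrun stop gracefully shortly before the walltime expires and then resubmit from the checkpoint; a minimal sketch using the standard -maxh and -cpi options (the 1.9 hours here is a hypothetical margin chosen to suit the 2-hour scripts above):

# First job: write checkpoints and stop cleanly after at most 1.9 hours
mpirun $MDRUN -v -deffnm em -maxh 1.9
# Resubmitted job: continue from the checkpoint written by the previous run
mpirun $MDRUN -v -deffnm em -cpi em.cpt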