LAMMPS Molecular Dynamics Simulator
From the LAMMPS website (http://lammps.sandia.gov):
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.
LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. Many of its models have versions that provide accelerated performance on CPUs, GPUs, and Intel Xeon Phis. The code is designed to be easy to modify or extend with new functionality.
LAMMPS is distributed as an open source code under the terms of the GPL. The current version can be downloaded here. Links are also included to older F90/F77 versions. Periodic releases are also available on SourceForge.
LAMMPS is distributed by Sandia National Laboratories, a US Department of Energy laboratory. The main authors of LAMMPS are listed on this page along with contact info and other contributors. Funding for LAMMPS development has come primarily from DOE (OASCR, OBER, ASCI, LDRD, Genomes-to-Life) and is acknowledged here.
Building LAMMPS on CSD3
As LAMMPS has so many possible configurations, we normally find it more useful to help users compile their specific setup than to provide a global install. For example, to build a configuration of LAMMPS that supports parallel computation on CPUs using the Reax force-field:
```shell
# download LAMMPS
git clone --depth=1 https://github.com/lammps/lammps.git
cd lammps
# load the CSD3 modules for CPU/KNL architectures
module purge
module load rhel7/default-peta4
# switch to the src directory
cd src
# activate the user-reaxc package
make yes-user-reaxc
# compile lammps with mpi support
make mpi
```
This will produce an executable lmp_mpi in that directory, which you can then use with the reax/c pair style.
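As a sketch of how the resulting binary could then be submitted through Slurm (the account name, partition, time limit and core counts here are illustrative placeholders, not verified CSD3 defaults):

```shell
#!/bin/bash
# hypothetical sbatch script for the CPU (Peta4) build above;
# MYACCOUNT, the partition and the node/task counts are placeholders
#SBATCH -A MYACCOUNT
#SBATCH -p skylake
#SBATCH -t 2:00:00
#SBATCH -N 2
#SBATCH -n 64

module purge
module load rhel7/default-peta4

app="$HOME/lammps/src/lmp_mpi"
mpirun $app -i lammps.in
```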
Some LAMMPS subpackages support optimised versions for the specific hardware on CSD3. For example the EAM force-field has support for CPU, KNL and GPU specific optimisations. To build a configuration that supports running EAM calculations on the Peta4-KNL system:
```shell
# download LAMMPS
git clone --depth=1 http://github.com/lammps/lammps
cd lammps
# load the CSD3 modules for CPU/KNL architectures
module purge
module load rhel7/default-peta4
# switch to the src directory
cd src
# activate support for EAM forcefields
make yes-manybody
# activate intel-optimised support for intel architectures
make yes-user-intel
# build lammps for intel CPU architectures
make intel_cpu_intelmpi
# build lammps for the intel KNL architecture
make knl
```
This will produce executables lmp_intel_cpu_intelmpi and lmp_knl for the CPU and KNL architectures respectively. To build a configuration that supports running EAM calculations on the Wilkes2 GPU system:
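A submission script for the KNL build might look along the following lines; the knl partition name, account and task count are assumed placeholders rather than verified CSD3 settings:

```shell
#!/bin/bash
# hypothetical sbatch script for the Peta4-KNL build;
# MYACCOUNT, the partition and the task counts are placeholders
#SBATCH -A MYACCOUNT
#SBATCH -p knl
#SBATCH -t 8:00:00
#SBATCH -N 1
#SBATCH -n 64

module purge
module load rhel7/default-peta4

app="$HOME/lammps/src/lmp_knl"
mpirun $app -i lammps.in
```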
```shell
# download LAMMPS
git clone --depth=1 http://github.com/lammps/lammps
cd lammps
# load the CSD3 modules for the GPU architecture
module purge
module load rhel7/default-gpu
# build the gpu support library with P100 support
pushd lib/gpu
export CUDA_HOME=$CUDA_INSTALL_PATH
sed -i 's/CUDA_ARCH =.*/CUDA_ARCH = -arch=sm_60/' Makefile.linux
make -j -f Makefile.linux
popd
pushd src
# activate support for EAM forcefields
make yes-manybody
# activate support for GPUs
make yes-gpu
# build lammps with mpi support
make -j mpi
popd
```
This will also produce an executable lmp_mpi in the src directory, which you can use with the eam pair style.
LAMMPS can be run with an sbatch script similar to:
```shell
#!/bin/bash
#SBATCH -A MYACCOUNT
#SBATCH -p pascal
#SBATCH -t 8:00:00
#SBATCH -N 4
#SBATCH -n 16

module purge
module load rhel7/default-gpu

app="$HOME/lammps/src/lmp_mpi"
mpirun $app -sf gpu -pk gpu 4 -i lammps.in
```
where we are using an input file lammps.in to run on the GPU system, making use of 16 GPUs across 4 compute nodes. To use more or fewer GPUs the -N and -n options should be changed, bearing in mind that our GPU compute nodes have 4 GPUs per node. As always, the -A option should be changed to your specific slurm account, and the job time limit can be adjusted with the -t option.
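The relationship between the -N and -n values can be kept consistent with a small calculation, assuming one MPI task per GPU and the 4-GPUs-per-node figure above:

```shell
# derive the -n (total task/GPU) value from the node count,
# assuming one MPI task per GPU and 4 GPUs per node as stated above
NODES=4
GPUS_PER_NODE=4
NTASKS=$((NODES * GPUS_PER_NODE))
echo "$NTASKS"   # prints 16
```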