LAMMPS Molecular Dynamics Simulator
===================================

From the LAMMPS website http://www.lammps.org

LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It's an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. Many of its models have versions that provide accelerated performance on CPUs, GPUs, and Intel Xeon Phis. The code is designed to be easy to modify or extend with new functionality.

Building LAMMPS on CSD3
-----------------------

Because LAMMPS has so many possible configurations, we normally find it more useful to help users compile their specific setup than to provide a global install.

For example, to build a configuration of LAMMPS that supports parallel computation on CPUs using the Reax force field:

.. literalinclude:: scripts/lammps/lammps_setup_example1.sh
   :language: bash

This produces an executable ``lmp`` in that directory, which you can then use with the Reax pair style.

Some LAMMPS subpackages provide versions optimised for the specific hardware on CSD3. For example, the EAM force field has CPU- and GPU-specific optimisations. To build a configuration that supports running EAM calculations on CPUs:

.. literalinclude:: scripts/lammps/lammps_setup_example2.sh
   :language: bash

This produces the executable ``lmp`` built for the Intel CPU architecture.

To build a configuration that supports running EAM calculations on the Ampere (GPU) partition:

.. literalinclude:: scripts/lammps/lammps_setup_gpu.sh
   :language: bash

This also produces an executable ``lmp`` in the ``build`` directory, which you can use with the ``eam`` pair style.

Running LAMMPS
--------------

LAMMPS can be run with an sbatch script similar to:

.. literalinclude:: scripts/lammps/lammps_gpu_sbatch
   :language: bash

Here we use an input file ``in.colloid`` on the GPU system, making use of all 4 GPUs on 1 compute node and running 2 MPI tasks per GPU thanks to the `CUDA Multi-Process Service (MPS)`_ wrapper script detailed inside. This wrapper script allows multiple MPI processes to run efficiently and concurrently on a single GPU by exploiting the Hyper-Q capability of recent NVIDIA cards, which optimises GPU usage when each MPI process only uses a fraction of the available GPU memory (80 GB on the Ampere cards). Without MPS, the MPI processes sharing a GPU run sequentially, wasting GPU time.

To use more or fewer GPUs, change the ``-N`` and ``-n`` options, bearing in mind that our GPU compute nodes have 4 GPUs per node. As always, the ``-A`` option should be changed to your specific Slurm account, and the job time limit can be adjusted with the ``-t`` option.
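For orientation only, the following is a minimal sketch of what such a GPU submission script might look like. The account, partition, module and MPS wrapper-script names used here are assumptions for illustration; the ``lammps_gpu_sbatch`` script above remains the reference version for CSD3.

.. code-block:: bash

   #!/bin/bash
   # Hypothetical example only -- the account, partition, module and
   # wrapper-script names below are assumptions, not the contents of
   # lammps_gpu_sbatch.
   #SBATCH -A MYACCOUNT-GPU        # replace with your Slurm account (-A)
   #SBATCH -p ampere               # assumed name of the GPU (Ampere) partition
   #SBATCH -N 1                    # 1 compute node with 4 GPUs
   #SBATCH -n 8                    # 8 MPI tasks = 2 tasks per GPU
   #SBATCH --gres=gpu:4            # request all 4 GPUs on the node
   #SBATCH -t 01:00:00             # job time limit (-t)

   module purge
   module load rhel8/default-amp   # assumed GPU-node environment module

   # mps-wrapper.sh (assumed name) starts the CUDA MPS daemon so that the
   # 2 MPI ranks assigned to each GPU can run on it concurrently.
   mpirun -np ${SLURM_NTASKS} ./mps-wrapper.sh \
       ./lmp -sf gpu -pk gpu 4 -in in.colloid

The ``-sf gpu -pk gpu 4`` flags ask LAMMPS to use the GPU-suffixed styles with 4 GPUs per node; they only take effect if the executable was built with the GPU package, as in the Ampere build above.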
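Similarly, the setup scripts referenced in the build section typically amount to a short CMake build of LAMMPS with the relevant package enabled. The sketch below only indicates the general shape of such a build; the module name, LAMMPS branch and package flag are assumptions and may differ from the CSD3 scripts.

.. code-block:: bash

   # Hypothetical sketch -- module name and package selection are assumptions;
   # use the lammps_setup_*.sh scripts above on CSD3.
   module purge
   module load rhel8/default-icl   # assumed Intel CPU environment module

   git clone -b stable https://github.com/lammps/lammps.git
   cd lammps
   mkdir build && cd build

   # Enable the ReaxFF package and build an MPI-parallel CPU executable;
   # other force fields (e.g. EAM) are enabled with the corresponding PKG_* flag.
   cmake ../cmake -D BUILD_MPI=on -D PKG_REAXFF=on
   cmake --build . -j 8            # produces ./lmp in the build directory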