Modules Environment

Software on CSD3 is primarily controlled through the modules environment. By loading and switching modules you control the compilers, libraries and software available.

When compiling or running programs on CSD3 you will need to set up the correct modules to load your compiler and any libraries that are required (e.g. numerical libraries, IO format libraries).

Additionally, if you are compiling parallel applications using MPI (or SHMEM, etc.) then you will need to load one of the MPI environments and use the appropriate compiler wrapper scripts.

By default, the rhel7/default-{system-name} module appropriate to the login node you used is loaded; this in turn loads a set of other modules.

Basic usage of the module command on CSD3 is covered below. For full documentation please see:

Note: The modules provided by the Spack package manager behave differently to those usually encountered in Linux environments. In particular, each module has the versions of dependency libraries hardcoded using RPATH. More information is provided below. You can identify Spack modules as they have a random string of 7 characters at the end of their name, e.g.: boost-1.66.0-intel-17.0.4-2xrjal4.

Using the modules environment

Information on the available modules

Finding out which modules (and hence which compilers, libraries and software) are available on the system is performed using the module avail command:

[user@system ~]$ module avail

This will list the names and versions of all the modules available on the service. Not all of them may work for your account, however, due to, for example, licensing restrictions. You will notice that many modules have more than one version, each identified by a version number. One of these versions is the default; as the service develops, the default version will change.
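To list only the versions of a particular package, you can pass its name to module avail; for example (using the pgi modules mentioned later on this page):

```shell
# List only the available versions of the pgi modules
[user@system ~]$ module avail pgi
```
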

How to redirect module output

If using the BASH shell you may redirect standard error (stderr) to standard output (stdout) using the 2>&1 syntax. This enables you to search for specific items in the available modules:

#The below command will list all available modules with the phrase 'python-3.6' in them
[user@system ~]$ module avail 2>&1 | grep -i python-3.6

Identifying currently loaded modules

The simple module list command will give the names and versions of the modules you presently have loaded in your environment:

[user@login-e-11 ~]$ module list
Currently Loaded Modulefiles:
1) slurm                          6) turbovnc/2.0.1                11) intel/impi/2017.4/intel       16) intel/bundles/complib/2017.4
2) rhel7/global                   7) vgl/2.5.1/64                  12) intel/libs/idb/2017.4         17) rhel7/default-peta4
3) spack/current                  8) singularity/current           13) intel/libs/tbb/2017.4
4) dot                            9) intel/compilers/2017.4        14) intel/libs/ipp/2017.4
5) java/jdk1.8.0_45              10) intel/mkl/2017.4              15) intel/libs/daal/2017.4

Loading, unloading and swapping modules

To load a module, use module add or module load. For example, to load the PGI compiler into the development environment:

module load pgi

This will load the default version of the PGI compilers. If you need a specific version of the module, you can add more information:

module load pgi/2018

will load version 18.1 for you, regardless of the default. If you want to clean up, module remove will remove a loaded module:

module remove pgi

(or module rm pgi or module unload pgi) will unload whatever version of pgi (even if it is not the default) you might have loaded.
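To switch between two versions in a single step you can use module swap, a standard environment modules command; as a sketch, assuming both the default and the 2018 pgi modules exist:

```shell
# Unload the currently loaded pgi module and load the 2018 version in its place
module swap pgi pgi/2018
```
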

Available Compiler Suites

Note: As CSD3 uses dynamic linking by default you will generally also need to load any modules you used to compile your code in your job submission script when you run your code.
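As a minimal sketch of this, a Slurm job submission script for a dynamically linked executable might reload the modules used at compile time before running (the resource requests and the executable name my_program are placeholders for your own values):

```shell
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# Reload the same modules that were loaded when the code was compiled,
# so the dynamic linker can find the right library versions at run time
module load intel/bundles/complib/2017.4

# Run the dynamically linked executable
./my_program
```
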

Intel Compiler Suite

The Intel compiler suite is accessed by loading the intel/bundles/complib/* module. For example:

module load intel/bundles/complib/2017.4

Once you have loaded the module, the compilers are available as:

  • ifort - Fortran
  • icc - C
  • icpc - C++
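For example, serial code in each language can be compiled as follows (the source file names are placeholders for your own files):

```shell
# Compile serial code with the Intel compilers
ifort -o prog_f  prog.f90   # Fortran
icc   -o prog_c  prog.c     # C
icpc  -o prog_cc prog.cpp   # C++
```
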

C++ with Intel Compilers

Intel compilers rely on GCC C++ headers and libraries to support more recent C++11 features. If you are using the Intel compilers to compile C++ on CSD3 you should also load a gcc 5.4.0 module to have access to the correct C++ files:

module load gcc-5.4.0-gcc-4.8.5-fis24gg

Note: You will also need to load this module in your job submission scripts when running code compiled in this way.

GCC Compiler Suite

The GCC compiler suite is accessed by loading the gcc module. For example:

module load gcc-7.2.0-gcc-4.8.5-pqn7o2k

Once you have loaded the module, the compilers are available as:

  • gfortran - Fortran
  • gcc - C
  • g++ - C++

PGI Compiler Suite

The Portland Group (PGI) compilers are available under the pgi modules. For example:

module load pgi/2018

Once you have loaded the module, the compilers are available as:

  • pgfortran - Fortran
  • pgcc - C
  • pg++ - C++

Compiling MPI codes

There are two preferred MPI libraries currently available on CSD3:

  • Intel MPI
  • OpenMPI

Using Intel MPI

To compile MPI code with Intel MPI, using any compiler, you must first load the intel/bundles/complib/2017.4 module (which on the cpu and knl machines is loaded by default as part of the rhel7/default-peta4 module):

module load intel/bundles/complib/2017.4

This makes the compiler wrapper scripts available to you. To change the underlying compiler, use the I_MPI_ environment variables:

Language   Intel (default)    GCC                   PGI
Fortran    I_MPI_F90=ifort    I_MPI_F90=gfortran    I_MPI_F90=pgfortran
C++        I_MPI_CXX=icpc     I_MPI_CXX=g++         I_MPI_CXX=pg++
C          I_MPI_CC=icc       I_MPI_CC=gcc          I_MPI_CC=pgcc
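For example, to build an MPI C code with GCC underneath the Intel MPI wrapper (prog.c is a placeholder for your own source file, and the gcc module name is the Spack-suffixed example from earlier on this page):

```shell
# Load Intel MPI (via the compiler/library bundle) and a GCC module
module load intel/bundles/complib/2017.4
module load gcc-7.2.0-gcc-4.8.5-pqn7o2k

# Tell the Intel MPI wrapper to use gcc as the underlying C compiler
export I_MPI_CC=gcc
mpicc -o prog_mpi prog.c
```
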

Further details on using the different compiler suites with Intel MPI are available in the following sections.