Modules Environment
===================

Software on CSD3 is primarily controlled through the *modules* environment. By loading and switching modules you control the compilers, libraries and software available.

When compiling or running programs on CSD3 you will need to set up the correct modules to load your compiler and any libraries that are required (e.g. numerical libraries, IO format libraries).

Additionally, if you are compiling parallel applications using MPI (or SHMEM, etc.) then you will need to load one of the MPI environments and use the appropriate compiler wrapper scripts.

By default, the ``rhel/default-{system-name}`` module is loaded, depending on the login node you used, and this in turn loads a set of other modules.

Basic usage of the ``module`` command on CSD3 is covered below. For full documentation please see:

- `Linux manual page on modules `__

**Note:** The modules provided by the `Spack `__ package manager behave differently from those usually encountered in Linux environments. In particular, each module has the versions of its dependency libraries hardcoded using RPATH. More information is provided below. You can identify Spack modules by the random string of 7 characters at the end of their name, e.g. ``boost-1.66.0-intel-17.0.4-2xrjal4``.

Using the modules environment
-----------------------------

Information on the available modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Finding out which modules (and hence which compilers, libraries and software) are available on the system is done with the ``module avail`` command::

    [user@system ~]$ module avail

This will list the names and versions of all the modules available on the service. Not all of them may work for your account, however; for example, some have licensing restrictions. You will notice that many modules have more than one version, each identified by a version number. One of these versions is the default; as the service develops, the default version will change.

How to redirect module output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are using the bash shell you may redirect standard error (**stderr**) to standard output (**stdout**) using the ``2>&1`` syntax (``module`` writes its output to **stderr**, which ``grep`` would otherwise not see). This enables you to search the available modules for specific items::

    # The command below lists all available modules with the phrase 'python-3.6' in them
    [user@system ~]$ module avail 2>&1 | grep -i python-3.6
    python-3.6.1-gcc-5.4.0-23fr5u4
    python-3.6.1-gcc-5.4.0-64u3a4w
    python-3.6.1-gcc-5.4.0-vag3zpv
    python-3.6.1-gcc-5.4.0-xk7ym4l
    python-3.6.2-gcc-5.4.0-me5fsee
    python-3.6.2-intel-17.0.4-lv2lxsb

Identifying currently loaded modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The simple ``module list`` command will give the names and versions of the modules you presently have loaded in your environment::

    [user@login-e-11 ~]$ module list
    Currently Loaded Modulefiles:
      1) slurm                6) turbovnc/2.0.1            11) intel/impi/2017.4/intel   16) intel/bundles/complib/2017.4
      2) rhel7/global         7) vgl/2.5.1/64              12) intel/libs/idb/2017.4     17) rhel7/default-peta4
      3) spack/current        8) singularity/current       13) intel/libs/tbb/2017.4
      4) dot                  9) intel/compilers/2017.4    14) intel/libs/ipp/2017.4
      5) java/jdk1.8.0_45    10) intel/mkl/2017.4          15) intel/libs/daal/2017.4

Loading, unloading and swapping modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To load a module, use ``module add`` or ``module load``. For example, to load the PGI compiler into the development environment::

    module load pgi

This will load the default version of the PGI compilers. If you need a specific version of the module, you can add more information::

    module load pgi/2018

will load version 18.1 for you, regardless of the default.

If you want to clean up, ``module remove`` will remove a loaded module::

    module remove pgi

(or ``module rm pgi`` or ``module unload pgi``) will unload whatever version of pgi you have loaded, even if it is not the default. To exchange one loaded module for another in a single step, use ``module swap``, e.g. ``module swap pgi pgi/2018``.

Private and group modules
-------------------------

It is possible to create private modules using the following procedure:

1. Create a folder named ``privatemodules`` in your home directory::

       cd ~
       mkdir privatemodules

2. Add modulefiles to the ``privatemodules`` folder (a minimal example is sketched after this list).

3. Access the private modules with::

       module load use.own

   This command can be added to your ``.bashrc`` file so that the private modules are available without entering the command every time you log in.

4. List the private modules using the ``module avail`` command. The private modules will be shown below the other, global module lists.
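As a minimal sketch of step 2, the following creates a hypothetical modulefile ``mytool/1.0`` that adds a privately installed tool to your paths. The module name and all installation paths below are placeholders to adapt to your own software::

    # Create a hypothetical modulefile 'mytool/1.0' (all paths are examples)
    cd ~/privatemodules
    mkdir mytool
    cat > mytool/1.0 <<'EOF'
    #%Module1.0
    ## Example private modulefile: expose a locally installed tool
    prepend-path PATH            /home/user/software/mytool/1.0/bin
    prepend-path LD_LIBRARY_PATH /home/user/software/mytool/1.0/lib
    EOF

After ``module load use.own``, this appears as ``mytool/1.0`` in the ``module avail`` output and can be loaded like any other module.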
To create group modules, create a folder containing the modulefiles in an RDS location that is available to all the members of the group. The program files that the modules point to must also live in RDS storage. Both of these folders, and the files they contain, must be readable by all users. Each user in the group must then create a symbolic link to the folder holding the group modulefiles. An example of this command is shown below; please change the path to the absolute path of the folder containing your group's modules::

    cd ~/privatemodules
    ln -s /rds/my-groups-project/my-id/groupmodules/ groupmodules

Available Compiler Suites
-------------------------

**Note:** As CSD3 uses dynamic linking by default, when you run your code you will generally also need to load, in your job submission script, any modules you used to compile it.
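As an illustrative sketch of this, a minimal SLURM submission script might reload the compile-time modules before launching the program. The job name, project account, partition and executable below are all placeholders; the two modules loaded are those mentioned elsewhere on this page::

    #!/bin/bash
    #SBATCH -J myjob                 # job name (placeholder)
    #SBATCH -A MYPROJECT             # your project account (placeholder)
    #SBATCH -p cclake                # partition (placeholder)
    #SBATCH --nodes=1
    #SBATCH --time=00:10:00

    # Load the same modules that were loaded when the code was compiled,
    # so the dynamic linker can find the matching library versions.
    module purge
    module load rhel7/default-ccl
    module load intel/bundles/complib/2020.2

    ./my_program                     # placeholder executable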
Intel Compiler Suite
~~~~~~~~~~~~~~~~~~~~

The Intel compiler suite is accessed by loading the ``intel/bundles/complib/*`` module. For example::

    module load intel/bundles/complib/2017.4

Once you have loaded the module, the compilers are available as:

* ``ifort`` - Fortran
* ``icc`` - C
* ``icpc`` - C++

C++ with Intel Compilers
^^^^^^^^^^^^^^^^^^^^^^^^

Intel compilers rely on GCC C++ headers and libraries to support more recent C++11 features. If you are using Intel compilers to compile C++ on CSD3 you should also load the gcc/5.4.0 module to have access to the correct C++ files::

    module load gcc-5.4.0-gcc-4.8.5-fis24gg

**Note:** You will also need to load this module in your job submission scripts when running code compiled in this way.

GCC Compiler Suite
~~~~~~~~~~~~~~~~~~

The GCC compiler suite is accessed by loading the ``gcc`` module. For example::

    module load gcc-7.2.0-gcc-4.8.5-pqn7o2k

Once you have loaded the module, the compilers are available as:

* ``gfortran`` - Fortran
* ``gcc`` - C
* ``g++`` - C++

PGI Compiler Suite
~~~~~~~~~~~~~~~~~~

The Portland Group (PGI) compilers are available under the ``pgi`` modules. For example::

    module load pgi/2018

Once you have loaded the module, the compilers are available as:

* ``pgfortran`` - Fortran
* ``pgcc`` - C
* ``pg++`` - C++

Compiling MPI codes
-------------------

There are two preferred MPI libraries currently available on CSD3:

* Intel MPI
* OpenMPI

Using Intel MPI
~~~~~~~~~~~~~~~

To compile MPI code with Intel MPI, using any compiler, you must first load the ``intel/bundles/complib/2020.2`` module (which on the cclake CPU nodes is loaded by default as part of the ``rhel7/default-ccl`` module)::

    module load intel/bundles/complib/2020.2

This makes the compiler wrapper scripts available to you. To change the underlying compiler, use the ``I_MPI_`` environment variables:

+----------+-----------------+--------------------+---------------------+
| Language | Intel (default) | GCC                | PGI                 |
+==========+=================+====================+=====================+
| Fortran  | I_MPI_F90=ifort | I_MPI_F90=gfortran | I_MPI_F90=pgfortran |
+----------+-----------------+--------------------+---------------------+
| C++      | I_MPI_CXX=icpc  | I_MPI_CXX=g++      | I_MPI_CXX=pg++      |
+----------+-----------------+--------------------+---------------------+
| C        | I_MPI_CC=icc    | I_MPI_CC=gcc       | I_MPI_CC=pgcc       |
+----------+-----------------+--------------------+---------------------+

Further details on using the different compiler suites with Intel MPI are available in the following sections.
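In the meantime, as a minimal sketch of how the wrapper scripts and these variables combine (the source file ``hello_mpi.c`` is a placeholder, and this assumes the wrappers default to the Intel compilers as the table above indicates)::

    # Load the Intel MPI environment (loaded by default on the cclake nodes)
    module load intel/bundles/complib/2020.2

    # Compile with the MPI wrapper; the underlying compiler is Intel (icc)
    # by default, per the table above
    mpicc -o hello_mpi hello_mpi.c

    # Override the underlying compiler for a single build, e.g. GCC
    I_MPI_CC=gcc mpicc -o hello_mpi hello_mpi.c

The Fortran and C++ wrappers behave analogously via ``I_MPI_F90`` and ``I_MPI_CXX``.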