Spack Package Manager


Introduction to Spack

Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputer centers, where many users and application teams share common installations of software on clusters with exotic architectures, using non-standard libraries.

Spack is non-destructive: installing a new version does not break existing installations. In this way several configurations can coexist on the same system.

Most importantly, Spack is simple. It offers a simple spec syntax so that users can specify versions and configuration options concisely. Spack is also simple for package authors: package files are written in pure Python, and specs allow package authors to maintain a single file for many different builds of the same package.

Note: These instructions are intended to guide you on how to use Spack on the FAS RC Cannon cluster.

Installation and Setup

Spack works out of the box. Simply clone Spack to get going. In this example, we will clone the latest version of Spack.

Note: Spack can be installed in your home or lab space. For best performance and efficiency, we recommend installing Spack in your lab directory, e.g., /n/holylabs/<PI_LAB>/Lab/software or other lab storage if holylabs is not available.

$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
Cloning into 'spack'...
remote: Enumerating objects: 686304, done.
remote: Counting objects: 100% (1134/1134), done.
remote: Compressing objects: 100% (560/560), done.
remote: Total 686304 (delta 913), reused 573 (delta 569), pack-reused 685170 (from 5)
Receiving objects: 100% (686304/686304), 231.28 MiB | 43.53 MiB/s, done.
Resolving deltas: 100% (325977/325977), done.
Updating files: 100% (1709/1709), done.

This will create the spack folder in the current directory. Next, go to this directory and add Spack to your path. Spack has some nice command-line integration tools, so instead of simply appending to your PATH variable, source the Spack setup script.

$ cd spack/
$ source share/spack/setup-env.sh
$ spack --version
1.0.0.dev0 (3b00a98cc8e8c1db33453d564f508928090be5a0)

Your version will likely differ because Spack is updated frequently on GitHub.

Group Permissions

By default, Spack will match your usual file permissions, which typically are set up without group write permission. For lab-wide installs of Spack, though, you will want to ensure that group write is enforced. You can set this by going to the etc/spack directory in your Spack installation and adding a file called packages.yaml (or editing the existing one) with the following contents. Example for the jharvard_lab (substitute jharvard_lab with your own lab):

packages:
  all:
    permissions:
      write: group
      group: jharvard_lab

Default Architecture

By default, Spack will autodetect the architecture of your underlying hardware and build software to match it. However, in cases where you are running on heterogeneous hardware, it is best to use a more generic flag. You can set this by editing the file etc/spack/packages.yaml located inside the spack folder (if you don’t have the file etc/spack/packages.yaml, you can create it). Add the following contents:

packages:
  all:
    target: [x86_64]

Relocating Spack

Once your Spack installation has been set up, it cannot be easily moved. Some packages hardcode absolute paths into their installations, and these cannot be changed without rebuilding the packages. As a result, simply copying the Spack directory will not actually relocate a working Spack installation.

If you need to keep the exact same software stack in a new location, the easiest approach is to first create a Spack environment containing all the software you need. You can then export that environment, much as you would a conda environment, and use the exported environment file to rebuild the stack in the new location.
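
For example, a minimal sketch of this workflow, assuming a managed environment named myenv (the environment name and destination path are placeholders):

# On the old system: concretize the environment and save its lock file
$ spack env activate myenv
$ spack concretize -f
$ cp $(spack location -e myenv)/spack.lock /some/safe/location/

# On the new system: recreate the environment from the lock file and rebuild
$ spack env create myenv /some/safe/location/spack.lock
$ spack env activate myenv
$ spack install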

Available Spack Packages

The spack list command displays the available packages, e.g.,

$ spack list
==> 6752 packages
<omitted output>

NOTE: You can also look for available spack packages at https://packages.spack.io

The spack list command can also take a query string. Spack automatically adds wildcards to both ends of the string, or you can add your own wildcards. For example, we can view all available Python packages.

# with wildcard at both ends of the strings
$ spack list py
==> 1979 packages
<omitted output>

# add your own wildcard: here, list packages that start with py
$ spack list 'py-*'
==> 1960 packages.
<omitted output>

You can also look for specific packages, e.g.,

$ spack list lammps
==> 1 packages.
lammps

You can display available software versions, e.g.,

$ spack versions lammps
==> Safe versions (already checksummed):
  master    20211214  20210929.2  20210929  20210831  20210728  20210514  20210310  20200721  20200505  20200227  20200204  20200109  20191030  20190807  20181212  20181127  20181109  20181010  20180905  20180822  20180316  20170922
  20220107  20211027  20210929.1  20210920  20210730  20210702  20210408  20201029  20200630  20200303  20200218  20200124  20191120  20190919  20190605  20181207  20181115  20181024  20180918  20180831  20180629  20180222  20170901
==> Remote versions (not yet checksummed):
  1Sep2017

Note: for the spack versions command, the package name needs to match exactly. For example, spack versions lamm will not be found:

$ spack versions lamm
==> Error: Package 'lamm' not found.
You may need to run 'spack clean -m'.

Installing Packages

Installing packages with Spack is very straightforward. To install a package, simply type spack install PACKAGE_NAME. Large packages with multiple dependencies can take significant time to install, thus we recommend doing this in a screen/tmux session or an Open OnDemand Remote Desktop session.
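
For example, a quick sketch of running a long install inside tmux (the session name is arbitrary):

$ tmux new -s spack-build
$ spack install PACKAGE_NAME    # long builds keep running even if your connection drops
# detach with Ctrl-b d; reattach later with: tmux attach -t spack-build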

To install the latest version of a package, type:

$ spack install bzip2

To install a specific version (1.0.8) of bzip2, add @ and the version number you need:

$ spack install bzip2@1.0.8

Here we installed a specific version (1.0.8) of bzip2. The installed packages can be displayed by the command spack find:

$ spack find
-- linux-rocky8-icelake / gcc@8.5.0 -----------------------------
bzip2@1.0.8  diffutils@3.8  libiconv@1.16
==> 3 installed packages

One can also request that Spack use a specific compiler flavor/version to install packages, e.g.,

$ spack install zlib@1.2.13%gcc@8.5.0

To specify the desired compiler, one uses the % sigil.

The @ sigil is used to specify versions, both of packages and of compilers, e.g.,

$ spack install zlib@1.2.8
$ spack install zlib@1.2.8%gcc@8.5.0

Finding External Packages

Spack will normally build its own package stack, even if some of the libraries are already available as part of the operating system. If you want Spack to build against system libraries instead of building its own, you need to have it discover which libraries are available natively on the system. You can do this with the spack external find command.

$ spack external find
==> The following specs have been detected on this system and added to /n/home/jharvard/.spack/packages.yaml
autoconf@2.69    binutils@2.30.117  curl@7.61.1    findutils@4.6.0  git@2.31.1   groff@1.22.3   m4@1.4.18      openssl@1.1.1k  tar@1.30
automake@1.16.1  coreutils@8.30     diffutils@3.6  gawk@4.2.1       gmake@4.2.1  libtool@2.4.6  openssh@8.0p1  pkgconf@1.4.2   texinfo@6.5

This even works with modules loaded from other package managers; you simply have to load them prior to running the find command. After these have been added to Spack, Spack will try to use them in future builds, where it can, rather than installing its own versions.
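
For example, a hypothetical sketch of detecting a module-provided CMake (the module name is an assumption; substitute a module that exists on your system):

$ module load cmake
$ spack external find cmake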

Using an Lmod module in Spack

Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the package configuration YAML file ~/.spack/packages.yaml, e.g.,

vi ~/.spack/packages.yaml

Each package section in this file is similar to the below:

packages:
  package1:
    # settings for package1
  package2:
    # settings for package2
  fftw:
    externals:
    - spec: fftw@3.3.10
      prefix: /n/sw/helmod-rocky8/apps/MPI/gcc/14.2.0-fasrc01/openmpi/5.0.5-fasrc01/fftw/3.3.10-fasrc01
    buildable: false

To obtain the prefix of a module that will be used in Spack, find the module’s <MODULENAME>_HOME.

Let's say you would like to use the fftw/3.3.10-fasrc01 module instead of building FFTW with Spack. You can find its <MODULENAME>_HOME with:

$ echo $FFTW_HOME
/n/sw/helmod-rocky8/apps/MPI/gcc/14.2.0-fasrc01/openmpi/5.0.5-fasrc01/fftw/3.3.10-fasrc01

Alternatively, you can find <MODULENAME>_HOME with

$ module display fftw/3.3.10-fasrc01
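
Once the external is defined in ~/.spack/packages.yaml, you can optionally verify that Spack will reuse it by inspecting the concretized spec; it should resolve to the external installation rather than a new build:

$ spack spec fftw@3.3.10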

Uninstalling Packages

Spack provides an easy way to uninstall packages with the spack uninstall PACKAGE_NAME command, e.g.,

$ spack uninstall zlib@1.2.13%gcc@8.5.0
==> The following packages will be uninstalled:

    -- linux-rocky8-icelake / gcc@8.5.0 -----------------------------
    xlt7jpk zlib@1.2.13

==> Do you want to proceed? [y/N] y
==> Successfully uninstalled zlib@1.2.13%gcc@8.5.0+optimize+pic+shared build_system=makefile arch=linux-rocky8-icelake/xlt7jpk

Note: The recommended way of uninstalling packages is by specifying the full package name, including the package version and the compiler flavor and version that were used to install the package in the first place.
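
Alternatively, when several installs share the same package name, you can single one out by the hash shown in the uninstall prompt or in spack find -l, e.g., using the hash from the output above:

$ spack uninstall /xlt7jpk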

Using Installed Packages

There are several different ways to use Spack packages once you have installed them. The easiest way is to use spack load PACKAGE_NAME to load and spack unload PACKAGE_NAME to unload packages, e.g.,

$ spack load bzip2
$ which bzip2
/home/spack/opt/spack/linux-rocky8-icelake/gcc-8.5.0/bzip2-1.0.8-aohgpu7zn62kzpanpohuevbkufypbnff/bin/bzip2

The loaded packages can be listed with spack find --loaded, e.g.,

$ spack find --loaded
-- linux-rocky8-icelake / gcc@8.5.0 -----------------------------
bzip2@1.0.8  diffutils@3.8  libiconv@1.16
==> 3 loaded packages

If you no longer need the loaded packages, you can unload them with:

$ spack unload
$ spack find --loaded
==> 0 loaded packages
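
Outside an interactive shell, e.g., in a batch job, the same pattern works once the Spack setup script has been sourced; a minimal sketch:

### NOTE: Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack/share/spack/setup-env.sh
spack load bzip2
bzip2 --version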

Configuration

Compiler Configuration

On the cluster, we support a set of core compilers, such as the GNU (GCC) compiler suite, Intel, and PGI, provided through software modules.

Spack has the ability to build packages with multiple compilers and compiler versions. This can be particularly useful if a package needs to be built with a specific compiler and compiler version. You can display the available compilers with the spack compiler list command.

If you have never used Spack, you will likely have no compiler listed (see Add GCC compiler section below for how to add compilers):

$ spack compiler list
==> No compilers available. Run `spack compiler find` to autodetect compilers

If you have used Spack before, you may see system-level compilers provided by the operating system (OS) itself:

$ spack compiler list
==> Available compilers
-- gcc rocky8-x86_64 --------------------------------------------
[e]  gcc@8.5.0

-- llvm rocky8-x86_64 -------------------------------------------
[e]  llvm@19.1.7

You can easily add compilers to Spack by loading the appropriate software modules, running the spack compiler find command, and editing the ~/.spack/packages.yaml configuration file. For instance, if you need GCC version 14.2.0, do the following:

Load the required software module

$ module load gcc/14.2.0-fasrc01
$ which gcc
/n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gcc

Add GCC compiler version to the spack compilers

$ spack compiler find
==> Added 1 new compiler to /n/home01/jharvard/.spack/packages.yaml
    gcc@14.2.0
==> Compilers are defined in the following files:
    /n/home01/jharvard/.spack/packages.yaml

If you run spack compiler list again, you will see that the new compiler has been added to the compiler list, e.g.,

$ spack compiler list
==> Available compilers
-- gcc rocky8-x86_64 --------------------------------------------
[e]  gcc@8.5.0  [e]  gcc@14.2.0

-- llvm rocky8-x86_64 -------------------------------------------
[e]  llvm@19.1.7

Note: By default, spack does not fill in the modules: field in the ~/.spack/packages.yaml file. If you are using a compiler from a module, then you should add this field manually.

Edit manually the compiler configuration file

Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the compiler configuration YAML file ~/.spack/packages.yaml, e.g.,

vi ~/.spack/packages.yaml

Each compiler is defined as a package in ~/.spack/packages.yaml. Below, you can see gcc 14.2.0 (from module) and gcc 8.5.0 (from OS) defined:

packages:
  gcc:
    externals:
    - spec: gcc@14.2.0 languages:='c,c++,fortran'
      prefix: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01
      extra_attributes:
        compilers:
          c: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gcc
          cxx: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/g++
          fortran: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gfortran
    - spec: gcc@8.5.0 languages:='c,c++,fortran'
      prefix: /usr
      extra_attributes:
        compilers:
          c: /usr/bin/gcc
          cxx: /usr/bin/g++
          fortran: /usr/bin/gfortran

We have to add the modules: definition for gcc 14.2.0:

packages:
  gcc:
    externals:
    - spec: gcc@14.2.0 languages:='c,c++,fortran'
      prefix: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01
      extra_attributes:
        compilers:
          c: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gcc
          cxx: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/g++
          fortran: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gfortran
      modules: [gcc/14.2.0-fasrc01]

and save the packages.yaml file. If more than one module is required by the compiler, these need to be separated by a semicolon ;.

We can display the configuration of a specific compiler with the spack compiler info command, e.g.,

$ spack compiler info gcc@14.2.0
gcc@=14.2.0 languages:='c,c++,fortran' arch=linux-rocky8-x86_64:
  prefix: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01
  compilers:
    c: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gcc
    cxx: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/g++
    fortran: /n/sw/helmod-rocky8/apps/Core/gcc/14.2.0-fasrc01/bin/gfortran
  modules:
    gcc/14.2.0-fasrc01

Once the new compiler is configured, it can be used to build packages. The below example shows how to install the GNU Scientific Library (GSL) with gcc@14.2.0.

# Check available GSL versions
$ spack versions gsl
==> Safe versions (already checksummed):
  2.8  2.7.1  2.7  2.6  2.5  2.4  2.3  2.2.1  2.1  2.0  1.16
==> Remote versions (not yet checksummed):
  2.2  1.15  1.14  1.13  1.12  1.11  1.10  1.9  1.8  1.7  1.6  1.5  1.4  1.3  1.2  1.1.1  1.1  1.0

# Install GSL version 2.8 with GCC version 14.2.0
$ spack install gsl@2.8%gcc@14.2.0

# Load the installed package
$ spack load gsl@2.8%gcc@14.2.0

# List the loaded package
$ spack find --loaded
-- linux-rocky8-x86_64 / gcc@14.2.0 -----------------------------
gsl@2.8
==> 1 loaded package

MPI Configuration

Many HPC software packages work in parallel using MPI. Although Spack has the ability to install MPI libraries from scratch, the recommended way is to configure Spack to use MPI already available on the cluster as software modules, instead of building its own MPI libraries.

MPI is configured through the packages.yaml file. For instance, if we need OpenMPI version 5.0.5 compiled with GCC version 14, we could follow the below steps to add this MPI configuration:

Determine the MPI location / prefix

$ module load gcc/14.2.0-fasrc01 openmpi/5.0.5-fasrc01
$ echo $MPI_HOME
/n/sw/helmod-rocky8/apps/Comp/gcc/14.2.0-fasrc01/openmpi/5.0.5-fasrc01

Edit manually the packages configuration file

Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the packages configuration YAML file ~/.spack/packages.yaml, e.g.,

$ vi ~/.spack/packages.yaml

Note: If the file ~/.spack/packages.yaml does not exist, you will need to create it.

Include the following contents:

packages:
  openmpi:
    externals:
    - spec: openmpi@5.0.5%gcc@14.2.0
      prefix: /n/sw/helmod-rocky8/apps/Comp/gcc/14.2.0-fasrc01/openmpi/5.0.5-fasrc01
    buildable: false

The option buildable: false ensures that MPI won't be built from source. Instead, Spack will use the MPI provided as a software module at the corresponding prefix.

Once MPI is configured, it can be used to build packages. The below example shows how to install HDF5 version 1.14.6 with openmpi@5.0.5 and gcc@14.2.0.

Note: The module purge command is required; otherwise the build fails.

$ module purge
$ spack install hdf5@1.14.6 % gcc@14.2.0 ^ openmpi@5.0.5
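
To confirm that the external OpenMPI (rather than a Spack-built one) is picked up, you can optionally check how the spec concretizes first:

$ spack spec hdf5@1.14.6 % gcc@14.2.0 ^ openmpi@5.0.5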

Intel MPI Configuration

Here we provide instructions on how to set up Spack to build applications with Intel MPI on the FASRC Cannon cluster. The Intel MPI Library is now included in the Intel oneAPI HPC Toolkit.

Intel Compiler Configuration

The first step involves setting up Spack to use the Intel compiler, which is provided as a software module. This follows a similar procedure to adding the GCC compiler.

Load the required software module

$ module load intel/23.0.0-fasrc01
$ which icc
/n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/icc

Add this Intel compiler version to the spack compilers

$ spack compiler add

If you run the command spack compilers, you will see that the following 3 compilers have been added:

$ spack compilers
...
-- dpcpp rocky8-x86_64 ------------------------------------------
dpcpp@2023.0.0

-- intel rocky8-x86_64 ------------------------------------------
intel@2021.8.0

-- oneapi rocky8-x86_64 -----------------------------------------
oneapi@2023.0.0

Edit manually the compiler configuration file

Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the compiler configuration YAML file ~/.spack/linux/compilers.yaml, e.g.,

$ vi ~/.spack/linux/compilers.yaml

Each -compiler: section in this file is similar to the below:

- compiler:
    spec: intel@2021.8.0
    paths:
      cc: /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/icc
      cxx: /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/icpc
      f77: /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/ifort
      fc: /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/ifort
    flags: {}
    operating_system: rocky8
    target: x86_64
    modules: []
    environment: {}
    extra_rpaths: []

Note: Here we focus specifically on the intel@2021.8.0 compiler as it is required by the Intel MPI Library.

We have to edit the modules: [] line to read

    modules: [intel/23.0.0-fasrc01]

and save the compilers.yaml file.

We can display the configuration of a specific compiler with the spack compiler info command, e.g.,

$ spack compiler info intel@2021.8.0
intel@2021.8.0:
        paths:
                cc = /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/icc
                cxx = /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/icpc
                f77 = /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/ifort
                fc = /n/sw/intel-oneapi-2023/compiler/2023.0.0/linux/bin/intel64/ifort
        modules  = ['intel/23.0.0-fasrc01']
        operating system  = rocky8

Setting up the Intel MPI Library

Use your favorite text editor, e.g., Vim, Emacs, VSCode, etc., to edit the packages configuration YAML file ~/.spack/packages.yaml, e.g.,

$ vi ~/.spack/packages.yaml

Note: If the file ~/.spack/packages.yaml does not exist, you will need to create it.

Include the following contents:

packages:
  intel-oneapi-mpi:
    externals:
    - spec: intel-oneapi-mpi@2021.8.0%intel@2021.8.0
      prefix: /n/sw/intel-oneapi-2023
    buildable: false

Example

Once spack is configured to use Intel MPI, it can be used to build packages with it. The below example shows how to install HDF5 version 1.13.2 with intel@2021.8.0 and intel-oneapi-mpi@2021.8.0.

You can first test this using the spack spec command to show how the spec is concretized:

$ spack spec hdf5@1.13.2%intel@2021.8.0+mpi+fortran+cxx+hl+threadsafe ^ intel-oneapi-mpi@2021.8.0%intel@2021.8.0

Next, you can build it:

$ spack install hdf5@1.13.2%intel@2021.8.0+mpi+fortran+cxx+hl+threadsafe ^ intel-oneapi-mpi@2021.8.0%intel@2021.8.0

Spack Environments

Spack environments are a powerful feature of the Spack package manager that enable users to create isolated and reproducible environments for their software projects. Each Spack environment contains a specific set of packages and dependencies, which are installed in a self-contained directory tree. This means that different projects can have different versions of the same package, without interfering with each other. Spack environments also allow users to share their software environments with others, making it easier to collaborate on scientific projects.

Creating and activating environments

To create a new Spack environment:

$ spack env create myenv
$ spack env activate -p myenv

To deactivate an environment:

$ spack env deactivate

To list available environments:

$ spack env list

To remove an environment:

$ spack env remove myenv
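
A typical workflow inside an environment looks like the following sketch (the environment and package names are just examples):

$ spack env create myenv
$ spack env activate -p myenv
$ spack add gsl@2.8 % gcc@14.2.0    # queue packages for this environment
$ spack install                     # concretize and build everything that was added
$ spack find                        # inside an active environment, lists only its packages
$ spack env deactivate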

For more detailed information about Spack environments, please refer to the Environments Tutorial.

Application Recipes

This section provides step-by-step instructions for installing and configuring specific scientific applications using Spack.

GROMACS with MPI

GROMACS is a free and open-source software suite for high-performance molecular dynamics and output analysis.

The below instructions provide a Spack recipe for building an MPI-capable instance of GROMACS on the FASRC Cannon cluster.

Compiler and MPI Library Spack configuration

Here we will use the GNU/GCC compiler suite together with OpenMPI.

The below instructions assume that Spack is already configured to use the GCC compiler gcc@12.2.0 and the OpenMPI library openmpi@4.1.5, following the Compiler Configuration and MPI Configuration sections above.

Create GROMACS spack environment and activate it

spack env create gromacs
spack env activate -p gromacs

Install the GROMACS environment

Add the required packages to the spack environment

spack add openmpi@4.1.5
spack add gromacs@2023.3 + mpi + openmp % gcc@12.2.0 ^ openmpi@4.1.5

Install the environment

Once all required packages are added to the environment, it can be installed with:

spack install

Use GROMACS

Once the environment is installed, all installed packages in the GROMACS environment are available on the PATH, e.g.:

[gromacs] [pkrastev@builds01 Spack]$ gmx_mpi -h
                    :-) GROMACS - gmx_mpi, 2023.3-spack (-:

Executable:   /builds/pkrastev/Spack/spack/opt/spack/linux-rocky8-x86_64/gcc-12.2.0/gromacs-2023.3-42ku4gzzitbmzoy4zq43o3ozwr5el3tx/bin/gmx_mpi
Data prefix:  /builds/pkrastev/Spack/spack/opt/spack/linux-rocky8-x86_64/gcc-12.2.0/gromacs-2023.3-42ku4gzzitbmzoy4zq43o3ozwr5el3tx
Working dir:  /builds/pkrastev/Spack
Command line:
  gmx_mpi -h

Interactive runs

You can run GROMACS interactively. This assumes you have requested an interactive session first, as explained here.

In order to set up your GROMACS environment, you need to run the commands:

### Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack/share/spack/setup-env.sh
spack env activate gromacs

Batch jobs

When submitting batch-jobs, you will need to add the below lines to your submission script:

# --- Activate the GROMACS Spack environment, e.g., ---
### NOTE: Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack/share/spack/setup-env.sh
spack env activate gromacs
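
For reference, below is a full submission-script sketch for GROMACS, modeled on the LAMMPS example later on this page; the partition, resources, and input file name (input.tpr) are placeholders that you must adapt:

#!/bin/bash
#SBATCH -J gromacs_test       # job name
#SBATCH -o gromacs_test.out   # standard output file
#SBATCH -e gromacs_test.err   # standard error file
#SBATCH -p shared             # partition
#SBATCH -n 4                  # ntasks
#SBATCH -t 00:30:00           # time in HH:MM:SS
#SBATCH --mem-per-cpu=4G      # memory per core

# --- Activate the GROMACS Spack environment ---
### NOTE: Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack/share/spack/setup-env.sh
spack env activate gromacs

# --- Run the executable (input.tpr is a placeholder input file) ---
srun -n $SLURM_NTASKS --mpi=pmix gmx_mpi mdrun -s input.tpr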

LAMMPS with MPI

LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It’s an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.

The below instructions provide a Spack recipe for building an MPI-capable instance of LAMMPS on the FASRC Cannon cluster.

Pre-requisites: Compiler and MPI Library Spack configuration

Here we will use the GNU/GCC compiler suite together with OpenMPI. We will also use a module for FFTW.

The below instructions assume that Spack is already configured to use the GCC compiler gcc@14.2.0, the OpenMPI library openmpi@5.0.5, and FFTW from the module fftw/3.3.10-fasrc01. If you have not configured them yet, see:

  1. To add gcc compiler: Spack compiler configuration
  2. To add openmpi: Spack MPI Configuration
  3. To add fftw as an external package: Using an Lmod module in Spack

Create LAMMPS spack environment and activate it

First, request an interactive job

salloc --partition test --time 06:00:00 --mem-per-cpu 4G -c 8

Second, download Spack and source its setup script. For performance, we recommend using a lab share in Holyoke (i.e., a path starting with holy) instead of your home directory. Here, we show an example with /n/holylabs:

cd /n/holylabs/jharvard_lab/Lab/jharvard
git clone -c feature.manyFiles=true https://github.com/spack/spack.git spack_lammps
cd spack_lammps/
source share/spack/setup-env.sh

Finally, create a Spack environment and activate it

spack env create lammps
spack env activate -p lammps

Install the LAMMPS environment

Note on architecture

If you are planning to run LAMMPS on different partitions, we recommend setting Spack to a generic architecture (see Default Architecture above). Otherwise, Spack will detect the architecture of the node on which you are building LAMMPS and optimize for that specific architecture, and the resulting build may not run on other hardware. For example, LAMMPS built on Sapphire Rapids may not run on Cascade Lake.

Install libbsd

Note: In this recipe, we first install libbsd with the system version of the GCC compiler, gcc@8.5.0, because the installation fails if we try to add it directly to the environment and install it with gcc@14.2.0.

spack install --add libbsd@0.12.2 % gcc@8.5.0

Add the rest of the required packages to the spack environment

First, add Python ≤ 3.10 because newer versions of Python do not include the distutils package, which would cause the installation to fail.

spack add python@3.10

Second, add openmpi

spack add openmpi@5.0.5

Third, add FFTW

spack add fftw@3.3.10

Then, add LAMMPS required packages

spack add lammps +asphere +body +class2 +colloid +compress +coreshell +dipole +granular +kokkos +kspace +manybody +mc +misc +molecule +mpiio +openmp-package +peri +python +qeq +replica +rigid +shock +snap +spin +srd +user-reaxc +user-misc % gcc@14.2.0 ^ openmpi@5.0.5

Install the environment

Once all required packages are added to the environment, it can be installed with (note that the installation can take 1-2 hours):

spack install

Use LAMMPS

Once the environment is installed, all installed packages in the LAMMPS environment are available on the PATH, e.g.:

[lammps] [jharvard@holy8a24102 spack_lammps]$ lmp -h

Large-scale Atomic/Molecular Massively Parallel Simulator - 29 Oct 2020

Usage example: lmp -var t 300 -echo screen -in in.alloy

List of command line options supported by this LAMMPS executable:

Interactive runs

You can run LAMMPS interactively in both serial and parallel mode. This assumes you have requested an interactive session first, as explained here.

Prerequisite:

Source Spack and activate environment

### NOTE: Replace <PATH TO spack_lammps> with the actual path to your spack installation

[jharvard@holy8a24102 ~]$ cd <PATH TO spack_lammps>
[jharvard@holy8a24102 spack_lammps]$ source share/spack/setup-env.sh
[jharvard@holy8a24102 spack_lammps]$ spack env activate -p lammps

Serial

[lammps] [jharvard@holy8a24301 spack_lammps]$ lmp -in in.demo

Parallel (e.g., 4 MPI tasks)

[lammps] [jharvard@holy8a24301 spack_lammps]$ mpirun -np 4 lmp -in in.demo

Batch jobs

Example batch job submission script

Below is an example batch-job submission script run_lammps.sh using the LAMMPS spack environment.

#!/bin/bash
#SBATCH -J lammps_test        # job name
#SBATCH -o lammps_test.out    # standard output file
#SBATCH -e lammps_test.err    # standard error file
#SBATCH -p shared             # partition
#SBATCH -n 4                  # ntasks
#SBATCH -t 00:30:00           # time in HH:MM:SS
#SBATCH --mem-per-cpu=500     # memory in megabytes

# --- Activate the LAMMPS Spack environment, e.g., ---
### NOTE: Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack_lammps/share/spack/setup-env.sh
spack env activate lammps

# --- Run the executable ---
srun -n $SLURM_NTASKS --mpi=pmix lmp -in in.demo

Submit the job

sbatch run_lammps.sh

WRF (Weather Research and Forecasting)

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. WRF features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales from tens of meters to thousands of kilometers.

WRF official website: https://www.mmm.ucar.edu/weather-research-and-forecasting-model

Compiler and MPI Library Spack configuration

We use the Intel compiler suite together with the Intel MPI Library. The below instructions assume that spack is already configured to use the Intel compiler intel@2021.8.0 and Intel MPI Library intel-oneapi-mpi@2021.8.0.

Create WRF spack environment and activate it

spack env create wrf
spack env activate -p wrf

Add the required packages to the spack environment

In addition to WRF and WPS, we also build ncview and ncl.

spack add intel-oneapi-mpi@2021.8.0
spack add hdf5@1.12%intel@2021.8.0 +cxx+fortran+hl+threadsafe
spack add libpng@1.6.37%intel@2021.8.0
spack add jasper@1.900.1%intel@2021.8.0
spack add netcdf-c@4.9.0%intel@2021.8.0
spack add netcdf-fortran@4.6.0%intel@2021.8.0
spack add xz@5.4.2%intel@2021.8.0
spack add wrf@4.4%intel@2021.8.0
spack add wps@4.3.1%intel@2021.8.0
spack add cairo@1.16.0%gcc@8.5.0
spack add ncview@2.1.8%intel@2021.8.0
spack add ncl@6.6.2%intel@2021.8.0

NOTE: Here we use the gcc@8.5.0 compiler to build cairo@1.16 as it fails to compile with the Intel compiler.

Install the WRF environment

Once all required packages are added to the environment, it can be installed with:

spack install

Use WRF/WPS

Once the environment is installed, WRF and WPS (and any other packages from the environment, such as ncview) are available on the PATH, e.g.:

[wrf] [pkrastev@builds01 spack]$ which wrf.exe
/builds/pkrastev/Spack/spack/var/spack/environments/wrf/.spack-env/view/main/wrf.exe
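
As a rough sketch, a batch job using this environment could look like the following; the resources are placeholders, and the run directory is assumed to already contain the WRF inputs (namelist.input and the initial/boundary condition files) prepared with WPS:

#!/bin/bash
#SBATCH -J wrf_test           # job name
#SBATCH -o wrf_test.out       # standard output file
#SBATCH -e wrf_test.err       # standard error file
#SBATCH -p shared             # partition
#SBATCH -n 64                 # ntasks
#SBATCH -t 08:00:00           # time in HH:MM:SS
#SBATCH --mem-per-cpu=4G      # memory per core

# --- Activate the WRF Spack environment ---
### NOTE: Replace <PATH TO> with the actual path to your spack installation
. <PATH TO>/spack/share/spack/setup-env.sh
spack env activate wrf

# --- Run WRF with Intel MPI from the prepared run directory ---
mpirun -np $SLURM_NTASKS wrf.exe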

Troubleshooting

When Spack builds, it uses a stage directory located in /tmp. Spack also cleans up this space once it is done building, regardless of whether the build succeeds or fails. This can make troubleshooting failed builds difficult, as the logs from those builds are stored in the stage directory. To preserve these files for debugging, first set the $TMP environment variable to a location where you want the stage files to be written. Then add the --keep-stage flag to spack (e.g., spack install --keep-stage <package>), which tells Spack to keep the staging files rather than remove them.
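
For example (the stage location below is just an illustration):

$ export TMP=$HOME/spack-stage        # any writable location you choose
$ mkdir -p $TMP
$ spack install --keep-stage <package>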

Cannot open shared object file: No such file or directory

This error occurs when the compiler cannot find a library it is dependent on. For example:

/n/sw/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/../libexec/gcc/x86_64-pc-linux-gnu/10.2.0/cc1: error while loading shared libraries: libmpfr.so.6: cannot open shared object file: No such file or directory

In this error, the compiler cannot find a library it depends on, mpfr. To fix this, we need to add the relevant library to the compiler definition in ~/.spack/packages.yaml. In this case we are using gcc/10.2.0-fasrc01, which when loaded also loads:

[jharvard@holy7c22501 ~]# module list

Currently Loaded Modules:
  1) gmp/6.2.1-fasrc01   2) mpfr/4.1.0-fasrc01   3) mpc/1.2.1-fasrc01   4) gcc/10.2.0-fasrc01

So we need to grab the locations of these libraries in order to add them. To find them, you can run:

[jharvard@holy7c22501 ~]# module display mpfr/4.1.0-fasrc01

Then pull out the LIBRARY_PATH. Once we have the paths for all three of these dependencies, we can add them to the compiler definition as follows:

- compiler:
    spec: gcc@10.2.0
    paths:
      cc: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gcc
      cxx: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/g++
      f77: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gfortran
      fc: /n/helmod/apps/centos7/Core/gcc/10.2.0-fasrc01/bin/gfortran
    flags: {}
    operating_system: centos7
    target: x86_64
    modules: []
    environment:
      prepend_path:
        LIBRARY_PATH: /n/helmod/apps/centos7/Core/mpc/1.2.1-fasrc01/lib64:/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64:/n/helmod/apps/centos7/Core/gmp/6.2.1-fasrc01/lib64
        LD_LIBRARY_PATH: /n/helmod/apps/centos7/Core/mpc/1.2.1-fasrc01/lib64:/n/helmod/apps/centos7/Core/mpfr/4.1.0-fasrc01/lib64:/n/helmod/apps/centos7/Core/gmp/6.2.1-fasrc01/lib64
    extra_rpaths: []

Namely, we needed to add the prepend_path entries under environment. With those additional paths defined, the compiler will now work because it can find its dependencies.

C compiler cannot create executables

This is the same type of error as Cannot open shared object file: No such file or directory: the compiler cannot find the libraries it depends on. See that troubleshooting section above for how to resolve it.

Error: Only supported on macOS

If you are trying to install a package and get an error that it is only supported on macOS:

$ spack install r@3.4.2
==> Error: Only supported on macOS

You need to update your compilers. For example, here you can see that only Ubuntu compilers are available, which do not work on Rocky 8:

$ spack compiler list
==> Available compilers
-- clang ubuntu18.04-x86_64 -------------------------------------
clang@7.0.0

-- gcc ubuntu18.04-x86_64 ---------------------------------------
gcc@7.5.0  gcc@6.5.0

Then, run spack compiler find to update the compilers:

$ spack compiler find
==> Added 1 new compiler to /n/home01/jharvard/.spack/packages.yaml
    gcc@8.5.0
==> Compilers are defined in the following files:
    /n/home01/jharvard/.spack/packages.yaml

Now, you can see that a Rocky 8 compiler is also available:

$ spack compiler list
==> Available compilers
-- clang ubuntu18.04-x86_64 -------------------------------------
clang@7.0.0

-- gcc rocky8-x86_64 --------------------------------------------
gcc@8.5.0

-- gcc ubuntu18.04-x86_64 ---------------------------------------
gcc@7.5.0  gcc@6.5.0

And you can proceed with the spack package installs.

Assembly Error

If your package has gmake as a dependency, you may run into this error:

/tmp/ccRlxmkM.s:202: Error: no such instruction: `vmovw %ebp,%xmm3'

First, check whether the as (GNU assembler) version is ≤ 2.38:

$ as --version
GNU assembler version 2.30-123.el8

If that's the case, use a generic Linux architecture as explained in Default Architecture.
