SPACK Mirror on JLab CUE

<span style="color:red">''WARNING: This page is deprecated. Please go to the [[SPACK on JLab ifarm]] page.''</span>

= Using the JLab SPACK Repository =

== Overview ==

[https://spack.io/ SPACK] is a package manager used to maintain multiple versions of software compiled with various compilers for various OSes. The EPSCI group takes the primary responsibility for maintaining the SPACK repository at JLab. SPACK has a rich feature set that allows a lot of flexibility in how one can use it to manage their software. This page describes details of how SPACK is implemented at JLab for the ENP program.

There are three primary use cases for the software built with the SPACK system:

# Users on the JLab SciComp farm (ifarm) want to use the pre-built binary versions on JLab computers
# Users running offsite want to use the pre-built binary versions on their local computers
# Users want to install the pre-built binaries on their local computer so they can run untethered

The first two of these are satisfied by using ''/cvmfs''. The third use case uses a web accessible SPACK ''buildcache'' and is quite a bit more fickle. Officially, we do not support option 3 because of this.

== Quickstart ==

The spack builds may be used directly from the host OS or via a container. In both cases, the software is installed in network mounted ''/cvmfs/oasis.opensciencegrid.org'' so the host will need to have that set up and working. It is recommended to use the container since there may be system packages installed there that the spack packages require.
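
If you just want to confirm that the CVMFS mount is visible on your machine before going further, a simple sanity check is to list the top of the JLab area (the exact contents will vary):

  ls /cvmfs/oasis.opensciencegrid.org/jlab/epsci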

=== Using Singularity ===

First, make sure ''singularity'' is in your PATH. On the JLab ifarm system you can do this:

  source /etc/profile.d/modules.sh
  module load singularity

Second, run a singularity shell from one of the supported OSes. You can find compatible images in the ''/cvmfs/oasis.opensciencegrid.org/jlab/epsci/singularity/images'' directory. Note that in order to access the ''/cvmfs'' directory inside the container, you will need to bind it using the ''-B/cvmfs:/cvmfs'' option. Once the container has started, you will need to set up the spack environment by sourcing the correct ''setup-env.sh'' script.

  singularity shell -B/cvmfs:/cvmfs /cvmfs/oasis.opensciencegrid.org/jlab/epsci/singularity/images/epsci-ubuntu-22.04.img
  Singularity> source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/ubuntu/22.04/share/spack/setup-env.sh

To see available spack packages, run ''spack find''. Below is an example of the output.

  Singularity> spack find
  -- linux-ubuntu22.04-x86_64 / gcc@11.3.0 ------------------------
  berkeley-db@18.1.40                 geant4-data@11.0.0    libxml2@2.10.3           py-wheel@0.37.1
  binutils@2.38                       gettext@0.21.1        libxmu@1.1.2             python@3.10.8
  bison@3.8.2                         glproto@1.4.17        libxrandr@1.5.0          randrproto@1.5.0
  bzip2@1.0.8                         glx@1.4               libxrender@0.9.10        re2c@2.2
  ca-certificates-mozilla@2022-10-11  hwloc@2.8.0           libxt@1.1.5              readline@8.2
  clhep@2.4.6.0                       inputproto@2.3.2      llvm@14.0.6              renderproto@0.11.1
  cmake@3.25.1                        kbproto@1.0.7         lmod@8.7.2               sqlite@3.40.0
  curl@7.85.0                         libbsd@0.11.5         lua@5.4.4                tar@1.34
  diffutils@3.8                       libedit@3.1-20210216  lua-luafilesystem@1_8_0  tcl@8.6.12
  expat@2.5.0                         libffi@3.4.2          lua-luaposix@35.0        texinfo@7.0
  findutils@4.9.0                     libice@1.0.9          m4@1.4.19                unzip@6.0
  flex@2.6.3                          libiconv@1.16         mesa@22.1.2              util-linux-uuid@2.38.1
  g4abla@3.1                          libmd@1.0.4           mesa-glu@9.0.2           util-macros@1.19.3
  g4emlow@8.0                         libpciaccess@0.16     meson@1.0.0              xcb-proto@1.14.1
  g4ensdfstate@2.3                    libpthread-stubs@0.4  ncurses@6.3              xerces-c@3.2.3
  g4incl@1.0                          libsigsegv@2.13       ninja@1.11.1             xextproto@7.3.0
  g4ndl@4.6                           libsm@1.2.3           openssl@1.1.1s           xproto@7.0.31
  g4particlexs@4.0                    libtool@2.4.7         perl@5.36.0              xrandr@1.5.0
  g4photonevaporation@5.7             libunwind@1.6.2       perl-data-dumper@2.173   xtrans@1.3.5
  g4pii@1.3                           libx11@1.7.0          pigz@2.7                 xz@5.2.7
  g4radioactivedecay@5.6              libxau@1.0.8          pkgconf@1.8.0            zlib@1.2.13
  g4realsurface@2.2                   libxcb@1.14           py-mako@1.2.2            zstd@1.5.2
  g4saiddata@2.0                      libxcrypt@4.4.33      py-markupsafe@2.1.1
  gdbm@1.23                           libxdmcp@1.1.2        py-pip@22.2.2
  geant4@11.0.3                       libxext@1.3.3         py-setuptools@65.5.0
  ==> 97 installed packages

To load a package, use the ''spack load'' command. For example:

  Singularity> spack load geant4

Note that this may pull in other dependency packages. For example, ''python@3.10.8'' will be loaded by the above, superseding the Ubuntu 22.04 system installed python 3.10.6.
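
If you want to see exactly what ''spack load'' pulled into your environment, spack can list the loaded packages and you can confirm which python is now found first in your PATH (just a quick check):

  Singularity> spack find --loaded    # lists geant4 plus the dependencies that were loaded with it
  Singularity> python3 --version      # should now report the spack-provided python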

=== Using via host OS ===

WARNING: Using spack directly from the host OS is deprecated. It is recommended that you use it from a container.

The recommended way to set up your environment is with one of the following:

  [bash]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.sh  lmod gcc/9.3.0
  [tcsh]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.csh lmod gcc/9.3.0

Note that the above may take a few seconds to complete, but it sets up a user-friendly package naming scheme for "module load". If you want quicker startup and are willing to live with package names that include long hashes, then source the script with no arguments:

  [bash]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.sh
  [tcsh]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.csh

Other useful commands:

  module avail                # List available packages
  module load packagename     # Load a package (optionally specify version number)
  module unload packagename   # Unload a package that was previously loaded

The following operating systems are supported:
  
{| class="wikitable"
|+ Supported Operating Systems
! OS
! support start date
! status
|-
| ubuntu/22.04
| December 29, 2022
| current
|-
| centos/7.7.1908
| March 31, 2021
| deprecated
|-
| centos/8.3.2011
| March 31, 2021
| deprecated
|-
| ubuntu/21.04
| March 31, 2021
| deprecated
|}
  
== CVMFS Client Configuration ==

If you are working on the JLab ifarm computers then CVMFS is already installed and configured. There is nothing else you need to do. CVMFS may also already be available on many remote HPC sites (e.g. NERSC). Check the site's specific documentation or simply look for the /cvmfs/oasis.opensciencegrid.org directory.

To mount the public, read-only CVMFS volume that contains the pre-built binaries, see the instructions in one of the following sections for your specific platform.

''The most up to date instructions on installing and configuring the CVMFS client software can be found on the [https://cernvm.cern.ch/fs/ CVMFS website].''

<hr width="50%">

=== Linux ===

Here are instructions for installing on a CentOS or RedHat system (personal laptop or desktop).

1. Install the pointer to the CVMFS repo and then install cvmfs itself. After it is installed, generate a default config file.

  sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
  sudo yum install -y cvmfs
  cvmfs_config setup

2. Create a config file '''''/etc/cvmfs/default.local''''' with the following content (you need to do this with sudo):

  CVMFS_REPOSITORIES=oasis.opensciencegrid.org
  CVMFS_HTTP_PROXY=DIRECT
  CVMFS_CLIENT_PROFILE=single

3. Restart the autofs service

  systemctl restart autofs

4. Make the mount point and mount the cvmfs disk

  sudo mkdir -p /cvmfs/oasis.opensciencegrid.org
  sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
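
To verify that the client can actually reach the repository, an optional sanity check is to probe it:

  cvmfs_config probe oasis.opensciencegrid.org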
  
<hr width="50%">
=== Mac OS X ===

To use CVMFS on Mac OS X, you need to install the ''MacFUSE'' package and then the ''cvmfs'' package. You should then reboot so everything will load properly. The step-by-step instructions follow.

1. Download and install the [https://osxfuse.github.io/ macFUSE package]

2. Download and install the cvmfs package with the following. (Note that downloading the cvmfs package via ''curl'' apparently avoids a signature security issue on Mac OS X that you would get if it were downloaded via a web browser. Don't ask me how.)

  curl -o ~/Downloads/cvmfs-2.7.5.pkg https://ecsft.cern.ch/dist/cvmfs/cvmfs-2.7.5/cvmfs-2.7.5.pkg
  open ~/Downloads/cvmfs-2.7.5.pkg

3. Create a config file '''''/etc/cvmfs/default.local''''' with the following content (you need to do this with sudo):

  CVMFS_REPOSITORIES=oasis.opensciencegrid.org
  CVMFS_HTTP_PROXY=DIRECT

4. Restart the computer

5. Create the mount point and mount oasis with:

  sudo mkdir -p /cvmfs/oasis.opensciencegrid.org
  sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org

If it all works you should see something like this:

  >sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
  CernVM-FS: running with credentials 10000:10000
  CernVM-FS: loading Fuse module... done
  CernVM-FS: mounted cvmfs on /Users/Shared/cvmfs/oasis.opensciencegrid.org

<hr width="50%">
=== Docker ===

There are actually two options for using CVMFS inside a Docker container:
# Install CVMFS on the host and simply bind the /cvmfs directory to the same directory inside the container
# Run the CVMFS software inside the container and mount it there.

Option 1 is preferred since any caching of the files is done by the host and so does not disappear when the container goes away. It also can be used with any image and does not require another image to be created with the CVMFS software installed. To implement option 1, first mount CVMFS on the host using the above instructions for your host platform. Then, when you start the container, give the docker command an argument of ''-v /cvmfs:/cvmfs''.
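
As a concrete sketch of option 1 (using one of the images mentioned elsewhere on this page purely as an example; any image will do):

  docker run -it --rm -v /cvmfs:/cvmfs jeffersonlab/epsci-centos:7.7.1908 /bin/bash
  ls /cvmfs/oasis.opensciencegrid.org/jlab/epsci   # run inside the container to confirm the bind worked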

Option 2 can be convenient if you have trouble getting CVMFS working on the host. There are actually two methods here. One is to use the pre-made Docker container [https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html#docker-container as described in the CVMFS documentation]. You may create an image based on this or even use it as-is to supply /cvmfs to the host and then use option 1 above.

The second method is to create a new image from scratch containing the necessary software. This method has worked in the past, though there may be easier ways of doing it today. The instructions are kept here in case all of the other methods above fail.

Unfortunately, there are a couple of steps that cannot be done when the image is created and must be implemented when the container is created. A working example with some comments can be found here:

[https://github.com/faustus123/hdcontainers/tree/master/Docker_cvmfs https://github.com/faustus123/hdcontainers/tree/master/Docker_cvmfs]
== Running untethered (no CVMFS) ==
<div id="untethered"></div>

Running untethered means installing the packages on your local computer so you can still run the software even with no internet connection. It is stated up front that this is unlikely to work for numerous reasons, but for those who like punishing themselves, here is some info that may help get you going. It goes without saying that none of this is recommended.

The main issue with installing locally is that many packages build their installation paths into their installed scripts and binaries. While spack does have mechanisms to try and fix this, they can fail if the directory path is either too long or too short. Your best chance of success will come if you create a local directory path that matches exactly what it would be if /cvmfs were mounted. Here are some example instructions:

  mkdir -p /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/
  git clone --depth 1 https://github.com/spack/spack.git /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh # or setup-env.csh
  spack mirror add jlab-public https://spack.jlab.org/mirror
  spack install -f -o -u clhep  # This should install "CLHEP" locally using the pre-built binaries

<!-- ================================================================================================ -->
<hr>
= Administration of the SPACK Repository =
The following sections describe various aspects of creating and managing the JLab SPACK repository. A number of choices were made in how this was set up, so this documents them since they may not all be obvious from simply looking at directory structures and config files.

Perhaps one of the most important pieces of information is that the scripts and tools used to help us maintain spack at JLab are kept in a github repository:

  [https://github.com/JeffersonLab/epsci-spack/tree/main/admin https://github.com/JeffersonLab/epsci-spack/tree/main/admin]

which is checked out on the CUE in ''/scigroup/spack/admin''.

== SPACK Version History ==

{| class="wikitable"
|-
!colspan=3 | centos7
|-
! date active !! tag !! notes
|-
| 6/8/2021 || v0.16.0 || issues with util-linux-uuid after upgrading to v0.16.2. This is the closest tag to what we had working before so testing it.
|-
| 6/5/2021 || v0.16.2 ||
|-
| 1/10/2021 || develop || initial version used
|-
!colspan=3 | centos8 (deprecated)
|-
! date active !! tag !! notes
|-
| 1/11/2021 || develop || initial version used
|}
  
 
== Organizational Overview ==

# Packages are built using singularity containers
#* Containers bind the ''/scigroup/cvmfs'' subdirectory to be at ''/cvmfs/oasis.opensciencegrid.org/jlab'' inside the container
#* This allows absolute paths that start with /cvmfs to be writable in the build/install process
#* The ''/scigroup/cvmfs/epsci'' directory is exported to CVMFS so it can be mounted read-only from anywhere
#* The export is done every 4 hours via cronjob. Thus, newly built packages will not be immediately accessible.
#** Wes Moore set this up and can increase frequency if needed.
# A separate spack repository is maintained for every platform (e.g. centos/7.7.1908 is separate from centos/8.0.2011)
#* This was a choice made on our end to segregate the binaries and make it easier to add and drop support for platforms in the future.
# In addition to the [https://github.com/spack/spack global spack repository], we also include the [https://github.com/eic/eic-spack eic-spack] and [https://github.com/JeffersonLab/epsci-spack epsci-spack] repositories.
#* This allows us to pull from the eic-spack package configurations and maintain our own package configurations.
# Users will access the software via the ''/cvmfs'' directory.
#* The SciComp computers (e.g. ifarm1901) all mount ''/cvmfs''
#* Users can also install the CVMFS client on their personal laptop or desktop to access the software.
# The packages are exported to a ''build cache'' accessible from https://spack.jlab.org/mirror
#* We do this only because it is simple and doesn't cost us anything significant. We discourage its use and may remove it in the future.

== Setting up a new platform ==
  
A platform here is defined as the OS name and version, e.g. almalinux:9.2-20230718. The specific versions are chosen based on official Docker images maintained by the OS vendors on [https://hub.docker.com Docker Hub] (for examples, look [https://hub.docker.com/_/almalinux/tags here]). For platforms corresponding to CUE machines, the exact tags used are selected to be as close as possible to what is being used on the CUE.

The basic steps are to create an [https://apptainer.org/ apptainer] (formerly singularity) image, then use it to set up a new [https://spack.io spack] instance (a helper script is available for this). Packages will then be built with the native compiler for the platform. Optionally, other compiler versions may be built using spack and then those compilers used to build versions of the spack packages compatible with that compiler.

The following sections describe these steps in some detail.

=== Creating a new Apptainer Image ===
<div id="Apptainer"></div>
For the purposes of this system, the Apptainer images used for building packages are derived from Docker images. This ensures that either Docker or Apptainer can be used to build packages with spack. Thus, if someone needs a convenient sandbox to work with locally, they can choose the container system that is most convenient for them. Docker images we create are posted on [https://hub.docker.com/ Docker Hub] where Apptainer can easily pull them. (Docker images cannot be easily created from Apptainer images.)

The ''Dockerfile''s used to create the Docker images are kept in the github repository [https://github.com/JeffersonLab/epsci-containers ''"epsci-containers"'']. They are also copied into the image itself so one can always access the Dockerfile used to create an image via ''/container/Dockerfile.*'' within a container. The Docker images are created with only a few system software packages installed: mainly a C++ compiler, version control tools (e.g. git and svn), python, and a couple of other tools needed for building packages (e.g. cmake). Below is an example of a Dockerfile (click right-hand side to view).
  
 
<div class="toccolours mw-collapsible mw-collapsed">
 
<div class="toccolours mw-collapsible mw-collapsed">
EXAMPLE Dockerfile. (Click "Expand" to the right for details -->):
+
EXAMPLE Dockerfile. (Click "Expand" to the right to see the example -->):
  
 
<div class="mw-collapsible-content">
 
<div class="mw-collapsible-content">
 
  #--------------------------------------------------------------------------
 
  #--------------------------------------------------------------------------
# ubuntu build environment
+
# almalinux build environment
#  
+
#
# This Dockerfile will produce an image based on the one used for running
+
#
# at NERSC, PSC, and the OSG, but which can also be used to mount CVMFS
+
# This Dockerfile will produce an image suitable for compiling software.
# using any computer. The main use case is to provide a simple way to
+
# The image will gave a C/C++ and Fortran compiler as well as python,
# mount and run software from /group/halld/Software on you local laptop
+
# It will also contain version control software (e.g. git and svn).
# or desktop.
+
#
#
+
# The commands below are simplified by setting this environment variable:
# To use this most effectively:
+
#
#
+
#   export MYOS=almalinux:9.2-20230718
#      docker run -it --rm jeffersonlab/epsci-ubuntu cat /container/dsh | tr -d "\r" > dsh
+
#   export MYOS_=almalinux-9.2-20230718
#      chmod +x ./dsh
+
#
#      ./dsh jeffersonlab/epsci-ubuntu
+
#
  #
+
# To use this most effectively:
  #--------------------------------------------------------------------------
+
#
  #
+
#      docker run -it --rm jeffersonlab/epsci-almalinux:9.2-20230718 cat /container/dsh | tr -d "\r" > dsh
#  docker build -t epsci-ubuntu:21.04 -t jeffersonlab/epsci-ubuntu:21.04 .
+
#      chmod +x ./dsh
#  docker push jeffersonlab/epsci-ubuntu:21.04
+
#      ./dsh jeffersonlab/epsci-almalinux:9.2-20230718
#
+
#
#--------------------------------------------------------------------------  
+
#--------------------------------------------------------------------------
+
#
FROM ubuntu:21.04
+
#  These instructions are for a multi-architecture build (see below for single):
 
+
#
# Python3 requires the timezone be set and will try and prompt for it.
+
#  docker buildx create --name mybuilder
ENV TZ=US/Eastern
+
#  docker buildx use mybuilder
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
+
#  docker buildx inspect --bootstrap
+
#  docker buildx build --platform linux/arm64,linux/amd64 -t jeffersonlab/epsci-${MYOS} --push -f Dockerfile.${MYOS_} .
# Install compiler and code management tools
+
#
RUN apt -y update \
+
# To get this locally for the local architecture, pull it from dockerhub:
&& apt -y install build-essential libssl-dev libffi-dev python-dev \
+
#
&& apt -y install python python3 git subversion cvs curl
+
# docker pull jeffersonlab/epsci-${MYOS}
+
#
COPY dsh /container/dsh
+
#--------------------------------------------------------------------------
COPY Dockerfile /container/Dockerfile
+
# These instructions are for the classic single architecture build:
RUN ln -s /root /home/root
+
#
RUN ln -s /root /home/0
+
#  docker build -t epsci-${MYOS} -t jeffersonlab/epsci-${MYOS} -f Dockerfile.${MYOS} .
+
#  docker push jeffersonlab/epsci-${MYOS}
CMD ["/bin/bash"]
+
#
 +
#--------------------------------------------------------------------------
 +
# On ifarm:
 +
#  cd /scigroup/spack/mirror/singularity/images
 +
#  apptainer build epsci-${MYOS_}.img docker://jeffersonlab/epsci-${MYOS}
 +
#  cp -rp epsci-${MYOS_}.img /scigroup/cvmfs/epsci/singularity/images
 +
#
 +
#--------------------------------------------------------------------------
 +
 
 +
FROM almalinux:9.2-20230718
 +
 
 +
# Python3 requires the timezone be set and will try and prompt
 +
# for it.
 +
ENV TZ=US/Eastern
 +
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
 +
 
 +
# Install compiler and code management tools
 +
RUN dnf -y groupinstall 'Development Tools' \
 +
&& dnf -y install --allowerasing gcc-gfortran python3 git subversion curl which \
 +
&& dnf clean all
 +
 
 +
COPY dsh /container/dsh
 +
COPY Dockerfile.almalinux.9.2-20230718 /container/Dockerfile.almalinux.9.2-20230718
 +
RUN ln -s /root /home/root
 +
RUN ln -s /root /home/0
 +
 
 +
CMD ["/bin/bash"]
 +
 
  
 
</div>
 
</div>
  
  
To create an Apptainer image, one first needs to create a Docker image. Thus, one needs access to a computer with Docker installed. This generally needs to be a personal desktop or laptop since Docker requires root access and is therefore not available on the public machines like ''ifarm''. (Incidentally, apptainer also requires root privileges in order to build an image from a recipe, but not if just pulling from an existing Docker image.) Here is an example of the steps you might go through if creating an image for a version of almalinux. This assumes you are starting on a computer with Docker installed and running.

NOTE: These instructions build a multi-architecture image for both '''amd64''' and '''arm64''' that gets pushed directly to Dockerhub.

# export MYOS=almalinux:9.2-20230718
# export MYOS_=almalinux.9.2-20230718
# git clone https://github.com/JeffersonLab/epsci-containers
# cd epsci-containers/base
# cp Dockerfile.ubuntu.22.04 Dockerfile.${MYOS_}
# ''edit Dockerfile.${MYOS_} to replace the version numbers with the new ones. They appear in a few places so better to do a global replace''
# docker buildx create --name mybuilder
# docker buildx use mybuilder
# docker buildx inspect --bootstrap
# docker buildx build --platform linux/arm64,linux/amd64 -t jeffersonlab/epsci-${MYOS} --push -f Dockerfile.${MYOS_} .
# ssh ifarm
# cd /scigroup/spack/mirror/singularity/images
# apptainer build epsci-${MYOS_}.img docker://jeffersonlab/epsci-${MYOS}
# cp -rp epsci-${MYOS_}.img /scigroup/cvmfs/epsci/singularity/images
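
Before moving on, it can be worth a quick check that the new image actually contains the expected toolchain. This is just a sanity check (the image name follows the naming pattern used above):

  apptainer exec /scigroup/cvmfs/epsci/singularity/images/epsci-${MYOS_}.img gcc --version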
=== Initializing the new platform configuration ===
<div id="add_to_platform"></div>
There are some pitfalls that are easy to fall into when trying to set up a new platform, particularly if you want to build using a non-default compiler. To ameliorate this, several administration scripts have been written to make it as turnkey as possible.

Step-by-step instructions are below, but you may find some useful details in the comments at the top of the script: [https://github.com/JeffersonLab/epsci-spack/blob/main/admin/mnp.sh /scigroup/spack/admin/mnp.sh]

To set up the initial directory and build some of the base packages, do the following. Note that this assumes an Apptainer image exists in the standard location for the platform version you are setting up (see [[#Apptainer|Apptainer section above]] for details).

In this example we assume we are building for a new platform named "almalinux:9.2-20230718".

# newgrp spack  # start a new shell with spack as the default group
# cd /scigroup/spack/admin
# cp make_new_platform_centos7.9.sh make_new_platform_almalinux9.2.sh
# <edit the settings at the top of the new make_new_platform_almalinux9.2.sh script>
# ./make_new_platform_almalinux9.2.sh
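
Once the script completes, a quick way to confirm the new spack instance is usable is to open a shell for the new platform and ask spack what it knows about. The image name and spack directory below are assumptions that follow the naming pattern of the existing platforms; adjust them to whatever the script actually created:

  singularity shell -B /scigroup/cvmfs:/cvmfs/oasis.opensciencegrid.org/jlab -B /scigroup:/scigroup /scigroup/spack/mirror/singularity/images/epsci-almalinux.9.2-20230718.img
  Singularity> source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/almalinux/9.2-20230718/share/spack/setup-env.sh
  Singularity> spack compilers
  Singularity> spack find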

==== Potential Issues ====
* I had an issue with an incompatible compiler and OS, which was due to the almalinux9.2 compiler gcc11.3.1 being installed with "operating_system: almalinux9" instead of "operating_system: almalinux9.2". I'm not 100% sure where to fix this upstream at the moment so the easy solution is to edit the file ''/home/davidl/.spack/linux/compilers.yaml'' and fix it there.

* When trying to set up centos7.9.2009 I ran into permission denied errors when it started trying to install packages built with the 9.3.0 compiler. This turned out to be as simple as manually creating the directory from inside a singularity shell with:
  mkdir -p ${spack_top}/opt/spack/linux-*-x86_64/gcc-${spack_compiler}

Note that I added the above to the mnp.sh script so it should be done automatically. (I actually haven't tested it yet; there may also be a bug in it!)

* The ifarm was unable to reach the website for the ''ca-certificates-mozilla'' package that was a dependency of lmod. The easiest way to handle this was to download it on my ''jana2'' desktop using the ''spack mirror'' command like this:
  ssh jana2
  git clone -c feature.manyFiles=true https://github.com/spack/spack.git
  source spack/share/spack/setup-env.csh
  spack mirror create -D -d spack-mirror-2022-10-28 lmod
  tar czf spack-mirror-2022-10-28.tgz spack-mirror-2022-10-28
  scp spack-mirror-2022-10-28.tgz ifarm1801:/work/epsci

  ssh ifarm1801
  newgrp spack
  cd /scigroup/spack
  tar xzf /work/epsci/spack-mirror-2022-10-28.tgz

Then, in a singularity shell on ''ifarm'':
  spack mirror add local_filesystem file:///scigroup/spack/spack-mirror-2022-10-28
  
At this point re-run the ''./make_new_platform_rocky8.sh'' script and it should install everything for lmod OK.

To add a list of packages to the spack instance, write them to a text file with one line per package specification. An example can be seen here:
<font size="-1">
''n.b. this can also be done via a [https://spack-tutorial.readthedocs.io/en/latest/tutorial_environments.html yaml file] which may eventually be used to replace this system''
</font>

  [https://github.com/JeffersonLab/epsci-spack/blob/main/admin/jlabce-2.4.txt jlabce-2.4.txt]
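
To give a sense of the file format (the lines below are illustrative only and are not the actual contents of jlabce-2.4.txt), each line is simply a spack spec as you would pass to ''spack install'':

  clhep@2.4.1.3 %gcc@9.3.0 target=x86_64
  xerces-c@3.2.3 %gcc@9.3.0 target=x86_64
  root %gcc@9.3.0 target=x86_64 ^sqlite@3.33.0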
  ssh ifarm1901
 
  newgrp spack  # this puts spack to the front of your group list so new files/directories below to it
 
  module use /apps/modulefiles
 
  module load singularity
 
  singularity shell -B /scigroup/cvmfs:/cvmfs/oasis.opensciencegrid.org/jlab -B /scigroup:/scigroup /scigroup/spack/mirror/singularity/images/epsci-centos-7.7.1908.img
 
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
 
 
 
  # At this point you want to disable the mirror and any other compilers
 
  # to ensure packages are all built for *this* spack environment.
 
  spack mirror rm jlab-public
 
  spack compilers # use spack compiler remove xxx for anything that is not the default system compiler
 
 
 
  # Build the gcc9.3.0 compiler using the default system compiler.
 
  # Load it and add it to the list of spack compilers
 
  spack install gcc@9.3.0%gcc@4.8.5 target=x86_64
 
  spack load gcc@9.3.0
 
  spack compiler find
 
 
 
  # Use the gcc9.3.0 compiler to build clhep and ROOT.
 
  # Note that in this case, root requires sqlite and the default
 
  # version (3.34.0) failed to fetch the source. Thus, I had to
 
  # build the previous version and specify that ROOT use it.
 
  spack install clhep%gcc@9.3.0 target=x86_64
 
  spack load clhep
 
  spack install sqlite@3.33.0%gcc@9.3.0 target=x86_64 # default version 3.34.0 failed to fetch
 
  spack install root %gcc@9.3.0 target=x86_64 ^sqlite@3.33.0 target=x86_64
 
  spack buildcache create -r -a -u -d . gcc@9.3.0 %gcc@4.8.5 arch=x86_64
 
  spack buildcache create -r -a -u -d . clhep %gcc@9.3.0 arch=x86_64
 
  spack buildcache create -r -a -u -d . root %gcc@9.3.0 arch=x86_64 ^sqlite@3.33.0 arch=x86_64
 
  spack buildcache update-index -k -d /scigroup/spack/mirror
 
  
Please see the next section on setting up the module system
+
# cd /scigroup/spack/admin
 +
# cp add_to_platform_centos7.sh add_to_platform_rocky8.sh
 +
# <edit the settings at the top of the new add_to_platform_rocky8.sh script>
 +
# ./add_to_platform_rocky8.sh jlabce-2.4.txt
  
 
=== Setting up the module system (LMOD) ===
 
=== Setting up the module system (LMOD) ===
 +
 +
The [https://github.com/JeffersonLab/epsci-spack/blob/main/admin/mnp.sh mnp.sh] script should already set up the LMOD system configuration when a new platform is created. This section documents some of what was done and why.
  
 
We would like most users to be be able to interact with the spack packages using the standard "module load" command. Spack has nice support for this though there are options for how it is setup and we'd like to be consistent across supported platforms.
 
We would like most users to be be able to interact with the spack packages using the standard "module load" command. Spack has nice support for this though there are options for how it is setup and we'd like to be consistent across supported platforms.
  
 
First off, we use the LMOD system as it supports hierarchical module files. This allows us to configure the system so that when a specific compiler is loaded, only packages corresponding to that compiler are listed. This should make it easier on the user to navigate and to avoid loading incompatible packages. We also configure it to present packages using the {package}/{version} naming scheme. This is what is used by /apps on the CUE which will make the spack packages integrate more seamlessly with those.
 
First off, we use the LMOD system as it supports hierarchical module files. This allows us to configure the system so that when a specific compiler is loaded, only packages corresponding to that compiler are listed. This should make it easier on the user to navigate and to avoid loading incompatible packages. We also configure it to present packages using the {package}/{version} naming scheme. This is what is used by /apps on the CUE which will make the spack packages integrate more seamlessly with those.
 
Here are instructions for setting this up.
 
 
 
  
 
==== Modules configuration file ====
 
==== Modules configuration file ====
Line 268: Line 443:
 
The ''${SPACK_ROOT}/etc/spack/modules.yaml'' configuration file must be created and have the following content added. This is mostly based on an example given in the [https://spack-tutorial.readthedocs.io/en/latest/tutorial_modules.html spack documentation] under "Hierarchical Module Files". Descriptions of the settings are given below.  
 
The ''${SPACK_ROOT}/etc/spack/modules.yaml'' configuration file must be created and have the following content added. This is mostly based on an example given in the [https://spack-tutorial.readthedocs.io/en/latest/tutorial_modules.html spack documentation] under "Hierarchical Module Files". Descriptions of the settings are given below.  
  
 +
<div class="toccolours mw-collapsible mw-collapsed">
 +
EXAMPLE modules.yaml. (Click "Expand" to the right to see the example -->):
 +
 +
<div class="mw-collapsible-content">
 
   modules:
 
   modules:
 
     enable::
 
     enable::
Line 294: Line 473:
 
         all:          '{name}/{version}'
 
         all:          '{name}/{version}'
 
         ^lapack:      '{name}/{version}-{^lapack.name}'
 
         ^lapack:      '{name}/{version}-{^lapack.name}'
 +
</div>
 +
</div>
  
 
+
* The core_compilers section should list the system compiler as the default.
* The core_compilers section should list the system compiler and is the default.
 
 
* hash_length: 0 removes the spack hash from package names
 
* hash_length: 0 removes the spack hash from package names
 
* whitelist ensures all gcc compilers are available. (Once one of those is loaded, other packages will appear.)
 
* whitelist ensures all gcc compilers are available. (Once one of those is loaded, other packages will appear.)
Line 305: Line 485:
 
* The projections section defines the module naming scheme. The line for ''lapack'' was left in from the spack tutorial example.
 
* The projections section defines the module naming scheme. The line for ''lapack'' was left in from the spack tutorial example.
  
== Misc. Notes ==
+
== Building a new package ==
Here are some miscellaneous notes on issues with getting some packages to build
 
 
 
{| class="wikitable"
 
|-
 
! package
 
! style="width: 30%"| solution
 
! notes
 
|-
 
| ncurses
 
| spack install ncurses+symlinks target=x86_64
 
| The ncurses package can fail to build due to permission denied errors related to /etc/ld.so.cache~. Internet wisdom says to build it with the symlinks option turned on. (See also notes on building using Mac OS X + Docker below)
 
|-
 
| automake
 
| build into spack directory with very short absolute path
 
| This error happens at the very end of the build when it tries to run the ''help2man'' utility on the automake and aclocal scripts. The failure is because the scripts contain a [https://en.wikipedia.org/wiki/Shebang_(Unix) shebang] at the top with a path length longer than 128 characters. Spack actually has a fix for this that it will automatically apply after install. However, this ''help2man'' tool is run by the automake build system before that is run. To make the build succeed, use a spack root directory that has a very short path (e.g. by binding the host working directory to something like "/A" in the singularity container). Then, make sure to create the buildcache using the "-r" option so that it is relocatable. The buildcache can then be installed in any spack root directory, regardless of path length.
 
|}
 
  
=== Mac OS X + Docker ===
+
We anticipate getting user requests for new packages. Building packages that already have a spack configuration should be fairly straight-forward. There are a couple of questions to answer before you do it though are:
The default disk format for Mac OS X is non-case-sensitive. It automatically translates file and directory names to give the illusion that it is case sensitive. This works fine except when you have two files in the same directory whose name only differs by case. This becomes an issue if you are building spack packages for Linux using Docker and are doing so in a directory from the local disk (bound in the Docker container). I saw this with the ncurses package failing with errors related to '''E/E_TERM''' and '''A/APPLE_TERM''' (I may not be remembering the exact file names correctly).
 
  
One work-around is to create a disk image using Disk Utility and choose the format to be ''"Mac OS Extended (Case-sensitive, Journaled)"''. Mount the disk image and bind that to the docker container. This will give you a case sensitive persistent disk (i.e. survives after the container exits).
+
# Is this package something we should be supporting via spack?
 +
# Should this be part of a spack Environment?
 +
# What compilers/platforms should this be built for?
  
If you do not care about persistence, then just build in a directory in the Docker container's temporary file system. You can always save to a buildcache from there and copy just the buildcache file out of the container.
+
Once you have answers for these then you can proceed.
  
== Mac OS X ==
+
The best way to handle this is to use the appropriate ''[https://github.com/JeffersonLab/epsci-spack/blob/main/admin/add_to_platform_centos7.sh add_to_platform_X.sh]'' script with the exact package specification written in a text file. If it is not being added to an existing spack Environment then it should be added to the ''misc_packages.txt'' file for archival purposes. Make sure to commit any changes to github.
  
=== CVMFS + Mac OS X ===
+
Details on this can be seen at the bottom of the [[#add_to_platform|section above on adding packages]] to a new platform.
  
''The most up to date instructions can be found on the [https://cernvm.cern.ch/fs/ CVMFS website].''
+
=== Building a new package manually ===
 +
If you have trouble using an ''add_to_platform_X.sh'' script then you can build the package manually. This is useful if you need to debug why a package is failing to build. If you decide you need to build manually you can do so by launching a singularity shell for the appropriate platform and running the ''spack install'' command. To make the command for launching the singularity container with all of the correct volume bindings, admin scripts are available:
  
To use CVMFS on Mac OS X, you need to install the ''MacFUSE'' package and then the ''cvmfs'' package. You should then reboot so everything will load properly. The step-by-step instructions follow.  
+
  > ssh ifarm1901
 +
  > /scigroup/spack/admin/singshell_centos7.sh
 +
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
 +
  Singularity>
  
1. Download and install the [https://osxfuse.github.io/ macFUSE package]
+
The last line indicates you are now running in a singularity shell. The line before it that starts with ''"source /cvmfs/..."'' SHOULD BE COPIED AND EXECUTED WITHIN THE SHELL. This is important since the setup-env.sh script is not run automatically when the singularity shell is created. Sourcing it is necessary to setup your environment for working with the spack instance.
  
2. Download and install the cvmfs package with the following (Note that downloading the cvmfs package via ''curl'' apparently avoids some signature security issue on Mac OS X that you would get if downloaded via web-browser. Don't ask me how.)
+
continuing the example ...
  
  curl -o ~/Downloads/cvmfs-2.7.5.pkg https://ecsft.cern.ch/dist/cvmfs/cvmfs-2.7.5/cvmfs-2.7.5.pkg
+
  Singularity> source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
  open cvmfs-2.7.5.pkg
+
  Singularity> spack install -j16 clhep@2.4.1.3 %gcc@9.3.0 target=x86_64
  
3. Create a config file '''''/etc/cvmfs/default.local''''' with the following content (you need to do this with sudo):
+
The above will build the clhep package version 2.4.1.3 using the GCC 9.3.0 compiler and make the binaries for the generic x86_64 target. Note that by default spack will build with microcode for the specific processor in use on the machine on which you are compiling. Adding the ''target=x86_64'' ensures all packages are built the same regardless of specifics of the CPU.
  CVMFS_REPOSITORIES=oasis.opensciencegrid.org
 
  CVMFS_HTTP_PROXY=DIRECT
 
  
3. Restart the computer
+
It should also be noted that, if there are dependencies for the package you are building the specific versions should be given in the package specification. The syntax for this is beyond the scope of this document and the spack documentation should be consulted for details.
  
4. Create the mount point and mount oasis with:
+
= Updating the buildcache aka mirror =
  sudo mkdir -p /cvmfs/oasis.opensciencegrid.org
 
  sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
 
  
If it all works you should see something like this:
+
The build cache is only useful if someone is trying to run [[#untethered|untethered]]. Individual packages (tarballs) can be made from existing spack package builds with the ''buildcache create'' command. It is also possible to generate buildcache packages from all packages in a repository or (probably) an Environment with a single command. Look into the spack help for details. Once the buildcache packages are built, you need to rebuild the index. Here are some specific commands:
  
  >sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
+
# spack buildcache create -r -a -u -d . zlib%gcc@10.2.1
  CernVM-FS: running with credentials 10000:10000
+
# spack buildcache update-index -k -d /scigroup/spack/mirror
  CernVM-FS: loading Fuse module... done
 
  CernVM-FS: mounted cvmfs on /Users/Shared/cvmfs/oasis.opensciencegrid.org
 
  
<!--
+
== Misc. Notes ==
Some binaries are available for the macosx platform. One issue here is that multiple versions of the Apple supplied compiler are available. This complicates things since one would need to maintain a complete set of builds for multiple compilers in order to support the multiple OS versions. To simplify things, we instead use a compiler installed by spack itself to build the packages. This gives end users access to the compiler which can be used consistently regardless of the exact Mac OS X system version you are using. Note that we also compile the packages with the generic x86_64 target for a similar reason: to be independent of the exact flavor of CPU being used. Thus, the packages are built with:
+
Here are some miscellaneous notes to catch useful tidbits not covered in the previous sections.
  
compiler: gcc 10.2.0
+
=== Mac OS X + Docker ===
target: x86_64
+
The default disk format for Mac OS X is non-case-sensitive. It automatically translates file and directory names to give the illusion that it is case sensitive. This works fine except when you have two files in the same directory whose name only differs by case. This becomes an issue if you are building spack packages for Linux using Docker and are doing so in a directory from the local disk (bound in the Docker container). I saw this with the ncurses package failing with errors related to '''E/E_TERM''' and '''A/APPLE_TERM''' (I may not be remembering the exact file names correctly).
  
{| class="wikitable"
+
One work-around is to create a disk image using Disk Utility and choose the format to be ''"Mac OS Extended (Case-sensitive, Journaled)"''. Mount the disk image and bind that to the docker container. This will give you a case sensitive persistent disk (i.e. survives after the container exits).
|-
 
!package
 
!compiler
 
!notes
 
|-
 
| curl@7.74.0
 
| apple-clang@12.0.0
 
| spack install curl%apple-clang@12.0.0 target=x86_64
 
|-
 
| clhep@2.4.4.0
 
| apple-clang@12.0.0
 
| spack install clhep%apple-clang@12.0.0 target=x86_64
 
|-
 
| xerces-c@3.2.3
 
| apple-clang@12.0.0
 
| spack install xerces-c%apple-clang@12.0.0 target=x86_64
 
|-
 
| gcc@10.2.0
 
| apple-clang@12.0.0
 
| spack install gcc@10.2.0%apple-clang@12.0.0 target=x86_64<br>''n.b. Only some packages will build using this compiler''
 
|-
 
| xerces-c@3.2.3
 
| apple-clang@12.0.0
 
| spack install xerces-c%apple-clang@12.0.0 target=x86_64
 
|}
 
  
-->
+
If you do not care about persistence, then just build in a directory in the Docker container's temporary file system. You can always save to a buildcache from there and copy just the buildcache file out of the container.


  singularity shell -B/cvmfs:/cvmfs /cvmfs/oasis.opensciencegrid.org/jlab/epsci/singularity/images/epsci-ubuntu-22.04.img
  Singularity> source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/ubuntu/22.04/share/spack/setup-env.sh

To see available spack packages, run spack find. Below is an example of the output.

  Singularity> spack find       
  -- linux-ubuntu22.04-x86_64 / gcc@11.3.0 ------------------------
  berkeley-db@18.1.40                 geant4-data@11.0.0    libxml2@2.10.3           py-wheel@0.37.1
  binutils@2.38                       gettext@0.21.1        libxmu@1.1.2             python@3.10.8
  bison@3.8.2                         glproto@1.4.17        libxrandr@1.5.0          randrproto@1.5.0
  bzip2@1.0.8                         glx@1.4               libxrender@0.9.10        re2c@2.2
  ca-certificates-mozilla@2022-10-11  hwloc@2.8.0           libxt@1.1.5              readline@8.2
  clhep@2.4.6.0                       inputproto@2.3.2      llvm@14.0.6              renderproto@0.11.1
  cmake@3.25.1                        kbproto@1.0.7         lmod@8.7.2               sqlite@3.40.0
  curl@7.85.0                         libbsd@0.11.5         lua@5.4.4                tar@1.34
  diffutils@3.8                       libedit@3.1-20210216  lua-luafilesystem@1_8_0  tcl@8.6.12
  expat@2.5.0                         libffi@3.4.2          lua-luaposix@35.0        texinfo@7.0
  findutils@4.9.0                     libice@1.0.9          m4@1.4.19                unzip@6.0
  flex@2.6.3                          libiconv@1.16         mesa@22.1.2              util-linux-uuid@2.38.1
  g4abla@3.1                          libmd@1.0.4           mesa-glu@9.0.2           util-macros@1.19.3
  g4emlow@8.0                         libpciaccess@0.16     meson@1.0.0              xcb-proto@1.14.1
  g4ensdfstate@2.3                    libpthread-stubs@0.4  ncurses@6.3              xerces-c@3.2.3
  g4incl@1.0                          libsigsegv@2.13       ninja@1.11.1             xextproto@7.3.0
  g4ndl@4.6                           libsm@1.2.3           openssl@1.1.1s           xproto@7.0.31
  g4particlexs@4.0                    libtool@2.4.7         perl@5.36.0              xrandr@1.5.0
  g4photonevaporation@5.7             libunwind@1.6.2       perl-data-dumper@2.173   xtrans@1.3.5
  g4pii@1.3                           libx11@1.7.0          pigz@2.7                 xz@5.2.7
  g4radioactivedecay@5.6              libxau@1.0.8          pkgconf@1.8.0            zlib@1.2.13
  g4realsurface@2.2                   libxcb@1.14           py-mako@1.2.2            zstd@1.5.2
  g4saiddata@2.0                      libxcrypt@4.4.33      py-markupsafe@2.1.1
  gdbm@1.23                           libxdmcp@1.1.2        py-pip@22.2.2
  geant4@11.0.3                       libxext@1.3.3         py-setuptools@65.5.0
  ==> 97 installed packages

To load a package, use the spack load command. For example:

  Singularity> spack load geant4

Note that this may pull in other dependency packages. For example, python@3.10.8 will be loaded by the above, superseding the Ubuntu 22.04 system installed python 3.10.6.
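As a minimal example session (the versions are taken from the listing above; adjust them to whatever spack find reports on your system, and note that spack find --loaded requires a reasonably recent spack):

  Singularity> spack load geant4@11.0.3 %gcc@11.3.0   # load a specific version built with a specific compiler
  Singularity> spack find --loaded                    # list everything currently loaded
  Singularity> spack unload geant4                    # unload it again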


=== Using via host OS ===

WARNING: Using spack directly from the host OS is deprecated. It is recommended that you use it from a container.

The recommended way to set up your environment is with one of the following:

  [bash]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.sh  lmod gcc/9.3.0
  [tcsh]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.csh lmod gcc/9.3.0

Note that the above may take a few seconds to complete, but it sets up a user-friendly package naming scheme for "module load". If you want quicker startup and are willing to live with package names that include long hashes, then source the script with no arguments:

  [bash]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.sh
  [tcsh]  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/spack_env.csh

Other useful commands:

  module avail                # List available packages
  module load packagename     # Load a package (optionally specify version number)
  module unload packagename   # Unload a package that was previously loaded
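A typical session might look like the following (the package and version names are only illustrative; use whatever module avail actually reports):

  module avail                # see what is available
  module load gcc/9.3.0       # load a compiler so the packages built with it become visible
  module load cmake           # load a package, taking the default version
  module list                 # show everything currently loaded
  module unload cmake         # unload it again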

The following operating systems are supported:

{| class="wikitable"
|+ Supported Operating Systems
|-
! OS !! support start date !! status
|-
| ubuntu/22.04 || December 29, 2022 ||
|-
| centos/7.7.1908 || March 31, 2021 || deprecated
|-
| centos/8.3.2011 || March 31, 2021 || deprecated
|-
| ubuntu/21.04 || March 31, 2021 || deprecated
|}

== CVMFS Client Configuration ==

If you are working on the JLab ifarm computers, then CVMFS is already installed and configured. There is nothing else you need to do. CVMFS may also already be available on many remote HPC sites (e.g. NERSC). Check the site's specific documentation or simply look for the /cvmfs/oasis.opensciencegrid.org directory.

To mount the public, read-only CVMFS volume that contains the pre-built binaries see the instructions in one of the following sections for your specific platform.

The most up to date instructions on installing and configuring the CVMFS client software can be found on the CVMFS website.


=== Linux ===

Here are instructions for installing the CVMFS client on a CentOS or RedHat system (personal laptop or desktop).

1. Install the pointer to the CVMFS repo and then install cvmfs itself. After it is installed, generate a default config file.

  sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
  sudo yum install -y cvmfs
  cvmfs_config setup

2. Create a config file /etc/cvmfs/default.local with the following content (you need to do this with sudo):

 CVMFS_REPOSITORIES=oasis.opensciencegrid.org
 CVMFS_HTTP_PROXY=DIRECT
 CVMFS_CLIENT_PROFILE=single

3. Restart the autofs service

  systemctl restart autofs

4. Make the mount point and mount the cvmfs disk

 sudo mkdir -p /cvmfs/oasis.opensciencegrid.org
 sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
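To verify that the client is working, something like the following should succeed (cvmfs_config probe is part of the standard client installation):

  cvmfs_config probe oasis.opensciencegrid.org
  ls /cvmfs/oasis.opensciencegrid.org/jlab/epsci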

=== Mac OS X ===

To use CVMFS on Mac OS X, you need to install the MacFUSE package and then the cvmfs package. You should then reboot so everything will load properly. The step-by-step instructions follow.

1. Download and install the macFUSE package

2. Download and install the cvmfs package with the following commands. (Note that downloading the cvmfs package via curl apparently avoids a signature security issue on Mac OS X that you would get if it were downloaded via a web browser. Don't ask me how.)

 curl -o ~/Downloads/cvmfs-2.7.5.pkg https://ecsft.cern.ch/dist/cvmfs/cvmfs-2.7.5/cvmfs-2.7.5.pkg
 open ~/Downloads/cvmfs-2.7.5.pkg

3. Create a config file /etc/cvmfs/default.local with the following content (you need to do this with sudo):

 CVMFS_REPOSITORIES=oasis.opensciencegrid.org
 CVMFS_HTTP_PROXY=DIRECT

4. Restart the computer

5. Create the mount point and mount oasis with:

 sudo mkdir -p /cvmfs/oasis.opensciencegrid.org
 sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org

If it all works you should see something like this:

 >sudo mount -t cvmfs oasis.opensciencegrid.org /cvmfs/oasis.opensciencegrid.org
 CernVM-FS: running with credentials 10000:10000
 CernVM-FS: loading Fuse module... done
 CernVM-FS: mounted cvmfs on /Users/Shared/cvmfs/oasis.opensciencegrid.org

=== Docker ===

There are actually two options for using CVMFS inside a Docker container:

  1. Install CVMFS on the host and simply bind the /cvmfs directory to the same directory inside the container
  2. Run the CVMFS software inside the container and mount it there.

Option 1 is preferred since any caching of the files is done by the host and so does not disappear when the container goes away. It also can be used with any image and does not require another image to be created with the CVMFS software installed. To implement option 1, first mount CVMFS on the host using the above instructions for your host platform. Then, when you start the container, give the docker command an argument of -v /cvmfs:/cvmfs.
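For example, assuming CVMFS is already mounted on the host (the image name below is only an illustration; any image will do):

  docker run -it --rm -v /cvmfs:/cvmfs jeffersonlab/epsci-ubuntu:22.04 bash
  # then, inside the container:
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/ubuntu/22.04/share/spack/setup-env.sh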

Option 2 can be convenient if you have trouble getting CVMFS working on the host. There are actually two methods here. One is to use the pre-made Docker container as described in the CVMFS documentation. You may create an image based on this or even use it as-is to supply /cvmfs to the host and then use option 1 above.

The second method is to create a new image from scratch containing the necessary software. This method has worked in the past, though there may be easier ways of doing it today. Here are the instructions in case all of the other methods above fail.

Unfortunately, there are a couple of steps that cannot be done when the image is created and must instead be performed when the container is started. A working example with some comments can be found here:

https://github.com/faustus123/hdcontainers/tree/master/Docker_cvmfs


== Running untethered (no CVMFS) ==

Running untethered means installing the packages on your local computer so you can still run the software even with no internet connection. It is stated up front that this is unlikely to work for numerous reasons, but for those who like punishing themselves, here is some info that may help get you going. It goes without saying that none of this is recommended.

The main issue with installing locally is that many packages build their installation paths into their installed scripts and binaries. While spack does have mechanisms to try and fix this, they can fail if the directory path is either too long or too short. Your best chance of success is to create a local directory path that matches exactly what it would be if /cvmfs were mounted. Here are some example instructions:

  mkdir -p /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/
  git clone --depth 1 https://github.com/spack/spack.git /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh # or setup-env.csh
  spack mirror add jlab-public https://spack.jlab.org/mirror
  spack install -f -o -u clhep   # This should install "CLHEP" locally using the pre-built binaries 
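If the install succeeds, a quick sanity check might look like this (clhep-config is a helper script that CLHEP installs):

  spack find clhep         # should now show the locally installed copy
  spack load clhep
  clhep-config --version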




= Administration of the SPACK Repository =

The following sections describe various aspects of creating and managing the JLab SPACK repository. A number of choices were made in how this was set up, so they are documented here since they may not all be obvious from simply looking at the directory structures and configuration files.

Perhaps one of the most important pieces of information is that scripts and tools used to help us maintain spack at JLab are kept in a github repository:

  https://github.com/JeffersonLab/epsci-spack/tree/main/admin

which is checked out on the CUE in /scigroup/spack/admin.

== SPACK Version History ==

{| class="wikitable"
|+ centos7
|-
! date !! active tag !! notes
|-
| 6/8/2021 || v0.16.0 || issues with util-linux-uuid after upgrading to v0.16.2. This is closest tag to what we had working before so testing it.
|-
| 6/5/2021 || v0.16.2 ||
|-
| 1/10/2021 || develop || initial version used
|}

{| class="wikitable"
|+ centos8 (deprecated)
|-
! date !! active tag !! notes
|-
| 1/11/2021 || develop || initial version used
|}

== Organizational Overview ==

The organization of the spack binaries is as follows:

  1. Packages are built using singularity containers
    • Containers bind the /scigroup/cvmfs subdirectory to be at /cvmfs/oasis.opensciencegrid.org/jlab inside the container (see the sketch after this list)
    • This allows absolute paths that start with /cvmfs to be writable in the build/install process
    • The /scigroup/cvmfs/epsci directory is exported to CVMFS so it can be mounted read-only from anywhere
    • The export is done every 4 hours via cronjob. Thus, newly built packages will not be immediately accessible.
      • Wes Moore set this up and can increase frequency if needed.
  2. A separate spack repository is maintained for every platform (e.g. centos/7.7.1908 is separate from centos/8.3.2011)
    • This was a choice made on our end to segregate the binaries and make it easier to add and drop support for platforms in the future.
  3. In addition to the global spack repository, we also include the eic-spack and epsci-spack repositories.
    • This allows us to pull from the eic-spack package configurations and maintain our own package configurations.
  4. Users will access the software via the /cvmfs directory.
    • The SciComp computers (e.g. ifarm1901) all mount /cvmfs
    • Users can also install the CVMFS client on their personal laptop or desktop to access the software.
  5. The packages are exported to a build cache accessible from https://spack.jlab.org/mirror
    • We do this only because it is simple and doesn't cost us anything significant. We discourage its use and may remove it in the future.
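As a sketch of the binding described in item 1 above (the image file name here is only illustrative; the actual admin scripts described later set this up for you):

  singularity shell \
      -B /scigroup/cvmfs:/cvmfs/oasis.opensciencegrid.org/jlab \
      /scigroup/spack/mirror/singularity/images/epsci-centos-7.7.1908.img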

== Setting up a new platform ==

A platform here is defined as the OS name and version, e.g. almalinux:9.2-20230718. The specific versions are chosen based on official Docker images maintained by the OS vendors on Docker Hub. For platforms corresponding to CUE machines, the exact tags used are selected to be as close as possible to what is being used on the CUE.

The basic steps are to create an Apptainer (formerly Singularity) image, then use it to set up a new spack instance (a helper script is available for this). Packages will then be built with the native compiler for the platform. Optionally, other compiler versions may be built using spack, and those compilers can then be used to build additional versions of the spack packages.

The following sections describe these steps in some detail.

=== Creating a new Apptainer Image ===

For the purposes of this system, the Apptainer images used for building packages are derived from Docker images. This ensures that either Docker or Apptainer can be used to build packages with spack. Thus, if someone needs a convenient sandbox to work with locally they can choose the container system that is most convenient for them. Docker images we create are posted on Docker Hub where Apptainer can easily pull them. (Docker images cannot be easily created from Apptainer images.)

The Dockerfiles used to create the Docker images are kept in the GitHub repository "epsci-containers". They are also copied into the image itself, so one can always access the Dockerfile used to create an image via /container/Dockerfile.* inside a container. The Docker images are created with only a few system software packages installed: mainly a C++ compiler, version control tools (e.g. git and svn), python, and a couple of other tools needed for building packages (e.g. cmake). Below is an example of a Dockerfile.

EXAMPLE Dockerfile:

 #--------------------------------------------------------------------------
 # almalinux build environment
 #
 # This Dockerfile will produce an image suitable for compiling software.
 # The image will have a C/C++ and Fortran compiler as well as python.
 # It will also contain version control software (e.g. git and svn).
 #
 # The commands below are simplified by setting these environment variables:
 #   export MYOS=almalinux:9.2-20230718
 #   export MYOS_=almalinux-9.2-20230718
 #
 # To use this most effectively:
 #   docker run -it --rm jeffersonlab/epsci-almalinux:9.2-20230718 cat /container/dsh | tr -d "\r" > dsh
 #   chmod +x ./dsh
 #   ./dsh jeffersonlab/epsci-almalinux:9.2-20230718
 #--------------------------------------------------------------------------
 # These instructions are for a multi-architecture build (see below for single):
 #   docker buildx create --name mybuilder
 #   docker buildx use mybuilder
 #   docker buildx inspect --bootstrap
 #   docker buildx build --platform linux/arm64,linux/amd64 -t jeffersonlab/epsci-${MYOS} --push -f Dockerfile.${MYOS_} .
 #
 # To get this locally for the local architecture, pull it from dockerhub:
 #   docker pull jeffersonlab/epsci-${MYOS}
 #--------------------------------------------------------------------------
 # These instructions are for the classic single architecture build:
 #   docker build -t epsci-${MYOS} -t jeffersonlab/epsci-${MYOS} -f Dockerfile.${MYOS} .
 #   docker push jeffersonlab/epsci-${MYOS}
 #--------------------------------------------------------------------------
 # On ifarm:
 #   cd /scigroup/spack/mirror/singularity/images
 #   apptainer build epsci-${MYOS_}.img docker://jeffersonlab/epsci-${MYOS}
 #   cp -rp epsci-${MYOS_}.img /scigroup/cvmfs/epsci/singularity/images
 #--------------------------------------------------------------------------
 FROM almalinux:9.2-20230718
 
 # Python3 requires the timezone be set and will try and prompt for it.
 ENV TZ=US/Eastern
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
 
 # Install compiler and code management tools
 RUN dnf -y groupinstall 'Development Tools' \
  && dnf -y install --allowerasing gcc-gfortran python3 git subversion curl which \
  && dnf clean all
 
 COPY dsh /container/dsh
 COPY Dockerfile.almalinux.9.2-20230718 /container/Dockerfile.almalinux.9.2-20230718
 RUN ln -s /root /home/root
 RUN ln -s /root /home/0
 
 CMD ["/bin/bash"]



To create an Apptainer image, one first needs to create a Docker image. Thus, one needs access to a computer with Docker installed. This generally needs to be a personal desktop or laptop since Docker requires root access and is therefore not available on public machines like ifarm. (Incidentally, apptainer also requires root privileges in order to build an image from a recipe, but not if just pulling from an existing Docker image.) Here is an example of the steps you might go through if creating an image for a version of almalinux. This assumes you are starting on a computer with Docker installed and running.

NOTE: These instructions build a multi-architecture image for both amd64 and arm64 that gets pushed directly to Dockerhub.

  1. export MYOS=almalinux:9.2-20230718
  2. export MYOS_=almalinux.9.2-20230718
  3. git clone https://github.com/JeffersonLab/epsci-containers
  4. cd epsci-containers/base
  5. cp Dockerfile.ubuntu.22.04 Dockerfile.${MYOS_}
  6. edit Dockerfile.${MYOS_} to replace the version numbers with the new ones. They appear in a few places so better to do global replace
  7. docker buildx create --name mybuilder
  8. docker buildx use mybuilder
  9. docker buildx inspect --bootstrap
  10. docker buildx build --platform linux/arm64,linux/amd64 -t jeffersonlab/epsci-${MYOS} --push -f Dockerfile.${MYOS_} .
  11. ssh ifarm
  12. cd /scigroup/spack/mirror/singularity/images
  13. apptainer build epsci-${MYOS_}.img docker://jeffersonlab/epsci-${MYOS}
  14. cp -rp epsci-${MYOS_}.img /scigroup/cvmfs/epsci/singularity/images
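Before building packages with the new image, it is worth a quick sanity check that the expected tools are present (the exact versions reported will depend on the OS):

  apptainer shell /scigroup/cvmfs/epsci/singularity/images/epsci-${MYOS_}.img
  Apptainer> gcc --version
  Apptainer> git --version
  Apptainer> python3 --version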

=== Initializing the new platform configuration ===

There are some pitfalls that are easy to fall into when setting up a new platform, particularly if you want to build using a non-default compiler. To ameliorate this, several administration scripts have been written to make the process as turnkey as possible.

Step-by-step instructions are below, but you may find some useful details in the comments at the top of the script: /scigroup/spack/admin/mnp.sh

To set up the initial directory and build some of the base packages do the following. Note that this assumes an Apptainer image exists in the standard location for the platform version you are setting up (see Apptainer section above for details).

In this example we assume we are building for a new platform named "almalinux:9.2-20230718".

  1. newgrp spack # start new shell with spack as the default group
  2. cd /scigroup/spack/admin
  3. cp make_new_platform_centos7.9.sh make_new_platform_almalinux9.2.sh
  4. <edit the settings at the top of the new make_new_platform_almalinux9.2.sh script>
  5. ./make_new_platform_almalinux9.2.sh

==== Potential Issues ====

  • I had an issue with an incompatible compiler and OS, which was due to the almalinux9.2 compiler gcc11.3.1 being installed with "operating_system: almalinux9" instead of "operating_system: almalinux9.2". I'm not 100% sure where to fix this upstream at the moment, so the easy solution is to edit the file /home/davidl/.spack/linux/compilers.yaml and fix it there.
  • When trying to set up centos7.9.2009 I ran into permission denied errors when it started trying to install packages built with the 9.3.0 compiler. This turned out to be as simple as manually creating the directory from inside a singularity shell with:
  mkdir -p ${spack_top}/opt/spack/linux-*-x86_64/gcc-${spack_compiler}

Note that I added the above to the mnp.sh script so it should be done automatically. (I actually haven't tested it yet, so there may also be a bug in it!)

  • The ifarm was unable to reach the website for the ca-certificates-mozilla package that was a dependency of lmod. The easiest way to handle this was to download it on my jana2 desktop using the spack mirror command like this:
  ssh jana2
  git clone -c feature.manyFiles=true https://github.com/spack/spack.git
  source spack/share/spack/setup-env.csh
  spack mirror create -D -d spack-mirror-2022-10-28 lmod
  tar czf spack-mirror-2022-10-28.tgz spack-mirror-2022-10-28
  scp spack-mirror-2022-10-28.tgz ifarm1801:/work/epsci
  
  ssh ifarm1801
  newgrp spack
  cd /scigroup/spack
  tar xzf /work/epsci/spack-mirror-2022-10-28.tgz

Then, in a singularity shell on ifarm:

  spack mirror add local_filesystem file:///scigroup/spack/spack-mirror-2022-10-28

At this point re-run the ./make_new_platform_rocky8.sh script and it should install everything for lmod OK.

To add a list of packages to the spack instance, write them to a text file with one line per package specification. An example is jlabce-2.4.txt. (n.b. this can also be done via a yaml file, which may eventually be used to replace this system)

  1. cd /scigroup/spack/admin
  2. cp add_to_platform_centos7.sh add_to_platform_rocky8.sh
  3. <edit the settings at the top of the new add_to_platform_rocky8.sh script>
  4. ./add_to_platform_rocky8.sh jlabce-2.4.txt
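For reference, the package list file is plain text with one spec per line. The lines below are hypothetical examples shown only to illustrate the format (the real jlabce-2.4.txt differs):

  clhep@2.4.1.3 %gcc@9.3.0 target=x86_64
  xerces-c@3.2.3 %gcc@9.3.0 target=x86_64
  geant4@10.6.2 %gcc@9.3.0 target=x86_64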

== Setting up the module system (LMOD) ==

The mnp.sh script should already set up the LMOD system configuration when a new platform is created. This section documents some of what was done and why.

We would like most users to be able to interact with the spack packages using the standard "module load" command. Spack has nice support for this, though there are options for how it is set up, and we'd like to be consistent across supported platforms.

First off, we use the LMOD system as it supports hierarchical module files. This allows us to configure the system so that when a specific compiler is loaded, only packages corresponding to that compiler are listed. This should make it easier on the user to navigate and to avoid loading incompatible packages. We also configure it to present packages using the {package}/{version} naming scheme. This is what is used by /apps on the CUE which will make the spack packages integrate more seamlessly with those.
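In practice, the hierarchy behaves roughly like this (illustrative only):

  module avail            # initially only the compilers (e.g. gcc/9.3.0) and core modules are listed
  module load gcc/9.3.0
  module avail            # now the packages built with gcc 9.3.0 appear, named as {package}/{version}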

=== Modules configuration file ===

The ${SPACK_ROOT}/etc/spack/modules.yaml configuration file must be created and have the following content added. This is mostly based on an example given in the spack documentation under "Hierarchical Module Files". Descriptions of the settings are given below.

EXAMPLE modules.yaml:

 modules:
   enable::
     - lmod
   lmod:
     core_compilers:
       - 'gcc@4.8.5'
     hierarchy:
       - mpi
     hash_length: 0
     whitelist:
       - gcc
     blacklist:
       - '%gcc@4.8.5'
       - 'arch=linux-centos7-zen2'
     all:
       filter:
         environment_blacklist:
           - "C_INCLUDE_PATH"
           - "CPLUS_INCLUDE_PATH"
           - "LIBRARY_PATH"
       environment:
         set:
           '{name}_ROOT': '{prefix}'
     projections:
       all:          '{name}/{version}'
       ^lapack:      '{name}/{version}-{^lapack.name}'
  • The core_compilers section should list the system compiler as the default.
  • hash_length: 0 removes the spack hash from package names
  • whitelist ensures all gcc compilers are available. (Once one of those is loaded, other packages will appear.)
  • blacklist excludes packages built with the default system compiler (n.b. whitelist overrides this so other compilers will still be listed)
    • the arch= blacklist line excludes packages built specifically for the zen2 microarchitecture instead of generic x86_64. Those packages were actually built by mistake and may be removed altogether. This is a nice way, though, of obscuring them from view.
  • The environment_blacklist filter was just copied from the spack example in the documentation. We may want to remove it. I did not recall build systems using those variables so just left it in.
  • The environment: set: section adds an environment variable for every package that is {package}_ROOT so the root directory of the package can be easily obtained, even if the package itself does not define such a variable.
  • The projections section defines the module naming scheme. The line for lapack was left in from the spack tutorial example.

== Building a new package ==

We anticipate getting user requests for new packages. Building packages that already have a spack configuration should be fairly straightforward. There are a couple of questions to answer before you do it, though:

  1. Is this package something we should be supporting via spack?
  2. Should this be part of a spack Environment?
  3. What compilers/platforms should this be built for?

Once you have answers for these then you can proceed.

The best way to handle this is to use the appropriate add_to_platform_X.sh script with the exact package specification written in a text file. If it is not being added to an existing spack Environment then it should be added to the misc_packages.txt file for archival purposes. Make sure to commit any changes to github.

Details on this can be seen at the bottom of the section above on adding packages to a new platform.

=== Building a new package manually ===

If you have trouble using an add_to_platform_X.sh script then you can build the package manually. This is useful if you need to debug why a package is failing to build. If you decide you need to build manually, you can do so by launching a singularity shell for the appropriate platform and running the spack install command. Admin scripts are available that launch the singularity container with all of the correct volume bindings for you:

  > ssh ifarm1901
  > /scigroup/spack/admin/singshell_centos7.sh
  source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
  Singularity>

The last line indicates you are now running in a singularity shell. The line before it that starts with "source /cvmfs/..." SHOULD BE COPIED AND EXECUTED WITHIN THE SHELL. This is important since the setup-env.sh script is not run automatically when the singularity shell is created. Sourcing it is necessary to set up your environment for working with the spack instance.

continuing the example ...

  Singularity> source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
  Singularity> spack install -j16 clhep@2.4.1.3 %gcc@9.3.0 target=x86_64 

The above will build the clhep package version 2.4.1.3 using the GCC 9.3.0 compiler and make the binaries for the generic x86_64 target. Note that by default spack will build for the specific microarchitecture of the processor on the machine where you are compiling. Adding target=x86_64 ensures all packages are built the same regardless of the specifics of the CPU.

It should also be noted that, if there are dependencies for the package you are building, the specific versions should be given in the package specification. The syntax for this is beyond the scope of this document and the spack documentation should be consulted for details.
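As a hedged illustration only (the version numbers are arbitrary and the spack documentation is the authority on the syntax), dependency versions are pinned with the ^ operator in the spec:

  Singularity> spack install -j16 geant4@11.0.3 %gcc@9.3.0 ^clhep@2.4.4.0 ^xerces-c@3.2.3 target=x86_64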

== Updating the buildcache aka mirror ==

The build cache is only useful if someone is trying to run untethered. Individual packages (tarballs) can be made from existing spack package builds with the buildcache create command. It is also possible to generate buildcache packages from all packages in a repository or (probably) an Environment with a single command. Look into the spack help for details. Once the buildcache packages are built, you need to rebuild the index. Here are some specific commands:

  1. spack buildcache create -r -a -u -d . zlib%gcc@10.2.1
  2. spack buildcache update-index -k -d /scigroup/spack/mirror
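If you want to push everything that is currently installed rather than a single spec, one possible (untested here) approach is to feed the full list of installed hashes to the same command:

  cd /scigroup/spack/mirror
  spack buildcache create -r -a -u -d . $(spack find --format '/{hash}')
  spack buildcache update-index -k -d /scigroup/spack/mirror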

== Misc. Notes ==

Here are some miscellaneous notes to catch useful tidbits not covered in the previous sections.

=== Mac OS X + Docker ===

The default disk format for Mac OS X is case-insensitive (though case-preserving), so it gives the illusion of being case sensitive. This works fine except when you have two files in the same directory whose names differ only by case. This becomes an issue if you are building spack packages for Linux using Docker and are doing so in a directory on the local disk (bound into the Docker container). I saw this with the ncurses package failing with errors related to E/E_TERM and A/APPLE_TERM (I may not be remembering the exact file names correctly).

One work-around is to create a disk image using Disk Utility and choose the format to be "Mac OS Extended (Case-sensitive, Journaled)". Mount the disk image and bind that into the docker container. This will give you a case-sensitive, persistent disk (i.e. it survives after the container exits).
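The same kind of disk image can also be created from the command line with hdiutil (the size, name, and mount path below are arbitrary):

  hdiutil create -size 50g -fs "Case-sensitive Journaled HFS+" -volname SpackBuild ~/spack-build.dmg
  hdiutil attach ~/spack-build.dmg          # mounts at /Volumes/SpackBuild
  docker run -it --rm -v /Volumes/SpackBuild:/work <your-image> bash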

If you do not care about persistence, then just build in a directory in the Docker container's temporary file system. You can always save to a buildcache from there and copy just the buildcache file out of the container.