SPACK Mirror on JLab CUE

Overview

SPACK is a package manager used to maintain multiple versions of software compiled at JLab. The spack manager takes care of many stages of managing the software packages used for the scientific program. It keeps track of multiple software versions built using multiple compilers and even with multiple dependency lists. For example, you can have a version of gemc that uses root v6.18/00, GEANT 10.1.02, and was built with the gcc 9.2.0 compiler. You can also build another version which changes any or all of those version numbers and spack will happily organize it.

Mostly, we want to use spack to centrally manage some standard builds of commonly used software packages. This saves every researcher from having to build their own copies, which can be costly in storage, computing, and their own time. This includes external, third-party packages like CLHEP as well as internal software like gemc.

There are three primary use cases for the software built with the spack system:

  1. Users on the JLab CUE want to use the pre-built binary versions on JLab computers
  2. Users running offsite want to access the binaries through /cvmfs
  3. Users want to install the pre-built binaries on their local computer so they can run untethered

Instructions for using the software in each of these modes are given in the sections below.

Using spack packages on the JLab CUE

  • module use /apps/modulefiles
  • source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.csh
  • module avail
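
Once module avail shows the spack-provided modules, a specific build can be loaded by name. The module name below is only an illustration (gcc 9.2.0 is one of the packages mentioned elsewhere on this page); use whatever names module avail actually lists:

 module load gcc/9.2.0
 which gcc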

Using spack packages offsite via CVMFS

  • source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.csh
  • module avail
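
Before sourcing the setup script offsite, it may be worth confirming that the CVMFS repository is actually mounted (this assumes the oasis.opensciencegrid.org repository is configured in the local CVMFS client):

 ls /cvmfs/oasis.opensciencegrid.org/jlab/epsci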

Installing the spack binary packages on your local computer

Installing the spack binary packages on your local computer with Docker

These are instructions for installing and using the binary packages with Docker. This is useful if your host OS is not one of the standard ones supported in the spack mirror.

These instructions install to a directory on the host machine so that the installs are persistent and don't disappear when the Docker container is shut down. They also use a tool from the container to try to use the same uid and gid as the current user on the host, so that the files and directories on the persistent disk are not owned by root.

First time setup

These commands only need to be run when first setting up the repository. The MYSPACK environment variable is used for convenience in these commands and is not used by any of the underlying packages.

NOTE: This method does *not* use CVMFS even though it uses a directory starting with /cvmfs inside the container. This is done to match the install directories of the pre-built binaries.

 setenv MYSPACK $PWD/myspack
 mkdir -p $MYSPACK
 docker pull jeffersonlab/epsci-centos:7.7.1908
 docker run -it --rm jeffersonlab/epsci-centos:7.7.1908 cat /container/dsh | tr -d "\r" > dsh
 chmod +x ./dsh
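 # dsh starts an interactive shell inside the container;
 # the remaining commands below are run from that container shell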
 ./dsh -cp -v ${MYSPACK}:/cvmfs/oasis.opensciencegrid.org/jlab  jeffersonlab/epsci-centos:7.7.1908
 git clone --depth 1 https://github.com/spack/spack.git /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908
 source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
 spack mirror add jlab-public https://spack.jlab.org/mirror
 spack buildcache install -u -o  gcc@9.2.0
 spack compiler find
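
After the first-time setup, the persistent installation under ${MYSPACK} can be reused by starting the container again and re-sourcing the spack environment. A minimal sketch of a follow-up session, reusing the same dsh invocation (this assumes dsh and the myspack directory are still in the current working directory):

 setenv MYSPACK $PWD/myspack
 ./dsh -cp -v ${MYSPACK}:/cvmfs/oasis.opensciencegrid.org/jlab  jeffersonlab/epsci-centos:7.7.1908
 source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/centos/7.7.1908/share/spack/setup-env.sh
 spack find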

Spack repository management

Organizational Overview

The organization of the spack binaries is as follows:

  1. Packages are built using singularity containers
    • Containers bind the /scigroup/cvmfs subdirectory to be at /cvmfs/oasis.opensciencegrid.org/jlab inside the container
    • This allows absolute paths that start with /cvmfs to be used in the build/install process
    • The /scigroup/cvmfs/epsci directory is exported to CVMFS so it can be mounted read-only from anywhere
  2. The CUE mounts CVMFS (under /cvmfs as is standard) so that CUE users can access the software there (i.e. not through /scigroup/cvmfs/epsci)
  3. The packages are exported to a build cache accessible from https://spack.jlab.org/mirror
    • They can also be accessed from file:///scigroup/spack/mirror if on a computer that mounts /scigroup
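
For example, on a machine that mounts /scigroup, the local copy of the build cache can be registered as a mirror instead of the web address (the mirror name jlab-local is arbitrary):

 spack mirror add jlab-local file:///scigroup/spack/mirror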

Creating a new Singularity Image

For the purposes of this system, the Singularity images used for building packages are derived from Docker images. This ensures that either Docker or Singularity can be used to build packages with spack. Thus, if someone needs to build another package, they can choose the container system most convenient for them. Docker images are posted on Docker Hub where Singularity can easily pull them. (Docker images cannot be easily created from Singularity images.)

The Dockerfiles used to create the Docker images are kept in the GitHub repository "epsci-containers". They are also copied into the image itself so one can always access the Dockerfile used to create an image via /container/Dockerfile.*. The Docker images are created with a few system software packages installed: mainly a C++ compiler, version control tools (e.g. git and svn), python, and a couple of other tools needed for building packages. Below is an example of a Dockerfile.

EXAMPLE Dockerfile:

#--------------------------------------------------------------------------
# ubuntu build environment
# 
# This Dockerfile will produce an image based on the one used for running
# at NERSC, PSC, and the OSG, but which can also be used to mount CVMFS
# using any computer. The main use case is to provide a simple way to
# mount and run software from /group/halld/Software on your local laptop
# or desktop.
#
# To use this most effectively:
#
#      docker run -it --rm jeffersonlab/epsci-ubuntu cat /container/dsh | tr -d "\r" > dsh
#      chmod +x ./dsh
#      ./dsh jeffersonlab/epsci-ubuntu
#
#--------------------------------------------------------------------------
#
#   docker build -t epsci-ubuntu:21.04 -t jeffersonlab/epsci-ubuntu:21.04 .
#   docker push jeffersonlab/epsci-ubuntu:21.04
#
#--------------------------------------------------------------------------   

FROM ubuntu:21.04
 
# Python3 requires the timezone be set and will try and prompt for it.
ENV TZ=US/Eastern
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Install compiler and code management tools
RUN apt -y update \
	&& apt -y install build-essential libssl-dev libffi-dev python-dev \
	&& apt -y install python python3 git subversion cvs curl

COPY dsh /container/dsh
COPY Dockerfile /container/Dockerfile
RUN ln -s /root /home/root
RUN ln -s /root /home/0

CMD ["/bin/bash"]


To create a singularity image, one first needs to create a Docker image. Thus, one needs access to a computer with Docker installed. This generally needs to be a personal desktop or laptop since Docker requires root access and is therefore not available on public machines like ifarm. (Incidentally, singularity also requires root privileges in order to build an image from a recipe, but not when just pulling from an existing Docker image.) Here is an example of the steps you might go through when creating an image for a new version of ubuntu. This assumes you are starting on a computer with Docker installed and running.

  1. git clone https://github.com/JeffersonLab/epsci-containers
  2. cd epsci-containers/base
  3. cp Dockerfile.ubuntu.21.04 Dockerfile.ubuntu.18.04
  4. edit Dockerfile.ubuntu.18.04 to replace the version numbers with the new ones. They appear in many places, so it is best to do a global replace
  5. docker build -t epsci-ubuntu:18.04 -t jeffersonlab/epsci-ubuntu:18.04 -f Dockerfile.ubuntu.18.04 .
  6. docker push jeffersonlab/epsci-ubuntu:18.04
  7. ssh ifarm
  8. module use /apps/modulefiles
  9. module load singularity
  10. cd /scigroup/spack/mirror/singularity/images
  11. singularity build epsci-ubuntu-18.04.img docker://jeffersonlab/epsci-ubuntu:18.04
  12. git clone https://github.com/spack/spack.git /scigroup/cvmfs/epsci/ubuntu/18.04

The last step above will clone a new spack instance that corresponds to the new image.
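
A quick sanity check of the new image and spack instance is to open a shell in it with the same bind mounts used for building (a sketch following the pattern of the build recipe in the next section, with the new 18.04 paths substituted):

 singularity shell -B /scigroup/cvmfs:/cvmfs/oasis.opensciencegrid.org/jlab -B /scigroup:/scigroup /scigroup/spack/mirror/singularity/images/epsci-ubuntu-18.04.img
 source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/ubuntu/18.04/share/spack/setup-env.sh
 spack compiler find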

Building a spack package with a Singularity (or Docker) container

The preferred method of building new packages is to use one of the ifarm computers with a singularity container from the /scigroup/spack/mirror/singularity/images directory. Any packages built should also be exported to the build cache so they are accessible for offsite installations. Below is an example recipe that builds zlib for the ubuntu 21.04 platform using the native gcc 10.2.1 compiler:

  1. ssh ifarm1901
  2. module use /apps/modulefiles
  3. module load singularity
  4. singularity shell -B /scigroup/cvmfs:/cvmfs/oasis.opensciencegrid.org/jlab -B /scigroup:/scigroup /scigroup/spack/mirror/singularity/images/epsci-ubuntu-21.04.img
  5. source /cvmfs/oasis.opensciencegrid.org/jlab/epsci/ubuntu/21.04/share/spack/setup-env.sh
  6. spack compiler find
  7. spack install zlib%gcc@10.2.1 target=x86_64
  8. cd /scigroup/spack/mirror
  9. spack buildcache create -r -a -u -d . zlib%gcc@10.2.1
  10. spack buildcache update-index -k -d /scigroup/spack/mirror

Be careful that the singularity image you use matches the spack root directory (i.e. where you source the setup-env.sh script).

You also want to specify the x86_64 target so generic binaries are built that do not contain optimizations for specific processors.

Finally, don't forget to run the last two commands above to add the package to the build cache and to update the index.
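
Once the package is in the build cache, it can be installed elsewhere from the mirror in the same way as the gcc example earlier on this page; for instance (using the zlib spec built in the recipe above):

 spack mirror add jlab-public https://spack.jlab.org/mirror
 spack buildcache install -u -o zlib%gcc@10.2.1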


Misc. Notes

Here are some miscellaneous notes on issues encountered when getting some packages to build:

Package: ncurses
Solution: spack install ncurses+symlinks target=x86_64
Notes: The ncurses package can fail to build due to "permission denied" errors related to /etc/ld.so.cache~. Internet wisdom says to build it with the symlinks option turned on. (See also the notes on building using Mac OS X + Docker below.)

Package: automake
Solution: build into a spack directory with a very short absolute path
Notes: This error happens at the very end of the build when it tries to run the help2man utility on the automake and aclocal scripts. The failure occurs because the scripts contain a shebang at the top with a path longer than 128 characters. Spack actually has a fix for this that it automatically applies after install; however, the help2man tool is run by the automake build system before that fix is applied. To make the build succeed, use a spack root directory with a very short path (e.g. by binding the host working directory to something like "/A" in the singularity container). Then make sure to create the buildcache using the "-r" option so that it is relocatable. The buildcache can then be installed in any spack root directory, regardless of path length.

Mac OS X + Docker

The default disk format for Mac OS X is not case-sensitive. It automatically translates file and directory names to give the illusion that it is case-sensitive. This works fine except when you have two files in the same directory whose names differ only by case. This becomes an issue if you are building spack packages for Linux using Docker and are doing so in a directory from the local disk (bound in the Docker container). I saw this with the ncurses package failing with errors related to E/E_TERM and A/APPLE_TERM (I may not be remembering the exact file names correctly).

One work-around is to create a disk image using Disk Utility and choose the format to be "Mac OS Extended (Case-sensitive, Journaled)". Mount the disk image and bind that to the docker container. This will give you a case-sensitive, persistent disk (i.e. one that survives after the container exits).
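
The same kind of disk image can also be created from the command line with hdiutil instead of the Disk Utility GUI; a rough sketch (the size and volume name here are arbitrary):

 hdiutil create -size 20g -fs "Case-sensitive Journaled HFS+" -volname spackwork spackwork.dmg
 hdiutil attach spackwork.dmg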

If you do not care about persistence, then just build in a directory in the Docker container's temporary file system. You can always save to a buildcache from there and copy just the buildcache file out of the container.

Mac OS X