Behind the Spack scenes

With Qlustar 13, we moved our toolchain support to Spack.

This page collects best practices, tricks, and pitfalls encountered while getting all the prerequisites for our in-house code running.

  • Compile scientific software on the cluster nodes themselves, since Spack is picky about the build architecture. You can append arch=linux-ubuntu22.04-zen2 to your build spec, but that leads to pkgconfig errors rather than actual cross-compilation.
  • Build parallelism is great, but Spack's default is conservative. It can't hurt to use spack install -j80 or similar.
  • Unless you're sure about the build dependency chain, use spack install --fail-fast, which aborts the whole installation at the first build failure instead of grinding on through the remaining packages.
  • Avoid having Anaconda4 or similar in your PATH. This can induce cross-dependencies between multiple third-party package managers (which is about as bad as it sounds).
  • Disk space: Spack builds in /tmp, and you won't get far with 1 GB of tmpfs on the cluster nodes. RAM is cheap; mount -o remount,size=50G /tmp is your friend.
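The flags above combine into a single command line. Here is a minimal sketch, assuming a placeholder spec (octave) and that you are already on a build node of the target architecture; it only echoes the composed command so you can review it before running:

```shell
# Sketch: compose the install command from the tips above.
# "octave" is a placeholder spec; swap in your actual package.
jobs=$(nproc)                 # default parallelism is conservative
[ "$jobs" -gt 80 ] && jobs=80 # cap at the -j80 suggested above
cmd="spack install --fail-fast -j $jobs octave"
echo "$cmd"                   # review, then run it
```

Remember to enlarge the tmpfs backing /tmp first (mount -o remount,size=50G /tmp, as root) before kicking off large builds.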

Setting: the code runs in Octave, relying on a non-standard MPI and on Octave MPI bindings. The build process is described in the README of that repo.
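Instead of ad-hoc install commands, a stack like this can also be pinned in a Spack environment. A minimal spack.yaml sketch, where the generic package names octave and openmpi are placeholders standing in for the exact specs from the repo README:

```yaml
# spack.yaml -- hypothetical environment sketch; package names are
# placeholders, not the exact specs from the repo README.
spack:
  specs:
  - octave
  - openmpi          # stand-in for the non-standard MPI actually required
  concretizer:
    unify: true      # concretize all specs together into one consistent graph
```

With this file in the current directory, spack env activate . followed by spack install builds the whole environment in one go.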

  • compflu/backstage/spack.txt
  • Last modified: 2023-06-29 16:16
  • by j.hielscher