Lukas Mosimann edited this page Nov 27, 2018 · 1 revision

CTest Integration

Summary

If you are not interested in the MPI tests, the easiest way to execute all tests locally is:

ctest . -L "regression_*"
ctest . -L "unittest_*"

On SLURM machines:

source machine.sh
salloc -N 1
srun ctest .

or, even shorter:

source machine.sh
srun -N1 ctest .

Execution

CTest integration can run MPI tests and normal tests. On a local machine, the usage is very simple. From any build directory, you can execute the tests using

ctest .
ctest . -j8 # run tests in parallel

Note that CTest will take care of not running too many jobs at once. For example, when running with -j8, CTest will try to run either 8 single-core tests, or 2 MPI tests with 4 cores each.
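As a sketch of that bookkeeping (not CTest's actual code): CTest packs tests by their declared processor count, which in standard CTest is the PROCESSORS test property (1 if unset). Assuming each MPI test declares 4 processors, the concurrency with -j8 works out as:

```shell
# Hedged arithmetic sketch of CTest's slot accounting, not CTest itself.
J=8                # the value passed to ctest -j
PROCS_PER_TEST=4   # assumed PROCESSORS property of each MPI test
echo $(( J / PROCS_PER_TEST ))   # MPI tests CTest can run concurrently
# → 2
```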

CTest has very basic support for adding labels to tests. The usage of labels is as follows:

ctest . --print-labels  # print all supported labels
ctest . -L "target_x86" # filter labels using a regex

MPI tests are always run using the MPI wrapper detected when running use_package(MPI). The MPI wrapper is stored in MPITEST_EXECUTABLE. Note: There are also variables prefixed with MPIEXEC_; those are not relevant!
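To see which wrapper was detected, you can look it up in the CMake cache of your build directory. The sketch below is self-contained for illustration only: it writes a mock CMakeCache.txt with assumed example values (/usr/bin/srun, /usr/bin/mpiexec); in a real build directory the file already exists after configuration.

```shell
cd "$(mktemp -d)"

# Mock cache entries for illustration; a configured build directory
# already contains a real CMakeCache.txt with the detected values.
cat > CMakeCache.txt <<'EOF'
MPITEST_EXECUTABLE:FILEPATH=/usr/bin/srun
MPIEXEC_EXECUTABLE:FILEPATH=/usr/bin/mpiexec
EOF

# The wrapper CTest uses for MPI tests:
grep '^MPITEST_EXECUTABLE' CMakeCache.txt
# → MPITEST_EXECUTABLE:FILEPATH=/usr/bin/srun
# MPIEXEC_* entries also exist in the cache, but are not the relevant ones.
```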

Cray machines

As our machines are using SLURM, life gets slightly more difficult due to the following reasons:

  1. Tests might need a certain environment
  2. Tests might need to be run on the target node
  3. MPI Tests need to be run with one srun per test

Regarding the environment, we added four variables to CMake: [MPI_]TEST_CUDA_ENVIRONMENT contains the list of variables set when running CUDA tests (with or without MPI), and [MPI_]TEST_HOST_ENVIRONMENT contains the list of variables set when running non-CUDA tests (with or without MPI). These values are initialized during the first run of CMake from the environment variables [MPI_][HOST,CUDA]_JOB_ENV. Note: you will only get the proper behaviour if you source the environment script before calling CMake for the first time.
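As a sketch of that flow, the first configure could be prepared as follows. Only the variable names are from this page; the example values and the exact list syntax the variables expect are assumptions:

```shell
# Assumed example values; set these BEFORE the first cmake run, since the
# cached [MPI_]TEST_*_ENVIRONMENT variables are seeded only once.
export HOST_JOB_ENV="OMP_NUM_THREADS=8"
export CUDA_JOB_ENV="CUDA_VISIBLE_DEVICES=0"
export MPI_HOST_JOB_ENV="OMP_NUM_THREADS=8"
export MPI_CUDA_JOB_ENV="CUDA_VISIBLE_DEVICES=0"

echo "$HOST_JOB_ENV"    # the value the first cmake run would pick up
# cmake <source-dir>    # first configure reads these into the cache
```

In practice, sourcing the machine's environment script before CMake (as shown in the Summary) takes care of exporting these.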

Regarding the other two reasons, we offer two solutions:

  1. When you want to run only non-MPI tests, the suggested approach is to allocate a node and then call CTest through SLURM, i.e.:

    source machine.sh
    salloc -N1
    srun ctest . -L "unittest_*"
    srun ctest . -L "regression_*"
    # srun ctest . -L "mpitest_*" ==> THIS WILL FAIL
    
  2. When you want to run all tests (MPI and non-MPI) through one ctest call, srun has to be invoked once per test; otherwise we get into trouble with multiple calls to MPI_Init.

    In order to do that, you need to set the variable TEST_USE_WRAPPERS_FOR_ALL_TEST to ON. CTest will then essentially execute srun <test> for each registered test.

    source machine.sh
    salloc -N1
    ctest . # CTest will do srun internally.
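Setting the variable is ordinary CMake cache configuration; a hedged sketch (the variable name is taken from this page, the rest is standard cmake usage):

```
# From the already-configured build directory:
cmake -DTEST_USE_WRAPPERS_FOR_ALL_TEST=ON .
```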