Distributed Ensemble (MPI Support) #1090

Merged: 58 commits, Dec 16, 2023

Changes from all commits (58 commits)
6fbd3a8
CMake: Add FLAMEGPU_ENABLE_MPI option.
Robadob Jul 14, 2023
cb18d4f
MPI CUDAEnsemble functionality and a local machine test case.
Robadob Jul 19, 2023
d7b2491
MPI CUDAEnsemble error handling.
Robadob Jul 21, 2023
0828a43
Breaking: CUDAEnsemble::getLogs() now returns a map
Robadob Jul 26, 2023
993bced
BugFix: Ensemble log already exist exception message contained bad fi…
Robadob Jul 26, 2023
449908d
draft: move MPI_Finalize() to util::cleanup()
Robadob Jul 27, 2023
a922a06
Fix tests
Robadob Jul 27, 2023
bab928a
BugFix: Replace occurences of throw with THROW
Robadob Jul 28, 2023
129771d
Add MPI to requirements.
Robadob Jul 28, 2023
d9496a4
MPI ensemble telemetry
Robadob Jul 28, 2023
1ddcbc2
Pete fix for newer gcc
Robadob Sep 12, 2023
df6d423
fix comment.
Robadob Sep 12, 2023
6acbeba
OMP->MPI
Robadob Sep 12, 2023
6ef185e
amend/correct buffer logic for recieving GPU device name.
Robadob Sep 12, 2023
316513e
Rewrite todo pop? comment
Robadob Sep 12, 2023
b35534c
replace bad todo
Robadob Sep 12, 2023
7a810d0
Add MPI clarity to README.
Robadob Sep 13, 2023
eb05bce
Remove --no-mpi runarg, config var now exists as a compile time const…
Robadob Sep 22, 2023
74477ad
Split out MPI tests into a seperate suite
Robadob Sep 25, 2023
0c157c6
Draft MPI workflow.
Robadob Sep 25, 2023
cf11e15
tiny change to file to attempt to trigger CI
Robadob Sep 25, 2023
bac3ae2
Forgot to add abstract simrunner.
Robadob Sep 25, 2023
b234dff
Replace std::atomic_init
Robadob Sep 25, 2023
8eab93c
Update ensemble example to use logging and account for MPI runs.
Robadob Sep 25, 2023
c6a0c33
Warn about MPI link failures with CMake < 3.20.1
ptheywood Oct 2, 2023
f75a5a8
Install MPI from apt or source in CI.
ptheywood Oct 13, 2023
4e9a541
Warn at CMake Configure when mpich forces -flto.
ptheywood Oct 16, 2023
1f02012
Reduce MPI build matrix
ptheywood Oct 16, 2023
ec9be63
Add MPI restrictions to readme requirements
ptheywood Oct 16, 2023
f694414
Update .github/workflows/MPI.yml
Robadob Nov 1, 2023
67392ec
Update .github/workflows/MPI.yml
Robadob Nov 1, 2023
2f7002c
Reconfig MPI test fixture to scale workload with world size.
Robadob Nov 2, 2023
6500331
Rework progress printing
Robadob Nov 3, 2023
8faf040
fixup progress stuff
Robadob Nov 3, 2023
e3916f4
WIP. Found that after Rank 0 tells all runners to exit, it stops trac…
Robadob Nov 3, 2023
7b14f6c
Require more tests. But I think it should work.
Robadob Nov 3, 2023
ea5ac56
Tests now all pass.
Robadob Nov 6, 2023
4a4a6a1
Start of cleanup
Robadob Nov 6, 2023
51dcbd8
various cleanup, lint should now be fixed.
Robadob Nov 6, 2023
c9093eb
split out local err processing
Robadob Nov 6, 2023
03bfb21
lint fix
Robadob Nov 6, 2023
d3298ee
Fixes
Robadob Nov 7, 2023
9e83d71
Bugfix: Resolve crash when EnsembleConfig::devices is left empty
Robadob Nov 7, 2023
34e85c6
lint
Robadob Nov 7, 2023
302fa94
Duplicate error tests to force error on rank 0 and rank 1
Robadob Nov 7, 2023
245bf57
Fix MPIEnsemble init order
Robadob Nov 7, 2023
3fd69af
Remove unneeded printf
ptheywood Nov 10, 2023
474a056
Adjust MPI CI to explicitly build the tests_mpi and ensemble targets …
ptheywood Nov 21, 2023
7357153
Update src/flamegpu/simulation/CUDAEnsemble.cu
Robadob Nov 21, 2023
574f108
Rename MPIensemble getWorldRank and getWorldSize to queryMPIWorldRank…
ptheywood Nov 21, 2023
44eb064
Add static methods to get the shared memory rank and size from mpi in…
ptheywood Nov 21, 2023
b663ea0
Add detail::MPIEnsemble members for the group rank and size. Plus doc…
ptheywood Nov 21, 2023
c1660fe
Fix create_test_project cmake macro gpu arch init
ptheywood Nov 27, 2023
0fd20d3
Assign GPUs to MPI ranks per node, allowing more flexible MPI configu…
ptheywood Nov 21, 2023
7785326
Fix Debug builds of tests_mpi
ptheywood Dec 13, 2023
ad763cd
Fix pytest tests not updated for CUDASimulation::getLogs return type …
ptheywood Dec 13, 2023
7b2b176
Skip cleanup tests in python if MPI is enabled, as finalize can only …
ptheywood Dec 13, 2023
9e2d0cd
Remove checks where mpi could be nullptr since the removal of config-…
ptheywood Dec 14, 2023
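Two of the changes listed above benefit from a short illustration. Commit 0828a43 is the breaking API change: CUDAEnsemble::getLogs() now returns a map keyed by run-plan index rather than a dense list, because under MPI each rank only holds the logs for the runs it executed. A minimal consumer sketch (the map type is taken from the example diff further below; the loop body is illustrative only):

    // Iterate only the runs this process actually completed.
    const std::map<unsigned int, flamegpu::RunLog> &logs = cuda_ensemble.getLogs();
    for (const auto &[plan_index, log] : logs) {
        const flamegpu::ExitLogFrame &exit_log = log.getExitLog();
        // ... read logged environment properties / reductions per run ...
    }

Commits 574f108, 44eb064 and 0fd20d3 concern discovering the per-node ("shared memory") rank and size, so GPUs can be divided between the MPI ranks sharing a node. A hypothetical sketch of the standard MPI 3.0 idiom for this (not code taken from this PR):

    // Split the world communicator into one group per physical node.
    int world_rank = 0, local_rank = 0, local_size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm local_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, world_rank, MPI_INFO_NULL, &local_comm);
    MPI_Comm_rank(local_comm, &local_rank);  // this rank's index within its node
    MPI_Comm_size(local_comm, &local_size);  // number of ranks sharing this node
    // e.g. a rank could then claim GPU (local_rank % visible_device_count)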
261 changes: 261 additions & 0 deletions .github/workflows/MPI.yml
@@ -0,0 +1,261 @@
# Perform builds with supported MPI versions
name: MPI

on:
  # On pull_requests which mutate this CI workflow, using pr rather than push:branch to ensure that the merged state will be OK, as that is what is important here.
  pull_request:
    paths:
      - ".github/workflows/MPI.yml"
      - "tests/test_cases/simulation/test_mpi_ensemble.cu"
      - "include/flamegpu/simulation/detail/MPISimRunner.h"
      - "include/flamegpu/simulation/detail/AbstractSimRunner.h"
      - "src/flamegpu/simulation/detail/MPISimRunner.cu"
      - "src/flamegpu/simulation/detail/AbstractSimRunner.cu"
      - "src/flamegpu/simulation/CUDAEnsemble.cu"
      - "**mpi**"
      - "**MPI**"
  # Or trigger on manual dispatch.
  workflow_dispatch:

defaults:
  run:
    # Default to using bash regardless of OS unless otherwise specified.
    shell: bash

jobs:
  build-ubuntu:
    runs-on: ${{ matrix.cudacxx.os }}
    strategy:
      fail-fast: false
      # Multiplicative build matrix
      # optional exclude: can be partial, include: must be specific
      matrix:
        # CUDA_ARCH values are reduced compared to wheels due to CI memory issues while compiling the test suite.
        cudacxx:
          - cuda: "12.0"
            cuda_arch: "50-real;"
            hostcxx: gcc-11
            os: ubuntu-22.04
        python:
          - "3.8"
        mpi:
          - lib: "openmpi"
            version: "apt" # MPI 3.1
          # - lib: "openmpi"
          #   version: "4.1.6" # MPI 3.1
          # - lib: "openmpi"
          #   version: "4.0.0" # MPI 3.1
          # - lib: "openmpi"
          #   version: "3.0.0" # MPI 3.1
          # - lib: "openmpi"
          #   version: "2.0.0" # MPI 3.1
          - lib: "openmpi"
            version: "1.10.7" # MPI 3.0
          - lib: "mpich"
            version: "apt" # MPI 4.0
          # - lib: "mpich"
          #   version: "4.1.2" # MPI 4.0
          # - lib: "mpich"
          #   version: "4.0" # MPI 4.0
          # - lib: "mpich"
          #   version: "3.4.3" # MPI 3.1
          - lib: "mpich"
            version: "3.3" # MPI 3.1
        config:
          - name: "Release"
            config: "Release"
            SEATBELTS: "ON"
        VISUALISATION:
          - "OFF"

    # Name the job based on matrix/env options
    name: "build-ubuntu-mpi (${{ matrix.mpi.lib }}, ${{ matrix.mpi.version }}, ${{ matrix.cudacxx.cuda }}, ${{matrix.python}}, ${{ matrix.VISUALISATION }}, ${{ matrix.config.name }}, ${{ matrix.cudacxx.os }})"

    # Define job-wide env constants, and promote matrix elements to env constants for portable steps.
    env:
      # Define constants
      BUILD_DIR: "build"
      FLAMEGPU_BUILD_TESTS: "ON"
      # Conditional based on matrix via awkward almost ternary
      FLAMEGPU_BUILD_PYTHON: ${{ fromJSON('{true:"ON",false:"OFF"}')[matrix.python != ''] }}
      # Port matrix options to environment, for more portability.
      CUDA: ${{ matrix.cudacxx.cuda }}
      CUDA_ARCH: ${{ matrix.cudacxx.cuda_arch }}
      HOSTCXX: ${{ matrix.cudacxx.hostcxx }}
      OS: ${{ matrix.cudacxx.os }}
      CONFIG: ${{ matrix.config.config }}
      FLAMEGPU_SEATBELTS: ${{ matrix.config.SEATBELTS }}
      PYTHON: ${{ matrix.python }}
      MPI_LIB: ${{ matrix.mpi.lib }}
      MPI_VERSION: ${{ matrix.mpi.version }}
      VISUALISATION: ${{ matrix.VISUALISATION }}

    steps:
      - uses: actions/checkout@v3

      - name: Install CUDA
        if: ${{ startswith(env.OS, 'ubuntu') && env.CUDA != '' }}
        env:
          cuda: ${{ env.CUDA }}
        run: .github/scripts/install_cuda_ubuntu.sh

      - name: Install/Select gcc and g++
        if: ${{ startsWith(env.HOSTCXX, 'gcc-') }}
        run: |
          gcc_version=${HOSTCXX//gcc-/}
          sudo apt-get install -y gcc-${gcc_version} g++-${gcc_version}
          echo "CC=/usr/bin/gcc-${gcc_version}" >> $GITHUB_ENV
          echo "CXX=/usr/bin/g++-${gcc_version}" >> $GITHUB_ENV
          echo "CUDAHOSTCXX=/usr/bin/g++-${gcc_version}" >> $GITHUB_ENV

      - name: Install MPI from apt
        if: ${{ env.MPI_VERSION == 'apt' }}
        working-directory: ${{ runner.temp }}
        run: |
          sudo apt-get install lib${{ env.MPI_LIB }}-dev

      - name: Install OpenMPI from source
        if: ${{ env.MPI_VERSION != 'apt' && env.MPI_LIB == 'openmpi' }}
        working-directory: ${{ runner.temp }}
        run: |
          # Note: using download.open-mpi.org as gh tags aren't pre configured
          MPI_VERISON_MAJOR_MINOR=$(cut -d '.' -f 1,2 <<< "${{ env.MPI_VERSION}}")
          echo "https://download.open-mpi.org/release/open-mpi/v${MPI_VERISON_MAJOR_MINOR}/openmpi-${{ env.MPI_VERSION}}.tar.gz"
          wget -q https://download.open-mpi.org/release/open-mpi/v${MPI_VERISON_MAJOR_MINOR}/openmpi-${{ env.MPI_VERSION}}.tar.gz --output-document openmpi-${{ env.MPI_VERSION }}.tar.gz || (echo "An Error occurred while downloading OpenMPI '${{ env.MPI_VERSION }}'. Is it a valid version of OpenMPI?" && exit 1)
          tar -zxvf openmpi-${{ env.MPI_VERSION }}.tar.gz
          cd openmpi-${{ env.MPI_VERSION}}
          ./configure --prefix="${{ runner.temp }}/mpi"
          make -j `nproc`
          make install -j `nproc`
          echo "${{ runner.temp }}/mpi/bin" >> $GITHUB_PATH
          echo "LD_LIBRARY_PATH=${{ runner.temp }}/mpi/lib:${LD_LIBRARY_PATH}" >> $GITHUB_ENV
          echo "LD_RUN_PATH=${{ runner.temp }}/mpi/lib:${LD_RUN_PATH}" >> $GITHUB_ENV

      # This will only work for mpich >= 3.3:
      # 3.0-3.2 doesn't appear compatible with default gcc in 22.04.
      # 1.x is named mpich2 so requires handling differently
      # Uses the ch3 interface, as ch4 isn't available pre 3.4, but one must be specified for some versions
      - name: Install MPICH from source
        if: ${{ env.MPI_VERSION != 'apt' && env.MPI_LIB == 'mpich' }}
        working-directory: ${{ runner.temp }}
        run: |
          MPI_MAJOR=$(cut -d '.' -f 1 <<< "${{ env.MPI_VERSION}}")
          MPI_MINOR=$(cut -d '.' -f 2 <<< "${{ env.MPI_VERSION}}")
          [[ ${MPI_MAJOR} < 3 ]] && echo "MPICH must be >= 3.0" && exit 1
          echo "https://www.mpich.org/static/downloads/${{ env.MPI_VERSION }}/mpich-${{ env.MPI_VERSION}}.tar.gz"
          wget -q https://www.mpich.org/static/downloads/${{ env.MPI_VERSION }}/mpich-${{ env.MPI_VERSION}}.tar.gz --output-document mpich-${{ env.MPI_VERSION }}.tar.gz || (echo "An Error occurred while downloading MPICH '${{ env.MPI_VERSION }}'. Is it a valid version of MPICH?" && exit 1)
          tar -zxvf mpich-${{ env.MPI_VERSION }}.tar.gz
          cd mpich-${{ env.MPI_VERSION}}
          DISABLE_FORTRAN_FLAGS=""
          if (( ${MPI_MAJOR} >= 4 )) || ( ((${MPI_MAJOR} >= 3)) && ((${MPI_MINOR} >= 2)) ); then
            # MPICH >= 3.2 has --disable-fortran
            DISABLE_FORTRAN_FLAGS="--disable-fortran"
          else
            DISABLE_FORTRAN_FLAGS="--disable-f77 --disable-fc"
          fi
          ./configure --prefix="${{ runner.temp }}/mpi" --with-device=ch3 ${DISABLE_FORTRAN_FLAGS}
          make -j `nproc`
          make install -j `nproc`
          echo "${{ runner.temp }}/mpi/bin" >> $GITHUB_PATH
          echo "LD_LIBRARY_PATH=${{ runner.temp }}/mpi/lib:${LD_LIBRARY_PATH}" >> $GITHUB_ENV
          echo "LD_RUN_PATH=${{ runner.temp }}/mpi/lib:${LD_RUN_PATH}" >> $GITHUB_ENV

      - name: Select Python
        if: ${{ env.PYTHON != '' && env.FLAMEGPU_BUILD_PYTHON == 'ON' }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ env.PYTHON }}

      - name: Install python dependencies
        if: ${{ env.PYTHON != '' && env.FLAMEGPU_BUILD_PYTHON == 'ON' }}
        run: |
          sudo apt-get install python3-venv
          python3 -m pip install --upgrade wheel build setuptools

      - name: Install Visualisation Dependencies
        if: ${{ startswith(env.OS, 'ubuntu') && env.VISUALISATION == 'ON' }}
        run: |
          # Install ubuntu-20.04 packages
          if [ "$OS" == 'ubuntu-20.04' ]; then
            sudo apt-get install -y libglew-dev libfontconfig1-dev libsdl2-dev libdevil-dev libfreetype-dev
          fi
          # Install Ubuntu 18.04 packages
          if [ "$OS" == 'ubuntu-18.04' ]; then
            sudo apt-get install -y libglew-dev libfontconfig1-dev libsdl2-dev libdevil-dev libfreetype6-dev libgl1-mesa-dev
          fi

      - name: Install Swig >= 4.0.2
        run: |
          # Remove existing swig install, so CMake finds the correct swig
          if [ "$OS" == 'ubuntu-20.04' ]; then
            sudo apt-get remove -y swig swig4.0
          fi
          # Remove Ubuntu 18.04 packages
          if [ "$OS" == 'ubuntu-18.04' ]; then
            sudo apt-get remove -y swig
          fi
          # Install additional apt-based dependencies required to build swig 4.0.2
          sudo apt-get install -y bison
          # Create a local directory to build swig in.
          mkdir -p swig-from-source && cd swig-from-source
          # Download and build SWIG from source
          wget https://github.com/swig/swig/archive/refs/tags/v4.0.2.tar.gz
          tar -zxf v4.0.2.tar.gz
          cd swig-4.0.2/
          ./autogen.sh
          ./configure
          make
          sudo make install

      # This pre-emptively patches a bug where ManyLinux didn't generate buildnumber as git dir was owned by diff user
      - name: Enable git safe-directory
        run: git config --global --add safe.directory $GITHUB_WORKSPACE

      - name: Configure cmake
        run: >
          cmake . -B "${{ env.BUILD_DIR }}"
          -DCMAKE_BUILD_TYPE="${{ env.CONFIG }}"
          -Werror=dev
          -DCMAKE_WARN_DEPRECATED="OFF"
          -DFLAMEGPU_WARNINGS_AS_ERRORS="ON"
          -DCMAKE_CUDA_ARCHITECTURES="${{ env.CUDA_ARCH }}"
          -DFLAMEGPU_BUILD_TESTS="${{ env.FLAMEGPU_BUILD_TESTS }}"
          -DFLAMEGPU_BUILD_PYTHON="${{ env.FLAMEGPU_BUILD_PYTHON }}"
          -DPYTHON3_EXACT_VERSION="${{ env.PYTHON }}"
          -DFLAMEGPU_VISUALISATION="${{ env.VISUALISATION }}"
          -DFLAMEGPU_ENABLE_MPI="ON"
          -DFLAMEGPU_ENABLE_NVTX="ON"
          ${MPI_OVERRIDE_CXX_OPTIONS}

      - name: Reconfigure cmake fixing MPICH from apt
        if: ${{ env.MPI_VERSION == 'apt' && env.MPI_LIB == 'mpich' }}
        run: >
          cmake . -B "${{ env.BUILD_DIR }}"
          -DMPI_CXX_COMPILE_OPTIONS=""

      - name: Build static library
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target flamegpu --verbose -j `nproc`

      - name: Build python wheel
        if: ${{ env.FLAMEGPU_BUILD_PYTHON == 'ON' }}
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target pyflamegpu --verbose -j `nproc`

      - name: Build tests
        if: ${{ env.FLAMEGPU_BUILD_TESTS == 'ON' }}
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target tests --verbose -j `nproc`

      - name: Build tests_mpi
        if: ${{ env.FLAMEGPU_BUILD_TESTS == 'ON' }}
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target tests_mpi --verbose -j `nproc`

      - name: Build ensemble example
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target ensemble --verbose -j `nproc`

      - name: Build all remaining targets
        working-directory: ${{ env.BUILD_DIR }}
        run: cmake --build . --target all --verbose -j `nproc`
4 changes: 4 additions & 0 deletions README.md
@@ -78,6 +78,9 @@ Optionally:
   + With `setuptools`, `wheel`, `build` and optionally `venv` python packages installed
 + [swig](http://www.swig.org/) `>= 4.0.2` for python integration
   + Swig `4.x` will be automatically downloaded by CMake if not provided (if possible).
++ MPI (e.g. [MPICH](https://www.mpich.org/), [OpenMPI](https://www.open-mpi.org/)) for distributed ensemble support
+  + MPI 3.0+ is tested; older MPI versions may work but are untested.
+  + CMake `>= 3.20.1` may be required for some MPI libraries / platforms.
 + [FLAMEGPU2-visualiser](https://github.com/FLAMEGPU/FLAMEGPU2-visualiser) dependencies
   + [SDL](https://www.libsdl.org/)
   + [GLM](http://glm.g-truc.net/) *(consistent C++/GLSL vector maths functionality)*
@@ -176,6 +179,7 @@ cmake --build . --target all
 | `FLAMEGPU_VERBOSE_PTXAS` | `ON`/`OFF` | Enable verbose PTXAS output during compilation. Default `OFF`. |
 | `FLAMEGPU_CURAND_ENGINE` | `XORWOW` / `PHILOX` / `MRG` | Select the CUDA random engine. Default `XORWOW` |
 | `FLAMEGPU_ENABLE_GLM` | `ON`/`OFF` | Experimental feature for GLM type support within models. Default `OFF`. |
+| `FLAMEGPU_ENABLE_MPI` | `ON`/`OFF` | Enable MPI support for distributed CUDAEnsembles; each MPI worker should have exclusive access to its GPUs, e.g. 1 MPI worker per node. Default `OFF`. |
 | `FLAMEGPU_ENABLE_ADVANCED_API` | `ON`/`OFF` | Enable advanced API functionality (C++ only), providing access to internal sim components for high-performance extensions. No stability guarantees are provided around this interface and the returned objects. Documentation is limited to that found in the source. Default `OFF`. |
 | `FLAMEGPU_SHARE_USAGE_STATISTICS` | `ON`/`OFF` | Share usage statistics ([telemetry](https://docs.flamegpu.com/guide/telemetry)) to support evidencing usage/impact of the software. Default `ON`. |
 | `FLAMEGPU_TELEMETRY_SUPPRESS_NOTICE` | `ON`/`OFF` | Suppress notice encouraging telemetry to be enabled, which is emitted once per binary execution if telemetry is disabled. Defaults to `OFF`, or the value of a system environment variable of the same name. |
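Taken together with the new README text, the intended usage pattern is the one the example diff below demonstrates. A condensed sketch, assuming the library was built with FLAMEGPU_ENABLE_MPI=ON and the binary is launched through an MPI launcher (e.g. mpirun with one rank per node, as the table row above advises):

    flamegpu::ModelDescription model("example");
    // ... describe the model and build a RunPlanVector `runs` ...
    flamegpu::CUDAEnsemble cuda_ensemble(model, argc, argv);  // MPI is initialised internally when enabled
    cuda_ensemble.simulate(runs);  // runs are distributed across the participating ranks
    if (cuda_ensemble.Config().mpi) {
        // Each rank only receives logs for its own completed runs.
        printf("This rank completed %zu runs.\n", cuda_ensemble.getLogs().size());
    }
    flamegpu::util::cleanup();  // per commit 449908d, MPI_Finalize() now happens here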
51 changes: 35 additions & 16 deletions examples/cpp/ensemble/src/main.cu
@@ -17,8 +17,7 @@ FLAMEGPU_INIT_FUNCTION(Init) {
 std::atomic<unsigned int> atomic_init = {0};
 std::atomic<uint64_t> atomic_result = {0};
 FLAMEGPU_EXIT_FUNCTION(Exit) {
-    atomic_init += FLAMEGPU->environment.getProperty<int>("init");
-    atomic_result += FLAMEGPU->agent("Agent").sum<int>("x");
+    FLAMEGPU->environment.setProperty<int>("result", FLAMEGPU->agent("Agent").sum<int>("x"));
 }
 int main(int argc, const char ** argv) {
     flamegpu::ModelDescription model("boids_spatial3D");
@@ -35,6 +34,7 @@ int main(int argc, const char ** argv) {
         env.newProperty<int>("init", 0);
         env.newProperty<int>("init_offset", 0);
         env.newProperty<int>("offset", 1);
+        env.newProperty<int>("result", 0);
     }
     { // Agent
         flamegpu::AgentDescription agent = model.newAgent("Agent");
@@ -64,27 +64,46 @@ int main(int argc, const char ** argv) {
         runs.setPropertyLerpRange<int>("init_offset", 1, 0);
         runs.setPropertyLerpRange<int>("offset", 0, 99);
     }
 
+    /**
+     * Create a logging config
+     */
+    flamegpu::LoggingConfig exit_log_cfg(model);
+    exit_log_cfg.logEnvironment("init");
+    exit_log_cfg.logEnvironment("result");
     /**
      * Create Model Runner
      */
     flamegpu::CUDAEnsemble cuda_ensemble(model, argc, argv);
 
+    cuda_ensemble.setExitLog(exit_log_cfg);
     cuda_ensemble.simulate(runs);
 
-    // Check result
-    // Don't currently have logging
-    unsigned int init_sum = 0;
-    uint64_t result_sum = 0;
-    for (int i = 0 ; i < 100; ++i) {
-        const int init = i/10;
-        const int init_offset = 1 - i/50;
-        init_sum += init;
-        result_sum += POPULATION_TO_GENERATE * init + init_offset * ((POPULATION_TO_GENERATE-1)*POPULATION_TO_GENERATE/2);  // Initial agent values
-        result_sum += POPULATION_TO_GENERATE * STEPS * i;  // Agent values added by steps
+    /**
+     * Check result for each log
+     */
+    const std::map<unsigned int, flamegpu::RunLog> &logs = cuda_ensemble.getLogs();
+    if (!cuda_ensemble.Config().mpi || logs.size() > 0) {
+        unsigned int init_sum = 0, expected_init_sum = 0;
+        uint64_t result_sum = 0, expected_result_sum = 0;
+
+        for (const auto &[i, log] : logs) {
+            const int init = i/10;
+            const int init_offset = 1 - i/50;
+            expected_init_sum += init;
+            expected_result_sum += POPULATION_TO_GENERATE * init + init_offset * ((POPULATION_TO_GENERATE-1)*POPULATION_TO_GENERATE/2);  // Initial agent values
+            expected_result_sum += POPULATION_TO_GENERATE * STEPS * i;  // Agent values added by steps
+            const flamegpu::ExitLogFrame &exit_log = log.getExitLog();
+            init_sum += exit_log.getEnvironmentProperty<int>("init");
+            result_sum += exit_log.getEnvironmentProperty<int>("result");
+        }
+        printf("Ensemble init: %u, calculated init %u\n", expected_init_sum, init_sum);
+        printf("Ensemble result: %zu, calculated result %zu\n", expected_result_sum, result_sum);
     }
+    /**
+     * Report if MPI was enabled
+     */
+    if (cuda_ensemble.Config().mpi) {
+        printf("Local MPI runner completed %u/%u runs.\n", static_cast<unsigned int>(logs.size()), static_cast<unsigned int>(runs.size()));
+    }
-    printf("Ensemble init: %u, calculated init %u\n", atomic_init.load(), init_sum);
-    printf("Ensemble result: %zu, calculated result %zu\n", atomic_result.load(), result_sum);
 
     // Ensure profiling / memcheck work correctly
     flamegpu::util::cleanup();
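One closing note on the example's checksum, since the expected values are computed rather than hard-coded. The breakdown below is an inference from the check code above, not something stated elsewhere in the PR, and the small numbers are invented for illustration:

    // Expected totals for run plan i (inferred from the check above):
    //   initial values: POPULATION_TO_GENERATE * init
    //                   + init_offset * ((POPULATION_TO_GENERATE-1)*POPULATION_TO_GENERATE/2)
    //     where init = i/10 and init_offset = 1 - i/50 (integer division), and the
    //     second term is the Gauss sum of per-agent offsets 0..N-1, i.e. N*(N-1)/2.
    //   step growth:    POPULATION_TO_GENERATE * STEPS * i   // each step adds i to every agent
    // Worked instance with invented values N = 3, STEPS = 2, i = 5 (so init = 0, init_offset = 1):
    //   initial sum = 3*0 + 1*(3*2/2) = 3   (agents start at 0, 1, 2)
    //   step growth = 3*2*5 = 30, so the expected total for this run is 33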