
Core MPI Communication


Overview

MPI communication in Cabana is provided through several high-level algorithms and their associated communication plans for migrating data and performing halo exchanges. When GPU data is communicated, a GPU-aware MPI implementation is required, as Cabana uses communication buffers allocated directly in GPU memory.

General Concepts

  • MPI Communicators
  • Neighbors
  • Import
  • Export
  • Forward Communication Plan
  • Inverse Communication Plan

Migration is the movement of data via MPI from one uniquely owned decomposition to another. It is a broadly used communication paradigm with applications including load balancing and particle redistribution. In Cabana, the communication plan for migration is encapsulated in the Distributor class.
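A minimal sketch of how a Distributor might be used follows. The member types, particle count, and next-rank destination are illustrative assumptions, not part of this page; a real code would compute the export ranks from its new decomposition (for example, from a load balancer). Each owned element is assigned a destination rank, the Distributor builds the communication plan from those export ranks, and migrate() moves the data to its new owners.

```cpp
#include <Cabana_Core.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    MPI_Init( &argc, &argv );
    Kokkos::initialize( argc, argv );
    {
        int comm_rank, comm_size;
        MPI_Comm_rank( MPI_COMM_WORLD, &comm_rank );
        MPI_Comm_size( MPI_COMM_WORLD, &comm_size );

        // Host device type for simplicity; a GPU device type would require
        // GPU-aware MPI as noted above.
        using ExecutionSpace = Kokkos::DefaultHostExecutionSpace;
        using DeviceType = Kokkos::Device<ExecutionSpace, ExecutionSpace::memory_space>;

        // Uniquely owned particles: a position and an id (illustrative members).
        using DataTypes = Cabana::MemberTypes<double[3], int>;
        const int num_local = 100;
        Cabana::AoSoA<DataTypes, DeviceType> particles( "particles", num_local );

        // Export ranks: the destination rank of every locally owned particle.
        // Here each particle is simply sent to the next rank (illustrative only).
        Kokkos::View<int*, DeviceType> export_ranks( "export_ranks", num_local );
        Kokkos::deep_copy( export_ranks, ( comm_rank + 1 ) % comm_size );

        // Build the communication plan and move the data to its new owners.
        Cabana::Distributor<DeviceType> distributor( MPI_COMM_WORLD, export_ranks );
        Cabana::migrate( distributor, particles );
    }
    Kokkos::finalize();
    MPI_Finalize();
    return 0;
}
```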

A halo manages data that is uniquely owned by one rank but shared on potentially many other ranks as ghost data. Halo exchange appears in many codes and algorithms that employ domain decomposition, including particle halo exchange in MD and SPH codes and grid halo exchange in mesh-based PDE codes. In Cabana, the communication plan for halo exchange is encapsulated in the Halo class.
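A minimal sketch of how a Halo might be used follows. The member types, counts, and choice of which particles to ghost are illustrative assumptions. Export ids and ranks identify which owned elements become ghosts on which ranks; gather() pushes owned values out to the ghost copies (the forward plan), and scatter() sums contributions accumulated on ghosts back into the owning rank's values (the inverse plan).

```cpp
#include <Cabana_Core.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    MPI_Init( &argc, &argv );
    Kokkos::initialize( argc, argv );
    {
        int comm_rank, comm_size;
        MPI_Comm_rank( MPI_COMM_WORLD, &comm_rank );
        MPI_Comm_size( MPI_COMM_WORLD, &comm_size );

        using ExecutionSpace = Kokkos::DefaultHostExecutionSpace;
        using DeviceType = Kokkos::Device<ExecutionSpace, ExecutionSpace::memory_space>;

        // Uniquely owned particles: a position and a scalar (illustrative members).
        using DataTypes = Cabana::MemberTypes<double[3], double>;
        const int num_local = 100;
        Cabana::AoSoA<DataTypes, DeviceType> particles( "particles", num_local );

        // Owned particles to ghost on other ranks: local ids plus the rank that
        // receives each ghost copy. Here the first 10 particles are ghosted on
        // the next rank (illustrative only); a real code would select the
        // particles overlapping neighboring subdomains.
        const int num_send = 10;
        Kokkos::View<int*, DeviceType> export_ids( "export_ids", num_send );
        Kokkos::View<int*, DeviceType> export_ranks( "export_ranks", num_send );
        for ( int i = 0; i < num_send; ++i )
        {
            export_ids( i ) = i;
            export_ranks( i ) = ( comm_rank + 1 ) % comm_size;
        }

        // Build the halo and make room for the incoming ghosts.
        Cabana::Halo<DeviceType> halo( MPI_COMM_WORLD, num_local, export_ids,
                                       export_ranks );
        particles.resize( halo.numLocal() + halo.numGhost() );

        // Forward plan: gather owned values into the ghost copies.
        Cabana::gather( halo, particles );

        // Inverse plan: sum ghost contributions back into the owned values.
        auto scalar = Cabana::slice<1>( particles );
        Cabana::scatter( halo, scalar );
    }
    Kokkos::finalize();
    MPI_Finalize();
    return 0;
}
```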
