Ginkgo 1.7.0
A numerical linear algebra library targeting many-core architectures
gko::experimental::mpi Namespace Reference

The mpi namespace contains wrappers for many MPI functions. More...

Classes

class  communicator
 A thin wrapper of MPI_Comm that supports most MPI calls. More...
 
class  contiguous_type
 A move-only wrapper for a contiguous MPI_Datatype. More...
 
class  environment
 Class that sets up and finalizes the MPI environment. More...
 
class  request
 The request class is a light, move-only wrapper around the MPI_Request handle. More...
 
struct  status
 The status struct is a light wrapper around the MPI_Status struct. More...
 
struct  type_impl
 A struct that is used to determine the MPI_Datatype of a specified type. More...
 
struct  type_impl< char >
 
struct  type_impl< double >
 
struct  type_impl< float >
 
struct  type_impl< int >
 
struct  type_impl< long >
 
struct  type_impl< long double >
 
struct  type_impl< long long >
 
struct  type_impl< std::complex< double > >
 
struct  type_impl< std::complex< float > >
 
struct  type_impl< unsigned >
 
struct  type_impl< unsigned char >
 
struct  type_impl< unsigned long >
 
struct  type_impl< unsigned long long >
 
struct  type_impl< unsigned short >
 
class  window
 This class wraps the MPI_Win handle with RAII functionality. More...
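
These wrappers are typically used together: environment manages MPI initialization and finalization, and communicator wraps an existing MPI_Comm. A minimal sketch, assuming Ginkgo was built with MPI support and that communicator exposes rank() and size() (see the individual class pages for the exact signatures):

    #include <ginkgo/ginkgo.hpp>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        namespace mpi = gko::experimental::mpi;
        // environment initializes MPI on construction and finalizes it on
        // destruction (RAII).
        mpi::environment env(argc, argv);
        // communicator is a thin wrapper around an existing MPI_Comm.
        mpi::communicator comm(MPI_COMM_WORLD);
        std::cout << "rank " << comm.rank() << " of " << comm.size() << "\n";
    }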
 

Enumerations

enum  thread_type { serialized = MPI_THREAD_SERIALIZED, funneled = MPI_THREAD_FUNNELED, single = MPI_THREAD_SINGLE, multiple = MPI_THREAD_MULTIPLE }
 This enum specifies the threading type to be used when creating an MPI environment.
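
For example, a threading level can be requested when the MPI environment is created; a sketch, assuming an environment constructor overload that accepts a thread_type (see the environment class page for the exact signature):

    #include <ginkgo/ginkgo.hpp>

    int main(int argc, char* argv[])
    {
        namespace mpi = gko::experimental::mpi;
        // Assumed overload: request MPI_THREAD_MULTIPLE at initialization.
        mpi::environment env(argc, argv, mpi::thread_type::multiple);
    }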
 

Functions

constexpr bool is_gpu_aware ()
 Returns true if GPU-aware functionality is available.
 
int map_rank_to_device_id (MPI_Comm comm, int num_devices)
 Maps each MPI rank to a single device id in a round-robin manner. More...
 
std::vector< status > wait_all (std::vector< request > &req)
 Allows a rank to wait on multiple request handles. More...
 
bool requires_host_buffer (const std::shared_ptr< const Executor > &exec, const communicator &comm)
 Checks whether the combination of Executor and communicator requires passing MPI buffers from host memory.
 
double get_walltime ()
 Gets the elapsed wall-clock time on the calling process. More...
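
As an illustration, is_gpu_aware and requires_host_buffer can be combined to decide whether communication data has to be staged through host memory; a sketch with a generic executor parameter:

    #include <ginkgo/ginkgo.hpp>
    #include <iostream>
    #include <memory>

    void report_buffer_requirements(std::shared_ptr<const gko::Executor> exec)
    {
        namespace mpi = gko::experimental::mpi;
        mpi::communicator comm(MPI_COMM_WORLD);
        std::cout << "GPU-aware MPI available: " << mpi::is_gpu_aware() << "\n";
        // If true, device-side data must be copied into host buffers before
        // being handed to MPI calls.
        std::cout << "host buffers required:   "
                  << mpi::requires_host_buffer(exec, comm) << "\n";
    }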
 

Detailed Description

The mpi namespace contains wrappers for many MPI functions.

Function Documentation

◆ get_walltime()

double gko::experimental::mpi::get_walltime ( )
inline

Gets the elapsed wall-clock time on the calling process.

Returns
the wall-clock time in seconds.
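
A typical use is timing a region of code (a minimal sketch):

    #include <ginkgo/ginkgo.hpp>
    #include <functional>

    double time_region(const std::function<void()>& work)
    {
        namespace mpi = gko::experimental::mpi;
        const auto start = mpi::get_walltime();
        work();                              // region being timed
        return mpi::get_walltime() - start;  // elapsed time in seconds
    }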

◆ map_rank_to_device_id()

int gko::experimental::mpi::map_rank_to_device_id ( MPI_Comm comm, int num_devices )

Maps each MPI rank to a single device id in a round-robin manner.

Parameters
comm  used to determine the node-local rank, if no suitable environment variable is available.
num_devices  the number of devices per node.
Returns
device id that this rank should use.
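
For example, the returned id can be used to create one device executor per rank; a sketch assuming a CUDA build, where the device count query is illustrative:

    #include <ginkgo/ginkgo.hpp>
    #include <cuda_runtime.h>
    #include <memory>

    std::shared_ptr<gko::Executor> create_rank_local_executor()
    {
        namespace mpi = gko::experimental::mpi;
        int num_devices = 0;
        cudaGetDeviceCount(&num_devices);  // devices visible on this node
        const auto device_id =
            mpi::map_rank_to_device_id(MPI_COMM_WORLD, num_devices);
        return gko::CudaExecutor::create(device_id,
                                         gko::ReferenceExecutor::create());
    }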

◆ wait_all()

std::vector<status> gko::experimental::mpi::wait_all ( std::vector< request > &  req)
inline

Allows a rank to wait on multiple request handles.

Parameters
req  The vector of request handles to be waited on.
Returns
The vector of status objects that can be queried.
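
A sketch of collecting several non-blocking requests and waiting on all of them; it assumes communicator provides non-blocking i_send/i_recv members that return request objects (consult the communicator class reference for the exact signatures):

    #include <ginkgo/ginkgo.hpp>
    #include <memory>
    #include <vector>

    void exchange(std::shared_ptr<const gko::Executor> exec,
                  gko::experimental::mpi::communicator comm,
                  const double* send_buf, double* recv_buf, int count,
                  int neighbor_rank)
    {
        namespace mpi = gko::experimental::mpi;
        std::vector<mpi::request> reqs;
        // Assumed non-blocking members returning request handles.
        reqs.push_back(comm.i_send(exec, send_buf, count, neighbor_rank, 0));
        reqs.push_back(comm.i_recv(exec, recv_buf, count, neighbor_rank, 0));
        // Block until every outstanding request has completed; one status
        // object is returned per request and can be queried afterwards.
        auto statuses = mpi::wait_all(reqs);
    }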