Ginkgo 1.8.0
A numerical linear algebra library targeting many-core architectures
gko::experimental::mpi::communicator Class Reference

A thin wrapper of MPI_Comm that supports most MPI calls. More...

#include <ginkgo/core/base/mpi.hpp>

Public Member Functions

 communicator (const MPI_Comm &comm, bool force_host_buffer=false)
 Non-owning constructor for an existing communicator of type MPI_Comm. More...
 
 communicator (const MPI_Comm &comm, int color, int key)
 Create a communicator object from an existing MPI_Comm object using color and key. More...
 
 communicator (const communicator &comm, int color, int key)
 Create a communicator object from an existing communicator using color and key. More...
 
const MPI_Comm & get () const
 Return the underlying MPI_Comm object. More...
 
bool force_host_buffer () const
 
int size () const
 Return the size of the communicator (number of ranks). More...
 
int rank () const
 Return the rank of the calling process in the communicator. More...
 
int node_local_rank () const
 Return the node local rank of the calling process in the communicator. More...
 
bool operator== (const communicator &rhs) const
 Compare two communicator objects for equality. More...
 
bool operator!= (const communicator &rhs) const
 Compare two communicator objects for non-equality. More...
 
void synchronize () const
 This function is used to synchronize the ranks in the communicator. More...
 
template<typename SendType >
void send (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, const int destination_rank, const int send_tag) const
 Send (Blocking) data from calling process to destination rank. More...
 
template<typename SendType >
request i_send (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, const int destination_rank, const int send_tag) const
 Send (Non-blocking, Immediate return) data from calling process to destination rank. More...
 
template<typename RecvType >
status recv (std::shared_ptr< const Executor > exec, RecvType *recv_buffer, const int recv_count, const int source_rank, const int recv_tag) const
 Receive data from source rank. More...
 
template<typename RecvType >
request i_recv (std::shared_ptr< const Executor > exec, RecvType *recv_buffer, const int recv_count, const int source_rank, const int recv_tag) const
 Receive (Non-blocking, Immediate return) data from source rank. More...
 
template<typename BroadcastType >
void broadcast (std::shared_ptr< const Executor > exec, BroadcastType *buffer, int count, int root_rank) const
 Broadcast data from calling process to all ranks in the communicator. More...
 
template<typename BroadcastType >
request i_broadcast (std::shared_ptr< const Executor > exec, BroadcastType *buffer, int count, int root_rank) const
 (Non-blocking) Broadcast data from calling process to all ranks in the communicator. More...
 
template<typename ReduceType >
void reduce (std::shared_ptr< const Executor > exec, const ReduceType *send_buffer, ReduceType *recv_buffer, int count, MPI_Op operation, int root_rank) const
 Reduce data into root from all calling processes on the same communicator. More...
 
template<typename ReduceType >
request i_reduce (std::shared_ptr< const Executor > exec, const ReduceType *send_buffer, ReduceType *recv_buffer, int count, MPI_Op operation, int root_rank) const
 (Non-blocking) Reduce data into root from all calling processes on the same communicator. More...
 
template<typename ReduceType >
void all_reduce (std::shared_ptr< const Executor > exec, ReduceType *recv_buffer, int count, MPI_Op operation) const
 (In-place) Reduce data from all calling processes on the same communicator. More...
 
template<typename ReduceType >
request i_all_reduce (std::shared_ptr< const Executor > exec, ReduceType *recv_buffer, int count, MPI_Op operation) const
 (In-place, non-blocking) Reduce data from all calling processes on the same communicator. More...
 
template<typename ReduceType >
void all_reduce (std::shared_ptr< const Executor > exec, const ReduceType *send_buffer, ReduceType *recv_buffer, int count, MPI_Op operation) const
 Reduce data from all calling processes on the same communicator. More...
 
template<typename ReduceType >
request i_all_reduce (std::shared_ptr< const Executor > exec, const ReduceType *send_buffer, ReduceType *recv_buffer, int count, MPI_Op operation) const
 (Non-blocking) Reduce data from all calling processes on the same communicator. More...
 
template<typename SendType , typename RecvType >
void gather (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count, int root_rank) const
 Gather data onto the root rank from all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
request i_gather (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count, int root_rank) const
 (Non-blocking) Gather data onto the root rank from all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
void gather_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int *recv_counts, const int *displacements, int root_rank) const
 Gather data onto the root rank from all ranks in the communicator with offsets. More...
 
template<typename SendType , typename RecvType >
request i_gather_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int *recv_counts, const int *displacements, int root_rank) const
 (Non-blocking) Gather data onto the root rank from all ranks in the communicator with offsets. More...
 
template<typename SendType , typename RecvType >
void all_gather (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count) const
 Gather data onto all ranks from all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
request i_all_gather (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count) const
 (Non-blocking) Gather data onto all ranks from all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
void scatter (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count, int root_rank) const
 Scatter data from root rank to all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
request i_scatter (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count, int root_rank) const
 (Non-blocking) Scatter data from root rank to all ranks in the communicator. More...
 
template<typename SendType , typename RecvType >
void scatter_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int *send_counts, const int *displacements, RecvType *recv_buffer, const int recv_count, int root_rank) const
 Scatter data from root rank to all ranks in the communicator with offsets. More...
 
template<typename SendType , typename RecvType >
request i_scatter_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int *send_counts, const int *displacements, RecvType *recv_buffer, const int recv_count, int root_rank) const
 (Non-blocking) Scatter data from root rank to all ranks in the communicator with offsets. More...
 
template<typename RecvType >
void all_to_all (std::shared_ptr< const Executor > exec, RecvType *recv_buffer, const int recv_count) const
 (In-place) Communicate data from all ranks to all other ranks in place (MPI_Alltoall). More...
 
template<typename RecvType >
request i_all_to_all (std::shared_ptr< const Executor > exec, RecvType *recv_buffer, const int recv_count) const
 (In-place, Non-blocking) Communicate data from all ranks to all other ranks in place (MPI_Ialltoall). More...
 
template<typename SendType , typename RecvType >
void all_to_all (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count) const
 Communicate data from all ranks to all other ranks (MPI_Alltoall). More...
 
template<typename SendType , typename RecvType >
request i_all_to_all (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int send_count, RecvType *recv_buffer, const int recv_count) const
 (Non-blocking) Communicate data from all ranks to all other ranks (MPI_Ialltoall). More...
 
template<typename SendType , typename RecvType >
void all_to_all_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int *send_counts, const int *send_offsets, RecvType *recv_buffer, const int *recv_counts, const int *recv_offsets) const
 Communicate data from all ranks to all other ranks with offsets (MPI_Alltoallv). More...
 
void all_to_all_v (std::shared_ptr< const Executor > exec, const void *send_buffer, const int *send_counts, const int *send_offsets, MPI_Datatype send_type, void *recv_buffer, const int *recv_counts, const int *recv_offsets, MPI_Datatype recv_type) const
 Communicate data from all ranks to all other ranks with offsets (MPI_Alltoallv). More...
 
request i_all_to_all_v (std::shared_ptr< const Executor > exec, const void *send_buffer, const int *send_counts, const int *send_offsets, MPI_Datatype send_type, void *recv_buffer, const int *recv_counts, const int *recv_offsets, MPI_Datatype recv_type) const
 Communicate data from all ranks to all other ranks with offsets (MPI_Ialltoallv). More...
 
template<typename SendType , typename RecvType >
request i_all_to_all_v (std::shared_ptr< const Executor > exec, const SendType *send_buffer, const int *send_counts, const int *send_offsets, RecvType *recv_buffer, const int *recv_counts, const int *recv_offsets) const
 Communicate data from all ranks to all other ranks with offsets (MPI_Ialltoallv). More...
 
template<typename ScanType >
void scan (std::shared_ptr< const Executor > exec, const ScanType *send_buffer, ScanType *recv_buffer, int count, MPI_Op operation) const
 Does a scan operation with the given operator. More...
 
template<typename ScanType >
request i_scan (std::shared_ptr< const Executor > exec, const ScanType *send_buffer, ScanType *recv_buffer, int count, MPI_Op operation) const
 Does a scan operation with the given operator. More...
 

Detailed Description

A thin wrapper of MPI_Comm that supports most MPI calls.

A wrapper class that takes in the given MPI communicator. If a bare MPI_Comm is provided, the wrapper takes no ownership of it; the MPI_Comm must then remain valid throughout the lifetime of the communicator object. If the communicator was created through splitting, the wrapper takes ownership of the MPI_Comm, and the underlying MPI_Comm is freed when the object goes out of scope.

Note
All MPI calls that work on a buffer take in an Executor as an additional argument. This argument specifies the memory space the buffer lives in.
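
A minimal usage sketch, assuming MPI is initialized and finalized by the caller (here with plain MPI_Init/MPI_Finalize): the wrapper is constructed non-owningly around MPI_COMM_WORLD and queried for rank and size.

#include <iostream>

#include <mpi.h>

#include <ginkgo/core/base/mpi.hpp>


int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    {
        // Non-owning wrapper: MPI_COMM_WORLD stays valid for the whole block.
        gko::experimental::mpi::communicator comm(MPI_COMM_WORLD);
        std::cout << "rank " << comm.rank() << " of " << comm.size()
                  << std::endl;
        comm.synchronize();  // MPI_Barrier on the wrapped communicator
    }
    MPI_Finalize();
    return 0;
}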

Constructor & Destructor Documentation

◆ communicator() [1/3]

gko::experimental::mpi::communicator::communicator ( const MPI_Comm &  comm,
bool  force_host_buffer = false 
)
inline

Non-owning constructor for an existing communicator of type MPI_Comm.

The MPI_Comm object will not be freed when this communicator object is destroyed; an explicit MPI_Comm_free must be called on the original MPI_Comm object.

Parameters
comm  The input MPI_Comm object.
force_host_buffer  If set to true, always communicates through host memory

◆ communicator() [2/3]

gko::experimental::mpi::communicator::communicator ( const MPI_Comm &  comm,
int  color,
int  key 
)
inline

Create a communicator object from an existing MPI_Comm object using color and key.

Parameters
comm  The input MPI_Comm object.
color  The color to split the original comm object
key  The key to split the comm object

◆ communicator() [3/3]

gko::experimental::mpi::communicator::communicator ( const communicator &  comm,
int  color,
int  key 
)
inline

Create a communicator object from an existing communicator using color and key.

Parameters
comm  The input communicator object.
color  The color to split the original comm object
key  The key to split the comm object

References get().
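
A hedged sketch of the splitting constructors: the helper name split_in_half is hypothetical. The resulting communicator owns its underlying MPI_Comm and frees it on destruction.

#include <mpi.h>

#include <ginkgo/core/base/mpi.hpp>


// Hypothetical helper: split a communicator into two halves by rank.
gko::experimental::mpi::communicator split_in_half(
    const gko::experimental::mpi::communicator& world)
{
    const int color = world.rank() < world.size() / 2 ? 0 : 1;
    const int key = world.rank();  // keep the relative rank order
    return gko::experimental::mpi::communicator(world, color, key);
}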

Member Function Documentation

◆ all_gather()

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::all_gather ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

Gather data onto all ranks from all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_count  the number of elements to receive
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().
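
A possible use, assuming a host executor (for example gko::ReferenceExecutor::create()) since the buffers are plain host vectors, and assuming int has a type_impl specialization; the helper name gather_counts is hypothetical.

#include <memory>
#include <vector>

#include <ginkgo/ginkgo.hpp>


// Hypothetical helper: collect every rank's local count on all ranks.
std::vector<int> gather_counts(
    const gko::experimental::mpi::communicator& comm,
    std::shared_ptr<const gko::Executor> exec, int local_count)
{
    std::vector<int> all_counts(comm.size());
    // One element per rank; exec must match where the buffers live (host here).
    comm.all_gather(exec, &local_count, 1, all_counts.data(), 1);
    return all_counts;
}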

◆ all_reduce() [1/2]

template<typename ReduceType >
void gko::experimental::mpi::communicator::all_reduce ( std::shared_ptr< const Executor >  exec,
const ReduceType *  send_buffer,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

Reduce data from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the data to reduce
recv_buffer  the reduced result
count  the number of elements to reduce
operation  the reduce operation. See MPI_Op.
Template Parameters
ReduceType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().
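
A small sketch of the non-in-place overload, assuming a host executor and a type_impl specialization for double; global_sum is a hypothetical helper.

#include <memory>

#include <mpi.h>

#include <ginkgo/ginkgo.hpp>


// Hypothetical helper: global sum of one value contributed by each rank.
double global_sum(const gko::experimental::mpi::communicator& comm,
                  std::shared_ptr<const gko::Executor> exec, double local)
{
    double result = 0.0;
    comm.all_reduce(exec, &local, &result, 1, MPI_SUM);
    return result;  // identical on every rank
}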

◆ all_reduce() [2/2]

template<typename ReduceType >
void gko::experimental::mpi::communicator::all_reduce ( std::shared_ptr< const Executor >  exec,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

(In-place) Reduce data from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the data to reduce and the reduced result
count  the number of elements to reduce
operation  the MPI_Op type reduce operation.
Template Parameters
ReduceType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().

◆ all_to_all() [1/2]

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::all_to_all ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

Communicate data from all ranks to all other ranks (MPI_Alltoall).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_count  the number of elements to send
recv_buffer  the buffer to receive
recv_count  the number of elements to receive
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().

◆ all_to_all() [2/2]

template<typename RecvType >
void gko::experimental::mpi::communicator::all_to_all ( std::shared_ptr< const Executor >  exec,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

(In-place) Communicate data from all ranks to all other ranks in place (MPI_Alltoall).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the buffer to send from and to receive into
recv_count  the number of elements to receive
Template Parameters
RecvType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Note
This overload uses MPI_IN_PLACE and the source and destination buffers are the same.

References get().

◆ all_to_all_v() [1/2]

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::all_to_all_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int *  send_counts,
const int *  send_offsets,
RecvType *  recv_buffer,
const int *  recv_counts,
const int *  recv_offsets 
) const
inline

Communicate data from all ranks to all other ranks with offsets (MPI_Alltoallv).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_counts  the number of elements to send to each rank
send_offsets  the offsets for the send buffer
recv_buffer  the buffer to receive into
recv_counts  the number of elements to receive from each rank
recv_offsets  the offsets for the recv buffer
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
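
A hedged sketch of how the count and offset arrays are typically assembled (offsets as prefix sums of the counts); the helper exchange and its parameters are hypothetical, and a host executor is assumed for the host-side vectors.

#include <memory>
#include <numeric>
#include <vector>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: exchange a variable number of ints with every rank.
// send_counts[r] is how many elements this rank sends to rank r; the matching
// recv_counts must have been agreed on beforehand (e.g. via an all_to_all).
void exchange(const gko::experimental::mpi::communicator& comm,
              std::shared_ptr<const gko::Executor> exec,
              const std::vector<int>& send_data,
              const std::vector<int>& send_counts,
              const std::vector<int>& recv_counts)
{
    std::vector<int> send_offsets(comm.size() + 1, 0);
    std::vector<int> recv_offsets(comm.size() + 1, 0);
    std::partial_sum(send_counts.begin(), send_counts.end(),
                     send_offsets.begin() + 1);
    std::partial_sum(recv_counts.begin(), recv_counts.end(),
                     recv_offsets.begin() + 1);
    std::vector<int> recv_data(recv_offsets.back());
    comm.all_to_all_v(exec, send_data.data(), send_counts.data(),
                      send_offsets.data(), recv_data.data(),
                      recv_counts.data(), recv_offsets.data());
    // recv_data now holds the elements received from all ranks, ordered by rank.
}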

◆ all_to_all_v() [2/2]

void gko::experimental::mpi::communicator::all_to_all_v ( std::shared_ptr< const Executor >  exec,
const void *  send_buffer,
const int *  send_counts,
const int *  send_offsets,
MPI_Datatype  send_type,
void *  recv_buffer,
const int *  recv_counts,
const int *  recv_offsets,
MPI_Datatype  recv_type 
) const
inline

Communicate data from all ranks to all other ranks with offsets (MPI_Alltoallv).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_counts  the number of elements to send to each rank
send_offsets  the offsets for the send buffer
send_type  the MPI_Datatype for the send buffer
recv_buffer  the buffer to receive into
recv_counts  the number of elements to receive from each rank
recv_offsets  the offsets for the recv buffer
recv_type  the MPI_Datatype for the recv buffer

References get().

◆ broadcast()

template<typename BroadcastType >
void gko::experimental::mpi::communicator::broadcast ( std::shared_ptr< const Executor >  exec,
BroadcastType *  buffer,
int  count,
int  root_rank 
) const
inline

Broadcast data from calling process to all ranks in the communicator.

Parameters
exec  The executor, on which the message buffer is located.
buffer  the buffer to broadcast
count  the number of elements to broadcast
root_rank  the rank to broadcast from
Template Parameters
BroadcastType  the type of the data to broadcast. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().
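
A short sketch, assuming a host executor and a type_impl specialization for double; broadcast_tolerance is a hypothetical helper.

#include <memory>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: rank 0 decides a solver tolerance, all ranks receive it.
double broadcast_tolerance(const gko::experimental::mpi::communicator& comm,
                           std::shared_ptr<const gko::Executor> exec)
{
    double tol = 0.0;
    if (comm.rank() == 0) {
        tol = 1e-8;  // only meaningful on the root before the broadcast
    }
    comm.broadcast(exec, &tol, 1, 0);  // root_rank = 0
    return tol;  // identical on every rank afterwards
}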

◆ gather()

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::gather ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

Gather data onto the root rank from all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_count  the number of elements to receive
root_rank  the rank to gather into
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().

◆ gather_v()

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::gather_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int *  recv_counts,
const int *  displacements,
int  root_rank 
) const
inline

Gather data onto the root rank from all ranks in the communicator with offsets.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_counts  the number of elements to receive from each rank
displacements  the offsets for the recv buffer
root_rank  the rank to gather into
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().
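
A hedged sketch combining gather (to learn the per-rank counts) with gather_v; gather_chunks is a hypothetical helper and a host executor is assumed.

#include <memory>
#include <numeric>
#include <vector>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: gather a variable-length chunk from every rank to rank 0.
std::vector<double> gather_chunks(
    const gko::experimental::mpi::communicator& comm,
    std::shared_ptr<const gko::Executor> exec,
    const std::vector<double>& local_chunk)
{
    const int root = 0;
    const int local_size = static_cast<int>(local_chunk.size());
    std::vector<int> recv_counts(comm.size());
    // The root first needs to know how much each rank contributes.
    comm.gather(exec, &local_size, 1, recv_counts.data(), 1, root);
    std::vector<int> displacements(comm.size() + 1, 0);
    std::partial_sum(recv_counts.begin(), recv_counts.end(),
                     displacements.begin() + 1);
    std::vector<double> gathered(
        comm.rank() == root ? displacements.back() : 0);
    comm.gather_v(exec, local_chunk.data(), local_size, gathered.data(),
                  recv_counts.data(), displacements.data(), root);
    return gathered;  // only meaningful on the root
}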

◆ get()

const MPI_Comm& gko::experimental::mpi::communicator::get ( ) const
inline

Return the underlying MPI_Comm object.

◆ i_all_gather()

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_all_gather ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

(Non-blocking) Gather data onto all ranks from all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_count  the number of elements to receive
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_all_reduce() [1/2]

template<typename ReduceType >
request gko::experimental::mpi::communicator::i_all_reduce ( std::shared_ptr< const Executor >  exec,
const ReduceType *  send_buffer,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

(Non-blocking) Reduce data from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the data to reduce
recv_buffer  the reduced result
count  the number of elements to reduce
operation  the reduce operation. See MPI_Op.
Template Parameters
ReduceType  the type of the data to reduce. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_all_reduce() [2/2]

template<typename ReduceType >
request gko::experimental::mpi::communicator::i_all_reduce ( std::shared_ptr< const Executor >  exec,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

(In-place, non-blocking) Reduce data from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the data to reduce and the reduced result
count  the number of elements to reduce
operation  the reduce operation. See MPI_Op.
Template Parameters
ReduceType  the type of the data to reduce. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_all_to_all() [1/2]

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_all_to_all ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

(Non-blocking) Communicate data from all ranks to all other ranks (MPI_Ialltoall).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_count  the number of elements to send
recv_buffer  the buffer to receive
recv_count  the number of elements to receive
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_all_to_all() [2/2]

template<typename RecvType >
request gko::experimental::mpi::communicator::i_all_to_all ( std::shared_ptr< const Executor >  exec,
RecvType *  recv_buffer,
const int  recv_count 
) const
inline

(In-place, Non-blocking) Communicate data from all ranks to all other ranks in place (MPI_Ialltoall).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the buffer to send from and to receive into
recv_count  the number of elements to receive
Template Parameters
RecvType  the type of the data to receive. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call
Note
This overload uses MPI_IN_PLACE and the source and destination buffers are the same.

References gko::experimental::mpi::request::get(), and get().

◆ i_all_to_all_v() [1/2]

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_all_to_all_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int *  send_counts,
const int *  send_offsets,
RecvType *  recv_buffer,
const int *  recv_counts,
const int *  recv_offsets 
) const
inline

Communicate data from all ranks to all other ranks with offsets (MPI_Ialltoallv).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_counts  the number of elements to send to each rank
send_offsets  the offsets for the send buffer
recv_buffer  the buffer to receive into
recv_counts  the number of elements to receive from each rank
recv_offsets  the offsets for the recv buffer
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References i_all_to_all_v().

◆ i_all_to_all_v() [2/2]

request gko::experimental::mpi::communicator::i_all_to_all_v ( std::shared_ptr< const Executor >  exec,
const void *  send_buffer,
const int *  send_counts,
const int *  send_offsets,
MPI_Datatype  send_type,
void *  recv_buffer,
const int *  recv_counts,
const int *  recv_offsets,
MPI_Datatype  recv_type 
) const
inline

Communicate data from all ranks to all other ranks with offsets (MPI_Ialltoallv).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to send
send_counts  the number of elements to send to each rank
send_offsets  the offsets for the send buffer
send_type  the MPI_Datatype for the send buffer
recv_buffer  the buffer to receive into
recv_counts  the number of elements to receive from each rank
recv_offsets  the offsets for the recv buffer
recv_type  the MPI_Datatype for the recv buffer
Returns
the request handle for the call
Note
This overload allows specifying the MPI_Datatype for both the send and received data.

References gko::experimental::mpi::request::get(), and get().

Referenced by i_all_to_all_v().
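
A hedged sketch of the type-erased overload with a user-defined MPI_Datatype: exchange_pairs is a hypothetical helper, the counts and offsets are given in units of the derived type, and it is assumed here that request exposes a wait() member (only request::get() is referenced on this page).

#include <memory>
#include <vector>

#include <mpi.h>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: exchange fixed-size pairs of doubles using an explicit
// MPI_Datatype with the type-erased i_all_to_all_v overload.
void exchange_pairs(const gko::experimental::mpi::communicator& comm,
                    std::shared_ptr<const gko::Executor> exec,
                    const std::vector<double>& send, std::vector<double>& recv,
                    const std::vector<int>& send_counts,
                    const std::vector<int>& send_offsets,
                    const std::vector<int>& recv_counts,
                    const std::vector<int>& recv_offsets)
{
    MPI_Datatype pair_type;
    MPI_Type_contiguous(2, MPI_DOUBLE, &pair_type);
    MPI_Type_commit(&pair_type);
    auto req = comm.i_all_to_all_v(exec, send.data(), send_counts.data(),
                                   send_offsets.data(), pair_type, recv.data(),
                                   recv_counts.data(), recv_offsets.data(),
                                   pair_type);
    req.wait();  // assumes request exposes a wait() member
    MPI_Type_free(&pair_type);
}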

◆ i_broadcast()

template<typename BroadcastType >
request gko::experimental::mpi::communicator::i_broadcast ( std::shared_ptr< const Executor >  exec,
BroadcastType *  buffer,
int  count,
int  root_rank 
) const
inline

(Non-blocking) Broadcast data from calling process to all ranks in the communicator

Parameters
exec  The executor, on which the message buffer is located.
buffer  the buffer to broadcast
count  the number of elements to broadcast
root_rank  the rank to broadcast from
Template Parameters
BroadcastType  the type of the data to broadcast. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_gather()

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_gather ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

(Non-blocking) Gather data onto the root rank from all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_count  the number of elements to receive
root_rank  the rank to gather into
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_gather_v()

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_gather_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int *  recv_counts,
const int *  displacements,
int  root_rank 
) const
inline

(Non-blocking) Gather data onto the root rank from all ranks in the communicator with offsets.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to gather from
send_count  the number of elements to send
recv_buffer  the buffer to gather into
recv_counts  the number of elements to receive from each rank
displacements  the offsets for the recv buffer
root_rank  the rank to gather into
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_recv()

template<typename RecvType >
request gko::experimental::mpi::communicator::i_recv ( std::shared_ptr< const Executor >  exec,
RecvType *  recv_buffer,
const int  recv_count,
const int  source_rank,
const int  recv_tag 
) const
inline

Receive (Non-blocking, Immediate return) data from source rank.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the buffer to receive
recv_count  the number of elements to receive
source_rank  the rank to receive the data from
recv_tag  the tag for the recv call
Template Parameters
RecvType  the type of the data to receive. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the recv call

References gko::experimental::mpi::request::get(), and get().

◆ i_reduce()

template<typename ReduceType >
request gko::experimental::mpi::communicator::i_reduce ( std::shared_ptr< const Executor >  exec,
const ReduceType *  send_buffer,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation,
int  root_rank 
) const
inline

(Non-blocking) Reduce data into root from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffer is located.
send_buffer  the buffer to reduce
recv_buffer  the reduced result
count  the number of elements to reduce
operation  the MPI_Op type reduce operation.
root_rank  the rank to reduce into
Template Parameters
ReduceType  the type of the data to reduce. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_scan()

template<typename ScanType >
request gko::experimental::mpi::communicator::i_scan ( std::shared_ptr< const Executor >  exec,
const ScanType *  send_buffer,
ScanType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

Does a scan operation with the given operator (MPI_Iscan).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scan from
recv_buffer  the result buffer
count  the number of elements to scan
operation  the operation type to be used for the scan. See MPI_Op.
Template Parameters
ScanType  the type of the data to scan. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_scatter()

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_scatter ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

(Non-blocking) Scatter data from root rank to all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scatter from
send_count  the number of elements to send
recv_buffer  the buffer to receive into
recv_count  the number of elements to receive
root_rank  the rank to scatter from
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_scatter_v()

template<typename SendType , typename RecvType >
request gko::experimental::mpi::communicator::i_scatter_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int *  send_counts,
const int *  displacements,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

(Non-blocking) Scatter data from root rank to all ranks in the communicator with offsets.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scatter from
send_counts  the number of elements to send to each rank
recv_buffer  the buffer to receive into
recv_count  the number of elements to receive
displacements  the offsets for the send buffer
root_rank  the rank to scatter from
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.
Returns
the request handle for the call

References gko::experimental::mpi::request::get(), and get().

◆ i_send()

template<typename SendType >
request gko::experimental::mpi::communicator::i_send ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
const int  destination_rank,
const int  send_tag 
) const
inline

Send (Non-blocking, Immediate return) data from calling process to destination rank.

Parameters
exec  The executor, on which the message buffer is located.
send_buffer  the buffer to send
send_count  the number of elements to send
destination_rank  the rank to send the data to
send_tag  the tag for the send call
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the request handle for the send call

References gko::experimental::mpi::request::get(), and get().
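
A hedged sketch pairing i_recv and i_send for a ring exchange: ring_exchange is a hypothetical helper, a host executor is assumed, and request is assumed to expose a wait() member (only request::get() is referenced on this page).

#include <memory>
#include <vector>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: exchange halo data with the neighbouring ranks in a ring
// using the non-blocking point-to-point calls.
void ring_exchange(const gko::experimental::mpi::communicator& comm,
                   std::shared_ptr<const gko::Executor> exec,
                   const std::vector<double>& send, std::vector<double>& recv)
{
    const int right = (comm.rank() + 1) % comm.size();
    const int left = (comm.rank() + comm.size() - 1) % comm.size();
    const int tag = 0;
    // Post the receive before the send so neither side blocks the other.
    auto recv_req = comm.i_recv(exec, recv.data(),
                                static_cast<int>(recv.size()), left, tag);
    auto send_req = comm.i_send(exec, send.data(),
                                static_cast<int>(send.size()), right, tag);
    // Assumes request exposes a wait() member that blocks until completion.
    send_req.wait();
    recv_req.wait();
}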

◆ node_local_rank()

int gko::experimental::mpi::communicator::node_local_rank ( ) const
inline

Return the node local rank of the calling process in the communicator.

Returns
the node local rank

◆ operator!=()

bool gko::experimental::mpi::communicator::operator!= ( const communicator &  rhs) const
inline

Compare two communicator objects for non-equality.

Returns
if the two comm objects are not equal

◆ operator==()

bool gko::experimental::mpi::communicator::operator== ( const communicator &  rhs) const
inline

Compare two communicator objects for equality.

Returns
if the two comm objects are equal

◆ rank()

int gko::experimental::mpi::communicator::rank ( ) const
inline

Return the rank of the calling process in the communicator.

Returns
the rank

◆ recv()

template<typename RecvType >
status gko::experimental::mpi::communicator::recv ( std::shared_ptr< const Executor >  exec,
RecvType *  recv_buffer,
const int  recv_count,
const int  source_rank,
const int  recv_tag 
) const
inline

Receive data from source rank.

Parameters
exec  The executor, on which the message buffer is located.
recv_buffer  the buffer to receive
recv_count  the number of elements to receive
source_rank  the rank to receive the data from
recv_tag  the tag for the recv call
Template Parameters
RecvType  the type of the data to receive. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
Returns
the status of completion of this call

References gko::experimental::mpi::status::get(), and get().

◆ reduce()

template<typename ReduceType >
void gko::experimental::mpi::communicator::reduce ( std::shared_ptr< const Executor >  exec,
const ReduceType *  send_buffer,
ReduceType *  recv_buffer,
int  count,
MPI_Op  operation,
int  root_rank 
) const
inline

Reduce data into root from all calling processes on the same communicator.

Parameters
exec  The executor, on which the message buffer is located.
send_buffer  the buffer to reduce
recv_buffer  the reduced result
count  the number of elements to reduce
operation  the MPI_Op type reduce operation.
root_rank  the rank to reduce into
Template Parameters
ReduceType  the type of the data to reduce. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().

◆ scan()

template<typename ScanType >
void gko::experimental::mpi::communicator::scan ( std::shared_ptr< const Executor >  exec,
const ScanType *  send_buffer,
ScanType *  recv_buffer,
int  count,
MPI_Op  operation 
) const
inline

Does a scan operation with the given operator (MPI_Scan).

See MPI documentation for more details.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scan from
recv_buffer  the result buffer
count  the number of elements to scan
operation  the operation type to be used for the scan. See MPI_Op.
Template Parameters
ScanType  the type of the data to scan. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().
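
A hedged sketch using the inclusive scan to compute a global offset from local sizes; global_row_offset is a hypothetical helper, a host executor is assumed, and int is assumed to have a type_impl specialization.

#include <memory>

#include <mpi.h>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: compute the first global row index owned by this rank
// from the local row count via an inclusive scan (MPI_Scan).
int global_row_offset(const gko::experimental::mpi::communicator& comm,
                      std::shared_ptr<const gko::Executor> exec,
                      int local_rows)
{
    int inclusive_sum = 0;
    comm.scan(exec, &local_rows, &inclusive_sum, 1, MPI_SUM);
    // The inclusive scan includes this rank's contribution, so subtract it
    // to obtain the exclusive prefix (the global starting index).
    return inclusive_sum - local_rows;
}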

◆ scatter()

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::scatter ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

Scatter data from root rank to all ranks in the communicator.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scatter from
send_count  the number of elements to send
recv_buffer  the buffer to receive into
recv_count  the number of elements to receive
root_rank  the rank to scatter from
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().

◆ scatter_v()

template<typename SendType , typename RecvType >
void gko::experimental::mpi::communicator::scatter_v ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int *  send_counts,
const int *  displacements,
RecvType *  recv_buffer,
const int  recv_count,
int  root_rank 
) const
inline

Scatter data from root rank to all ranks in the communicator with offsets.

Parameters
exec  The executor, on which the message buffers are located.
send_buffer  the buffer to scatter from
send_counts  the number of elements to send to each rank
recv_buffer  the buffer to receive into
recv_count  the number of elements to receive
displacements  the offsets for the send buffer
root_rank  the rank to scatter from
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.
RecvType  the type of the data to receive. The same restrictions as for SendType apply.

References get().
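
A hedged sketch of distributing variable-sized chunks from rank 0: distribute is a hypothetical helper, the count and displacement arrays are significant only on the root, and a host executor is assumed.

#include <memory>
#include <vector>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: rank 0 distributes a contiguous array in variable-sized
// chunks described by send_counts and displacements.
std::vector<double> distribute(const gko::experimental::mpi::communicator& comm,
                               std::shared_ptr<const gko::Executor> exec,
                               const std::vector<double>& all_data,
                               const std::vector<int>& send_counts,
                               const std::vector<int>& displacements,
                               int my_count)
{
    const int root = 0;
    std::vector<double> local(my_count);
    comm.scatter_v(exec, all_data.data(), send_counts.data(),
                   displacements.data(), local.data(), my_count, root);
    return local;  // this rank's chunk
}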

◆ send()

template<typename SendType >
void gko::experimental::mpi::communicator::send ( std::shared_ptr< const Executor >  exec,
const SendType *  send_buffer,
const int  send_count,
const int  destination_rank,
const int  send_tag 
) const
inline

Send (Blocking) data from calling process to destination rank.

Parameters
exec  The executor, on which the message buffer is located.
send_buffer  the buffer to send
send_count  the number of elements to send
destination_rank  the rank to send the data to
send_tag  the tag for the send call
Template Parameters
SendType  the type of the data to send. Has to be a type which has a specialization of type_impl that defines its MPI_Datatype.

References get().
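
A hedged sketch pairing the blocking send and recv between ranks 0 and 1 (requires at least two ranks); send_token is a hypothetical helper and a host executor is assumed.

#include <memory>

#include <ginkgo/ginkgo.hpp>


// Hypothetical sketch: rank 0 sends a single value to rank 1 with the blocking
// point-to-point calls; every other rank does nothing.
void send_token(const gko::experimental::mpi::communicator& comm,
                std::shared_ptr<const gko::Executor> exec)
{
    const int tag = 42;
    if (comm.rank() == 0) {
        int token = 123;
        comm.send(exec, &token, 1, 1, tag);  // destination_rank = 1
    } else if (comm.rank() == 1) {
        int token = 0;
        comm.recv(exec, &token, 1, 0, tag);  // source_rank = 0
        // token == 123 here
    }
}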

◆ size()

int gko::experimental::mpi::communicator::size ( ) const
inline

Return the size of the communicator (number of ranks).

Returns
the size

◆ synchronize()

void gko::experimental::mpi::communicator::synchronize ( ) const
inline

This function is used to synchronize the ranks in the communicator.

Calls MPI_Barrier

References get().


The documentation for this class was generated from the following file:
ginkgo/core/base/mpi.hpp