Ginkgo 1.10.0
A numerical linear algebra library targeting many-core architectures
gko::experimental::distributed Namespace Reference

The distributed namespace.

Namespaces

 preconditioner
 The Preconditioner namespace.
 

Classes

class  DistributedBase
 A base class for distributed objects.
 
struct  index_map
 This class defines mappings between global and local indices.
 
class  Matrix
 The Matrix class defines a (MPI-)distributed matrix.
 
class  Partition
 Represents a partition of a range of indices [0, size) into a disjoint set of parts.
 
class  Vector
 Vector is a format which explicitly stores (multiple) distributed column vectors in a dense storage format.
 

Typedefs

using comm_index_type = int
 Index type for enumerating processes in a distributed application.
 

Enumerations

enum  index_space { index_space::local, index_space::non_local, index_space::combined }
 Index space classification for the locally stored indices.
 
enum  assembly_mode { communicate, local_only }
 assembly_mode defines how the read_distributed function of the distributed matrix treats non-local indices in the (device_)matrix_data.
 

Functions

template<typename ValueType >
gko::detail::temporary_conversion< Vector< ValueType > > make_temporary_conversion (LinOp *matrix)
 Convert the given LinOp from experimental::distributed::Vector<...> to experimental::distributed::Vector<ValueType>.
 
template<typename ValueType >
gko::detail::temporary_conversion< const Vector< ValueType > > make_temporary_conversion (const LinOp *matrix)
 Convert the given LinOp from experimental::distributed::Vector<...> to experimental::distributed::Vector<ValueType>.
 
template<typename ValueType , typename Function , typename... Args>
void precision_dispatch (Function fn, Args *... linops)
 Calls the given function with each given argument LinOp temporarily converted into experimental::distributed::Vector<ValueType> as parameters.
 
template<typename ValueType , typename Function >
void precision_dispatch_real_complex (Function fn, const LinOp *in, LinOp *out)
 Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.
 
template<typename ValueType , typename Function >
void precision_dispatch_real_complex (Function fn, const LinOp *alpha, const LinOp *in, LinOp *out)
 Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.
 
template<typename ValueType , typename Function >
void precision_dispatch_real_complex (Function fn, const LinOp *alpha, const LinOp *in, const LinOp *beta, LinOp *out)
 Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.
 
template<typename ValueType , typename LocalIndexType , typename GlobalIndexType >
device_matrix_data< ValueType, GlobalIndexType > assemble_rows_from_neighbors (mpi::communicator comm, const device_matrix_data< ValueType, GlobalIndexType > &input, ptr_param< const Partition< LocalIndexType, GlobalIndexType >> partition)
 Assembles device_matrix_data entries owned by this MPI rank from other ranks and communicates entries located on this MPI rank owned by other ranks to their respective owners.
 
template<typename LocalIndexType , typename GlobalIndexType >
std::unique_ptr< Partition< LocalIndexType, GlobalIndexType > > build_partition_from_local_range (std::shared_ptr< const Executor > exec, mpi::communicator comm, span local_range)
 Builds a partition from a local range.
 
template<typename LocalIndexType , typename GlobalIndexType >
std::unique_ptr< Partition< LocalIndexType, GlobalIndexType > > build_partition_from_local_size (std::shared_ptr< const Executor > exec, mpi::communicator comm, size_type local_size)
 Builds a partition from a local size.
 

Detailed Description

The distributed namespace.

Typedef Documentation

◆ comm_index_type

Index type for enumerating processes in a distributed application.

Conforms to the MPI C interface, e.g. for MPI rank or size.

Enumeration Type Documentation

◆ assembly_mode

assembly_mode defines how the read_distributed function of the distributed matrix treats non-local indices in the (device_)matrix_data:

  • communicate communicates the overlap between ranks and adds up all local contributions. Indices smaller than 0 or larger than the global size of the matrix are ignored.
  • local_only does not communicate any overlap but ignores all non-local indices.
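
The sketch below illustrates the intended use. It assumes the Matrix::read_distributed overload that accepts a trailing assembly_mode argument; the exact parameter order and defaults may differ between Ginkgo versions, so treat it as illustrative rather than authoritative.

    #include <ginkgo/ginkgo.hpp>

    // Illustrative only: each rank contributes rows that may be owned by other
    // ranks; assembly_mode::communicate lets read_distributed send them to
    // their owners and add them up, while assembly_mode::local_only would
    // simply drop all non-local entries.
    void read_system(std::shared_ptr<const gko::Executor> exec,
                     gko::experimental::mpi::communicator comm,
                     const gko::matrix_data<double, gko::int64>& local_contribution,
                     std::shared_ptr<const gko::experimental::distributed::Partition<
                         gko::int32, gko::int64>> partition)
    {
        using dist_mtx =
            gko::experimental::distributed::Matrix<double, gko::int32, gko::int64>;
        auto mat = dist_mtx::create(exec, comm);
        mat->read_distributed(
            local_contribution, partition,
            gko::experimental::distributed::assembly_mode::communicate);
    }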

◆ index_space

Index space classification for the locally stored indices.

The definitions of the enum values are clarified in index_map.

Enumerator
    local      indices that are locally owned
    non_local  indices that are owned by other processes
    combined   both local and non_local indices

Function Documentation

◆ assemble_rows_from_neighbors()

template<typename ValueType , typename LocalIndexType , typename GlobalIndexType >
device_matrix_data<ValueType, GlobalIndexType> gko::experimental::distributed::assemble_rows_from_neighbors (
    mpi::communicator comm,
    const device_matrix_data<ValueType, GlobalIndexType>& input,
    ptr_param<const Partition<LocalIndexType, GlobalIndexType>> partition )

Assembles device_matrix_data entries owned by this MPI rank from other ranks and communicates entries located on this MPI rank owned by other ranks to their respective owners.

This can be useful e.g. in a finite element code where each rank assembles a local contribution to a global system matrix and the global matrix has to be assembled by summing up the local contributions on rank boundaries. The partition used is only relevant for row ownership.

Parameters
    comm       the communicator used to assemble the global matrix.
    input      the device_matrix_data structure.
    partition  the partition used to determine row ownership.
Returns
    the globally assembled device_matrix_data structure for this MPI rank.
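
A minimal sketch of the intended workflow (how the local contribution was produced, e.g. by a finite element assembly loop, is left out and hypothetical):

    #include <ginkgo/ginkgo.hpp>

    using value_type = double;
    using local_index_type = gko::int32;
    using global_index_type = gko::int64;
    using partition_type =
        gko::experimental::distributed::Partition<local_index_type, global_index_type>;

    // Each rank passes in the entries it assembled locally (possibly touching
    // rows owned by neighboring ranks) and gets back the entries for the rows
    // it owns, including the contributions received from its neighbors.
    gko::device_matrix_data<value_type, global_index_type> assemble_owned_rows(
        gko::experimental::mpi::communicator comm,
        const gko::device_matrix_data<value_type, global_index_type>& local_contribution,
        std::shared_ptr<const partition_type> partition)
    {
        return gko::experimental::distributed::assemble_rows_from_neighbors(
            comm, local_contribution, partition);
    }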

◆ build_partition_from_local_range()

template<typename LocalIndexType , typename GlobalIndexType >
std::unique_ptr<Partition<LocalIndexType, GlobalIndexType>> gko::experimental::distributed::build_partition_from_local_range (
    std::shared_ptr<const Executor> exec,
    mpi::communicator comm,
    span local_range )

Builds a partition from a local range.

Parameters
    exec         the Executor on which the partition should be built.
    comm         the communicator used to determine the global partition.
    local_range  the start and end indices of the local range.
Warning
    This throws if the resulting partition would contain gaps. That means that for a partition of size n, every local range r_i = [s_i, e_i) must satisfy: if s_i != 0, another local range r_j = [s_j, e_j = s_i) exists, and if e_i != n, another local range r_j = [s_j = e_i, e_j) exists.
Returns
    a Partition where each range has the individual local_start and local_end.
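
For example, a contiguous row distribution that satisfies the no-gaps requirement could be set up as follows (block size and index types are arbitrary choices for illustration):

    #include <ginkgo/ginkgo.hpp>

    auto make_contiguous_partition(std::shared_ptr<const gko::Executor> exec,
                                   gko::experimental::mpi::communicator comm)
    {
        const gko::size_type block = 1000;
        const auto rank = static_cast<gko::size_type>(comm.rank());
        // Rank r owns [r * block, (r + 1) * block); together the ranges tile
        // [0, comm.size() * block) without gaps, so the check above passes.
        gko::span local_range{rank * block, (rank + 1) * block};
        return gko::experimental::distributed::build_partition_from_local_range<
            gko::int32, gko::int64>(exec, comm, local_range);
    }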

◆ build_partition_from_local_size()

template<typename LocalIndexType , typename GlobalIndexType >
std::unique_ptr<Partition<LocalIndexType, GlobalIndexType>> gko::experimental::distributed::build_partition_from_local_size (
    std::shared_ptr<const Executor> exec,
    mpi::communicator comm,
    size_type local_size )

Builds a partition from a local size.

Parameters
    exec        the Executor on which the partition should be built.
    comm        the communicator used to determine the global partition.
    local_size  the number of locally owned indices.
Returns
    a Partition where each range has the specified local size. More specifically, if this is called on process i with local_size s_i, then range i has size s_i, i.e. r_i = [start, start + s_i), where start = sum_{j<i} s_j.
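
A short sketch with an uneven distribution (the sizes are chosen arbitrarily for illustration):

    #include <ginkgo/ginkgo.hpp>

    auto make_partition_from_sizes(std::shared_ptr<const gko::Executor> exec,
                                   gko::experimental::mpi::communicator comm)
    {
        // Rank r owns 100 + r indices; range r then starts at sum_{j<r} (100 + j).
        const auto local_size = static_cast<gko::size_type>(100 + comm.rank());
        return gko::experimental::distributed::build_partition_from_local_size<
            gko::int32, gko::int64>(exec, comm, local_size);
    }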

◆ make_temporary_conversion() [1/2]

template<typename ValueType >
gko::detail::temporary_conversion<const Vector<ValueType>> gko::experimental::distributed::make_temporary_conversion ( const LinOp* matrix )

Convert the given LinOp from experimental::distributed::Vector<...> to experimental::distributed::Vector<ValueType>.

The conversion tries to convert the input LinOp to all Dense types with value type recursively reachable by next_precision_base<...> starting from the ValueType template parameter. This means that all real-to-real and complex-to-complex conversions for default precisions are being considered. If the input matrix is non-const, the contents of the modified converted object will be converted back to the input matrix when the returned object is destroyed. This may lead to a loss of precision!

Parameters
    matrix  the input matrix which is supposed to be converted. It is wrapped unchanged if it is already of type experimental::distributed::Vector<ValueType>, otherwise it will be converted to this type if possible.
Returns
    a detail::temporary_conversion pointing to the (potentially converted) object.
Exceptions
    NotSupported  if the input matrix cannot be converted to experimental::distributed::Vector<ValueType>.
Template Parameters
    ValueType  the value type into whose associated Vector type to convert the input LinOp.

◆ make_temporary_conversion() [2/2]

template<typename ValueType >
gko::detail::temporary_conversion<Vector<ValueType>> gko::experimental::distributed::make_temporary_conversion ( LinOp* matrix )

Convert the given LinOp from experimental::distributed::Vector<...> to experimental::distributed::Vector<ValueType>.

The conversion tries to convert the input LinOp to all Dense types with value type recursively reachable by next_precision_base<...> starting from the ValueType template parameter. This means that all real-to-real and complex-to-complex conversions for default precisions are being considered. If the input matrix is non-const, the contents of the modified converted object will be converted back to the input matrix when the returned object is destroyed. This may lead to a loss of precision!

Parameters
    matrix  the input matrix which is supposed to be converted. It is wrapped unchanged if it is already of type experimental::distributed::Vector<ValueType>, otherwise it will be converted to this type if possible.
Returns
    a detail::temporary_conversion pointing to the (potentially converted) object.
Exceptions
    NotSupported  if the input matrix cannot be converted to experimental::distributed::Vector<ValueType>.
Template Parameters
    ValueType  the value type into whose associated Vector type to convert the input LinOp.
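
A hedged sketch of the typical use; the scale call is just an arbitrary example of working with the converted vector:

    #include <ginkgo/ginkgo.hpp>

    // `x` may be a distributed Vector of any supported precision; it is
    // accessed as Vector<double> inside this function and, if a conversion was
    // necessary, the modified values are written back when `dense_x` is
    // destroyed (possibly losing precision).
    void scale_in_double(gko::LinOp* x, const gko::LinOp* alpha)
    {
        auto dense_x =
            gko::experimental::distributed::make_temporary_conversion<double>(x);
        dense_x->scale(alpha);  // dense_x behaves like a Vector<double>*
    }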

◆ precision_dispatch()

template<typename ValueType , typename Function , typename... Args>
void gko::experimental::distributed::precision_dispatch ( Function fn, Args*... linops )

Calls the given function with each given argument LinOp temporarily converted into experimental::distributed::Vector<ValueType> as parameters.

Parameters
    fn      the given function. It will be passed one (potentially const) experimental::distributed::Vector<ValueType>* parameter per parameter in the parameter pack linops.
    linops  the given arguments to be converted and passed on to fn.
Template Parameters
    ValueType  the value type to use for the parameters of fn.
    Function   the function pointer, lambda or other functor type to call with the converted arguments.
    Args       the argument type list.
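
A sketch of how this is typically used, e.g. inside a solver's apply_impl; the lambda body is only a placeholder:

    #include <ginkgo/ginkgo.hpp>

    template <typename ValueType>
    void apply_impl_sketch(const gko::LinOp* b, gko::LinOp* x)
    {
        gko::experimental::distributed::precision_dispatch<ValueType>(
            [](auto dense_b, auto dense_x) {
                // dense_b is passed as a const Vector<ValueType>*, dense_x as
                // a Vector<ValueType>*; any temporary conversion is written
                // back to x when the dispatch returns.
                dense_x->copy_from(dense_b);
            },
            b, x);
    }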

◆ precision_dispatch_real_complex() [1/3]

template<typename ValueType , typename Function >
void gko::experimental::distributed::precision_dispatch_real_complex ( Function fn, const LinOp* alpha, const LinOp* in, const LinOp* beta, LinOp* out )

Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.

If ValueType is real and both input vectors are complex, uses experimental::distributed::Vector::get_real_view() to convert them into real matrices after precision conversion.

See also
precision_dispatch()
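
A sketch of the common advanced-apply pattern; the lambda body is a stand-in for an actual kernel:

    #include <ginkgo/ginkgo.hpp>

    template <typename ValueType>
    void advanced_apply_sketch(const gko::LinOp* alpha, const gko::LinOp* b,
                               const gko::LinOp* beta, gko::LinOp* x)
    {
        gko::experimental::distributed::precision_dispatch_real_complex<ValueType>(
            [](auto dense_alpha, auto dense_b, auto dense_beta, auto dense_x) {
                // The vector operands arrive converted to ValueType (or to a
                // real view of it); here we just compute x = beta * x + alpha * b.
                dense_x->scale(dense_beta);
                dense_x->add_scaled(dense_alpha, dense_b);
            },
            alpha, b, beta, x);
    }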

◆ precision_dispatch_real_complex() [2/3]

template<typename ValueType , typename Function >
void gko::experimental::distributed::precision_dispatch_real_complex ( Function fn, const LinOp* alpha, const LinOp* in, LinOp* out )

Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.

If ValueType is real and both input vectors are complex, uses experimental::distributed::Vector::get_real_view() to convert them into real matrices after precision conversion.

See also
precision_dispatch()

◆ precision_dispatch_real_complex() [3/3]

template<typename ValueType , typename Function >
void gko::experimental::distributed::precision_dispatch_real_complex ( Function fn, const LinOp* in, LinOp* out )

Calls the given function with the given LinOps temporarily converted to experimental::distributed::Vector<ValueType>* as parameters.

If ValueType is real and both input vectors are complex, uses experimental::distributed::Vector::get_real_view() to convert them into real matrices after precision conversion.

See also
precision_dispatch()