The distributed solver example
This example depends on the simple-solver and three-pt-stencil-solver examples.
Introduction
This distributed solver example should help you understand the basics of using Ginkgo in a distributed setting. The example will solve a simple 1D Laplace equation where the system can be distributed row-wise to multiple processes. To run the solver with multiple processes, use mpirun -n NUM_PROCS ./distributed-solver [executor] [num_grid_points] [num_iterations].
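To make the setup concrete: discretizing the 1D Laplace operator with the 3-pt stencil used below yields a tridiagonal system, of which each MPI rank assembles and owns a contiguous block of rows:

\[
A x = b,
\qquad
A = \begin{pmatrix}
 2 & -1 &        &    \\
-1 &  2 & \ddots &    \\
   & \ddots & \ddots & -1 \\
   &        & -1 & 2
\end{pmatrix}.
\]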
If you are using GPU devices, please make sure that you run this example with at most as many processes as you have GPU devices available.
The commented program
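As a first step, the example initializes MPI through an RAII environment object and fixes the value and index types. A minimal sketch, assuming the upstream defaults of double-precision values with 32-bit local and 64-bit global indices, looks like this:
const gko::experimental::mpi::environment env(argc, argv);
using GlobalIndexType = gko::int64;
using LocalIndexType = gko::int32;
using ValueType = double;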
As vector type we use the following, which implements a subset of gko::matrix::Dense.
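using dist_vec = gko::experimental::distributed::Vector<ValueType>;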
As matrix type we simply use the following type, which can read distributed data and be applied to a distributed vector.
using dist_mtx =
    gko::experimental::distributed::Matrix<ValueType, LocalIndexType,
                                           GlobalIndexType>;
We still need a localized vector type to be used as scalars in the advanced apply operations.
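using vec = gko::matrix::Dense<ValueType>;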
The partition type describes how the rows of the matrices are distributed.
using part_type =
    gko::experimental::distributed::Partition<LocalIndexType,
                                              GlobalIndexType>;
We can use here the same solver type as you would use in a non-distributed program. Please note that not all solvers support distributed systems at the moment. Besides the solver, we also define the Schwarz preconditioner type and the block-Jacobi type used as its local solver.
using solver = gko::solver::Cg<ValueType>;
using schwarz = gko::experimental::distributed::preconditioner::Schwarz<
    ValueType, LocalIndexType, GlobalIndexType>;
using bj = gko::preconditioner::Jacobi<ValueType, LocalIndexType>;
Create an MPI communicator and get the rank of the calling process.
const auto comm = gko::experimental::mpi::communicator(MPI_COMM_WORLD);
const auto rank = comm.rank();
User Input Handling
User input settings:
- The executor, defaults to reference.
- The number of grid points, defaults to 100.
- The number of iterations, defaults to 1000.
if (argc == 2 && (std::string(argv[1]) == "--help")) {
if (rank == 0) {
std::cerr << "Usage: " << argv[0]
<< " [executor] [num_grid_points] [num_iterations] "
<< std::endl;
}
std::exit(-1);
}
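Start the timing of the setup phase. Here gko::experimental::mpi::get_walltime() is assumed to be a thin wrapper around MPI_Wtime.
ValueType t_init = gko::experimental::mpi::get_walltime();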
const auto executor_string = argc >= 2 ? argv[1] : "reference";
const auto grid_dim =
static_cast<gko::size_type>(argc >= 3 ? std::atoi(argv[2]) : 100);
const auto num_iters =
static_cast<gko::size_type>(argc >= 4 ? std::atoi(argv[3]) : 1000);
const std::map<std::string,
std::function<std::shared_ptr<gko::Executor>(MPI_Comm)>>
executor_factory_mpi{
{"reference",
[](MPI_Comm) { return gko::ReferenceExecutor::create(); }},
{"cuda",
 [](MPI_Comm comm) {
     int device_id = gko::experimental::mpi::map_rank_to_device_id(
         comm, gko::CudaExecutor::get_num_devices());
     return gko::CudaExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }},
{"hip",
 [](MPI_Comm comm) {
     int device_id = gko::experimental::mpi::map_rank_to_device_id(
         comm, gko::HipExecutor::get_num_devices());
     return gko::HipExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }},
{"dpcpp", [](MPI_Comm comm) {
     int device_id = 0;
     if (gko::DpcppExecutor::get_num_devices("gpu") > 0) {
         device_id = gko::experimental::mpi::map_rank_to_device_id(
             comm, gko::DpcppExecutor::get_num_devices("gpu"));
     } else {
         throw std::runtime_error("No suitable DPC++ devices");
     }
     return gko::DpcppExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }}};
auto exec = executor_factory_mpi.at(executor_string)(MPI_COMM_WORLD);
Creating the Distributed Matrix and Vectors
As a first step, we create a partition of the rows. The partition consists of ranges of consecutive rows which are assigned a part-id. These part-ids will be used for the distributed data structures to determine which rows will be stored locally. In this example each rank has (nearly) the same number of rows, so we can use the following specialized constructor. See gko::distributed::Partition for other modes of creating a partition.
const auto num_rows = grid_dim;
auto partition =
gko::share(part_type::build_from_global_size_uniform(
exec->get_master(), comm.size(),
static_cast<GlobalIndexType>(num_rows)));
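For instance, with the default 100 grid points on 4 ranks, the uniform partition has the range bounds 0, 25, 50, 75, 100, so rank r owns the 25 consecutive rows starting at row 25 * r.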
Assemble the matrix using a 3-pt stencil and fill the right-hand side with a sine value. The distributed matrix supports only constructing an empty matrix of zero size and filling in the values with gko::experimental::distributed::Matrix::read_distributed. Only the data that belongs to the rows owned by this rank will be assembled.
gko::matrix_data<ValueType, GlobalIndexType> A_data;
gko::matrix_data<ValueType, GlobalIndexType> b_data;
gko::matrix_data<ValueType, GlobalIndexType> x_data;
A_data.size = {num_rows, num_rows};
b_data.size = {num_rows, 1};
x_data.size = {num_rows, 1};
const auto range_start = partition->get_range_bounds()[rank];
const auto range_end = partition->get_range_bounds()[rank + 1];
for (int i = range_start; i < range_end; i++) {
    if (i > 0) {
        A_data.nonzeros.emplace_back(i, i - 1, -1);
    }
    A_data.nonzeros.emplace_back(i, i, 2);
    if (i < grid_dim - 1) {
        A_data.nonzeros.emplace_back(i, i + 1, -1);
    }
    b_data.nonzeros.emplace_back(i, 0, std::sin(i * 0.01));
    x_data.nonzeros.emplace_back(i, 0, gko::zero<ValueType>());
}
Take timings.
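comm.synchronize();
ValueType t_init_end = gko::experimental::mpi::get_walltime();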
Read the matrix data; currently this is only supported on CPU executors. This also sets up the communication pattern needed for the distributed matrix-vector multiplication.
auto A_host =
gko::share(dist_mtx::create(exec->get_master(), comm));
auto x_host = dist_vec::create(exec->get_master(), comm);
auto b_host = dist_vec::create(exec->get_master(), comm);
A_host->read_distributed(A_data, partition);
b_host->read_distributed(b_data, partition);
x_host->read_distributed(x_data, partition);
After reading, the matrix and vectors can be moved to the chosen executor, since the distributed matrix also supports SpMV on devices.
auto A =
gko::share(dist_mtx::create(exec, comm));
auto x = dist_vec::create(exec, comm);
auto b = dist_vec::create(exec, comm);
A->copy_from(A_host);
b->copy_from(b_host);
x->copy_from(x_host);
Take timings.
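comm.synchronize();
ValueType t_read_setup_end = gko::experimental::mpi::get_walltime();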
Solve the Distributed System
Generate the solver; this is the same as in the non-distributed case, but with a local block diagonal preconditioner.
Setup the local block diagonal solver factory.
auto local_solver =
gko::share(bj::build().on(exec));
Set up the stopping criteria and the convergence logger.
const gko::remove_complex<ValueType> reduction_factor{1e-8};
std::shared_ptr<const gko::log::Convergence<ValueType>> logger =
    gko::log::Convergence<ValueType>::create();
auto Ainv = solver::build()
.with_preconditioner(
schwarz::build().with_local_solver(local_solver))
.with_criteria(
gko::stop::Iteration::build().with_max_iters(num_iters),
gko::stop::ResidualNorm<ValueType>::build()
    .with_reduction_factor(reduction_factor))
.on(exec)
->generate(A);
Add the logger to the generated solver to log the iteration count and residual norm.
Ainv->add_logger(logger);
Take timings.
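comm.synchronize();
ValueType t_solver_generate_end = gko::experimental::mpi::get_walltime();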
Apply the distributed solver, this is the same as in the non-distributed case.
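Ainv->apply(b, x);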
Take timings.
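comm.synchronize();
ValueType t_end = gko::experimental::mpi::get_walltime();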
Get the residual.
auto res_norm = gko::clone(exec->get_master(),
    gko::as<vec>(logger->get_residual_norm()));
Printing Results
Print the achieved residual norm and timings on rank 0.
if (comm.rank() == 0) {
clang-format off
std::cout << "\nNum rows in matrix: " << num_rows
<< "\nNum ranks: " << comm.size()
<< "\nFinal Res norm: " << res_norm->at(0, 0)
<< "\nIteration count: " << logger->get_num_iterations()
<< "\nInit time: " << t_init_end - t_init
<< "\nRead time: " << t_read_setup_end - t_init
<< "\nSolver generate time: " << t_solver_generate_end - t_read_setup_end
<< "\nSolver apply time: " << t_end - t_solver_generate_end
<< "\nTotal time: " << t_end - t_init
<< std::endl;
clang-format on
}
Results
This is the expected output for mpirun -n 4 ./distributed-solver:
Num ranks: 4
Final Res norm: 5.58392e-12
Iteration count: 7
Init time: 0.0663887
Read time: 0.0729806
Solver generate time: 7.6348e-05
Solver apply time: 0.0680783
Total time: 0.141351
The timings may vary depending on the machine.
The plain program
#include <ginkgo/ginkgo.hpp>
#include <iostream>
#include <map>
#include <string>
int main(int argc, char* argv[])
{
const gko::experimental::mpi::environment env(argc, argv);
using GlobalIndexType = gko::int64;
using LocalIndexType = gko::int32;
using ValueType = double;
using dist_vec = gko::experimental::distributed::Vector<ValueType>;
using dist_mtx = gko::experimental::distributed::Matrix<
    ValueType, LocalIndexType, GlobalIndexType>;
using vec = gko::matrix::Dense<ValueType>;
using part_type = gko::experimental::distributed::Partition<
    LocalIndexType, GlobalIndexType>;
using solver = gko::solver::Cg<ValueType>;
using schwarz = gko::experimental::distributed::preconditioner::Schwarz<
    ValueType, LocalIndexType, GlobalIndexType>;
using bj = gko::preconditioner::Jacobi<ValueType, LocalIndexType>;
const auto comm = gko::experimental::mpi::communicator(MPI_COMM_WORLD);
const auto rank = comm.rank();
if (argc == 2 && (std::string(argv[1]) == "--help")) {
if (rank == 0) {
std::cerr << "Usage: " << argv[0]
<< " [executor] [num_grid_points] [num_iterations] "
<< std::endl;
}
std::exit(-1);
}
ValueType t_init = gko::experimental::mpi::get_walltime();
const auto executor_string = argc >= 2 ? argv[1] : "reference";
const auto grid_dim =
static_cast<gko::size_type>(argc >= 3 ? std::atoi(argv[2]) : 100);
const auto num_iters =
static_cast<gko::size_type>(argc >= 4 ? std::atoi(argv[3]) : 1000);
const std::map<std::string,
std::function<std::shared_ptr<gko::Executor>(MPI_Comm)>>
executor_factory_mpi{
{"reference",
[](MPI_Comm) { return gko::ReferenceExecutor::create(); }},
{"cuda",
 [](MPI_Comm comm) {
     int device_id = gko::experimental::mpi::map_rank_to_device_id(
         comm, gko::CudaExecutor::get_num_devices());
     return gko::CudaExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }},
{"hip",
 [](MPI_Comm comm) {
     int device_id = gko::experimental::mpi::map_rank_to_device_id(
         comm, gko::HipExecutor::get_num_devices());
     return gko::HipExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }},
{"dpcpp", [](MPI_Comm comm) {
     int device_id = 0;
     if (gko::DpcppExecutor::get_num_devices("gpu") > 0) {
         device_id = gko::experimental::mpi::map_rank_to_device_id(
             comm, gko::DpcppExecutor::get_num_devices("gpu"));
     } else {
         throw std::runtime_error("No suitable DPC++ devices");
     }
     return gko::DpcppExecutor::create(
         device_id, gko::ReferenceExecutor::create());
 }}};
auto exec = executor_factory_mpi.at(executor_string)(MPI_COMM_WORLD);
const auto num_rows = grid_dim;
auto partition =
gko::share(part_type::build_from_global_size_uniform(
exec->get_master(), comm.size(),
static_cast<GlobalIndexType>(num_rows)));
gko::matrix_data<ValueType, GlobalIndexType> A_data;
gko::matrix_data<ValueType, GlobalIndexType> b_data;
gko::matrix_data<ValueType, GlobalIndexType> x_data;
A_data.size = {num_rows, num_rows};
b_data.size = {num_rows, 1};
x_data.size = {num_rows, 1};
const auto range_start = partition->get_range_bounds()[rank];
const auto range_end = partition->get_range_bounds()[rank + 1];
for (int i = range_start; i < range_end; i++) {
    if (i > 0) {
        A_data.nonzeros.emplace_back(i, i - 1, -1);
    }
    A_data.nonzeros.emplace_back(i, i, 2);
    if (i < grid_dim - 1) {
        A_data.nonzeros.emplace_back(i, i + 1, -1);
    }
    b_data.nonzeros.emplace_back(i, 0, std::sin(i * 0.01));
    x_data.nonzeros.emplace_back(i, 0, gko::zero<ValueType>());
}
comm.synchronize();
ValueType t_init_end = gko::experimental::mpi::get_walltime();
auto A_host =
gko::share(dist_mtx::create(exec->get_master(), comm));
auto x_host = dist_vec::create(exec->get_master(), comm);
auto b_host = dist_vec::create(exec->get_master(), comm);
A_host->read_distributed(A_data, partition);
b_host->read_distributed(b_data, partition);
x_host->read_distributed(x_data, partition);
auto A =
gko::share(dist_mtx::create(exec, comm));
auto x = dist_vec::create(exec, comm);
auto b = dist_vec::create(exec, comm);
A->copy_from(A_host);
b->copy_from(b_host);
x->copy_from(x_host);
comm.synchronize();
ValueType t_read_setup_end = gko::experimental::mpi::get_walltime();
auto local_solver =
gko::share(bj::build().on(exec));
const gko::remove_complex<ValueType> reduction_factor{1e-8};
std::shared_ptr<const gko::log::Convergence<ValueType>> logger =
    gko::log::Convergence<ValueType>::create();
auto Ainv = solver::build()
.with_preconditioner(
schwarz::build().with_local_solver(local_solver))
.with_criteria(
gko::stop::Iteration::build().with_max_iters(num_iters),
gko::stop::ResidualNorm<ValueType>::build()
    .with_reduction_factor(reduction_factor))
.on(exec)
->generate(A);
Ainv->add_logger(logger);
comm.synchronize();
ValueType t_solver_generate_end = gko::experimental::mpi::get_walltime();
Ainv->apply(b, x);
comm.synchronize();
ValueType t_end = gko::experimental::mpi::get_walltime();
auto res_norm = gko::clone(exec->get_master(),
    gko::as<vec>(logger->get_residual_norm()));
if (comm.rank() == 0) {
std::cout << "\nNum rows in matrix: " << num_rows
<< "\nNum ranks: " << comm.size()
<< "\nFinal Res norm: " << res_norm->at(0, 0)
<< "\nIteration count: " << logger->get_num_iterations()
<< "\nInit time: " << t_init_end - t_init
<< "\nRead time: " << t_read_setup_end - t_init
<< "\nSolver generate time: " << t_solver_generate_end - t_read_setup_end
<< "\nSolver apply time: " << t_end - t_solver_generate_end
<< "\nTotal time: " << t_end - t_init
<< std::endl;
}
}