Ginkgo version 1.3.0 (generated from pipelines/224724463, based on develop)
A numerical linear algebra library targeting many-core architectures
gko::Operation Class Reference

Operations can be used to define functionalities whose implementations differ among devices.

#include <ginkgo/core/base/executor.hpp>

Public Member Functions

virtual void run (std::shared_ptr< const OmpExecutor >) const
 
virtual void run (std::shared_ptr< const HipExecutor >) const
 
virtual void run (std::shared_ptr< const DpcppExecutor >) const
 
virtual void run (std::shared_ptr< const CudaExecutor >) const
 
virtual void run (std::shared_ptr< const ReferenceExecutor > executor) const
 
virtual const char * get_name () const noexcept
 Returns the operation's name.
 

Detailed Description

Operations can be used to define functionalities whose implementations differ among devices.

This is done by extending the Operation class and implementing the overloads of the Operation::run() method for all Executor types. When invoking the Executor::run() method with the Operation as input, the library will select the Operation::run() overload corresponding to the dynamic type of the Executor instance.
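The double-dispatch mechanism described here can be sketched in a few lines of standalone C++. The following is a simplified illustration only, not Ginkgo's actual implementation; the class and function names merely mirror the library's, and the executor hierarchy is reduced to two types:

```cpp
#include <cassert>
#include <string>

// Forward declarations so Operation can declare one overload per executor.
struct OmpExecutor;
struct CudaExecutor;

// Mirrors gko::Operation: one virtual run() overload per executor type.
struct Operation {
    virtual ~Operation() = default;
    virtual void run(const OmpExecutor&) const {}
    virtual void run(const CudaExecutor&) const {}
};

// Mirrors gko::Executor: each subclass forwards to the overload
// matching its own static type, completing the double dispatch.
struct Executor {
    virtual ~Executor() = default;
    virtual void run(const Operation& op) const = 0;
};

struct OmpExecutor : Executor {
    void run(const Operation& op) const override { op.run(*this); }
};

struct CudaExecutor : Executor {
    void run(const Operation& op) const override { op.run(*this); }
};

// A concrete operation that records which executor it was dispatched to.
struct NamePrinter : Operation {
    mutable std::string name;
    void run(const OmpExecutor&) const override { name = "OMP"; }
    void run(const CudaExecutor&) const override { name = "CUDA"; }
};

std::string device_name(const Executor& exec) {
    NamePrinter printer;
    exec.run(printer);  // the dynamic type of exec selects the overload
    return printer.name;
}
```

Calling `exec.run(op)` first dispatches on the executor's dynamic type, whose override then dispatches on the operation's dynamic type, so no RTTI query is ever needed.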

Consider an overload of operator<< for Executors, which prints some basic device information (e.g. device type and id) of the Executor to a C++ stream:

std::ostream& operator<<(std::ostream &os, const gko::Executor &exec);

One possible implementation would be to use RTTI to find the dynamic type of the Executor. However, using the Operation feature of Ginkgo, there is a more elegant approach that uses polymorphism. The first step is to define an Operation that will print the desired information for each Executor type.

class DeviceInfoPrinter : public gko::Operation {
public:
    explicit DeviceInfoPrinter(std::ostream &os) : os_(os) {}

    void run(std::shared_ptr<const gko::OmpExecutor>) const override
    { os_ << "OMP"; }

    void run(std::shared_ptr<const gko::CudaExecutor> exec) const override
    { os_ << "CUDA(" << exec->get_device_id() << ")"; }

    void run(std::shared_ptr<const gko::HipExecutor> exec) const override
    { os_ << "HIP(" << exec->get_device_id() << ")"; }

    void run(std::shared_ptr<const gko::DpcppExecutor> exec) const override
    { os_ << "DPC++(" << exec->get_device_id() << ")"; }

    // This is optional; if not overridden, defaults to the OmpExecutor overload
    void run(std::shared_ptr<const gko::ReferenceExecutor>) const override
    { os_ << "Reference CPU"; }

private:
    std::ostream &os_;
};

Using DeviceInfoPrinter, the implementation of operator<< is as simple as calling the run() method of the executor.

std::ostream& operator<<(std::ostream &os, const gko::Executor &exec)
{
DeviceInfoPrinter printer(os);
exec.run(printer);
return os;
}

Now it is possible to write the following code:

// assuming omp = gko::OmpExecutor::create()
std::cout << *omp << std::endl
          << *gko::CudaExecutor::create(0, omp) << std::endl
          << *gko::HipExecutor::create(0, omp) << std::endl
          << *gko::DpcppExecutor::create(0, omp) << std::endl
          << *gko::ReferenceExecutor::create() << std::endl;

which produces the expected output:

OMP
CUDA(0)
HIP(0)
DPC++(0)
Reference CPU

One might feel that this code is too complicated for such a simple task. Luckily, there is an overload of the Executor::run() method which is designed to facilitate writing simple operations like this one. The method takes four closures as input: one which is run for OMP executors, one for CUDA executors, one for HIP executors, and the last one for DPC++ executors. Using this method, there is no need to implement an Operation subclass:

std::ostream& operator<<(std::ostream &os, const gko::Executor &exec)
{
    exec.run(
        [&]() { os << "OMP"; },  // OMP closure
        [&]() {                  // CUDA closure
            os << "CUDA("
               << static_cast<const gko::CudaExecutor&>(exec).get_device_id()
               << ")";
        },
        [&]() {                  // HIP closure
            os << "HIP("
               << static_cast<const gko::HipExecutor&>(exec).get_device_id()
               << ")";
        },
        [&]() {                  // DPC++ closure
            os << "DPC++("
               << static_cast<const gko::DpcppExecutor&>(exec).get_device_id()
               << ")";
        });
    return os;
}

Using this approach, however, it is impossible to distinguish between an OmpExecutor and a ReferenceExecutor, as both of them invoke the OMP closure.
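The fall-through from ReferenceExecutor to the OMP overload can be demonstrated with a small standalone sketch (again an illustration, not Ginkgo code: the names only mirror the library's, and the hierarchy is reduced to the two types involved). When an operation provides no ReferenceExecutor override, the Reference executor ends up in the OMP overload, which is exactly why the closure-based interface cannot tell the two apart; only a full Operation subclass with an explicit ReferenceExecutor override (like DeviceInfoPrinter above) can:

```cpp
#include <cassert>
#include <string>

struct OmpExecutor;
struct ReferenceExecutor;

struct Operation {
    virtual ~Operation() = default;
    virtual void run(const OmpExecutor&) const {}
    // Default behavior: delegate ReferenceExecutor to the OMP overload
    // (defined below, once ReferenceExecutor is complete).
    virtual void run(const ReferenceExecutor& exec) const;
};

struct OmpExecutor {
    virtual ~OmpExecutor() = default;
    virtual void run(const Operation& op) const { op.run(*this); }
};

// Reference is modeled as a specialization of OMP, as in Ginkgo.
struct ReferenceExecutor : OmpExecutor {
    void run(const Operation& op) const override { op.run(*this); }
};

void Operation::run(const ReferenceExecutor& exec) const {
    // Fall through to the OMP overload by default.
    run(static_cast<const OmpExecutor&>(exec));
}

// An operation that, like the closure interface, only handles OMP.
struct Recorder : Operation {
    mutable std::string hit;
    void run(const OmpExecutor&) const override { hit = "OMP"; }
    // No ReferenceExecutor override: Reference falls through to OMP.
};
```

Running a `Recorder` on a `ReferenceExecutor` therefore records `"OMP"`, not a Reference-specific result.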

Member Function Documentation

◆ get_name()

virtual const char* gko::Operation::get_name() const noexcept

Returns the operation's name.

Returns
the operation's name

The documentation for this class was generated from the following file:
ginkgo/core/base/executor.hpp