Integration Strategies#

Ginkgo can be integrated into an existing application in several ways, depending on how the host library represents its operators and what adapter infrastructure it provides. This page describes the three main strategies illustrated by the examples on this site.

All strategies share a common principle: zero-copy data sharing. Ginkgo's array views (gko::array&lt;ValueType&gt;::view() or the gko::make_array_view() helper) wrap existing memory without copying it, so the host library retains ownership and Ginkgo reads and writes through the same pointers.


Direct sparse-matrix wrapping#

The simplest strategy applies when the host library assembles an explicit sparse matrix. The application extracts the CSR arrays (row pointers, column indices, values) and wraps them as a Ginkgo matrix, then builds a solver around it.

┌───────────────┐      extract CSR        ┌────────────────────┐
│ Host library  │  ──────────────────►    │ gko::matrix::Csr   │
│ SparseMatrix  │  row_ptrs, col_idxs,    │ (non-owning view)  │
└───────────────┘  values                 └─────────┬──────────┘
                                                    │
┌───────────────┐      wrap pointer       ┌─────────▼──────────┐
│ Host vector   │  ──────────────────►    │ gko::matrix::Dense │
│ (rhs, sol)    │                         │ (non-owning view)  │
└───────────────┘                         └─────────┬──────────┘
                                                    │
                                          ┌─────────▼──────────┐
                                          │ gko::solver::Cg    │
                                          │ + preconditioner   │
                                          └────────────────────┘

The application controls the entire workflow: it assembles the system, creates the Ginkgo solver, calls apply(), and reads the result from its own vector.

How easy the CSR extraction is depends on the host library. MFEM exposes raw pointers directly (GetI(), GetJ(), GetData()), while deal.II requires iterating over matrix rows to build the arrays.
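Assuming the host matrix is already in CSR form and a recent Ginkgo (1.4 or later, for gko::make_array_view) is available, the wrapping step might look like the following untested sketch. All variable and function names here are illustrative, not taken from any particular host library:

```cpp
#include <ginkgo/ginkgo.hpp>

// Illustrative sketch: n, nnz, and the raw arrays come from the host library.
void solve_with_ginkgo(int n, int nnz, int* row_ptrs, int* col_idxs,
                       double* values, double* rhs, double* sol)
{
    auto exec = gko::ReferenceExecutor::create();

    // Non-owning CSR view over the host arrays (no copy)
    auto A = gko::share(gko::matrix::Csr<double, int>::create(
        exec, gko::dim<2>(n, n),
        gko::make_array_view(exec, nnz, values),
        gko::make_array_view(exec, nnz, col_idxs),
        gko::make_array_view(exec, n + 1, row_ptrs)));

    // Non-owning dense views over the host vectors (stride 1)
    auto b = gko::matrix::Dense<double>::create(
        exec, gko::dim<2>(n, 1), gko::make_array_view(exec, n, rhs), 1);
    auto x = gko::matrix::Dense<double>::create(
        exec, gko::dim<2>(n, 1), gko::make_array_view(exec, n, sol), 1);

    // CG with iteration-count and residual-norm stopping criteria
    auto solver =
        gko::solver::Cg<double>::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(1000u).on(exec),
                gko::stop::ResidualNorm<double>::build()
                    .with_reduction_factor(1e-8)
                    .on(exec))
            .on(exec)
            ->generate(A);

    solver->apply(b, x);  // result is written back through sol
}
```

A real application would choose an executor matching where the host data lives (e.g. gko::OmpExecutor or gko::CudaExecutor instead of the reference executor), since the view must point into memory that the executor can access.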

Examples using this strategy:


Custom LinOp for matrix-free operators#

Some libraries never assemble a sparse matrix. Spectral-element and high-order codes often apply the operator matrix-free, computing the matrix-vector product on the fly. Ginkgo supports this through its LinOp interface: you subclass gko::LinOp and implement apply_impl() to call the host library’s operator.

┌───────────────────────────────────────────────────┐
│              Custom gko::LinOp subclass           │
│                                                   │
│  apply_impl(b, x):                                │
│    1. Extract raw pointer from Ginkgo Dense b     │
│    2. Wrap as host-library memory (e.g. OCCA)     │
│    3. Call host operator:  x = A * b              │
│    4. Result lands in Ginkgo Dense x              │
└──────────────────────┬────────────────────────────┘
                       │
               ┌───────▼─────────┐
               │ gko::solver::Cg │
               │ (sees a LinOp)  │
               └─────────────────┘

The Ginkgo solver only sees a LinOp with an apply() method – it does not know or care whether the operator is backed by a sparse matrix or a matrix-free kernel. This makes the custom LinOp approach fully general: any operator that can compute a matrix-vector product can be plugged in.

The main effort is in the apply_impl() implementation, which must translate between Ginkgo’s Dense vectors and whatever memory abstraction the host library uses (raw pointers, OCCA memory objects, etc.).
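A minimal, untested sketch of such a subclass, following Ginkgo's documented EnableLinOp/EnableCreateMethod pattern for custom operators. Here host_apply() is a hypothetical stand-in for the host library's matrix-free kernel:

```cpp
#include <ginkgo/ginkgo.hpp>

// Hypothetical host-library entry point: computes out = A * in on raw arrays.
void host_apply(const double* in, double* out);

class HostOperator : public gko::EnableLinOp<HostOperator>,
                     public gko::EnableCreateMethod<HostOperator> {
    friend class gko::EnablePolymorphicObject<HostOperator, gko::LinOp>;
    friend class gko::EnableCreateMethod<HostOperator>;

protected:
    // The operator only needs an executor and its (square) dimensions
    explicit HostOperator(std::shared_ptr<const gko::Executor> exec,
                          gko::size_type n = 0)
        : gko::EnableLinOp<HostOperator>(exec, gko::dim<2>{n, n})
    {}

    // x = A * b: unwrap the Dense vectors and hand raw pointers to the host
    void apply_impl(const gko::LinOp* b, gko::LinOp* x) const override
    {
        auto dense_b = gko::as<gko::matrix::Dense<double>>(b);
        auto dense_x = gko::as<gko::matrix::Dense<double>>(x);
        host_apply(dense_b->get_const_values(), dense_x->get_values());
    }

    // x = alpha * A * b + beta * x, composed from the simple apply above
    void apply_impl(const gko::LinOp* alpha, const gko::LinOp* b,
                    const gko::LinOp* beta, gko::LinOp* x) const override
    {
        auto dense_x = gko::as<gko::matrix::Dense<double>>(x);
        auto tmp = dense_x->clone();
        this->apply_impl(b, tmp.get());
        dense_x->scale(beta);
        dense_x->add_scaled(alpha, tmp.get());
    }
};
```

If the host kernel runs on a device (step 2 in the diagram), the pointer extracted from the Dense vector must be wrapped in the host library's device memory abstraction (an OCCA memory object, a device pointer, etc.) before the kernel is invoked.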

Examples using this strategy:


Native library wrappers#

Some libraries ship their own Ginkgo integration layer. In this case the host library provides wrapper classes that present Ginkgo solvers through the library’s native solver interface. The application configures the solver using the host library’s API and never touches Ginkgo objects directly.

┌───────────────┐                         ┌────────────────────────┐
│ Application   │   standard library API  │ Library-provided       │
│               │ ───────────────────────►│ Ginkgo wrapper         │
│ (callbacks    │   SetOperator(),        │                        │
│  for F, J)    │   Solve()               │ SUNMatrix → gko::Csr   │
└───────────────┘                         │ SUNLinSol → gko::Gmres │
                                          └────────────────────────┘

This is the lowest-effort integration path: the application only needs to fill in the standard callbacks (e.g. residual and Jacobian evaluation for SUNDIALS) and select Ginkgo as the linear solver backend. The wrapper handles all data conversion and solver lifecycle management internally.

The trade-off is less control over Ginkgo’s configuration. The wrapper exposes only the options that the host library’s interface supports, which may be a subset of what Ginkgo offers.
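With SUNDIALS, for example, the wrapper classes live in the sundials::ginkgo namespace (in SUNDIALS 6.x builds configured with Ginkgo support). The sketch below is untested and from memory; the exact class and member names should be checked against the SUNDIALS documentation:

```cpp
#include <sunlinsol/sunlinsol_ginkgo.hpp>
#include <sunmatrix/sunmatrix_ginkgo.hpp>

using GkoMatrixType = gko::matrix::Csr<sunrealtype, sunindextype>;
using GkoSolverType = gko::solver::Gmres<sunrealtype>;

// gko_matrix, gko_solver_factory, and the SUNContext sunctx are assumed to
// have been created already by the application.
sundials::ginkgo::Matrix<GkoMatrixType> A{gko_matrix, sunctx};
sundials::ginkgo::LinearSolver<GkoSolverType, GkoMatrixType> LS{
    gko_solver_factory, sunctx};

// From here the standard SUNDIALS workflow applies, e.g. with CVODE:
//   CVodeSetLinearSolver(cvode_mem, LS.Convert(), A.Convert());
// after which CVODE drives the Ginkgo solver through the generic
// SUNLinearSolver interface.
```

The application still supplies only the usual SUNDIALS callbacks (right-hand side, Jacobian); the wrapper objects translate between SUNDIALS vectors and Ginkgo's Dense vectors internally.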

Examples using this strategy:


Choosing a strategy#

|                     | Direct wrapping                        | Custom LinOp                     | Native wrapper                     |
|---------------------|----------------------------------------|----------------------------------|------------------------------------|
| When to use         | Host library assembles a sparse matrix | Operator is matrix-free          | Host library ships Ginkgo support  |
| Effort              | Low–moderate                           | Moderate–high                    | Minimal                            |
| Control over Ginkgo | Full                                   | Full                             | Limited to wrapper API             |
| Data copy           | Zero-copy via views                    | Zero-copy if memory models align | Handled by wrapper                 |
| Solver lifecycle    | Application manages                    | Application manages              | Library manages                    |

In practice, start by checking whether the host library already provides a Ginkgo wrapper (native wrapper strategy). If not, check whether it exposes a sparse matrix – if so, use direct wrapping. For matrix-free operators, implement a custom LinOp.