Understanding the sparsity pattern of your Jacobian and Hessian matrices is crucial for high-performance optimization. Janus provides tools to inspect sparsity patterns from symbolic graphs, visualize them as ASCII/PDF/HTML spy plots, compile sparse derivative evaluators that return only structural nonzeros, and surface CasADi graph-coloring metadata. Sparsity analysis operates on symbolic expressions; for black-box functions, the NaN-propagation fallback detects sparsity numerically.
Quick Start
```cpp
#include <janus/janus.hpp>  // umbrella header for the full Janus public API

// Sketch: x is assumed to be a symbolic vector created beforehand,
// e.g. via janus::sym_vec_pair("x", 4).
auto f = janus::SymbolicScalar::vertcat({
    x(1) - x(0),
    x(3) - x(2)
});

auto sp = janus::sparsity_of_jacobian(f, x);  // pattern only, no Jacobian values
std::cout << sp.to_string() << std::endl;

auto J = janus::sparse_jacobian(f, x);        // compiled sparse evaluator
std::cout << "nnz = " << J.nnz() << "\n";
```
Core API
The core class is janus::SparsityPattern in <janus/core/Sparsity.hpp>.
Extracting Sparsity
| Function | Description |
|----------|-------------|
| janus::sparsity_of_jacobian(f, x) | Jacobian sparsity from symbolic expressions |
| janus::sparsity_of_hessian(f, x) | Hessian sparsity from symbolic expressions |
| janus::get_jacobian_sparsity(func, out_idx, in_idx) | Jacobian sparsity from a compiled janus::Function |
| janus::get_hessian_sparsity(func, out_idx, in_idx) | Hessian sparsity from a compiled janus::Function |
| janus::nan_propagation_sparsity(callable, n_in, n_out) | Black-box sparsity via NaN propagation |
| janus::nan_propagation_sparsity(func) | NaN-propagation sparsity from janus::Function |
Querying a SparsityPattern
| Method | Description |
|--------|-------------|
| sp.nnz() | Number of structural nonzeros |
| sp.density() | Fraction of entries that are structurally nonzero |
| sp.n_rows() / sp.n_cols() | Dimensions |
| sp.get_triplet() | Row/column index vectors |
| sp.get_ccs() | Compressed column storage |
Sparse Derivative Evaluators
| Function | Description |
|----------|-------------|
| janus::sparse_jacobian(f, x) | Build a sparse Jacobian evaluator from expressions |
| janus::sparse_hessian(phi, x) | Build a sparse Hessian evaluator from a scalar expression |
| janus::sparse_jacobian(func, out_idx, in_idx) | Build from a janus::Function block |
| janus::sparse_hessian(func, out_idx, in_idx) | Build from a janus::Function block |
Visualization
| Method | Description |
|--------|-------------|
| sp.to_string() | ASCII spy plot for terminal output |
| sp.visualize_spy(filename) | Render spy plot to PDF (requires Graphviz) |
| sp.export_spy_html(filename, title) | Interactive HTML spy plot with pan/zoom |
Usage Patterns
From Symbolic Expressions
```cpp
auto f = ...;  // any symbolic expression in the variables x
auto sp_J = janus::sparsity_of_jacobian(f, x);
auto sp_H = janus::sparsity_of_hessian(phi, x);  // phi must be a scalar expression
```
From a Compiled Function
janus::Function wraps casadi::Function with Eigen-native IO. janus::get_jacobian_sparsity(fn) returns the Jacobian sparsity of a compiled function, defaulting to output index 0 and input index 0.
For multi-input or multi-output functions, use explicit block selection: janus::get_jacobian_sparsity(fn, output_idx, input_idx), or janus::get_hessian_sparsity(fn, output_idx, input_idx) for a scalar output.
From CasADi Types Directly
janus::SymbolicScalar is an alias for casadi::MX, so native CasADi objects interoperate with the sparsity API directly:
```cpp
casadi::Sparsity raw = ...;  // a native CasADi sparsity object
```
Sparse Jacobian Pipeline
```cpp
// Residual with local coupling; x is assumed a 6-element symbolic vector.
auto f = janus::SymbolicScalar::vertcat({
    x(1) - x(0),
    x(3) - x(2)
});
auto J = janus::sparse_jacobian(f, x);

std::cout << "nnz = " << J.nnz() << "\n";
std::cout << "forward colors = " << J.forward_coloring().n_colors() << "\n";
std::cout << "reverse colors = " << J.reverse_coloring().n_colors() << "\n";
// The evaluator also reports its preferred compression mode
// (Forward = column compression); the accessor was elided in this excerpt.
std::cout << "preferred mode = "
          << "\n";

janus::NumericVector x_val(6);
x_val << 0.0, 0.2, 0.5, 0.9, 0.0, 0.0;
auto jac_nz = J.values(x_val);  // structural nonzeros only
```
jac_nz is a column vector of derivative values in the same CCS ordering reported by J.sparsity().get_triplet() and J.sparsity().get_ccs(). That ordering is fixed, so the sparsity structure can be reused across many evaluations.
Sparse Hessian Pipeline
```cpp
// Chain objective phi = sum_k (x(k+1) - x(k))^2 over a 5-element vector x.
janus::SymbolicScalar phi = 0;  // zero-initialized scalar (initialization sketched)
for (int k = 0; k < 4; ++k) {
    auto diff = x(k + 1) - x(k);
    phi = phi + diff * diff;  // nearest-neighbor coupling -> tridiagonal Hessian
}
auto H = janus::sparse_hessian(phi, x);

std::cout << "nnz = " << H.nnz() << "\n";
std::cout << "star colors = " << H.coloring().n_colors() << "\n";

janus::NumericVector x_val(5);
x_val << 0.0, 0.1, 0.3, 0.7, 1.0;
auto hess_nz = H.values(x_val);  // accessor assumed to mirror the Jacobian evaluator
```
For Hessians, Janus exposes CasADi's star coloring through H.coloring().
From Function Blocks
Sparse derivative evaluators can also be built from an already-compiled janus::Function, selecting a specific output block and input block via janus::sparse_jacobian(func, out_idx, in_idx) and janus::sparse_hessian(func, out_idx, in_idx).
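A minimal sketch of this form, with hypothetical block indices (assuming output block 1 of fn holds constraint residuals and output block 0 a scalar objective):

```cpp
// Indices are illustrative; match them to your compiled function's blocks.
auto J_con = janus::sparse_jacobian(fn, /*out_idx=*/1, /*in_idx=*/0);
auto H_obj = janus::sparse_hessian(fn, /*out_idx=*/0, /*in_idx=*/0);  // scalar block
```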
This is the most useful form for optimization pipelines where one compiled function already exposes multiple residual, constraint, and objective blocks.
Reconstructing a Dense Matrix
When debugging, it is often useful to reconstruct the dense matrix from sparse values:
```cpp
auto nz = J.values(x_val);                       // nonzeros in CCS order
auto [rows, cols] = J.sparsity().get_triplet();  // matching structural indices
janus::NumericMatrix dense =
    janus::NumericMatrix::Zero(J.sparsity().n_rows(), J.sparsity().n_cols());
for (Eigen::Index k = 0; k < nz.size(); ++k) {
    dense(rows[static_cast<size_t>(k)], cols[static_cast<size_t>(k)]) = nz(k);
}
```
In production you usually keep the structural ordering and pass the nonzero vector straight into a sparse solver or downstream callback.
Visualization Workflows
ASCII spy plot for quick terminal debugging:
```cpp
std::cout << sp.to_string() << std::endl;
```
Output:
```
Sparsity: 10x10, nnz=28 (density=28.000%)
+----------+
|**........|
|***.......|
|.***......|
...
+----------+
```
PDF rendering for reports:
```cpp
sp.visualize_spy("my_pattern");
```
Interactive HTML for exploring large matrices:
```cpp
sp.export_spy_html("my_pattern", "My Jacobian");
```
The HTML output includes pan/zoom, clickable cells with row/col details, axis labels, and a stats panel.
Advanced Usage
Reading Coloring Metadata
janus::GraphColoring exposes:
- n_entries() for the uncompressed derivative size
- n_colors() for the compressed directional count
- compression_ratio() for a quick summary
- colorvec() for the per-entry color assignment
This is useful when comparing derivative blocks and deciding whether sparse directional sweeps are worth it.
NaN-Propagation Sparsity Detection
Sometimes you have black-box functions where symbolic sparsity analysis is not possible (external library calls, non-traceable operations, functions with runtime branching). Janus provides nan_propagation_sparsity() for these cases.
How it works:
- Evaluate f(x) at a reference point
- For each input i: set x[i] = NaN, evaluate f(x)
- If output[j] becomes NaN, then it depends on input i, so Jacobian(j, i) is nonzero
```cpp
const int n_inputs = 3, n_outputs = 3;
auto sp = janus::nan_propagation_sparsity(
    [](const janus::NumericVector &x) {
        janus::NumericVector y(x.size());
        for (int i = 0; i < x.size(); ++i) y(i) = x(i) * x(i);  // element-wise square
        return y;
    },
    n_inputs, n_outputs);  // detects a diagonal Jacobian pattern
```
By default the function is probed at the zero vector; nan_propagation_sparsity also accepts a janus::NaNSparsityOptions struct whose reference_point field sets the evaluation point when zeros are not a valid input.
Verifying symbolic sparsity:
```cpp
auto f = x * x;
auto sp_symbolic = janus::sparsity_of_jacobian(f, x);
auto sp_nan = ...;  // NaN-propagation pattern of the same function
assert(sp_symbolic == sp_nan);
```
Example Walkthrough: sparsity_intro.cpp
The example examples/intro/sparsity_intro.cpp demonstrates four common structures found in optimization:
- Simple Jacobian – Shows how dependencies map to nonzeros. f_0 = x_0^2 depends only on x_0, so row 0 has a nonzero at column 0.
- Chain Structure (Tridiagonal) – f = sum((x[i] - x[i+1])^2) creates a tridiagonal Hessian. This band structure is typical in trajectory optimization where each state depends only on its neighbors.
- Independent Systems (Block Diagonal) – Two completely separate systems stacked together form a block-diagonal matrix. Solvers can parallelize this trivially.
- 2D Laplacian (5-Point Stencil) – Typical in PDE constraints. Each node depends on itself and its 4 neighbors. Uses janus::sym_vec_pair (which returns both a SymbolicVector and the underlying MX handle) for 2D indexing.
Example Walkthrough: sparse_derivative_pipeline.cpp
The example examples/math/sparse_derivative_pipeline.cpp shows the full workflow:
- Build a trajectory-style residual with local coupling
- Compile sparse Jacobian blocks and a sparse Hessian block
- Print nnz versus dense size
- Inspect forward, reverse, and star coloring counts
- Reuse the same sparse kernels at two different evaluation points
Build and run:
```sh
ninja -C build sparse_derivative_pipeline
./build/examples/sparse_derivative_pipeline
```
See Also