Janus 2.0.0
High-performance C++20 dual-mode numerical framework
Symbolic Computing

Janus provides a powerful symbolic computing layer built on top of CasADi, abstracted to feel like native C++ with Eigen integration. This allows you to compute derivatives, generate code, and optimize systems using the same code you write for simulation. Symbolic mode works by building a computational graph instead of executing immediately, enabling automatic differentiation, sensitivity analysis, and matrix-free second-order products.

Quick Start

#include <janus/janus.hpp>
// Create a symbolic variable
auto x = janus::sym("x");
// Build an expression (creates a computation graph)
auto f = x * x;
// Compute the Jacobian symbolically
auto J = janus::jacobian({f}, {x});
// Compile into a callable function
janus::Function fn({x}, {f, J});
// Evaluate numerically
auto res = fn(3.0);
std::cout << "f(3) = " << res[0] << ", f'(3) = " << res[1] << std::endl;

Core API

  • janus::SymbolicScalar: Alias for casadi::MX. Represents a symbolic value or expression.
  • janus::SymbolicMatrix: Alias for Eigen::Matrix<casadi::MX, -1, -1>. Allows you to use familiar Eigen syntax (block operations, coeff access) on symbolic variables.
  • janus::sym(name): Create a scalar symbolic variable.
  • janus::sym(name, n): Create a column vector symbolic variable (n x 1).
  • janus::sym(name, r, c): Create a matrix symbolic variable (r x c).
  • janus::Function({inputs}, {outputs}): Compile symbolic expressions into a callable function.
  • janus::jacobian({outputs}, {inputs}): Compute the Jacobian of outputs with respect to inputs.

Usage Patterns

Creating Variables

Use the janus::sym helper to create symbolic variables cleanly.

// Scalar variable "x"
auto x = janus::sym("x");
// Column vector "v" (3x1)
auto v = janus::sym("v", 3);
// Matrix "A" (2x2)
auto A = janus::sym("A", 2, 2);

Building Expressions

You can use standard arithmetic operators (+, -, *, /) and Janus math functions. These build a computational graph instead of executing immediately.

auto y = janus::sin(x) + janus::pow(x, 2.0);
auto z = A * v; // Matrix multiplication

Defining Functions (janus::Function)

To evaluate expressions numerically, you must wrap them in a janus::Function. This wrapper handles type conversion between Eigen and CasADi.

// Define f(x, v) -> y
janus::Function f({x, v}, {y});
// Evaluate with numeric input (doubles/Eigen)
// Returns std::vector<Eigen::MatrixXd>
auto result = f(1.5, Eigen::Vector3d::Zero());
std::cout << result[0] << std::endl;

Features:

  • Automatic Naming: You don't need to provide a string name; one is generated automatically.
  • Eigen Integration: Inputs and outputs are converted to/from Eigen types seamlessly.

Automatic Differentiation (janus::jacobian)

Compute derivatives effortlessly. Janus handles the tedious task of concatenating variables for CasADi.

// Compute Jacobian J = dy/dx
auto J = janus::jacobian({y}, {x});
// Compute Jacobian w.r.t multiple variables: J = dy/d[x, v]
auto J_full = janus::jacobian({y}, {x, v});

Advanced Usage

Sensitivity Regime Switching

For compiled janus::Function objects, Janus can choose between forward and adjoint Jacobian construction automatically based on parameter count, output count, and optional trajectory hints.

janus::Function f("f", {x}, {y});
auto J_fun = janus::sensitivity_jacobian(
    f,
    0,    // output block
    0,    // input block
    400,  // horizon length hint
    true  // stiff trajectory hint
);
auto J = J_fun.eval(x_val);
  • janus::select_sensitivity_regime(parameter_count, output_count, horizon_length = 1, stiff = false, opts = {}): Recommend a sensitivity regime from parameter/output counts; returns a SensitivityRecommendation.
  • janus::sensitivity_jacobian(fn, output_idx = 0, input_idx = 0, horizon_length = 1, stiff = false, opts = {}): Build a Jacobian function for one output/input block using the recommended regime.

Call janus::select_sensitivity_regime directly when you want the same recommendation expressed as CasADi/SUNDIALS integrator options: the returned SensitivityRecommendation exposes rec.integrator_options() (nfwd or nadj, plus checkpoint settings for long-horizon adjoints).

Matrix-Free Second-Order Products

For large optimization problems, you often want H * v without ever forming the dense Hessian. Janus exposes matrix-free Hessian-vector products for both plain scalar expressions and Lagrangians:

auto x = janus::sym("x", 3);
auto v = janus::sym("v", 3);
auto lam = janus::sym("lam", 2);
casadi::MX x0 = x(0);
casadi::MX x1 = x(1);
casadi::MX x2 = x(2);
auto objective = x0 * x0 + x1 * x2 + janus::sin(x2);
auto constraints = casadi::MX::vertcat({x0 + x1, x1 * x2});
auto hvp = janus::hessian_vector_product(objective, x, v);
auto lag_hvp =
janus::lagrangian_hessian_vector_product(objective, constraints, x, lam, v);
  • janus::hessian_vector_product(expr, vars, direction): Hessian-vector product for a scalar expression, without forming the dense Hessian.
  • janus::lagrangian_hessian_vector_product(objective, constraints, vars, multipliers, direction): Hessian-vector product of a Lagrangian, i.e. a second-order adjoint action.

For compiled janus::Function objects, the wrappers return another janus::Function:

janus::Function model("model", {x}, {objective});
// Assumed Function-level overload; check the signature in your Janus version.
janus::Function hvp_fun = janus::hessian_vector_product(model);
auto hv = hvp_fun.eval(x_val, v_val); // original inputs..., then direction v

The Lagrangian variant appends the multiplier block first and the direction block last:

janus::Function nlp_terms("nlp_terms", {x}, {objective, constraints});
// Assumed Function-level overload; check the signature in your Janus version.
janus::Function lag_hvp_fun = janus::lagrangian_hessian_vector_product(nlp_terms);
auto hv = lag_hvp_fun.eval(x_val, lam_val, v_val);

These products use CasADi's forward-over-reverse AD internally, so the dense Hessian is never constructed as an intermediate.

See Also