
Python API¤

The qontrol Python API consists of the function optimize, which performs optimal control, and the models it optimizes, including SESolveModel and MESolveModel. It additionally provides various utility functions and classes that help define an optimization routine (cost functions, file input/output, options, etc.).

Optimization¤

optimize(parameters: ArrayLike | dict, costs: Cost, model: Model, *, optimizer: GradientTransformation = optax.adam(0.0001, b1=0.99, b2=0.99), plotter: Plotter = DefaultPlotter(), solver: Solver = Tsit5(), gradient: Gradient | None = None, dq_options: dq.Options = dq.Options(), opt_options: dict | None = None, filepath: str | None = None) -> Array | dict ¤

Perform gradient descent to optimize Hamiltonian parameters.

This function takes as input parameters, which parametrize a model that, when called, performs time-dynamics simulations using dynamiqs. How parameters should be updated is encoded in the list of cost functions costs, which contains e.g. infidelity contributions, pulse-amplitude penalties, etc.

Parameters:

- parameters _(dict or array-like)_: Parameters to optimize over, used to define the Hamiltonian and control times. Required.
- costs _(list of Cost instances)_: List of cost functions used to perform the optimization. Required.
- model _(Model)_: Model that is called at each iteration step. Required.
- optimizer _(optax.GradientTransformation)_: optax optimizer to use for gradient descent. Defaults to the Adam optimizer, optax.adam(0.0001, b1=0.99, b2=0.99).
- plotter _(Plotter)_: Plotter for monitoring the optimization. Defaults to DefaultPlotter().
- solver _(Solver)_: Solver passed to dynamiqs. Defaults to Tsit5().
- gradient _(Gradient)_: Gradient passed to dynamiqs. Defaults to None.
- dq_options _(dq.Options)_: Options passed to dynamiqs. Defaults to dq.Options().
- opt_options _(dict)_: Options for the GRAPE optimization. Defaults to None. Recognized keys:
    - verbose (bool): If True, the optimizer prints the infidelity at each epoch to track the progress of the optimization.
    - ignore_termination (bool): Whether to ignore the various termination conditions.
    - all_costs (bool): Whether all costs must be below their targets for early termination. If False, the optimization terminates as soon as a single cost function (typically infidelity) is below its target.
    - epochs (int): Number of optimization epochs.
    - plot (bool): Whether to plot results during the optimization (epochs where results are plotted necessarily suffer a time penalty).
    - plot_period (int): If plot is True, plot every plot_period epochs.
    - xtol (float): Terminate the optimization if the parameters change by less than this amount. Defaults to 1e-8.
    - ftol (float): Terminate the optimization if the cost function changes by less than this amount. Defaults to 1e-8.
    - gtol (float): Terminate the optimization if the norm of the gradient falls below this level. Defaults to 1e-8.
- filepath _(str)_: Filepath of where to save optimization results. Defaults to None.

Returns:

- _(Array | dict)_: Optimized parameters from the final epoch.
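
A minimal sketch of the full loop, reusing H_pwc, psi0, tsave and H1s from the sesolve_model example below; the initial guess and target state here are illustrative:

import dynamiqs as dq
import jax.numpy as jnp
import qontrol as ql

model = ql.sesolve_model(H_pwc, psi0, tsave)
costs = [ql.coherent_infidelity(target_states=[dq.basis(2, 1)])]
# one piecewise-constant amplitude per control and per time bin
init_params = 0.01 * jnp.ones((len(H1s), len(tsave) - 1))
opt_params = ql.optimize(
    init_params,
    costs,
    model,
    opt_options={'epochs': 500, 'plot': False},
)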

Cost functions¤

incoherent_infidelity(target_states: list[QArrayLike], cost_multiplier: float = 1.0, target_cost: float = 0.005) -> IncoherentInfidelity ¤

Instantiate the cost function for calculating infidelity incoherently.

This fidelity is defined as $$ F_{\rm incoherent} = \sum_{k}|\langle\psi_{t}^{k}|\psi_{i}^{k}(T)\rangle|^2, $$ where the states at the end of the pulse are \(|\psi_{i}^{k}(T)\rangle\) and the target states are \(|\psi_{t}^{k}\rangle\).

Parameters:

- target_states _(qarray-like of shape (s, n, 1) or (s, n, n))_: Target states for the initial states passed to optimize. If performing master-equation optimization, the target states should be passed as a list of density matrices. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.005.

Returns:

- _(IncoherentInfidelity)_: Callable object that returns the incoherent infidelity and whether the infidelity is below the target value.
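
For instance, to target a qubit state transfer (target state and tolerance chosen for illustration):

infid_cost = ql.incoherent_infidelity(
    target_states=[dq.basis(2, 1)], target_cost=1e-3
)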

coherent_infidelity(target_states: list[QArrayLike], cost_multiplier: float = 1.0, target_cost: float = 0.005) -> CoherentInfidelity ¤

Instantiate the cost function for calculating infidelity coherently.

This fidelity is defined as $$ F_{\rm coherent} = |\sum_{k}\langle\psi_{t}^{k}|\psi_{i}^{k}(T)\rangle|^2, $$ where the states at the end of the pulse are \(|\psi_{i}^{k}(T)\rangle\) and the target states are \(|\psi_{t}^{k}\rangle\).

Parameters:

- target_states _(qarray-like of shape (s, n, 1) or (s, n, n))_: Target states for the initial states passed to optimize. If performing master-equation optimization, the target states should be passed as a list of density matrices. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.005.

Returns:

- _(CoherentInfidelity)_: Callable object that returns the coherent infidelity and whether the infidelity is below the target value.
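
Because the overlaps are summed before the absolute value is taken, this cost is sensitive to the relative phases of the propagated states, which matters e.g. for gate optimization. A sketch with illustrative target states:

gate_cost = ql.coherent_infidelity(
    target_states=[dq.basis(2, 1), dq.basis(2, 0)], target_cost=5e-4
)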

propagator_infidelity(target_unitary: QArrayLike, cost_multiplier: float = 1.0, target_cost: float = 0.005) -> PropagatorInfidelity ¤

Instantiate the cost function for calculating infidelity of a propagator.

This fidelity is defined as $$ F_{\rm propagator} = \left|\mathrm{Tr}(U_{t}^{\dagger}U)/d\right|^2, $$ where the propagator at the end of the pulse is \(U\), the target unitary is \(U_{t}\) and the dimension of the system is \(d\).

Parameters:

- target_unitary _(qarray-like of shape (n, n))_: Target unitary for the optimization. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.005.

Returns:

- _(PropagatorInfidelity)_: Callable object that returns the propagator infidelity and whether the infidelity is below the target value.
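
For example, to target an X gate on a single qubit (tolerance chosen for illustration):

x_gate_cost = ql.propagator_infidelity(
    target_unitary=dq.sigmax(), target_cost=1e-3
)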

forbidden_states(forbidden_states_list: list[QArrayLike], cost_multiplier: float = 1.0, target_cost: float = 0.0) -> ForbiddenStates ¤

Instantiate the cost function for penalizing forbidden-state occupation.

This cost function is defined as $$ C = \sum_{k}\sum_{f}\int_{0}^{T}dt\,|\langle\psi_{f}^{k}|\psi_{i}^{k}(t)\rangle|^2, $$ where \(|\psi_{i}^{k}(t)\rangle\) is the \(k^{\rm th}\) initial state propagated to time \(t\) and \(|\psi_{f}^{k}\rangle\) is a forbidden state for the \(k^{\rm th}\) initial state.

Parameters:

- forbidden_states_list _(list of list of qarray-like of shape (n, 1) or (n, n))_: For each initial state indexed by s (outer list), a list of forbidden states (inner list). The inner lists need not all have the same length, for instance if some initial states have more forbidden states than others. The array is eventually reshaped to shape (s, 1, f, n, 1) or (s, 1, f, n, n) (for sesolve or mesolve, respectively), where s is the number of initial states, f is the length of the longest forbidden-state list (with zero-padding for meaningless entries) and 1 is a batch dimension added for eventually batching over tsave. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.0.

Returns:

- _(ForbiddenStates)_: Callable object that returns the forbidden-state cost and whether the cost is below the target value.
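
As a sketch, consider two initial states of a three-level system where each state is forbidden from occupying the highest level (states chosen for illustration):

forbidden_cost = ql.forbidden_states(
    forbidden_states_list=[[dq.basis(3, 2)], [dq.basis(3, 2)]]
)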

control_area(cost_multiplier: float = 1.0, target_cost: float = 0.0) -> ControlCostArea ¤

Control area cost function.

Penalize the area under the pulse curves according to $$ C = \sum_{j}\int_{0}^{T}\Omega_{j}(t)dt, $$ where the \(\Omega_{j}\) are the individual controls and \(T\) is the pulse time.

Parameters:

- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.0.

Returns:

- _(ControlCostArea)_: Callable object that returns the control-area cost and whether the cost is below the target value.
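
For instance, a weak penalty on the total pulse area (weight chosen for illustration):

area_cost = ql.control_area(cost_multiplier=1e-3)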

control_norm(threshold: float, cost_multiplier: float = 1.0, target_cost: float = 0.0) -> ControlCostNorm ¤

Control norm cost function.

Penalize the norm of the controls above some threshold according to $$ C = \sum_{j}\int_{0}^{T}\mathrm{ReLU}(|\Omega_{j}(t)|-\Omega_{\rm max})\,dt, $$ where the \(\Omega_{j}\) are the individual controls, \(T\) is the pulse time and \(\Omega_{\rm max}\) is the threshold.

Parameters:

- threshold _(float)_: Threshold above which control amplitudes are penalized in absolute magnitude. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.0.

Returns:

- _(ControlCostNorm)_: Callable object that returns the control-norm cost and whether the cost is below the target value.
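
For example, to penalize control amplitudes whose magnitude exceeds 1.0 in the units of the Hamiltonian (threshold and weight chosen for illustration):

norm_cost = ql.control_norm(threshold=1.0, cost_multiplier=0.1)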

custom_control_cost(cost_fun: callable, cost_multiplier: float = 1.0, target_cost: float = 0.0) -> CustomControlCost ¤

Cost function based on an arbitrary transformation of the controls.

Penalize the controls according to an arbitrary function \(F\): $$ C = \sum_{j}\int_{0}^{T}F(\Omega_{j}(t))\,dt. $$

Parameters:

- cost_fun _(callable)_: Cost function, which must have signature (control_amp: Array) -> Array. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.0.

Returns:

- _(CustomControlCost)_: Callable object that returns the cost for the custom function and whether the cost is below the target value.

Examples:

In this example, we penalize negative drive amplitudes:

def penalize_negative(control_amp: jax.Array) -> jax.Array:
    return jax.nn.relu(-control_amp)


negative_amp_cost = ql.custom_control_cost(penalize_negative)

custom_cost(cost_fun: callable, cost_multiplier: float = 1.0, target_cost: float = 0.0) -> CustomCost ¤

A custom cost function.

In many (most!) cases, the user may want to add a cost function to their optimization that is not included in the hardcoded set of available cost functions.

Parameters:

- cost_fun _(callable)_: Cost function, which must have signature (result: dq.SolveResult, H: dq.TimeQArray, parameters: dict | Array) -> Array. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if False, it terminates as soon as any single cost function falls below its target value. Defaults to 0.0.

Returns:

- _(CustomCost)_: Callable object that returns the cost for the custom function.

Examples:

Let's imagine we want to penalize the value of some expectation value at the final time in tsave.

import dynamiqs as dq
import jax.numpy as jnp
from jax import Array


def penalize_expect(
    result: dq.SolveResult, H: dq.TimeQArray, parameters: dict | Array
) -> Array:
    # 0 is the index of the operator, -1 is the time index
    return jnp.sum(jnp.abs(result.expects[0, -1]))


expect_cost = ql.custom_cost(penalize_expect)
Then expect_cost can be added to the list of cost functions passed to optimize. The only thing happening under the hood is that penalize_expect is passed to jax.tree_util.Partial so that it can be passed through jitted functions without issue.

Models¤

SESolveModel ¤

Bases: Model

Model for Schrödinger-equation optimization.

When called with the parameters we optimize over, it returns the result of sesolve as well as the updated Hamiltonian.

MESolveModel ¤

Bases: Model

Model for Lindblad-master-equation optimization.

When called with the parameters we optimize over, it returns the result of mesolve as well as the updated Hamiltonian.

SEPropagatorModel ¤

Bases: Model

Model for Schrödinger-equation propagator optimization.

When called with the parameters we optimize over, it returns the result of sepropagator as well as the updated Hamiltonian.

MEPropagatorModel ¤

Bases: Model

Model for Lindblad-master-equation propagator optimization.

When called with the parameters we optimize over, it returns the result of mepropagator as well as the updated Hamiltonian.

sesolve_model(H_function: callable, psi0: QArrayLike, tsave_or_function: ArrayLike | callable, *, exp_ops: list[ArrayLike] | None = None) -> SESolveModel ¤

Instantiate sesolve model.

Here we instantiate the model that is called at each optimization step, returning a tuple of the result of calling sesolve and the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- psi0 _(qarray-like of shape (..., n, 1))_: Initial states. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to sesolve or a function specifying how to update the times that are passed to sesolve. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

- _(SESolveModel)_: Model that, when called with the parameters we optimize over, returns the result of sesolve as well as the updated Hamiltonian.

Examples:

In this simple example the parameters are the amplitudes of piecewise-constant controls:

tsave = jnp.linspace(0.0, 11.0, 10)
psi0 = [dq.basis(2, 0)]
H1s = [dq.sigmax(), dq.sigmay()]


def H_pwc(values: Array) -> dq.TimeQArray:
    H = dq.sigmaz()
    for idx, _H1 in enumerate(H1s):
        H += dq.pwc(tsave, values[idx], _H1)
    return H


sesolve_model = ql.sesolve_model(H_pwc, psi0, tsave)
See for example this tutorial.

In more complex cases, we can imagine that the optimized parameters are the control points fed into a spline, and additionally the control times themselves are optimized.

# H0, envelope and the helper _drive_spline are defined in the linked tutorial
init_drive_params_topt = {
    'dp': -0.001 * jnp.ones((len(H1s), len(tsave))),
    't': tsave[-1],
}


def H_func_topt(t: float, drive_params_dict: dict) -> dq.TimeQArray:
    drive_params = drive_params_dict['dp']
    new_tsave = jnp.linspace(0.0, drive_params_dict['t'], len(tsave))
    drive_spline = _drive_spline(drive_params, envelope, new_tsave)
    drive_amps = drive_spline.evaluate(t)
    drive_Hs = jnp.einsum('d,dij->ij', drive_amps, H1s)
    return H0 + drive_Hs


def update_H_topt(drive_params_dict: dict) -> dq.TimeQArray:
    new_H = jtu.Partial(H_func_topt, drive_params_dict=drive_params_dict)
    return dq.timecallable(new_H)


def update_tsave_topt(drive_params_dict: dict) -> jax.Array:
    return jnp.linspace(0.0, drive_params_dict['t'], len(tsave))


se_t_opt_Kerr_model = ql.sesolve_model(update_H_topt, psi0, update_tsave_topt)
See for example this tutorial.

mesolve_model(H_function: callable, jump_ops: list[QArrayLike | TimeQArray], rho0: QArrayLike, tsave_or_function: ArrayLike | callable, *, exp_ops: list[ArrayLike] | None = None) -> MESolveModel ¤

Instantiate mesolve model.

Here we instantiate the model that is called at each optimization step, returning a tuple of the result of calling mesolve and the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- jump_ops _(list of qarray-like or time-qarray, each of shape (...Lk, n, n))_: List of jump operators. Required.
- rho0 _(qarray-like of shape (..., n, n))_: Initial density matrices. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to mesolve or a function specifying how to update the times that are passed to mesolve. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

- _(MESolveModel)_: Model that, when called with the parameters we optimize over, returns the result of mesolve as well as the updated Hamiltonian.

Examples:

Instantiating an MESolveModel is quite similar to instantiating an SESolveModel, with the two differences being that we need to supply jump operators and that the initial and target states should be specified as density matrices. Continuing the last example from sesolve_model:

jump_ops = [0.03 * dq.sigmax()]
me_initial_states = dq.todm(psi0)
me_Kerr_model = ql.mesolve_model(
    update_H_topt, jump_ops, me_initial_states, update_tsave_topt
)
See for example this tutorial.

sepropagator_model(H_function: callable, tsave_or_function: ArrayLike | callable, *, exp_ops: list[ArrayLike] | None = None) -> SEPropagatorModel ¤

Instantiate sepropagator model.

Here we instantiate the model that is called at each optimization step, returning a tuple of the result of calling sepropagator and the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to the solver or a function specifying how to update the times that are passed to the solver. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

- _(SEPropagatorModel)_: Model that, when called with the parameters we optimize over, returns the result of sepropagator as well as the updated Hamiltonian.

Examples:

In this simple example the parameters are the amplitudes of piecewise-constant controls:

tsave = jnp.linspace(0.0, 11.0, 10)
H1s = [dq.sigmax(), dq.sigmay()]


def H_pwc(values: Array) -> dq.TimeQArray:
    H = dq.sigmaz()
    for idx, _H1 in enumerate(H1s):
        H += dq.pwc(tsave, values[idx], _H1)
    return H


sepropagator_model = ql.sepropagator_model(H_pwc, tsave)

In more complex cases, we can imagine that the optimized parameters are the control points fed into a spline, and additionally the control times themselves are optimized.

# H0, envelope and the helper _drive_spline are defined in the linked tutorial
init_drive_params_topt = {
    'dp': -0.001 * jnp.ones((len(H1s), len(tsave))),
    't': tsave[-1],
}


def H_func_topt(t: float, drive_params_dict: dict) -> dq.TimeQArray:
    drive_params = drive_params_dict['dp']
    new_tsave = jnp.linspace(0.0, drive_params_dict['t'], len(tsave))
    drive_spline = _drive_spline(drive_params, envelope, new_tsave)
    drive_amps = drive_spline.evaluate(t)
    drive_Hs = jnp.einsum('d,dij->ij', drive_amps, H1s)
    return H0 + drive_Hs


def update_H_topt(drive_params_dict: dict) -> dq.TimeQArray:
    new_H = jtu.Partial(H_func_topt, drive_params_dict=drive_params_dict)
    return dq.timecallable(new_H)


def update_tsave_topt(drive_params_dict: dict) -> jax.Array:
    return jnp.linspace(0.0, drive_params_dict['t'], len(tsave))


sep_t_opt_Kerr_model = ql.sepropagator_model(update_H_topt, update_tsave_topt)

mepropagator_model(H_function: callable, jump_ops: list[QArrayLike | TimeQArray], tsave_or_function: ArrayLike | callable, *, exp_ops: list[ArrayLike] | None = None) -> MEPropagatorModel ¤

Instantiate mepropagator model.

Here we instantiate the model that is called at each optimization step, returning a tuple of the result of calling mepropagator and the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- jump_ops _(list of qarray-like or time-qarray, each of shape (...Lk, n, n))_: List of jump operators. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to the solver or a function specifying how to update the times that are passed to the solver. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

- _(MEPropagatorModel)_: Model that, when called with the parameters we optimize over, returns the result of mepropagator as well as the updated Hamiltonian.

Examples:

Instantiating an MEPropagatorModel is quite similar to instantiating an SEPropagatorModel, with the difference being that we need to supply jump operators. Continuing the last example from sepropagator_model:

jump_ops = [0.03 * dq.sigmax()]
mep_Kerr_model = ql.mepropagator_model(
    update_H_topt, jump_ops, update_tsave_topt
)

Custom Plotting¤

plot_costs(ax: Axes, costs: Cost, epoch: int, cost_values_over_epochs: list) -> Axes ¤

Plot the evolution of the cost function values.

get_controls(H: TimeQArray, tsave: np.ndarray) -> list[np.ndarray] ¤

Extract the Hamiltonian prefactors at the supplied times.

plot_controls(ax: Axes, _expects: Array | None, model: Model, parameters: Array | dict) -> Axes ¤

Plot the Hamiltonian prefactors, usually corresponding to controls.

plot_fft(ax: Axes, _expects: Array | None, model: Model, parameters: Array | dict) -> Axes ¤

Plot the fft of the Hamiltonian controls.

plot_expects(ax: Axes, expects: Array | None, model: Model, parameters: Array | dict) -> Axes ¤

Plot the expectation values obtained from the time evolution.
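
These prebuilt functions have the signature expected by custom_plotter, described below, and can be passed to it directly (assuming they are exported at the top level of qontrol):

plotter = ql.custom_plotter([ql.plot_controls, ql.plot_fft, ql.plot_expects])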

custom_plotter(plotting_functions: list[Callable]) -> Plotter ¤

Instantiate a custom Plotter for tracking results during optimization.

This function returns a Plotter that can be passed to optimize to track the progress of an optimization run. Note that the cost function values are always plotted in the first panel.

Parameters:

- plotting_functions _(list of Callable)_: List of functions that each produce a plot useful for tracking intermediate results, such as the values of the optimized controls, the FFT of the controls, expectation values, etc. Each function must have the signature example_plot_function(ax, expects, model, parameters), where ax is the matplotlib.pyplot.Axes instance on which the results are plotted, expects is of type dq.SolveResult.expects (which may be None), model is of type ql.Model and parameters are the parameters being optimized. Some of these arguments may be unused for a particular plot (for instance, when plotting expectation values we don't need access to parameters). Required.

Returns:

- _(Plotter)_: Plotter whose update_plots method is repeatedly called during an optimization run.

Examples:

We plot the controls as well as the expectation values for two different initial states:

import dynamiqs as dq
import jax
import jax.numpy as jnp
import numpy as np
import qontrol as ql
from functools import partial
from jax import Array
from matplotlib.axes import Axes

H1s = [dq.sigmax(), dq.sigmay()]
H1_labels = ['X', 'Y']


def plot_states(
    ax: Axes,
    expects: Array | None,
    model: ql.Model,
    parameters: Array | dict,
    which=0,
) -> Axes:
    ax.set_facecolor('none')
    tsave = model.tsave_function(parameters)
    # leading axes are batch axes; `which` selects the initial state and 0
    # selects the first operator in exp_ops
    batch_idxs = np.ndindex(*expects.shape[:-3])
    for batch_idx in batch_idxs:
        ax.plot(tsave, np.real(expects[tuple(batch_idx), which, 0]))
    ax.set_xlabel('time [ns]')
    ax.set_ylabel(
        f'population in $|1\\rangle$ for initial state $|{which}\\rangle$'
    )
    return ax


def plot_controls(
    ax: Axes, _expects: Array | None, model: ql.Model, parameters: Array | dict
) -> Axes:
    ax.set_facecolor('none')
    tsave = model.tsave_function(parameters)
    finer_tsave = jnp.linspace(0.0, tsave[-1], 10 * len(tsave))
    for idx, control in enumerate(parameters):
        H_c = dq.pwc(tsave, control, H1s[idx])
        ax.plot(
            finer_tsave,
            np.real(jax.vmap(H_c.prefactor)(finer_tsave)) / 2 / np.pi,
            label=H1_labels[idx],
        )
    ax.legend(loc='lower right', framealpha=0.0)
    ax.set_ylabel('pulse amplitude [GHz]')
    ax.set_xlabel('time [ns]')
    return ax


plotter = ql.custom_plotter(
    [
        plot_controls,
        partial(plot_states, which=0),
        partial(plot_states, which=1),
    ]
)
See for example this tutorial.

File utilities¤