Python API¤

The qontrol Python API consists of the function optimize, which performs optimal control, and various models that can be optimized, including SESolveModel and MESolveModel. There are additionally various utility functions and classes that help define an optimization routine (cost functions, file input/output, options, etc.).

Optimization¤

optimize(parameters, costs, model, *, optimizer=optax.adam(0.0001, b1=0.99, b2=0.99), solver=Tsit5(), gradient=None, dq_options=dq.Options(), opt_options=None, filepath=None) ¤

Perform gradient descent to optimize Hamiltonian parameters.

This function takes as input parameters that parametrize a model which, when called, performs time-dynamics simulations using dynamiqs. How the parameters should be updated is encoded in the list of cost functions costs, which contains e.g. infidelity contributions, pulse-amplitude penalties, etc.

Parameters:

- parameters _(dict or array-like)_: Parameters to optimize over, used to define the Hamiltonian and the control times. Required.
- costs _(list of Cost instances)_: List of cost functions used to perform the optimization. Required.
- model _(Model)_: Model that is called at each iteration step. Required.
- optimizer _(optax.GradientTransformation)_: optax optimizer to use for gradient descent. Defaults to the Adam optimizer, optax.adam(0.0001, b1=0.99, b2=0.99).
- solver _(Solver)_: Solver passed to dynamiqs. Defaults to Tsit5().
- gradient _(Gradient)_: Gradient passed to dynamiqs. Defaults to None.
- opt_options _(dict)_: Options for the GRAPE optimization. Defaults to None. Supported keys:
    - verbose (bool): If True, the optimizer prints the infidelity at each epoch to track the progress of the optimization.
    - all_costs (bool): Whether all costs must be below their targets for early termination of the optimizer. If False, the optimization terminates as soon as a single cost function (typically the infidelity) is below its target.
    - epochs (int): Number of optimization epochs.
    - plot (bool): Whether to plot the results during the optimization (epochs where results are plotted necessarily incur a time penalty).
    - plot_period (int): If plot is True, plot every plot_period epochs.
    - which_states_plot (tuple): Which states to plot if expectation values are passed to the model. Defaults to (0,), i.e. plot expectation values only for the zero-indexed batch state.
    - xtol (float): Terminate the optimization if the parameters are no longer being updated above this level. Defaults to 1e-8.
    - ftol (float): Terminate the optimization if the cost function is not changing above this level. Defaults to 1e-8.
    - gtol (float): Terminate the optimization if the norm of the gradient falls below this level. Defaults to 1e-8.
- filepath _(str)_: Filepath of where to save the optimization results. Defaults to None.

Returns:

(Array | dict): Optimized parameters from the final epoch of the optimization.
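
For orientation, a minimal end-to-end sketch of an optimization might look like the following (assuming qontrol is imported as ql and dynamiqs as dq, as in the examples further down; the drift Hamiltonian, pulse length, initial guess, and option values are all illustrative):

import dynamiqs as dq
import jax.numpy as jnp
import qontrol as ql

# time grid: piecewise-constant controls have one amplitude per interval
tsave = jnp.linspace(0.0, 40.0, 41)
psi0 = [dq.basis(2, 0)]
target_states = [dq.basis(2, 1)]
H1s = [dq.sigmax(), dq.sigmay()]


def H_pwc(values):
    # static drift (illustrative) plus one piecewise-constant control per drive operator
    H = 0.5 * dq.sigmaz()
    for idx, H1 in enumerate(H1s):
        H += dq.pwc(tsave, values[idx], H1)
    return H


model = ql.sesolve_model(H_pwc, psi0, tsave)
costs = [ql.coherent_infidelity(target_states, target_cost=0.005)]
# initial guess (illustrative): one amplitude per control and per pwc bin
parameters = 0.01 * jnp.ones((len(H1s), len(tsave) - 1))
opt_params = ql.optimize(
    parameters, costs, model, opt_options={'epochs': 500, 'verbose': True}
)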

Cost functions¤

coherent_infidelity(target_states, cost_multiplier=1.0, target_cost=0.005) ¤

Instantiate the cost function for calculating infidelity coherently.

The coherent fidelity is defined as $$ F_{\rm coherent} = \Big|\sum_{k}\langle\psi_{t}^{k}|\psi_{i}^{k}(T)\rangle\Big|^{2}, $$ where the states at the end of the pulse are \(|\psi_{i}^{k}(T)\rangle\) and the target states are \(|\psi_{t}^{k}\rangle\); the cost is the corresponding infidelity.

Parameters:

- target_states _(array-like of shape (s, n, 1) or (s, n, n))_: Target states for the initial states passed to the optimization. If performing master-equation optimization, the target states should be passed as a list of density matrices. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.005.

Returns:

(CoherentInfidelity): Callable object that returns the coherent infidelity and whether the infidelity is below the target value.
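
For example, a sketch of instantiating this cost for a qubit with two batched initial states (the target kets below are purely illustrative):

import dynamiqs as dq
import qontrol as ql

# one target state per initial state passed to the model, shape (s, n, 1)
target_states = [dq.basis(2, 1), dq.basis(2, 0)]
infid_cost = ql.coherent_infidelity(target_states, target_cost=0.001)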

control_area(cost_multiplier=1.0, target_cost=0.0) ¤

Control area cost function.

Penalize the area under the pulse curves according to $$ C = \sum_{j}\int_{0}^{T}\Omega_{j}(t)dt, $$ where the \(\Omega_{j}\) are the individual controls and \(T\) is the pulse time.

Parameters:

- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.0.

Returns:

(ControlCostArea): Callable object that returns the control-area cost and whether the cost is below the target value.

control_norm(threshold, cost_multiplier=1.0, target_cost=0.0) ¤

Control norm cost function.

Penalize the norm of the controls above some threshold according to $$ C = \sum_{j}\int_{0}^{T}{\rm ReLU}(|\Omega_{j}(t)|-\Omega_{\rm max})\,dt, $$ where the \(\Omega_{j}\) are the individual controls, \(T\) is the pulse time and \(\Omega_{\rm max}\) is the threshold.

Parameters:

- threshold _(float)_: Threshold above which control amplitudes are penalized in absolute magnitude. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.0.

Returns:

(ControlCostNorm): Callable object that returns the control-norm cost and whether the cost is below the target value.
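
As a sketch, control_norm (and likewise control_area above) is typically added alongside an infidelity cost with a small weight; the threshold and multiplier values below are illustrative:

import qontrol as ql

# penalize only the part of each control amplitude exceeding the threshold
norm_cost = ql.control_norm(threshold=0.01, cost_multiplier=1e-2)
# penalize the total pulse area with a small weight
area_cost = ql.control_area(cost_multiplier=1e-3)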

custom_control_cost(cost_fun, cost_multiplier=1.0, target_cost=0.0) ¤

Cost function based on an arbitrary transformation of the controls.

Penalize the controls according to an arbitrary function \(F\): $$ C = \sum_{j}\int_{0}^{T}F(\Omega_{j}(t))\,dt, $$ where the \(\Omega_{j}\) are the individual controls and \(T\) is the pulse time.

Parameters:

- cost_fun _(callable)_: Cost function, which must have the signature (control_amp: Array) -> Array. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.0.

Returns:

(CustomControlCost): Callable object that returns the cost for the custom function and whether the cost is below the target value.

Examples:

def penalize_negative(control_amp: jax.Array) -> jax.Array:
    return jax.nn.relu(-control_amp)


negative_amp_cost = ql.custom_control_cost(penalize_negative)

In this example, we penalize negative drive amplitudes.

custom_cost(cost_fun, cost_multiplier=1.0, target_cost=0.0) ¤

A custom cost function.

In many (most!) cases, the user may want to add a cost function to their optimization that is not included in the hardcoded set of available cost functions.

Parameters:

- cost_fun _(callable)_: Cost function, which must have the signature (result: dq.Result, H: dq.TimeArray, parameters: dict | Array) -> Array. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.0.

Returns:

(CustomCost): Callable object that returns the cost for the custom function.

Examples:

Let's imagine we want to penalize the value of some expectation value at the final time in tsave.

def penalize_expect(
    result: Result, H: TimeArray, parameters: dict | Array
) -> Array:
    # 0 is the index of the operator, -1 is the time index
    return jnp.sum(jnp.abs(result.expects[0, -1]))


expect_cost = ql.custom_cost(penalize_expect)

Then expect_cost can be added to the other cost functions used in the optimization. The only thing happening under the hood is that the penalize_expect function is passed to jax.tree_util.Partial so that it can be passed through jitted functions without issue.

forbidden_states(forbidden_states_list, cost_multiplier=1.0, target_cost=0.0) ¤

Instantiate the cost function for penalizing forbidden-state occupation.

This cost function is defined as $$ C = \sum_{k}\sum_{f}\int_{0}^{T}dt\,|\langle\psi_{f}^{k}|\psi_{i}^{k}(t)\rangle|^{2}, $$ where \(|\psi_{i}^{k}(t)\rangle\) is the \(k^{\rm th}\) initial state propagated to time \(t\) and \(|\psi_{f}^{k}\rangle\) is a forbidden state for the \(k^{\rm th}\) initial state.

Parameters:

- forbidden_states_list _(list of lists of array-like of shape (n, 1) or (n, n))_: For each initial state indexed by s (outer list), a list of forbidden states (inner list) should be provided. The inner lists need not all have the same length, for instance if some initial states have more forbidden states than others. The array is eventually reshaped to shape (s, f, n, 1) or (s, f, n, n) (for sesolve or mesolve, respectively), where s is the number of initial states and f is the length of the longest forbidden-state list (with zero padding for the meaningless entries). Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.0.

Returns:

(ForbiddenStates): Callable object that returns the forbidden-state cost and whether the cost is below the target value.
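
As a sketch of the expected nesting (the states and dimensions below are purely illustrative):

import dynamiqs as dq
import qontrol as ql

# two initial states of a four-level system; forbid leakage into the third
# level for both, and additionally into the fourth level for the first state
forbidden_states_list = [
    [dq.basis(4, 2), dq.basis(4, 3)],  # forbidden states for initial state 0
    [dq.basis(4, 2)],  # forbidden states for initial state 1
]
forbidden_cost = ql.forbidden_states(forbidden_states_list)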

incoherent_infidelity(target_states, cost_multiplier=1.0, target_cost=0.005) ¤

Instantiate the cost function for calculating infidelity incoherently.

The incoherent fidelity is defined as $$ F_{\rm incoherent} = \sum_{k}|\langle\psi_{t}^{k}|\psi_{i}^{k}(T)\rangle|^{2}, $$ where the states at the end of the pulse are \(|\psi_{i}^{k}(T)\rangle\) and the target states are \(|\psi_{t}^{k}\rangle\); the cost is the corresponding infidelity.

Parameters:

- target_states _(array-like of shape (s, n, 1) or (s, n, n))_: Target states for the initial states passed to the optimization. If performing master-equation optimization, the target states should be passed as a list of density matrices. Required.
- cost_multiplier _(float)_: Weight for this cost function relative to other cost functions. Defaults to 1.0.
- target_cost _(float)_: Target value for this cost function. If options.all_costs is True, the optimization terminates early only if all cost functions fall below their target values; if options.all_costs is False, it terminates as soon as any one cost function falls below its target value. Defaults to 0.005.

Returns:

(IncoherentInfidelity): Callable object that returns the incoherent infidelity and whether the infidelity is below the target value.
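
Usage mirrors coherent_infidelity; a minimal sketch (the target kets below are purely illustrative):

import dynamiqs as dq
import qontrol as ql

# summing the individual fidelities makes the cost insensitive to the
# relative phases between the propagated states
target_states = [dq.basis(2, 1), dq.basis(2, 0)]
infid_cost = ql.incoherent_infidelity(target_states, target_cost=0.001)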

Models¤

MESolveModel ¤

Bases: Model

Model for Lindblad-master-equation optimization.

When called with the parameters we optimize over, it returns the result of mesolve as well as the updated Hamiltonian.

SESolveModel ¤

Bases: Model

Model for Schrödinger-equation optimization.

When called with the parameters we optimize over, it returns the result of sesolve as well as the updated Hamiltonian.

mesolve_model(H_function, jump_ops, rho0, tsave_or_function, *, exp_ops=None) ¤

Instantiate mesolve model.

Here we instantiate the model that is called at each step of the optimization iteration, returning a tuple of the result of calling mesolve as well as the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- jump_ops _(list of array-like or time-array, each of shape (...Lk, n, n))_: List of jump operators. Required.
- rho0 _(array-like of shape (..., n, n))_: Initial density matrices. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to mesolve, or a function specifying how to update the times that are passed to mesolve. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

(MESolveModel): Model that, when called with the parameters we optimize over, returns the result of mesolve as well as the updated Hamiltonian.

Examples:

Instantiating an MESolveModel is quite similar to instantiating an SESolveModel, the two differences being that we need to supply jump operators and that the initial and target states should be specified as density matrices. Continuing the last example from sesolve_model:

jump_ops = [0.03 * dq.sigmax()]
me_initial_states = dq.todm(psi0)
me_Kerr_model = ql.mesolve_model(
    update_H_topt, jump_ops, me_initial_states, update_tsave_topt
)

See this tutorial for an example.

sesolve_model(H_function, psi0, tsave_or_function, *, exp_ops=None) ¤

Instantiate sesolve model.

Here we instantiate the model that is called at each step of the optimization iteration, returning a tuple of the result of calling sesolve as well as the Hamiltonian evaluated at the parameter values.

Parameters:

- H_function _(callable)_: Function specifying how to update the Hamiltonian. Required.
- psi0 _(array-like of shape (..., n, 1))_: Initial states. Required.
- tsave_or_function _(array-like of shape (ntsave,) or callable)_: Either an array of times passed to sesolve, or a function specifying how to update the times that are passed to sesolve. Required.
- exp_ops _(list of array-like)_: Operators to calculate expectation values of, in case some of the cost functions depend on the value of certain expectation values. Defaults to None.

Returns:

(SESolveModel): Model that, when called with the parameters we optimize over, returns the result of sesolve as well as the updated Hamiltonian.

Examples:

In this simple example, the parameters are the amplitudes of piecewise-constant controls:

tsave = jnp.linspace(0.0, 11.0, 10)
psi0 = [dq.basis(2, 0)]
H1s = [dq.sigmax(), dq.sigmay()]


def H_pwc(values: Array) -> dq.TimeArray:
    H = dq.sigmaz()
    for idx, _H1 in enumerate(H1s):
        H += dq.pwc(tsave, values[idx], _H1)
    return H


sesolve_model = ql.sesolve_model(H_pwc, psi0, tsave)

See this tutorial for an example.

In more complex cases, we can imagine that the optimized parameters are the control points fed into a spline, and that the control times themselves are additionally optimized:

init_drive_params_topt = {
    'dp': -0.001 * jnp.ones((len(H1s), len(tsave))),
    't': tsave[-1],
}


def H_func_topt(t: float, drive_params_dict: dict) -> dq.TimeArray:
    drive_params = drive_params_dict['dp']
    new_tsave = jnp.linspace(0.0, drive_params_dict['t'], len(tsave))
    drive_spline = _drive_spline(drive_params, envelope, new_tsave)
    drive_amps = drive_spline.evaluate(t)
    drive_Hs = jnp.einsum('d,dij->ij', drive_amps, H1s)
    return H0 + drive_Hs


def update_H_topt(drive_params_dict: dict) -> dq.TimeArray:
    new_H = jtu.Partial(H_func_topt, drive_params_dict=drive_params_dict)
    return dq.timecallable(new_H)


def update_tsave_topt(drive_params_dict: dict) -> jax.Array:
    return jnp.linspace(0.0, drive_params_dict['t'], len(tsave))


se_t_opt_Kerr_model = ql.sesolve_model(update_H_topt, psi0, update_tsave_topt)

See this tutorial for an example.
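
The resulting model plugs directly into optimize together with the parameter dictionary defined above; a minimal sketch (the cost choice is illustrative, and target_states is assumed to be defined as in the other examples):

costs = [ql.coherent_infidelity(target_states)]
opt_params = ql.optimize(init_drive_params_topt, costs, se_t_opt_Kerr_model)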

File utilities¤