Functionalities of MarkovBounds.jl
Problem Specification
MarkovBounds.MarkovProcess — Type

An abstract type to define stochastic processes that are supported by MarkovBounds.jl.
MarkovBounds.DriftProcess — Type

DriftProcess(x, f, X = FullSpace();
             iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns a DriftProcess object with fields

- x - vector containing the system state. The state can be specified either as Variable objects from DynamicPolynomials.jl or as Num objects generated by ModelingToolkit.jl; in the latter case, Variable objects with analogous names are generated automatically and a dictionary is assembled to keep track of this transformation.
- f - vector containing the drift functions of the process.
- iv - independent variable of the process (usually time).
- controls - vector of control variables (empty if there are none).
- poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to the generated Variable placeholders.
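As a minimal sketch (not taken from the package documentation), the following constructs a one-dimensional drift process from DynamicPolynomials.jl variables; the drift function is an arbitrary illustrative choice and only the constructor signature above is relied upon.

using MarkovBounds, DynamicPolynomials

# a 1-D deterministic drift process dx/dt = 1 - x (illustrative choice)
@polyvar x
drift = DriftProcess([x], [1 - x])   # state space defaults to FullSpace()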
MarkovBounds.JumpProcess — Type

JumpProcess(x, a, h, X = FullSpace();
            iv = Variable("t"), controls = [], poly_vars = Dict())

constructor returns a JumpProcess object with fields

- x - vector containing the system state. The state can be specified either as Variable objects from DynamicPolynomials.jl or as Num objects generated by Catalyst.jl/ModelingToolkit.jl; in the latter case, Variable objects with analogous names are generated automatically and a dictionary is assembled to keep track of this transformation.
- a - vector containing the rate coefficients for each independent jump in the system.
- h - vector of vectors containing the jump functions.
- X - basic semialgebraic set enclosing the state space of the process.
- iv - independent variable of the process (usually time).
- controls - vector of control variables (empty if there are none).
- poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to the generated Variable placeholders.
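As a sketch, the birth-death process below follows the documented signature; the rates are illustrative, and the convention assumed here, namely that each entry of h gives the post-jump state, should be checked against the package source.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar x
a = [10.0 + 0*x, 1.0 * x]    # jump rates: birth (constant) and death (proportional to x)
h = [[x + 1], [x - 1]]       # assumed convention: post-jump states for each jump
X = @set x >= 0              # state space constraint x ≥ 0
jp = JumpProcess([x], a, h, X)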
MarkovBounds.ReactionProcess — Type

ReactionProcess(rn::ReactionSystem, params = Dict())

constructor returns a ReactionProcess object generated from a Catalyst.jl ReactionSystem object with the given parameters. The fields of ReactionProcess objects are

- ReactionSystem - underlying ReactionSystem object as generated by Catalyst.jl.
- JumpProcess - JumpProcess object describing a jump process equivalent to the stochastic reaction network described by ReactionSystem.
- species_to_index - Dictionary mapping each chemical species to the index of its internally used variable.
- species_to_state - Dictionary mapping each species to the equivalent state in the JumpProcess.
- state_to_species - Dictionary mapping each state in the JumpProcess to the corresponding species in the reaction network.
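A minimal sketch of constructing a ReactionProcess from a Catalyst.jl network; the network, the rate values, and the assumption that the parameter dictionary maps Catalyst parameter objects to numeric values are all illustrative.

using MarkovBounds, Catalyst

# simple birth-death network (illustrative)
rn = @reaction_network begin
    k1, 0 --> A    # production
    k2, A --> 0    # degradation
end

# assumed form of the parameter map: Catalyst parameters => numeric values
ps = Dict(Catalyst.parameters(rn) .=> [10.0, 1.0])
rp = ReactionProcess(rn, ps)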
MarkovBounds.DiffusionProcess — Type

DiffusionProcess(x, f, σ, X = FullSpace();
                 iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns a DiffusionProcess object with fields

- x - vector containing the system state. The state can be specified either as Variable objects from DynamicPolynomials.jl or as Num objects generated by Catalyst.jl/ModelingToolkit.jl; in the latter case, Variable objects with analogous names are generated automatically and a dictionary is assembled to keep track of this transformation.
- f - vector of drift coefficients for each state.
- σ - diffusion matrix of the process.
- X - basic semialgebraic set enclosing the state space of the process.
- iv - independent variable of the process (usually time).
- controls - vector of control variables (empty if there are none).
- poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to the generated Variable placeholders.
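A sketch of a one-dimensional diffusion on a bounded state space; the drift, the diffusion entry, and the interval [0, 1] are illustrative choices, and whether σ is interpreted as the noise coefficient or as the full diffusion matrix should be checked against the package conventions.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar x
f = [1 - 2x]                   # mean-reverting drift (illustrative)
σ = fill(0.5 + 0*x, 1, 1)      # constant 1×1 diffusion matrix, written as a polynomial
X = @set x >= 0 && 1 - x >= 0  # state space [0, 1]
dp = DiffusionProcess([x], f, σ, X)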
MarkovBounds.JumpDiffusionProcess — Type

JumpDiffusionProcess(x, a, h, f, σ, X = FullSpace();
                     iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns a JumpDiffusionProcess object with fields

- x - vector containing the system state. The state can be specified either as Variable objects from DynamicPolynomials.jl or as Num objects generated by Catalyst.jl/ModelingToolkit.jl; in the latter case, Variable objects with analogous names are generated automatically and a dictionary is assembled to keep track of this transformation.
- a - vector containing the rate coefficients for each independent jump in the system.
- h - vector of vectors containing the jump functions.
- f - vector of drift coefficients for each state.
- σ - diffusion matrix of the process.
- X - basic semialgebraic set enclosing the state space of the process.
- iv - independent variable of the process (usually time).
- controls - vector of control variables (empty if there are none).
- poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to the generated Variable placeholders.
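Combining the jump and diffusion sketches above, a jump-diffusion can be assembled as follows; the rates, drift, diffusion entry, and the assumed post-jump-state convention for h are illustrative and should be checked against the package source.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar x
a = [1.0 * x]                # single jump with rate proportional to the state
h = [[0.5 * x]]              # assumed convention: the jump maps x to the post-jump state x/2
f = [0.1 * x]                # drift coefficient
σ = fill(0.01 * x^2, 1, 1)   # state-dependent 1×1 diffusion matrix
X = @set x >= 0
jdp = JumpDiffusionProcess([x], a, h, f, σ, X)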
MarkovBounds.ControlProcess — Type

ControlProcess(MP::MarkovProcess, T::Real, U, obj, PCs = [], TCs = [], dis_fac = 0)

constructor returns an object of type ControlProcess with fields

- MP - MarkovProcess to be controlled.
- T - length of the time horizon (can be Inf).
- U - closed basic semialgebraic set describing the admissible control actions.
- PathChanceConstraints - array of path chance constraints.
- TerminalChanceConstraints - array of terminal chance constraints.
- discount_factor - discount factor for accumulating the stage cost.

A usage sketch is given after the LagrangeMayer description below.
MarkovBounds.LagrangeMayer — Type

LagrangeMayer

a type used to specify an objective function consisting of a Lagrange and a Mayer term, i.e., $\mathbb{E}\left[ \int_{0}^T l(x(t), u(t)) \, dt + m(x(T))\right]$. The Lagrange and Mayer terms are specified in terms of polynomial functions $l : X \times U \to \mathbb{R}$ and $m : X \to \mathbb{R}$, respectively.

Fields:

- l - AbstractPolynomialLike determining the Lagrange term.
- m - AbstractPolynomialLike determining the Mayer term.

If only a Lagrange or a Mayer term is needed to specify the objective function, the functions Lagrange(l::AbstractPolynomialLike) and Mayer(m::AbstractPolynomialLike) can be used for more convenient model specification.
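The following minimal sketch (referenced from the ControlProcess description above) assembles a controlled drift process with a purely quadratic Lagrange objective; the dynamics, control bounds, horizon length, and cost weights are illustrative assumptions, and only the constructor signatures documented above are relied upon.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar x u
# controlled drift dx/dt = -x + u (illustrative)
mp  = DriftProcess([x], [-x + u]; controls = [u])
U   = @set 1 - u >= 0 && u + 1 >= 0     # admissible controls u ∈ [-1, 1]
obj = Lagrange(x^2 + 0.1 * u^2)         # running cost only, no Mayer term
cp  = ControlProcess(mp, 10.0, U, obj)  # finite time horizon T = 10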
MarkovBounds.TerminalSetProbability — Type

TerminalSetProbability

a type used to specify an objective function quantifying the probability that a supported Markov process occupies a specified closed basic semialgebraic set at the end of the control horizon specified in a ControlProcess object.

Fields:

- X - basic semialgebraic set whose occupation probability at the end of the control horizon is to be quantified
MarkovBounds.ExitProbability — Type

ExitProbability

a type used to specify an objective function quantifying the probability that a supported Markov process exits a specified closed basic semialgebraic set over the time horizon specified in a ControlProcess object.

Fields:

- X - basic semialgebraic set whose exit probability is to be quantified
Bounds on Stationary Moments of Markov Processes
MarkovBounds.stationary_polynomial — Method

stationary_polynomial(MP::MarkovProcess, v::APL, d::Int, solver)

returns a lower bound on the expectation of a polynomial observable $v(x)$ at steady state of the Markov process MP. The bound is computed via an SOS program over a polynomial of degree at most d; the bound can be tightened by increasing d. The program is solved with solver.
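A sketch of calling this method on a bounded one-dimensional diffusion; the dynamics, the observable, the relaxation degree, and the use of SCS.Optimizer as a JuMP-compatible SDP solver are illustrative assumptions.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets
import SCS   # any JuMP-compatible SDP solver; the specific choice is an assumption

@polyvar x
dp = DiffusionProcess([x], [1 - 2x], fill(0.5 + 0*x, 1, 1), @set(x >= 0 && 1 - x >= 0))

# lower bound on E[x] at steady state using a degree-4 relaxation
lb = stationary_polynomial(dp, x, 4, SCS.Optimizer)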
MarkovBounds.stationary_mean — Method

stationary_mean(MP::MarkovProcess, v::APL, d::Int, solver)

returns lower and upper bounds on the expectation of the observable $v(x)$ at steady state of the Markov process MP. The bounds are computed via SOS programs over polynomials of degree at most d; the bounds can be tightened by increasing d. The programs are solved with solver.
MarkovBounds.stationary_mean — Function

stationary_mean(rn::ReactionSystem, S0::Dict, S, d::Int, solver,
                scales = Dict(s => 1 for s in species(rn));
                auto_scaling = false)

returns lower and upper bounds on the mean of species S of the reaction network rn with initial condition S0 (for all species!). The bounds are based on an SOS program of order d solved via solver; the bounds can be tightened by increasing d.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_mean(rn::ReactionSystem, S, d::Int, solver,
                scales = Dict(s => 1 for s in species(rn)))
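A sketch of this function on a simple birth-death network with numeric rates; the network, the initial counts, the relaxation degree, and the solver are illustrative assumptions, and the return value is assumed here to be a (lower, upper) pair.

using MarkovBounds, Catalyst
import SCS   # illustrative SDP solver choice

# birth-death network with numeric rates (illustrative)
rn = @reaction_network begin
    10.0, 0 --> A
    1.0,  A --> 0
end

A  = species(rn)[1]
S0 = Dict(A => 0)    # deterministic initial molecular counts for all species
lb, ub = stationary_mean(rn, S0, A, 4, SCS.Optimizer)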
MarkovBounds.stationary_variance — Method

stationary_variance(MP::MarkovProcess, v::APL, d::Int, solver)

returns an SOS program of degree d for the computation of an upper bound on the variance of a polynomial observable v at steady state of the Markov process MP.
MarkovBounds.stationary_variance — Function

stationary_variance(rn::ReactionSystem, S0, x, d::Int, solver,
                    scales = Dict(s => 1 for s in species(rn));
                    auto_scaling = false)

returns an upper bound on the variance of species S of the reaction network rn with initial condition S0 (for all species!). The bound is based on an SOS program of degree d solved via solver; the bound can be tightened by increasing d.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_variance(rn::ReactionSystem, S, d::Int, solver,
                    scales = Dict(s => 1 for s in species(rn)))
MarkovBounds.stationary_covariance_ellipsoid — Method

stationary_covariance_ellipsoid(MP::MarkovProcess, v::Vector{<:APL}, d::Int, solver)

returns an upper bound on the volume of the covariance ellipsoid of a vector of polynomial observables $v(x)$, i.e., $\text{det}(\mathbb{E}[v(x)v(x)^\top] - \mathbb{E}[v(x)] \mathbb{E}[v(x)]^\top)$, at steady state of the Markov process MP. The bound is computed via an SOS program of degree d and can hence be tightened by increasing d. This computation requires a solver that can handle exponential cone constraints.
MarkovBounds.stationary_covariance_ellipsoid — Function

stationary_covariance_ellipsoid(rn::ReactionSystem, S0::Dict, S::AbstractVector, d::Int, solver,
                                scales = Dict(s => 1 for s in species(rn));
                                auto_scaling = false)

returns an upper bound on the volume of the covariance ellipsoid of any subset S of the chemical species in the reaction network rn, i.e., $\text{det}(\mathbb{E}[SS^\top] - \mathbb{E}[S] \mathbb{E}[S]^\top)$, at steady state of the associated jump process. The reaction network is assumed to have the deterministic initial state S0 (all species must be included here!). The bound is computed via an SOS program of degree d and can hence be tightened by increasing d. This computation requires a solver that can handle exponential cone constraints.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_covariance_ellipsoid(rn::ReactionSystem, S, d::Int, solver,
                                scales = Dict(s => 1 for s in species(rn)))
Bounds on Transient Moments of Markov Processes
MarkovBounds.transient_polynomial — Method

transient_polynomial(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns a lower bound on $\mathbb{E}[v(x(T))]$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the corresponding moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bound.
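A sketch of a transient bound for a bounded one-dimensional diffusion; the deterministic initial condition x(0) = 0.5 is encoded by mapping each monomial to its value at 0.5, and the number of moments that must be supplied for a given degree d, as well as the solver choice, are assumptions.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets
import SCS   # illustrative SDP solver choice

@polyvar x
dp = DiffusionProcess([x], [1 - 2x], fill(0.5 + 0*x, 1, 1), @set(x >= 0 && 1 - x >= 0))

# moments of a deterministic initial state x(0) = 0.5 (monomial => moment)
μ0 = Dict(x^i => 0.5^i for i in 0:6)

trange = collect(0.0:0.25:1.0)   # discretization of [0, T] with T = 1
lb = transient_polynomial(dp, μ0, x, 4, trange, SCS.Optimizer)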
MarkovBounds.transient_mean — Method

transient_mean(MP::MarkovProcess, μ0::Dict, x::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns lower and upper bounds on $\mathbb{E}[v(x(T))]$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the corresponding moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.
MarkovBounds.transient_mean — Function

transient_mean(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Number}, solver,
               scales = Dict(s => 1 for s in species(rn));
               auto_scaling = false)

returns lower and upper bounds on the mean molecular count of species S in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).
MarkovBounds.transient_variance — Method

transient_variance(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on $\mathbb{E}[v(x(T))^2] - \mathbb{E}[v(x(T))]^2$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end].
MarkovBounds.transient_variance — Function

transient_variance(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Real}, solver,
                   scales = Dict(s => 1 for s in species(rn));
                   auto_scaling = false)

returns an upper bound on the variance of species S in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).
MarkovBounds.transient_covariance_ellipsoid — Method

transient_covariance_ellipsoid(MP::MarkovProcess, μ0::Dict, v::Vector{APL}, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on the volume of the covariance ellipsoid $\text{det}(\mathbb{E}[v(x(T))v(x(T))^\top] - \mathbb{E}[v(x(T))] \mathbb{E}[v(x(T))]^\top)$, where $v$ is a vector of polynomial observables and $x(T)$ the state of the Markov process MP at time T = trange[end].
MarkovBounds.transient_covariance_ellipsoid — Function

transient_covariance_ellipsoid(rn::ReactionSystem, S0::Dict, S::AbstractVector, d::Int, trange::AbstractVector{<:Real}, solver,
                               scales = Dict(s => 1 for s in species(rn));
                               auto_scaling = false)

returns an upper bound on the volume of the covariance ellipsoid associated with any collection S of chemical species in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales reflecting the expected magnitude of the molecular counts of the different species at steady state. If the system is closed, it is also possible to enable auto_scaling, which will find the maximum molecular count of each species under stoichiometry constraints (via an LP).
Bounds on Stochastic Optimal Control Problems
MarkovBounds.optimal_control — Method

optimal_control(CP::ControlProcess, μ0::Dict, d::Int, trange::AbstractVector{<:Real}, solver,
                P::Partition = trivial_partition(CP.MP.X);
                inner_approx = SOSCone)

returns a lower bound on the objective value of the (stochastic) optimal control problem specified by CP. μ0 encodes information about the distribution of the initial state of the process; specifically, μ0 maps a given monomial to the corresponding moment of the initial distribution. trange refers to an ordered set of time points discretizing the control horizon. trange[end] should coincide with the end of the control horizon, i.e., trange[end] = Inf in case of an infinite horizon problem. P is the partition of the state space X and defaults to the trivial partition with a single element coinciding with X itself. inner_approx is the inner approximation of the cone of non-negative polynomials used to construct the relaxations. The default is inner_approx = SOSCone; other choices are DSOSCone and SDSOSCone, referring to the cones of diagonally dominant and scaled diagonally dominant sum-of-squares polynomials, respectively. The bound is computed via an SOS program of degree d solved with an appropriate method given by solver. The bound can be tightened by populating trange or increasing d.
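A sketch of calling optimal_control on a controlled drift process like the one assembled in the ControlProcess sketch above; the dynamics, objective, initial moments, time grid, relaxation degree, and solver are illustrative assumptions.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets
import SCS   # illustrative SDP solver choice

@polyvar x u
mp = DriftProcess([x], [-x + u]; controls = [u])
U  = @set 1 - u >= 0 && u + 1 >= 0
cp = ControlProcess(mp, 10.0, U, Lagrange(x^2 + 0.1 * u^2))

# deterministic initial state x(0) = 1, encoded through its monomial moments
μ0 = Dict(x^i => 1.0 for i in 0:4)

trange = collect(0.0:1.0:10.0)   # discretization of the control horizon [0, T = 10]
bound = optimal_control(cp, μ0, 4, trange, SCS.Optimizer)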
API
MarkovBounds.reaction_process_setup — Method

reaction_process_setup(rn::ReactionSystem, x0::Dict;
                       scales = Dict(s => 1.0 for s in species(rn)),
                       auto_scaling = false, solver = nothing)

transforms a reaction network as defined by Catalyst.jl's ReactionSystem type into an equivalent ReactionProcess, accounting for reaction invariants and scales.
MarkovBounds.approximate_stationary_measure — Function

approximate_stationary_measure(MP::MarkovProcess, v::APL, order::Int, solver, P::Partition;
                               side_infos::BasicSemialgebraicSet)

returns approximate values of the stationary measure on the partition P. v is a polynomial observable whose expectation is minimized when determining the approximation; the choice of v should be understood as a means to regularize the problem.
MarkovBounds.max_entropy_measure — Function

max_entropy_measure(MP::MarkovProcess, order::Int, solver, P::Partition;
                    side_infos::BasicSemialgebraicSet)

returns approximate values of the stationary measure that maximizes the entropy on the partition P.
MarkovBounds.stationary_pop — Function

stationary_pop(MP::MarkovProcess, v::APL, d::Int, solver)

returns an SOS program of degree d for the computation of a lower bound on the expectation of a polynomial observable $v(x)$ at steady state of the Markov process MP.
MarkovBounds.stationary_probability_mass — Method

stationary_probability_mass(MP::MarkovProcess, X::BasicSemialgebraicSet, d::Int, solver)

returns lower and upper bounds on the probability mass associated with the set X. d refers to the order of the relaxation used; again, the bounds tighten monotonically with increasing order. solver refers to the optimizer used to solve the semidefinite programs whose optimal values furnish the bounds. This is the weakest formulation that can be used to compute bounds on the probability mass associated with a basic semialgebraic set. For sensible results, the set X should have non-empty interior. To improve the bounds, the user should supply a carefully defined partition of the state space; in that case it is most sensible to choose X as a subset of an element of said partition. Then, one should call

stationary_probability_mass(MP::MarkovProcess, v::Int, order::Int, solver, P::Partition)

where v refers to the vertex of the partition graph that corresponds to the set X.
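A sketch of the simple (partition-free) form on a bounded one-dimensional diffusion; the dynamics, the target set, the relaxation order, and the solver are illustrative assumptions, and the return value is assumed here to be a (lower, upper) pair.

using MarkovBounds, DynamicPolynomials, SemialgebraicSets
import SCS   # illustrative SDP solver choice

@polyvar x
dp = DiffusionProcess([x], [1 - 2x], fill(0.5 + 0*x, 1, 1), @set(x >= 0 && 1 - x >= 0))

# bounds on the stationary probability of the event x ∈ [0, 1/2]
Xev = @set x >= 0 && 0.5 - x >= 0
lb, ub = stationary_probability_mass(dp, Xev, 4, SCS.Optimizer)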
MarkovBounds.max_entropy_measure — Method

max_entropy_measure(MP::MarkovProcess, μ0::Dict, d::Int, trange::AbstractVector{<:Real}, solver, P::Partition;
                    side_infos::BasicSemialgebraicSet)

returns approximate values of the transient measure that maximizes the entropy on the partition P at time t.
MarkovBounds.transient_pop — Function

transient_pop(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns an SOS program of degree d for computing a lower bound on $\mathbb{E}[v(x(T))]$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the corresponding moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d has a tightening effect on the bound furnished by the assembled SOS program.