Functionalities of MarkovBounds.jl

Problem Specification

MarkovBounds.DriftProcess (Type)
DriftProcess(x, f, X = FullSpace();
             iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns DriftProcess object with fields

  • x - vector containing the system state. Can be specified either as Variable type objects from DynamicPolynomials.jl or as Num type objects generated by ModelingToolkit.jl. In the latter case, Variable placeholders with analogous names are generated automatically and a dictionary is assembled that keeps track of this transformation.
  • f - vector containing the drift functions of the process.
  • X - basic semialgebraic set enclosing the state space of the process
  • iv - independent variable of the process (usually time)
  • controls - vector of control variables (empty if there are none)
  • poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to generated Variable placeholders.
source
MarkovBounds.JumpProcess (Type)
JumpProcess(x, a, h, X = FullSpace();
            iv = Variable("t"), controls = [], poly_vars = Dict())

constructor returns JumpProcess object with fields

  • x - vector containing the system state. Can be specified either as Variable type objects from DynamicPolynomials.jl or as Num type objects generated by Catalyst.jl/ModelingToolkit.jl. In the latter case, Variable placeholders with analogous names are generated automatically and a dictionary is assembled that keeps track of this transformation.
  • a - vector containing the rate coefficients for each independent jump in the system
  • h - vector of vectors containing the jump functions
  • X - basic semialgebraic set enclosing the state space of the process
  • iv - independent variable of the process (usually time)
  • controls - vector of control variables (empty if there are none)
  • poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to generated Variable placeholders.
source
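
For illustration, a minimal sketch of a birth-death jump process, assuming each entry of h specifies the post-jump state (rate constants are purely illustrative):

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar(x)                  # molecular count
a = [1.0 + 0*x, 0.1*x]       # jump rates: constant birth, linear death
h = [[x + 1], [x - 1]]       # assumed post-jump states for the two jumps
X = @set(x >= 0)             # non-negative counts
bd = JumpProcess([x], a, h, X)
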
MarkovBounds.ReactionProcess (Type)
ReactionProcess(rn::ReactionSystem, params = Dict())

constructor returns ReactionProcess object generated from Catalyst.jl ReactionSystem object with given parameters. The fields of ReactionProcess objects are

  • ReactionSystem - underlying ReactionSystem object as generated by Catalyst.jl
  • JumpProcess - JumpProcess object equivalent to the stochastic reaction network described by ReactionSystem
  • species_to_index - Dictionary mapping each chemical species to its internally used index
  • species_to_state - Dictionary mapping species to equivalent state in the JumpProcess
  • state_to_species - Dictionary mapping state in JumpProcess to species in reaction network
source
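
For example, a simple production-degradation network defined with Catalyst.jl can be converted as follows (a minimal sketch; rate constants are illustrative):

using MarkovBounds, Catalyst

rn = @reaction_network begin
    1.0, 0 --> A      # production
    0.1, A --> 0      # degradation
end
rp = ReactionProcess(rn)
rp.JumpProcess        # equivalent jump process representation
rp.species_to_state   # mapping from Catalyst species to jump process states
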
MarkovBounds.DiffusionProcess (Type)
DiffusionProcess(x, f, σ, X = FullSpace();
                 iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns DiffusionProcess object with fields

  • x - vector containing the system state. Can be specified either as Variable type objects from DynamicPolynomials.jl or as Num type objects generated by Catalyst.jl/ModelingToolkit.jl. In the latter case, Variable placeholders with analogous names are generated automatically and a dictionary is assembled that keeps track of this transformation.
  • f - vector of drift coefficients for each state
  • σ - diffusion matrix of the process
  • X - basic semialgebraic set enclosing the state space of the process
  • iv - independent variable of the process (usually time)
  • controls - vector of control variables (empty if there are none)
  • poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to generated Variable placeholders.
source
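
For illustration, a minimal sketch of a one-dimensional Ornstein-Uhlenbeck-type diffusion; exact argument types (e.g., whether σ must have polynomial entries) may need adjusting:

using MarkovBounds, DynamicPolynomials, SemialgebraicSets

@polyvar(x)
f = [-x]                          # mean-reverting drift
σ = reshape([1.0 + 0*x], 1, 1)    # constant 1×1 diffusion matrix, written as a polynomial
X = @set(10 - x^2 >= 0)           # bounded state space enclosing the dynamics
ou = DiffusionProcess([x], f, σ, X)
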
MarkovBounds.JumpDiffusionProcess (Type)
JumpDiffusionProcess(x, a, h, f, σ, X = FullSpace();
                     iv = _variable("t"), controls = [], poly_vars = Dict())

constructor returns JumpDiffusionProcess object with fields

  • x - vector containing the system state. Can be specified either as Variable type objects from DynamicPolynomials.jl or as Num type objects generated by Catalyst.jl/ModelingToolkit.jl. In the latter case, Variable placeholders with analogous names are generated automatically and a dictionary is assembled that keeps track of this transformation.
  • a - vector containing the rate coefficients for each independent jump in the system
  • h - vector of vectors containing the jump functions
  • f - vector of drift coefficients for each state
  • σ - diffusion matrix of the process
  • X - basic semialgebraic set enclosing the state space of the process
  • iv - independent variable of the process (usually time)
  • controls - vector of control variables (empty if there are none)
  • poly_vars - Dict mapping symbolic variables created by Catalyst.jl or ModelingToolkit.jl to generated Variable placeholders.
source
MarkovBounds.ControlProcess (Type)
ControlProcess(MP::MarkovProcess, T::Real, U, obj, PCs = [], TCs = [], dis_fac = 0)

constructor returns object of type ControlProcess with fields

  • MP - MarkovProcess to be controlled
  • T - length of the time horizon (can be Inf)
  • U - closed basic semialgebraic set describing admissible control actions
  • PathChanceConstraints - Array of chance path constraints
  • TerminalChanceConstraints - Array of terminal chance constraints
  • discount_factor - discount factor for accumulating stage cost
source
MarkovBounds.LagrangeMayer (Type)
LagrangeMayer

a type used to specify an objective function consisting of a Lagrange and a Mayer term, i.e., $\mathbb{E}\left[ \int_{0}^T l(x(t), u(t)) \, dt + m(x(T))\right]$. The Lagrange and Mayer terms are specified in terms of polynomial functions $l : X \times U \to \mathbb{R}$ and $m : X \to \mathbb{R}$, respectively.

Fields:

  • l - AbstractPolynomialLike determining the Lagrange term
  • m - AbstractPolynomialLike determining the Mayer term

If only a Lagrange or Mayer term is needed for the specification of the objective function, one can make use of the functions Lagrange(l::AbstractPolynomialLike) and Mayer(m::AbstractPolynomialLike) for more convenient model specification.

source
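
For example, a quadratic regulation objective with running cost $l(x,u) = x^2 + u^2/10$ and terminal cost $m(x) = x^2$ can be sketched as follows (x and u are assumed to be the state and control of the underlying process):

using MarkovBounds, DynamicPolynomials

@polyvar(x, u)
obj = LagrangeMayer(x^2 + 0.1*u^2, x^2)   # Lagrange term l and Mayer term m
lag = Lagrange(x^2 + 0.1*u^2)             # Lagrange term only
may = Mayer(x^2)                          # Mayer term only
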
MarkovBounds.TerminalSetProbability (Type)
TerminalSetProbability

a type used to specify an objective function quantifying the probability for a supported Markov process to occupy a specified closed basic semialgebraic set at the end of the control horizon specified in a ControlProcess object.

Fields:

  • X - basic semialgebraic set whose occupation probability is to be quantified at the end of the control horizon
source
MarkovBounds.ExitProbability (Type)
ExitProbability

a type used to specify an objective function quantifying the probability for a supported Markov process to exit a specified closed basic semialgebraic set during the time horizon specified in a ControlProcess object.

Fields:

  • X - basic semialgebraic set whose exit probability is to be quantified
source

Bounds on Stationary Moments of Markov Processes

MarkovBounds.stationary_polynomial (Method)
stationary_polynomial(MP::MarkovProcess, v::APL, d::Int, solver)

returns a lower bound on the expectation of a polynomial observable $v(x)$ at steady state of the Markov process MP. The bound is computed based on an SOS program over a polynomial of degree at most d; the bound can be tightened by increasing d. The program is solved with solver.

source
MarkovBounds.stationary_mean (Method)
stationary_mean(MP::MarkovProcess, v::APL, d::Int, solver)

returns lower and upper bounds on the expectation of the polynomial observable $v(x)$ at steady state of the Markov process MP. Bounds are computed based on SOS programs over polynomials of degree at most d; the bounds can be tightened by increasing d. The programs are solved with solver.

source
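
A minimal usage sketch for a one-dimensional diffusion; SCS is used as a stand-in for any SDP-capable solver, and the exact structure of the returned bounds may differ:

using MarkovBounds, DynamicPolynomials, SCS

@polyvar(x)
mp = DiffusionProcess([x], [-x], reshape([1.0 + 0*x], 1, 1))
bnds = stationary_mean(mp, x, 4, SCS.Optimizer)   # lower and upper bound on E[x] at steady state
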
MarkovBounds.stationary_mean (Function)
stationary_mean(rn::ReactionSystem, S0::Dict, S, d::Int, solver,
		scales = Dict(s => 1 for s in species(rn));
		auto_scaling = false)

returns lower and upper bounds on the mean of species S of the reaction network rn with initial condition S0 (for all species!). The bounds are based on SOS programs of order d solved via solver; the bounds can be tightened by increasing d.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_mean(rn::ReactionSystem, S, d::Int, solver,
		scales = Dict(s => 1 for s in species(rn))).
source
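
A sketch of the reaction-network variant, reusing a simple production-degradation network (solver and degree are illustrative):

using MarkovBounds, Catalyst, SCS

rn = @reaction_network begin
    1.0, 0 --> A
    0.1, A --> 0
end
A = species(rn)[1]
S0 = Dict(A => 0)                                   # deterministic initial counts for all species
bnds = stationary_mean(rn, S0, A, 4, SCS.Optimizer)
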
MarkovBounds.stationary_variance (Method)
stationary_variance(MP::MarkovProcess, v::APL, d::Int, solver)

returns an SOS program of degree d for the computation of an upper bound on the variance of a polynomial observable v at steady state of the Markov process MP.

source
MarkovBounds.stationary_variance (Function)
stationary_variance(rn::ReactionSystem, S0, x, d::Int, solver,
		    scales = Dict(s => 1 for s in species(rn));
		    auto_scaling = false)

returns an upper bound on the variance of species S of the reaction network rn with initial condition S0 (for all species!). The bound is based on an SOS program of degree d solved via solver; the bound can be tightened by increasing d.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_variance(rn::ReactionSystem, S, d::Int, solver,
		    scales = Dict(s => 1 for s in species(rn)))
source
MarkovBounds.stationary_covariance_ellipsoid (Method)
stationary_covariance_ellipsoid(MP::MarkovProcess, v::Vector{<:APL}, d::Int, solver)

returns an upper bound on the volume of the covariance ellipsoid of a vector of polynomial observables $v(x)$, i.e., $\text{det}(\mathbb{E} [v(x)v(x)^\top] - \mathbb{E}[v(x)] \mathbb{E}[v(x)]^\top)$, at steady state of the Markov process MP. The bound is computed via an SOS program of degree d, hence can be tightened by increasing d. This computation requires a solver that can handle exponential cone constraints.

source
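
A sketch for a two-dimensional diffusion; the solver must support exponential cone constraints (SCS does). Writing the diffusion matrix with polynomial entries is an assumption of this sketch:

using MarkovBounds, DynamicPolynomials, SCS

@polyvar x[1:2]
f = [-x[1], -x[2]]
σ = [1.0+0*x[1] 0*x[1]; 0*x[1] 1.0+0*x[2]]   # constant 2×2 diffusion matrix
mp = DiffusionProcess(x, f, σ)
vol = stationary_covariance_ellipsoid(mp, [x[1], x[2]], 4, SCS.Optimizer)
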
MarkovBounds.stationary_covariance_ellipsoid (Function)
stationary_covariance_ellipsoid(rn::ReactionSystem, S0::Dict, S::AbstractVector, d::Int, solver,
				scales = Dict(s => 1 for s in species(rn));
				auto_scaling = false)

returns an upper bound on the volume of the covariance ellipsoid of any subset S of the chemical species in the reaction network rn, i.e., $\text{det}(\mathbb{E}[SS^\top] - \mathbb{E}[S] \mathbb{E}[S]^\top)$, at steady state of the associated jump process. The reaction network is assumed to have the deterministic initial state S0 (all species must be included here!). The bound is computed via an SOS program of degree d, hence can be tightened by increasing d. This computation requires a solver that can deal with exponential cone constraints.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

If the initial condition of the reaction network under investigation is unknown or irrelevant, simply call

stationary_covariance_ellipsoid(rn::ReactionSystem, S, d::Int, solver,
				scales = Dict(s => 1 for s in species(rn)))
source

Bounds on Transient Moments of Markov Processes

MarkovBounds.transient_polynomial (Method)
transient_polynomial(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns a lower bound on $\mathbb{E}[v(x(T))]$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the respective moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bound.

source
MarkovBounds.transient_mean (Method)
transient_mean(MP::MarkovProcess, μ0::Dict, x::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns a lower and upper bound on $\mathbb{E}[v(x(T))]$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the respective moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

source
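
A sketch of transient bounds for a simple diffusion with deterministic initial condition x(0) = 1 encoded through its moments; the set of monomials required as keys of μ0 depends on the relaxation degree and is illustrative here:

using MarkovBounds, DynamicPolynomials, SCS

@polyvar(x)
mp = DiffusionProcess([x], [-x], reshape([1.0 + 0*x], 1, 1))
μ0 = Dict(x^k => 1.0 for k in 0:4)    # E[x(0)^k] = 1 for a deterministic start at x(0) = 1
trange = 0.0:0.25:1.0                 # discretization of [0, T] with T = 1
bnds = transient_mean(mp, μ0, x, 4, trange, SCS.Optimizer)
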
MarkovBounds.transient_mean (Function)
transient_mean(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Number}, solver,
		scales = Dict(s => 1 for s in species(rn));
		auto_scaling = false)

returns a lower and upper bound on the mean of the molecular count of species S in reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source
MarkovBounds.transient_variance (Method)
transient_variance(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on $\mathbb{E}[v(x(T))^2] - \mathbb{E}[v(x(T))]^2$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end].

source
MarkovBounds.transient_variance (Function)
transient_variance(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Real}, solver,
		    scales = Dict(s => 1 for s in species(rn));
			auto_scaling = false)

returns an upper bound on the variance of species S in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source
MarkovBounds.transient_covariance_ellipsoid (Method)
transient_covariance_ellipsoid(MP::MarkovProcess, μ0::Dict, v::Vector{APL}, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on the volume of the covariance ellipsoid $\text{det}(\mathbb{E}[v(x(T))v(x(T))^\top] - \mathbb{E}[v(x(T))] \mathbb{E}[v(x(T))]^\top)$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end].

source
MarkovBounds.transient_covariance_ellipsoid (Function)
transient_covariance_ellipsoid(rn::ReactionSystem, S0::Dict, S::AbstractVector, d::Int, trange::AbstractVector{<:Real}, solver,
						scales = Dict(s => 1 for s in species(rn));
						auto_scaling = false)

returns an upper bound on the volume of the covariance ellipsoid associated with any collection of chemical species in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source

Bounds on Stochastic Optimal Control Problems

MarkovBounds.optimal_control (Method)
optimal_control(CP::ControlProcess, μ0::Dict, d::Int, trange::AbstractVector{<:Real}, solver,
				P::Partition = trivial_partition(CP.MP.X);
				inner_approx = SOSCone)

returns a lower bound on the objective value of the (stochastic) optimal control problem specified by CP. μ0 encodes information about the distribution of the initial state of the process; specifically, μ0 maps a given monomial to the corresponding moment of the initial distribution. trange refers to an ordered set of time points discretizing the control horizon. trange[end] should coincide with the end of the control horizon, i.e., trange[end] = Inf in case of an infinite horizon problem. P is the partition of the state space X and defaults to the trivial partition with a single element coinciding with X itself. inner_approx is the inner approximation of the cone of non-negative polynomials used to construct the relaxations. By default, inner_approx = SOSCone; other choices are DSOSCone and SDSOSCone, referring to the cones of diagonally dominant and scaled diagonally dominant sum-of-squares polynomials, respectively. The bound is computed via an SOS program of degree d solved with an appropriate method given by solver.

The bound can be tightened by populating trange or increasing d.

source
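
Putting the pieces together, a rough sketch of a bound computation for a controlled diffusion; all names, degrees, and parameter values are purely illustrative, and the moment keys required in μ0 may differ:

using MarkovBounds, DynamicPolynomials, SemialgebraicSets, SCS

@polyvar(x, u)
mp  = DiffusionProcess([x], [-x + u], reshape([1.0 + 0*x], 1, 1), FullSpace(); controls = [u])
U   = @set(1 - u^2 >= 0)                          # admissible controls u ∈ [-1, 1]
obj = LagrangeMayer(x^2 + 0.1*u^2, x^2)           # quadratic running and terminal cost
cp  = ControlProcess(mp, 1.0, U, obj)
μ0  = Dict(x^k => 0.5^k for k in 0:4)             # moments of a deterministic start x(0) = 0.5
bound = optimal_control(cp, μ0, 4, 0.0:0.25:1.0, SCS.Optimizer)
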

API

MarkovBounds.reaction_process_setup (Method)
reaction_process_setup(rn::ReactionSystem, x0::Dict;
                scales = Dict(s => 1.0 for s in species(rn)),
                auto_scaling = false, solver = nothing)

transforms a reaction network as defined by Catalyst.jl's ReactionSystem type
into an equivalent ReactionProcess accounting for reaction invariants and
scales.
source
MarkovBounds.approximate_stationary_measure (Function)
approximate_stationary_measure(MP::MarkovProcess, v::APL, order::Int, solver, P::Partition;
                               side_infos::BasicSemialgebraicSet)

returns approximate values for the stationary measure on the partition P. v is a polynomial observable whose expectation is minimized when determining the approximation. The choice of v should be understood as a means to regularize the problem.

source
MarkovBounds.max_entropy_measure (Function)
max_entropy_measure(MP::MarkovProcess, order::Int, solver, P::Partition;
                               side_infos::BasicSemialgebraicSet)

returns approximate values for the stationary measure which maximizes the entropy on the partition P.

source
MarkovBounds.stationary_covariance_ellipsoid (Function)
stationary_covariance_ellipsoid(MP::MarkovProcess, v::Vector{<:APL}, d::Int, solver)

returns an upper bound on the volume of the covariance ellipsoid of a vector of polynomial observables $v(x)$, i.e., $\text{det}(\mathbb{E} [v(x)v(x)^\top] - \mathbb{E}[v(x)] \mathbb{E}[v(x)]^\top)$, at steady state of the Markov process MP. The bound is computed via an SOS program of degree d, hence can be tightened by increasing d. This computation requires a solver that can handle exponential cone constraints.

source
MarkovBounds.stationary_mean (Function)
stationary_mean(MP::MarkovProcess, v::APL, d::Int, solver)

returns lower and upper bounds on the expectation of the polynomial observable $v(x)$ at steady state of the Markov process MP. Bounds are computed based on SOS programs over polynomials of degree at most d; the bounds can be tightened by increasing d. The programs are solved with solver.

source
MarkovBounds.stationary_polynomial (Function)
stationary_polynomial(MP::MarkovProcess, v::APL, d::Int, solver)

returns a lower bound on the expectation of a polynomial observable $v(x)$ at steady state of the Markov process MP. The bound is computed based on an SOS program over a polynomial of degree at most d; the bound can be tightened by increasing d. The program is solved with solver.

source
MarkovBounds.stationary_pop (Function)
stationary_pop(MP::MarkovProcess, v::APL, d::Int, solver)

returns an SOS program of degree d for the computation of a lower bound on the expectation of a polynomial observable $v(x)$ at steady state of the Markov process MP.

source
MarkovBounds.stationary_probability_mass (Method)
stationary_probability_mass(MP::MarkovProcess, X::BasicSemialgebraicSet, d::Int,
						solver)

returns lower and upper bounds on the probability mass associated with the set X. d refers to the order of the relaxation used; the bounds tighten monotonically with increasing order. solver refers to the optimizer used to solve the semidefinite programs whose optimal values furnish the bounds. This is the weakest formulation that can be used to compute bounds on the probability mass associated with a basic semialgebraic set. For sensible results, the set X should have non-empty interior. In order to improve the bounds, the user should supply a carefully defined partition of the state space. In that case it is most sensible to choose X as a subset of an element of said partition. Then, one should call

stationary_probability_mass(MP::MarkovProcess, v::Int, order::Int, solver,
                             P::Partition)

where v refers to the vertex of the partition graph that corresponds to the set X.

source
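
A minimal sketch bounding the stationary probability of a set with non-empty interior (solver and degree are illustrative):

using MarkovBounds, DynamicPolynomials, SemialgebraicSets, SCS

@polyvar(x)
mp = DiffusionProcess([x], [-x], reshape([1.0 + 0*x], 1, 1))
X  = @set(1 - x^2 >= 0)              # event set {x : -1 <= x <= 1}
bnds = stationary_probability_mass(mp, X, 4, SCS.Optimizer)
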
MarkovBounds.stationary_variance (Function)
stationary_variance(MP::MarkovProcess, v::APL, d::Int, solver)

returns an SOS program of degree d for the computation of an upper bound on the variance of a polynomial observable v at steady state of the Markov process MP.

source
MarkovBounds.max_entropy_measure (Method)
max_entropy_measure(MP::MarkovProcess, μ0::Dict, d::Int, trange::AbstractVector{<:Real}, solver, P::Partition;
                    side_infos::BasicSemialgebraicSet)

returns approximate values for the transient measure which maximizes the entropy on the partition P at time T = trange[end].

source
MarkovBounds.transient_covariance_ellipsoid (Function)
transient_covariance_ellipsoid(MP::MarkovProcess, μ0::Dict, v::Vector{APL}, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on the volume of the covariance ellipsoid $\text{det}(\mathbb{E}[v(x(T))v(x(T))^\top] - \mathbb{E}[v(x(T))] \mathbb{E}[v(x(T))]^\top)$, where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end].

source
MarkovBounds.transient_covariance_ellipsoid (Function)
transient_covariance_ellipsoid(rn::ReactionSystem, S0::Dict, S::AbstractVector, d::Int, trange::AbstractVector{<:Real}, solver,
						scales = Dict(s => 1 for s in species(rn));
						auto_scaling = false)

returns an upper bound on the volume of the covariance ellipsoid associated with any collection of chemical species in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source
MarkovBounds.transient_mean (Function)
transient_mean(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Number}, solver,
		scales = Dict(s => 1 for s in species(rn));
		auto_scaling = false)

returns a lower and upper bound on the mean of the molecular count of species S in reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source
MarkovBounds.transient_mean (Function)
transient_mean(MP::MarkovProcess, μ0::Dict, x::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns a lower and upper bound on $\mathbb{E}[v(x(T))]$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the respective moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

source
MarkovBounds.transient_polynomial (Function)
transient_polynomial(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns a lower bound on $\mathbb{E}[v(x(T))]$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the respective moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bound.

source
MarkovBounds.transient_pop (Function)
transient_pop(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns SOS program of degree d for computing a lower bound on $\mathbb{E}[v(x(T))]$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end]. μ0 encodes the distribution of the initial state of the process in terms of its moments; specifically, it maps monomials to the respective moments of the initial distribution. trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d has a tightening effect on the bound furnished by the assembled SOS program.

source
MarkovBounds.transient_variance (Function)
transient_variance(rn::ReactionSystem, S0::Dict, S, d::Int, trange::AbstractVector{<:Real}, solver,
		    scales = Dict(s => 1 for s in species(rn));
			auto_scaling = false)

returns an upper bound on the variance of species S in the reaction network rn at time T = trange[end]. S0 refers to the deterministic initial state of the reaction system (including all species!). trange is an ordered collection of time points used to discretize the time horizon $[0,T]$, i.e., T = trange[end]. Populating trange and increasing d improves the computed bounds.

For numerical stability, it is recommended to provide scales of the expected magnitude of molecular counts for the different species at steady state. If the system is closed it is also possible to enable auto_scaling which will find the maximum molecular counts for each species under stoichiometry constraints (via LP).

source
MarkovBounds.transient_variance (Function)
transient_variance(MP::MarkovProcess, μ0::Dict, v::APL, d::Int, trange::AbstractVector{<:Real}, solver)

returns an upper bound on $\mathbb{E}[v(x(T))^2] - \mathbb{E}[v(x(T))]^2$ where $v$ is a polynomial and $x(T)$ the state of the Markov process MP at time T = trange[end].

source
MarkovBounds.optimal_control (Function)
optimal_control(CP::ControlProcess, μ0::Dict, d::Int, trange::AbstractVector{<:Real}, solver,
				P::Partition = trivial_partition(CP.MP.X);
				inner_approx = SOSCone)

returns a lower bound on the objective value of the (stochastic) optimal control problem specified by CP. μ0 encodes information about the distribution of the initial state of the process; specifically, μ0 maps a given monomial to the corresponding moment of the initial distribution. trange refers to an ordered set of time points discretizing the control horizon. trange[end] should coincide with the end of the control horizon, i.e., trange[end] = Inf in case of an infinite horizon problem. P is the partition of the state space X and defaults to the trivial partition with a single element coinciding with X itself. inner_approx is the inner approximation of the cone of non-negative polynomials used to construct the relaxations. By default, inner_approx = SOSCone; other choices are DSOSCone and SDSOSCone, referring to the cones of diagonally dominant and scaled diagonally dominant sum-of-squares polynomials, respectively. The bound is computed via an SOS program of degree d solved with an appropriate method given by solver.

The bound can be tightened by populating trange or increasing d.

source