ceem package

Subpackages

Submodules

ceem.baseline_utils module
ceem.baseline_utils.DEFAULT_NN_KWARGS(H)

class ceem.baseline_utils.LagModel(neural_net_kwargs, H)
    Bases: torch.nn.modules.module.Module

    forward(data_batch)
        Defines the computation performed at every call. Should be overridden by all subclasses.
        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
ceem.ceem module

class ceem.ceem.CEEM(smoothing_criteria: Tuple, learning_criteria: Tuple, learning_params: Tuple, learning_opts: Tuple, epoch_callbacks: Tuple[Callable[int, None], ...], termination_callback: Callable[int, bool], parallel: int = 0)
    Bases: object
    smooth(xsm, sys, solver_kwargs=None, subsetinds=None)
        Parameters:
            xsm (torch.tensor) – (B,T,n) trajectories of states
            sys (DiscreteDynamicalSystem)
        Returns:
            xsm (torch.tensor): (B,T,n) trajectories of smoothed states
            metrics_l (list)
    step(xs, sys, smooth_solver_kwargs=None, learner_criterion_kwargs=None, learner_opt_kwargs=None, subset=None)
        Runs one step of the CEEM algorithm: first the smoothing step on the trajectories xs, then the learning step on sys.
        Parameters:
            xs (torch.tensor)
            sys (DiscreteDynamicalSystem)
            smooth_solver_kwargs (dict)
            learner_criterion_kwargs (dict)
            learner_opt_kwargs (dict)
            subset (int)
        Returns:
            xs (torch.tensor)
            smooth_metrics (dict)
            learn_metrics (dict)
    train(xs, sys, nepochs, smooth_solver_kwargs=None, learner_criterion_kwargs=None, learner_opt_kwargs=None, subset=None)
        Runs the CEEM algorithm for nepochs epochs.
        Parameters:
            xs (torch.tensor)
            sys (DiscreteDynamicalSystem)
            nepochs (int) – number of epochs to run the CEEM algorithm for
            smooth_solver_kwargs (dict)
            learner_criterion_kwargs (dict)
            learner_opt_kwargs (dict)
            subset (int)
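Example. The sketch below wires the pieces documented on this page into a CEEM instance. Only the constructor signatures come from this reference; the criteria grouping, the optimizer key 'scipy_minimize', and the placeholder parameter list are illustrative assumptions, and the final train call is left commented out because it additionally requires a DiscreteDynamicalSystem implementation (see ceem.dynamics below).

    import torch
    from ceem.ceem import CEEM
    from ceem.opt_criteria import (GaussianDynamicsCriterion,
                                   GaussianObservationCriterion,
                                   GroupSOSCriterion)

    B, T, n, m = 2, 50, 2, 1
    t = torch.arange(T).unsqueeze(0).repeat(B, 1)   # (B, T) time indices
    y = torch.randn(B, T, m)                        # (B, T, m) observed trajectories
    xs = torch.zeros(B, T, n)                       # (B, T, n) initial guess of the smoothed states

    # Smoothing criterion: observation residuals plus dynamics residuals.
    obs_crit = GaussianObservationCriterion(torch.eye(m), t, y)
    dyn_crit = GaussianDynamicsCriterion(torch.eye(n), t)
    smoothing_criteria = [GroupSOSCriterion([obs_crit, dyn_crit])]  # assumed grouping

    # Learning step: fit the model parameters against the smoothed states.
    learning_criteria = [dyn_crit]
    learning_params = [[torch.nn.Parameter(torch.zeros(n))]]  # stand-in for list(sys.parameters())
    learning_opts = ['scipy_minimize']                         # assumed optimizer key
    epoch_callbacks = []
    termination_callback = lambda epoch: False

    ceem_alg = CEEM(smoothing_criteria, learning_criteria, learning_params,
                    learning_opts, epoch_callbacks, termination_callback)
    # ceem_alg.train(xs, sys, nepochs=50)  # sys: your DiscreteDynamicalSystem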
ceem.data_utils module

ceem.dynamics module
class ceem.dynamics.AnalyticDynJacMixin
    Bases: ceem.dynamics.DynJacMixin
    Mixin for analytically computing jacobians of dynamics.

    jac_step_theta(t, x, u=None)
        Returns the Jacobian of step at time t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_x_t (seq of torch.tensor): jacobian of the next state wrt each nn.Parameter

    jac_step_x(t, x, u=None)
        Returns the Jacobian of step at time t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_x_t (torch.tensor): (B, T, n, n) shaped jacobian of the next state
class ceem.dynamics.AnalyticObsJacMixin
    Bases: ceem.dynamics.ObsJacMixin
    Mixin for analytically computing observation jacobians.

    jac_obs_theta(t, x, u=None)
        Returns the Jacobian of the observation wrt theta.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_y_t (seq of torch.tensor): jacobian of the observation wrt each nn.Parameter

    jac_obs_x(t, x, u=None)
        Returns the Jacobian of the observation wrt x.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_y_t (torch.tensor): (B, T, m, n) shaped jacobian of the observation
class ceem.dynamics.C2DSystem(dt, method, transforms=None)
    Bases: ceem.dynamics.DiscreteDynamicalSystem, ceem.dynamics.ContinuousDynamicalSystem
    Continuous-to-discrete system.
class ceem.dynamics.ContinuousDynamicalSystem
    Bases: object

    step_derivs(t, x, u=None)
        Returns xdot_t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            xdot (torch.tensor): (B, T, n) system state derivatives

    property udim
        System input dimension; returns None if unactuated.

    property xdim
        System state dimension.

    property ydim
        System observation dimension.
class ceem.dynamics.DiscreteDynamicalSystem
    Bases: object

    step(t, x, u=None)
        Returns the next state x_{t+1}.
        Parameters:
            t (torch.IntTensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            x_next (torch.tensor): (B, T, n) shaped next system states

    property udim
        System input dimension; returns None if unactuated.

    property xdim
        System state dimension.

    property ydim
        System observation dimension.
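Example. A minimal subclass sketch, assuming DiscreteDynamicalSystem (whose base is object) imposes no constructor arguments; a system intended for use with the rest of the package would also define how observations are generated, which is not part of the interface excerpted here.

    import torch
    from ceem.dynamics import DiscreteDynamicalSystem

    class LTISystem(DiscreteDynamicalSystem):
        """x_{t+1} = A x_t, following the (B, T, n) batch convention."""

        def __init__(self, A):
            self._A = A

        def step(self, t, x, u=None):
            # x: (B, T, n) -> next states, also (B, T, n)
            return torch.einsum('ij,btj->bti', self._A, x)

        @property
        def xdim(self):
            return self._A.shape[0]

        @property
        def ydim(self):
            return self._A.shape[0]

        @property
        def udim(self):
            return None  # unactuated

    sys = LTISystem(0.9 * torch.eye(2))
    x = torch.randn(4, 10, 2)                        # (B, T, n)
    t = torch.arange(10).unsqueeze(0).repeat(4, 1)   # (B, T)
    print(sys.step(t, x).shape)                      # torch.Size([4, 10, 2])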
class ceem.dynamics.DynJacMixin
    Bases: object
    Mixin for computing jacobians of dynamics. Defaults to using autograd.

    jac_step_theta(t, x, u=None)
        Returns the Jacobian of step at time t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_x_t (seq of torch.tensor): jacobian of the next state wrt each nn.Parameter

    jac_step_u(t, x, u)
        Returns the Jacobian of step at time t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_u (torch.tensor): (B, T, n, m) shaped jacobian of the next state

    jac_step_x(t, x, u=None)
        Returns the Jacobian of step at time t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_x_t (torch.tensor): (B, T, n, n) shaped jacobian of the next state
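For intuition, the autograd fallback that DynJacMixin refers to amounts to differentiating the step map with torch's autograd machinery. The snippet below shows the unbatched idea on a toy linear step; it uses plain torch only and is not the mixin's actual implementation.

    import torch

    def step(x):
        A = torch.tensor([[0.9, 0.1],
                          [0.0, 0.8]])
        return x @ A.T

    x0 = torch.randn(2)
    J = torch.autograd.functional.jacobian(step, x0)  # (2, 2) = d step(x) / d x
    print(J)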
class ceem.dynamics.KinoDynamicalSystem
    Bases: ceem.dynamics.ContinuousDynamicalSystem

    dynamics(t, q, v, u)
        Returns vdot_t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            q (torch.tensor) – (B, T, qn) shaped system generalized coordinates
            v (torch.tensor) – (B, T, vn) shaped system generalized velocities
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            vdot (torch.tensor): (B, T, n) system generalized velocity derivatives

    kinematics(t, q, v, u)
        Returns qdot_t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            q (torch.tensor) – (B, T, qn) shaped system generalized coordinates
            v (torch.tensor) – (B, T, vn) shaped system generalized velocities
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            qdot (torch.tensor): (B, T, n) system generalized coordinate derivatives

    property qdim

    step_derivs(t, x, u=None)
        Returns xdot_t.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, qn+vn) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            xdot (torch.tensor): (B, T, n) system state derivatives

    property vdim

    property xdim
        System state dimension.
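To make the kinematics/dynamics split concrete, the snippet below writes a pendulum in this form with plain torch in the documented shapes: qdot comes from the kinematics, vdot from the dynamics, and step_derivs would return their concatenation. Subclassing KinoDynamicalSystem itself may require additional setup not shown in this excerpt.

    import torch

    def kinematics(t, q, v, u=None):
        return v                              # qdot: (B, T, 1)

    def dynamics(t, q, v, u=None):
        g, length = 9.81, 1.0
        return -(g / length) * torch.sin(q)   # vdot: (B, T, 1)

    B, T = 2, 5
    q = 0.1 * torch.ones(B, T, 1)
    v = torch.zeros(B, T, 1)
    xdot = torch.cat([kinematics(None, q, v), dynamics(None, q, v)], dim=-1)
    print(xdot.shape)                         # torch.Size([2, 5, 2])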
class ceem.dynamics.ObsJacMixin
    Bases: object
    Mixin for computing jacobians of observations. Defaults to using autograd.

    jac_obs_theta(t, x, u=None)
        Returns the Jacobian of the observation wrt theta.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_y_t (seq of torch.tensor): jacobian of the observation wrt each nn.Parameter

    jac_obs_u(t, x, u)
        Returns the Jacobian of the observation wrt u.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_y_t (torch.tensor): (B, T, ydim, m) shaped jacobian of the observation

    jac_obs_x(t, x, u=None)
        Returns the Jacobian of the observation wrt x.
        Parameters:
            t (torch.tensor) – (B, T,) shaped time indices
            x (torch.tensor) – (B, T, n) shaped system states
            u (torch.tensor) – (B, T, m) shaped control inputs
        Returns:
            jac_y_t (torch.tensor): (B, T, ydim, n) shaped jacobian of the observation
ceem.exp_utils module

ceem.exp_utils.compute_rms(y, ypred, y_std=None)
    Computes the RMS prediction error for each trajectory.
    Parameters:
        y (torch.tensor) – (B,T,m) tensor of true observations
        ypred (torch.tensor) – (B,T,m) tensor of predicted observations
        y_std (torch.tensor) – (m,) standard deviations
    Returns:
        (B,) RMS errors, one per trajectory
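A short usage sketch based on the signature above; the random tensors stand in for real observations and model predictions.

    import torch
    from ceem.exp_utils import compute_rms

    B, T, m = 3, 100, 2
    y = torch.randn(B, T, m)                  # true observations
    ypred = y + 0.1 * torch.randn(B, T, m)    # predicted observations
    rms = compute_rms(y, ypred)               # (B,) RMS error per trajectory
    print(rms)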
ceem.learner module

ceem.learner.learner(model, criterion_list, criterion_x_list, opt_list, params_list, crit_kwargs_list, opt_kwargs_list=None, subsetinds=None)
    Generic learner function.
    Parameters:
        model (DiscreteDynamicalSystem)
        criterion_list – list of Criterion instances
        criterion_x_list – list of torch.tensor to pass to the criteria
        opt_list – list of optimizers (see OPTIMIZERS)
        params_list – list of lists of parameters, one list per optimizer
        crit_kwargs_list – list of kwargs for each criterion
        opt_kwargs_list – kwargs to the optimizer
        subsetinds – array specifying the batch indices to operate on
    Returns:
        opt_result_list (list)
ceem.logger module

ceem.logger.critical(msg, *args, **kwargs)
    Log 'msg % args' with severity 'CRITICAL'.
    To pass exception information, use the keyword argument exc_info with a true value, e.g.
    logger.critical("Houston, we have a %s", "major disaster", exc_info=1)

ceem.logger.debug(msg, *args, **kwargs)
    Log 'msg % args' with severity 'DEBUG'.
    To pass exception information, use the keyword argument exc_info with a true value, e.g.
    logger.debug("Houston, we have a %s", "thorny problem", exc_info=1)

ceem.logger.error(msg, *args, **kwargs)
    Log 'msg % args' with severity 'ERROR'.
    To pass exception information, use the keyword argument exc_info with a true value, e.g.
    logger.error("Houston, we have a %s", "major problem", exc_info=1)

ceem.logger.exception(msg, *args, exc_info=True, **kwargs)
    Convenience method for logging an ERROR with exception information.

ceem.logger.info(msg, *args, **kwargs)
    Log 'msg % args' with severity 'INFO'.
    To pass exception information, use the keyword argument exc_info with a true value, e.g.
    logger.info("Houston, we have a %s", "interesting problem", exc_info=1)

ceem.logger.setLevel(level)
    Set the logging level of this logger. level must be an int or a str.
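The functions above mirror the standard-library logging API, so usage follows the usual pattern; the messages below are only examples.

    from ceem import logger

    logger.setLevel('INFO')
    logger.info('starting run %d', 1)
    logger.debug('hidden at INFO level')
    try:
        raise ValueError('bad input')
    except ValueError:
        logger.exception('caught an error')   # logs ERROR with traceback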
ceem.nested module

Tools for manipulating nested tuples, lists, and dictionaries.

ceem.nested.filter(predicate, *structures, **kwargs)
    Select elements of a nested structure based on a predicate function.
    If multiple structures are provided as input, their structures must match and the function will be applied to corresponding groups of elements. The nested structure can consist of any combination of lists, tuples, and dicts.
    Parameters:
        predicate – the function that determines whether an element should be kept; receives one argument for every structure that is provided
        *structures – one or more nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.

ceem.nested.filter_(predicate, *structures, **kwargs)
    Select elements of a nested structure based on a predicate function.
    If multiple structures are provided as input, their structures must match and the function will be applied to corresponding groups of elements. The nested structure can consist of any combination of lists, tuples, and dicts.
    Parameters:
        predicate – the function that determines whether an element should be kept; receives one argument for every structure that is provided
        *structures – one or more nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.

ceem.nested.flatten(structure)
    Combine all leaves of a nested structure into a tuple.
    The nested structure can consist of any combination of tuples, lists, and dicts. Dictionary keys will be discarded, but values will be ordered by the sorting of the keys.
    Parameters:
        structure – nested structure
    Returns:
        Flat tuple.

ceem.nested.flatten_(structure)
    Combine all leaves of a nested structure into a tuple.
    The nested structure can consist of any combination of tuples, lists, and dicts. Dictionary keys will be discarded, but values will be ordered by the sorting of the keys.
    Parameters:
        structure – nested structure
    Returns:
        Flat tuple.

ceem.nested.map(function, *structures, **kwargs)
    Apply a function to every element in a nested structure.
    If multiple structures are provided as input, their structures must match and the function will be applied to corresponding groups of elements. The nested structure can consist of any combination of lists, tuples, and dicts.
    Parameters:
        function – the function to apply to the elements of the structure; receives one argument for every structure that is provided
        *structures – one or more nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.

ceem.nested.map_(function, *structures, **kwargs)
    Apply a function to every element in a nested structure.
    If multiple structures are provided as input, their structures must match and the function will be applied to corresponding groups of elements. The nested structure can consist of any combination of lists, tuples, and dicts.
    Parameters:
        function – the function to apply to the elements of the structure; receives one argument for every structure that is provided
        *structures – one or more nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.

ceem.nested.zip(*structures, **kwargs)
    Combine corresponding elements in multiple nested structures into tuples.
    The nested structures can consist of any combination of lists, tuples, and dicts. All provided structures must have the same nesting.
    Parameters:
        *structures – nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.

ceem.nested.zip_(*structures, **kwargs)
    Combine corresponding elements in multiple nested structures into tuples.
    The nested structures can consist of any combination of lists, tuples, and dicts. All provided structures must have the same nesting.
    Parameters:
        *structures – nested structures
        flatten – whether to flatten the resulting structure into a tuple; keys of dictionaries will be discarded
    Returns:
        Nested structure.
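Example usage of the helpers above; the expected outputs in the comments follow from the docstrings (dictionary values are ordered by their sorted keys when flattened).

    from ceem import nested

    tree = {'b': (1, 2), 'a': [3]}
    print(nested.flatten(tree))                        # (3, 1, 2)
    print(nested.map(lambda v: 10 * v, tree))          # {'a': [30], 'b': (10, 20)}
    print(nested.zip([1, 2], [3, 4]))                  # [(1, 3), (2, 4)]
    print(nested.filter(lambda v: v % 2 == 0, tree))   # keeps only the even leaves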
ceem.nn module

class ceem.nn.LNMLP(input_size, hidden_sizes, output_size, activation='relu', gain=1.0, ln=False)
    Bases: torch.nn.modules.module.Module

    forward(inp)
        Defines the computation performed at every call. Should be overridden by all subclasses.
        Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
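A small usage sketch based on the constructor signature above; the sizes are arbitrary, passing hidden_sizes as a list is an assumption, and the module is called directly (rather than via .forward) so that registered hooks run.

    import torch
    from ceem.nn import LNMLP

    mlp = LNMLP(input_size=3, hidden_sizes=[32, 32], output_size=2,
                activation='relu', ln=True)
    out = mlp(torch.randn(8, 3))
    print(out.shape)   # torch.Size([8, 2])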
ceem.odesolver module

ceem.opt_criteria module
class ceem.opt_criteria.BasicGroupSOSCriterion
    Bases: ceem.opt_criteria.SOSCriterion
    Group of SOSCriterion instances.

class ceem.opt_criteria.BlockSparseGroupSOSCriterion(criteria)
    Bases: ceem.opt_criteria.BasicGroupSOSCriterion
    Group of SOSCriterion instances with careful design of the sparsity pattern in the Jacobian matrix. All SOSCriterion instances except the last must have block-diagonal jacobians and a .scaled_jac_x_diag method. The last SOSCriterion, typically the dynamics, has two blocks, on the main and first upper diagonals.

    jac_resid_x(model, x, sparse_format=<class 'scipy.sparse.bsr.bsr_matrix'>, **kwargs)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian
class ceem.opt_criteria.Criterion
    Bases: object

    batched_forward(model, x, **kwargs)
        Forward method for computing the criterion, not summed over the batch.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
        Returns:
            y (torch.tensor): (B,) criterion

    batched_sample_forward(model, x, **kwargs)
        Forward method for computing the criterion, not summed over the batch.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,N,T,n) system states
        Returns:
            y (torch.tensor): (B,N) criterion

    forward(model, x, **kwargs)
        Forward method for computing the criterion.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
        Returns:
            criterion (torch.tensor): scalar criterion
class ceem.opt_criteria.GaussianDynamicsCriterion(Sig_w_inv, t, u=None)
    Bases: ceem.opt_criteria.SOSCriterion

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>, inds=None)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian

class ceem.opt_criteria.GaussianObservationCriterion(Sig_y_inv, t, y, u=None)
    Bases: ceem.opt_criteria.SOSCriterion

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>, inds=None)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian

class ceem.opt_criteria.GaussianX0Criterion(x0, Sig_x0_inv)
    Bases: ceem.opt_criteria.SOSCriterion
    Soft trust region on states.

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian
class ceem.opt_criteria.GroupCriterion(criteria)
    Bases: ceem.opt_criteria.Criterion
    A group of Criterion instances.

class ceem.opt_criteria.GroupSOSCriterion(criteria)
    Bases: ceem.opt_criteria.BasicGroupSOSCriterion
    Group of SOSCriterion instances.

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>, **kwargs)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian

class ceem.opt_criteria.SOSCriterion
    Bases: ceem.opt_criteria.Criterion
    Sum-of-squares criterion.

    forward(model, x, **kwargs)
        Forward method for computing the criterion.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
        Returns:
            criterion (torch.tensor): scalar criterion

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>, **kwargs)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian
class ceem.opt_criteria.STRParamCriterion(rho, params)
    Bases: ceem.opt_criteria.Criterion
    Soft trust-region criterion on parameters.

class ceem.opt_criteria.STRStateCriterion(rho, x0)
    Bases: ceem.opt_criteria.SOSCriterion
    Soft trust-region criterion on states.

    jac_resid_x(model, x, sparse=False, sparse_format=<class 'scipy.sparse.csr.csr_matrix'>)
        Computes the jacobian of the residuals wrt x.
        Parameters:
            model (DiscreteDynamicalSystem)
            x (torch.tensor) – (B,T,n) system states
            sparse (bool) – if True, returns a (nresid, B*T*n) matrix in sparse_format
            sparse_format (scipy.sparse) – sparse matrix format
        Returns:
            jac_resid_x (torch.tensor): (nresid,B,T,n) criterion jacobian
ceem.particleem module

class ceem.particleem.Resampler(method)
    Bases: object

class ceem.particleem.SAEMTrainer(fapf, y, gamma_sched=<function HarmonicDecayScheduler>, max_k=100, xlen_cutoff=None)
    Bases: object

    recursive_Q(xs, y, k_, Q)
        Recursive computation of Q.
        Parameters:
            xs (list of torch.tensor) – [(B,N,T,n)] iid smoothed trajectories
            y (torch.tensor) – (B,T,m) observations
            k_ (int) – call level
            Q (torch.tensor) – (1,) Q(theta), or 0.
        Returns:
            Q (torch.tensor): (1,) Q(theta)
class ceem.particleem.faPF(N, system, Q, R, Px0, x0mean=None, resampler=<ceem.particleem.Resampler object>, FFBSi_N=None, burnin_T=None, ess_frac=0.5)
    Bases: object

    FFBSi(x, w)
        Sample trajectories from the joint smoothing distribution (JSD) using FFBSi.
        Parameters:
            x (torch.tensor) – (B,N,T,n) filtered states
            w (torch.tensor) – (B,N,T) filtered weights
        Returns:
            xsamps (torch.tensor): (B, Ns, T, n) sampled state trajectories
        Notes: see http://users.isy.liu.se/rt/schon/Publications/LindstenS2013.pdf, Algorithm 4.

    FFBSm(y, x, w, return_wij_tN=False)
        Compute the smoothing marginal distributions.
        Parameters:
            y (torch.tensor) – (B,T,m) observations
            x (torch.tensor) – (B,N,T,n) filtered states
            w (torch.tensor) – (B,N,T) filtered weights
            return_wij_tN (bool) – return pairwise smoothing weights
        Returns:
            xsm (torch.tensor): (B,N,T,n) smoothed states
            wsm (torch.tensor): (B,N,T) smoothed weights
            wij_tN (torch.tensor): (B,N,N,T) pairwise smoothing weights
        Notes: implementation of forward filtering-backward smoothing, https://www.stats.ox.ac.uk/~doucet/doucet_johansen_tutorialPF2011.pdf, Equation 49.

    Q_JSD_marginals(y, x, w, wij_tN)
        Compute Q(theta, theta_k) using approximate JSD marginals.
        Parameters:
            y (torch.tensor) – (B,T,m) observations
            x (torch.tensor) – (B,N,T,n) filtered states
            w (torch.tensor) – (B,N,T) filtered weights
            wij_tN (torch.tensor) – (B,N,N,T-1) pairwise smoothing weights
        Returns:
            Q (torch.tensor): (1,) Q(theta, theta_k)
        Notes: see http://user.it.uu.se/~thosc112/pubpdf/schonwn2011-2.pdf, Equations 48-49.

    Q_MCEM(y, x)
        Compute the Monte-Carlo approximation of Q(theta, theta_k) using iid x_1:T ~ p(x_1:T | y_1:T).
        Parameters:
            y (torch.tensor) – (B,T,m) observations
            x (torch.tensor) – (B,N,T,n) sampled state trajectories
        Returns:
            Q (torch.tensor): (1,) Q(theta, theta_k)

    compute_mean(x, w)
        Computes the mean trajectory from weighted samples.
        Parameters:
            x (torch.tensor) – (B,N,T,n) filtered states
            w (torch.tensor) – (B,N,T) filtered weights
        Returns:
            xmean (torch.tensor): (B,T,n) mean states

    fa_loglikelihood(y, xfilt)
        Estimates the log-likelihood of filtered particles from the faPF (Eqn. 39 of https://arxiv.org/pdf/1703.02419.pdf).
        Parameters:
            y (torch.tensor) – (B,T,m) observations
            xfilt (torch.tensor) – (B,N,T,n) filtered states
        Returns:
            z (torch.tensor): (1,) log p(y_1:T | x_filt, theta)

    filter(y)
        Run the filter.
        Parameters:
            y (torch.tensor) – (B,T,m) observations
        Returns:
            x (torch.tensor): (B,N,T,n) filtered states
            w (torch.tensor): (B,N,T) filtered weights

    get_resampling_distribution(t, xtm1)
        Resampling distribution p(y_t | x_{t-1}).
        Parameters:
            t (int) – time index
            xtm1 (torch.tensor) – (B,N,1,n) state particles
        Returns:
            dist (MultivariateGaussian): p(y_t | x_{t-1})
        Notes: implementation of https://link.springer.com/content/pdf/10.1023%2FA%3A1008935410038.pdf, Equation 14.

    get_sampling_distribution(t, xtm1, yt)
        Sampling distribution p(x_t | x_{t-1}, y_t).
        Parameters:
            t (int) – time index
            xtm1 (torch.tensor) – (B,N,1,n) state particles
            yt (torch.tensor) – (B,1,m) observation
        Returns:
            dist (MultivariateGaussian): p(x_t | x_{t-1}, y_t)
        Notes: implementation of https://link.springer.com/content/pdf/10.1023%2FA%3A1008935410038.pdf, Equations 18-20.

    sample_initial(y0, N)
        Sample from p(x_0 | y_0).
        Parameters:
            y0 (torch.tensor) – (B,1,m) initial observation
            N (int)
        Returns:
            x0 (torch.tensor): (B,N,1,n) samples of x_0
        Notes: Eqns. 352-354 of https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf.
ceem.smoother module

ceem.smoother.EKF(x0, y_T, u_T, sigma0, Q, R, system)
    Extended Kalman filter.
    Parameters:
        x0 (torch.tensor) – (B, xdim) initial system states
        y_T (torch.tensor) – (B, T, ydim) observations
        u_T (torch.tensor) – (B, T, udim) controls
        sigma0 (torch.tensor) – (xdim, xdim) initial state covariance
        dyn_err (torch.tensor) – (xdim) dynamics error mean
        obs_err (torch.tensor) – (xdim) observation error mean
        Q (torch.tensor) – (xdim, xdim) dynamics error covariance
        R (torch.tensor) – (ydim, ydim) observation error covariance
        system (DiscreteDynamicalSystem)
    Returns:
        x_filt (torch.tensor): (B, T, xdim) system states
        y_pred (torch.tensor): (B, T, ydim) predicted observations before state correction
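For reference, a single textbook Kalman-style predict/update step in plain torch, to make the roles of sigma0, Q, and R above concrete; the EKF applies this update with local jacobians of the system's step and observation maps, and this snippet is generic rather than the package's internal code.

    import torch

    def kf_step(x, P, y, A, C, Q, R):
        # Predict.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with the observation y.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ torch.linalg.inv(S)
        x_filt = x_pred + K @ (y - C @ x_pred)
        P_filt = (torch.eye(P.shape[0]) - K @ C) @ P_pred
        return x_filt, P_filt

    n, m = 2, 1
    x, P = torch.zeros(n), torch.eye(n)
    A, C = 0.9 * torch.eye(n), torch.ones(m, n)
    Q, R = 0.01 * torch.eye(n), 0.1 * torch.eye(m)
    x, P = kf_step(x, P, torch.tensor([0.3]), A, C, Q, R)
    print(x, P)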
ceem.smoother.NLSsmoother(x0, criterion, system, solver_kwargs={'verbose': 2})
    Smoothing with a Gauss-Newton based approach.
    Parameters:
        x0 (torch.tensor) – (1,T,n) system states
        criterion (SOSCriterion) – criterion to optimize
        system (DiscreteDynamicalSystem)
        solver_kwargs – options for scipy.optimize.least_squares
ceem.smoother.choice(a, size=None, replace=True, p=None)
    Generates a random sample from a given 1-D array.
    New in version 1.7.0.
    Note: new code should use the choice method of a default_rng() instance instead; see random-quick-start.
    Parameters:
        a (1-D array-like or int) – if an ndarray, a random sample is generated from its elements; if an int, the random sample is generated as if a were np.arange(a)
        size (int or tuple of ints, optional) – output shape. If the given shape is, e.g., (m, n, k), then m * n * k samples are drawn. Default is None, in which case a single value is returned.
        replace (boolean, optional) – whether the sample is with or without replacement
        p (1-D array-like, optional) – the probabilities associated with each entry in a. If not given, the sample assumes a uniform distribution over all entries in a.
    Returns:
        samples (single item or ndarray) – the generated random samples
    Raises:
        ValueError – if a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size
    See also: randint(), shuffle(), permutation(); Generator.choice(), which should be used in new code.

    Examples

    Generate a uniform random sample from np.arange(5) of size 3:

    >>> np.random.choice(5, 3)
    array([0, 3, 4])  # random
    >>> # This is equivalent to np.random.randint(0, 5, 3)

    Generate a non-uniform random sample from np.arange(5) of size 3:

    >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
    array([3, 3, 0])  # random

    Generate a uniform random sample from np.arange(5) of size 3 without replacement:

    >>> np.random.choice(5, 3, replace=False)
    array([3, 1, 0])  # random
    >>> # This is equivalent to np.random.permutation(np.arange(5))[:3]

    Generate a non-uniform random sample from np.arange(5) of size 3 without replacement:

    >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
    array([2, 3, 0])  # random

    Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance:

    >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
    >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
    array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'],  # random
          dtype='<U11')
ceem.utils module

ceem.utils.peak_memory_mb() → float
    Get peak memory usage for this process, as measured by the max resident set size: https://unix.stackexchange.com/questions/30940/getrusage-system-call-what-is-maximum-resident-set-size
    Only works on OSX and Linux; returns 0.0 otherwise.
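Example:

    from ceem.utils import peak_memory_mb

    print(f'peak RSS: {peak_memory_mb():.1f} MB')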