romcomma.gpf.likelihoods.MOGaussian§
- class MOGaussian(variance, **kwargs)[source]§
Bases: QuadratureLikelihood
A non-diagonal, multivariate likelihood extending gpflow; the code is the multivariate version of gf.likelihoods.Gaussian.
The Gaussian likelihood is appropriate where uncertainties associated with the data are believed to follow a normal distribution, with constant variance.
Very small uncertainties can lead to numerical instability during the optimization process. A lower bound of 1e-3 is therefore imposed on the likelihood Variance.cholesky_diagonal elements by default.
- __init__(variance, **kwargs)[source]§
Constructor, which passes the Cholesky decomposition of the variance matrix.
- Parameters:
variance – The covariance matrix of the likelihood, expressed in tensorflow or numpy. Is checked for symmetry and positive definiteness.
**kwargs – Keyword arguments forwarded to Likelihood.
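The symmetry and positive-definiteness checks described for the `variance` argument can be sketched in plain Python. This is a hypothetical illustration only: the real constructor operates on tensorflow/numpy arrays, and a Cholesky factorisation doubles as the positive-definiteness test, since it succeeds exactly when the matrix is positive definite.

```python
import math

def cholesky(a):
    """Return the lower-triangular L with L @ L.T == a, or raise ValueError."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:   # a symmetric matrix is positive definite iff every pivot is > 0
                    raise ValueError('matrix is not positive definite')
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def check_covariance(variance):
    """Validate a candidate likelihood covariance: square, symmetric, positive definite."""
    n = len(variance)
    if any(len(row) != n for row in variance):
        raise ValueError('matrix is not square')
    if any(variance[i][j] != variance[j][i] for i in range(n) for j in range(i)):
        raise ValueError('matrix is not symmetric')
    return cholesky(variance)   # raises ValueError if not positive definite
```

For example, `check_covariance([[2.0, 0.5], [0.5, 1.0]])` returns the Cholesky factor, while `[[1.0, 2.0], [2.0, 1.0]]` (symmetric but indefinite) raises ValueError.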
Methods
N(data)
    The number of samples in data, assuming the last 2 dimensions have been concatenated to LN.

__init__(variance, **kwargs)
    Constructor, which passes the Cholesky decomposition of the variance matrix.

add_to(Fvar)

conditional_mean(F)
    The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim.

conditional_variance(F)
    The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim.

log_prob(F, Y)
    The log probability density log p(Y|F).

predict_log_density(Fmu, Fvar, Y)
    Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y.

predict_mean_and_var(Fmu, Fvar)
    Given a Normal distribution for the latent function, return the mean and marginal variance of Y.

split_axis_shape(data)
    Split the final data axis length LN into the pair (L, N).

variational_expectations(Fmu, Fvar, Y)
    Compute the expected log density of the data, given a Gaussian distribution for the function values.

with_name_scope(method)
    Decorator to automatically enter the module name scope.
Attributes

name
    Returns the name of this module as passed or determined in the ctor.

name_scope
    Returns a tf.name_scope instance for this class.

non_trainable_variables
    Sequence of non-trainable variables owned by this module and its submodules.

parameters

submodules
    Sequence of all sub-modules.

trainable_parameters

trainable_variables
    Sequence of trainable variables owned by this module and its submodules.

variables
    Sequence of variables owned by this module and its submodules.
- N(data)[source]§
The number of samples in data, assuming the last 2 dimensions have been concatenated to LN.
- Return type:
int
- split_axis_shape(data)[source]§
Split the final data axis length LN into the pair (L,N).
- Return type:
Tuple[int, int]
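The (L, N) split above can be sketched in plain Python. This is an illustrative guess at the arithmetic involved: here the output dimension L is supplied explicitly, whereas the actual method presumably infers it from the likelihood's covariance matrix.

```python
def split_axis_shape(last_axis_length, L):
    """Split a concatenated final axis of length L*N into the pair (L, N).

    `L` is the (assumed known) output dimension; the quotient recovers N.
    """
    N, remainder = divmod(last_axis_length, L)
    if remainder:
        raise ValueError(f'last axis length {last_axis_length} is not a multiple of L={L}')
    return L, N
```

For instance, a final axis of length 12 with L = 3 splits into (3, 4).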
- conditional_mean(F)§
The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim
- conditional_variance(F)§
The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim
- log_prob(F, Y)§
The log probability density log p(Y|F)
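For a non-diagonal Gaussian likelihood with covariance Σ = LLᵀ, the density being evaluated is log N(Y; F, Σ). The following pure-Python sketch computes it from the Cholesky factor for a single (unbatched) K-vector; the actual method works on batched tensors, so this is illustrative only.

```python
import math

def forward_sub(L, b):
    """Solve L x = b for lower-triangular L by forward substitution."""
    x = []
    for i in range(len(b)):
        x.append((b[i] - sum(L[i][k] * x[k] for k in range(i))) / L[i][i])
    return x

def log_prob(F, Y, L):
    """log N(Y; F, Sigma) with Sigma = L @ L.T, L lower-triangular.

    Uses  log det Sigma = 2 * sum_i log L[i][i]  and the whitened
    residual z = L^{-1} (Y - F), so the quadratic form is ||z||^2.
    """
    k = len(Y)
    z = forward_sub(L, [y - f for y, f in zip(Y, F)])
    half_log_det = sum(math.log(L[i][i]) for i in range(k))
    return -0.5 * k * math.log(2 * math.pi) - half_log_det - 0.5 * sum(zi * zi for zi in z)
```

With K = 1 and unit variance, `log_prob([0.0], [0.0], [[1.0]])` reduces to the standard-normal log density, −0.5·log(2π).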
- property name§
Returns the name of this module as passed or determined in the ctor.
NOTE: This is not the same as the self.name_scope.name which includes parent module names.
- property name_scope§
Returns a tf.name_scope instance for this class.
- property non_trainable_variables§
Sequence of non-trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- predict_log_density(Fmu, Fvar, Y)§
Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y. I.e. if

q(F) = N(Fmu, Fvar)

and this object represents

p(y|F)

then this method computes the predictive density

log ∫ p(y=Y|F) q(F) dF
- Parameters:
Fmu (ndarray[Any, Any] | Tensor | Variable | Parameter) – mean function evaluation Tensor, with shape […, latent_dim]
Fvar (ndarray[Any, Any] | Tensor | Variable | Parameter) – variance of function evaluation Tensor, with shape […, latent_dim]
Y (ndarray[Any, Any] | Tensor | Variable | Parameter) – observation Tensor, with shape […, observation_dim]
- Returns:
log predictive density, with shape […]
- Return type:
Tensor
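For a Gaussian likelihood the predictive integral above is analytic: the convolution of N(Y; F, σ²) with q(F) = N(Fmu, Fvar) is N(Y; Fmu, Fvar + σ²). The sketch below shows the single-output scalar case with an assumed noise variance; the multivariate class replaces σ² with its full covariance matrix.

```python
import math

def predict_log_density(Fmu, Fvar, Y, noise_variance):
    """Scalar closed form:  log ∫ N(Y; f, σ²) N(f; Fmu, Fvar) df
                          = log N(Y; Fmu, Fvar + σ²)."""
    v = Fvar + noise_variance          # predictive variance
    return -0.5 * (math.log(2 * math.pi * v) + (Y - Fmu) ** 2 / v)
```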
- predict_mean_and_var(Fmu, Fvar)§
Given a Normal distribution for the latent function, return the mean and marginal variance of Y. I.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes the predictive mean

∫∫ y p(y|f) q(f) df dy

and the predictive variance

∫∫ y² p(y|f) q(f) df dy - [ ∫∫ y p(y|f) q(f) df dy ]²
- Parameters:
Fmu (ndarray[Any, Any] | Tensor | Variable | Parameter) – mean function evaluation Tensor, with shape […, latent_dim]
Fvar (ndarray[Any, Any] | Tensor | Variable | Parameter) – variance of function evaluation Tensor, with shape […, latent_dim]
- Returns:
mean and variance, both with shape […, observation_dim]
- Return type:
Tuple[Tensor, Tensor]
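For a Gaussian likelihood the two integrals above collapse to Fmu and Fvar + σ². The sketch below shows the scalar single-output case with an assumed noise variance, together with a numerical cross-check of the moment integrals (with f already marginalised analytically, so p(y) = N(y; Fmu, Fvar + σ²)).

```python
import math

def predict_mean_and_var(Fmu, Fvar, noise_variance):
    """Scalar closed form of the predictive mean and variance."""
    return Fmu, Fvar + noise_variance

def quadrature_check(Fmu, Fvar, noise_variance, grid=2001, width=8.0):
    """Trapezoid-rule evaluation of  ∫ y p(y) dy  and  ∫ y² p(y) dy - mean²
    over Fmu ± width standard deviations."""
    v = Fvar + noise_variance
    s = math.sqrt(v)
    lo, hi = Fmu - width * s, Fmu + width * s
    h = (hi - lo) / (grid - 1)
    m1 = m2 = 0.0
    for i in range(grid):
        y = lo + i * h
        p = math.exp(-0.5 * (y - Fmu) ** 2 / v) / math.sqrt(2 * math.pi * v)
        w = h if 0 < i < grid - 1 else h / 2   # trapezoid end-point weights
        m1 += w * y * p
        m2 += w * y * y * p
    return m1, m2 - m1 * m1
```

The quadrature reproduces the closed form to high accuracy, which is the sanity check the formulae above invite.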
- property submodules§
Sequence of all sub-modules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
- Returns:
A sequence of all submodules.
- property trainable_variables§
Sequence of trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- property variables§
Sequence of variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- variational_expectations(Fmu, Fvar, Y)§
Compute the expected log density of the data, given a Gaussian distribution for the function values. I.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes

∫ log(p(y=Y|f)) q(f) df.

This only works if the broadcasting dimensions of the statistics of q(f) (mean and variance) are broadcastable with those of the data Y.
- Parameters:
Fmu (ndarray[Any, Any] | Tensor | Variable | Parameter) – mean function evaluation Tensor, with shape […, latent_dim]
Fvar (ndarray[Any, Any] | Tensor | Variable | Parameter) – variance of function evaluation Tensor, with shape […, latent_dim]
Y (ndarray[Any, Any] | Tensor | Variable | Parameter) – observation Tensor, with shape […, observation_dim]
- Returns:
expected log density of the data given q(F), with shape […]
- Return type:
Tensor
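For a Gaussian likelihood the expectation above also has a closed form: ∫ log N(Y; f, σ²) N(f; Fmu, Fvar) df = −½·log(2πσ²) − ((Y − Fmu)² + Fvar) / (2σ²). The sketch below is the scalar single-output case with an assumed noise variance; note how latent-function uncertainty Fvar enters as an additive penalty in the quadratic term.

```python
import math

def variational_expectations(Fmu, Fvar, Y, noise_variance):
    """Scalar closed form of  ∫ log N(Y; f, σ²) N(f; Fmu, Fvar) df."""
    return (-0.5 * math.log(2 * math.pi * noise_variance)
            - 0.5 * ((Y - Fmu) ** 2 + Fvar) / noise_variance)
```

With Fvar = 0 this reduces to the plain Gaussian log density at Y, as it must.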
- classmethod with_name_scope(method)§
Decorator to automatically enter the module name scope.
>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)
Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:
>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>
- Parameters:
method – The method to wrap.
- Returns:
The original method wrapped such that it enters the module’s name scope.