romcomma.gpf.kernels.MOStationary
- class MOStationary(variance, lengthscales, name='Kernel', active_dims=None)
Bases: AnisotropicStationary, Kernel
Base class for multi-output stationary kernels, i.e. kernels that depend only on
d = x - x'
Derived classes should implement the abstract method K_d_unit_variance(self, d), which returns the kernel evaluated on d, the pairwise difference matrix scaled by the lengthscale parameter ℓ (i.e. [(X - X2ᵀ) / ℓ]). The last axis of d corresponds to the input dimension.
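The contract above can be made concrete with a minimal sketch. The subclass name MOSquaredExponential is hypothetical, and only the documented (L,N,L,N,M) → (L,N,L,N) shape contract of K_d_unit_variance is taken from this page:
>>> import tensorflow as tf
>>> from romcomma.gpf.kernels import MOStationary
>>> class MOSquaredExponential(MOStationary):
...     """Hypothetical multi-output squared-exponential kernel."""
...     def K_d_unit_variance(self, d):
...         # d holds the scaled differences [(X - X2ᵀ) / ℓ], shape (L,N,L,N,M);
...         # summing the squares over the last (input) axis yields (L,N,L,N).
...         return tf.exp(-0.5 * tf.reduce_sum(tf.square(d), axis=-1))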
- __init__(variance, lengthscales, name='Kernel', active_dims=None)
Kernel Constructor.
- Parameters:
variance – An (L,L) symmetric, positive definite matrix for the signal variance.
lengthscales – An (L,M) matrix of strictly positive lengthscales.
is_lengthscales_trainable – Whether the lengthscales of this kernel are trainable.
name – The name of this kernel.
active_dims – Which of the input dimensions are used. The default None means all of them.
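A construction sketch, reusing the hypothetical MOSquaredExponential subclass above and assuming NumPy arrays are accepted for the parameters:
>>> import numpy as np
>>> L, M = 2, 3  # two outputs, three input dimensions
>>> variance = 0.1 * np.eye(L) + 0.9 * np.ones((L, L))  # (L,L) symmetric positive definite
>>> lengthscales = np.ones((L, M))                      # (L,M) strictly positive
>>> kernel = MOSquaredExponential(variance, lengthscales, name='RBF')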
Methods
- K(X[, X2])
- K_d(d): The kernel.
- K_d_apply_variance(K_d_unit_variance): Multiply the unit variance kernel by the kernel variance, and reshape.
- K_d_unit_variance(d): The kernel with variance=ones().
- K_diag(X): The kernel diagonal.
- K_unit_variance(X[, X2]): The kernel with variance=ones().
- __init__(variance, lengthscales[, name, ...]): Kernel Constructor.
- on_separate_dims(other): Checks if the dimensions, over which the kernels are specified, overlap.
- scale(X)
- scaled_difference_matrix(X[, X2]): Returns [(X - X2ᵀ) / ℓ].
- slice(X[, X2]): Slice the correct dimensions for use in the kernel, as indicated by self.active_dims.
- slice_cov(cov): Slice the correct dimensions for use in the kernel, as indicated by self.active_dims for covariance matrices.
- with_name_scope(method): Decorator to automatically enter the module name scope.
Attributes
- L
- M
- active_dims
- ard: Whether ARD behaviour is active.
- lengthscales_neat: The kernel lengthscales as an (L,M) matrix.
- name: Returns the name of this module as passed or determined in the ctor.
- name_scope: Returns a tf.name_scope instance for this class.
- non_trainable_variables: Sequence of non-trainable variables owned by this module and its submodules.
- parameters
- submodules: Sequence of all sub-modules.
- trainable_parameters
- trainable_variables: Sequence of trainable variables owned by this module and its submodules.
- variables: Sequence of variables owned by this module and its submodules.
- property lengthscales_neat
The kernel lengthscales as an (L,M) matrix.
- K_diag(X)
The kernel diagonal.
- Parameters:
X – An (N,M) Tensor.
Returns: An (L,N,L,N) Tensor.
- K_unit_variance(X, X2=None)
The kernel with variance=ones(). This can be cached during optimisations where only the variance is trainable.
- Parameters:
X – An (N,M) Tensor.
X2 – An (N,M) Tensor.
Returns: An (L,N,L,N) Tensor.
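A hedged sketch of the caching pattern this enables, assuming inputs X of shape (N,M) and a loop in which only the variance parameter is updated (kernel is the construction example above; num_steps and the loss computation are placeholders):
>>> K_unit = kernel.K_unit_variance(X)         # computed once: (L,N,L,N)
>>> for _ in range(num_steps):
...     K = kernel.K_d_apply_variance(K_unit)  # cheap per step: (LN,LN)
...     pass  # evaluate the loss on K and update the variance parameter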
- abstract K_d_unit_variance(d)
The kernel with variance=ones(). This can be cached during optimisations where only the variance is trainable.
- Parameters:
d – An (L,N,L,N,M) Tensor.
Returns: An (L,N,L,N) Tensor.
- K_d_apply_variance(K_d_unit_variance)
Multiply the unit variance kernel by the kernel variance, and reshape.
- Parameters:
K_d_unit_variance – An (L,N,L,N) Tensor.
Returns: An (LN,LN) Tensor.
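The documented contract can be illustrated in plain TensorFlow. apply_variance below is an illustrative stand-in, not the library's implementation: it scales the (L,N,L,N) unit-variance kernel by the (L,L) signal variance via broadcasting, then flattens the result:
>>> import tensorflow as tf
>>> def apply_variance(k_unit, variance):
...     L, N = k_unit.shape[0], k_unit.shape[1]
...     # variance[:, tf.newaxis, :, tf.newaxis] has shape (L,1,L,1) and
...     # broadcasts against (L,N,L,N) over both N axes.
...     k = k_unit * variance[:, tf.newaxis, :, tf.newaxis]
...     return tf.reshape(k, (L * N, L * N))  # (LN,LN)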
- property ard: bool
Whether ARD behaviour is active.
- property name
Returns the name of this module as passed or determined in the ctor.
NOTE: This is not the same as the self.name_scope.name which includes parent module names.
- property name_scope
Returns a tf.name_scope instance for this class.
- property non_trainable_variables
Sequence of non-trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- on_separate_dims(other)
Checks if the dimensions, over which the kernels are specified, overlap. Returns True if they are defined on different/separate dimensions and False otherwise.
- Parameters:
other (Kernel) –
- Return type:
bool
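This method is inherited from GPflow's Kernel base class, so its behaviour can be checked with ordinary GPflow kernels:
>>> import gpflow
>>> k1 = gpflow.kernels.SquaredExponential(active_dims=[0])
>>> k2 = gpflow.kernels.Matern32(active_dims=[1])
>>> k1.on_separate_dims(k2)  # defined on disjoint input dimensions
True
>>> k1.on_separate_dims(gpflow.kernels.SquaredExponential(active_dims=[0, 1]))  # dimension 0 overlaps
False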
- scaled_difference_matrix(X, X2=None)
Returns [(X - X2ᵀ) / ℓ]. If X has shape […, N, D] and X2 has shape […, M, D], the output will have shape […, N, M, D].
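Because MOStationary builds on GPflow's AnisotropicStationary, the same helper can be exercised through any anisotropic GPflow kernel; a minimal single-output check of the shape contract (the explicit dtype matches GPflow's float64 default):
>>> import tensorflow as tf, gpflow
>>> k = gpflow.kernels.SquaredExponential(lengthscales=[2.0, 2.0])
>>> X = tf.constant([[2.0, 4.0], [0.0, 0.0]], dtype=tf.float64)  # (N=2, D=2)
>>> X2 = tf.constant([[0.0, 0.0]], dtype=tf.float64)             # (M=1, D=2)
>>> k.scaled_difference_matrix(X, X2).shape                      # [(X - X2ᵀ) / ℓ]
TensorShape([2, 1, 2])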
- slice(X, X2=None)
Slice the correct dimensions for use in the kernel, as indicated by self.active_dims.
- slice_cov(cov)
Slice the correct dimensions for use in the kernel, as indicated by self.active_dims for covariance matrices. This requires slicing the rows and columns. This will also turn flattened diagonal matrices into a tensor of full diagonal matrices.
- property submodules
Sequence of all sub-modules.
Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).
>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
- Returns:
A sequence of all submodules.
- property trainable_variables
Sequence of trainable variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- property variables
Sequence of variables owned by this module and its submodules.
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.
- Returns:
A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
- classmethod with_name_scope(method)
Decorator to automatically enter the module name scope.
>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)
Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:
>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=..., dtype=float32)>
- Parameters:
method – The method to wrap.
- Returns:
The original method wrapped such that it enters the module’s name scope.