romcomma.gpr.models.GPR§

class GPR(name, fold, is_read, is_covariant, is_isotropic, kernel_parameters=None, likelihood_variance=None)[source]§

Bases: Model

Interface to a Gaussian Process.

Parameters:
  • name (str) –

  • fold (Fold) –

  • is_read (bool | None) –

  • is_covariant (bool) –

  • is_isotropic (bool) –

  • kernel_parameters (Kernel.Data | None) –

  • likelihood_variance (NP.Matrix | None) –

__init__(name, fold, is_read, is_covariant, is_isotropic, kernel_parameters=None, likelihood_variance=None)[source]§

Sets up data and checks dimensions.

Parameters:
  • name (str) – The name of this MOGP.

  • fold (Fold) – The Fold housing this MOGP.

  • is_read (bool | None) – If True, MOGP.kernel.data and MOGP.data are read from fold.folder/name, otherwise defaults are used.

  • is_covariant (bool) – Whether the outputs will be treated as dependent (covariant).

  • is_isotropic (bool) – Whether to restrict the kernel to be isotropic.

  • kernel_parameters (Data | None) – A Kernel.Data to use for MOGP.kernel.data. If not None, this replaces the kernel specified by file/defaults. If None, the kernel is read from file, or set to the default Kernel.Data(), according to is_read.

  • likelihood_variance (ndarray | None) – The likelihood variance to use instead of file or default.

Raises:

IndexError – If a parameter is mis-shaped.
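The sketch below illustrates construction only; it is not taken from the library's examples. It assumes fold is an already-built romcomma Fold (its construction is outside the scope of this page), and since GPR declares abstract members, in practice a concrete subclass is instantiated with this same signature.

    from romcomma.gpr.models import GPR

    fold = ...  # an existing Fold housing the data for this MOGP

    gp = GPR(
        name='gp_example',         # stored under fold.folder/'gp_example'
        fold=fold,
        is_read=False,             # use defaults rather than reading parameters from file
        is_covariant=False,        # treat the outputs as independent
        is_isotropic=True,         # restrict the kernel to be isotropic
        kernel_parameters=None,    # default Kernel.Data()
        likelihood_variance=None,  # default likelihood variance
    )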

Methods

__init__(name, fold, is_read, is_covariant, ...)

Sets up data and checks dimensions.

broadcast_parameters(is_covariant, is_isotropic)

Broadcast the data of the MOGP (including kernels) to higher dimensions.

calibrate(**kwargs)

predict(x[, y_instead_of_f])

Predicts the response to input x.

predict_df(x[, y_instead_of_f, is_normalized])

Predicts the response to input x.

predict_gradient(x[, y_instead_of_f])

Predicts the gradient GP dy/dx (or df/dx) where self is the GP for y(x).

read_meta()

test()

Tests the MOGP on the test data in self._fold.test_data.

write_meta(meta)

Attributes

KERNEL_FOLDER_NAME

K_cho

The Cholesky decomposition of the LNxLN noisy kernel(X, X) + likelihood.variance.

K_inv_Y

The LN-Vector, which pre-multiplied by the LoxLN kernel k(x, X) gives the Lo-Vector predictive mean f(x).

L

The output (Y) dimensionality.

M

The input (X) dimensionality.

META

N

The number of training samples.

X

The implementation training inputs.

Y

The implementation training outputs.

data

fold

The parent fold.

folder

implementation

The implementation of this MOGP in GPFlow.

kernel

likelihood

test_csv

test_summary_csv

class Data(folder, **kwargs)[source]§

Bases: Data

The Data set of a MOGP.

Parameters:
  • folder (Path | str) –

  • kwargs (Data.Matrix) –

NamedTuple[source]§

alias of Values

static copy(src_folder, dst_folder)§

Returns a copy of src_folder at dst_folder, deleting anything existing at the destination.

Parameters:
  • src_folder (Path | str) –

  • dst_folder (Path | str) –

Return type:

Path

static delete(folder)§

Deletes folder, returning its now non-existent path.

Parameters:

folder (Path | str) –

Return type:

Path

static empty(folder)§

Empties folder and returns it.

Parameters:

folder (Path | str) –

Return type:

Path

move(dst_folder)§

Move self to dst_folder.

Parameters:

dst_folder (Path | str) – The folder to move to. If this exists, it will be emptied.

Return type:

Data

Returns: self for chaining calls.

classmethod read(folder, **kwargs)§

Read Data from folder.

Parameters:
  • folder (Path | str) – The folder recording the data. Must exist.

  • **kwargs (Frame | DataFrame | ndarray | Tensor) – key=ordinate initial pairs of NamedTuple fields, precisely as in NamedTuple(**kwargs). Missing fields receive their defaults, so Data(folder) is the default Data.

Return type:

Data

Returns: The Data stored in folder.
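A hedged sketch of reading a stored Data set, assuming folder already contains data written by a previous run (the folder must exist, per the docstring); the path shown is purely illustrative.

    from pathlib import Path
    from romcomma.gpr.models import GPR

    folder = Path('experiments/fold.0/gp_example')  # hypothetical location of stored data
    data = GPR.Data.read(folder)                    # missing fields receive their defaults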

abstract class property META: Dict[str, Any]§

Hyper-parameter optimizer meta

class property KERNEL_FOLDER_NAME: str§

The name of the folder where kernel data are stored.

property fold: Fold§

The parent fold.

abstract property implementation: Tuple[Any, ...]§

The implementation of this MOGP in GPFlow. If noise_variance.shape == (1,L) an L-tuple of kernels is returned. If noise_variance.shape == (L,L) a 1-tuple of multi-output kernels is returned.

property L: int§

The output (Y) dimensionality.

property M: int§

The input (X) dimensionality.

property N: int§

The number of training samples.

abstract property X: Any§

The implementation training inputs.

abstract property Y: Any§

The implementation training outputs.

abstract property K_cho: ndarray | Tensor§

The Cholesky decomposition of the LNxLN noisy kernel(X, X) + likelihood.variance. Shape is (LN, LN) if self.kernel.is_covariant, else (L,N,N).

abstract property K_inv_Y: ndarray | Tensor§

The LN-Vector, which pre-multiplied by the LoxLN kernel k(x, X) gives the Lo-Vector predictive mean f(x). Shape is (L,1,N). Returns: ChoSolve(self.K_cho, self.Y)
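The following self-contained numpy/scipy sketch (illustration only, not romcomma code) spells out the identity these two properties encode for a single output: K_inv_Y = ChoSolve(K_cho, Y), and the predictive mean at a new input x is k(x, X) @ K_inv_Y.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    N = 5                                       # training samples, with L = 1 output
    X = rng.uniform(size=(N, 1))                # training inputs
    Y = np.sin(2 * np.pi * X)                   # training outputs

    def rbf(a, b, lengthscale=0.3):
        # Squared-exponential kernel matrix k(a, b), used here as a stand-in kernel.
        d2 = (a[:, None, 0] - b[None, :, 0]) ** 2
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    noise = 1e-3
    K = rbf(X, X) + noise * np.eye(N)           # noisy kernel(X, X) + likelihood variance
    K_cho = cho_factor(K, lower=True)           # the Cholesky decomposition
    K_inv_Y = cho_solve(K_cho, Y)               # ChoSolve(K_cho, Y)

    x = np.array([[0.5]])                       # one new input
    f_mean = rbf(x, X) @ K_inv_Y                # predictive mean f(x)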

abstract predict(x, y_instead_of_f=True)[source]§

Predicts the response to input x.

Parameters:
  • x (ndarray) – An (o, M) design Matrix of inputs.

  • y_instead_of_f (bool) – True to include noise in the variance of the result.

Return type:

Tuple[ndarray, ndarray]

Returns: The distribution of y or f, as a pair (mean (o, L) Matrix, std (o, L) Matrix).
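A hedged usage sketch: predict() returns a pair of (o, L) matrices, from which, for example, approximate 95% predictive intervals follow directly. Here gp is assumed to be a calibrated, concrete GPR instance and x an (o, M) design matrix.

    import numpy as np

    x = np.array([[0.1, 0.2],
                  [0.3, 0.4]])                       # o = 2 inputs of dimension M = 2
    mean, std = gp.predict(x, y_instead_of_f=True)   # include noise in the variance
    lower, upper = mean - 1.96 * std, mean + 1.96 * std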

predict_df(x, y_instead_of_f=True, is_normalized=True)[source]§

Predicts the response to input x.

Parameters:
  • x (ndarray) – An (o, M) design Matrix of inputs.

  • y_instead_of_f (bool) – True to include noise in the variance of the result.

  • is_normalized (bool) – Whether the results are normalized or not.

Return type:

DataFrame

Returns: The distribution of y or f, as a dataframe with M+L+L columns of the form (X, Mean, Predictive Std).
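A hedged sketch of the documented column layout: the returned DataFrame has M input columns followed by L mean columns and L predictive-std columns. As above, gp and x are assumptions.

    df = gp.predict_df(x, y_instead_of_f=True, is_normalized=False)  # de-normalized results
    inputs = df.iloc[:, :gp.M]                # the M X columns
    means = df.iloc[:, gp.M:gp.M + gp.L]      # the L Mean columns
    stds = df.iloc[:, gp.M + gp.L:]           # the L Predictive Std columns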

abstract predict_gradient(x, y_instead_of_f=True)[source]§

Predicts the gradient GP dy/dx (or df/dx) where self is the GP for y(x).

Parameters:
  • x (ndarray) – An (o, M) design Matrix of inputs.

  • y_instead_of_f (bool) – True to include noise in the variance of the result.

Return type:

Tuple[Tensor, Tensor]

Returns: The distribution of dy/dx or df/dx, as a pair (mean (o, L, M), cov (o, L, M, O, l, m)) if self.likelihood.is_covariant, else (mean (o, L, M), cov (o, O, L, M)).
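A hedged sketch of unpacking the result; the shapes simply restate the docstring, and gp and x are assumed as above.

    grad_mean, grad_cov = gp.predict_gradient(x, y_instead_of_f=False)
    # grad_mean: (o, L, M); grad_cov: (o, L, M, O, l, m) for a covariant likelihood,
    # otherwise (o, O, L, M).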

test()[source]§

Tests the MOGP on the test data in self._fold.test_data. Test results comprise three values for each output at each sample: the mean prediction, the std error of prediction, and the Z score of prediction (i.e. the error of prediction scaled by its std error).

Returns: The test_data results as a Frame backed by MOGP.test_result_csv.

Return type:

Frame
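The numpy sketch below (illustration only, with made-up numbers) shows the three statistics for one output column, in particular the Z score as the error of prediction scaled by its std error.

    import numpy as np

    y_true = np.array([0.2, 1.1, -0.4])        # observed test outputs
    pred_mean = np.array([0.25, 1.0, -0.5])    # predictive means
    pred_std = np.array([0.10, 0.20, 0.15])    # predictive std errors
    z_score = (pred_mean - y_true) / pred_std  # error scaled by its std error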

broadcast_parameters(is_covariant, is_isotropic)[source]§

Broadcast the data of the MOGP (including kernels) to higher dimensions. Shrinkage raises errors; unchanged dimensions silently do nothing.

Parameters:
  • is_covariant (bool) – Whether the outputs will be treated as dependent.

  • is_isotropic (bool) – Whether to restrict the kernel to be isotropic.

Return type:

GPR

Returns: self, for chaining calls.
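A hedged sketch of the chaining this return value enables; gp is assumed to be a concrete GPR instance, and calibrating after broadcasting to a covariant, anisotropic parameterization is one plausible workflow rather than a prescription from these docs.

    gp.broadcast_parameters(is_covariant=True, is_isotropic=False).calibrate()
    results = gp.test()   # Frame of mean, std error and Z score per output per test sample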