ML.INSPECT.SPECTRUM

Returns the spectrum of a fitted PCA or KernelPCA decomposition.

Syntax

ML.INSPECT.SPECTRUM(obj)

Arguments

Name Type Default Description
obj object (required) Fitted PCA or KernelPCA object, created by ML.DIM_REDUCTION.PCA or ML.DIM_REDUCTION.KERNEL_PCA and trained with ML.FIT.

Returns

A DataFrame whose columns depend on the input. For PCA: Components, Explained Variance, Explained Variance Ratio, Singular Values. For KernelPCA: Components, Eigenvalues, Explained Variance Ratio. Only Components and Explained Variance Ratio appear in both.
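For linear PCA, the four columns correspond to quantities that can be computed directly from the centered data matrix. A minimal NumPy sketch of those quantities (the variable names mirror scikit-learn's PCA attributes and are illustrative, not part of the add-in):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # toy data: 100 samples, 5 features
Xc = X - X.mean(axis=0)         # PCA centers the data first

# A thin SVD of the centered data yields everything the spectrum table holds.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

n_components = 2
components = Vt[:n_components]                 # "Components" (one row per component)
explained_variance = s**2 / (len(X) - 1)       # per-component variance
explained_variance_ratio = explained_variance / explained_variance.sum()
singular_values = s[:n_components]             # "Singular Values"

print(explained_variance_ratio[:n_components])
```

Note that the ratio is taken against the total variance of the data, so the kept components' ratios sum to less than 1 unless every component is kept.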

When to use

Use ML.INSPECT.SPECTRUM to read the spectrum of a fitted dimensionality-reduction object — either a linear PCA or a KernelPCA. One formula serves both: pass the fitted object handle and the function returns the table that matches what scikit-learn exposes for that decomposition.

A common use is to chart the spectrum: pull the table into Excel, plot the explained-variance ratio (or the eigenvalues, for kernel PCA) against the component index, and look for the elbow — the point past which adding more components stops capturing appreciable variance.
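The elbow read described above is usually eyeballed from the chart, but it can also be sketched numerically. Here the cutoff is taken naively as the smallest component count whose cumulative ratio passes a threshold; this is a common heuristic, not something the add-in computes, and the ratios below are made up for illustration:

```python
import numpy as np

# Hypothetical spectrum: explained-variance ratios as the table would list them.
ratios = np.array([0.60, 0.20, 0.12, 0.05, 0.03])

cumulative = np.cumsum(ratios)
# Smallest k whose components together capture at least 90% of the variance.
k = int(np.searchsorted(cumulative, 0.90) + 1)
print(k, cumulative[:k])
```

With these ratios the first three components clear the 90% threshold, which matches what the elbow in a chart of the same numbers would suggest.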

Examples

Read the spectrum of a linear PCA constructed in B40 and fitted in B41 (B15 holds the training data):

=ML.DIM_REDUCTION.PCA(2)
=ML.FIT(B40, B15)
=ML.INSPECT.SPECTRUM(B41)

Or the spectrum of a Kernel PCA with an RBF kernel, constructed in I4 and fitted in I5:

=ML.DIM_REDUCTION.KERNEL_PCA(2, "rbf", , 15)
=ML.FIT(I4, B14)
=ML.INSPECT.SPECTRUM(I5)

The PCA result spills as four columns; the KernelPCA result spills as three. For KernelPCA the Explained Variance Ratio column sums to 1.0 over the kept components; for linear PCA it sums to at most 1.0, reaching 1.0 only when every component is kept.

Remarks

  • The model passed in must be a PCA or KernelPCA object (created by ML.DIM_REDUCTION.PCA or ML.DIM_REDUCTION.KERNEL_PCA) that has been fitted with ML.FIT. Any other estimator type, or an unfitted decomposition, raises an error.
  • The returned column set depends on the input type. Avoid charts that reference fixed column positions if you plan to swap PCA for KernelPCA (or vice versa) — the two share only the Components and Explained Variance Ratio columns.
  • For linear PCA, the explained variance and singular values come directly from the data covariance. For Kernel PCA, the eigenvalues come from the centered kernel matrix and "explained variance ratio" is computed as eigenvalue / sum(eigenvalues) over the kept components, so the column always sums to 1.0 regardless of the underlying kernel.
  • Kernel PCA eigenvectors can flip sign across platforms; eigenvalues do not. If you want a stable summary of a fitted Kernel PCA, prefer this function over inspecting the raw projection coordinates.
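The kernel-PCA remarks above can be checked with a small NumPy sketch: the eigenvalues of the double-centered kernel matrix are sign-stable, and the ratio eigenvalue / sum(eigenvalues) sums to 1 over the kept components by construction. The RBF kernel, gamma value, and data here are illustrative, and this mirrors the formula stated in the remark rather than the add-in's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))

# RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
gamma = 0.5
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# Double-center K, as kernel PCA does before the eigendecomposition.
n = len(K)
one_n = np.full((n, n), 1.0 / n)
Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# Keep the top eigenvalues; eigvalsh returns them in ascending order.
eigvals = np.linalg.eigvalsh(Kc)[::-1][:2]
ratio = eigvals / eigvals.sum()
print(ratio)
```

However the kernel is chosen, the ratio column built this way sums to 1, which is why the remark notes it holds "regardless of the underlying kernel".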

Compatible with

Constructor Description
ML.DIM_REDUCTION.PCA Creates a Principal Component Analysis object.
ML.DIM_REDUCTION.KERNEL_PCA Creates a Kernel Principal Component Analysis object.

See also