ML.INSPECT.COEFFICIENTS

Returns the per-feature coefficients of a fitted linear model.

Syntax

ML.INSPECT.COEFFICIENTS(model)

Arguments

Name   Type    Default     Description
model  object  (required)  Fitted linear model with a coef_ attribute (e.g. created by ML.REGRESSION.LINEAR/RIDGE/LASSO and trained with ML.FIT).

Returns

A DataFrame with columns [feature, coefficient], one row per training feature.

When to use

Use ML.INSPECT.COEFFICIENTS to read the per-feature coef_ weights from a fitted linear model — LinearRegression, Ridge, Lasso, LogisticRegression (binary), and similar single-output estimators. Each coefficient describes the direction and strength of one feature's contribution to the model's prediction.
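The shape of the result can be sketched in Python, assuming a scikit-learn-style model behind the scenes (the data, column names, and Ridge parameters below are illustrative, not part of the function's API):

```python
import pandas as pd
from sklearn.linear_model import Ridge

# Toy training data with two named features.
X = pd.DataFrame({"rooms": [2, 3, 4, 5], "age": [30, 20, 10, 5]})
y = [150, 200, 260, 330]

model = Ridge(alpha=1.0).fit(X, y)

# What ML.INSPECT.COEFFICIENTS returns: one row per training feature,
# with columns [feature, coefficient].
coefs = pd.DataFrame({
    "feature": X.columns,
    "coefficient": model.coef_,
})
print(coefs)
```

Because the model was fitted on a DataFrame, the feature column carries the original column names ("rooms", "age") rather than generic placeholders.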

A common use is to compare regularized models side-by-side: train Linear, Ridge, and Lasso on the same data, pull each model's coefficients into adjacent columns, and chart them together to see how regularization shrinks (Ridge) or zeroes out (Lasso) individual feature weights.
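The side-by-side comparison looks roughly like this in Python, again assuming a scikit-learn-style backend (the synthetic data and alpha values are illustrative):

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Synthetic data: 6 features, only 3 of them informative.
X, y = make_regression(n_samples=200, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=10.0),
    "lasso": Lasso(alpha=5.0),
}

# One coefficient column per model, one row per feature.
comparison = pd.DataFrame(
    {name: est.fit(X, y).coef_ for name, est in models.items()},
    index=[f"feature_{i}" for i in range(X.shape[1])],
)
print(comparison)
```

Plotting `comparison` as a clustered bar chart makes the shrinkage visible: ridge coefficients sit closer to zero than the unregularized ones, and lasso drives the uninformative features' coefficients to exactly zero.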

Examples

Train a Ridge regression on data in A2:H100 (predictors) and I2:I100 (target), then read its coefficients. Enter the formulas in K1, K2, and K3 in turn, so each can reference the cell above it:

=ML.REGRESSION.RIDGE(1.0)
=ML.FIT(K1, A2:H100, I2:I100)
=ML.INSPECT.COEFFICIENTS(K2)

Pair the result with a clustered-bar chart that uses the feature column as categories and the coefficient column as values to visualize how features contribute. Repeat for LINEAR and LASSO and add their coefficient columns as additional bar series for a side-by-side comparison.

Remarks

  • The model passed in must already be fitted and must expose a coef_ attribute. Tree-based models (Random Forest, gradient boosting) expose feature_importances_ instead — use ML.INSPECT.FEATURE_IMPORTANCES for those.
  • When the model was fitted on a DataFrame, the feature column uses the original column names. When fitted on an unnamed array, it falls back to feature_0, feature_1, … in input order.
  • Only single-output models are supported (one regression target, or binary classification). Multi-class LogisticRegression and multi-output regression raise an error — they would produce a coefficient per class per feature, which doesn't fit a single bar chart.
  • Coefficients are not on a comparable scale across features unless the inputs were scaled before fitting (e.g. via ML.PREPROCESSING.STANDARD_SCALER). Without scaling, a feature measured in millions will have a tiny coefficient even if it's important.
  • Use ML.INSPECT.INTERCEPT alongside this function to read the bias term — the prediction the model would make when every feature is exactly 0.
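The scaling caveat is easy to demonstrate in Python (a hedged sketch; the feature names and magnitudes are made up for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "revenue": rng.normal(5_000_000, 1_000_000, n),  # measured in millions
    "rating": rng.normal(4.0, 0.5, n),               # measured on a 1-5 scale
})
# Both features contribute equally strong signal (one standard
# deviation of each moves y by the same amount).
y = X["revenue"] / 1_000_000 + X["rating"] / 0.5 + rng.normal(0, 0.1, n)

raw = LinearRegression().fit(X, y).coef_
scaled = LinearRegression().fit(StandardScaler().fit_transform(X), y).coef_

print(raw)     # revenue's raw coefficient is ~1e-6: looks negligible
print(scaled)  # after scaling, both coefficients are comparable (~1.0 each)
```

The raw coefficients suggest revenue barely matters; after standardizing, both features show equal weight. This is why a coefficient bar chart is only meaningful across features when the inputs were scaled before fitting.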

See also