
verticapy.mlops.model_tracking.vExperiment#

class verticapy.mlops.model_tracking.vExperiment(experiment_name: str, test_relation: str | vDataFrame, X: str | list[str], y: str, experiment_type: Literal['auto', 'regressor', 'binary', 'multi', 'clustering'] = 'auto', experiment_table: str = '')#

Creates a vExperiment object that can be used to track native Vertica models trained as part of an experiment.

Parameters#

experiment_name: str

The name of the experiment.

test_relation: SQLRelation

Relation used to test models in the experiment. It is ignored for experiments of type clustering.

X: SQLColumns

List of the predictors. It is ignored for experiments of type clustering.

y: str

Response column. It is ignored for experiments of type clustering.

experiment_type: str, optional
The experiment type.

auto: Automatically detects the experiment type from test_relation.

regressor: Regression models can be added to the experiment.

binary: Binary classification models can be added to the experiment.

multi: Multiclass classification models can be added to the experiment.

clustering: Clustering models can be added to the experiment.

experiment_table: SQLRelation, optional

The name of the table ([schema_name.]table_name) in the database used to archive the experiment. When not specified, the experiment is not backed up in the database. When specified, the table is created if it doesn't exist. If the table already exists, the user must have SELECT, INSERT, and DELETE privileges on it.
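A minimal sketch of creating an experiment. The schema, relation, and column names ("public.titanic_test", "age", "fare", "survived", "public.my_exp_table") are hypothetical placeholders, not names from this documentation:

```python
# Hypothetical example; relation and column names are placeholders.
from verticapy.mlops.model_tracking import vExperiment

experiment = vExperiment(
    experiment_name="my_exp",
    test_relation="public.titanic_test",    # relation used to test models
    X=["age", "fare"],                      # predictor columns
    y="survived",                           # response column
    experiment_type="binary",               # or "auto" to detect from test_relation
    experiment_table="public.my_exp_table", # optional: archive in the database
)
```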

Attributes#

_model_name_list: list

The list of model names added to the experiment.

_model_id_list: list

The list of model IDs added to the experiment.

_model_type_list: list

The list of model types added to the experiment.

_parameters: list

The list of dictionaries of parameters of each added model.

_measured_metrics: list

The list of lists of measured metrics for each added model.

_metrics: list

The list of metrics used to evaluate each model. This list is determined based on the value of experiment_type at the time of object creation. Each metric is paired with 1 or -1, where 1 indicates a positive correlation between the value of the metric and the quality of the model, and -1 indicates a negative correlation (see the sketch after this list).

_user_defined_metrics: list

The list of dictionaries of user-defined metrics.
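As an illustration of the (metric, direction) pairing described under _metrics, a sketch of how such pairs could be used to pick the best value per metric. The metric names and values here are assumptions for illustration, not the documented defaults:

```python
# Hypothetical illustration: 1 means higher is better, -1 means lower is better.
metrics = [
    ("auc", 1),       # higher AUC -> better model
    ("logloss", -1),  # lower log loss -> better model
]

# Sample measured values for two models (made-up numbers).
measured = {"auc": [0.81, 0.92], "logloss": [0.40, 0.25]}

for name, direction in metrics:
    best = max(measured[name]) if direction == 1 else min(measured[name])
    print(f"best {name}: {best}")
```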

__init__(experiment_name: str, test_relation: str | vDataFrame, X: str | list[str], y: str, experiment_type: Literal['auto', 'regressor', 'binary', 'multi', 'clustering'] = 'auto', experiment_table: str = '') → None#

Methods

__init__(experiment_name, test_relation, X, y)

add_model(model[, metrics])

Adds a model to the experiment.

drop([keeping_models])

Drops all models of the experiment except those in the keeping_models list.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

list_models()

Lists the models added to the experiment.

load_best_model(metric)

Returns the best model in the experiment based on the specified metric.

plot(parameter, metric[, chart])

Draws the scatter plot of a metric versus a parameter.
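A sketch of the typical tracking workflow with these methods, continuing the hypothetical experiment created above. The training relation "public.titanic_train" and the model name are placeholders, and the model import path follows current VerticaPy conventions but may differ across versions:

```python
# Hypothetical workflow; relation and model names are placeholders.
from verticapy.machine_learning.vertica import LogisticRegression

# Train a native Vertica model and register it with the experiment.
model = LogisticRegression("public.my_logreg")
model.fit("public.titanic_train", ["age", "fare"], "survived")
experiment.add_model(model)

# Review all tracked models, then retrieve the best one by a metric.
experiment.list_models()
best_model = experiment.load_best_model("auc")

# Compare a hyperparameter against a metric across tracked models.
experiment.plot("max_iter", "auc")

# Drop the experiment's models, keeping only the best one.
experiment.drop(keeping_models=[best_model.model_name])
```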