
verticapy.machine_learning.vertica.preprocessing.Scaler#

class verticapy.machine_learning.vertica.preprocessing.Scaler(name: str = None, overwrite_model: bool = False, method: Literal['zscore', 'robust_zscore', 'minmax'] = 'zscore')#

Creates a Vertica Scaler object.

Parameters#

name: str, optional

Name of the model.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

method: str, optional

Method used to scale the data.

  • zscore:

Scaling using the Z-Score.

\[Z_{\text{score}} = \frac{x - \text{avg}}{\text{std}}\]
  • robust_zscore:

Scaling using the Robust Z-Score.

\[Z_{\text{rscore}} = \frac{x - \text{median}}{1.4826 \cdot \text{mad}}\]
  • minmax:

Normalization using the Min & Max.

\[Z_{\text{minmax}} = \frac{x - \min}{\max - \min}\]
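As a quick illustration, the three formulas can be reproduced in plain numpy. This is a sketch only; it assumes the sample standard deviation (`ddof=1`), which matches the `std_` value reported in the fitting example below:

```python
import numpy as np

x = np.array([1.0, 1.01, 1.02, 1.05, 1.024])

# Z-Score: center on the mean, divide by the standard deviation.
zscore = (x - x.mean()) / x.std(ddof=1)

# Robust Z-Score: center on the median, divide by the scaled MAD.
# The 1.4826 constant makes the MAD a consistent estimator of the
# standard deviation for normally distributed data.
mad = np.median(np.abs(x - np.median(x)))
robust_zscore = (x - np.median(x)) / (1.4826 * mad)

# Min-Max: rescale to the [0, 1] interval.
minmax = (x - x.min()) / (x.max() - x.min())
```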

Attributes#

Many attributes are created during the fitting phase.

For StandardScaler:

mean_: numpy.array

Means of the model’s features.

std_: numpy.array

Standard deviations of the model’s features.

For MinMaxScaler:

min_: numpy.array

Minimums of the model’s features.

max_: numpy.array

Maximums of the model’s features.

For RobustScaler:

median_: numpy.array

Medians of the model’s features.

mad_: numpy.array

Median absolute deviations of the model’s features.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.

Examples#

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Load data for machine learning#

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use a dummy dataset.

data = vp.vDataFrame(
    {
        "values": [1, 1.01, 1.02, 1.05, 1.024],
    }
)

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

Model Initialization#

First we import the Scaler model:

from verticapy.machine_learning.vertica import Scaler

Then we can create the model:

model = Scaler(method = "zscore")

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.

Model Fitting#

We can now fit the model:

model.fit(data)

Important

To fit a model, you can directly use the vDataFrame or the name of the relation stored in the database.

Model Parameters#

To fetch the fitted means of the model, you can use:

model.mean_
Out[6]: array([1.0208])

Similarly for standard deviation:

model.std_
Out[7]: array([0.01879362])
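These fitted values can be cross-checked in plain numpy. Note that Vertica reports the sample standard deviation (`ddof=1`), not the population one:

```python
import numpy as np

values = np.array([1, 1.01, 1.02, 1.05, 1.024])

mean = values.mean()      # 1.0208, matching model.mean_
std = values.std(ddof=1)  # 0.01879362..., matching model.std_
```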

Conversion/Transformation#

To get the scaled dataset, we can use the transform method. Let us transform the data:

model.transform(data)
      values
1     -1.10675880944634
2     -0.57466322798175
3     -0.0425676465171623
4     1.5537190978766
5     0.170270586068673
Rows: 1-5 | Column: values | Type: Float(22)

Please refer to transform() for more details on transforming a vDataFrame.

Similarly, you can perform the inverse transform to get the original features using:

model.inverse_transform(data_transformed)

Here, data_transformed is the scaled dataset returned by transform().
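The transform/inverse-transform round trip is easy to sketch in plain numpy, using the same formulas the model applies in-database (assuming the sample standard deviation, `ddof=1`):

```python
import numpy as np

values = np.array([1, 1.01, 1.02, 1.05, 1.024])
mean, std = values.mean(), values.std(ddof=1)

# Forward transform, as performed in-database by model.transform().
scaled = (values - mean) / std

# Inverse transform undoes the scaling exactly.
restored = scaled * std + mean
```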

Model Register#

In order to register the model for tracking and versioning:

model.register("model_v1")

Please refer to Model Tracking and Versioning for more details on model tracking and versioning.

Model Exporting#

To Memmodel

model.to_memmodel()

Note

MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.

The preceding methods for exporting the model use MemModel, and it is recommended to use MemModel directly.

SQL

To get the SQL query, use:

model.to_sql()
Out[8]: ['("values" - 1.0208) / 0.018793615937338']
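The expression returned by to_sql() is portable SQL, so it can be evaluated by any SQL engine, not just Vertica. As a sketch, here it is executed against an in-memory SQLite database holding the same sample column:

```python
import sqlite3

# Expression as returned by model.to_sql() above.
expr = '("values" - 1.0208) / 0.018793615937338'

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data ("values" REAL)')
con.executemany(
    "INSERT INTO data VALUES (?)",
    [(1,), (1.01,), (1.02,), (1.05,), (1.024,)],
)
scaled = [row[0] for row in con.execute(f"SELECT {expr} FROM data")]
```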

To Python

To obtain the prediction function in Python syntax, use the following code:

X = [[1]]

model.to_python()(X)
Out[10]: array([[-1.10675881]])
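Conceptually, the function returned by to_python() behaves like the following hand-written equivalent (a sketch, not the actual generated code), scaling each row of X with the fitted means and standard deviations:

```python
import numpy as np

# Fitted attributes, taken from the example above.
mean_ = np.array([1.0208])
std_ = np.array([0.018793615937338])

def scaler(X):
    # Broadcasts the per-feature mean and std across the rows of X.
    return (np.asarray(X) - mean_) / std_

scaler([[1]])  # approximately [[-1.1068]]
```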

Hint

The to_python() method is used to scale the data. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.

See also

StandardScaler : Scaler with method set as zscore.
RobustScaler : Scaler with method set as robust_zscore.
MinMaxScaler : Scaler with method set as minmax.
__init__(name: str = None, overwrite_model: bool = False, method: Literal['zscore', 'robust_zscore', 'minmax'] = 'zscore') → None#

Must be overridden in the child class

Methods

__init__([name, overwrite_model, method])

Must be overridden in the child class

contour([nbins, chart])

Draws the model's contour plot.

deployInverseSQL([key_columns, ...])

Returns the SQL code needed to deploy the inverse model.

deploySQL([X, key_columns, exclude_columns])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

fit(input_relation[, X, return_report])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

inverse_transform(vdf[, X])

Applies the Inverse Model on a vDataFrame.

register(registered_name[, raise_error])

Registers the model and adds it to in-DB Model versioning environment with a status of 'under_review'.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_memmodel()

Converts the model to an InMemory object that can be used for different types of predictions.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

transform([vdf, X])

Applies the model on a vDataFrame.

Attributes