
verticapy.machine_learning.vertica.cluster.DBSCAN#

class verticapy.machine_learning.vertica.cluster.DBSCAN(name: str = None, overwrite_model: bool = False, eps: float = 0.5, min_samples: int = 5, p: int = 2)#

[Beta Version] Creates a DBSCAN object using the DBSCAN algorithm as defined by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. This object uses pure SQL to compute the distances and neighbors, and Python to compute the cluster propagation (the non-scalable phase).

Warning

This algorithm uses a CROSS JOIN during computation and is therefore computationally expensive at O(n * n), where n is the total number of rows. To keep the CROSS JOIN as efficient as possible, the algorithm indexes the rows of the table so that the join is performed only on integer IDs. Because DBSCAN relies on the p-distance, it is highly sensitive to unnormalized data. However, DBSCAN is robust to outliers and can find non-linear clusters, making it a powerful algorithm for both outlier detection and clustering. A table is created at the end of the learning phase.
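
Because the p-distance is scale-sensitive, standardizing the input columns before fitting is usually advisable. Below is a minimal, self-contained sketch; the data, the column name x_z, and the eval-based expression are illustrative and not part of the original documentation:

import verticapy as vp

# Illustrative data; z-score the column so distances are computed on a comparable scale.
vdf = vp.vDataFrame({"x": [1.2, 1.1, 1.3, 1.5, 2.0, 2.2, 1.09, 0.9, 100.0, 102.0]})
mu, sigma = vdf["x"].mean(), vdf["x"].std()
vdf.eval("x_z", f'("x" - {mu}) / {sigma}')  # new SQL-backed standardized column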

Important

This algorithm is not Vertica Native and relies solely on SQL for attribute computation. While this model does not take advantage of the benefits provided by a model management system, including versioning and tracking, the SQL code it generates can still be used to create a pipeline.

Parameters#

name: str, optional

Name of the model. This is not a built-in model, so this name is used to build the final table.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

eps: float, optional

The radius of a neighborhood with respect to some point.

min_samples: int, optional

Minimum number of points required to form a dense region.

p: int, optional

The p of the p-distance (distance metric used during the model computation).
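
For reference, the p-distance between two points x and y is (Σ |x_i - y_i|^p)^(1/p); with p = 2 it reduces to the Euclidean distance. The following small Python sketch (not part of the VerticaPy API) illustrates the metric:

def p_distance(x, y, p=2):
    # Minkowski (p-) distance; p = 2 gives the usual Euclidean distance.
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

p_distance([1.2], [1.5], p=2)  # 0.3, below the default eps of 0.5, so these points are neighbors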

Attributes#

Many attributes are created during the fitting phase.

n_cluster_: int

Number of clusters.

p_: int

The p of the p-distance.

n_noise_: int

Number of outliers.

Note

All attributes can be accessed using the get_attributes() method.
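
For example, after fitting (see the examples below), the attributes could be retrieved as follows; the returned values depend on the data, so this is only a sketch:

model.get_attributes()              # list of available attribute names
model.get_attributes("n_cluster_")  # e.g. the number of clusters found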

Examples#

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Load data for machine learning#

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will create a small dataset.

data = vp.vDataFrame({"col":[1.2, 1.1, 1.3, 1.5, 2, 2.2, 1.09, 0.9, 100, 102]})

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use it effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

Model Initialization#

First we import the DBSCAN model:

from verticapy.machine_learning.vertica import DBSCAN

Then we can create the model:

model = DBSCAN(
    eps = 0.5,
    min_samples = 2,
    p = 2,
)

Important

As this model is not native, it solely relies on SQL statements to compute various attributes, storing them within the object. No data is saved in the database.

Model Training#

We can now fit the model:

model.fit(data, X = ["col"])

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database.
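
For instance, if the same data were stored in a table, its name could be passed directly; the relation name below is hypothetical:

model.fit("public.my_table", X = ["col"])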

Hint

For clustering and anomaly detection, the use of predictors is optional. In such cases, all available predictors are considered, which can include solely numerical variables or a combination of numerical and categorical variables, depending on the model’s capabilities.
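
For instance, omitting X uses every eligible column as a predictor (a sketch; not required for this single-column example):

model.fit(data)  # all available predictors are considered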


Prediction#

Predicting or ranking the dataset is straightforward:

model.predict()
       col (Numeric(22))    dbscan_cluster (Integer)
 1     0.9                  0
 2     1.09                 0
 3     1.1                  0
 4     1.2                  0
 5     1.3                  0
 6     1.5                  0
 7     2.0                  1
 8     2.2                  1
 9     100.0                -1
 10    102.0                -1
Rows: 1-10 | Columns: 2

As shown above, a new column has been created containing the cluster labels; outliers are assigned the label -1.

Hint

The name of the new column is optional. If not provided, it is randomly assigned.

Parameter Modification#

In order to see the parameters:

model.get_params()
{'eps': 0.5, 'min_samples': 2, 'p': 2}

And to manually change some of the parameters:

model.set_params({'min_samples': 5})
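
Calling get_params() again reflects the change; since only min_samples was modified, the expected output would be:

model.get_params()
{'eps': 0.5, 'min_samples': 5, 'p': 2}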

Model Register#

As this model is not native, it does not support model management and versioning. However, it is possible to use the SQL code it generates for deployment.
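
For instance, the deploySQL() method listed below returns the SQL code associated with the model, which can then be embedded in a larger pipeline (a sketch; the exact SQL depends on the fitted model):

model.deploySQL()  # returns the SQL code needed to deploy the model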

__init__(name: str = None, overwrite_model: bool = False, eps: float = 0.5, min_samples: int = 5, p: int = 2) None#

Must be overridden in the child class

Methods

__init__([name, overwrite_model, eps, ...])

Must be overridden in the child class

contour([nbins, chart])

Draws the model's contour plot.

deploySQL([X])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

fit(input_relation[, X, key_columns, index, ...])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

plot([max_nb_points, chart])

Draws the model.

predict()

Creates a vDataFrame of the model.

register(registered_name[, raise_error])

Registers the model and adds it to in-DB Model versioning environment with a status of 'under_review'.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

Attributes