
verticapy.machine_learning.vertica.naive_bayes.NaiveBayes¶
- class verticapy.machine_learning.vertica.naive_bayes.NaiveBayes(name: str = None, overwrite_model: bool = False, alpha: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, nbtype: Literal['auto', 'bernoulli', 'categorical', 'multinomial', 'gaussian'] = 'auto')¶
Creates a NaiveBayes object using the Vertica Naive Bayes algorithm. It is a “probabilistic classifier” based on applying Bayes’ theorem with strong (naïve) independence assumptions between the features.

Parameters¶
- name: str, optional
Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
If set to True, training a model with the same name as an existing model overwrites the existing model.
- alpha: float, optional
A float that specifies the use of Laplace smoothing if the event model is categorical, multinomial, or Bernoulli.
- nbtype: str, optional
Naive Bayes type (see the sketch after the list below):
- auto:
Vertica NaiveBayes objects treat columns according to data type:
- FLOAT:
values are assumed to follow some Gaussian distribution.
- INTEGER:
values are assumed to belong to one multinomial distribution.
- CHAR/VARCHAR:
values are assumed to follow some categorical distribution. The string values stored in these columns must be no greater than 128 characters.
- BOOLEAN:
values are treated as categorical with two values.
- bernoulli:
Casts the variables to boolean.
- categorical:
Casts the variables to categorical.
- multinomial:
Casts the variables to integer.
- gaussian:
Casts the variables to float.
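As an illustrative sketch, the event model could be forced rather than inferred (the alpha value here is arbitrary, and NaiveBayes is imported as shown in the Examples below):

model = NaiveBayes(nbtype = "gaussian", alpha = 0.5)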
Attributes¶
Many attributes are created during the fitting phase.
- prior_: numpy.array
The model's class probabilities.
- attributes_: list of dict
A list of the model's attributes. Each feature is represented by a dictionary whose contents differ based on the distribution.
- classes_: numpy.array
The class labels.
Note
All attributes can be accessed using the get_attributes() method.
Note
Several other attributes can be accessed by using the get_vertica_attributes() method.
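For example, once a model has been trained (see the Examples below), the priors can be read back like this (a minimal sketch):

model.get_attributes("prior_")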
Examples¶

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Load data for machine learning¶
We import verticapy:

import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the iris dataset.
import verticapy.datasets as vpd

data = vpd.load_iris()
(Output: a preview of the iris vDataFrame with columns SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm, and Species. Rows: 1-100 | Columns: 5)

Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_iris()

train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
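As a minimal sketch of that approach (the table names are illustrative):

train.to_db("my_schema.iris_train", relation_type = "table")
test.to_db("my_schema.iris_test", relation_type = "table")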
Balancing the Dataset¶

In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy's balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.
from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "Species",
    method = "hybrid",
)
Note
With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.
Hint
Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.
Model Initialization¶
First we import the NaiveBayes model:

from verticapy.machine_learning.vertica import NaiveBayes
Then we can create the model:
model = NaiveBayes()
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Important
The model name is crucial for the model management system and versioning. It's highly recommended to provide a name if you plan to reuse the model later.
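For instance, a named model could be created as follows (a minimal sketch; the schema and model name are illustrative):

model = NaiveBayes("my_schema.nb_iris", overwrite_model = True)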
Model Training¶
We can now fit the model:
model.fit(
    train,
    [
        "SepalLengthCm",
        "SepalWidthCm",
        "PetalLengthCm",
        "PetalWidthCm",
    ],
    "Species",
    test,
)

=======
details
=======
index |   predictor   |   type
------+---------------+----------
  0   |    Species    | ResponseC
  1   | SepalLengthCm | Gaussian
  2   | SepalWidthCm  | Gaussian
  3   | PetalLengthCm | Gaussian
  4   | PetalWidthCm  | Gaussian

=====
prior
=====
     class      | probability
----------------+------------
Iris-setosa     |   0.37356
Iris-versicolor |   0.21264
Iris-virginica  |   0.41379

===========
call_string
===========
naive_bayes('"public"."_verticapy_tmp_naivebayes_v_demo_4033768055a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_4040d6e055a411ef880f0242ac120002_"', '"species"', '"SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"' USING PARAMETERS exclude_columns='', alpha=1)

====================
gaussian.Iris-setosa
====================
index |   mu    | sigma_sq
------+---------+---------
  1   | 4.32000 |  0.82350
  2   | 3.88000 |  0.35944
  3   | 3.17385 |  4.26602
  4   | 3.38615 | 14.06652

========================
gaussian.Iris-versicolor
========================
index |   mu    | sigma_sq
------+---------+---------
  1   | 6.00270 |  0.27860
  2   | 2.82703 |  0.08147
  3   | 4.32432 |  0.22467
  4   | 1.36216 |  0.03853

=======================
gaussian.Iris-virginica
=======================
index |   mu    | sigma_sq
------+---------+---------
  1   | 5.66806 |  1.53291
  2   | 3.65972 |  0.80413
  3   | 7.17361 |  4.21436
  4   | 1.92361 |  0.05760

===============
Additional Info
===============
       Name        |  Value
-------------------+--------
alpha              | 1.00000
accepted_row_count | 174
rejected_row_count | 0
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
Metrics¶

We can get the entire report using:
model.report()
metric | Iris-setosa | Iris-versicolor | Iris-virginica | avg_macro | avg_weighted | avg_micro
auc | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | [null]
prc_auc | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | [null]
accuracy | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
log_loss | 0.00331574265176705 | 0.00704123192889915 | 0.00548282045993375 | 0.005279931680199984 | 0.005007476613146998 | [null]
precision | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
recall | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
f1_score | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
mcc | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
informedness | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
markedness | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
csi | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
Rows: 1-11 | Columns: 7

Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g., model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
metric | Iris-setosa | Iris-versicolor | Iris-virginica | avg_macro | avg_weighted | avg_micro
auc | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | [null]
prc_auc | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | [null]
accuracy | 0.9772727272727273 | 0.9772727272727273 | 1.0 | 0.9848484848484849 | 0.9834710743801653 | 0.9848484848484849
log_loss | 0.00331574265176705 | 0.00704123192889915 | 0.00548282045993375 | 0.005279931680199984 | 0.005007476613146998 | [null]
precision | 0.95 | 0.9285714285714286 | 1.0 | 0.9595238095238096 | 0.9573051948051948 | 0.9565217391304348
recall | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
f1_score | 0.9743589743589743 | 0.962962962962963 | 1.0 | 0.9791073124406457 | 0.977984977984978 | 0.9777777777777777
mcc | 0.9549869109050657 | 0.9479543826159238 | 1.0 | 0.9676470978403299 | 0.9651854154818923 | 0.966841563388569
informedness | 0.96 | 0.967741935483871 | 1.0 | 0.9759139784946237 | 0.9731964809384164 | 0.9772727272727273
markedness | 0.95 | 0.9285714285714286 | 1.0 | 0.9595238095238096 | 0.9573051948051948 | 0.9565217391304348
csi | 0.95 | 0.9285714285714286 | 1.0 | 0.9595238095238096 | 0.9573051948051948 | 0.9565217391304348
Rows: 1-11 | Columns: 7
You can also use the NaiveBayes.score function to compute any classification metric. The default metric is the accuracy:

model.score(metric = "f1", average = "macro")
Out[4]: 1.0
Note
For multi-class scoring, verticapy allows the flexibility to use three averaging techniques: micro, macro and weighted. Please refer to this link for more details on how they are calculated.
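As an illustrative sketch, the same metric could be computed with the other two averaging techniques:

model.score(metric = "f1", average = "micro")
model.score(metric = "f1", average = "weighted")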
Prediction¶

Prediction is straightforward:
model.predict(
    test,
    [
        "SepalLengthCm",
        "SepalWidthCm",
        "PetalLengthCm",
        "PetalWidthCm",
    ],
    "prediction",
)
(Output: the test vDataFrame with a new prediction column; the predicted labels match the Species column on all displayed rows. Rows: 1-44 | Columns: 6)

Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
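A minimal sketch of that shorter call (assuming the test set's column names match the model's predictors; the output column name is illustrative):

model.predict(test, name = "prediction_2")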
Probabilities¶

It is also easy to get the model's probabilities:
model.predict_proba(
    test,
    [
        "SepalLengthCm",
        "SepalWidthCm",
        "PetalLengthCm",
        "PetalWidthCm",
    ],
    "prediction",
)
(Output: the test vDataFrame with the prediction column plus one probability column per class: prediction_irissetosa, prediction_irisversicolor, prediction_irisvirginica. Rows: 1-44 | Columns: 9)

Note
Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
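For example, a single class probability could be added as follows (a minimal sketch; the output column name is illustrative):

model.predict_proba(test, name = "proba_setosa", pos_label = "Iris-setosa")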
Confusion Matrix¶

You can obtain the confusion matrix:

model.confusion_matrix()
Out[5]:
array([[19,  0,  0],
       [ 0, 13,  0],
       [ 0,  0, 12]])
Hint
In the context of multi-class classification, you typically work with an overall confusion matrix that summarizes the classification efficiency across all classes. However, you have the flexibility to specify a pos_label and adjust the cutoff threshold. In this case, a binary confusion matrix is computed, where the chosen class is treated as the positive class, allowing you to evaluate its efficiency as if it were a binary classification problem.

model.confusion_matrix(pos_label = "Iris-setosa", cutoff = 0.6)
Out[6]:
array([[25,  0],
       [ 0, 19]])
Note
In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model's performance to meet specific needs.
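To see how the main metrics evolve as this threshold moves, the Cutoff curve can be drawn for a chosen class (an illustrative sketch):

model.cutoff_curve(pos_label = "Iris-setosa")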
Main Plots (Classification Curves)¶

Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.
Most of the classification curves can be found in the Machine Learning - Classification Curve.
For example, let’s draw the model’s ROC curve.
model.roc_curve(pos_label = "Iris-setosa")
Important
Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.
Hint
In binary classification, various curves can be easily plotted. However, in multi-class classification, it's important to select the pos_label, representing the class to be treated as positive when drawing the curve.
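For instance, the PRC curve for another class could be drawn like this (an illustrative sketch):

model.prc_curve(pos_label = "Iris-versicolor")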
Other Plots¶

The contour plot is another useful plot that can be produced for models with two predictors.
model.contour(pos_label = "Iris-setosa")
Important
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
Parameter Modification¶
In order to see the parameters:
model.get_params()
Out[7]: {'alpha': 1.0, 'nbtype': 'auto'}
And to manually change some of the parameters:
model.set_params({'alpha': 0.9})
Model Register¶
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting¶
To MemModel
model.to_memmodel()
Note
MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.

The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
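As a minimal sketch (MemModel classifiers expose a predict() method; the input values are arbitrary):

mmodel = model.to_memmodel()
mmodel.predict([[5, 2, 3, 1]])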
To SQL

You can get the SQL code by:
model.to_sql()
Out[9]: 'CASE WHEN "SepalLengthCm" IS NULL OR "SepalWidthCm" IS NULL OR "PetalLengthCm" IS NULL OR "PetalWidthCm" IS NULL THEN NULL WHEN 0.3222195606763497 * EXP(- POWER("SepalLengthCm" - 5.66805555555555, 2) / 3.06581768388106) * 0.4448842886396511 * EXP(- POWER("SepalWidthCm" - 3.65972222222222, 2) / 1.60825899843508) * 0.19433187084595704 * EXP(- POWER("PetalLengthCm" - 7.17361111111111, 2) / 8.42872848200312) * 1.6622064385805801 * EXP(- POWER("PetalWidthCm" - 1.92361111111111, 2) / 0.1152073552425672) * 0.413793103448276 >= 0.4396208322277158 * EXP(- POWER("SepalLengthCm" - 4.32, 2) / 1.64700000000002) * 0.6654238662946499 * EXP(- POWER("SepalWidthCm" - 3.88, 2) / 0.718875) * 0.1931516474222166 * EXP(- POWER("PetalLengthCm" - 3.17384615384615, 2) / 8.53204807692312) * 0.1063693901905793 * EXP(- POWER("PetalWidthCm" - 3.38615384615384, 2) / 28.133048076923) * 0.373563218390805 AND 0.3222195606763497 * EXP(- POWER("SepalLengthCm" - 5.66805555555555, 2) / 3.06581768388106) * 0.4448842886396511 * EXP(- POWER("SepalWidthCm" - 3.65972222222222, 2) / 1.60825899843508) * 0.19433187084595704 * EXP(- POWER("PetalLengthCm" - 7.17361111111111, 2) / 8.42872848200312) * 1.6622064385805801 * EXP(- POWER("PetalWidthCm" - 1.92361111111111, 2) / 0.1152073552425672) * 0.413793103448276 >= 0.7558170785361528 * EXP(- POWER("SepalLengthCm" - 6.0027027027027, 2) / 0.557207207207246) * 1.3976785034573909 * EXP(- POWER("SepalWidthCm" - 2.82702702702703, 2) / 0.1629429429429478) * 0.8416622377514259 * EXP(- POWER("PetalLengthCm" - 4.32432432432432, 2) / 0.449339339339342) * 2.032445245252479 * EXP(- POWER("PetalWidthCm" - 1.36216216216216, 2) / 0.0770570570570574) * 0.21264367816092 THEN \'Iris-virginica\' WHEN 0.7558170785361528 * EXP(- POWER("SepalLengthCm" - 6.0027027027027, 2) / 0.557207207207246) * 1.3976785034573909 * EXP(- POWER("SepalWidthCm" - 2.82702702702703, 2) / 0.1629429429429478) * 0.8416622377514259 * EXP(- POWER("PetalLengthCm" - 4.32432432432432, 2) / 0.449339339339342) * 2.032445245252479 * EXP(- POWER("PetalWidthCm" - 1.36216216216216, 2) / 0.0770570570570574) * 0.21264367816092 >= 0.4396208322277158 * EXP(- POWER("SepalLengthCm" - 4.32, 2) / 1.64700000000002) * 0.6654238662946499 * EXP(- POWER("SepalWidthCm" - 3.88, 2) / 0.718875) * 0.1931516474222166 * EXP(- POWER("PetalLengthCm" - 3.17384615384615, 2) / 8.53204807692312) * 0.1063693901905793 * EXP(- POWER("PetalWidthCm" - 3.38615384615384, 2) / 28.133048076923) * 0.373563218390805 THEN \'Iris-versicolor\' ELSE \'Iris-setosa\' END'
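Since this is plain SQL, the expression can be embedded in a query; a minimal sketch (the relation name public.iris is illustrative):

sql = f"SELECT {model.to_sql()} AS prediction FROM public.iris;"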
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[5, 2, 3, 1]]

model.to_python()(X)
Out[11]: array(['Iris-setosa'], dtype='<U11')
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
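For example, in-memory probabilities could be retrieved via the return_proba parameter listed in the Methods table below (a minimal sketch, reusing X from above):

model.to_python(return_proba = True)(X)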
- __init__(name: str = None, overwrite_model: bool = False, alpha: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, nbtype: Literal['auto', 'bernoulli', 'categorical', 'multinomial', 'gaussian'] = 'auto') → None¶
Must be overridden in the child class
Methods

__init__([name, overwrite_model, alpha, nbtype]): Must be overridden in the child class.
classification_report([metrics, cutoff, ...]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).
confusion_matrix([pos_label, cutoff]): Computes the model confusion matrix.
contour([pos_label, nbins, chart]): Draws the model's contour plot.
cutoff_curve([pos_label, nbins, show, chart]): Draws the model Cutoff curve.
deploySQL([X, pos_label, cutoff, allSQL]): Returns the SQL code needed to deploy the model.
does_model_exists(name[, raise_error, ...]): Checks whether the model is stored in the Vertica database.
drop(): Drops the model from the Vertica database.
export_models(name, path[, kind]): Exports machine learning models.
fit(input_relation, X, y[, test_relation, ...]): Trains the model.
get_attributes([attr_name]): Returns the model attributes.
get_match_index(x, col_list[, str_check]): Returns the matching index.
get_params(): Returns the parameters of the model.
get_plotting_lib([class_name, chart, ...]): Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
get_vertica_attributes([attr_name]): Returns the model Vertica attributes.
import_models(path[, schema, kind]): Imports machine learning models.
lift_chart([pos_label, nbins, show, chart]): Draws the model Lift Chart.
prc_curve([pos_label, nbins, show, chart]): Draws the model PRC curve.
predict(vdf[, X, name, cutoff, inplace]): Predicts using the input relation.
predict_proba(vdf[, X, name, pos_label, inplace]): Returns the model's probabilities using the input relation.
register(registered_name[, raise_error]): Registers the model and adds it to the in-DB Model versioning environment with a status of 'under_review'.
report([metrics, cutoff, labels, nbins]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).
roc_curve([pos_label, nbins, show, chart]): Draws the model ROC curve.
score([metric, average, pos_label, cutoff, ...]): Computes the model score.
set_params([parameters]): Sets the parameters of the model.
summarize(): Summarizes the model.
to_binary(path): Exports the model to the Vertica Binary format.
to_memmodel(): Converts the model to an InMemory object that can be used for different types of predictions.
to_pmml(path): Exports the model to PMML.
to_python([return_proba, ...]): Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
to_sql([X, return_proba, ...]): Returns the SQL code needed to deploy the model without using built-in Vertica functions.
to_tf(path): Exports the model to the Frozen Graph format (TensorFlow).

Attributes