verticapy.machine_learning.vertica.neighbors.KNeighborsClassifier#
- class verticapy.machine_learning.vertica.neighbors.KNeighborsClassifier(name: str = None, overwrite_model: bool = False, n_neighbors: int = 5, p: int = 2)#
[Beta Version] Creates a KNeighborsClassifier object using the k-nearest neighbors algorithm. This object uses pure SQL to compute all the distances and the final score.
Warning
This algorithm uses a CROSS JOIN during computation and is therefore computationally expensive at O(n * n), where n is the total number of elements. Since KNeighborsClassifier uses the p-distance, it is highly sensitive to unnormalized data.
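To reduce this sensitivity, you can scale the predictors before fitting. A minimal sketch, assuming a vDataFrame named data and the vDataFrame.normalize() method (the column names are illustrative):

# Z-score the predictors so no single column dominates the p-distance.
data.normalize(
    columns = ["fixed_acidity", "volatile_acidity", "density", "pH"],
    method = "zscore",
)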
Important
This algorithm is not Vertica Native and relies solely on SQL for attribute computation. While this model does not take advantage of the benefits provided by a model management system, including versioning and tracking, the SQL code it generates can still be used to create a pipeline.
Parameters#
- n_neighbors: int, optional
Number of neighbors to consider when computing the score.
- p: int, optional
The p of the p-distances (the distance metric used during model computation).
Attributes#
Many attributes are created during the fitting phase.
- n_neighbors_: int
Number of neighbors.
- p_: int
The p of the p-distances.
- classes_: numpy.array
The class labels.
Note
All attributes can be accessed using the get_attributes() method.
Examples#
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Load data for machine learning#
We import verticapy:

import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like "average" and "median", which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.
import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output: the winequality vDataFrame with 14 columns (fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, color). Rows: 1-100 | Columns: 14]
Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
There are multiple classes for the “quality” column. Let us filter the data for classes between 5 and 7:
data = data[data["quality"]>=5] data = data[data["quality"]<=7]
We can then balance the dataset to ensure equal representation:
data = data.balance(column = "quality", x = 1)
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables, as sketched below. This will help enhance the overall performance of the process.
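A minimal sketch of that approach, assuming a writable schema named my_schema and the relation_type parameter of to_db():

# Persist the split as tables so later queries do not recompute it.
train.to_db("my_schema.train_set", relation_type = "table")
test.to_db("my_schema.test_set", relation_type = "table")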
Balancing the Dataset#
In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the balancing method, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy's balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.
from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)
Note
With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.
Hint
Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.
Model Initialization#
First we import the KNeighborsClassifier model:

from verticapy.machine_learning.vertica import KNeighborsClassifier
Then we can create the model:
model = KNeighborsClassifier(
    n_neighbors = 10,
    p = 2,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Model Training#
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "density",
        "pH",
    ],
    "quality",
    test,
)
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database, as shown in the sketch below. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
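For instance, assuming the training data had been saved as a table (the table name below is hypothetical), the same call works with a relation name:

# Train from a relation name instead of a vDataFrame.
model.fit(
    "public.winequality_train",
    ["fixed_acidity", "volatile_acidity", "density", "pH"],
    "quality",
)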
Important

As this model is not native, it solely relies on SQL statements to compute various attributes, storing them within the object. No data is saved in the database.
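You can inspect those attributes directly; a short sketch (the values shown are illustrative):

model.get_attributes()            # lists the available attribute names
model.get_attributes("classes_")  # e.g. array([5, 6, 7])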
Metrics#
We can get the entire report using:
model.report()
metric         value
auc            0.6612807131745315
prc_auc        0.5385520120216585
accuracy       0.657556270096463
log_loss       0.269525468230416
precision      0.6428571428571429
recall         0.19313304721030042
f1_score       0.297029702970297
mcc            0.19736562886403478
informedness   0.1288656950252105
markedness     0.3022774327122155
csi            0.1744186046511628
Rows: 1-11 | Columns: 2
Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
metric         value
auc            0.6612807131745315
prc_auc        0.5385520120216585
accuracy       0.5514469453376206
log_loss       0.269525468230416
precision      0.4470046082949309
recall         0.8326180257510729
f1_score       0.5817091454272865
mcc            0.2272905313259077
informedness   0.21565144477420928
markedness     0.2395577997842926
csi            0.41014799154334036
Rows: 1-11 | Columns: 2

You can also use the KNeighborsClassifier.score function to compute any classification metric. The default metric is the accuracy:

model.score(metric = "f1", average = "macro")
Out[4]: 0.5095468273527024
Note
For multi-class scoring, verticapy allows the flexibility to use three averaging techniques: micro, macro, and weighted. Please refer to this link for more details on how they are calculated.
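As a quick sketch, the same score() call accepts each averaging mode:

model.score(metric = "f1", average = "micro")
model.score(metric = "f1", average = "weighted")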
Prediction#
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "density",
        "pH",
    ],
    "prediction",
)
[Output: the test vDataFrame with a new prediction column (Integer) appended. Rows: 1-100 | Columns: 15]
Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
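A short sketch of that second form, assuming the column names already match:

# Works when the vDataFrame column names match the model's predictors.
model.predict(test, name = "prediction")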
Probabilities#
It is also easy to get the model's probabilities:
model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "density",
        "pH",
    ],
    "prediction",
)
[Output: the test vDataFrame with the prediction column and one probability column per class (prediction_5, prediction_6, ...) appended. Rows: 1-100 | Columns: 17]
Note
Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
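For example, a sketch that adds only the probability of class 5 (the output column name below is arbitrary):

model.predict_proba(test, name = "prob_5", pos_label = "5")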
Confusion Matrix#
You can obtain the confusion matrix of your choice by specifying the desired cutoff.
model.confusion_matrix(cutoff = 0.5)
Out[5]:
array([[101,  52],
       [ 63, 109]])
Hint
In the context of multi-class classification, you typically work with an overall confusion matrix that summarizes the classification efficiency across all classes. However, you have the flexibility to specify a pos_label and adjust the cutoff threshold. In this case, a binary confusion matrix is computed, where the chosen class is treated as the positive class, allowing you to evaluate its efficiency as if it were a binary classification problem.

model.confusion_matrix(pos_label = "5", cutoff = 0.6)
Out[6]:
array([[388,  22],
       [166,  24]])
Note
In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model's performance to meet specific needs.
Main Plots (Classification Curves)#
Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.
Most of the classification curves can be found in the Machine Learning - Classification Curve.
For example, let’s draw the model’s ROC curve.
model.roc_curve(pos_label = "5")
Important
Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.
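For instance, a sketch drawing the same ROC curve on a coarser grid:

# Fewer bins: faster, but less precise.
model.roc_curve(pos_label = "5", nbins = 100)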
Hint

In binary classification, various curves can be easily plotted. However, in multi-class classification, it's important to select the pos_label, representing the class to be treated as positive when drawing the curve.
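A sketch with a different positive class and curve type:

# Treat class "6" as the positive class for the PRC curve.
model.prc_curve(pos_label = "6")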
Other Plots#
The contour plot is another useful plot that can be produced for models with two predictors.
model.contour(pos_label = "5")
Important
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
Parameter Modification#
In order to see the parameters:
model.get_params()
Out[7]: {'n_neighbors': 10, 'p': 2}
And to manually change some of the parameters:
model.set_params({'n_neighbors': 8})
Model Register#
As this model is not native, it does not support model management and versioning. However, it is possible to use the SQL code it generates for deployment.
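A minimal sketch of that route, using the to_sql() method listed in the Methods table below:

# SQL expression for in-database scoring, usable inside a larger pipeline.
model.to_sql()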
Model Exporting#
It is not possible to export this type of model, but you can still examine the SQL code it generates by using the deploySQL() method.
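For example:

# Print the SQL used to deploy the model.
print(model.deploySQL())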
- __init__(name: str = None, overwrite_model: bool = False, n_neighbors: int = 5, p: int = 2) → None#
Must be overridden in the child class
Methods#

__init__([name, overwrite_model, n_neighbors, p]): Must be overridden in the child class.
classification_report([metrics, cutoff, ...]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1, ...).
confusion_matrix([pos_label, cutoff]): Computes the model confusion matrix.
contour([pos_label, nbins, chart]): Draws the model's contour plot.
cutoff_curve([pos_label, nbins, show, chart]): Draws the model Cutoff curve.
deploySQL([X, test_relation, predict, ...]): Returns the SQL code needed to deploy the model.
does_model_exists(name[, raise_error, ...]): Checks whether the model is stored in the Vertica database.
drop(): KNeighborsClassifier models are not stored in the Vertica DB.
export_models(name, path[, kind]): Exports machine learning models.
fit(input_relation, X, y[, test_relation, ...]): Trains the model.
get_attributes([attr_name]): Returns the model attributes.
get_match_index(x, col_list[, str_check]): Returns the matching index.
get_params(): Returns the parameters of the model.
get_plotting_lib([class_name, chart, ...]): Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
get_vertica_attributes([attr_name]): Returns the model Vertica attributes.
import_models(path[, schema, kind]): Imports machine learning models.
lift_chart([pos_label, nbins, show, chart]): Draws the model Lift Chart.
prc_curve([pos_label, nbins, show, chart]): Draws the model PRC curve.
predict(vdf[, X, name, cutoff, inplace]): Predicts using the input relation.
predict_proba(vdf[, X, name, pos_label, inplace]): Returns the model's probabilities using the input relation.
register(registered_name[, raise_error]): Registers the model and adds it to the in-DB model versioning environment with a status of 'under_review'.
report([metrics, cutoff, labels, nbins]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1, ...).
roc_curve([pos_label, nbins, show, chart]): Draws the model ROC curve.
score([metric, average, pos_label, cutoff, ...]): Computes the model score.
set_params([parameters]): Sets the parameters of the model.
summarize(): Summarizes the model.
to_binary(path): Exports the model to the Vertica Binary format.
to_pmml(path): Exports the model to PMML.
to_python([return_proba, ...]): Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
to_sql([X, return_proba, ...]): Returns the SQL code needed to deploy the model without using built-in Vertica functions.
to_tf(path): Exports the model to the Frozen Graph format (TensorFlow).

Attributes#