verticapy.machine_learning.vertica.linear_model.LogisticRegression#
- class verticapy.machine_learning.vertica.linear_model.LogisticRegression(name: str = None, overwrite_model: bool = False, penalty: Literal['none', 'l1', 'l2', 'enet', None] = 'none', tol: float = 1e-06, C: int | float | Decimal = 1.0, max_iter: int = 100, solver: Literal['newton', 'bfgs', 'cgd'] = 'newton', l1_ratio: float = 0.5, fit_intercept: bool = True)#
Creates a LogisticRegression object using the Vertica Logistic Regression algorithm.

Parameters#
- name: str, optional
Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
If set to True, training a model with the same name as an existing model overwrites the existing model.
- penalty: str, optional
Determines the method of regularization.
- none: No regularization.
- l1: L1 regularization.
- l2: L2 regularization.
- enet: Combination of L1 and L2 regularization.
- tol: float, optional
Convergence tolerance used to determine whether the algorithm has reached the specified accuracy result.
- C: PythonNumber, optional
The regularization parameter value. The value must be non-negative.
- max_iter: int, optional
The maximum number of iterations the algorithm performs before stopping, even if it has not reached the specified accuracy result.
- solver: str, optional
The optimizer method used to train the model.
- newton: Newton's method.
- bfgs: Broyden-Fletcher-Goldfarb-Shanno (BFGS).
- cgd: Coordinate Gradient Descent.
- l1_ratio: float, optional
ENet mixture parameter that defines the ratio of L1 versus L2 regularization.
- fit_intercept: bool, optional
Specifies whether the model includes an intercept. If set to False, no intercept is used in training the model. Note that setting fit_intercept to False does not work well with the BFGS optimizer.
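For example, an elastic-net model might be instantiated as follows (a minimal sketch; the parameter values are illustrative, and CGD is typically the solver paired with L1/ENet penalties):

from verticapy.machine_learning.vertica import LogisticRegression

# Hypothetical configuration mixing L1 and L2 regularization equally.
model = LogisticRegression(
    penalty = "enet",
    C = 1.0,
    l1_ratio = 0.5,
    solver = "cgd",
)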
Attributes#
Many attributes are created during the fitting phase.
- coef_: numpy.array
The regression coefficients. The order of coefficients is the same as the order of columns used during the fitting phase.
- intercept_: float
The expected value of the dependent variable when all independent variables are zero, serving as the baseline or constant term in the model.
- features_importance_: numpy.array
The importance of each feature, computed from the model coefficients, which are normalized based on their range; an activation function then calculates the final score. You must call the features_importance() method once to compute the values; subsequent calls reuse the computed result.
- classes_: numpy.array
The class labels.
Note
All attributes can be accessed using the get_attributes() method.
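For example, once a model has been fitted (as in the Examples below), the coefficients might be retrieved with:

model.get_attributes("coef_")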
Note
Several other attributes can be accessed using the get_vertica_attributes() method.

Examples#
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Load data for machine learning#
We import verticapy:
import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like "average" and "median", which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.
For this example, we will use the winequality dataset.
import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output: the first 100 rows of the winequality vDataFrame. Columns: fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, color. Rows: 1-100 | Columns: 14]
Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the
vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.
data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the
vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process, as shown in the sketch below.
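A minimal sketch of materializing the split, assuming vDataFrame.to_db's name and relation_type parameters (the table names are illustrative):

# Save both splits as regular tables to avoid re-running the seeded split.
train.to_db("my_schema.train_materialized", relation_type = "table")
test.to_db("my_schema.test_materialized", relation_type = "table")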
Balancing the Dataset#
In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the
balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy's balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.
To balance the dataset, use the following syntax.
from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)
Note
With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.
Hint
Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.
Model Initialization#
First we import the LogisticRegression model:
from verticapy.machine_learning.vertica import LogisticRegression
Then we can create the model:
model = LogisticRegression(
    tol = 1e-6,
    max_iter = 100,
    solver = 'newton',
    fit_intercept = True,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
Model Training#
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
    test,
)
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.

Features Importance#
We can conveniently get the features importance:
result = model.features_importance()
Note
For LinearModel, feature importance is computed using the coefficients. These coefficients are then normalized using the feature distribution. An activation function is applied to get the final score.

Metrics#
We can get the entire report using:
model.report()
metric        | value
auc           | 0.7180360774873755
prc_auc       | 0.3437908329067543
accuracy      | 0.8003084040092521
log_loss      | 0.192819312539489
precision     | 0.40476190476190477
recall        | 0.06772908366533864
f1_score      | 0.11604095563139931
mcc           | 0.09781667631788643
informedness  | 0.043828510051571845
markedness    | 0.21830772149497246
csi           | 0.06159420289855073
Rows: 1-11 | Columns: 2
Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g.
model.report(metrics = ["auc", "accuracy"]).
For classification models, we can easily modify the cutoff to observe the effect on different metrics:
model.report(cutoff = 0.2)
metric        | value
auc           | 0.7180360774873755
prc_auc       | 0.3437908329067543
accuracy      | 0.6738627602158828
log_loss      | 0.192819312539489
precision     | 0.3286852589641434
recall        | 0.6573705179282868
f1_score      | 0.4382470119521912
mcc           | 0.27186878823730154
informedness  | 0.3351907856147114
markedness    | 0.22050915833521256
csi           | 0.28061224489795916
Rows: 1-11 | Columns: 2
You can also use the LinearModel.score function to compute any classification metric. The default metric is the accuracy:
model.score()
Out[2]: 0.8003084040092521
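For instance, assuming the metric and cutoff parameters shown in score's signature, the F1 score at a 0.2 cutoff might be computed as:

model.score(metric = "f1", cutoff = 0.2)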
Prediction#
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the test vDataFrame with a new prediction column (Integer) appended. Rows: 1-100 | Columns: 15]
Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
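A minimal sketch of this shorthand, assuming the test set's column names match the model's predictors:

model.predict(test, name = "prediction")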
Probabilities#
It is also easy to get the model’s probabilities:
model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the test vDataFrame with prediction (Integer), prediction_0, and prediction_1 (Float) columns appended: the predicted class and the probability of each class. Rows: 1-100 | Columns: 17]
Note
Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
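For example, assuming pos_label accepts one of the class labels (here 1, the label for good wines), only that class's probability might be added:

model.predict_proba(test, name = "prob_good", pos_label = 1)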
Confusion Matrix#
You can obtain the confusion matrix of your choice by specifying the desired cutoff.
model.confusion_matrix(cutoff = 0.5)
Out[3]:
array([[1021,   25],
       [ 234,   17]])
Note
In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model's performance to meet specific needs.

Main Plots (Classification Curves)#
Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.
Most of the classification curves can be found in the Machine Learning - Classification Curve.
For example, let’s draw the model’s ROC curve.
model.roc_curve()
Important
Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.
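For instance, assuming the nbins parameter of roc_curve, a finer-grained (but more expensive) curve might be drawn with:

model.roc_curve(nbins = 1000)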
Hint
In binary classification, various curves can be easily plotted. However, in multi-class classification, it’s important to select the
pos_label, representing the class to be treated as positive when drawing the curve.

Other Plots#
If the model allows, you can also generate relevant plots. For example, classification plots can be found in the Machine Learning - Classification Plots.
model.plot()
Important
The plotting feature is typically suitable for models with fewer than three predictors.
A contour plot is another useful plot that can be produced for models with two predictors.
model.contour()
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
Parameter Modification#
In order to see the parameters:
model.get_params()
Out[4]:
{'penalty': 'none',
 'tol': 1e-06,
 'max_iter': 100,
 'solver': 'newton',
 'fit_intercept': True}
And to manually change some of the parameters:
model.set_params({'tol': 0.001})
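A quick sketch to verify the change, assuming set_params updates the model's parameter dictionary in place:

model.get_params()["tol"]  # expected to return 0.001 after the update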
Model Register#
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting#
To Memmodel
model.to_memmodel()
Note
MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.
The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
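A minimal sketch of in-memory scoring, assuming the exported object exposes a predict() method over rows of predictor values:

# Score a single row (same predictor order as during fitting).
mmodel = model.to_memmodel()
mmodel.predict([[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]])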
To SQL
You can get the SQL code by:
model.to_sql()
Out[6]: '((1 / (1 + EXP(- (431.892701297998 + 0.432721801631203 * "fixed_acidity" + -1.17605582782568 * "volatile_acidity" + 0.0700977324896712 * "citric_acid" + 0.128897499413057 * "residual_sugar" + -2.63847801597149 * "chlorides" + -439.204655044829 * "density")))) > 0.5)::int'
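The returned expression is plain SQL, so it can be embedded in a query; a sketch, assuming the dataset is stored in public.winequality:

print(f"SELECT {model.to_sql()} AS good_prediction FROM public.winequality")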
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
model.to_python()(X)
Out[8]: array([0])
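Assuming the return_proba flag listed in to_python's signature, per-class probabilities might be obtained with:

model.to_python(return_proba = True)(X)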
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
- __init__(name: str = None, overwrite_model: bool = False, penalty: Literal['none', 'l1', 'l2', 'enet', None] = 'none', tol: float = 1e-06, C: int | float | Decimal = 1.0, max_iter: int = 100, solver: Literal['newton', 'bfgs', 'cgd'] = 'newton', l1_ratio: float = 0.5, fit_intercept: bool = True) None #
Methods

- __init__([name, overwrite_model, penalty, ...])
- classification_report([metrics, cutoff, nbins]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).
- confusion_matrix([cutoff]): Computes the model confusion matrix.
- contour([nbins, chart]): Draws the model's contour plot.
- cutoff_curve([nbins, show, chart]): Draws the model Cutoff curve.
- deploySQL([X, cutoff]): Returns the SQL code needed to deploy the model.
- does_model_exists(name[, raise_error, ...]): Checks whether the model is stored in the Vertica database.
- drop(): Drops the model from the Vertica database.
- export_models(name, path[, kind]): Exports machine learning models.
- features_importance([show, chart]): Computes the model's features importance.
- fit(input_relation, X, y[, test_relation, ...]): Trains the model.
- get_attributes([attr_name]): Returns the model attributes.
- get_match_index(x, col_list[, str_check]): Returns the matching index.
- get_params(): Returns the parameters of the model.
- get_plotting_lib([class_name, chart, ...]): Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
- get_vertica_attributes([attr_name]): Returns the model Vertica attributes.
- import_models(path[, schema, kind]): Imports machine learning models.
- lift_chart([nbins, show, chart]): Draws the model Lift Chart.
- plot([max_nb_points, chart]): Draws the model.
- prc_curve([nbins, show, chart]): Draws the model PRC curve.
- predict(vdf[, X, name, cutoff, inplace]): Makes predictions on the input relation.
- predict_proba(vdf[, X, name, pos_label, inplace]): Returns the model's probabilities using the input relation.
- register(registered_name[, raise_error]): Registers the model and adds it to in-DB Model versioning environment with a status of 'under_review'.
- report([metrics, cutoff, nbins]): Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).
- roc_curve([nbins, show, chart]): Draws the model ROC curve.
- score([metric, cutoff, nbins]): Computes the model score.
- set_params([parameters]): Sets the parameters of the model.
- summarize(): Summarizes the model.
- to_binary(path): Exports the model to the Vertica Binary format.
- to_memmodel(): Converts the model to an InMemory object that can be used for different types of predictions.
- to_pmml(path): Exports the model to PMML.
- to_python([return_proba, ...]): Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
- to_sql([X, return_proba, ...]): Returns the SQL code needed to deploy the model without using built-in Vertica functions.
- to_tf(path): Exports the model to the Frozen Graph format (TensorFlow).

Attributes