verticapy.machine_learning.vertica.ensemble.RandomForestClassifier#
- class verticapy.machine_learning.vertica.ensemble.RandomForestClassifier(name: str = None, overwrite_model: bool = False, n_estimators: int = 10, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: int | float | Decimal = 1000000000.0, sample: float = 0.632, max_depth: int = 5, min_samples_leaf: int = 1, min_info_gain: int | float | Decimal = 0.0, nbins: int = 32)#
Creates a RandomForestClassifier object using the Vertica RF_CLASSIFIER function. It is an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the individual trees' predictions.
Parameters#
- name: str, optional
    Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
    If set to True, training a model with the same name as an existing model overwrites the existing model.
- n_estimators: int, optional
    The number of trees in the forest, an integer between 1 and 1000, inclusive.
- max_features: int | str, optional
    The number of randomly chosen features from which to pick the best feature to split a given tree node. It can be an integer or one of the two following methods.
    - auto: square root of the total number of predictors.
    - max: number of predictors.
- max_leaf_nodes: PythonNumber, optional
    The maximum number of leaf nodes for a tree in the forest, an integer between 1 and 1e9, inclusive.
- sample: float, optional
    The portion of the input data set that is randomly selected for training each tree, a float between 0.0 and 1.0, inclusive.
- max_depth: int, optional
    Maximum depth of each tree, an integer between 1 and 100, inclusive.
- min_samples_leaf: int, optional
    The minimum number of samples each branch must have after splitting a node, an integer between 1 and 1e6, inclusive. A split that results in fewer remaining samples than this value is discarded.
- min_info_gain: PythonNumber, optional
    The minimum threshold for including a split, a float between 0.0 and 1.0, inclusive. A split with information gain less than this threshold is discarded.
- nbins: int, optional
    Number of bins used to find splits in each column. More bins lead to a longer runtime but more fine-grained, and possibly better, splits. Must be an integer between 2 and 1000, inclusive.
Attributes#
Many attributes are created during the fitting phase.
- trees_: list of BinaryTreeClassifier
    Tree models are instances of BinaryTreeClassifier, each possessing various attributes. For more detailed information, refer to the documentation for BinaryTreeClassifier.
- features_importance_: numpy.array
    The importance of features. It is calculated using the MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. It is necessary to call the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.
- features_importance_trees_: dict of numpy.array
    Each element of the array represents the feature importance of tree i. The importance of features is calculated using the MDI (Mean Decreased Impurity). It is necessary to call the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.
- n_estimators_: int
    The number of model estimators.
- classes_: numpy.array
    The class labels.
Note
All attributes can be accessed using the get_attributes() method.
Note
Several other attributes can be accessed by using the get_vertica_attributes() method.
Examples#
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Important
Many tree-based models inherit from the RandomForest base class, and it's recommended to use it directly for access to a wider range of options.
Load data for machine learning#
We import verticapy:
import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like "average" and "median", which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.
For this example, we will use the winequality dataset.
import verticapy.datasets as vpd
data = vpd.load_winequality()
[Output: a preview of the winequality vDataFrame — columns: fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, color. Rows: 1-100 | Columns: 14]
Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets page, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.
data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
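For instance, a minimal sketch of materializing the split (the schema and table names are hypothetical; check the vDataFrame.to_db() documentation for its exact parameters):
# Save each split as a table so later queries hit a materialized relation
# instead of recomputing the seeded random split.
train.to_db("my_schema.train_set", relation_type = "table")
test.to_db("my_schema.test_set", relation_type = "table")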
Balancing the Dataset#
In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy's balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.
To balance the dataset, use the following syntax.
from verticapy.machine_learning.vertica.preprocessing import balance
balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)
Note
With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.
Hint
Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.
Model Initialization#
First we import the RandomForestClassifier model:
from verticapy.machine_learning.vertica import RandomForestClassifier
Then we can create the model:
model = RandomForestClassifier(
    max_features = "auto",
    max_leaf_nodes = 32,
    sample = 0.5,
    max_depth = 3,
    min_samples_leaf = 5,
    min_info_gain = 0.0,
    nbins = 32,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
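For example, a minimal sketch of creating a named model (the schema and model name below are hypothetical):
# An explicit name makes the model easy to retrieve and version later.
model = RandomForestClassifier(name = "my_schema.rf_wine_v1")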
Model Training#
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
    test,
)
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
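As a minimal sketch, training from a relation name instead of a vDataFrame might look like the following (the table name is hypothetical):
# The input relation can be any table or view accessible in the database.
model.fit(
    "public.winequality_train",
    ["fixed_acidity", "volatile_acidity", "citric_acid"],
    "good",
)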
Features Importance#
We can conveniently get the features importance:
result = model.features_importance()
Note
In models such as RandomForest, feature importance is calculated using the MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them.
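An illustrative numpy sketch of the aggregation described above (sum the per-tree scores, normalize, then scale); this mirrors the idea, not VerticaPy's exact internal code:
import numpy as np

# Hypothetical per-tree MDI scores: rows = trees, columns = features.
per_tree_scores = np.array([[0.2, 0.5, 0.3],
                            [0.1, 0.7, 0.2]])
summed = per_tree_scores.sum(axis = 0)   # sum the scores of each tree
normalized = summed / summed.sum()       # normalize to sum to 1
importance_pct = 100 * normalized        # scale to percentages
print(importance_pct)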
Metrics#
We can get the entire report using:
model.report()
                value
auc             0.7526598414132288
prc_auc         0.43951769633060406
accuracy        0.8121632024634334
log_loss        0.183867356058981
precision       0.6666666666666666
recall          0.016260162601626018
f1_score        0.03174603174603175
mcc             0.08298246149809468
informedness    0.014360827368957496
markedness      0.47950502706883213
csi             0.016129032258064516
Rows: 1-11 | Columns: 2
Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["auc", "accuracy"]).
For classification models, we can easily modify the cutoff to observe the effect on different metrics:
model.report(cutoff = 0.2)
                value
auc             0.7526598414132288
prc_auc         0.43951769633060406
accuracy        0.6828329484218629
log_loss        0.183867356058981
precision       0.3359683794466403
recall          0.6910569105691057
f1_score        0.4521276595744681
mcc             0.2988657565366034
informedness    0.3719685914807864
markedness      0.240129791804774
csi             0.2920962199312715
Rows: 1-11 | Columns: 2
You can also use the score() function to compute any classification metric. The default metric is the accuracy:
model.score()
Out[3]: 0.8121632024634334
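As a minimal sketch, any supported metric name can be passed instead of the default; "f1" below is one plausible choice (check the score() documentation for the full list):
# Compute the F1-score instead of the default accuracy.
model.score(metric = "f1")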
Prediction#
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the test vDataFrame with a new prediction column appended. Rows: 1-100 | Columns: 15]
Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
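For instance, a minimal sketch under the assumption that the test vDataFrame's column names match the model's predictors:
# No predictor list needed: columns are matched by name.
model.predict(test, name = "prediction")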
Probabilities#
It is also easy to get the model's probabilities:
model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the test vDataFrame with prediction, prediction_0, and prediction_1 columns appended. Rows: 1-100 | Columns: 17]
Note
Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
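A minimal sketch using pos_label (the output column name is hypothetical, and the label must match one of the model's classes):
# Add only the probability of the positive class "1".
model.predict_proba(test, name = "prob_good", pos_label = "1")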
Confusion Matrix#
You can obtain the confusion matrix of your choice by specifying the desired cutoff.
model.confusion_matrix(cutoff = 0.5)
Out[4]: array([[1053,    0],
               [ 246,    0]])
Note
In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model's performance to meet specific needs.
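An illustrative sketch of the rule described above, in plain Python (the probability values are made up):
# Assign the positive class when the predicted probability reaches the cutoff.
probabilities = [0.12, 0.47, 0.55, 0.81]
cutoff = 0.5
predictions = [1 if p >= cutoff else 0 for p in probabilities]
print(predictions)  # [0, 0, 1, 1]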
Main Plots (Classification Curves)#
Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.
Most of the classification curves can be found in the Machine Learning - Classification Curve.
For example, let’s draw the model’s ROC curve.
model.roc_curve()
Important
Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.
Hint
In binary classification, various curves can be easily plotted. However, in multi-class classification, it's important to select the pos_label, representing the class to be treated as positive when drawing the curve.
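A minimal sketch of both cases (the pos_label value is hypothetical and should match one of the model's classes):
# Binary case: no extra argument needed.
model.roc_curve()
# Multi-class case: pick the class treated as positive.
model.roc_curve(pos_label = "1")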
Other Plots#
Tree models can be visualized by drawing their tree plots. For more examples, check out Machine Learning - Tree Plots.
model.plot_tree()
Note
The above example may not render properly in the documentation because of the large size of the tree, but it should render nicely in a Jupyter environment.
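If the inline rendering is too large, a minimal sketch of exporting the drawing to a file instead (the path is hypothetical; see the plot_tree() documentation for its exact parameters):
# Save the rendering of the first tree to a picture file.
model.plot_tree(tree_id = 0, pic_path = "rf_tree_0")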
To plot the graph using graphviz separately, you can extract the graphviz DOT file code as follows:
model.to_graphviz() Out[5]: 'digraph Tree {\ngraph [bgcolor="#FFFFFF00"];\n0 [label="\\"density\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n0 -> 1 [label="<= 0.991973", color="#666666", fontcolor="#666666"]\n0 -> 2 [label="> 0.991973", color="#666666", fontcolor="#666666"]\n1 [label="\\"residual_sugar\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n1 -> 3 [label="<= 2.6375", color="#666666", fontcolor="#666666"]\n1 -> 4 [label="> 2.6375", color="#666666", fontcolor="#666666"]\n2 [label="\\"volatile_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n2 -> 5 [label="<= 0.220625", color="#666666", fontcolor="#666666"]\n2 -> 6 [label="> 0.220625", color="#666666", fontcolor="#666666"]\n3 [label="\\"fixed_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n3 -> 7 [label="<= 6.759375", color="#666666", fontcolor="#666666"]\n3 -> 8 [label="> 6.759375", color="#666666", fontcolor="#666666"]\n4 [label="\\"residual_sugar\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n4 -> 9 [label="<= 6.7125", color="#666666", fontcolor="#666666"]\n4 -> 10 [label="> 6.7125", color="#666666", fontcolor="#666666"]\n5 [label="\\"fixed_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n5 -> 11 [label="<= 6.39375", color="#666666", fontcolor="#666666"]\n5 -> 12 [label="> 6.39375", color="#666666", fontcolor="#666666"]\n6 [label="\\"density\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n6 -> 13 [label="<= 0.995215", color="#666666", fontcolor="#666666"]\n6 -> 14 [label="> 0.995215", color="#666666", fontcolor="#666666"]\n7 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.59</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.41</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n8 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.74</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.26</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n9 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#efc5b5" color="#666666"><FONT color="#000000"><b>prediction: 1 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.48</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.52</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n10 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#efc5b5" color="#666666"><FONT color="#000000"><b>prediction: 1 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" 
color="#666666"><FONT color="#666666">prob(0): 0.23</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.77</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n11 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.89</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.11</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n12 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.75</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.25</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n13 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.83</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.17</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n14 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.94</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.06</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n}'
This string can then be copied into a DOT file, which can be parsed by graphviz.
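For example, a minimal sketch of writing the DOT code to a file and rendering it with the graphviz Python package (assuming graphviz is installed):
import graphviz

# Write the DOT source to a file that any graphviz tool can parse.
dot_code = model.to_graphviz()
with open("tree.dot", "w") as f:
    f.write(dot_code)

# Or render it directly from Python.
graphviz.Source(dot_code).render("tree", format = "png")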
The contour plot is another useful plot that can be produced for models with two predictors.
model.contour()
Important
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
Parameter Modification#
In order to see the parameters:
model.get_params()
Out[6]: {'n_estimators': 10, 'max_features': 'auto', 'max_leaf_nodes': 32, 'sample': 0.5, 'max_depth': 3, 'min_samples_leaf': 5, 'min_info_gain': 0.0, 'nbins': 32}
And to manually change some of the parameters:
model.set_params({'max_depth': 5})
Model Register#
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting#
To MemModel
model.to_memmodel()
Note
MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.
The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
To SQL
You can get the SQL code by:
model.to_sql() Out[8]: '(CASE WHEN ((CASE WHEN "density" < 0.991973 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN (CASE WHEN "fixed_acidity" < 6.759375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "residual_sugar" < 6.7125 THEN 1.0 ELSE 1.0 END) END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN (CASE WHEN "fixed_acidity" < 6.39375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "residual_sugar" < 4.675 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.126875 THEN 1.0 ELSE 0.0 END) END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN (CASE WHEN "density" < 0.996836 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "citric_acid" < 0.31125 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "volatile_acidity" < 0.314375 THEN (CASE WHEN "density" < 0.990352 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "citric_acid" < 0.10375 THEN 0.0 ELSE 1.0 END) END) ELSE (CASE WHEN "citric_acid" < 0.259375 THEN 0.0 ELSE (CASE WHEN "density" < 0.993594 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "density" < 0.990352 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.995215 THEN (CASE WHEN "residual_sugar" < 8.75 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE (CASE WHEN "citric_acid" < 0.259375 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "citric_acid" < 0.363125 THEN 0.0 ELSE 0.0 END) ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 6.7125 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "volatile_acidity" < 0.314375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "fixed_acidity" < 5.6625 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "chlorides" < 0.027813 THEN 1.0 ELSE (CASE WHEN "chlorides" < 0.046625 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "citric_acid" < 0.466875 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "volatile_acidity" < 0.408125 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "fixed_acidity" < 5.6625 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.993594 THEN 
(CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "fixed_acidity" < 6.759375 THEN 0.0 ELSE 0.0 END) END) END)) / 10 > 0.5 THEN 1 ELSE 0 END)'
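A minimal sketch of embedding the returned expression in a query (the table name is hypothetical):
# The expression returned by to_sql() can be dropped into any SELECT.
sql_expr = model.to_sql()
query = f"SELECT {sql_expr} AS prediction FROM public.winequality"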
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
model.to_python()(X)
Out[10]: array([0])
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
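For instance, a minimal sketch of retrieving probabilities instead of class predictions (see the to_python() documentation for its exact parameters):
# Returns per-class probabilities rather than the predicted label,
# reusing the X row defined above.
model.to_python(return_proba = True)(X)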
- __init__(name: str = None, overwrite_model: bool = False, n_estimators: int = 10, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: int | float | Decimal = 1000000000.0, sample: float = 0.632, max_depth: int = 5, min_samples_leaf: int = 1, min_info_gain: int | float | Decimal = 0.0, nbins: int = 32) None #
Must be overridden in the child class
Methods
__init__([name, overwrite_model, ...])
    Must be overridden in the child class.
classification_report([metrics, cutoff, ...])
    Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1, ...).
confusion_matrix([pos_label, cutoff])
    Computes the model confusion matrix.
contour([pos_label, nbins, chart])
    Draws the model's contour plot.
cutoff_curve([pos_label, nbins, show, chart])
    Draws the model Cutoff curve.
deploySQL([X, pos_label, cutoff, allSQL])
    Returns the SQL code needed to deploy the model.
does_model_exists(name[, raise_error, ...])
    Checks whether the model is stored in the Vertica database.
drop()
    Drops the model from the Vertica database.
export_models(name, path[, kind])
    Exports machine learning models.
features_importance([tree_id, show, chart])
    Computes the model's features importance.
fit(input_relation, X, y[, test_relation, ...])
    Trains the model.
get_attributes([attr_name])
    Returns the model attributes.
get_match_index(x, col_list[, str_check])
    Returns the matching index.
get_params()
    Returns the parameters of the model.
get_plotting_lib([class_name, chart, ...])
    Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
get_score([tree_id])
    Returns the feature importance metrics for the input tree.
get_tree([tree_id])
    Returns a table with all the input tree information.
get_vertica_attributes([attr_name])
    Returns the model Vertica attributes.
import_models(path[, schema, kind])
    Imports machine learning models.
lift_chart([pos_label, nbins, show, chart])
    Draws the model Lift Chart.
plot([max_nb_points, chart])
    Draws the model.
plot_tree([tree_id, pic_path])
    Draws the input tree.
prc_curve([pos_label, nbins, show, chart])
    Draws the model PRC curve.
predict(vdf[, X, name, cutoff, inplace])
    Predicts using the input relation.
predict_proba(vdf[, X, name, pos_label, inplace])
    Returns the model's probabilities using the input relation.
register(registered_name[, raise_error])
    Registers the model and adds it to the in-DB model versioning environment with a status of 'under_review'.
report([metrics, cutoff, labels, nbins])
    Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1, ...).
roc_curve([pos_label, nbins, show, chart])
    Draws the model ROC curve.
score([metric, average, pos_label, cutoff, ...])
    Computes the model score.
set_params([parameters])
    Sets the parameters of the model.
summarize()
    Summarizes the model.
to_binary(path)
    Exports the model to the Vertica Binary format.
to_graphviz([tree_id, classes_color, ...])
    Returns the code for a Graphviz tree.
to_memmodel()
    Converts the model to an InMemory object that can be used for different types of predictions.
to_pmml(path)
    Exports the model to PMML.
to_python([return_proba, ...])
    Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
to_sql([X, return_proba, ...])
    Returns the SQL code needed to deploy the model without using built-in Vertica functions.
to_tf(path)
    Exports the model to the Frozen Graph format (TensorFlow).
Attributes