
verticapy.machine_learning.vertica.tree.DecisionTreeClassifier¶
- class verticapy.machine_learning.vertica.tree.DecisionTreeClassifier(name: str = None, overwrite_model: bool = False, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: Annotated[int | float | Decimal, 'Python Numbers'] = 1000000000.0, max_depth: int = 100, min_samples_leaf: int = 1, min_info_gain: Annotated[int | float | Decimal, 'Python Numbers'] = 0.0, nbins: int = 32)¶
A DecisionTreeClassifier consisting of a single tree.
Parameters¶
- name: str, optional
Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
If set to True, training a model with the same name as an existing model overwrites the existing model.
- max_features: str / int, optional
The number of randomly chosen features from which to pick the best feature to split on a given tree node. It can be an integer or one of the two following methods.
- auto:
square root of the total number of predictors.
- max:
number of predictors.
- max_leaf_nodes: PythonNumber, optional
The maximum number of leaf nodes for a tree in the forest, an integer between 1 and 1e9, inclusive.
- max_depth: int, optional
The maximum depth for growing each tree, an integer between 1 and 100, inclusive.
- min_samples_leaf: int, optional
The minimum number of samples each branch must have after a node is split, an integer between 1 and 1e6, inclusive. Any split that results in fewer remaining samples is discarded.
- min_info_gain: PythonNumber, optional
The minimum threshold for including a split, a float between 0.0 and 1.0, inclusive. A split with information gain less than this threshold is discarded.
- nbins: int, optional
The number of bins to use for continuous features, an integer between 2 and 1000, inclusive.
Attributes¶
Many attributes are created during the fitting phase.
- trees_: list of one BinaryTreeClassifier
One tree model which is an instance of BinaryTreeClassifier. It possesses various attributes. For more detailed information, refer to the documentation for BinaryTreeClassifier().
- features_importance_: numpy.array
The importance of features. It is calculated using MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. You must call the features_importance() method to compute it initially; the computed values are then reused in subsequent calls.
- classes_: numpy.array
The class labels.
Note
All attributes can be accessed using the get_attributes() method.
Note
Several other attributes can be accessed by using the get_vertica_attributes() method.
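For instance, once the model shown in the examples below has been fitted, both methods can be inspected as in this minimal sketch:
# List the available attributes, then retrieve one of them by name.
model.get_attributes()
model.get_attributes("classes_")
# Attributes computed and stored by the Vertica engine itself.
model.get_vertica_attributes()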
Examples¶
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Important
Many tree-based models inherit from the RandomForest base class, and it’s recommended to use it directly for access to a wider range of options.
Load data for machine learning¶
We import verticapy:
import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.
For this example, we will use the winequality dataset.
import verticapy.datasets as vpd
data = vpd.load_winequality()
(Output: preview of the winequality vDataFrame. Columns: fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, color. Rows: 1-100 | Columns: 14)
Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets on the Datasets page, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.
data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
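A minimal sketch of that approach, assuming vDataFrame.to_db() accepts a relation_type parameter and using hypothetical table names:
# Materialize both splits as tables so later steps don't re-run the seeded split.
train.to_db("my_schema.train_wine", relation_type = "table")
test.to_db("my_schema.test_wine", relation_type = "table")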
Balancing the Dataset¶
In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy’s balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.
To balance the dataset, use the following syntax.
from verticapy.machine_learning.vertica.preprocessing import balance
balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)
Note
With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.
Hint
Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.
Model Initialization¶
First we import the DecisionTreeClassifier model:
from verticapy.machine_learning.vertica import DecisionTreeClassifier
Then we can create the model:
model = DecisionTreeClassifier(
    max_features = "auto",
    max_leaf_nodes = 32,
    max_depth = 3,
    min_samples_leaf = 5,
    min_info_gain = 0.0,
    nbins = 32,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.
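For instance, the assigned name can be read back from the model_name attribute (a quick sketch; the attribute is assumed from the standard VerticaPy model interface):
# Fetch the auto-assigned name so the model can be reloaded later.
model.model_name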
Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
Model Training¶
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
    test,
)

===========
call_string
===========
SELECT rf_classifier('"public"."_verticapy_tmp_randomforestclassifier_v_demo_fb75e43c55a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_fb854a2655a411ef880f0242ac120002_"', 'good', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS exclude_columns='', ntree=1, mtry=3, sampling_size=1, max_depth=3, max_breadth=32, min_leaf_size=5, min_info_gain=0, nbins=32);

=======
details
=======
predictor        | type
-----------------+-----------------
fixed_acidity    | float or numeric
volatile_acidity | float or numeric
citric_acid      | float or numeric
residual_sugar   | float or numeric
chlorides        | float or numeric
density          | float or numeric

===============
Additional Info
===============
Name               | Value
-------------------+------
tree_count         |     1
rejected_row_count |     0
accepted_row_count |  5195
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
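As a sketch of the relation-based variant, assuming the training split was saved beforehand to a table with the hypothetical name my_schema.train_wine:
# Train from the name of a stored relation instead of a vDataFrame.
model.fit(
    "my_schema.train_wine",
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
)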
Features Importance¶
We can conveniently get the features importance:
result = model.features_importance()
Note
In models such as RandomForest, feature importance is calculated using MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them.
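Once features_importance() has been called, the computed scores stay cached on the model and can be read back from the features_importance_ attribute described earlier (a minimal sketch):
# Cached importance scores (numpy.array), reused by subsequent calls.
model.features_importance_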
Metrics¶
We can get the entire report using:
model.report()
             value
auc          0.7468945232673556
prc_auc      0.46441582138894216
accuracy     0.8033794162826421
log_loss     0.191153481288304
precision    0.5454545454545454
recall       0.18181818181818182
f1_score     0.2727272727272727
mcc          0.22947914410412062
informedness 0.1432825363461201
markedness   0.3675303279916129
csi          0.15789473684210525
Rows: 1-11 | Columns: 2
Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["auc", "accuracy"]).
For classification models, we can easily modify the cutoff to observe the effect on different metrics:
model.report(cutoff = 0.2)
             value
auc          0.7468945232673556
prc_auc      0.46441582138894216
accuracy     0.7204301075268817
log_loss     0.191153481288304
precision    0.38095238095238093
recall       0.6060606060606061
f1_score     0.4678362573099415
mcc          0.3058283188048218
informedness 0.35557891049220536
markedness   0.2630385487528346
csi          0.3053435114503817
Rows: 1-11 | Columns: 2
You can also use the score() function to compute any classification metric. The default metric is the accuracy:
model.score()
Out[3]: 0.8033794162826421
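Other metrics can be requested the same way; assuming score() accepts a metric parameter named after the rows of the report above (a sketch, output not shown):
# Compute the F1-score on the test set instead of the default accuracy.
model.score(metric = "f1")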
Prediction¶
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
(Output: the test-set vDataFrame with a new prediction column appended to the winequality columns. Rows: 1-100 | Columns: 15)
Note
Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
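For example, a minimal sketch that relies on matching column names (the output column name pred is arbitrary):
# Let the model resolve its predictors by name from the vDataFrame.
model.predict(test, name = "pred")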
Probabilities¶
It is also easy to get the model’s probabilities:
model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
(Output: the test-set vDataFrame with prediction, prediction_0, and prediction_1 probability columns appended.)