verticapy.machine_learning.vertica.ensemble.RandomForestClassifier#

class verticapy.machine_learning.vertica.ensemble.RandomForestClassifier(name: str = None, overwrite_model: bool = False, n_estimators: int = 10, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: int | float | Decimal = 1000000000.0, sample: float = 0.632, max_depth: int = 5, min_samples_leaf: int = 1, min_info_gain: int | float | Decimal = 0.0, nbins: int = 32)#

Creates a RandomForestClassifier object using the Vertica RF_CLASSIFIER function. It is an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes predicted by the individual trees.

Parameters#

name: str, optional

Name of the model. The model is stored in the DB.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

n_estimators: int, optional

The number of trees in the forest, an integer between 1 and 1000, inclusive.

max_features: int | str, optional

The number of randomly chosen features from which to pick the best feature to split a given tree node. It can be an integer or one of the two following methods.

  • auto:

    square root of the total number of predictors.

  • max:

    number of predictors.

max_leaf_nodes: PythonNumber, optional

The maximum number of leaf nodes for a tree in the forest, an integer between 1 and 1e9, inclusive.

sample: float, optional

The portion of the input data set that is randomly selected for training each tree, a float between 0.0 and 1.0, inclusive.

max_depth: int, optional

Maximum depth of each tree, an integer between 1 and 100, inclusive.

min_samples_leaf: int, optional

The minimum number of samples each branch must have after splitting a node, an integer between 1 and 1e6, inclusive. A split that results in remaining samples less than this value is discarded.

min_info_gain: PythonNumber, optional

The minimum threshold for including a split, a float between 0.0 and 1.0, inclusive. A split with information gain less than this threshold is discarded.

nbins: int, optional

Number of bins used to find splits in each column; more bins lead to a longer runtime but more fine-grained, possibly better splits. Must be an integer between 2 and 1000, inclusive.

Attributes#

Many attributes are created during the fitting phase.

trees_: list of BinaryTreeClassifier

Tree models are instances of BinaryTreeClassifier, each possessing various attributes. For more detailed information, refer to the documentation for BinaryTreeClassifier.

features_importance_: numpy.array

The importance of features, calculated using MDI (Mean Decrease in Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. You must call the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.

features_importance_trees_: dict of numpy.array

Each element of the dictionary represents the feature importance of tree i, calculated using MDI (Mean Decrease in Impurity). You must call the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.

n_estimators_: int

The number of model estimators.

classes_: numpy.array

The class labels.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.

Examples#

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Important

Many tree-based models inherit from the RandomForest base class, and it’s recommended to use it directly for access to a wider range of options.

Load data for machine learning#

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output preview of the winequality vDataFrame — 14 columns: fixed_acidity (Numeric), volatile_acidity (Numeric), citric_acid (Numeric), residual_sugar (Numeric), chlorides (Float), free_sulfur_dioxide (Numeric), total_sulfur_dioxide (Numeric), density (Float), pH (Numeric), sulphates (Numeric), alcohol (Float), quality (Integer), good (Integer), color (Varchar)]
Rows: 1-100 | Columns: 14

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)

Warning

In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
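
For example, a minimal sketch of materializing the split into tables with vDataFrame.to_db() (the schema and table names are illustrative):

train.to_db("public.winequality_train", relation_type = "table")
test.to_db("public.winequality_test", relation_type = "table")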

Balancing the Dataset#

In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy’s balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.

from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)

Note

With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.

Hint

Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.

Model Initialization#

First we import the RandomForestClassifier model:

from verticapy.machine_learning.vertica import RandomForestClassifier

Then we can create the model:

model = RandomForestClassifier(
    max_features = "auto",
    max_leaf_nodes = 32,
    sample = 0.5,
    max_depth = 3,
    min_samples_leaf = 5,
    min_info_gain = 0.0,
    nbins = 32,
)

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to reuse the model, you can fetch the model name from the model's attributes.
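
For example, a minimal sketch of retrieving the auto-assigned name, assuming it is exposed through the model_name attribute:

model.model_name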

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.

Model Training#

We can now fit the model:

model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
    test,
)

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
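
For instance, a hedged sketch of training from a relation name rather than a vDataFrame, assuming the training data was saved as the table public.winequality_train:

model.fit(
    "public.winequality_train",
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
)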

Features Importance#

We can conveniently get the features importance:

result = model.features_importance()

Note

In models such as RandomForest, feature importance is calculated using MDI (Mean Decrease in Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them.

Metrics#

We can get the entire report using:

model.report()
metric          value
auc             0.7526598414132288
prc_auc         0.43951769633060406
accuracy        0.8121632024634334
log_loss        0.183867356058981
precision       0.6666666666666666
recall          0.016260162601626018
f1_score        0.03174603174603175
mcc             0.08298246149809468
informedness    0.014360827368957496
markedness      0.47950502706883213
csi             0.016129032258064516
Rows: 1-11 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
metric          value
auc             0.7526598414132288
prc_auc         0.43951769633060406
accuracy        0.6828329484218629
log_loss        0.183867356058981
precision       0.3359683794466403
recall          0.6910569105691057
f1_score        0.4521276595744681
mcc             0.2988657565366034
informedness    0.3719685914807864
markedness      0.240129791804774
csi             0.2920962199312715
Rows: 1-11 | Columns: 2

You can also use the score() function to compute any classification metric. The default metric is the accuracy:

model.score()
Out[3]: 0.8121632024634334
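
Any other classification metric can be requested through the metric parameter. For example, a minimal sketch computing the F1-score:

model.score(metric = "f1")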

Prediction#

Prediction is straightforward:

model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output preview of the test set with the new "prediction" column (Varchar) appended to the original 14 columns]
Rows: 1-100 | Columns: 15

Note

Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
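
For example, a minimal sketch of predicting with only the vDataFrame, assuming its column names match the model's predictors:

model.predict(test, name = "prediction")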

Probabilities#

It is also easy to get the model’s probabilities:

model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output preview of the test set with the "prediction", "prediction_0", and "prediction_1" columns appended — the latter two contain the probabilities of classes 0 and 1]
Rows: 1-100 | Columns: 17

Note

Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
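
For example, a minimal sketch that adds only the probability of the positive class, assuming the positive label is the string "1" here (it may be an integer depending on your data):

model.predict_proba(test, name = "prob_good", pos_label = "1")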

Confusion Matrix#

You can obtain the confusion matrix of your choice by specifying the desired cutoff.

model.confusion_matrix(cutoff = 0.5)
Out[4]: 
array([[1053,    0],
       [ 246,    0]])

Note

In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model’s performance to meet specific needs.
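
For example, lowering the cutoff trades precision for recall; a minimal sketch:

model.confusion_matrix(cutoff = 0.2)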

Main Plots (Classification Curves)#

Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.

Most of the classification curves can be found in the Machine Learning - Classification Curve.

For example, let’s draw the model’s ROC curve.

model.roc_curve()

Important

Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.

Hint

In binary classification, various curves can be easily plotted. However, in multi-class classification, it’s important to select the pos_label, representing the class to be treated as positive when drawing the curve.
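
For example, a minimal sketch of drawing the cutoff curve for the positive class, assuming the positive label is "1" here:

model.cutoff_curve(pos_label = "1")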

Other Plots#

Tree models can be visualized by drawing their tree plots. For more examples, check out Machine Learning - Tree Plots.

model.plot_tree()
[tree plot image]

Note

The above example may not render properly in the doc because of the huge size of the tree, but it should render nicely in a Jupyter environment.

To plot the graph using graphviz separately, you can extract the graphviz DOT code as follows:

model.to_graphviz()
Out[5]: 'digraph Tree {\ngraph [bgcolor="#FFFFFF00"];\n0 [label="\\"density\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n0 -> 1 [label="<= 0.991973", color="#666666", fontcolor="#666666"]\n0 -> 2 [label="> 0.991973", color="#666666", fontcolor="#666666"]\n1 [label="\\"residual_sugar\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n1 -> 3 [label="<= 2.6375", color="#666666", fontcolor="#666666"]\n1 -> 4 [label="> 2.6375", color="#666666", fontcolor="#666666"]\n2 [label="\\"volatile_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n2 -> 5 [label="<= 0.220625", color="#666666", fontcolor="#666666"]\n2 -> 6 [label="> 0.220625", color="#666666", fontcolor="#666666"]\n3 [label="\\"fixed_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n3 -> 7 [label="<= 6.759375", color="#666666", fontcolor="#666666"]\n3 -> 8 [label="> 6.759375", color="#666666", fontcolor="#666666"]\n4 [label="\\"residual_sugar\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n4 -> 9 [label="<= 6.7125", color="#666666", fontcolor="#666666"]\n4 -> 10 [label="> 6.7125", color="#666666", fontcolor="#666666"]\n5 [label="\\"fixed_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n5 -> 11 [label="<= 6.39375", color="#666666", fontcolor="#666666"]\n5 -> 12 [label="> 6.39375", color="#666666", fontcolor="#666666"]\n6 [label="\\"density\\"", shape="box", style="filled", fillcolor="#FFFFFF00", fontcolor="#666666", color="#666666"]\n6 -> 13 [label="<= 0.995215", color="#666666", fontcolor="#666666"]\n6 -> 14 [label="> 0.995215", color="#666666", fontcolor="#666666"]\n7 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.59</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.41</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n8 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.74</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.26</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n9 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#efc5b5" color="#666666"><FONT color="#000000"><b>prediction: 1 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.48</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.52</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n10 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#efc5b5" color="#666666"><FONT color="#000000"><b>prediction: 1 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT 
color="#666666">prob(0): 0.23</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.77</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n11 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.89</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.11</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n12 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.75</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.25</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n13 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.83</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.17</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n14 [label=<<table border="0" cellspacing="0"> <tr><td port="port1" border="1" bgcolor="#87cefa" color="#666666"><FONT color="#000000"><b>prediction: 0 </b></FONT></td></tr><tr><td port="port0" border="1" align="left" color="#666666"><FONT color="#666666">prob(0): 0.94</FONT></td></tr><tr><td port="port1" border="1" align="left" color="#666666"><FONT color="#666666">prob(1): 0.06</FONT></td></tr></table>>, fillcolor="#FFFFFF00", fontcolor="#666666", shape="none", color="#666666"]\n}'

This string can then be copied into a DOT file, which can be parsed by graphviz.
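
Alternatively, a minimal sketch of rendering the DOT code with the standalone graphviz Python package (assumed installed; the output file name is illustrative):

import graphviz

dot = model.to_graphviz()                 # DOT source string
graph = graphviz.Source(dot)              # parse the DOT code
graph.render("rf_tree", format = "png")   # writes rf_tree.png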

A contour plot is another useful visualization that can be produced for models with two predictors.

model.contour()

Important

Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.

Parameter Modification#

In order to see the parameters:

model.get_params()
Out[6]: 
{'n_estimators': 10,
 'max_features': 'auto',
 'max_leaf_nodes': 32,
 'sample': 0.5,
 'max_depth': 3,
 'min_samples_leaf': 5,
 'min_info_gain': 0.0,
 'nbins': 32}

And to manually change some of the parameters:

model.set_params({'max_depth': 5})

Model Register#

In order to register the model for tracking and versioning:

model.register("model_v1")

Please refer to Model Tracking and Versioning for more details on model tracking and versioning.

Model Exporting#

To Memmodel

model.to_memmodel()

Note

MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.

The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
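
For example, a minimal sketch of in-memory prediction with the exported MemModel (the input row order must match the training predictors):

mmodel = model.to_memmodel()
mmodel.predict([[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]])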

To SQL

You can get the SQL code by:

model.to_sql()
Out[8]: '(CASE WHEN ((CASE WHEN "density" < 0.991973 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN (CASE WHEN "fixed_acidity" < 6.759375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "residual_sugar" < 6.7125 THEN 1.0 ELSE 1.0 END) END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN (CASE WHEN "fixed_acidity" < 6.39375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "residual_sugar" < 4.675 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.126875 THEN 1.0 ELSE 0.0 END) END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN (CASE WHEN "density" < 0.996836 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "citric_acid" < 0.31125 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "volatile_acidity" < 0.314375 THEN (CASE WHEN "density" < 0.990352 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "citric_acid" < 0.10375 THEN 0.0 ELSE 1.0 END) END) ELSE (CASE WHEN "citric_acid" < 0.259375 THEN 0.0 ELSE (CASE WHEN "density" < 0.993594 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "density" < 0.990352 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.995215 THEN (CASE WHEN "residual_sugar" < 8.75 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE (CASE WHEN "citric_acid" < 0.259375 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "citric_acid" < 0.363125 THEN 0.0 ELSE 0.0 END) ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "residual_sugar" < 6.7125 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "density" < 0.993594 THEN (CASE WHEN "volatile_acidity" < 0.314375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "density" < 0.995215 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "fixed_acidity" < 5.6625 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "chlorides" < 0.027813 THEN 1.0 ELSE (CASE WHEN "chlorides" < 0.046625 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "chlorides" < 0.046625 THEN (CASE WHEN "citric_acid" < 0.466875 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "volatile_acidity" < 0.220625 THEN 0.0 ELSE 0.0 END) END) END) + (CASE WHEN "citric_acid" < 0.259375 THEN (CASE WHEN "density" < 0.991973 THEN (CASE WHEN "volatile_acidity" < 0.408125 THEN 0.0 ELSE 1.0 END) ELSE (CASE WHEN "fixed_acidity" < 5.6625 THEN 0.0 ELSE 0.0 END) END) ELSE (CASE WHEN "density" < 0.993594 THEN (CASE WHEN 
"residual_sugar" < 2.6375 THEN 0.0 ELSE 0.0 END) ELSE (CASE WHEN "fixed_acidity" < 6.759375 THEN 0.0 ELSE 0.0 END) END) END)) / 10 > 0.5 THEN 1 ELSE 0 END)'

To Python

To obtain the prediction function in Python syntax, use the following code:

X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]

model.to_python()(X)
Out[10]: array([0])

Hint

The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
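
For example, a minimal sketch retrieving the class probabilities instead of the predicted class, using the return_proba parameter:

X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
model.to_python(return_proba = True)(X)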

__init__(name: str = None, overwrite_model: bool = False, n_estimators: int = 10, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: int | float | Decimal = 1000000000.0, sample: float = 0.632, max_depth: int = 5, min_samples_leaf: int = 1, min_info_gain: int | float | Decimal = 0.0, nbins: int = 32) None#

Must be overridden in the child class

Methods

__init__([name, overwrite_model, ...])

Must be overridden in the child class

classification_report([metrics, cutoff, ...])

Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).

confusion_matrix([pos_label, cutoff])

Computes the model confusion matrix.

contour([pos_label, nbins, chart])

Draws the model's contour plot.

cutoff_curve([pos_label, nbins, show, chart])

Draws the model Cutoff curve.

deploySQL([X, pos_label, cutoff, allSQL])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

features_importance([tree_id, show, chart])

Computes the model's features importance.

fit(input_relation, X, y[, test_relation, ...])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_score([tree_id])

Returns the feature importance metrics for the input tree.

get_tree([tree_id])

Returns a table with all the input tree information.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

lift_chart([pos_label, nbins, show, chart])

Draws the model Lift Chart.

plot([max_nb_points, chart])

Draws the model.

plot_tree([tree_id, pic_path])

Draws the input tree.

prc_curve([pos_label, nbins, show, chart])

Draws the model PRC curve.

predict(vdf[, X, name, cutoff, inplace])

Predicts using the input relation.

predict_proba(vdf[, X, name, pos_label, inplace])

Returns the model's probabilities using the input relation.

register(registered_name[, raise_error])

Registers the model and adds it to in-DB Model versioning environment with a status of 'under_review'.

report([metrics, cutoff, labels, nbins])

Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).

roc_curve([pos_label, nbins, show, chart])

Draws the model ROC curve.

score([metric, average, pos_label, cutoff, ...])

Computes the model score.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_graphviz([tree_id, classes_color, ...])

Returns the code for a Graphviz tree.

to_memmodel()

Converts the model to an InMemory object that can be used for different types of predictions.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

Attributes