
verticapy.machine_learning.vertica.linear_model.LogisticRegression#

class verticapy.machine_learning.vertica.linear_model.LogisticRegression(name: str = None, overwrite_model: bool = False, penalty: Literal['none', 'l1', 'l2', 'enet', None] = 'none', tol: float = 1e-06, C: int | float | Decimal = 1.0, max_iter: int = 100, solver: Literal['newton', 'bfgs', 'cgd'] = 'newton', l1_ratio: float = 0.5, fit_intercept: bool = True)#

Creates a LogisticRegression object using the Vertica Logistic Regression algorithm.

Parameters#

name: str, optional

Name of the model. The model is stored in the database.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

penalty: str, optional

Determines the method of regularization.

  • None:

    No Regularization.

  • l1:

    L1 Regularization.

  • l2:

    L2 Regularization.

  • enet:

    Combination between L1 and L2.
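
For example, to combine both penalties you can configure an elastic-net model. A minimal sketch, assuming the cgd solver (generally required for L1/ENet regularization in Vertica):

from verticapy.machine_learning.vertica import LogisticRegression

# Elastic-net: C controls the overall regularization strength,
# l1_ratio the mix between L1 and L2.
model_enet = LogisticRegression(
    penalty = 'enet',
    C = 1.0,
    l1_ratio = 0.5,
    solver = 'cgd',
)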

tol: float, optional

Tolerance used to determine whether the algorithm has reached the specified accuracy result.

C: PythonNumber, optional

The regularization parameter value. The value must be non-negative.

max_iter: int, optional

The maximum number of iterations the algorithm performs before stopping, even if the specified accuracy result has not been reached.

solver: str, optional

The optimizer method used to train the model.

  • newton:

    Newton Method.

  • bfgs:

    Broyden Fletcher Goldfarb Shanno.

  • cgd:

    Coordinate Gradient Descent.

l1_ratio: float, optional

ENet mixture parameter that defines the ratio between L1 and L2 regularization.

fit_intercept: bool, optional

Specifies whether the model includes an intercept. If set to False, no intercept is used in training the model. Note that setting fit_intercept to False does not work well with the BFGS optimizer.

Attributes#

Many attributes are created during the fitting phase.

coef_: numpy.array

The regression coefficients. The order of coefficients is the same as the order of columns used during the fitting phase.

intercept_: float

The expected value of the dependent variable when all independent variables are zero, serving as the baseline or constant term in the model.

features_importance_: numpy.array

The importance of each feature, computed from the model coefficients, which are normalized based on their range; an activation function then calculates the final score. You must call the features_importance() method once to compute the values; subsequent calls reuse the computed results.

classes_: numpy.array

The class labels.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.
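
For instance, a minimal sketch of fetching a single attribute by name once the model is fitted:

# Retrieve the regression coefficients computed during the fitting phase.
model.get_attributes("coef_")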

Examples#

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning section or the Examples section on the website.

Load data for machine learning#

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd

data = vpd.load_winequality()
(Output: the first 100 rows of the winequality vDataFrame, with 14 columns — fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol (numeric), quality, good (integer), and color (varchar).)

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets page, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)

Warning

In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
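
A minimal sketch of materializing the split with vDataFrame.to_db(); the table names and the relation_type value are illustrative assumptions:

# Persist each split as a table so later queries avoid
# re-evaluating the seeded random split.
train.to_db("my_schema.winequality_train", relation_type = "table")
test.to_db("my_schema.winequality_test", relation_type = "table")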

Balancing the Dataset#

In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy’s balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.

from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)

Note

With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.

Hint

Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.

Model Initialization#

First we import the LogisticRegression model:

from verticapy.machine_learning.vertica import LogisticRegression

Then we can create the model:

model = LogisticRegression(
    tol = 1e-6,
    max_iter = 100,
    solver = 'newton',
    fit_intercept = True,
)

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
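
A minimal sketch of creating a named model; the schema and model name are illustrative assumptions:

# An explicit name lets you retrieve or version the model later.
named_model = LogisticRegression(
    "my_schema.lr_winequality",
    overwrite_model = True,
)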

Model Training#

We can now fit the model:

model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
    test,
)

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work with X matrices and y vectors; instead, we work directly with lists of predictors and the response name.
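
For example, a minimal sketch of training from a relation name instead of a vDataFrame; the schema is an illustrative assumption:

# The input relation can be any table or view stored in the database.
model.fit(
    "public.winequality",
    ["volatile_acidity", "chlorides", "density"],
    "good",
)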

Features Importance#

We can conveniently get the features importance:

result = model.features_importance()

Note

For LinearModel, feature importance is computed using the coefficients. These coefficients are then normalized using the feature distribution. An activation function is applied to get the final score.
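
Once computed, the values are cached and can also be read back from the corresponding model attribute; a minimal sketch:

# Populated by the first call to features_importance().
model.features_importance_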

Metrics#

We can get the entire report using:

model.report()
metric          value
auc             0.7180360774873755
prc_auc         0.3437908329067543
accuracy        0.8003084040092521
log_loss        0.192819312539489
precision       0.40476190476190477
recall          0.06772908366533864
f1_score        0.11604095563139931
mcc             0.09781667631788643
informedness    0.043828510051571845
markedness      0.21830772149497246
csi             0.06159420289855073
Rows: 1-11 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
metric          value
auc             0.7180360774873755
prc_auc         0.3437908329067543
accuracy        0.6738627602158828
log_loss        0.192819312539489
precision       0.3286852589641434
recall          0.6573705179282868
f1_score        0.4382470119521912
mcc             0.27186878823730154
informedness    0.3351907856147114
markedness      0.22050915833521256
csi             0.28061224489795916
Rows: 1-11 | Columns: 2

You can also use the LinearModel.score function to compute any classification metric. The default metric is the accuracy:

model.score()
Out[2]: 0.8003084040092521
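
Any metric from the report can be requested through the metric parameter, optionally combined with a custom cutoff. For example:

# F1-score at a 0.2 probability cutoff.
model.score(metric = "f1", cutoff = 0.2)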

Prediction#

Prediction is straightforward:

model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
(Output: the first 100 rows of the test vDataFrame — the 14 winequality columns plus a new integer column, prediction, containing the predicted class.)

Note

Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
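
A minimal sketch of predicting without listing the predictors, assuming the test set columns match the predictors used at fit time:

# The column names must match the model's predictors and response.
model.predict(test, name = "prediction")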

Probabilities#

It is also easy to get the model’s probabilities:

model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
(Output: the first 100 rows of the test vDataFrame — the 14 winequality columns plus prediction (integer) and the float columns prediction_0 and prediction_1, holding the probability of each class.)

Note

Probabilities are added to the vDataFrame, and VerticaPy uses the corresponding probability function in SQL behind the scenes. You can use the pos_label parameter to add only the probability of the selected category.
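
For example, to add only the probability of the positive class, assuming, as above, that the column names match the model's predictors:

# Adds a single column containing the probability of class 1.
model.predict_proba(
    test,
    name = "prob_good",
    pos_label = 1,
)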

Confusion Matrix#

You can obtain the confusion matrix of your choice by specifying the desired cutoff.

model.confusion_matrix(cutoff = 0.5)
Out[3]: 
array([[1021,   25],
       [ 234,   17]])

Note

In classification, the cutoff is a threshold value used to determine class assignment based on predicted probabilities or scores from a classification model. In binary classification, if the predicted probability for a specific class is greater than or equal to the cutoff, the instance is assigned to the positive class; otherwise, it is assigned to the negative class. Adjusting the cutoff allows for trade-offs between true positives and false positives, enabling the model to be optimized for specific objectives or to consider the relative costs of different classification errors. The choice of cutoff is critical for tailoring the model’s performance to meet specific needs.
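
A minimal sketch of comparing confusion matrices across several cutoffs:

# Lowering the cutoff trades false negatives for false positives.
for cutoff in [0.2, 0.35, 0.5]:
    print(cutoff, model.confusion_matrix(cutoff = cutoff))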

Main Plots (Classification Curves)#

Classification models allow for the creation of various plots that are very helpful in understanding the model, such as the ROC Curve, PRC Curve, Cutoff Curve, Gain Curve, and more.

Most of the classification curves can be found in the Machine Learning - Classification Curve.

For example, let’s draw the model’s ROC curve.

model.roc_curve()

Important

Most of the curves have a parameter called nbins, which is essential for estimating metrics. The larger the nbins, the more precise the estimation, but it can significantly impact performance. Exercise caution when increasing this parameter excessively.

Hint

In binary classification, various curves can be easily plotted. However, in multi-class classification, it’s important to select the pos_label, representing the class to be treated as positive when drawing the curve.
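
For instance, the PRC curve is drawn the same way, optionally with a custom number of bins:

# Fewer bins are faster; more bins give a more precise estimation.
model.prc_curve(nbins = 300)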

Other Plots#

If the model allows, you can also generate relevant plots. For example, classification plots can be found in the Machine Learning - Classification Plots.

model.plot()

Important

The plotting feature is typically suitable for models with fewer than three predictors.

The contour plot is another useful plot that can be produced for models with two predictors.

model.contour()

Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.

Parameter Modification#

In order to see the parameters:

model.get_params()
Out[4]: 
{'penalty': 'none',
 'tol': 1e-06,
 'max_iter': 100,
 'solver': 'newton',
 'fit_intercept': True}

And to manually change some of the parameters:

model.set_params({'tol': 0.001})

Model Register#

In order to register the model for tracking and versioning:

model.register("model_v1")

Please refer to Model Tracking and Versioning for more details on model tracking and versioning.

Model Exporting#

To Memmodel

model.to_memmodel()

Note

MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.

The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
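
A minimal sketch of pickling the in-memory model and scoring with it, assuming the returned object exposes a predict() method accepting a list of rows:

import pickle

mm = model.to_memmodel()

# Persist the in-memory model, just like a scikit-learn model.
with open("lr_winequality.pkl", "wb") as f:
    pickle.dump(mm, f)

# In-memory scoring, with no database connection required.
mm.predict([[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]])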

To SQL

You can get the SQL code by:

model.to_sql()
Out[6]: '((1 / (1 + EXP(- (431.892701297998 + 0.432721801631203 * "fixed_acidity" + -1.17605582782568 * "volatile_acidity" + 0.0700977324896712 * "citric_acid" + 0.128897499413057 * "residual_sugar" + -2.63847801597149 * "chlorides" + -439.204655044829 * "density")))) > 0.5)::int'

To Python

To obtain the prediction function in Python syntax, use the following code:

X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]

model.to_python()(X)
Out[8]: array([0])

Hint

The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
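
Similarly, per the return_proba parameter, you can generate a function that returns probabilities instead of hard predictions; a minimal sketch reusing the X defined above:

# Returns the probability of each class instead of the predicted label.
model.to_python(return_proba = True)(X)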

__init__(name: str = None, overwrite_model: bool = False, penalty: Literal['none', 'l1', 'l2', 'enet', None] = 'none', tol: float = 1e-06, C: int | float | Decimal = 1.0, max_iter: int = 100, solver: Literal['newton', 'bfgs', 'cgd'] = 'newton', l1_ratio: float = 0.5, fit_intercept: bool = True) None#

Methods

__init__([name, overwrite_model, penalty, ...])

classification_report([metrics, cutoff, nbins])

Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).

confusion_matrix([cutoff])

Computes the model confusion matrix.

contour([nbins, chart])

Draws the model's contour plot.

cutoff_curve([nbins, show, chart])

Draws the model Cutoff curve.

deploySQL([X, cutoff])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

features_importance([show, chart])

Computes the model's features importance.

fit(input_relation, X, y[, test_relation, ...])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

lift_chart([nbins, show, chart])

Draws the model Lift Chart.

plot([max_nb_points, chart])

Draws the model.

prc_curve([nbins, show, chart])

Draws the model PRC curve.

predict(vdf[, X, name, cutoff, inplace])

Makes predictions on the input relation.

predict_proba(vdf[, X, name, pos_label, inplace])

Returns the model's probabilities using the input relation.

register(registered_name[, raise_error])

Registers the model and adds it to the in-DB model versioning environment with a status of 'under_review'.

report([metrics, cutoff, nbins])

Computes a classification report using multiple model evaluation metrics (auc, accuracy, f1...).

roc_curve([nbins, show, chart])

Draws the model ROC curve.

score([metric, cutoff, nbins])

Computes the model score.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_memmodel()

Converts the model to an InMemory object that can be used for different types of predictions.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

Attributes