verticapy.machine_learning.vertica.tree.DecisionTreeClassifier

class verticapy.machine_learning.vertica.tree.DecisionTreeClassifier(name: str = None, overwrite_model: bool = False, max_features: Literal['auto', 'max'] | int = 'auto', max_leaf_nodes: Annotated[int | float | Decimal, 'Python Numbers'] = 1000000000.0, max_depth: int = 100, min_samples_leaf: int = 1, min_info_gain: Annotated[int | float | Decimal, 'Python Numbers'] = 0.0, nbins: int = 32)

A DecisionTreeClassifier consisting of a single tree.

Parameters

name: str, optional

Name of the model. The model is stored in the database.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

max_features: str / int, optional

The number of randomly chosen features from which to pick the best feature to split on a given tree node. It can be an integer or one of the following two methods.

  • auto:

    square root of the total number of predictors.

  • max:

    number of predictors.

max_leaf_nodes: PythonNumber, optional

The maximum number of leaf nodes for a tree in the forest, an integer between 1 and 1e9, inclusive.

max_depth: int, optional

The maximum depth for growing each tree, an integer between 1 and 100, inclusive.

min_samples_leaf: int, optional

The minimum number of samples each branch must have after a node is split, an integer between 1 and 1e6, inclusive. Any split that results in fewer remaining samples is discarded.

min_info_gain: PythonNumber, optional

The minimum threshold for including a split, a float between 0.0 and 1.0, inclusive. A split with information gain less than this threshold is discarded.

nbins: int, optional

The number of bins to use for continuous features, an integer between 2 and 1000, inclusive.

Attributes

Many attributes are created during the fitting phase.

trees_: list of BinaryTreeClassifier

A list containing the single tree model, which is an instance of BinaryTreeClassifier and possesses various attributes. For more detailed information, refer to the documentation for BinaryTreeClassifier().

features_importance_: numpy.array

The importance of features. It is calculated using the MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. The features_importance() method must be called once to compute the scores; the computed values are then cached and reused for subsequent calls.

classes_: numpy.array

The class labels.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.
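
As a minimal sketch (assuming a fitted model object named model), both kinds of attributes can be read back as follows; the attribute name passed below is illustrative:

# Sketch only: `model` must already be fitted.
model.get_attributes()            # list the available attribute names
model.get_attributes("classes_")  # fetch one attribute by name
model.get_vertica_attributes()    # attributes computed by Vertica itself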

Examples

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Important

Many tree-based models inherit from the RandomForest base class, and it’s recommended to use it directly for access to a wider range of options.

Load data for machine learning

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output: the winequality vDataFrame — columns fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good (numeric) and color (varchar). Rows: 1-100 | Columns: 14]

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)

Warning

In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
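
For instance, a hedged sketch of materializing the split with vDataFrame.to_db() (the table names below are illustrative):

# Sketch only: save each split to a regular table for faster reuse.
train.to_db("public.winequality_train", relation_type = "table")
test.to_db("public.winequality_test", relation_type = "table")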

Balancing the Dataset

In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy’s balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.

from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)

Note

With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.

Hint

Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across different classes, the model becomes more adept at learning patterns from all classes rather than being biased towards the majority class. This, in turn, enhances the model’s ability to make accurate predictions for under-represented classes. The balanced dataset ensures that the model is not dominated by the majority class and, as a result, leads to more robust and unbiased model performance. Therefore, by employing techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation, practitioners can significantly contribute to achieving higher accuracy and better generalization of their machine learning models.

Model Initialization

First we import the DecisionTreeClassifier model:

from verticapy.machine_learning.vertica import DecisionTreeClassifier

Then we can create the model:

model = DecisionTreeClassifier(
    max_features = "auto",
    max_leaf_nodes = 32,
    max_depth = 3,
    min_samples_leaf = 5,
    min_info_gain = 0.0,
    nbins = 32,
)

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
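
A minimal sketch of retrieving the auto-assigned name for later reuse, assuming the model object created above:

# Sketch only: the generated name can be read back from the model.
print(model.model_name)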

Model Training

We can now fit the model:

model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "good",
    test,
)



===========
call_string
===========
SELECT rf_classifier('"public"."_verticapy_tmp_randomforestclassifier_v_demo_fb75e43c55a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_fb854a2655a411ef880f0242ac120002_"', 'good', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS exclude_columns='', ntree=1, mtry=3, sampling_size=1, max_depth=3, max_breadth=32, min_leaf_size=5, min_info_gain=0, nbins=32);

=======
details
=======
   predictor    |      type      
----------------+----------------
 fixed_acidity  |float or numeric
volatile_acidity|float or numeric
  citric_acid   |float or numeric
 residual_sugar |float or numeric
   chlorides    |float or numeric
    density     |float or numeric


===============
Additional Info
===============
       Name       |Value
------------------+-----
    tree_count    |  1  
rejected_row_count|  0  
accepted_row_count|5195 

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
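
For example, a hedged sketch of fitting from a relation name instead of a vDataFrame (the table name below is illustrative and this call would retrain the model):

# Sketch only: the input relation may also be a string naming a table
# or view stored in the database.
model.fit(
    "public.winequality_train",
    ["fixed_acidity", "volatile_acidity", "citric_acid"],
    "good",
)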

Features Importance

We can conveniently get the features importance:

result = model.features_importance()

Note

In models such as RandomForest, feature importance is calculated using the MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them and applies an activation function to scale them.
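
A minimal sketch of reading the cached scores back after the first computation (the show parameter is assumed from the current API):

# Sketch only: compute once without plotting, then reuse the cache.
result = model.features_importance(show = False)
model.features_importance_  # cached numpy array of scores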

Metrics

We can get the entire report using:

model.report()
    metric    |        value
--------------+---------------------
auc           | 0.7468945232673556
prc_auc       | 0.46441582138894216
accuracy      | 0.8033794162826421
log_loss      | 0.191153481288304
precision     | 0.5454545454545454
recall        | 0.18181818181818182
f1_score      | 0.2727272727272727
mcc           | 0.22947914410412062
informedness  | 0.1432825363461201
markedness    | 0.3675303279916129
csi           | 0.15789473684210525

Rows: 1-11 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
    metric    |        value
--------------+---------------------
auc           | 0.7468945232673556
prc_auc       | 0.46441582138894216
accuracy      | 0.7204301075268817
log_loss      | 0.191153481288304
precision     | 0.38095238095238093
recall        | 0.6060606060606061
f1_score      | 0.4678362573099415
mcc           | 0.3058283188048218
informedness  | 0.35557891049220536
markedness    | 0.2630385487528346
csi           | 0.3053435114503817

Rows: 1-11 | Columns: 2

You can also use the score() function to compute any classification metric. The default metric is accuracy:

model.score()
Out[3]: 0.8033794162826421
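
Other metrics can be requested by name; a hedged sketch, assuming the metric parameter of the current API:

# Sketch only: request a metric other than the default accuracy.
model.score(metric = "f1")   # F1-score
model.score(metric = "auc")  # area under the ROC curve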

Prediction

Prediction is straightforward:

model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "prediction",
)
[Output: the test vDataFrame with the 14 winequality columns plus a new prediction column (varchar). Rows: 1-100 | Columns: 15]

Note

Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
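
A minimal sketch of the shorter call, assuming the test set's column names match the model's predictors:

# Sketch only: predictors are inferred from matching column names.
model.predict(test, name = "prediction")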

Probabilities

It is also easy to get the model’s probabilities:

model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "prediction",
)
[Output: the test vDataFrame with the 14 winequality columns plus prediction, prediction_0, and prediction_1 columns holding the predicted class and the per-class probabilities.]
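
To retrieve the probability of a single class only, a hedged sketch (the pos_label parameter is assumed from the current API):

# Sketch only: add one column with the probability of class 1.
model.predict_proba(test, name = "prediction", pos_label = 1)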