verticapy.machine_learning.vertica.tree.DummyTreeClassifier

class verticapy.machine_learning.vertica.tree.DummyTreeClassifier(name: str = None, overwrite_model: bool = False)

A classifier that overfits the training data. These models are typically used as a control to compare with your other models.

Parameters

name: str, optional

Name of the model. The model is stored in the database.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

Attributes

Many attributes are created during the fitting phase.

trees_: list of one BinaryTreeClassifier

A single tree model that is an instance of BinaryTreeClassifier. It possesses various attributes. For more detailed information, refer to the documentation for BinaryTreeClassifier().

features_importance_: numpy.array

The importance of features, calculated using MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. The features_importance() method must be called once to compute the values; they are then cached and reused in subsequent calls.

classes_: numpy.array

The classes labels.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.
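
For instance (a minimal sketch, assuming a fitted model named model):

model.get_attributes()            # list the available attribute names
model.get_attributes("classes_")  # fetch the value of a specific attribute
model.get_vertica_attributes()    # attributes computed in the database by Vertica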

Examples

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Important

Many tree-based models inherit from the RandomForest base class, and it’s recommended to use it directly for access to a wider range of options.

Load data for machine learning

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which could otherwise conflict with functions from other libraries. The alias ensures that verticapy’s functions are used as intended.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd

data = vpd.load_winequality()
        column        |    type
----------------------+-------------
 fixed_acidity        | Numeric(8)
 volatile_acidity     | Numeric(9)
 citric_acid          | Numeric(8)
 residual_sugar       | Numeric(9)
 chlorides            | Float(22)
 free_sulfur_dioxide  | Numeric(9)
 total_sulfur_dioxide | Numeric(9)
 density              | Float(22)
 pH                   | Numeric(8)
 sulphates            | Numeric(8)
 alcohol              | Float(22)
 quality              | Integer
 good                 | Integer
 color                | Varchar(20)
Rows: 1-100 | Columns: 14

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)

Warning

In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
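
A minimal sketch of that approach, using illustrative table names that are not part of the original example:

# Persist the splits as tables to avoid re-evaluating the seeded random split.
train.to_db("public.winequality_train", relation_type = "table")
test.to_db("public.winequality_test", relation_type = "table")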

Balancing the Dataset

In VerticaPy, balancing a dataset to address class imbalances is made straightforward through the balance() function within the preprocessing module. This function enables users to rectify skewed class distributions efficiently. By specifying the target variable and setting parameters like the method for balancing, users can effortlessly achieve a more equitable representation of classes in their dataset. Whether opting for over-sampling, under-sampling, or a combination of both, VerticaPy’s balance() function streamlines the process, empowering users to enhance the performance and fairness of their machine learning models trained on imbalanced data.

To balance the dataset, use the following syntax.

from verticapy.machine_learning.vertica.preprocessing import balance

balanced_train = balance(
    name = "my_schema.train_balanced",
    input_relation = train,
    y = "good",
    method = "hybrid",
)

Note

With this code, a table named train_balanced is created in the my_schema schema. It can then be used to train the model. In the rest of the example, we will work with the full dataset.

Hint

Balancing the dataset is a crucial step in improving the accuracy of machine learning models, particularly when faced with imbalanced class distributions. By addressing disparities in the number of instances across classes, the model becomes more adept at learning patterns from all classes rather than being biased toward the majority class, which in turn improves its predictions for under-represented classes and yields more robust, unbiased performance. Techniques such as over-sampling, under-sampling, or a combination of both during dataset preparation can therefore contribute significantly to higher accuracy and better generalization.

Model Initialization

First we import the DummyTreeClassifier model:

from verticapy.machine_learning.vertica import DummyTreeClassifier

Then we can create the model:

model = DummyTreeClassifier()

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.
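
For instance, the assigned name can be read back from the model object (a minimal sketch):

model.model_name  # the name under which the model is stored in the database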

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.

Model Training

We can now fit the model:

model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "good",
    test,
)



===========
call_string
===========
SELECT rf_classifier('"public"."_verticapy_tmp_randomforestclassifier_v_demo_1d56aa7855a511ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_1d65876455a511ef880f0242ac120002_"', 'good', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS exclude_columns='', ntree=1, mtry=6, sampling_size=1, max_depth=100, max_breadth=1000000000, min_leaf_size=1, min_info_gain=0, nbins=1000);

=======
details
=======
   predictor    |      type      
----------------+----------------
 fixed_acidity  |float or numeric
volatile_acidity|float or numeric
  citric_acid   |float or numeric
 residual_sugar |float or numeric
   chlorides    |float or numeric
    density     |float or numeric


===============
Additional Info
===============
       Name       |Value
------------------+-----
    tree_count    |  1  
rejected_row_count|  0  
accepted_row_count|5197 

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work with X matrices and y vectors; instead, we work directly with lists of predictors and the response name.
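
For instance, if the training data had been saved to a table, the same fit could reference the relation by name (a sketch; the table name is illustrative):

model.fit(
    "public.winequality_train",  # illustrative name of a relation stored in the database
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "good",
)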

Features Importance

We can conveniently get the features importance:

result = model.features_importance()

Note

In models such as RandomForest, feature importance is calculated using the MDI (Mean Decreased Impurity). To determine the final score, VerticaPy sums the scores of each tree, normalizes them and applies an activation function to scale them.
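
Once computed, the scores are cached on the model, so subsequent accesses do not trigger a recomputation:

model.features_importance_  # cached numpy.array of the importance scores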

Metrics

We can get the entire report using:

model.report()
              |        value
--------------+---------------------
 auc          | 0.7355160393593084
 prc_auc      | 0.6080504991192017
 accuracy     | 0.8123076923076923
 log_loss     | 16.8923076923077
 precision    | 0.53
 recall       | 0.6068702290076335
 f1_score     | 0.5658362989323843
 mcc          | 0.4484759721689103
 informedness | 0.4710320787186162
 markedness   | 0.42700000000000005
 csi          | 0.3945409429280397
Rows: 1-11 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["auc", "accuracy"]).

For classification models, we can easily modify the cutoff to observe the effect on different metrics:

model.report(cutoff = 0.2)
              |        value
--------------+---------------------
 auc          | 0.7355160393593084
 prc_auc      | 0.6080504991192017
 accuracy     | 0.8123076923076923
 log_loss     | 16.8923076923077
 precision    | 0.53
 recall       | 0.6068702290076335
 f1_score     | 0.5658362989323843
 mcc          | 0.4484759721689103
 informedness | 0.4710320787186162
 markedness   | 0.42700000000000005
 csi          | 0.3945409429280397
Rows: 1-11 | Columns: 2

You can also use the score() function to compute any classification metric. The default metric is accuracy:

model.score()
Out[3]: 0.8123076923076923
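
Other metrics can be requested through the metric parameter (a sketch; "f1" is one illustrative choice):

model.score(metric = "f1")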

Prediction

Prediction is straightforward:

model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "prediction",
)
        column        |    type
----------------------+-------------
 fixed_acidity        | Numeric(8)
 volatile_acidity     | Numeric(9)
 citric_acid          | Numeric(8)
 residual_sugar       | Numeric(9)
 chlorides            | Float(22)
 free_sulfur_dioxide  | Numeric(9)
 total_sulfur_dioxide | Numeric(9)
 density              | Float(22)
 pH                   | Numeric(8)
 sulphates            | Numeric(8)
 alcohol              | Float(22)
 quality              | Integer
 good                 | Integer
 color                | Varchar(20)
 prediction           | Varchar(1)
Rows: 1-100 | Columns: 15

Note

Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
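
A minimal sketch of the shorter call, assuming the test set’s column names match the model’s predictors:

model.predict(test, name = "prediction")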

Probabilities

It is also easy to get the model’s probabilities:

model.predict_proba(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density"
    ],
    "prediction",
)
        column        |     type
----------------------+--------------
 fixed_acidity        | Numeric(8)
 volatile_acidity     | Numeric(9)
 citric_acid          | Numeric(8)
 residual_sugar       | Numeric(9)
 chlorides            | Float(22)
 free_sulfur_dioxide  | Numeric(9)
 total_sulfur_dioxide | Numeric(9)
 density              | Float(22)
 pH                   | Numeric(8)
 sulphates            | Numeric(8)
 alcohol              | Float(22)
 quality              | Integer
 good                 | Integer
 color                | Varchar(20)
 prediction           | Varchar(1)
 prediction_0         | Varchar(128)
 prediction_1         | Varchar(128)