
verticapy.machine_learning.vertica.linear_model.PoissonRegressor

class verticapy.machine_learning.vertica.linear_model.PoissonRegressor(name: str = None, overwrite_model: bool = False, tol: float = 1e-06, penalty: Literal['none', 'l2', None] = 'none', C: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, max_iter: int = 100, solver: Literal['newton'] = 'newton', fit_intercept: bool = True)

Creates a PoissonRegressor object using the Vertica Poisson Regression algorithm.

Parameters

name: str, optional

Name of the model. The model is stored in the database.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

tol: float, optional

Convergence tolerance: determines whether the algorithm has reached the specified accuracy result.

penalty: str, optional

Determines the method of regularization.

  • None:

    No Regularization.

  • l2:

    L2 Regularization.

C: PythonNumber, optional

The regularization parameter value. The value must be non-negative.

max_iter: int, optional

Determines the maximum number of iterations the algorithm performs before achieving the specified accuracy result.

solver: str, optional

The optimizer method used to train the model.

  • newton:

    Newton Method.

fit_intercept: bool, optional

Specifies whether the model includes an intercept. If set to False, no intercept is used in training the model. Note that setting fit_intercept to False does not work well with the BFGS optimizer.

Attributes

Many attributes are created during the fitting phase.

coef_: numpy.array

The regression coefficients. The order of coefficients is the same as the order of columns used during the fitting phase.

intercept_: float

The expected value of the dependent variable when all independent variables are zero, serving as the baseline or constant term in the model.

features_importance_: numpy.array

The importance of features is computed from the model coefficients, which are normalized based on their range; an activation function then calculates the final score. You must call the features_importance() method once to compute the values; subsequent calls reuse the stored results.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.
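For example, once a model has been fitted, a minimal sketch (attribute names follow the listing above):

# List the available attributes, then fetch the coefficients.
model.get_attributes()
model.get_attributes("coef_")

# Vertica-side attributes, such as the 'details' table, are also reachable.
model.get_vertica_attributes("details")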

Examples

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Load data for machine learning

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output: first 100 rows of the winequality vDataFrame. Columns:
fixed_acidity (Numeric(8)), volatile_acidity (Numeric(9)),
citric_acid (Numeric(8)), residual_sugar (Numeric(9)), chlorides (Float(22)),
free_sulfur_dioxide (Numeric(9)), total_sulfur_dioxide (Numeric(9)),
density (Float(22)), pH (Numeric(8)), sulphates (Numeric(8)),
alcohol (Float(22)), quality (Integer), good (Integer), color (Varchar(20)).]
Rows: 1-100 | Columns: 14

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)

Warning

In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
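As a sketch of that approach (the table names are illustrative), each split can be materialized before training:

# Persist the splits so the seeded randomization is not re-evaluated
# on every downstream query.
train.to_db("public.winequality_train", relation_type = "table")
test.to_db("public.winequality_test", relation_type = "table")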

Model Initialization

First we import the PoissonRegressor model:

from verticapy.machine_learning.vertica import PoissonRegressor

Then we can create the model:

model = PoissonRegressor(
    tol = 1e-6,
    penalty = 'l2',
    C = 1,
    max_iter = 100,
    fit_intercept = True,
)

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to reuse the model, you can fetch the model name from the model’s attributes.

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
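For example, a named model can be created as follows (the name is illustrative):

model = PoissonRegressor(
    "public.poisson_wine_v1",  # explicit name stored in the database
    overwrite_model = True,    # replace any existing model with this name
    tol = 1e-6,
    penalty = 'l2',
    C = 1,
    max_iter = 100,
)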

Model Training

We can now fit the model:

model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "quality",
    test,
)



=======
details
=======
   predictor    |coefficient|std_err |z_value |p_value 
----------------+-----------+--------+--------+--------
   Intercept    |  3.65794  | 0.94834| 3.85719| 0.00011
 fixed_acidity  |  0.00092  | 0.00534| 0.17155| 0.86379
volatile_acidity| -0.20763  | 0.04511|-4.60246| 0.00000
  citric_acid   |  0.02345  | 0.04964| 0.47232| 0.63670
 residual_sugar | -0.00262  | 0.00133|-1.96490| 0.04943
   chlorides    | -0.55681  | 0.19276|-2.88854| 0.00387
    density     | -1.80625  | 0.96386|-1.87397| 0.06093


==============
regularization
==============
type| lambda 
----+--------
 l2 | 1.00000


===========
call_string
===========
poisson_reg('"public"."_verticapy_tmp_poissonregressor_v_demo_2d640b5a55a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_2d7315b455a411ef880f0242ac120002_"', '"quality"', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"'
USING PARAMETERS optimizer='newton', epsilon=1e-06, max_iterations=100, regularization='l2', lambda=1, alpha=0.5, fit_intercept=true)

===============
Additional Info
===============
       Name       |Value
------------------+-----
 iteration_count  |  8  
rejected_row_count|  0  
accepted_row_count|5192 

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
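For instance, assuming the training split was saved to a table (name illustrative), the model can be trained directly from the relation name:

model.fit(
    "public.winequality_train",
    ["fixed_acidity", "volatile_acidity", "citric_acid"],
    "quality",
)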

Metrics

We can get the entire report using:

model.report()
         metric          |       value
-------------------------+--------------------
explained_variance       | 0.0923429105128023
max_error                | 3.0186250311259
median_absolute_error    | 0.570532817904302
mean_absolute_error      | 0.631846275437284
mean_squared_error       | 0.668059644914875
root_mean_squared_error  | 0.81734915728523
r2                       | 0.0920520986208625
r2_adj                   | 0.0878551129442254
aic                      | -512.246143826135
bic                      | -476.190347692694
Rows: 1-10 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["mse", "r2"]).

For LinearModel, we can easily get the ANOVA table using:

model.report(metrics = "anova")
           |  Df  |        SS        |         MS         |         F         |       p_value
-----------+------+------------------+--------------------+-------------------+----------------------
Regression |    6 | 100.88444511974  | 16.814074186623333 | 25.03351890459452 | 3.146018901546638e-28
Residual   | 1298 | 871.817836613913 | 0.6716624319059422 |                   |
Total      | 1304 | 960.206896551724 |                    |                   |
Rows: 1-3 | Columns: 6

You can also use the LinearModel.score function to compute the R-squared value:

model.score()
Out[2]: 0.0920520986208624

Prediction

Prediction is straightforward:

model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: first 100 rows of the test vDataFrame with the new prediction column
appended. Columns: the 14 winequality columns listed above, plus
prediction (Float(22)).]
Rows: 1-100 | Columns: 15

Note

Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
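A minimal sketch of that shortcut:

# The predictor list can be omitted when the vDataFrame columns match
# the names used at fit time.
model.predict(test, name = "prediction")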

Plots

If the model allows, you can also generate relevant plots. For example, regression plots are described in Machine Learning - Regression Plots.

model.plot()

Important

The plotting feature is typically suitable for models with fewer than three predictors.

Parameter Modification

In order to see the parameters:

model.get_params()
Out[3]: 
{'penalty': 'l2',
 'tol': 1e-06,
 'C': 1,
 'max_iter': 100,
 'solver': 'newton',
 'fit_intercept': True}

And to manually change some of the parameters:

model.set_params({'tol': 0.001})

Model Register

In order to register the model for tracking and versioning:

model.register("model_v1")

Please refer to Model Tracking and Versioning for more details on model tracking and versioning.

Model Exporting

To MemModel

model.to_memmodel()

Note

MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.

The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
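For example, a minimal sketch of in-memory scoring with the exported object (the feature order matches the fit call):

mmodel = model.to_memmodel()

# Score one observation without touching the database.
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
mmodel.predict(X)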

To SQL

You can get the SQL code by:

model.to_sql()
Out[5]: '3.65793639898766 + 0.000916177178773125 * "fixed_acidity" + -0.207630569132817 * "volatile_acidity" + 0.0234463122665005 * "citric_acid" + -0.00261517074737228 * "residual_sugar" + -0.556805941021139 * "chlorides" + -1.80624981742585 * "density"'
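The returned expression can be embedded in any SQL query. A sketch, assuming a winequality table with matching column names (and that your VerticaPy version accepts raw SQL in the vDataFrame constructor):

# Build a standalone scoring query from the deployed expression.
query = (
    "SELECT quality, " + model.to_sql() + " AS prediction "
    "FROM public.winequality"
)
vp.vDataFrame(query)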

To Python

To obtain the prediction function in Python syntax, use the following code:

X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]

model.to_python()(X)
Out[7]: array([1.82606644])

Hint

The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.

__init__(name: str = None, overwrite_model: bool = False, tol: float = 1e-06, penalty: Literal['none', 'l2', None] = 'none', C: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, max_iter: int = 100, solver: Literal['newton'] = 'newton', fit_intercept: bool = True) → None

Methods

__init__([name, overwrite_model, tol, ...])

contour([nbins, chart])

Draws the model's contour plot.

deploySQL([X])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

features_importance([show, chart])

Computes the model's features importance.

fit(input_relation, X, y[, test_relation, ...])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

plot([max_nb_points, chart])

Draws the model.

predict(vdf[, X, name, inplace])

Predicts using the input relation.

register(registered_name[, raise_error])

Registers the model and adds it to the in-DB model versioning environment with a status of 'under_review'.

regression_report([metrics])

Computes a regression report.

report([metrics])

Computes a regression report.

score([metric])

Computes the model score.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_memmodel()

Converts the model to an InMemory object that can be used for different types of predictions.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

Attributes

object_type