
verticapy.machine_learning.vertica.linear_model.PoissonRegressor¶
- class verticapy.machine_learning.vertica.linear_model.PoissonRegressor(name: str = None, overwrite_model: bool = False, tol: float = 1e-06, penalty: Literal['none', 'l2', None] = 'none', C: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, max_iter: int = 100, solver: Literal['newton'] = 'newton', fit_intercept: bool = True)¶
Creates a PoissonRegressor object using the Vertica Poisson Regression algorithm.
Parameters¶
- name: str, optional
Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
If set to
True
, training a model with the same name as an existing model overwrites the existing model.- tol: float, optional
Determines whether the algorithm has reached the specified accuracy result.
- penalty: str, optional
Determines the method of regularization.
- none: No regularization.
- l2: L2 regularization.
- C: PythonNumber, optional
The regularization parameter value. The value must be non-negative.
- max_iter: int, optional
Determines the maximum number of iterations the algorithm performs before achieving the specified accuracy result.
- solver: str, optional
The optimizer method used to train the model.
- newton: Newton's method.
- fit_intercept: bool, optional
Specifies whether the model includes an intercept. If set to False, no intercept is used in training the model. Note that setting fit_intercept to False does not work well with the BFGS optimizer.
Attributes¶
Many attributes are created during the fitting phase.
- coef_: numpy.array
The regression coefficients. The order of coefficients is the same as the order of columns used during the fitting phase.
- intercept_: float
The expected value of the dependent variable when all independent variables are zero, serving as the baseline or constant term in the model.
- features_importance_: numpy.array
The importance of each feature, computed from the model coefficients, which are normalized based on their range; an activation function then calculates the final score. You must call the features_importance() method once to compute the values; subsequent calls reuse the stored results (see the sketch below).
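For example, a minimal sketch of this two-step pattern (assuming a fitted model named model):
model.features_importance(show = False)  # compute and cache the importances
model.features_importance_               # read the cached numpy.array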
Note
All attributes can be accessed using the get_attributes() method.
Note
Several other attributes can be accessed by using the get_vertica_attributes() method.
Examples¶
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Load data for machine learning¶
We import verticapy:
import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like "average" and "median", which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.
For this example, we will use the winequality dataset.
import verticapy.datasets as vpd
data = vpd.load_winequality()
[Output: the first 100 rows of the winequality vDataFrame, with columns fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, and color. Rows: 1-100 | Columns: 14]
Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.
data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
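A quick sanity check of the resulting split sizes (a sketch; shape() returns the number of rows and columns):
train.shape(), test.shape()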
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables, as sketched below. This will help enhance the overall performance of the process.
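A hedged sketch of that approach (the relation names are illustrative):
train.to_db('"public"."winequality_train"', relation_type = "table")
test.to_db('"public"."winequality_test"', relation_type = "table")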
Model Initialization¶
First we import the PoissonRegressor model:
from verticapy.machine_learning.vertica import PoissonRegressor
Then we can create the model:
model = PoissonRegressor(
    tol = 1e-6,
    penalty = 'L2',
    C = 1,
    max_iter = 100,
    fit_intercept = True,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
Model Training¶
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "quality",
    test,
)

=======
details
=======
predictor       |coefficient|std_err |z_value |p_value
----------------+-----------+--------+--------+--------
Intercept       |  3.65794  | 0.94834| 3.85719| 0.00011
fixed_acidity   |  0.00092  | 0.00534| 0.17155| 0.86379
volatile_acidity| -0.20763  | 0.04511|-4.60246| 0.00000
citric_acid     |  0.02345  | 0.04964| 0.47232| 0.63670
residual_sugar  | -0.00262  | 0.00133|-1.96490| 0.04943
chlorides       | -0.55681  | 0.19276|-2.88854| 0.00387
density         | -1.80625  | 0.96386|-1.87397| 0.06093

==============
regularization
==============
type| lambda
----+--------
l2  | 1.00000

===========
call_string
===========
poisson_reg('"public"."_verticapy_tmp_poissonregressor_v_demo_2d640b5a55a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_2d7315b455a411ef880f0242ac120002_"', '"quality"', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS optimizer='newton', epsilon=1e-06, max_iterations=100, regularization='l2', lambda=1, alpha=0.5, fit_intercept=true)

===============
Additional Info
===============
Name              |Value
------------------+-----
iteration_count   |    8
rejected_row_count|    0
accepted_row_count| 5192
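After fitting, the learned parameters are exposed as model attributes (see the Attributes section above):
model.coef_       # regression coefficients, in the same order as the predictor list
model.intercept_  # the fitted intercept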
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
Metrics¶
We can get the entire report using:
model.report()
                          value
explained_variance        0.0923429105128023
max_error                 3.0186250311259
median_absolute_error     0.570532817904302
mean_absolute_error       0.631846275437284
mean_squared_error        0.668059644914875
root_mean_squared_error   0.81734915728523
r2                        0.0920520986208625
r2_adj                    0.0878551129442254
aic                       -512.246143826135
bic                       -476.190347692694
Rows: 1-10 | Columns: 2
Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["mse", "r2"]).
For LinearModel, we can easily get the ANOVA table using:
model.report(metrics = "anova")
            Df    SS                MS                  F                  p_value
Regression  6     100.88444511974   16.814074186623333  25.03351890459452  3.146018901546638e-28
Residual    1298  871.817836613913  0.6716624319059422
Total       1304  960.206896551724
Rows: 1-3 | Columns: 6
You can also use the LinearModel.score function to compute the R-squared value:
model.score()
Out[2]: 0.0920520986208624
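Other regression metrics can be selected through the metric parameter; a minimal sketch (the metric name "mae" is assumed from the metrics listed in the report above):
model.score(metric = "mae")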
Prediction¶
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the test-set vDataFrame with an appended "prediction" column. Rows: 1-100 | Columns: 15]
Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
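A minimal sketch of the shorter call (assuming the fitted model and test set from above):
model.predict(test, name = "prediction")  # predictors inferred from matching column names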
Plots¶
If the model allows, you can also generate relevant plots. For example, regression plots can be found in the Machine Learning - Regression Plots.
model.plot()
Important
The plotting feature is typically suitable for models with fewer than three predictors.
Parameter Modification¶
In order to see the parameters:
model.get_params()
Out[3]:
{'penalty': 'l2',
 'tol': 1e-06,
 'C': 1,
 'max_iter': 100,
 'solver': 'newton',
 'fit_intercept': True}
And to manually change some of the parameters:
model.set_params({'tol': 0.001})
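A quick sketch to verify that the change took effect:
model.get_params()["tol"]  # expected: 0.001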
Model Register¶
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting¶
To MemModel
model.to_memmodel()
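A minimal sketch of local scoring with the in-memory object (the input row must follow the predictor order used at fit time):
mmodel = model.to_memmodel()
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
mmodel.predict(X)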
Note
MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model.
The following methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
To SQL
You can get the SQL code by:
model.to_sql()
Out[5]: '3.65793639898766 + 0.000916177178773125 * "fixed_acidity" + -0.207630569132817 * "volatile_acidity" + 0.0234463122665005 * "citric_acid" + -0.00261517074737228 * "residual_sugar" + -0.556805941021139 * "chlorides" + -1.80624981742585 * "density"'
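A hedged sketch of how the expression can be embedded in a SQL query (the relation name public.winequality is illustrative):
query = f"SELECT {model.to_sql()} AS prediction FROM public.winequality"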
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
model.to_python()(X)
Out[7]: array([1.82606644])
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
- __init__(name: str = None, overwrite_model: bool = False, tol: float = 1e-06, penalty: Literal['none', 'l2', None] = 'none', C: Annotated[int | float | Decimal, 'Python Numbers'] = 1.0, max_iter: int = 100, solver: Literal['newton'] = 'newton', fit_intercept: bool = True) → None¶
Methods
- __init__([name, overwrite_model, tol, ...])
- contour([nbins, chart]): Draws the model's contour plot.
- deploySQL([X]): Returns the SQL code needed to deploy the model.
- does_model_exists(name[, raise_error, ...]): Checks whether the model is stored in the Vertica database.
- drop(): Drops the model from the Vertica database.
- export_models(name, path[, kind]): Exports machine learning models.
- features_importance([show, chart]): Computes the model's features importance.
- fit(input_relation, X, y[, test_relation, ...]): Trains the model.
- get_attributes([attr_name]): Returns the model attributes.
- get_match_index(x, col_list[, str_check]): Returns the matching index.
- get_params(): Returns the parameters of the model.
- get_plotting_lib([class_name, chart, ...]): Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
- get_vertica_attributes([attr_name]): Returns the model Vertica attributes.
- import_models(path[, schema, kind]): Imports machine learning models.
- plot([max_nb_points, chart]): Draws the model.
- predict(vdf[, X, name, inplace]): Predicts using the input relation.
- register(registered_name[, raise_error]): Registers the model and adds it to the in-DB Model versioning environment with a status of 'under_review'.
- regression_report([metrics]): Computes a regression report.
- report([metrics]): Computes a regression report.
- score([metric]): Computes the model score.
- set_params([parameters]): Sets the parameters of the model.
- summarize(): Summarizes the model.
- to_binary(path): Exports the model to the Vertica Binary format.
- to_memmodel(): Converts the model to an InMemory object that can be used for different types of predictions.
- to_pmml(path): Exports the model to PMML.
- to_python([return_proba, ...]): Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
- to_sql([X, return_proba, ...]): Returns the SQL code needed to deploy the model without using built-in Vertica functions.
- to_tf(path): Exports the model to the Frozen Graph format (TensorFlow).
Attributes
- object_type