
verticapy.machine_learning.vertica.svm.LinearSVR¶
- class verticapy.machine_learning.vertica.svm.LinearSVR(name: str = None, overwrite_model: bool = False, tol: float = 0.0001, C: float = 1.0, intercept_scaling: float = 1.0, intercept_mode: Literal['regularized', 'unregularized'] = 'regularized', acceptable_error_margin: float = 0.1, max_iter: int = 100)¶
Creates a LinearSVR object using the Vertica SVM (Support Vector Machine) algorithm. This algorithm finds the hyperplane used to approximate the distribution of the data.
Parameters¶
- name: str, optional
Name of the model. The model is stored in the database.
- overwrite_model: bool, optional
If set to True, training a model with the same name as an existing model overwrites the existing model.
- tol: float, optional
Tolerance for stopping criteria. This is used to control accuracy.
- C: float, optional
Weight for misclassification cost. The algorithm minimizes the regularization cost and the misclassification cost.
- intercept_scaling: float
A float value that serves as the value of a dummy feature whose coefficient Vertica uses to calculate the model intercept. Because the dummy feature is not in the training data, its values are set to a constant (by default, 1).
- intercept_mode: str, optional
Specifies how to treat the intercept.
- regularized:
Fits the intercept and applies regularization to it.
- unregularized:
Fits the intercept but does not include it in regularization.
- acceptable_error_margin: float, optional
Defines the acceptable error margin. Any data points outside this region add a penalty to the cost function.
- max_iter: int, optional
The maximum number of iterations that the algorithm performs.
Attributes¶
Many attributes are created during the fitting phase.
- coef_: numpy.array
The regression coefficients. The order of coefficients is the same as the order of columns used during the fitting phase.
- intercept_: float
The expected value of the dependent variable when all independent variables are zero, serving as the baseline or constant term in the model.
- features_importance_: numpy.array
The importance of each feature, computed from the model coefficients, which are normalized based on their range; an activation function then calculates the final score. You must call the features_importance() method once to compute the values; subsequent calls reuse them.
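For instance, once a model has been fitted (as in the Examples below), these attributes can be read back directly. A minimal sketch, assuming a trained model named model:

# Read fitted attributes from a trained LinearSVR (sketch).
model.coef_        # numpy.array of regression coefficients
model.intercept_   # float, the constant term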
Note
All attributes can be accessed using the get_attributes() method.
Note
Several other attributes can be accessed by using the get_vertica_attributes() method.
Examples¶
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.
Load data for machine learning¶
We import verticapy:

import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like "average" and "median", which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.
For this example, we will use the winequality dataset.
import verticapy.datasets as vpd

data = vpd.load_winequality()
[Output: the first 100 rows of the winequality vDataFrame, with columns fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, and color. Rows: 1-100 | Columns: 14]

Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables (see the sketch below). This will help enhance the overall performance of the process.
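A minimal sketch of that pattern; the relation_type value and table names used here are illustrative:

# Persist the split into regular tables to avoid re-running the seeded split (sketch).
train.to_db("train_data", relation_type = "table")
test.to_db("test_data", relation_type = "table")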
Model Initialization¶
First we import the LinearSVR model:

from verticapy.machine_learning.vertica import LinearSVR
Then we can create the model:
model = LinearSVR(
    tol = 1e-4,
    C = 1.0,
    intercept_scaling = 1.0,
    intercept_mode = "regularized",
    acceptable_error_margin = 0.1,
    max_iter = 100,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model's attributes.
Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
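A minimal sketch of naming a model at creation time; the model name used here is illustrative:

# Create a named model so it can be retrieved later (sketch).
model = LinearSVR(
    "public.linear_svr_wine",   # hypothetical name
    overwrite_model = True,
)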
Model Training¶
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "quality",
    test,
)

=======
details
=======
predictor       |coefficient
----------------+-----------
Intercept       |    3.33836
fixed_acidity   |   -0.00627
volatile_acidity|   -1.39647
citric_acid     |   -0.04746
residual_sugar  |   -0.02000
chlorides       |   -0.75937
density         |    3.18272

===========
call_string
===========
SELECT svm_regressor('"public"."_verticapy_tmp_linearsvr_v_demo_efea8d4855a411ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_eff817ce55a411ef880f0242ac120002_"', '"quality"', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS error_tolerance=0.1, C=1, max_iterations=100, intercept_mode='regularized', intercept_scaling=1, epsilon=0.0001);

===============
Additional Info
===============
Name              |Value
------------------+-----
accepted_row_count| 5197
rejected_row_count|    0
iteration_count   |    7
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database (see the sketch below). The test set is optional and is only used to compute the test metrics. In verticapy, we don't work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
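A minimal sketch of fitting on a relation name instead of a vDataFrame; the relation name is illustrative:

# Train directly on a table or view stored in the database (sketch).
model.fit(
    "public.winequality",   # hypothetical relation name
    ["fixed_acidity", "volatile_acidity", "citric_acid"],
    "quality",
)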
Features Importance¶
We can conveniently get the features importance:
result = model.features_importance()
Note
For LinearModel, feature importance is computed using the coefficients. These coefficients are then normalized using the feature distribution. An activation function is applied to get the final score.
Metrics¶
We can get the entire report using:
model.report()
                          value
explained_variance        0.0930381379144025
max_error                 3.00895862059503
median_absolute_error     0.559560190859324
mean_absolute_error       0.633405217886435
mean_squared_error        0.694760165402962
root_mean_squared_error   0.833522744382517
r2                        0.0926372182085375
r2_adj                    0.0884267180610134
aic                       -459.282613641858
bic                       -423.254315537401
Rows: 1-10 | Columns: 2

Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["mse", "r2"]).

For LinearModel, we can easily get the ANOVA table using:

model.report(metrics = "anova")
           Df    SS                MS                  F                  p_value
Regression 6     71.9698044618974  11.994967410316233  17.17193891987322  3.505605540441714e-19
Residual   1293  903.188215023851  0.6985214346665514
Total      1299  995.399230769231
Rows: 1-3 | Columns: 6

You can also use the LinearModel.score function to compute the R-squared value:

model.score()
Out[2]: 0.0926372182085377
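The score() method also accepts a metric argument; a sketch, assuming the "mae" metric identifier is supported:

# Compute a single metric instead of the full report (sketch).
model.score(metric = "mae")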
Prediction¶
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
[Output: the first 100 rows of the test vDataFrame, with the new "prediction" column appended to the original 14 columns. Rows: 1-100 | Columns: 15]

Note
Predictions can be made automatically using the test set, in which case you don't need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it's essential that the column names of the vDataFrame match the predictors and response name in the model.
Plots¶
If the model allows, you can also generate relevant plots. For example, regression plots can be found in the Machine Learning - Regression Plots.
model.plot()
Important
The plotting feature is typically suitable for models with fewer than three predictors.
A contour plot is another useful visualization that can be produced for models with two predictors.
model.contour()
Important
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
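A minimal sketch of producing a contour plot with a dedicated two-predictor model; the variable name model2 and the chosen predictors are illustrative:

# Fit a separate model on exactly two predictors, then draw its contour (sketch).
model2 = LinearSVR()
model2.fit(train, ["volatile_acidity", "density"], "quality")
model2.contour()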
Parameter Modification¶
In order to see the parameters:
model.get_params()
Out[3]:
{'tol': 0.0001,
 'C': 1.0,
 'intercept_scaling': 1.0,
 'intercept_mode': 'regularized',
 'acceptable_error_margin': 0.1,
 'max_iter': 100}
And to manually change some of the parameters:
model.set_params({'tol': 0.001})
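Note that changed parameters typically take effect the next time the model is trained; a quick sketch to update several parameters and confirm the change:

# Update parameters and verify them (sketch).
model.set_params({'tol': 0.001, 'max_iter': 500})
model.get_params()   # the returned dictionary reflects the update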
Model Register¶
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting¶
To MemModel
model.to_memmodel()
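A MemModel can then score in-memory data; a sketch, assuming the in-memory predict() interface:

# In-memory scoring with the exported model (sketch).
mm = model.to_memmodel()
mm.predict([[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]])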
Note
MemModel
objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle ascikit-learn
model.The following methods for exporting the model use
MemModel
, and it is recommended to useMemModel
directly.To SQL
You can get the SQL code by:
model.to_sql()
Out[5]: '3.33836171310431 + -0.00627344463542344 * "fixed_acidity" + -1.39646681976227 * "volatile_acidity" + -0.0474623425893748 * "citric_acid" + -0.0199963609649769 * "residual_sugar" + -0.75937027345625 * "chlorides" + 3.18272330313309 * "density"'
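The returned expression can be embedded in an ordinary SQL query; a sketch, where the table name is illustrative:

# Build a scoring query from the deployed SQL expression (sketch).
query = f'SELECT {model.to_sql()} AS prediction FROM public.winequality;'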
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]

model.to_python()(X)
Out[7]: array([6.15009005])
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.
- __init__(name: str = None, overwrite_model: bool = False, tol: float = 0.0001, C: float = 1.0, intercept_scaling: float = 1.0, intercept_mode: Literal['regularized', 'unregularized'] = 'regularized', acceptable_error_margin: float = 0.1, max_iter: int = 100) → None¶
Methods
__init__([name, overwrite_model, tol, C, ...])
contour([nbins, chart])
    Draws the model's contour plot.
deploySQL([X])
    Returns the SQL code needed to deploy the model.
does_model_exists(name[, raise_error, ...])
    Checks whether the model is stored in the Vertica database.
drop()
    Drops the model from the Vertica database.
export_models(name, path[, kind])
    Exports machine learning models.
features_importance([show, chart])
    Computes the model's features importance.
fit(input_relation, X, y[, test_relation, ...])
    Trains the model.
get_attributes([attr_name])
    Returns the model attributes.
get_match_index(x, col_list[, str_check])
    Returns the matching index.
get_params()
    Returns the parameters of the model.
get_plotting_lib([class_name, chart, ...])
    Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
get_vertica_attributes([attr_name])
    Returns the model Vertica attributes.
import_models(path[, schema, kind])
    Imports machine learning models.
plot([max_nb_points, chart])
    Draws the model.
predict(vdf[, X, name, inplace])
    Predicts using the input relation.
register(registered_name[, raise_error])
    Registers the model and adds it to the in-DB Model versioning environment with a status of 'under_review'.
regression_report([metrics])
    Computes a regression report.
report([metrics])
    Computes a regression report.
score([metric])
    Computes the model score.
set_params([parameters])
    Sets the parameters of the model.
summarize()
    Summarizes the model.
to_binary(path)
    Exports the model to the Vertica Binary format.
to_memmodel()
    Converts the model to an InMemory object that can be used for different types of predictions.
to_pmml(path)
    Exports the model to PMML.
to_python([return_proba, ...])
    Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
to_sql([X, return_proba, ...])
    Returns the SQL code needed to deploy the model without using built-in Vertica functions.
to_tf(path)
    Exports the model to the Frozen Graph format (TensorFlow).
Attributes