
verticapy.machine_learning.vertica.ensemble.XGBRegressor¶
- class verticapy.machine_learning.vertica.ensemble.XGBRegressor(name: str = None, overwrite_model: bool = False, max_ntree: int = 10, max_depth: int = 5, nbins: int = 32, split_proposal_method: Literal['local', 'global'] = 'global', tol: float = 0.001, learning_rate: float = 0.1, min_split_loss: float = 0.0, weight_reg: float = 0.0, sample: float = 1.0, col_sample_by_tree: float = 1.0, col_sample_by_node: float = 1.0)¶
Creates an XGBRegressor object using the Vertica XGB_REGRESSOR algorithm.

Parameters¶
- name: str, optional
Name of the model. The model is stored in the DB.
- overwrite_model: bool, optional
If set to True, training a model with the same name as an existing model overwrites the existing model.
- max_ntree: int, optional
Maximum number of trees that can be created.
- max_depth: int, optional
Maximum depth of each tree, an integer between 1 and 20, inclusive.
- nbins: int, optional
Number of bins used to find splits in each column, where more splits lead to a longer runtime but more fine-grained, possibly better splits. Must be an integer between 2 and 1000, inclusive.
- split_proposal_method: str, optional
Approximate splitting strategy, either global or local (local is not yet supported).
- tol: float, optional
Approximation error of quantile summary structures used in the approximate split finding method.
- learning_rate: float, optional
Weight applied to each tree’s prediction. This reduces each tree’s impact, allowing for later trees to contribute and keeping earlier trees from dominating.
- min_split_loss: float, optional
Each split must improve the model’s objective function value by at least this much in order to avoid pruning. A value of 0 is the same as turning off this parameter (trees are still pruned based on positive / negative objective function values).
- weight_reg: float, optional
Regularization term that is applied to the weights of the leaves in the regression tree. A higher value leads to more sparse/smooth weights, which often helps to prevent overfitting.
- sample: float, optional
Fraction of rows used per iteration in training.
- col_sample_by_tree: float, optional
A float in the range (0, 1] that specifies the fraction of columns (features), chosen at random, to use when building each tree.
- col_sample_by_node: float, optional
A float in the range (0, 1] that specifies the fraction of columns (features), chosen at random, to use when evaluating each split.
Attributes¶
Many attributes are created during the fitting phase.
- trees_: list of BinaryTreeRegressor
Tree models are instances of BinaryTreeRegressor, each possessing various attributes. For more detailed information, refer to the documentation for BinaryTreeRegressor.
- features_importance_: numpy.array
The importance of features. It is calculated using the average gain of each tree. To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them. It is necessary to use the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.
- features_importance_trees_: dict of numpy.array
Each element of the array represents the feature importance of tree i. The importance of features is calculated using the average gain of each tree. It is necessary to use the features_importance() method to compute it initially; the computed values are then reused for subsequent calls.
- mean_: float
The mean of the response column.
- eta_: float
The learning rate, a crucial hyperparameter in machine learning algorithms. It determines the step size at each iteration during model training. A well-chosen learning rate is essential for achieving optimal convergence and preventing overshooting or slow convergence during training. Adjusting the learning rate is often necessary to strike a balance between model accuracy and computational efficiency.
- n_estimators_: int
The number of model estimators.
Note
All attributes can be accessed using the get_attributes() method.

Note
Several other attributes can be accessed by using the get_vertica_attributes() method.

Examples¶
The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning section or the Examples section on the website.
Important
Many tree-based models inherit from the XGB base class, and it’s recommended to use it directly for access to a wider range of options.

Load data for machine learning¶
We import verticapy:

import verticapy as vp
Hint
By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the winequality dataset.

import verticapy.datasets as vpd
data = vpd.load_winequality()
(Output: the winequality vDataFrame with columns fixed_acidity, volatile_acidity, citric_acid, residual_sugar, chlorides, free_sulfur_dioxide, total_sulfur_dioxide, density, pH, sulphates, alcohol, quality, good, and color. Rows: 1-100 | Columns: 14)

Note
VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets section, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.
You can easily divide your dataset into training and testing subsets using the vDataFrame.train_test_split() method. This is a crucial step when preparing your data for machine learning, as it allows you to evaluate the performance of your models accurately.

data = vpd.load_winequality()
train, test = data.train_test_split(test_size = 0.2)
Warning
In this case, VerticaPy utilizes seeded randomization to guarantee the reproducibility of your data split. However, please be aware that this approach may lead to reduced performance. For a more efficient data split, you can use the vDataFrame.to_db() method to save your results into tables or temporary tables. This will help enhance the overall performance of the process.
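A minimal, hedged sketch of that approach; the target table names are hypothetical, and the exact to_db() parameters (relation_type, inplace) are assumptions that may need adjusting for your VerticaPy version:

# Hypothetical table names; relation_type = "table" materializes the split,
# while "temporary" would create a temporary table instead.
train.to_db(
    name = '"public"."winequality_train"',
    relation_type = "table",
    inplace = True,   # re-point the vDataFrame to the materialized table
)
test.to_db(
    name = '"public"."winequality_test"',
    relation_type = "table",
    inplace = True,
)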

Model Initialization¶
First we import the XGBRegressor model:

from verticapy.machine_learning.vertica import XGBRegressor
Then we can create the model:
model = XGBRegressor(
    max_ntree = 3,
    max_depth = 3,
    nbins = 6,
    split_proposal_method = 'global',
    tol = 0.001,
    learning_rate = 0.1,
    min_split_loss = 0,
    weight_reg = 0,
    sample = 0.7,
    col_sample_by_tree = 1,
    col_sample_by_node = 1,
)
Hint
In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.

Important
The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.
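As a hedged illustration, the model can be created with an explicit name; the schema and model name below are hypothetical:

# Naming the model makes it easy to retrieve and manage later;
# overwrite_model replaces any existing model with the same name.
model = XGBRegressor(
    name = "public.xgb_winequality_v1",
    overwrite_model = True,
    max_ntree = 3,
    max_depth = 3,
)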
Model Training¶
We can now fit the model:
model.fit(
    train,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "quality",
    test,
)

===========
call_string
===========
xgb_regressor('"public"."_verticapy_tmp_xgbregressor_v_demo_f08eefa655a311ef880f0242ac120002_"', '"public"."_verticapy_tmp_view_v_demo_f09cffd855a311ef880f0242ac120002_"', '"quality"', '"fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density"' USING PARAMETERS exclude_columns='', max_ntree=3, max_depth=3, learning_rate=0.1, min_split_loss=0, weight_reg=0, nbins=6, objective=squarederror, sampling_size=0.7, col_sample_by_tree=1, col_sample_by_node=1)

=======
details
=======
predictor        | type
-----------------+-----------------
fixed_acidity    | float or numeric
volatile_acidity | float or numeric
citric_acid      | float or numeric
residual_sugar   | float or numeric
chlorides        | float or numeric
density          | float or numeric

===============
Additional Info
===============
Name               | Value
-------------------+--------
tree_count         | 3
rejected_row_count | 0
accepted_row_count | 5198
initial_prediction | 5.81935
Important
To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.
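For illustration only, here is a hedged sketch of fitting from a relation name instead of a vDataFrame; the table name is hypothetical (for example, a table saved with to_db()), and the optional test relation is simply omitted:

# Hypothetical table name; without a test relation, only training metrics
# are available afterwards.
model.fit(
    "public.winequality_train",
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "quality",
)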
Features Importance¶

We can conveniently get the features importance:
result = model.features_importance()
Note
In models such as XGBoost, feature importance is calculated using the average gain of each tree. To determine the final score, VerticaPy sums the scores of each tree, normalizes them, and applies an activation function to scale them.

Metrics¶
We can get the entire report using:
model.report()
metric                  | value
------------------------+--------------------
explained_variance      | 0.0431108945382566
max_error               | 3.19566557837489
median_absolute_error   | 0.705816035985473
mean_absolute_error     | 0.65468510780303
mean_squared_error      | 0.71776362411141
root_mean_squared_error | 0.847209315406417
r2                      | 0.0430190762569392
r2_adj                  | 0.0385748924005473
aic                     | -416.605192426716
bic                     | -380.582406911107
Rows: 1-10 | Columns: 2

Important
Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance, e.g. model.report(metrics = ["mse", "r2"]).

You can utilize the score() function to calculate various regression metrics, with R-squared being the default.

model.score()
Out[4]: 0.0430190762569393
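A hedged sketch of requesting a specific metric; the exact metric keywords accepted by score() may vary by VerticaPy version:

# For example, mean absolute error instead of the default R-squared.
model.score(metric = "mae")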
Prediction¶
Prediction is straightforward:
model.predict(
    test,
    [
        "fixed_acidity",
        "volatile_acidity",
        "citric_acid",
        "residual_sugar",
        "chlorides",
        "density",
    ],
    "prediction",
)
(Output: the test vDataFrame with the original columns plus the new "prediction" column. Rows: 1-100 | Columns: 15)

Note
Predictions can be made automatically using the test set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.
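A minimal, hedged sketch of that shorter call, assuming the test vDataFrame's column names match the training predictors:

# Only the vDataFrame and the output column name are passed; the model's
# stored predictor list is used implicitly.
model.predict(test, name = "prediction")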
Plots¶

Tree models can be visualized by drawing their tree plots. For more examples, check out Machine Learning - Tree Plots.
model.plot_tree()
Note
The above example may not render properly in the documentation because of the large size of the tree, but it should render nicely in a Jupyter environment.
In order to plot the graph using graphviz separately, you can extract the graphviz DOT code as follows:
model.to_graphviz()
Out[5]: 'digraph Tree {\ngraph [bgcolor="#FFFFFFDD"];\n0 [label="\\"density\\"", shape="box", style="filled", fillcolor="#FFFFFFDD", fontcolor="#000000", color="#000000"]\n0 -> 1 [label="<= 0.995755", color="#000000", fontcolor="#000000"]\n0 -> 2 [label="> 0.995755", color="#000000", fontcolor="#000000"]\n1 [label="\\"citric_acid\\"", shape="box", style="filled", fillcolor="#FFFFFFDD", fontcolor="#000000", color="#000000"]\n1 -> 3 [label="<= 0.276667", color="#000000", fontcolor="#000000"]\n1 -> 4 [label="> 0.276667", color="#000000", fontcolor="#000000"]\n2 [label="\\"volatile_acidity\\"", shape="box", style="filled", fillcolor="#FFFFFFDD", fontcolor="#000000", color="#000000"]\n2 -> 5 [label="<= 0.33", color="#000000", fontcolor="#000000"]\n2 -> 6 [label="> 0.33", color="#000000", fontcolor="#000000"]\n3 [label="-0.052193", fillcolor="#FFFFFFDD", fontcolor="#000000", shape="none", color="#000000"]\n4 [label="0.266675", fillcolor="#FFFFFFDD", fontcolor="#000000", shape="none", color="#000000"]\n5 [label="-0.058517", fillcolor="#FFFFFFDD", fontcolor="#000000", shape="none", color="#000000"]\n6 [label="-0.426867", fillcolor="#FFFFFFDD", fontcolor="#000000", shape="none", color="#000000"]\n}'
This string can then be copied into a DOT file, which can be parsed by graphviz.
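A hedged sketch of rendering that DOT string with the graphviz Python package (the output file name is arbitrary):

# Requires the graphviz Python package and the Graphviz binaries.
import graphviz

dot_source = model.to_graphviz()
graph = graphviz.Source(dot_source)
graph.render("xgb_tree", format = "png", cleanup = True)  # writes xgb_tree.png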
A contour plot is another useful plot that can be produced for models with two predictors.
model.contour()
Important
Machine learning models with two predictors can usually benefit from their own contour plot. This visual representation aids in exploring predictions and gaining a deeper understanding of how these models perform in different scenarios. Please refer to Contour Plot for more examples.
Model Register¶
In order to register the model for tracking and versioning:
model.register("model_v1")
Please refer to Model Tracking and Versioning for more details on model tracking and versioning.
Model Exporting¶
To MemModel
model.to_memmodel()
Note
MemModel objects serve as in-memory representations of machine learning models. They can be used for both in-database and in-memory prediction tasks. These objects can be pickled in the same way that you would pickle a scikit-learn model. The preceding methods for exporting the model use MemModel, and it is recommended to use MemModel directly.
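For example, a hedged pickling sketch (the file name is arbitrary):

# Serialize the in-memory model just like a scikit-learn estimator.
import pickle

mem_model = model.to_memmodel()
with open("xgb_winequality_memmodel.pkl", "wb") as f:
    pickle.dump(mem_model, f)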
To SQL

You can get the SQL query equivalent of the XGB model with:
model.to_sql()
Out[6]: '((CASE WHEN "density" < 0.995755 THEN (CASE WHEN "citric_acid" < 0.276667 THEN -0.052193 ELSE 0.266675 END) ELSE (CASE WHEN "volatile_acidity" < 0.33 THEN -0.058517 ELSE -0.426867 END) END) + (CASE WHEN "density" < 0.995755 THEN (CASE WHEN "citric_acid" < 0.276667 THEN -0.017569 ELSE 0.256275 END) ELSE (CASE WHEN "volatile_acidity" < 0.33 THEN -0.053442 ELSE -0.375113 END) END) + (CASE WHEN "density" < 0.995755 THEN (CASE WHEN "citric_acid" < 0.276667 THEN -0.059026 ELSE 0.24148 END) ELSE (CASE WHEN "volatile_acidity" < 0.33 THEN -0.038233 ELSE -0.333397 END) END)) * 0.1 + 5.81935359753751'
Note
This SQL query can be directly used in any database.
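As a hedged illustration, the expression returned by to_sql() can be embedded in a plain SQL query; the table name below is assumed from the winequality dataset loaded earlier:

# Build a query string around the generated CASE expression; it can be run
# with any SQL client connected to Vertica (or through VerticaPy).
prediction_expr = model.to_sql()
query = f"SELECT quality, {prediction_expr} AS prediction FROM public.winequality"
print(query)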
Deploy SQL
To get the SQL query that uses Vertica functions, use the following:
model.deploySQL()
Out[7]: 'PREDICT_XGB_REGRESSOR("fixed_acidity", "volatile_acidity", "citric_acid", "residual_sugar", "chlorides", "density" USING PARAMETERS model_name = \'"public"."_verticapy_tmp_xgbregressor_v_demo_f08eefa655a311ef880f0242ac120002_"\', match_by_pos = \'true\')'
To Python
To obtain the prediction function in Python syntax, use the following code:
X = [[4.2, 0.17, 0.36, 1.8, 0.029, 0.9899]]
model.to_python()(X)
Out[9]: array([5.8957966])
Hint
The to_python() method is used to retrieve predictions, probabilities, or cluster distances. For specific details on how to use this method for different model types, refer to the relevant documentation for each model.

- __init__(name: str = None, overwrite_model: bool = False, max_ntree: int = 10, max_depth: int = 5, nbins: int = 32, split_proposal_method: Literal['local', 'global'] = 'global', tol: float = 0.001, learning_rate: float = 0.1, min_split_loss: float = 0.0, weight_reg: float = 0.0, sample: float = 1.0, col_sample_by_tree: float = 1.0, col_sample_by_node: float = 1.0) → None¶
Must be overridden in the child class
Methods
- __init__([name, overwrite_model, max_ntree, ...]): Must be overridden in the child class.
- contour([nbins, chart]): Draws the model's contour plot.
- deploySQL([X]): Returns the SQL code needed to deploy the model.
- does_model_exists(name[, raise_error, ...]): Checks whether the model is stored in the Vertica database.
- drop(): Drops the model from the Vertica database.
- export_models(name, path[, kind]): Exports machine learning models.
- features_importance([tree_id, show, chart]): Computes the model's features importance.
- fit(input_relation, X, y[, test_relation, ...]): Trains the model.
- get_attributes([attr_name]): Returns the model attributes.
- get_match_index(x, col_list[, str_check]): Returns the matching index.
- get_params(): Returns the parameters of the model.
- get_plotting_lib([class_name, chart, ...]): Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.
- get_score([tree_id]): Returns the feature importance metrics for the input tree.
- get_tree([tree_id]): Returns a table with all the input tree information.
- get_vertica_attributes([attr_name]): Returns the model Vertica attributes.
- import_models(path[, schema, kind]): Imports machine learning models.
- plot([max_nb_points, chart]): Draws the model.
- plot_tree([tree_id, pic_path]): Draws the input tree.
- predict(vdf[, X, name, inplace]): Predicts using the input relation.
- register(registered_name[, raise_error]): Registers the model and adds it to the in-DB model versioning environment with a status of 'under_review'.
- regression_report([metrics]): Computes a regression report.
- report([metrics]): Computes a regression report.
- score([metric]): Computes the model score.
- set_params([parameters]): Sets the parameters of the model.
- summarize(): Summarizes the model.
- to_binary(path): Exports the model to the Vertica Binary format.
- to_graphviz([tree_id, classes_color, ...]): Returns the code for a Graphviz tree.
- to_json([path]): Creates a Python XGBoost JSON file.
- to_memmodel(): Converts the model to an InMemory object that can be used for different types of predictions.
- to_pmml(path): Exports the model to PMML.
- to_python([return_proba, ...]): Returns the Python function needed for in-memory scoring without using built-in Vertica functions.
- to_sql([X, return_proba, ...]): Returns the SQL code needed to deploy the model without using built-in Vertica functions.
- to_tf(path): Exports the model to the Frozen Graph format (TensorFlow).
Attributes