
Example: XGBoost.to_json¶
Starting from VerticaPy 0.7.1, you can export any native Vertica XGBoost model to the Python XGBoost JSON file format. This page demonstrates the exporting process and the nuances involved.
Connect to Vertica¶
For a demonstration of how to create a new connection to Vertica, see connection. In this example, we use an existing connection named 'VerticaDSN'.
import verticapy as vp
vp.connect("VerticaDSN")
Create a Schema (Optional)¶
Schemas allow you to organize database objects in a collection, similar to a namespace. If you create a database object without specifying a schema, Vertica uses the 'public' schema. For example, to specify the 'example_table' in 'example_schema', you would use: 'example_schema.example_table'.
To keep things organized, this example creates the 'xgb_to_json' schema and drops it (and its associated tables, views, etc.) at the end:
vp.drop("xgb_to_json", method = "schema")
vp.create_schema("xgb_to_json")
Load Data¶
VerticaPy lets you load many well-known datasets like Iris, Titanic, Amazon, etc.
This example loads the Titanic dataset with the load_titanic function into a table called 'titanic' in the 'xgb_to_json' schema:
from verticapy.datasets import load_titanic
vdf = load_titanic(name = "titanic",
                   schema = "xgb_to_json")
You can also load your own data. To ingest data from a CSV file, use the read_csv() function.
The read_csv() function parses the dataset and uses flex tables to identify data types.
If read_csv() runs for too long, you can use the 'parse_nrows' parameter to limit the number of lines read_csv() parses before guessing the data types, at the possible expense of data type identification accuracy.
For example, to load the 'iris.csv' file with the read_csv() function:
vdf = vp.read_csv("data/iris.csv",
                  table_name = "iris",
                  schema = "xgb_to_json")
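If type inference is slow, here is a minimal sketch of the same load using the 'parse_nrows' parameter described above (the row count is an illustrative value, not a recommendation):
# Limit type inference to the first 500 rows (illustrative value);
# this speeds up parsing at the possible cost of accuracy:
vdf = vp.read_csv("data/iris.csv",
                  table_name = "iris",
                  schema = "xgb_to_json",
                  parse_nrows = 500)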
Create a vDataFrame¶
vDataFrames allow you to prepare and explore your data without modifying its representation in your Vertica database. Any changes you make are applied to the vDataFrame as modifications to the SQL query for the table underneath.
To create a vDataFrame out of a table in your Vertica database, specify its schema and table name with the standard SQL syntax. For example, to create a vDataFrame out of the 'titanic' table in the 'xgb_to_json' schema:
vdf = vp.vDataFrame("xgb_to_json.titanic")
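To take a quick look at the data behind the vDataFrame, you can display its first few rows with head():
# Display the first 5 rows of the vDataFrame:
vdf.head(5)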
Create an XGB model¶
Create an XGBoostClassifier model.
Unlike a vDataFrame object, which simply queries the table it was created with, the VerticaPy XGBoostClassifier object creates and then references a model in Vertica, so it must be stored in a schema like any other database object.
This example creates the 'my_model' XGBoostClassifier model in the 'xgb_to_json' schema:
from verticapy.learn.ensemble import XGBoostClassifier
model = XGBoostClassifier("xgb_to_json.my_model",
                          max_ntree = 4,
                          max_depth = 3)
Prepare the Data¶
While Vertica XGBoost supports columns of type VARCHAR, Python XGBoost does not, so you must encode the categorical columns you want to use. You must also drop or impute missing values.
This example keeps only the 'age', 'fare', 'sex', 'embarked', and 'survived' columns in the vDataFrame, drops rows with missing values, and then encodes the 'sex' and 'embarked' columns. These changes are applied to the vDataFrame's query and do not affect the underlying 'xgb_to_json.titanic' table stored in Vertica:
vdf = vdf[["age", "fare", "sex", "embarked", "survived"]]
vdf.dropna()
vdf["sex"].label_encode()
vdf["embarked"].label_encode()
Train the Model¶
Define the predictor and the response columns:
relation = "xgb_to_json.titanic"
X = ["age", "fare", "sex", "embarked"]
y = "survived"
Train the model with fit():
model.fit(relation, X, y)
Evaluate the Model¶
Evaluate the model with .report():
model.report()
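Beyond the full report, you can inspect individual diagnostics; for example, the confusion matrix (computed here with the default cutoff):
# Confusion matrix of the classifier on the training relation:
model.confusion_matrix()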
Export the Model as JSON¶
Use to_json() to export the model to a JSON file. If you omit a filename, to_json() returns the model as a JSON string:
model.to_json()
To export and save the model as a JSON file, specify a filename:
model.to_json("exported_xgb_model.json")
Unlike Python XGBoost, Vertica does not store certain training statistics, such as 'sum_hessian' and 'loss_changes'; in the model exported by to_json(), these fields are replaced by lists filled with zeros.
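You can verify this by inspecting the exported file with Python's json module. The sketch below assumes the standard Python XGBoost JSON layout, where per-tree statistics live under learner.gradient_booster.model.trees:
import json

with open("exported_xgb_model.json") as f:
    exported = json.load(f)

# In a model exported from Vertica, fields such as 'sum_hessian'
# and 'loss_changes' are lists filled with zeros:
tree = exported["learner"]["gradient_booster"]["model"]["trees"][0]
print(tree["sum_hessian"])
print(tree["loss_changes"])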
Make Predictions with an Exported Model¶
This exported model can be used with the Python XGBoost API right away, and exported models make identical predictions in Vertica and Python:
import pytest
import xgboost as xgb

model_python = xgb.XGBClassifier()
model_python.load_model("exported_xgb_model.json")

# For illustration, reuse the prepared vDataFrame's rows as the test set:
X_test = vdf[X].to_numpy()

y_test_vertica = model.to_python(return_proba = True)(X_test)
y_test_python = model_python.predict_proba(X_test)

# The mean squared difference should be (near) zero:
result = (y_test_vertica - y_test_python) ** 2
result = result.sum() / len(result)
assert result == pytest.approx(0.0, abs = 1.0E-14)
For multiclass classifiers, the probabilities returned by the VerticaPy model and the exported model may differ slightly because of normalization: Vertica uses multinomial logistic regression, while Python XGBoost uses softmax. Again, this difference does not affect the model's final predictions. As noted earlier, categorical predictors must be encoded before training.
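As an illustration, reusing the probability matrices from the block above, you can confirm that both models choose the same class (argmax selects the highest-probability class per row):
# Even when normalized probabilities differ slightly, the
# predicted classes should match:
assert (y_test_vertica.argmax(axis = 1) == y_test_python.argmax(axis = 1)).all()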
Clean the Example Environment¶
Delete the 'exported_xgb_model.json' file, then drop the 'xgb_to_json' schema, using CASCADE to drop any database objects stored inside (the 'titanic' table, the XGBoostClassifier model, etc.):
import os
os.remove("exported_xgb_model.json")
vp.drop("xgb_to_json", method = "schema")
Conclusion¶
VerticaPy lets you create, train, evaluate, and export Vertica machine learning models. There are some notable nuances when importing a Vertica XGBoost model into Python XGBoost, but these do not affect the accuracy of the model or its predictions:
- Some information computed during the training phase may not be stored (e.g. 'sum_hessian' and 'loss_changes').
- The exact probabilities of multiclass classifiers in a Vertica model may differ from those in Python, but both will make the same predictions.
- Python XGBoost does not support categorical predictors, so you must encode them before training the model in VerticaPy.