Model.plot

In [ ]:
Model.plot(mltype: str = "champion",
           ax=None,
           **style_kwds)

Draws the AutoML plot.

Parameters

Name         | Type                   | Optional | Description
mltype       | str                    | ✓        | The plot type.
             |                        |          |   • champion: champion challenger plot.
             |                        |          |   • step: stepwise plot.
ax           | Matplotlib axes object | ✓        | The axes to plot on.
**style_kwds | any                    | ✓        | Any optional parameter to pass to the Matplotlib functions.

Returns

ax : Matplotlib axes object

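Both ax and **style_kwds follow the usual Matplotlib conventions: pass an existing axes to draw on it, and extra keywords are forwarded to the underlying Matplotlib calls. A minimal, matplotlib-only sketch of that contract (the draw function below is an illustrative stand-in, not part of VerticaPy):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

def draw(ax=None, **style_kwds):
    # Same contract as Model.plot: reuse the axes passed in (or create
    # one), forward style keywords to Matplotlib, return the axes.
    if ax is None:
        _, ax = plt.subplots()
    ax.plot([0, 1, 2], [1, 4, 9], **style_kwds)
    return ax

# Pre-create an axes to control figure size, then pass style keywords.
fig, ax = plt.subplots(figsize=(8, 4))
returned = draw(ax=ax, color="tab:blue", linestyle="--")
```

Because the same axes object comes back, the call can be followed by any further Matplotlib customization (titles, labels, saving).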
Example

In [11]:
from verticapy.learn.delphi import AutoML

model = AutoML("titanic_autoML", stepwise = True)
model.fit("public.titanic", X = ["age", "fare", "boat"], y = "survived")
Starting AutoML

Testing Model - LogisticRegression

Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'none', 'solver': 'bfgs'}; Test_score: 0.05415476764718847; Train_score: 0.0459641328787705; Time: 5.802977005640666;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'l1', 'solver': 'cgd', 'C': 1.0}; Test_score: 0.301029995663981; Train_score: 0.301029995663981; Time: 0.35042834281921387;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'l2', 'solver': 'bfgs', 'C': 1.0}; Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'enet', 'solver': 'cgd', 'C': 1.0, 'l1_ratio': 0.5}; Test_score: 0.301029995663981; Train_score: 0.301029995663981; Time: 0.39499767621358234;

Grid Search Selected Model
LogisticRegression; Parameters: {'solver': 'bfgs', 'penalty': 'l2', 'max_iter': 100, 'C': 1.0, 'tol': 1e-06}; Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;

Testing Model - RandomForestClassifier

Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 1000, 'max_depth': 5, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.0544300619388821; Train_score: 0.04501935010175823; Time: 0.6173566182454427;
Model: RandomForestClassifier; Parameters: {'max_features': 'max', 'max_leaf_nodes': 1000, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.6312681720374336; Train_score: 0.031202981698407836; Time: 0.4157187143961589;
Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 32, 'max_depth': 6, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05236463318341973; Train_score: 0.0398911986275914; Time: 0.389815886815389;
Model: RandomForestClassifier; Parameters: {'max_features': 'max', 'max_leaf_nodes': 128, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.3316650118364283; Train_score: 0.03646912211209103; Time: 0.39275368054707843;
Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 32, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05414261225636453; Train_score: 0.050383735618949134; Time: 0.377108097076416;

Grid Search Selected Model
RandomForestClassifier; Parameters: {'n_estimators': 10, 'max_features': 'auto', 'max_leaf_nodes': 32, 'sample': 0.632, 'max_depth': 6, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05236463318341973; Train_score: 0.0398911986275914; Time: 0.389815886815389;

Testing Model - NaiveBayes

Model: NaiveBayes; Parameters: {'alpha': 0.01}; Test_score: 0.36836053552781106; Train_score: 0.15408547152435914; Time: 0.20411856969197592;
Model: NaiveBayes; Parameters: {'alpha': 1.0}; Test_score: 0.07481405574966224; Train_score: 0.08633565841200003; Time: 0.19905169804890951;
Model: NaiveBayes; Parameters: {'alpha': 10.0}; Test_score: 0.08853409488745793; Train_score: 0.108652291216694; Time: 0.18989038467407227;

Grid Search Selected Model
NaiveBayes; Parameters: {'alpha': 1.0, 'nbtype': 'auto'}; Test_score: 0.07481405574966224; Train_score: 0.08633565841200003; Time: 0.19905169804890951;

Final Model

LogisticRegression; Best_Parameters: {'solver': 'bfgs', 'penalty': 'l2', 'max_iter': 100, 'C': 1.0, 'tol': 1e-06}; Best_Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;


Starting Stepwise
[Model 0] aic: -4627.899765590429; Variables: ['"age"', '"boat_8"', '"boat_5"', '"boat_3"', '"boat_14"', '"boat_10"', '"boat_C"', '"boat_4"', '"boat_15"', '"boat_13"', '"fare"', '"boat_Others"', '"boat_NULL"']
[Model 1] aic: -4630.178327745027; (-) Variable: "age"
[Model 2] aic: -4631.253613624401; (-) Variable: "boat_8"
[Model 3] aic: -4632.353422709695; (-) Variable: "boat_5"
[Model 4] aic: -4633.732589669867; (-) Variable: "boat_3"
[Model 5] aic: -4635.975819643486; (-) Variable: "boat_14"
[Model 6] aic: -4637.332957195352; (-) Variable: "boat_10"
[Model 7] aic: -4639.638795804347; (-) Variable: "boat_C"
[Model 8] aic: -4641.229191331785; (-) Variable: "boat_4"
[Model 9] aic: -4642.7207946214685; (-) Variable: "boat_15"
[Model 10] aic: -4644.363932172027; (-) Variable: "boat_13"
[Model 11] aic: -4646.606529359335; (-) Variable: "fare"
[Model 12] aic: -4647.1794053049225; (-) Variable: "boat_Others"

Selected Model

[Model 12] aic: -4647.1794053049225; Variables: ['"boat_NULL"']
Out[11]:
     | model_type             | avg_score           | avg_train_score      | avg_time            | score_std             | score_train_std
1    | LogisticRegression     | 0.04706029899483853 | 0.0509592610337126   | 5.978296359380086   | 0.007014804712121738  | 0.0037970956390688043
2    | RandomForestClassifier | 0.05236463318341973 | 0.0398911986275914   | 0.389815886815389   | 0.006786907660315696  | 0.001715079706071567
3    | RandomForestClassifier | 0.05414261225636453 | 0.050383735618949134 | 0.377108097076416   | 0.00957425053646613   | 0.016168149572132636
4    | LogisticRegression     | 0.05415476764718847 | 0.0459641328787705   | 5.802977005640666   | 0.008936971320573892  | 0.0026899969002502915
5    | RandomForestClassifier | 0.0544300619388821  | 0.04501935010175823  | 0.6173566182454427  | 0.0027898598680700755 | 0.008288064326826454
6    | NaiveBayes             | 0.07481405574966224 | 0.08633565841200003  | 0.19905169804890951 | 0.021591947485564082  | 0.0016479358747771464
7    | NaiveBayes             | 0.08853409488745793 | 0.108652291216694    | 0.18989038467407227 | 0.031750796327060196  | 0.004316562703051541
8    | LogisticRegression     | 0.301029995663981   | 0.301029995663981    | 0.35042834281921387 | 0.0                   | 0.0
9    | LogisticRegression     | 0.301029995663981   | 0.301029995663981    | 0.39499767621358234 | 0.0                   | 0.0
10   | RandomForestClassifier | 0.3316650118364283  | 0.03646912211209103  | 0.39275368054707843 | 0.12127123430165919   | 0.002074126815648942
11   | NaiveBayes             | 0.36836053552781106 | 0.15408547152435914  | 0.20411856969197592 | 0.5058069311165323    | 0.12168159397297726
12   | RandomForestClassifier | 0.6312681720374336  | 0.031202981698407836 | 0.4157187143961589  | 0.48971716050742947   | 0.0025874656795455003
Rows: 1-12 | Columns: 8
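The stepwise log above drops one variable per step, keeping a drop only when it lowers the AIC. A minimal sketch of that backward-elimination loop, using a synthetic toy_aic scorer (a stand-in for illustration; VerticaPy computes the real AIC from models fitted in-database):

```python
def backward_stepwise(variables, aic):
    # Greedy backward elimination: at each pass, drop the single
    # variable whose removal lowers AIC the most; stop when no
    # single-variable drop improves the score.
    current = list(variables)
    current_aic = aic(current)
    while len(current) > 1:
        candidates = [
            ([x for x in current if x != v], v) for v in current
        ]
        scored = [(aic(c), c, v) for c, v in candidates]
        best_score, best_set, dropped = min(scored, key=lambda t: t[0])
        if best_score >= current_aic:
            break  # every remaining variable pays for itself
        current, current_aic = best_set, best_score
    return current, current_aic

# Synthetic scorer: only "boat_NULL" carries signal, so every other
# variable just pays the 2 * n_variables complexity penalty.
def toy_aic(vars_):
    penalty = 2 * len(vars_)
    fit = -100.0 if "boat_NULL" in vars_ else 0.0
    return penalty + fit

selected, score = backward_stepwise(["age", "fare", "boat_NULL"], toy_aic)
```

With this scorer the loop discards "age" and "fare" one at a time and keeps only "boat_NULL", mirroring the shape of the selected model in the log.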
In [12]:
model.plot("stepwise")
Out[12]:
<AxesSubplot:xlabel='n_features', ylabel='aic'>
In [13]:
model.plot()
Out[13]:
<AxesSubplot:xlabel='time', ylabel='score'>
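Since plot returns a Matplotlib Axes, the result can be retitled, styled, and saved with standard Matplotlib calls. A sketch using placeholder data in place of the axes returned by model.plot():

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

# Placeholder axes standing in for the one returned by model.plot().
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [0.30, 0.07, 0.05])
ax.set_xlabel("time")
ax.set_ylabel("score")

# Any returned Axes can be customized and written to disk the same way.
ax.set_title("AutoML champion challenger")
path = os.path.join(tempfile.gettempdir(), "automl_plot.png")
ax.get_figure().savefig(path, dpi=150, bbox_inches="tight")
```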