Model.plot¶
In [ ]:
Model.plot(mltype: str = "champion",
           ax = None,
           **style_kwds)
Draws the AutoML plot.
Parameters¶
Name | Type | Optional | Description |
---|---|---|---|
mltype | str | ✓ | The type of plot to draw: "champion" (default) or "stepwise". |
ax | Matplotlib axes object | ✓ | The axes to plot on. |
**style_kwds | any | ✓ | Any optional parameter to pass to the Matplotlib functions. |
Returns¶
ax : Matplotlib axes object
Example¶
In [11]:
from verticapy.learn.delphi import AutoML
model = AutoML("titanic_autoML", stepwise = True)
model.fit("public.titanic", X = ["age", "fare", "boat"], y = "survived")
Starting AutoML
Testing Model - LogisticRegression
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'none', 'solver': 'bfgs'}; Test_score: 0.05415476764718847; Train_score: 0.0459641328787705; Time: 5.802977005640666;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'l1', 'solver': 'cgd', 'C': 1.0}; Test_score: 0.301029995663981; Train_score: 0.301029995663981; Time: 0.35042834281921387;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'l2', 'solver': 'bfgs', 'C': 1.0}; Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;
Model: LogisticRegression; Parameters: {'tol': 1e-06, 'max_iter': 100, 'penalty': 'enet', 'solver': 'cgd', 'C': 1.0, 'l1_ratio': 0.5}; Test_score: 0.301029995663981; Train_score: 0.301029995663981; Time: 0.39499767621358234;
Grid Search Selected Model
LogisticRegression; Parameters: {'solver': 'bfgs', 'penalty': 'l2', 'max_iter': 100, 'C': 1.0, 'tol': 1e-06}; Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;
Testing Model - RandomForestClassifier
Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 1000, 'max_depth': 5, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.0544300619388821; Train_score: 0.04501935010175823; Time: 0.6173566182454427;
Model: RandomForestClassifier; Parameters: {'max_features': 'max', 'max_leaf_nodes': 1000, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.6312681720374336; Train_score: 0.031202981698407836; Time: 0.4157187143961589;
Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 32, 'max_depth': 6, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05236463318341973; Train_score: 0.0398911986275914; Time: 0.389815886815389;
Model: RandomForestClassifier; Parameters: {'max_features': 'max', 'max_leaf_nodes': 128, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.3316650118364283; Train_score: 0.03646912211209103; Time: 0.39275368054707843;
Model: RandomForestClassifier; Parameters: {'max_features': 'auto', 'max_leaf_nodes': 32, 'max_depth': 4, 'min_samples_leaf': 1, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05414261225636453; Train_score: 0.050383735618949134; Time: 0.377108097076416;
Grid Search Selected Model
RandomForestClassifier; Parameters: {'n_estimators': 10, 'max_features': 'auto', 'max_leaf_nodes': 32, 'sample': 0.632, 'max_depth': 6, 'min_samples_leaf': 2, 'min_info_gain': 0.0, 'nbins': 32}; Test_score: 0.05236463318341973; Train_score: 0.0398911986275914; Time: 0.389815886815389;
Testing Model - NaiveBayes
Model: NaiveBayes; Parameters: {'alpha': 0.01}; Test_score: 0.36836053552781106; Train_score: 0.15408547152435914; Time: 0.20411856969197592;
Model: NaiveBayes; Parameters: {'alpha': 1.0}; Test_score: 0.07481405574966224; Train_score: 0.08633565841200003; Time: 0.19905169804890951;
Model: NaiveBayes; Parameters: {'alpha': 10.0}; Test_score: 0.08853409488745793; Train_score: 0.108652291216694; Time: 0.18989038467407227;
Grid Search Selected Model
NaiveBayes; Parameters: {'alpha': 1.0, 'nbtype': 'auto'}; Test_score: 0.07481405574966224; Train_score: 0.08633565841200003; Time: 0.19905169804890951;
Final Model
LogisticRegression; Best_Parameters: {'solver': 'bfgs', 'penalty': 'l2', 'max_iter': 100, 'C': 1.0, 'tol': 1e-06}; Best_Test_score: 0.04706029899483853; Train_score: 0.0509592610337126; Time: 5.978296359380086;
Starting Stepwise
[Model 0] aic: -4627.899765590429; Variables: ['"age"', '"boat_8"', '"boat_5"', '"boat_3"', '"boat_14"', '"boat_10"', '"boat_C"', '"boat_4"', '"boat_15"', '"boat_13"', '"fare"', '"boat_Others"', '"boat_NULL"']
[Model 1] aic: -4630.178327745027; (-) Variable: "age"
[Model 2] aic: -4631.253613624401; (-) Variable: "boat_8"
[Model 3] aic: -4632.353422709695; (-) Variable: "boat_5"
[Model 4] aic: -4633.732589669867; (-) Variable: "boat_3"
[Model 5] aic: -4635.975819643486; (-) Variable: "boat_14"
[Model 6] aic: -4637.332957195352; (-) Variable: "boat_10"
[Model 7] aic: -4639.638795804347; (-) Variable: "boat_C"
[Model 8] aic: -4641.229191331785; (-) Variable: "boat_4"
[Model 9] aic: -4642.7207946214685; (-) Variable: "boat_15"
[Model 10] aic: -4644.363932172027; (-) Variable: "boat_13"
[Model 11] aic: -4646.606529359335; (-) Variable: "fare"
[Model 12] aic: -4647.1794053049225; (-) Variable: "boat_Others"
Selected Model
[Model 12] aic: -4647.1794053049225; Variables: ['"boat_NULL"']
Out[11]:
 | model_type | avg_score | avg_train_score | avg_time | score_std | score_train_std |
---|---|---|---|---|---|---|
1 | LogisticRegression | 0.04706029899483853 | 0.0509592610337126 | 5.978296359380086 | 0.007014804712121738 | 0.0037970956390688043 |
2 | RandomForestClassifier | 0.05236463318341973 | 0.0398911986275914 | 0.389815886815389 | 0.006786907660315696 | 0.001715079706071567 |
3 | RandomForestClassifier | 0.05414261225636453 | 0.050383735618949134 | 0.377108097076416 | 0.00957425053646613 | 0.016168149572132636 |
4 | LogisticRegression | 0.05415476764718847 | 0.0459641328787705 | 5.802977005640666 | 0.008936971320573892 | 0.0026899969002502915 |
5 | RandomForestClassifier | 0.0544300619388821 | 0.04501935010175823 | 0.6173566182454427 | 0.0027898598680700755 | 0.008288064326826454 |
6 | NaiveBayes | 0.07481405574966224 | 0.08633565841200003 | 0.19905169804890951 | 0.021591947485564082 | 0.0016479358747771464 |
7 | NaiveBayes | 0.08853409488745793 | 0.108652291216694 | 0.18989038467407227 | 0.031750796327060196 | 0.004316562703051541 |
8 | LogisticRegression | 0.301029995663981 | 0.301029995663981 | 0.35042834281921387 | 0.0 | 0.0 |
9 | LogisticRegression | 0.301029995663981 | 0.301029995663981 | 0.39499767621358234 | 0.0 | 0.0 |
10 | RandomForestClassifier | 0.3316650118364283 | 0.03646912211209103 | 0.39275368054707843 | 0.12127123430165919 | 0.002074126815648942 |
11 | NaiveBayes | 0.36836053552781106 | 0.15408547152435914 | 0.20411856969197592 | 0.5058069311165323 | 0.12168159397297726 |
12 | RandomForestClassifier | 0.6312681720374336 | 0.031202981698407836 | 0.4157187143961589 | 0.48971716050742947 | 0.0025874656795455003 |
Rows: 1-12 | Columns: 8
In [12]:
model.plot("stepwise")
Out[12]:
<AxesSubplot:xlabel='n_features', ylabel='aic'>
In [13]:
model.plot()
Out[13]:
<AxesSubplot:xlabel='time', ylabel='score'>
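Because `Model.plot` accepts an existing Matplotlib axes through the `ax` parameter and forwards `**style_kwds` to the underlying Matplotlib functions, you can embed the plot in your own figure and style it yourself. A minimal sketch of that pattern follows; the `model.plot` call is shown commented out since it requires the fitted AutoML model and a live database connection, and the `color` keyword is an assumption about which style keywords the underlying Matplotlib call accepts:

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Build a custom figure and axes to pass through the `ax` parameter.
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_title("AutoML stepwise plot")

# Hypothetical call (requires the fitted model from the example above);
# extra keyword arguments are forwarded to Matplotlib:
# model.plot("stepwise", ax=ax, color="tab:blue")
```

Since the method returns the Matplotlib axes it drew on, you can also keep customizing the result (labels, limits, annotations) after the call.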
(c) Copyright [2020-2023] OpenText