classification_report

In [ ]:
classification_report(y_true: str = "", 
                      y_score: list = [], 
                      input_relation: Union[str, vDataFrame] = "",
                      labels: list = [],
                      cutoff: Union[float, list] = [],
                      estimator = None,
                      nbins: int = 10000)

Computes a classification report using multiple metrics (AUC, accuracy, PRC AUC, F1, etc.). In the multiclass case, each category is in turn treated as the positive class and scored against the rest.

Parameters

Name             Type               Description

y_true           str                Response column.

y_score          list               List containing the probability and the prediction.

input_relation   str / vDataFrame   Relation used for scoring. This relation can be a view, a table,
                                    or a customized relation, for example "(SELECT ... FROM ...) x",
                                    as long as an alias is given at the end of the relation.

labels           list               List of the response column categories to use.

cutoff           float / list       Probability cutoff above which the tested category is accepted as
                                    the prediction. For multiclass classification, the list gives one
                                    threshold per class (see the sketch after this table). If empty,
                                    the best cutoff is used.

estimator        object             Estimator used to compute the classification report.

nbins            int                [Used to compute ROC AUC, PRC AUC, and the best cutoff] Number of
                                    decision boundaries, set at equally spaced intervals between 0 and
                                    1, inclusive. Greater values of nbins give more precise estimations
                                    of the metrics but can decrease performance. The maximum value is
                                    999,999. If negative, the maximum value is used.
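
For multiclass classification, the sketch below shows how labels and a per-class cutoff list might be combined. It is illustrative only: the relation and column names are hypothetical, and in particular the per-class probability pattern passed as the first element of y_score is an assumption about the multiclass form of that argument, not something confirmed by this page.

In [ ]:
# Illustrative sketch only: relation and column names are hypothetical, and the
# "proba_{}" per-class probability pattern is an assumption about the multiclass
# form of y_score.
from verticapy.learn.metrics import classification_report

classification_report("species",                                    # response column
                      ["proba_{}", "pred_species"],                 # assumed per-class probability pattern + prediction column
                      "iris_scored",                                # hypothetical relation holding the scores
                      labels=["setosa", "versicolor", "virginica"],
                      cutoff=[0.5, 0.4, 0.4])                       # one threshold per class in labels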

Returns

tablesample : An object containing the result. For more information, see utilities.tablesample.
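
Since the result is a tablesample, it can be inspected or converted for further processing. The following is a minimal sketch, assuming the values attribute and the to_pandas method described in utilities.tablesample, and reusing the example_classification relation from the example below.

In [ ]:
# Minimal sketch, assuming the tablesample members described in utilities.tablesample.
from verticapy.learn.metrics import classification_report

report = classification_report("y_true",
                               ["y_score", "y_pred"],
                               "example_classification")
print(report.values)     # underlying dict of columns (assumed attribute)
df = report.to_pandas()  # conversion to a pandas DataFrame (assumed method)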

Example

In [110]:
from verticapy import vDataFrame
vDataFrame("example_classification")
Out[110]:
[abridged vDataFrame display: columns y_score (Float), y_true (Int), y_pred (Int)]
Rows: 1-100 of 1234 | Columns: 3
In [111]:
from verticapy.learn.metrics import classification_report
classification_report("y_true", 
                      ["y_score", "y_pred"], 
                      "example_classification")
Out[111]:
                value
auc             0.6974762740166146
prc_auc         0.6003540469187277
accuracy        0.5996758508914101
log_loss        0.281741002875517
precision       0.460431654676259
recall          0.5688888888888889
f1_score        0.5921281517022536
mcc             0.18016701318842082
informedness    0.18623582766439917
markedness      0.17429596146091964
csi             0.3413333333333333
cutoff          0.999
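
The report can also be computed directly from a fitted VerticaPy model through the estimator parameter. The following is an illustrative sketch only: the model class, the model name, the predictor column, and the way the remaining arguments default when an estimator is supplied are assumptions, not behavior confirmed by this page.

In [ ]:
# Illustrative sketch only: the model, its name, and the predictor column are
# hypothetical; the defaults applied when estimator is given are assumed.
from verticapy.learn.linear_model import LogisticRegression
from verticapy.learn.metrics import classification_report

model = LogisticRegression("logit_example")
model.fit("example_classification", ["y_score"], "y_true")
classification_report(estimator=model)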