Loading...

verticapy.machine_learning.vertica.tsa.ARMA#

class verticapy.machine_learning.vertica.tsa.ARMA(name: str = None, overwrite_model: bool = False, order: tuple[int] | list[int] = (0, 0), tol: float = 1e-06, max_iter: int = 100, init: Literal['zero', 'hr'] = 'zero', missing: Literal['drop', 'raise', 'zero', 'linear_interpolation'] = 'linear_interpolation')#

Creates a inDB ARMA model.

New in version 12.0.3.

Note

The AR model is much faster than ARIMA(p, 0, 0) or ARMA(p, 0) because the underlying algorithm of AR is quite different.

Note

The MA model may be faster and more accurate than ARIMA(0, 0, q) or ARMA(0, q) because the underlying algorithm of MA is quite different.

Parameters#

name: str, optional

Name of the model. The model is stored in the database.

overwrite_model: bool, optional

If set to True, training a model with the same name as an existing model overwrites the existing model.

order: tuple, optional

The (p,q) order of the model for the autoregressive, and moving average components.

tol: float, optional

Determines whether the algorithm has reached the specified accuracy result.

max_iter: int, optional

Determines the maximum number of iterations the algorithm performs before achieving the specified accuracy result.

init: str, optional

Initialization method, one of the following:

  • ‘zero’:

    Coefficients are initialized to zero.

  • ‘hr’:

    Coefficients are initialized using the Hannan-Rissanen algorithm.

missing: str, optional

Method for handling missing values, one of the following strings:

  • ‘drop’:

    Missing values are ignored.

  • ‘raise’:

    Missing values raise an error.

  • ‘zero’:

    Missing values are set to zero.

  • ‘linear_interpolation’:

    Missing values are replaced by a linearly interpolated value based on the nearest valid entries before and after the missing value. In cases where the first or last values in a dataset are missing, the function errors.

Attributes#

Many attributes are created during the fitting phase.

phi_: numpy.array

The coefficient of the AutoRegressive process. It represents the strength and direction of the relationship between a variable and its past values.

theta_: numpy.array

The theta coefficient of the Moving Average process. It signifies the impact and contribution of the lagged error terms in determining the current value within the time series model.

mean_: float

The mean of the time series values.

features_importance_: numpy.array

The importance of features is computed through the AutoRegressive part coefficients, which are normalized based on their range. Subsequently, an activation function calculates the final score. It is necessary to use the features_importance() method to compute it initially, and the computed values will be subsequently utilized for subsequent calls.

mse_: float

The mean squared error (MSE) of the model, based on one-step forward forecasting, may not always be relevant. Utilizing a full forecasting approach is recommended to compute a more meaningful and comprehensive metric.

n_: int

The number of rows used to fit the model.

Note

All attributes can be accessed using the get_attributes() method.

Note

Several other attributes can be accessed by using the get_vertica_attributes() method.

Examples#

The following examples provide a basic understanding of usage. For more detailed examples, please refer to the Machine Learning or the Examples section on the website.

Initialization#

We import verticapy:

import verticapy as vp

Hint

By assigning an alias to verticapy, we mitigate the risk of code collisions with other libraries. This precaution is necessary because verticapy uses commonly known function names like “average” and “median”, which can potentially lead to naming conflicts. The use of an alias ensures that the functions from verticapy are used as intended without interfering with functions from other libraries.

For this example, we will use the airline passengers dataset.

import verticapy.datasets as vpd

data = vpd.load_airline_passengers()
📅
date
Date
123
passengers
Integer
11949-01-01112
21949-02-01118
31949-03-01132
41949-04-01129
51949-05-01121
61949-06-01135
71949-07-01148
81949-08-01148
91949-09-01136
101949-10-01119
111949-11-01104
121949-12-01118
131950-01-01115
141950-02-01126
151950-03-01141
161950-04-01135
171950-05-01125
181950-06-01149
191950-07-01170
201950-08-01170
211950-09-01158
221950-10-01133
231950-11-01114
241950-12-01140
251951-01-01145
261951-02-01150
271951-03-01178
281951-04-01163
291951-05-01172
301951-06-01178
311951-07-01199
321951-08-01199
331951-09-01184
341951-10-01162
351951-11-01146
361951-12-01166
371952-01-01171
381952-02-01180
391952-03-01193
401952-04-01181
411952-05-01183
421952-06-01218
431952-07-01230
441952-08-01242
451952-09-01209
461952-10-01191
471952-11-01172
481952-12-01194
491953-01-01196
501953-02-01196
511953-03-01236
521953-04-01235
531953-05-01229
541953-06-01243
551953-07-01264
561953-08-01272
571953-09-01237
581953-10-01211
591953-11-01180
601953-12-01201
611954-01-01204
621954-02-01188
631954-03-01235
641954-04-01227
651954-05-01234
661954-06-01264
671954-07-01302
681954-08-01293
691954-09-01259
701954-10-01229
711954-11-01203
721954-12-01229
731955-01-01242
741955-02-01233
751955-03-01267
761955-04-01269
771955-05-01270
781955-06-01315
791955-07-01364
801955-08-01347
811955-09-01312
821955-10-01274
831955-11-01237
841955-12-01278
851956-01-01284
861956-02-01277
871956-03-01317
881956-04-01313
891956-05-01318
901956-06-01374
911956-07-01413
921956-08-01405
931956-09-01355
941956-10-01306
951956-11-01271
961956-12-01306
971957-01-01315
981957-02-01301
991957-03-01356
1001957-04-01348
Rows: 1-100 | Columns: 2

Note

VerticaPy offers a wide range of sample datasets that are ideal for training and testing purposes. You can explore the full list of available datasets in the Datasets, which provides detailed information on each dataset and how to use them effectively. These datasets are invaluable resources for honing your data analysis and machine learning skills within the VerticaPy environment.

We can plot the data to visually inspect it for the presence of any trends:

data["passengers"].plot(ts = "date")

Though the increasing trend is obvious in our example, we can confirm it by the mkt() (Mann Kendall test) test:

from verticapy.machine_learning.model_selection.statistical_tests import mkt

mkt(data, column = "passengers", ts = "date")
value
Mann Kendall Test Statistic14.381116595942574
S8327.0
STDS578.953653873376
p_value6.798871501067664e-47
Monotonic Trend
Trendincreasing
Rows: 1-6 | Columns: 2

The above tests gives us some more insights into the data such as that the data is monotonic, and is increasing. Furthermore, the low p-value confirms the presence of a trend with respect to time. Now we are sure of the trend so we can apply the appropriate time-series model to fit it.

Model Initialization#

First we import the ARMA model:

from verticapy.machine_learning.vertica.tsa import ARMA

Then we can create the model:

model = ARMA(order = (12, 1, 2))

Hint

In verticapy 1.0.x and higher, you do not need to specify the model name, as the name is automatically assigned. If you need to re-use the model, you can fetch the model name from the model’s attributes.

Important

The model name is crucial for the model management system and versioning. It’s highly recommended to provide a name if you plan to reuse the model later.

Model Fitting#

We can now fit the model:

model.fit(data, "date", "passengers")

Important

To train a model, you can directly use the vDataFrame or the name of the relation stored in the database. The test set is optional and is only used to compute the test metrics. In verticapy, we don’t work using X matrices and y vectors. Instead, we work directly with lists of predictors and the response name.

Features Importance#

We can conveniently get the features importance:

model.features_importance()
Out[5]: 

Important

Feature importance is determined by using the coefficients of the auto-regressive (AR) process and normalizing them. This method tends to be precise when your time series primarily consists of an auto-regressive component. However, its accuracy may be a topic of discussion if the time series contains other components as well.

Model Register#

In order to register the model for tracking and versioning:

model.register("model_v1")

Please refer to Model Tracking and Versioning for more details on model tracking and versioning.


One important thing in time-series forecasting is that it has two types of forecasting:

  • One-step ahead forecasting

  • Full forecasting

Important

The default method is one-step ahead forecasting. To use full forecasting, use ``method = “forecast” ``.

One-step ahead#

In this type of forecasting, the algorithm utilizes the true value of the previous timestamp (t-1) to predict the immediate next timestamp (t). Subsequently, to forecast additional steps into the future (t+1), it relies on the actual value of the immediately preceding timestamp (t).

A notable drawback of this forecasting method is its tendency to exhibit exaggerated accuracy, particularly when predicting more than one step into the future.

Metrics#

We can get the entire report using:

model.report()
value
explained_variance0.843011800385913
max_error108.703124575763
median_absolute_error23.5457433749146
mean_absolute_error31.195252646127
mean_squared_error1692.48056292341
root_mean_squared_error41.1397686299207
r20.842975867228999
r2_adj0.841494507485876
aic807.057132402344
bic812.230918666116
Rows: 1-10 | Columns: 2

You can also choose the number of predictions and where to start the forecast. For example, the following code will allow you to generate a report with 30 predictions, starting the forecasting process at index 40.

model.report(start = 40, npredictions = 30)
value
explained_variance0.421076426699653
max_error52.5240603081696
median_absolute_error13.4454792561496
mean_absolute_error19.8817394978026
mean_squared_error607.492387897198
root_mean_squared_error24.6473606679741
r20.420420176407039
r2_adj0.399720896993005
aic197.020930088516
bic199.082584111099
Rows: 1-10 | Columns: 2

Important

Most metrics are computed using a single SQL query, but some of them might require multiple SQL queries. Selecting only the necessary metrics in the report can help optimize performance. E.g. model.report(metrics = ["mse", "r2"]).

You can utilize the score() function to calculate various regression metrics, with the explained variance being the default.

model.score()
Out[6]: 0.842975867228999

The same applies to the score. You can choose where to start and the number of predictions to use.

model.score(start = 40, npredictions = 30)
Out[7]: 0.420420176407039

Important

If you do not specify a starting point and the number of predictions, the forecast will begin at one-fourth of the dataset, which can result in an inaccurate score, especially for large datasets. It’s important to choose these parameters carefully.

Prediction#

Prediction is straight-forward:

model.predict()
123
prediction
Float(22)
1436.808245506626
2411.303769750774
3456.591517112856
4497.165582992911
5523.414142302269
6579.634194756896
7670.753858449996
8648.086244158784
9558.685139438718
10498.606577143251
Rows: 1-10 | Column: prediction | Type: Float(22)

Hint

You can control the number of prediction steps by changing the npredictions parameter: model.predict(npredictions = 30).

Note

Predictions can be made automatically by using the training set, in which case you don’t need to specify the predictors. Alternatively, you can pass only the vDataFrame to the predict() function, but in this case, it’s essential that the column names of the vDataFrame match the predictors and response name in the model.

If you would like to have the ‘time-stamps’ (ts) in the output then you can switch the output_estimated_ts the parameter. And if you also would like to see the standard error then you can switch the ``output_standard_errors``parameter:

model.predict(output_estimated_ts = True, output_standard_errors = True)
📅
date
Date
123
prediction
Float(22)
123
std_err
Float(22)
11961-01-01436.8082455066261.0
21961-02-01411.3037697507741.00174003420373
31961-03-01456.5915171128561.01233294298172
41961-04-01497.1655829929111.01300655943781
51961-05-01523.4141423022691.027160664119
61961-06-01579.6341947568961.02738774823065
71961-07-01670.7538584499961.06683194222182
81961-08-01648.0862441587841.06683209333843
91961-09-01558.6851394387181.07835995410046
101961-10-01498.6065771432511.08127865934304
Rows: 1-10 | Columns: 3

Important

The output_estimated_ts parameter provides an estimation of ‘ts’ assuming that ‘ts’ is regularly spaced.

If you don’t provide any input, the function will begin forecasting after the last known value. If you want to forecast starting from a specific value within the input dataset or another dataset, you can use the following syntax.

model.predict(
    data,
    "date",
    "passengers",
    start = 40,
    npredictions = 20,
    output_estimated_ts = True,
    output_standard_errors = True,
)
📅
date
Date
123
prediction
Float(22)
123
std_err
Float(22)
11952-05-01171.5482322358181.0
21952-06-01194.5123427049461.0
31952-07-01222.6719403368351.0
41952-08-01252.9822297271321.0
51952-09-01238.0619125893941.0
61952-10-01196.9808517551861.0
71952-11-01164.6486915064611.0
81952-12-01159.2455340105661.0
91953-01-01205.9121039997881.0
101953-02-01202.3227300359951.0
111953-03-01201.6795663197631.0
121953-04-01256.802012797661.0
131953-05-01221.861327546431.0
141953-06-01239.2459609794111.0
151953-07-01267.3271566321421.0
161953-08-01279.0323603751911.0
171953-09-01281.3402367847061.0
181953-10-01203.7604421188141.0
191953-11-01191.4534823789211.0
201953-12-01159.7706074599341.0
Rows: 1-20 | Columns: 3

Plots#

We can conveniently plot the predictions on a line plot to observe the efficacy of our model:

model.plot(data, "date", "passengers", npredictions = 20, start=135)

Note

You can control the number of prediction steps by changing the npredictions parameter: model.plot(npredictions = 30).

Please refer to Machine Learning - Time Series Plots for more examples.

Full forecasting#

In this forecasting approach, the algorithm relies solely on a chosen true value for initiation. Subsequently, all predictions are established based on a series of previously predicted values.

This methodology aligns the accuracy of predictions more closely with reality. In practical forecasting scenarios, the goal is to predict all future steps, and this technique ensures a progressive sequence of predictions.

Metrics#

We can get the report using:

model.report(start = 40, method = "forecast")

By selecting start = 40, we will measure the accuracy from 40th time-stamp and continue the assessment until the last available time-stamp.

value
explained_variance0.856355581856155
max_error171.905938422592
median_absolute_error39.8392278219606
mean_absolute_error46.4958633427347
mean_squared_error3472.99371220737
root_mean_squared_error58.932111044891
r20.664855563897496
r2_adj0.661569834131785
aic852.086332997406
bic857.177094993708
Rows: 1-10 | Columns: 2

Notice that the accuracy using method = forecast is poorer than the one-step ahead forecasting.

You can utilize the score() function to calculate various regression metrics, with the explained variance being the default.

model.score(start = 40, npredictions = 30, method = "forecast")
Out[8]: 0.285565495885585

Prediction#

Prediction is straight-forward:

model.predict(start = 100, npredictions = 40, method = "forecast")
123
prediction
Float(22)
11011.09669062909
21148.32678897059
31090.51794877614
41009.87358230212
5747.754338701048
6587.098901215861
7444.495918307164
8317.747911374787
9374.322358690382
10473.284471095515
11729.643678064786
12887.429050463949
131092.29300784565
141234.03689920842
151181.1887847047
161088.14776653289
17783.979083705412
18605.804632247447
19412.222285053335
20277.702017385629
21330.11196614039
22444.619365164586
23745.827031912083
24927.420632459289
251182.77854631629
261328.35058838066
271285.85843434179
281173.59198840309
29824.83628957681
30623.054243934679
31370.093556127764
32229.024666603705
33270.850967558915
34408.032452903617
35756.504475660798
36968.822129499929
371283.40758688914
381432.86496297949
391407.00508221321
401267.20307757442
Rows: 1-40 | Column: prediction | Type: Float(22)

If you want to forecast starting from a specific value within the input dataset or another dataset, you can use the following syntax.

model.predict(
    data,
    "date",
    "passengers",
    start = 40,
    npredictions = 20,
    output_estimated_ts = True,
    output_standard_errors = True,
    method = "forecast"
)
📅
date
Date
123
prediction
Float(22)
123
std_err
Float(22)
11952-05-01171.5482322358181.0
21952-06-01183.1668711796421.00174003420373
31952-07-01187.9733378947221.01233294298172
41952-08-01208.6453064051471.01300655943781
51952-09-01207.7366985543111.027160664119
61952-10-01193.5394468590851.02738774823065
71952-11-01172.3773937473021.06683194222182
81952-12-01154.7890949313891.06683209333843
91953-01-01174.7534190654151.07835995410046
101953-02-01174.9203623128531.08127865934304
111953-03-01189.02812018911.08992715239033
121953-04-01201.5150107302621.09183098148237
131953-05-01197.6035719671111.99988128107435
141953-06-01211.1974977000651.99993755844603
151953-07-01214.268566666442.00161582411564
161953-08-01235.6120776155752.00694633503065
171953-09-01231.9991714737652.02903658560943
181953-10-01219.3592693388582.03175369433929
191953-11-01196.9046044196152.14039154204599
201953-12-01178.5755902074972.14043910030784
Rows: 1-20 | Columns: 3

Plots#

We can conveniently plot the predictions on a line plot to observe the efficacy of our model:

model.plot(data, "date", "passengers", npredictions = 40, start = 120, method = "forecast")
__init__(name: str = None, overwrite_model: bool = False, order: tuple[int] | list[int] = (0, 0), tol: float = 1e-06, max_iter: int = 100, init: Literal['zero', 'hr'] = 'zero', missing: Literal['drop', 'raise', 'zero', 'linear_interpolation'] = 'linear_interpolation') None#

Must be overridden in the child class

Methods

__init__([name, overwrite_model, order, ...])

Must be overridden in the child class

contour([nbins, chart])

Draws the model's contour plot.

deploySQL([ts, y, start, npredictions, ...])

Returns the SQL code needed to deploy the model.

does_model_exists(name[, raise_error, ...])

Checks whether the model is stored in the Vertica database.

drop()

Drops the model from the Vertica database.

export_models(name, path[, kind])

Exports machine learning models.

features_importance([show, chart])

Computes the model's features importance.

fit(input_relation, ts, y[, test_relation, ...])

Trains the model.

get_attributes([attr_name])

Returns the model attributes.

get_match_index(x, col_list[, str_check])

Returns the matching index.

get_params()

Returns the parameters of the model.

get_plotting_lib([class_name, chart, ...])

Returns the first available library (Plotly, Matplotlib, or Highcharts) to draw a specific graphic.

get_vertica_attributes([attr_name])

Returns the model Vertica attributes.

import_models(path[, schema, kind])

Imports machine learning models.

plot([vdf, ts, y, start, npredictions, ...])

Draws the model.

predict([vdf, ts, y, start, npredictions, ...])

Predicts using the input relation.

register(registered_name[, raise_error])

Registers the model and adds it to in-DB Model versioning environment with a status of 'under_review'.

regression_report([metrics, start, ...])

Computes a regression report using multiple metrics to evaluate the model (r2, mse, max error...).

report([metrics, start, npredictions, method])

Computes a regression report using multiple metrics to evaluate the model (r2, mse, max error...).

score([metric, start, npredictions, method])

Computes the model score.

set_params([parameters])

Sets the parameters of the model.

summarize()

Summarizes the model.

to_binary(path)

Exports the model to the Vertica Binary format.

to_pmml(path)

Exports the model to PMML.

to_python([return_proba, ...])

Returns the Python function needed for in-memory scoring without using built-in Vertica functions.

to_sql([X, return_proba, ...])

Returns the SQL code needed to deploy the model without using built-in Vertica functions.

to_tf(path)

Exports the model to the Frozen Graph format (TensorFlow).

Attributes