TIME SERIES (AUTOREGRESSIVE) MODELS

 

INTRODUCTION

 

1.  Causal premise: the historical pattern of the dependent variable can be used to forecast its future values under the assumption that past influences will continue into the future.

 

2.  Extrapolation of a past time series into the future (ex ante) depends upon the mathematical form that most nearly described its pattern in the past (ex post).

 

3.  Implications of extrapolation of historical data for model selection:

          a.  Time series models are best applied to an immediate or short-term forecast horizon.

          b.  Time series models are most satisfactory when historical data patterns are changing slowly and consistently (stationary series).

          c.  Models range from simple and inexpensive (naive) to more complicated and expensive (Box-Jenkins).

          d.  Forecasts based upon past time patterns must be augmented by intuitive judgment about other influences, especially as the forecast horizon lengthens to, say, six months.

 

PATTERN IDENTIFICATION

 

1.  The ACF of the original data can be used to determine whether the data are stationary (no trend).

2.  First differencing removes a linear trend.

3.  Second differencing removes a quadratic trend.

4.  First differencing of the logarithms of the data removes a constant growth trend.
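
A minimal sketch of these transformations in Python, assuming the series is held in a NumPy array (the array name and values are illustrative):

import numpy as np

y = np.array([100.0, 104.0, 109.0, 115.0, 122.0])   # hypothetical series with a trend

first_diff = np.diff(y)               # removes a linear trend
second_diff = np.diff(y, n=2)         # removes a quadratic trend
log_diff = np.diff(np.log(y))         # removes a constant growth (exponential) trend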

 

SIMPLE TIME SERIES MODELS

 

1.  The performance of a model is based upon its ex post error terms rather than its mathematical sophistication.

2.  A mean forecast for stationary data assumes that the variation around the mean is either small or random.

3.  A no-change (naive) model allows the forecast to vary with the most recent observation but assumes no trend or seasonal variation.

4.  Average change models adjust for historical trends, but they lag turning points and weight all past values equally.

5.  Average percent change models give better forecasts for data with a constant growth rate, but forecasts extended more than one or two months into the future compound the percent change, and this compounding effect must be avoided.
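
A minimal sketch of one-step-ahead forecasts from these simple models, assuming a plain Python list of monthly observations (the names and values are illustrative):

y = [100.0, 102.0, 103.5, 106.0, 108.0]              # hypothetical monthly data

mean_forecast = sum(y) / len(y)                      # mean model for stationary data
naive_forecast = y[-1]                               # no-change (naive) model

n = 2                                                # number of recent changes to average
avg_change = sum(y[-i] - y[-i - 1] for i in range(1, n + 1)) / n
change_forecast = y[-1] + avg_change                 # average change model

avg_pct = sum((y[-i] - y[-i - 1]) / y[-i - 1] for i in range(1, n + 1)) / n
pct_change_forecast = y[-1] * (1 + avg_pct)          # average percent change model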

 

 

MODEL EVALUATION

 

1.  Table 8-2 shows the evaluation of the historical wage data.  The best model appears to be the average change model with n = 2.  Note that each model is evaluated on its MAPE, MAD, mean error, and mean percent error.  The average change model does have a positive bias (overforecasting wages on average).

 

2.  To evaluate the most recent performance, the last three data points may be removed from the data and the models re-estimated.  Table 8-3 shows that the naive model outperforms the other two models on a simulated ex ante basis because it is less likely to build up positive error terms over the three-month period.  Because of its simplicity and better recent performance, the naive model would be the best choice.

 

3.  We may decide to use the average change model and the naive model as benchmarks against which more sophisticated models can be evaluated.  Each model is updated every time a new data point is recorded.

 

4.  Example 8-1 shows that we may combine the two forecasts into one forecast by using a weighted average of the two forecast values.  The weighting scheme should assign a higher weight to the forecast that generates the smaller error.  A method of determining these weights is as follows (see the sketch after the list):

          a.  Take each mean error as a percent of their combined mean error (ignoring the signs).

          b.  Determine the inverse of these percentages.

          c.  Weight each forecast by this inverse to determine a combined forecast.
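
A minimal sketch of this weighting scheme for two forecasts, with the inverse percentages normalized so the weights sum to one (the error and forecast values are illustrative):

mean_abs_errors = {"average_change": 1.8, "naive": 0.6}      # mean errors, signs ignored
forecasts = {"average_change": 112.0, "naive": 110.0}        # next-period forecasts from each model

total = sum(mean_abs_errors.values())
shares = {m: e / total for m, e in mean_abs_errors.items()}  # each error as a percent of the combined error
inverses = {m: 1.0 / s for m, s in shares.items()}           # invert: the smaller the error, the larger the value
weights = {m: inv / sum(inverses.values()) for m, inv in inverses.items()}  # normalize so weights sum to one

combined_forecast = sum(weights[m] * forecasts[m] for m in forecasts)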

 

 

Autoregressive Models

 

All autoregressive models involve a determination of the order of the model (the number of lagged values of the variable on the right-hand side of the equation) and the weights assigned to each of the lagged values in the model.
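
As a minimal sketch, an autoregressive model of order p forecasts the next value as a weighted sum of the last p observations; the weights and data below are illustrative, not estimated:

y = [100.0, 102.0, 101.0, 104.0, 106.0]    # hypothetical series
weights = [0.6, 0.3, 0.1]                  # assumed weights on lags 1, 2, 3 (order p = 3)

# forecast = w1*y(t) + w2*y(t-1) + w3*y(t-2)
forecast = sum(w * y[-(i + 1)] for i, w in enumerate(weights))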

 

Moving Average Models

 

Simple Moving Average Models

 

1.  Each data series may be converted into a new series that is a moving average over any number of periods.  This moving average smooths out irregularities and captures cyclical influences if the data are stationary and seasonally adjusted.  Simple moving average models have an order of n and weights of 1/n.  Any value of n may be used, but the higher the value of n, the less variation there is in the forecasts.

 

2.  A forecast for the next period is the moving average computed through the current period (see the sketch after this list).

 

3.  The value and bias of the error terms are evaluated in determining the usefulness of the model or whether an alternative number of periods should be tried.

 

4.  Table 14.2 compares the forecasts and error measures for two alternative simple moving average forecasting models with n = 2 and n = 4.  Clearly n = 4 is preferred over n = 2 based upon the lower MAD.
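
A minimal sketch of simple moving average forecasting and a MAD comparison for n = 2 and n = 4 (the data are illustrative, not the text's series):

y = [100.0, 104.0, 98.0, 103.0, 99.0, 105.0, 101.0, 97.0, 103.0, 100.0]   # hypothetical stationary data

def moving_average_forecasts(series, n):
    # the forecast for period t is the average of the n observations ending at period t - 1
    return [sum(series[t - n:t]) / n for t in range(n, len(series))]

for n in (2, 4):
    forecasts = moving_average_forecasts(y, n)
    actuals = y[n:]
    mad = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)
    print(f"n = {n}: MAD = {mad:.2f}")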

 

Problems with simple moving averages:

 

1.  The forecast will lag turning points if it captures them at all (oversmoothing for high values of n).

 

2.  Forecasts will be unreliable (biased) when there is a strong trend in the variable.

 

3.  Past observations are given the same weight.  This can be overcome with a weighted moving average, as shown in Table 4.2, in which the weights decline for older observations but still sum to one.
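
A minimal sketch of a weighted moving average forecast with declining weights that sum to one (the weights and data are illustrative):

y = [100.0, 102.0, 101.0, 104.0, 106.0]    # hypothetical data
weights = [0.4, 0.3, 0.2, 0.1]             # most recent observation weighted most heavily; weights sum to one

forecast_next = sum(w * y[-(i + 1)] for i, w in enumerate(weights))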

 

Double Moving Average Models

 

1.  Double moving average models correct for a trend. 

 

2.  The original data series is smoothed with a single moving average of order n (call this series M).

 

3.  The smoothed series is smoothed again with a second moving average of order n (call this series Md).

 

4.  For the two new series, the following parameters are calculated for each time period, beginning with the first period for which both M and Md are available:

 

          a = 2M - Md

          b = (2/(n-1)) (M - Md)

          Predicted Y (t+T) = a + b T
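
A minimal sketch of a double moving average forecast under these formulas (the series and the value of n are illustrative):

y = [100.0, 103.0, 106.0, 110.0, 113.0, 117.0, 120.0]   # hypothetical trending data
n = 3

# single moving average of order n (M), then a moving average of that series (Md)
M = [sum(y[i - n + 1:i + 1]) / n for i in range(n - 1, len(y))]
Md = [sum(M[i - n + 1:i + 1]) / n for i in range(n - 1, len(M))]

a = 2 * M[-1] - Md[-1]                   # level at the current period
b = (2 / (n - 1)) * (M[-1] - Md[-1])     # trend per period
T = 1                                    # periods ahead
forecast = a + b * T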

 

An alternative method of dealing with a trend applies a simple moving average forecast to the first differences of the data series.  The forecasted change is then added to the last observed value to obtain next period's forecast.
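
A minimal sketch of this alternative approach (the series and the value of n are illustrative):

y = [100.0, 103.0, 106.0, 110.0, 113.0, 117.0, 120.0]   # hypothetical trending data
n = 3

changes = [y[i] - y[i - 1] for i in range(1, len(y))]    # first differences
avg_change = sum(changes[-n:]) / n                       # simple moving average of the recent changes
forecast = y[-1] + avg_change                            # forecasted change added to the last value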

 

Limitations of Moving Average Models

 

1.  May require a lengthy time series, especially if a double moving average is required.

2.  Weights equal to 1/n are arbitrary and give equal value to all past values.

3.  The “trial and error” determination of the optimal value of n is time consuming.

4.  Forecasts are mechanistic and unreliable except for forecasts of the immediate time period.

 

Exponential Smoothing

 

Simple exponential smoothing forecast

          1.  Begin with an initial smoothed value (often the initial value or an average of several recent values) and an assumed smoothing constant that is a positive fraction.

          2.  The smoothed series is updated by multiplying the most recent value by the smoothing constant and adding one minus the smoothing constant times the previous smoothed value.

          3.  The forecast for the next period is the smoothed value of the previous period.
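
A minimal sketch of simple exponential smoothing under these steps (the smoothing constant, initial value, and data are illustrative):

y = [100.0, 102.0, 101.0, 104.0, 106.0]   # hypothetical data
alpha = 0.3                               # assumed smoothing constant (a positive fraction)

smoothed = y[0]                           # initial smoothed value (here, the first observation)
for value in y[1:]:
    smoothed = alpha * value + (1 - alpha) * smoothed

forecast_next = smoothed                  # forecast for the next period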

 

Double exponential smoothing forecast (Brown model)

          1.  For data that are not stationary, a single exponential smoothing forecast will be biased.  Double exponential smoothing is one method of correcting for the trend in the data.

          2.  Begin by determining an exponentially smoothed series for the original data based upon an assumed value of alpha and an initial value of S.

          3.  Calculate an exponentially smoothed series of the first smoothed series, using the same value of alpha and an initial value of Sd equal to S.

          4.  Calculate a equal to 2*S - Sd

          5.  Calculate b equal to (alpha/(1 - alpha))*(S - Sd)

          6.  The forecast for T periods ahead is a + b T.
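
A minimal sketch of Brown's double exponential smoothing under these steps (alpha and the data are illustrative):

y = [100.0, 103.0, 106.0, 110.0, 113.0, 117.0, 120.0]   # hypothetical trending data
alpha = 0.4                                             # assumed smoothing constant

S = Sd = y[0]                    # initialize both smoothed series at the first observation
for value in y[1:]:
    S = alpha * value + (1 - alpha) * S      # smooth the original series
    Sd = alpha * S + (1 - alpha) * Sd        # smooth the smoothed series

a = 2 * S - Sd
b = (alpha / (1 - alpha)) * (S - Sd)
T = 1                                        # periods ahead
forecast = a + b * T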

 

Holt’s model for nonstationary data

          1.  An alternative model for adjusting for the trend in series Y uses two smoothing constants, alpha for the average of the smoothed series and beta for the change in the smoothed series, called the trend series.

          2.  The average series, A, is computed by assuming an initial value (either the current value of Y or an average of recent values) and a smoothing constant, alpha.  The A series is updated by multiplying alpha times the most recent value of Y and adding one minus alpha times the sum of the previous values of the A series and the T series.

          3.  The trend series, T, is computed by assuming an initial value (an average of the change in several recent values of Y, or zero if there are a large number of observations) and updating it by multiplying beta times the change in A and adding one minus beta times the previous value of T.

          4.  The forecast for p periods ahead is the most recent value of A plus p times the most recent value of T.
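
A minimal sketch of Holt's two-constant method under this description (alpha, beta, the initial values, and the data are illustrative):

y = [100.0, 103.0, 106.0, 110.0, 113.0, 117.0, 120.0]   # hypothetical trending data
alpha, beta = 0.5, 0.3                                  # assumed smoothing constants

A = y[0]               # initial level (here, the first observation)
T = y[1] - y[0]        # initial trend (here, the first observed change)
for value in y[1:]:
    prev_A = A
    A = alpha * value + (1 - alpha) * (A + T)    # update the average (level) series
    T = beta * (A - prev_A) + (1 - beta) * T     # update the trend series

p = 1                  # periods ahead
forecast = A + p * T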

 

EViews enables the forecaster to choose among various exponential smoothing models with the command:  SMOOTH