Preface for Second Edition  xiii
Acknowledgments  xv

1.2 Stationary Time Series  5
1.3 Autocovariance and Autocorrelation Functions for Stationary Time Series  7
1.4 Estimation of the Mean, Autocovariance, and Autocorrelation for Stationary Time Series  11
1.6 Estimating the Power Spectrum and Spectral Density for Discrete Time Series  32
Appendix 1A Fourier Series  46
|
|
2.1 Introduction to Linear Filters  61
2.1.1 Relationship between the Spectra of the Input and Output of a Linear Filter  63
2.2 Stationary General Linear Processes  63
2.2.1 Spectrum and Spectral Density for a General Linear Process  65
2.3 Wold Decomposition Theorem  66
2.4 Filtering Applications  66
2.4.1 Butterworth Filters  69
Appendix 2A Theorem Proofs  75
|
3 ARMA Time Series Models  83
3.2.1 Inverting the Operator  93
3.2.3 AR(p) Model for p ≥ 1  100
3.2.4 Autocorrelations of an AR(p) Model  101
3.2.5 Linear Difference Equations  102
3.2.6 Spectral Density of an AR(p) Model  105
3.2.7.1 Autocorrelations of an AR(2) Model  105
3.2.7.2 Spectral Density of an AR(2)  109
3.2.7.3 Stationary/Causal Region of an AR(2)  109
3.2.7.4 ψ-Weights of an AR(2) Model  109
3.2.8 Summary of AR(1) and AR(2) Behavior  117
3.2.10 AR(1) and AR(2) Building Blocks of an AR(p) Model  122
3.2.12 Invertibility/Infinite-Order AR Processes  131
3.2.13 Two Reasons for Imposing Invertibility  132
3.3.1 Stationarity and Invertibility Conditions for an ARMA(p,q) Model  136
3.3.2 Spectral Density of an ARMA(p,q) Model  136
3.3.3 Factor Tables and ARMA(p,q) Models  137
3.3.4 Autocorrelations of an ARMA(p,q) Model  140
3.3.5 ψ-Weights of an ARMA(p,q)  144
3.3.6 Approximating ARMA(p,q) Processes Using High-Order AR(p) Models  146
3.4 Visualizing AR Components  146
3.5 Seasonal ARMA(p,q) × (Ps,Qs)s Models  149
3.6 Generating Realizations from ARMA(p,q) Processes  155
3.7.1 Memoryless Transformations  157
Appendix 3A Proofs of Theorems  161
|
4 Other Stationary Time Series Models  181
4.1 Stationary Harmonic Models  181
4.1.1 Pure Harmonic Models  183
4.1.2 Harmonic Signal-Plus-Noise Models  185
4.1.3 ARMA Approximation to the Harmonic Signal-Plus-Noise Model  187
4.2 ARCH and GARCH Processes  191
4.2.1.1 The ARCH(1) Model  193
4.2.1.2 The ARCH(q0) Model  196
4.2.2 The GARCH(p0,q0) Process  197
4.2.3 AR Processes with ARCH or GARCH Noise  199
|
5 Nonstationary Time Series Models  205
5.1 Deterministic Signal-Plus-Noise Models  205
5.1.1 Trend-Component Models  206
5.1.2 Harmonic Component Models  208
5.2 ARIMA(p,d,q) and ARUMA(p,d,q) Processes  210
5.2.1 Extended Autocorrelations of an ARUMA(p,d,q) Process  211
5.3 Multiplicative Seasonal ARUMA(p,d,q) × (Ps,Ds,Qs)s Process  217
5.3.1 Factor Tables for Seasonal Models of the Form of Equation 5.17 with s = 4 and s = 12  218
5.4.2 Random Walk with Drift  221
5.5 G-Stationary Models for Data with Time-Varying Frequencies  221
|
|
6.1 Mean-Square Prediction Background  230
6.2 Box-Jenkins Forecasting for ARMA(p,q) Models  232
6.2.1 General Linear Process Form of the Best Forecast Equation  233
6.3 Properties of the Best Forecast X̂t0(ℓ)  233
6.4 π-Weight Form of the Forecast Function  235
6.5 Forecasting Based on the Difference Equation  236
6.5.1 Difference Equation Form of the Best Forecast Equation  237
6.5.2 Basic Difference Equation Form for Calculating Forecasts from an ARMA(p,q) Model  238
6.6 Eventual Forecast Function  242
6.7 Assessing Forecast Performance  243
6.7.1 Probability Limits for Forecasts  243
6.7.2 Forecasting the Last k Values  247
6.8 Forecasts Using ARUMA(p,d,q) Models  248
6.9 Forecasts Using Multiplicative Seasonal ARUMA Models  255
6.10 Forecasts Based on Signal-Plus-Noise Models  259
Appendix 6A Proof of Projection Theorem  262
Appendix 6B Basic Forecasting Routines  264
|
|
7.2 Preliminary Estimates  274
7.2.1 Preliminary Estimates for AR(p) Models  274
7.2.1.1 Yule-Walker Estimates  274
7.2.1.2 Least Squares Estimation  276
7.2.2 Preliminary Estimates for MA(q) Models  280
7.2.2.1 MM Estimation for an MA(q)  280
7.2.2.2 MA(q) Estimation Using the Innovations Algorithm  281
7.2.3 Preliminary Estimates for ARMA(p,q) Models  283
7.2.3.1 Extended Yule-Walker Estimates of the AR Parameters  283
7.2.3.2 Tsay-Tiao Estimates of the AR Parameters  284
7.2.3.3 Estimating the MA Parameters  285
7.3 ML Estimation of ARMA(p,q) Parameters  286
7.3.1 Conditional and Unconditional ML Estimation  286
7.3.2 ML Estimation Using the Innovations Algorithm  291
7.4 Backcasting and Estimating σ  292
7.5 Asymptotic Properties of Estimators  295
7.5.1.1 Confidence Intervals: AR Case  296
7.5.2.1 Confidence Intervals for ARMA(p,q) Parameters  300
7.5.3 Asymptotic Comparisons of Estimators for an MA(1)...  301
7.6 Estimation Examples Using Data  303
7.7 ARMA Spectral Estimation  309
7.8 ARUMA Spectral Estimation  313
|
|
8.1 Preliminary Check for White Noise  321
8.2 Model Identification for Stationary ARMA Models  324
8.2.1 Model Identification Based on AIC and Related Measures  325
8.3 Model Identification for Nonstationary ARUMA(p,d,q) Models  328
8.3.1 Including a Nonstationary Factor in the Model  330
8.3.2 Identifying Nonstationary Component(s) in a Model  330
8.3.3 Decision Between a Stationary or a Nonstationary Model  335
8.3.4 Deriving a Final ARUMA Model  335
8.3.5 More on the Identification of Nonstationary Components  338
8.3.5.1 Including a Factor (1 − B)^d in the Model  338
8.3.5.2 Testing for a Unit Root  341
8.3.5.3 Including a Seasonal Factor (1 − B^s) in the Model  344
Appendix 8A Model Identification Based on Pattern Recognition  353
Appendix 8B Model Identification Functions in tswge  368
|
|
9.1.1 Check Sample Autocorrelations of Residuals versus 95% Limit Lines  376
9.1.3 Other Tests for Randomness  377
9.1.4 Testing Residuals for Normality  380
9.2 Stationarity versus Nonstationarity  380
9.3 Signal-Plus-Noise versus Purely Autocorrelation-Driven Models  386
9.3.1 Cochrane-Orcutt and Other Methods  386
9.3.2 A Bootstrapping Approach  388
9.3.3 Other Methods for Trend Testing  388
9.4 Checking Realization Characteristics  389
9.5 Comprehensive Analysis of Time Series Data: A Summary  394
|
10 Vector-Valued (Multivariate) Time Series  399
10.1 Multivariate Time Series Basics  399
10.2 Stationary Multivariate Time Series  401
10.2.1 Estimating the Mean and Covariance for Stationary Multivariate Processes  406
10.3 Multivariate (Vector) ARMA Processes  407
10.3.1 Forecasting Using VAR(p) Models  414
10.3.2 Spectrum of a VAR(p) Model  416
10.3.3 Estimating the Coefficients of a VAR(p) Model  416
10.3.3.1 Yule-Walker Estimation  416
10.3.3.2 Least Squares and Conditional ML Estimation  417
10.3.3.3 Burg-Type Estimation  418
10.3.4 Calculating the Residuals and Estimating Γa  418
10.3.5 VAR(p) Spectral Density Estimation  419
10.3.6 Fitting a VAR(p) Model to Data  419
10.3.6.2 Estimating the Parameters  419
10.3.6.3 Testing the Residuals for White Noise  419
10.4 Nonstationary VARMA Processes  421
10.5 Testing for Association between Time Series  422
10.5.1 Testing for Independence of Two Stationary Time Series  424
10.5.2 Testing for Cointegration between Nonstationary Time Series  427
10.6.2 Observation Equation  429
10.6.3 Goals of State-Space Modeling  432
10.6.4.1 Prediction (Forecasting)  433
10.6.4.3 Smoothing Using the Kalman Filter  434
10.6.4.4 h-Step Ahead Predictions  434
10.6.5 Kalman Filter and Missing Data  436
10.6.6 Parameter Estimation  439
10.6.7 Using State-Space Methods to Find Additive Components of a Univariate AR Realization  440
10.6.7.1 Revised State-Space Model  441
Appendix 10A Derivation of State-Space Results  443
Appendix 10B Basic Kalman Filtering Routines  449
|
|
11.2 Fractional Difference and FARMA Processes  457
11.3 Gegenbauer and GARMA Processes  464
11.3.1 Gegenbauer Polynomials  464
11.3.2 Gegenbauer Process  465
11.4 k-Factor Gegenbauer and GARMA Processes  472
11.4.1 Calculating Autocovariances  476
11.4.2 Generating Realizations  478
11.5 Parameter Estimation and Model Identification  479
11.6 Forecasting Based on the k-Factor GARMA Model  483
11.7 Testing for Long Memory  484
11.7.1 Testing for Long Memory in the Fractional and FARMA Setting  486
11.7.2 Testing for Long Memory in the Gegenbauer Setting  486
11.8 Modeling Atmospheric CO2 Data Using Long-Memory Models  487
|
|
12.1 Shortcomings of Traditional Spectral Analysis for TVF Data  499
12.2 Window-Based Methods that Localize the "Spectrum" in Time  502
12.2.2 Wigner-Ville Spectrum  505
12.3.1 Fourier Series Background  506
12.3.2 Wavelet Analysis Introduction  506
12.3.3 Fundamental Wavelet Approximation Result  510
12.3.4 Discrete Wavelet Transform for Data Sets of Finite Length  512
12.3.6 Multiresolution Analysis  516
12.3.8 Scalogram: Time-Scale Plot  524
12.3.10 Two-Dimensional Wavelets  534
12.4 Concluding Remarks on Wavelets  537
Appendix 12A Mathematical Preliminaries for This Chapter  539
Appendix 12B Mathematical Preliminaries  541
|
13 G-Stationary Processes  547
13.1 Generalized-Stationary Processes  547
13.1.1 General Strategy for Analyzing G-Stationary Processes  548
13.2 M-Stationary Processes  549
13.2.1 Continuous M-Stationary Process  549
13.2.2 Discrete M-Stationary Process  551
13.2.3 Discrete Euler(p) Model  551
13.2.4 Time Transformation and Sampling  552
13.3 G(λ)-Stationary Processes  556
13.3.1 Continuous G(p; λ) Model  557
13.3.2 Sampling the Continuous G(λ)-Stationary Processes  559
13.3.2.1 Equally Spaced Sampling from G(p; λ) Processes  560
13.3.3 Analyzing TVF Data Using the G(p; λ) Model  561
13.3.3.1 G(p; λ) Spectral Density  563
13.4 Linear Chirp Processes  573
13.4.1 Models for Generalized Linear Chirps  576
Appendix 13A G-Stationary Basics  583
References  595
Index  605