Essentials of Time Series for Financial Applications

Essentials of Time Series for Financial Applications serves as an agile reference for upper-level students and practitioners who want a formal, easy-to-follow introduction to the most important time series methods used in financial applications (pricing, asset management, quant strategies, and...


Bibliographic Details
Other Authors: Guidolin, Massimo (author); Pedio, Manuela (author)
Format: Electronic book
Language: English
Published: London, United Kingdom : Academic Press, an imprint of Elsevier [2018]
Edition: 1st edition
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009630463206719
Table of Contents:
  • Front Cover
  • Essentials of Time Series for Financial Applications
  • Copyright Page
  • Contents
  • List of Figures
  • List of Tables
  • Preface
  • 1 Linear Regression Model
  • 1.1 Inference in Linear Regression Models
  • 1.1.1 The Ordinary Least Squares Estimator
  • 1.1.2 Goodness of Fit Measures
  • 1.1.3 The Generalized Least Squares Estimator
  • 1.1.4 Maximum Likelihood Estimator
  • 1.1.5 Hypothesis Testing, Confidence Intervals, and Predictive Intervals
  • 1.1.6 Linear Regression Model With Stochastic Regressors
  • 1.1.7 Asymptotic Theory for Linear Regressions
  • 1.2 Testing for Violations of the Linear Regression Framework
  • 1.2.1 Linearity
  • 1.2.2 Structural Breaks and Parameter Stability Tests
  • 1.3 Specifying the Regressors
  • 1.3.1 How to Select the Regressors
  • 1.3.2 Multicollinearity
  • 1.3.3 Measurement Errors in the Regressors
  • 1.4 Issues With Heteroskedasticity and Autocorrelation of the Errors
  • 1.4.1 Heteroskedastic Errors
  • 1.4.2 Autocorrelated Errors
  • 1.5 The Interpretation of Regression Results
  • References
  • Appendix 1.A
  • Appendix 1.B Principal Component Analysis
  • 2 Autoregressive Moving Average (ARMA) Models and Their Practical Applications
  • 2.1 Essential Concepts in Time Series Analysis
  • 2.1.1 Time Series and Their Properties
  • 2.1.2 Stationarity
  • 2.1.3 Sample Autocorrelations and Sample Partial Autocorrelations
  • 2.2 Moving Average and Autoregressive Processes
  • 2.2.1 Finite Order Moving Average Processes
  • 2.2.2 Autoregressive Processes
  • 2.2.3 Autoregressive Moving Average Processes
  • 2.3 Selection and Estimation of AR, MA, and ARMA Models
  • 2.3.1 The Selection of the Model and the Role of Information Criteria
  • 2.3.2 Estimation Methods
  • 2.3.3 Residual Diagnostics
  • 2.4 Forecasting ARMA Processes
  • 2.4.1 Standard Principles of Forecasting
  • 2.4.2 Forecasting an AR(p) Process
  • 2.4.3 Forecasting the Future Value of an MA(q) Process
  • 2.4.4 Evaluating the Accuracy of a Forecast Function
  • References
  • Appendix 2.A
  • 3 Vector Autoregressive Moving Average (VARMA) Models
  • 3.1 Foundations of Multivariate Time Series Analysis
  • 3.1.1 Weak Stationarity of Multivariate Time Series
  • 3.1.2 Cross-Covariance and Cross-Correlation Matrices
  • 3.1.3 Sample Cross-Covariance and Cross-Correlation Matrices
  • 3.1.4 Multivariate Portmanteau Tests
  • 3.1.5 Multivariate White Noise Process
  • 3.2 Introduction to Vector Autoregressive Analysis
  • 3.2.1 From Structural to Reduced-Form Vector Autoregressive Models
  • 3.2.2 Stationarity Conditions and the Population Moments of a VAR(1) Process
  • 3.2.3 Generalization to a VAR(p) Model
  • 3.2.4 Estimation of a VAR(p) Model
  • 3.2.5 Specification of a Vector Autoregressive Model and Hypothesis Testing
  • 3.2.6 Forecasting With a Vector Autoregressive Model
  • 3.3 Structural Analysis With Vector Autoregressive Models
  • 3.3.1 Impulse Response Functions
  • 3.3.2 Variance Decompositions
  • 3.3.3 Granger Causality
  • 3.4 Vector Moving Average and Vector Autoregressive Moving Average Models
  • 3.4.1 Vector Moving Average Models
  • 3.4.2 Vector Autoregressive Moving Average Models
  • References
  • 4 Unit Roots and Cointegration
  • 4.1 Defining Unit Root Processes
  • 4.1.1 What Happens If One Incorrectly Detrends a Unit Root Series?
  • 4.1.2 What Happens If One Incorrectly Applies Differencing to (Deterministic) Trend-Stationary Series?
  • 4.1.3 What Happens If One Incorrectly Applies Differencing to a Stationary Series?
  • 4.1.4 What Happens If One Incorrectly Applies Differencing d+r Times to an I(d) Series?
  • 4.2 The Spurious Regression Problem
  • 4.3 Unit Root Tests
  • 4.3.1 Classical Dickey-Fuller Tests
  • 4.3.2 The Augmented Dickey-Fuller Test
  • 4.3.3 Other Unit Root Tests
  • 4.3.4 Testing for Unit Roots in Moving-Average Processes
  • 4.4 Cointegration and Error-Correction Models
  • 4.4.1 The Relationship Between Cointegration and Economic Theory
  • 4.4.2 Definition of Cointegration
  • 4.4.3 Error-Correction Models
  • 4.4.4 Testing for Cointegration
  • References
  • 5 Single-Factor Conditionally Heteroskedastic Models, ARCH and GARCH
  • 5.1 Stylized Facts and Preliminaries
  • 5.1.1 The Stylized Facts of Conditional Heteroskedasticity
  • 5.2 Simple Univariate Parametric Models
  • 5.2.1 Rolling Window Forecasts
  • 5.2.2 Exponential Smoothing Variance Forecasts: RiskMetrics
  • 5.2.3 ARCH Models
  • 5.2.4 Comparing the Performance of Alternative Variance Forecast Models: Do We Need More Than ARCH?
  • 5.2.5 Generalized ARCH Models and Their Statistical Properties
  • 5.2.6 A Few Additional, Popular ARCH Models
  • 5.2.6.1 The Threshold GARCH (TARCH) Model
  • 5.2.6.2 The Power ARCH and NAGARCH Models
  • 5.2.6.3 The Component GARCH Model and the Differences Between Transitory and Permanent Variance Components
  • 5.2.6.4 The GARCH-In-Mean Model
  • 5.3 Advanced Univariate Volatility Modeling
  • 5.3.1 Non-Gaussian Marginal Innovations
  • 5.3.2 GARCH Models Augmented by Exogenous (Predetermined) Factors
  • 5.4 Testing for ARCH
  • 5.4.1 Lagrange Multiplier ARCH Tests
  • 5.4.2 News Impact Curves and Testing for Asymmetric ARCH
  • 5.5 Forecasting With GARCH Models
  • 5.5.1 Long-Horizon, Point Forecasts
  • 5.5.2 Forecasts of Variance for Sums of Returns or Shocks
  • 5.6 Estimation of and Inference on GARCH Models
  • 5.6.1 Maximum Likelihood Estimation
  • 5.6.2 The Properties of MLE
  • 5.6.3 Quasi MLE
  • 5.6.4 Misspecification Tests
  • 5.6.5 Sequential Estimation and QMLE
  • 5.6.6 Data Frequency in Estimation and Temporal Aggregation
  • References
  • Appendix 5.A Nonparametric Kernel Density Estimation
  • 6 Multivariate GARCH and Conditional Correlation Models
  • 6.1 Introduction and Preliminaries
  • 6.2 Simple Models of Covariance Prediction
  • 6.3 Full, Multivariate GARCH Models
  • 6.4 Constant and Dynamic Conditional Correlation Models
  • 6.5 Factor GARCH Models
  • 6.6 Inference and Model Specification
  • References
  • 7 Multifactor Heteroskedastic Models, Stochastic Volatility
  • 7.1 A Primer on the Kalman Filter
  • 7.1.1 A Simple Univariate Example
  • 7.1.2 The General Case
  • 7.2 Simple Stochastic Volatility Models and Their Estimation Using the Kalman Filter
  • 7.2.1 The Economics of Stochastic Volatility: The Normal Mixture Model
  • 7.2.2 One Benchmark Case: The Log-Normal Two-Factor Stochastic Volatility Model
  • 7.3 Extended, Second-Generation Stochastic Volatility Models
  • 7.4 GARCH versus Stochastic Volatility: Which One?
  • 7.4.1 Some GARCH Models Are (Asymptotically) Stochastic Volatility Models
  • 7.4.2 Stressing the Differences: What Have We Learned So Far?
  • References
  • 8 Models With Breaks, Recurrent Regime Switching, and Nonlinearities
  • 8.1 A Primer on the Key Features and Classification of Statistical Models of Instability
  • 8.2 Detecting and Exploiting Structural Change in Linear Models
  • 8.2.1 Chow Tests for Given Break Dates
  • 8.2.2 CUSUM and CUSUM Square Tests
  • 8.2.3 Andrews and Quandt's Single-Break Test
  • 8.2.4 Bai and Perron's Multiple, Endogenous Breaks Test
  • 8.2.5 Testing for Breaks When Testing for Unit Roots and Cointegration, and Vice Versa
  • 8.3 Threshold and Smooth Transition Regime Switching Models
  • 8.3.1 Threshold Regression and Autoregressive Models
  • 8.3.2 Smooth Transition Regression and Autoregressive Models
  • 8.3.3 Testing (Non-)Linearities
  • References
  • 9 Markov Switching Models
  • 9.1 Definitions and Classifications
  • 9.2 Understanding Markov Switching Dynamics Through Simulations
  • 9.2.1 Markov Switching Models as Normal Mixtures and Density Approximation
  • 9.3 Markov Switching Regressions
  • 9.4 Markov Chain Processes and Their Properties
  • 9.5 Estimation and Inference for Markov Switching Models
  • 9.5.1 Maximum Likelihood Estimation and the Expectation-Maximization Algorithm
  • 9.5.2 Tests of Hypotheses
  • 9.5.3 Testing and Selecting the Number of Regimes and the Nuisance Parameters Problem
  • 9.6 Forecasting With Markov Switching Models
  • 9.7 Markov Switching ARCH and DCC Models
  • 9.8 Do Nonlinear and Markov Switching Models Work in Practice?
  • References
  • Appendix 9.A Some Notions Concerning Ergodic Markov Chains
  • Appendix 9.B State-Space Representation of a Markov Switching Model
  • Appendix 9.C First-Order Conditions for Maximum Likelihood Estimation of Markov Switching Models
  • 10 Realized Volatility and Covariance
  • 10.1 Measuring Realized Variance
  • 10.1.1 Quadratic Variation and Its Estimators
  • 10.1.2 Microstructure Noise and the Choice of the Sampling Frequency
  • 10.1.3 Other Bias-Adjusted Measures of Realized Volatility
  • 10.1.4 Jumps and Bipower Variation
  • 10.2 Forecasting Realized Variance
  • 10.2.1 Stylized Facts About Realized Variance
  • 10.2.2 Forecasting Realized Variance: Heterogeneous Autoregressions
  • 10.2.3 Range-Based Variance Forecasts
  • 10.3 Multivariate Applications
  • 10.3.1 Realized Covariance Matrix Estimation
  • 10.3.2 Range-Based Covariance Estimation
  • References
  • Appendix A: Mathematical and Statistical Appendix
  • A. Fundamental Statistical Definitions
  • A.1 Random Variables
  • A.2 Stochastic Processes
  • A.2.1 Convergence in Probability
  • A.2.2 Convergence in Distribution
  • A.3 Key Theorems Concerning Stochastic Processes
  • A.3.1 Law of Large Numbers
  • A.3.2 Lindeberg-Lévy's Central Limit Theorem
  • B. Matrix Algebra
  • B.1 Rank of a Matrix, Eigenvalues, and Eigenvectors