Description
This course is for practicing and aspiring data scientists and statisticians. It is the fourth of a four-course sequence introducing the fundamentals of Bayesian statistics, and it builds on the earlier courses Bayesian Statistics: From Concept to Data Analysis, Bayesian Statistics: Techniques and Models, and Bayesian Statistics: Mixture Models.
Time series analysis is concerned with modeling the dependency among elements of a sequence of temporally related variables. To succeed in this course, you should be familiar with calculus-based probability, the principles of maximum likelihood estimation, and Bayesian inference. You will learn how to build models that describe temporal dependencies and how to perform Bayesian inference and forecasting with those models. You will apply what you've learned using the free, open-source software R and sample datasets. Your instructor, Raquel Prado, will take you from the basic concepts of modeling temporally dependent data to the implementation of specific classes of models.
Syllabus
- Week 1: Introduction to time series and the AR(1) process
- This module defines stationary time series processes, the autocorrelation function, and the autoregressive process of order one, or AR(1). Parameter estimation via maximum likelihood and Bayesian inference in the AR(1) are also discussed; a minimal R sketch of these steps appears after the syllabus.
- Week 2: The AR(p) process
- This module extends the concepts learned in Week 1 about the AR(1) process to the general case of the AR(p). Maximum likelihood estimation and Bayesian posterior inference in the AR(p) are discussed.
- Week 3: Normal dynamic linear models, Part I
- Normal Dynamic Linear Models (NDLMs) are defined and illustrated in this module using several examples. Model building based on the forecast function via the superposition principle is explained. Methods for Bayesian filtering, smoothing, and forecasting for NDLMs in the case of known observational variances and known system covariance matrices are discussed and illustrated; a short NDLM sketch appears after the syllabus.
- Week 4: Normal dynamic linear models, Part II
- Week 5: Final Project
- In this final project you will use normal dynamic linear models to analyze a time series dataset downloaded from Google Trends; a sketch of one possible workflow appears after the syllabus.
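Code sketches
The Week 1 and Week 2 material can be previewed with a few lines of base R. This is a minimal sketch, not course code: it simulates an AR(1) process, fits it by maximum likelihood with arima(), and, keeping the innovation variance fixed, computes the posterior of the AR coefficient under the conditional likelihood and a flat reference prior. The same arima() call with a larger order handles the AR(p) case from Week 2.
```r
# Minimal sketch (not course code): AR(1) simulation, ML fit, and the
# posterior of the AR coefficient phi under a flat prior with v fixed.
set.seed(42)
n   <- 500
phi <- 0.9          # true AR coefficient
v   <- 1.0          # true innovation variance
y   <- arima.sim(model = list(ar = phi), n = n, sd = sqrt(v))

# Maximum likelihood estimation (for AR(p), use order = c(p, 0, 0))
ml_fit <- arima(y, order = c(1, 0, 0), include.mean = FALSE)
print(ml_fit$coef)

# Bayesian inference under the conditional likelihood and a flat prior:
# p(phi | y, v) is normal with mean sum(y_t * y_{t-1}) / sum(y_{t-1}^2)
# and variance v / sum(y_{t-1}^2).
y_t   <- y[2:n]
y_lag <- y[1:(n - 1)]
post_mean <- sum(y_t * y_lag) / sum(y_lag^2)
post_var  <- v / sum(y_lag^2)
phi_draws <- rnorm(5000, post_mean, sqrt(post_var))
quantile(phi_draws, c(0.025, 0.5, 0.975))   # posterior summary for phi
```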
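For the NDLM weeks, one convenient way to experiment outside the course materials is the dlm R package (an assumption here; the course may supply its own code). The sketch below builds a first-order polynomial NDLM with known observational variance and known system variance, then runs Kalman filtering, smoothing, and forecasting, mirroring the known-variance setting of Week 3.
```r
# Minimal sketch using the 'dlm' package (an assumption; not course code).
# First-order polynomial NDLM with known variances dV and dW.
library(dlm)

set.seed(1)
y <- cumsum(rnorm(100, sd = 0.5)) + rnorm(100, sd = 1)  # toy local-level data

mod  <- dlmModPoly(order = 1, dV = 1, dW = 0.25)  # known variances
filt <- dlmFilter(y, mod)                         # forward (Kalman) filtering
smth <- dlmSmooth(filt)                           # backward smoothing
fore <- dlmForecast(filt, nAhead = 10)            # 10-step-ahead forecasting

head(dropFirst(filt$m))   # filtered state means
head(dropFirst(smth$s))   # smoothed state means
fore$f                    # mean forecasts for the next 10 time points
```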
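Finally, a sketch of one possible shape for the final project. The file and column names are hypothetical, and for brevity the unknown variances are plugged in via maximum likelihood (dlmMLE) rather than a full Bayesian treatment; adapt the model and the inference to whatever the project actually requires.
```r
# Hypothetical final-project workflow: read a Google Trends CSV export,
# fit a second-order polynomial NDLM, and forecast a year ahead.
library(dlm)

trends <- read.csv("google_trends_export.csv")   # hypothetical file name
y <- ts(trends$hits, frequency = 12)             # hypothetical column; monthly data assumed

# Plug-in variance estimates via maximum likelihood (not a full Bayesian fit).
build <- function(par) {
  dlmModPoly(order = 2, dV = exp(par[1]), dW = exp(par[2:3]))
}
fit  <- dlmMLE(y, parm = rep(0, 3), build = build)
mod  <- build(fit$par)

filt <- dlmFilter(y, mod)
fore <- dlmForecast(filt, nAhead = 12)
fore$f   # twelve-step-ahead mean forecasts
```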
Bayesian Statistics: Time Series Analysis
- Type: Online Courses
- Provider: Coursera