Abstract
I often think of state space systems with continuously distributed states and observations in terms of equations such as

    X(t + 1) = F(X(t), t) + η(t),    (4.1a)
    Y(t) = G(X(t), t) + ϵ(t),        (4.1b)

where X ∈ ℝⁿ is the state variable, Y ∈ ℝᵐ is the observation, and η(t) and ϵ(t) are noise terms. Equations (4.1) define the conditional probability densities

    P(x(t + 1) | x(t))

and

    P(y(t) | x(t)).

Having each of the noise terms η(t) and ϵ(t) in (4.1) be independent of all other noise terms is sufficient to ensure that the following assumptions hold:
1. Given the state at time t, the observation Y(t) is independent of everything else.
2. The dynamics of the state variable X are Markov.
These are the same as the assumptions of (1.19) and (1.18), which characterize HMMs with discrete observations. In this chapter, I write forward and backward algorithms by replacing sums of finite probabilities with integrals over probability densities. If the functions F and G are linear in X and the noise terms η(t) and ϵ(t) are independent and Gaussian, then the forward algorithm is called Kalman filtering.
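As a concrete illustration (my own sketch, not from the text), the following Python fragment simulates a linear Gaussian instance of (4.1). The particular matrices F, G and the noise scales eta_std and eps_std are arbitrary choices for the example, not values from the book.

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary linear maps for illustration:
    # X(t+1) = F X(t) + eta(t),  Y(t) = G X(t) + epsilon(t)
    F = np.array([[0.9, 0.1],
                  [-0.1, 0.9]])   # state dynamics, n = 2
    G = np.array([[1.0, 0.0]])    # observation map, m = 1
    eta_std, eps_std = 0.1, 0.2   # noise standard deviations

    T = 100
    x = np.zeros(2)
    xs, ys = [], []
    for t in range(T):
        x = F @ x + eta_std * rng.standard_normal(2)   # state noise eta(t)
        y = G @ x + eps_std * rng.standard_normal(1)   # observation noise epsilon(t)
        xs.append(x)
        ys.append(y)

Because each draw of η(t) and ϵ(t) is independent of every other draw, the simulated sequence satisfies assumptions 1 and 2 above by construction.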
In this chapter, I go over the forward and backward algorithms for continuous states and observations three times. First, in Section 4.1, I emphasize the parallel to Chapter 2 by simply replacing sums in the development of that chapter with integrals. By themselves, the results are not immediately useful because one must specify parametric forms for the probability densities and do the integrals to implement the algorithms. Next, in Section 4.2, I concisely present the algorithms one obtains when the functions F and G in (4.1) are linear and the noise terms η(t) and ϵ(t) are Gaussian.
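As a preview of Section 4.2, here is a minimal sketch of the forward algorithm for the linear Gaussian case, i.e., the Kalman filter. The notation (F, G, and the noise covariances Q and R) is mine, chosen to match the simulation above rather than the book's equations; the recursion itself is the standard predict/update pair.

    import numpy as np

    def kalman_filter(ys, F, G, Q, R, mu0, P0):
        """Forward pass for X(t+1) = F X(t) + eta, Y(t) = G X(t) + epsilon,
        with eta ~ N(0, Q) and epsilon ~ N(0, R).  Returns the means and
        covariances of the filtered densities P(x(t) | y(1), ..., y(t))."""
        mu, P = mu0, P0
        means, covs = [], []
        for y in ys:
            # Predict: push the previous estimate through the dynamics.
            mu = F @ mu
            P = F @ P @ F.T + Q
            # Update: condition the Gaussian forecast on the new observation.
            S = G @ P @ G.T + R                 # innovation covariance
            K = P @ G.T @ np.linalg.inv(S)      # Kalman gain
            mu = mu + K @ (y - G @ mu)
            P = P - K @ G @ P
            means.append(mu)
            covs.append(P)
        return means, covs

Run on the ys from the simulation above with Q = eta_std**2 * np.eye(2), R = eps_std**2 * np.eye(1), mu0 = np.zeros(2), and P0 = np.eye(2), the filtered means track the hidden states. Because the densities stay Gaussian under linear maps, the integral in the general forward algorithm reduces to this pair of matrix updates.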