Hidden Markov Models
https://stats.stackexchange.com/tags/hidden-markov-model/info
Hidden Markov Models are used for modelling systems that are assumed to be Markov processes with hidden (i.e. unobserved) states.
A hidden Markov model is a triple $\left\langle\vec{\pi},A,B\right\rangle$, with $\vec{\pi}$ a probability vector over the $n$ hidden states, $A$ an $n\times n$ transition matrix, and $B$ an $n\times m$ emission matrix over the $m$ possible observation symbols. $\pi_i$ is the probability of the system being in hidden state $i$ at time step $0$; $a_{ij}$ is the probability of the system being in hidden state $j$ at time $t+1$ given that it was in hidden state $i$ at time $t$; and $b_{ik}$ is the probability of observing symbol $k$ given that the system is in hidden state $i$.
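For concreteness, here is a minimal sketch of such a triple as NumPy arrays. The two-state "weather" model (states Rainy/Sunny, symbols walk/shop/clean) and all of its numbers are invented for illustration:

```python
import numpy as np

# Hypothetical two-state, three-symbol HMM (illustrative values only).
pi = np.array([0.6, 0.4])            # pi_i: P(Z_0 = i)
A  = np.array([[0.7, 0.3],           # a_ij: P(Z_{t+1} = j | Z_t = i)
               [0.4, 0.6]])
B  = np.array([[0.1, 0.4, 0.5],      # b_ik: P(X_t = k | Z_t = i)
               [0.6, 0.3, 0.1]])

# pi and each row of A and B are probability distributions.
assert np.allclose(pi.sum(), 1)
assert np.allclose(A.sum(axis=1), 1)
assert np.allclose(B.sum(axis=1), 1)
```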
A hidden Markov model thus describes a Markov process in which the state of the system is "hidden" and only the observations can be seen. Such models are used in several application areas, including speech recognition, part-of-speech tagging, and computational biology. The parameters can be estimated with the popular Baum-Welch algorithm, and the most probable sequence of hidden states can be computed with the Viterbi algorithm.
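As a sketch of the decoding step, here is a minimal Viterbi implementation in log space (to avoid numerical underflow); the function name and interface are my own, not from any library:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable hidden-state sequence for the observation indices `obs`."""
    n, T = len(pi), len(obs)
    log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)

    delta = np.zeros((T, n))           # delta[t, j]: best log-prob of any path ending in state j at time t
    psi = np.zeros((T, n), dtype=int)  # psi[t, j]: predecessor of j on that best path

    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: extend the best path at i with i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]

    # Backtrack from the best final state.
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states
```

With the toy parameters above, `viterbi(pi, A, B, [0, 2, 1])` returns the most likely state sequence for the observation symbols $0, 2, 1$.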
The standard HMM can be viewed graphically as follows:
[Figure: graphical model of the HMM, a chain of hidden states $Z_0 \rightarrow Z_1 \rightarrow \cdots$, with each $Z_t$ emitting an observation $X_t$.]
Here $Z_t$ is the hidden state and $X_t$ the observation at time $t$. We can use the conditional independence structure expressed by this graphical model to derive the algorithms above.
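Concretely, the arrows in the graph encode the factorisation of the joint distribution (a standard identity, written in the notation of the definition above):

$$p(z_{0:T}, x_{0:T}) \;=\; \pi_{z_0}\, b_{z_0 x_0} \prod_{t=1}^{T} a_{z_{t-1} z_t}\, b_{z_t x_t}.$$

Both the forward-backward recursions used by Baum-Welch and the Viterbi recursion follow directly from this product form.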
Extensions exist that support continuous observations as well.
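As a minimal sketch of one common such extension, a Gaussian-emission HMM replaces each row of the discrete emission matrix $B$ with a per-state density; the means and standard deviations below are invented for illustration:

```python
from scipy.stats import norm

# Hypothetical per-state Gaussian emission parameters (illustrative only).
means = [0.0, 3.0]   # emission mean for each hidden state
stds  = [1.0, 0.5]   # emission standard deviation for each hidden state

def emission_logpdf(i, x):
    """log p(x | Z_t = i): the matrix entry b_{ik} becomes a density
    evaluated at the continuous observation x."""
    return norm.logpdf(x, loc=means[i], scale=stds[i])
```

The forward, backward, and Viterbi recursions are unchanged; only the emission term swaps from a matrix lookup to a density evaluation.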