Hidden Markov Model (HMM) self-study (5-2): the Viterbi Algorithm. Continuing from the previous article, the Viterbi algorithm is almost fully in place... Translated by 崔曉源.

Generalising the formula we reached at the end of the last article, the partial probability can be written as:

δ_t(i) = max_j [ δ_{t-1}(j) · a_ji ] · b_i(k_t)
2d. Back pointers, φ's

Consider the trellis below:
We can now find the most probable path to each intermediate or final state, but we still need a way to record that path. This requires remembering, at each state, the previous state on the optimal path into it. Record it as:

φ_t(i) = argmax_j [ δ_{t-1}(j) · a_ji ]
The argmax operator selects the index j that maximises the bracketed expression. One might ask why the observation probability from the confusion matrix is not multiplied in here. The reason is that we only care about which previous state lies on the optimal path to the current state; the observation associated with the current state is irrelevant to that choice.

2e. Two advantages of the Viterbi algorithm

1) Like the Forward algorithm, it greatly reduces the computational complexity.

2) Viterbi works "left to right" through the observation sequence, giving the best interpretation in context. Because it considers the whole observation sequence before committing to a final answer, it avoids being pulled away from the correct answer by a sudden burst of noise partway through, a situation that arises frequently in real data.

==================================================

The complete definition of the Viterbi algorithm follows.

1. Formal definition of algorithm

The algorithm may be summarised formally as:

For each i, i = 1, ..., n, let:

δ_1(i) = π(i) · b_i(k_1)
- this initialises the probability calculations by taking the product of the initial hidden state probabilities with the associated observation probabilities. For t = 2, ..., T, and i = 1, ..., n, let:

δ_t(i) = max_j [ δ_{t-1}(j) · a_ji ] · b_i(k_t)
φ_t(i) = argmax_j [ δ_{t-1}(j) · a_ji ]
- thus determining the most probable route to the next state, and remembering how to get there. This is done by considering all products of transition probabilities with the maximal probabilities already derived for the preceding step. The largest such product is remembered, together with the state that provoked it. Let:

i_T = argmax_i [ δ_T(i) ]
- thus determining which state at system completion (t = T) is the most probable. For t = T - 1, ..., 1, let:

i_t = φ_{t+1}(i_{t+1})
- thus backtracking through the trellis, following the most probable route. On completion, the sequence i_1, ..., i_T will hold the most probable sequence of hidden states for the observation sequence in hand.

==================================================
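The four steps above translate directly into code. Below is a minimal sketch in Python; the parameter layout (pi, A, B as nested lists of probabilities, observations as symbol indices) is an assumption made for illustration, not code from the original tutorial.

```python
def viterbi(pi, A, B, obs):
    """Most probable hidden-state sequence for an observation sequence.

    pi[i]   -- initial probability of hidden state i
    A[j][i] -- transition probability from state j to state i
    B[i][k] -- probability of observing symbol k in state i
    obs     -- observation sequence as a list of symbol indices
    """
    n, T = len(pi), len(obs)

    # Initialisation: delta_1(i) = pi(i) * b_i(k_1)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]
    phi = []  # back pointers phi_t(i) for t = 2, ..., T

    # Recursion: delta_t(i) = max_j [delta_{t-1}(j) * a_ji] * b_i(k_t)
    for t in range(1, T):
        back = [max(range(n), key=lambda j: delta[j] * A[j][i])
                for i in range(n)]
        delta = [delta[back[i]] * A[back[i]][i] * B[i][obs[t]]
                 for i in range(n)]
        phi.append(back)

    # Termination: i_T = argmax_i delta_T(i)
    path = [max(range(n), key=lambda i: delta[i])]

    # Backtracking: i_t = phi_{t+1}(i_{t+1})
    for back in reversed(phi):
        path.append(back[path[-1]])
    path.reverse()
    return path
```

With an assumed two-state model, e.g. `viterbi([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.5, 0.5], [0.1, 0.9]], [0, 0, 1])`, the function returns the full hidden-state index sequence, one state per observation.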
We return to the weather example to show how the partial probability δ for the state CLOUDY is computed; note the difference from the Forward algorithm:
Once again, the same key formula:
See it now? It should all have clicked by this point. If it still hasn't, there is one more resort: watch the animated version.
Parameter definitions:
Don't forget: the goal of the Viterbi algorithm is to find the most probable sequence of hidden states for a given observation sequence, and the Viterbi algorithm is not thrown off by noise in the middle of that sequence.
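The CLOUDY computation, and its contrast with the Forward algorithm, can also be sketched numerically. All the numbers below are illustrative assumptions, not the actual values of the article's weather model: Forward sums over the predecessor states, while Viterbi takes the maximum and remembers which predecessor produced it.

```python
# Assumed partial probabilities delta_1(j) at t = 1 (illustrative only)
delta_1 = {"Sunny": 0.30, "Cloudy": 0.10, "Rainy": 0.05}
# Assumed transition probabilities a_{j,Cloudy} into CLOUDY
a_to_cloudy = {"Sunny": 0.3, "Cloudy": 0.4, "Rainy": 0.4}
# Assumed observation probability b_Cloudy(k_2) at t = 2
b_cloudy = 0.4

# Forward algorithm: SUM over all predecessor states
alpha_2_cloudy = sum(delta_1[j] * a_to_cloudy[j] for j in delta_1) * b_cloudy

# Viterbi: MAX over predecessor states, remembering the winner (back pointer)
best_prev = max(delta_1, key=lambda j: delta_1[j] * a_to_cloudy[j])
delta_2_cloudy = delta_1[best_prev] * a_to_cloudy[best_prev] * b_cloudy
```

With these assumed numbers the best predecessor of CLOUDY is Sunny (0.30 · 0.3 = 0.09 beats 0.04 and 0.02), so δ_2(Cloudy) = 0.09 · 0.4 = 0.036, whereas the Forward algorithm's sum gives α_2(Cloudy) = 0.15 · 0.4 = 0.06.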