Tutorial on EM algorithm

At the 2nd iteration, E-step, the partitioned parameters associated with $X_2$ (whose components $x_{21}$ and $x_{23}$ are missing) are:

$\mu_{mis}(2) = \left(\mu_1^{(2)} = 0.5, \mu_3^{(2)} = 1.5\right)^T, \quad \mu_{obs}(2) = \left(\mu_2^{(2)} = 1, \mu_4^{(2)} = 2\right)^T$

$\Sigma_{mis}(2) = \begin{pmatrix} \sigma_{11}^{(2)} = 0.75 & \sigma_{13}^{(2)} = 0.75 \\ \sigma_{31}^{(2)} = 0.75 & \sigma_{33}^{(2)} = 2.75 \end{pmatrix}, \quad \Sigma_{obs}(2) = \begin{pmatrix} \sigma_{22}^{(2)} = 1.5 & \sigma_{24}^{(2)} = 2 \\ \sigma_{42}^{(2)} = 2 & \sigma_{44}^{(2)} = 4.5 \end{pmatrix}$

$V_{obs}^{mis}(2) = \begin{pmatrix} \sigma_{12}^{(2)} = -0.5 & \sigma_{14}^{(2)} = -1 \\ \sigma_{32}^{(2)} = -1.5 & \sigma_{34}^{(2)} = -3 \end{pmatrix}, \quad V_{mis}^{obs}(2) = \begin{pmatrix} \sigma_{21}^{(2)} = -0.5 & \sigma_{23}^{(2)} = -1.5 \\ \sigma_{41}^{(2)} = -1 & \sigma_{43}^{(2)} = -3 \end{pmatrix}$

The parameters of the conditional PDF of $X_{mis}(2)$ given $X_{obs}(2)$ are:

$\mu_{M_2} = \mu_{mis}(2) + V_{obs}^{mis}(2)\left(\Sigma_{obs}(2)\right)^{-1}\left(X_{obs}(2) - \mu_{obs}(2)\right) = \left(\mu_{M_2}(1) \approx 0.05, \mu_{M_2}(3) \approx 0.14\right)^T$

$\Sigma_{M_2} = \Sigma_{mis}(2) - V_{obs}^{mis}(2)\left(\Sigma_{obs}(2)\right)^{-1}V_{mis}^{obs}(2) = \begin{pmatrix} \Sigma_{M_2}(1,1) \approx 0.52 & \Sigma_{M_2}(1,3) \approx 0.07 \\ \Sigma_{M_2}(3,1) \approx 0.07 & \Sigma_{M_2}(3,3) \approx 0.7 \end{pmatrix}$

The components of the sufficient statistic are:

$\bar{x}_1^{(2)} = \frac{1}{2}\left(x_{11} + \mu_{M_2}(1)\right) \approx 0.52, \quad \bar{x}_2^{(2)} = \frac{1}{2}\left(\mu_{M_1}(2) + x_{22}\right) \approx 1.1$
$\bar{x}_3^{(2)} = \frac{1}{2}\left(x_{13} + \mu_{M_2}(3)\right) \approx 1.57, \quad \bar{x}_4^{(2)} = \frac{1}{2}\left(\mu_{M_1}(4) + x_{24}\right) \approx 2.17$

$s_{11}^{(2)} = \frac{1}{2}\left((x_{11})^2 + \left(\Sigma_{M_2}(1,1) + (\mu_{M_2}(1))^2\right)\right) \approx 0.76$
$s_{12}^{(2)} = s_{21}^{(2)} = \frac{1}{2}\left(x_{11}\mu_{M_1}(2) + \mu_{M_2}(1)x_{22}\right) \approx 0.13$
$s_{13}^{(2)} = s_{31}^{(2)} = \frac{1}{2}\left(x_{11}x_{13} + \left(\Sigma_{M_2}(1,3) + \mu_{M_2}(1)\mu_{M_2}(3)\right)\right) \approx 1.54$
$s_{14}^{(2)} = s_{41}^{(2)} = \frac{1}{2}\left(x_{11}\mu_{M_1}(4) + \mu_{M_2}(1)x_{24}\right) \approx 0.17$
$s_{22}^{(2)} = \frac{1}{2}\left(\left(\Sigma_{M_1}(2,2) + (\mu_{M_1}(2))^2\right) + (x_{22})^2\right) \approx 2.35$
$s_{23}^{(2)} = s_{32}^{(2)} = \frac{1}{2}\left(\mu_{M_1}(2)x_{13} + x_{22}\mu_{M_2}(3)\right) \approx 0.39$
$s_{24}^{(2)} = s_{42}^{(2)} = \frac{1}{2}\left(\left(\Sigma_{M_1}(2,4) + \mu_{M_1}(2)\mu_{M_1}(4)\right) + x_{22}x_{24}\right) \approx 4.19$
$s_{33}^{(2)} = \frac{1}{2}\left((x_{13})^2 + \left(\Sigma_{M_2}(3,3) + (\mu_{M_2}(3))^2\right)\right) \approx 4.86$
$s_{34}^{(2)} = s_{43}^{(2)} = \frac{1}{2}\left(x_{13}\mu_{M_1}(4) + \mu_{M_2}(3)x_{24}\right) \approx 0.77$
$s_{44}^{(2)} = \frac{1}{2}\left(\left(\Sigma_{M_1}(4,4) + (\mu_{M_1}(4))^2\right) + (x_{24})^2\right) \approx 8.64$

At the 2nd iteration, M-step, we have:

$\mu_1^{(3)} = \bar{x}_1^{(2)} \approx 0.52, \quad \mu_2^{(3)} = \bar{x}_2^{(2)} \approx 1.1, \quad \mu_3^{(3)} = \bar{x}_3^{(2)} \approx 1.57, \quad \mu_4^{(3)} = \bar{x}_4^{(2)} \approx 2.17$

$\sigma_{jj}^{(3)} = s_{jj}^{(2)} - \left(\bar{x}_j^{(2)}\right)^2, \quad \sigma_{jk}^{(3)} = \sigma_{kj}^{(3)} = s_{jk}^{(2)} - \bar{x}_j^{(2)}\bar{x}_k^{(2)}$

which gives $\sigma_{11}^{(3)} \approx 0.49$, $\sigma_{12}^{(3)} \approx -0.44$, $\sigma_{13}^{(3)} \approx 0.72$, $\sigma_{14}^{(3)} \approx -0.96$, $\sigma_{22}^{(3)} \approx 1.17$, $\sigma_{23}^{(3)} \approx -1.31$, $\sigma_{24}^{(3)} \approx 1.85$, $\sigma_{33}^{(3)} \approx 2.4$, $\sigma_{34}^{(3)} \approx -2.63$, $\sigma_{44}^{(3)} \approx 3.94$, and

$p^{(3)} = \frac{c(Z_1) + c(Z_2)}{4 \cdot 2} = \frac{2 + 2}{4 \cdot 2} = 0.5$

Because the sample is too small for GEM to converge to an exact maximizer within a small number of iterations, we can stop GEM at the second iteration with Θ(3) = Θ* = (μ*, Σ*)^T and Φ(3) = Φ* = p*, since the difference between Θ(2) and Θ(3) is insignificant:

$\mu^* = \left(\mu_1^* = 0.52, \mu_2^* = 1.1, \mu_3^* = 1.57, \mu_4^* = 2.17\right)^T$

$\Sigma^* = \begin{pmatrix} 0.49 & -0.44 & 0.72 & -0.96 \\ -0.44 & 1.17 & -1.31 & 1.85 \\ 0.72 & -1.31 & 2.4 & -2.63 \\ -0.96 & 1.85 & -2.63 & 3.94 \end{pmatrix}, \quad p^* = 0.5$

As aforementioned, because Xmis is a part of X and f(Xmis | ΘM) is derived directly from f(X|Θ), in practice we can stop GEM after its first iteration, which is reasonable enough for handling missing data.

As aforementioned, an interesting application of handling missing data is to fill in or predict missing values. For instance, the missing part Xmis(1) of X1 = (x11=1, x12=?, x13=3, x14=?)^T is filled in by $\mu_{M_1}^*$ according to equation 5.2.44 as follows:

$x_{12} = \mu_2^* = 1.1, \quad x_{14} = \mu_4^* = 2.17$
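The E-step quantities above are the standard conditional-normal formulas applied to the current parameter estimate. The following sketch is not part of the original tutorial; it simply reproduces $\mu_{M_2}$ and $\Sigma_{M_2}$ with NumPy, and the helper name, index arguments, and variable names are our own.

```python
import numpy as np

# Minimal sketch (assumed helper, not from the tutorial): conditional mean and
# covariance of the missing components given the observed ones under N(mu, Sigma).
def conditional_normal(mu, Sigma, obs_idx, mis_idx, x_obs):
    """Return mu_M and Sigma_M of X_mis | X_obs = x_obs."""
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]   # Sigma_obs
    V_mo = Sigma[np.ix_(mis_idx, obs_idx)]   # cross-covariance between missing and observed
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]   # Sigma_mis
    mu_M = mu[mis_idx] + V_mo @ np.linalg.solve(S_oo, np.asarray(x_obs, float) - mu[obs_idx])
    Sigma_M = S_mm - V_mo @ np.linalg.solve(S_oo, V_mo.T)
    return mu_M, Sigma_M

# Second-iteration parameters of the example; components 1 and 3 of X2 are missing.
mu2 = np.array([0.5, 1.0, 1.5, 2.0])
Sigma2 = np.array([[0.75, -0.5, 0.75, -1.0],
                   [-0.5,  1.5, -1.5,  2.0],
                   [0.75, -1.5, 2.75, -3.0],
                   [-1.0,  2.0, -3.0,  4.5]])
mu_M2, Sigma_M2 = conditional_normal(mu2, Sigma2, obs_idx=[1, 3], mis_idx=[0, 2], x_obs=[2.0, 4.0])
print(mu_M2)     # approximately [0.05, 0.14]
print(Sigma_M2)  # approximately [[0.52, 0.07], [0.07, 0.70]]
```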
Now we survey another interesting case in which the sample 𝒳 = {X1, X2,…, XN}, whose Xi are iid, is MCAR data and f(X|Θ) is the multinomial PDF of K trials. We ignore the missingness variable Z here because it was already covered in the case of the multinormal PDF. Let X = {Xobs, Xmis} be the random variable representing every Xi and suppose the dimension of X is n. According to equation 5.2.9, recall that

$X_i = \{X_{obs}(i), X_{mis}(i)\} = (x_{i1}, x_{i2}, \dots, x_{in})^T$
$X_{mis}(i) = \left(x_{im_{i1}}, x_{im_{i2}}, \dots, x_{im_{i|M_i|}}\right)^T, \quad X_{obs}(i) = \left(x_{i\bar{m}_{i1}}, x_{i\bar{m}_{i2}}, \dots, x_{i\bar{m}_{i|\bar{M}_i|}}\right)^T$
$M_i = \left\{m_{i1}, m_{i2}, \dots, m_{i|M_i|}\right\}, \quad \bar{M}_i = \left\{\bar{m}_{i1}, \bar{m}_{i2}, \dots, \bar{m}_{i|\bar{M}_i|}\right\}$

The PDF of X is:

$f(X|\Theta) = f(X_{obs}, X_{mis}|\Theta) = \frac{K!}{\prod_{j=1}^{n} x_j!}\prod_{j=1}^{n} p_j^{x_j}$   (5.2.45)

where the xj are integers and Θ = (p1, p2,…, pn)^T is the set of probabilities such that

$\sum_{j=1}^{n} p_j = 1, \quad \sum_{j=1}^{n} x_j = K, \quad x_j \in \{0, 1, \dots, K\}$

Note that xj is the number of trials generating nominal value j. Therefore,

$f(X_i|\Theta) = f(X_{obs}(i), X_{mis}(i)|\Theta) = \frac{K!}{\prod_{j=1}^{n} x_{ij}!}\prod_{j=1}^{n} p_j^{x_{ij}}, \quad \text{where } \sum_{j=1}^{n} x_{ij} = K, \; x_{ij} \in \{0, 1, \dots, K\}$

The most important task here is to define equation 5.2.11 and equation 5.2.15 so as to compose τ(X) from τ(Xobs), τ(Xmis) and to extract ΘM from Θ when f(X|Θ) is a multinomial PDF. Let Θmis be the parameter of the marginal PDF of Xmis; we have:

$f(X_{mis}|\Theta_{mis}) = \frac{K_{mis}!}{\prod_{m_j \in M} x_{m_j}!}\prod_{j=1}^{|M|}\left(\frac{p_{m_j}}{P_{mis}}\right)^{x_{m_j}}$   (5.2.46)

Therefore,

$f\left(X_{mis}(i)\,\middle|\,\Theta_{mis}(i)\right) = \frac{K_{mis}(i)!}{\prod_{m_j \in M_i} x_{im_j}!}\prod_{j=1}^{|M_i|}\left(\frac{p_{m_{ij}}}{P_{mis}(i)}\right)^{x_{im_j}}$

where

$\Theta_{mis}(i) = \left(\frac{p_{m_{i1}}}{P_{mis}(i)}, \frac{p_{m_{i2}}}{P_{mis}(i)}, \dots, \frac{p_{m_{i|M_i|}}}{P_{mis}(i)}\right)^T$
$P_{mis}(i) = \sum_{j=1}^{|M_i|} p_{m_{ij}}, \quad K_{mis}(i) = \sum_{j=1}^{|M_i|} x_{m_{ij}}$   (5.2.47)

Obviously, Θmis(i) is extracted from Θ given the indicator Mi. Let Θobs be the parameter of the marginal PDF of Xobs; we have:

$f(X_{obs}|\Theta_{obs}) = \frac{K_{obs}!}{\prod_{\bar{m}_j \in \bar{M}} x_{\bar{m}_j}!}\prod_{j=1}^{|\bar{M}|}\left(\frac{p_{\bar{m}_j}}{P_{obs}}\right)^{x_{\bar{m}_j}}$   (5.2.48)

Therefore,

$f\left(X_{obs}(i)\,\middle|\,\Theta_{obs}(i)\right) = \frac{K_{obs}(i)!}{\prod_{\bar{m}_j \in \bar{M}_i} x_{i\bar{m}_j}!}\prod_{j=1}^{|\bar{M}_i|}\left(\frac{p_{\bar{m}_{ij}}}{P_{obs}(i)}\right)^{x_{i\bar{m}_j}}$   (5.2.49)

where

$\Theta_{obs}(i) = \left(\frac{p_{\bar{m}_{i1}}}{P_{obs}(i)}, \frac{p_{\bar{m}_{i2}}}{P_{obs}(i)}, \dots, \frac{p_{\bar{m}_{i|\bar{M}_i|}}}{P_{obs}(i)}\right)^T$
$P_{obs}(i) = \sum_{j=1}^{|\bar{M}_i|} p_{\bar{m}_{ij}}, \quad K_{obs}(i) = \sum_{j=1}^{|\bar{M}_i|} x_{\bar{m}_{ij}}$

Obviously, Θobs(i) is extracted from Θ given the indicator M̄i (or Mi). The conditional PDF of Xmis given Xobs is calculated from the PDF of X and the marginal PDF of Xobs as follows:

$f(X_{mis}|\Theta_M) = f(X_{mis}|X_{obs}, \Theta) = \frac{f(X_{obs}, X_{mis}|\Theta)}{f(X_{obs}|\Theta_{obs})} = \frac{\dfrac{K!}{\prod_{j=1}^{n} x_j!}\prod_{j=1}^{n} p_j^{x_j}}{\dfrac{K_{obs}!}{\prod_{j=1}^{|\bar{M}|} x_{\bar{m}_j}!}\prod_{j=1}^{|\bar{M}|}\left(\dfrac{p_{\bar{m}_j}}{P_{obs}}\right)^{x_{\bar{m}_j}}}$
$= \frac{K!}{K_{obs}!\prod_{j=1}^{|M|} x_{m_j}!}\left(\prod_{j=1}^{|M|} p_{m_j}^{x_{m_j}}\right)\left(\prod_{j=1}^{|\bar{M}|}\left(\frac{P_{obs}}{p_{\bar{m}_j}}\right)^{x_{\bar{m}_j}} p_{\bar{m}_j}^{x_{\bar{m}_j}}\right) = \frac{K!}{K_{obs}!\prod_{j=1}^{|M|} x_{m_j}!}\left(\prod_{j=1}^{|M|} p_{m_j}^{x_{m_j}}\right)(P_{obs})^{K_{obs}}$

This implies that the conditional PDF of Xmis given Xobs is a multinomial PDF of K trials:

$f(X_{mis}|X_{obs}, \Theta_M) = f(X_{mis}|X_{obs}, \Theta) = \frac{K!}{K_{obs}!\prod_{j=1}^{|M|} x_{m_j}!}\left(\prod_{j=1}^{|M|} p_{m_j}^{x_{m_j}}\right)(P_{obs})^{K_{obs}}$

Therefore,

$f\left(X_{mis}(i)\,\middle|\,X_{obs}(i), \Theta_{M_i}\right) = f\left(X_{mis}(i)\,\middle|\,X_{obs}(i), \Theta\right) = \frac{K!}{K_{obs}(i)!\prod_{j=1}^{|M_i|} x_{im_j}!}\left(\prod_{j=1}^{|M_i|} p_{m_{ij}}^{x_{im_j}}\right)\left(P_{obs}(i)\right)^{K_{obs}(i)}$   (5.2.50)

where

$P_{obs}(i) = \sum_{j=1}^{|\bar{M}_i|} p_{\bar{m}_{ij}}, \quad K_{obs}(i) = \sum_{j=1}^{|\bar{M}_i|} x_{\bar{m}_{ij}}$

Obviously, the parameter $\Theta_{M_i}$ of the conditional PDF $f\left(X_{mis}(i)\,\middle|\,X_{obs}(i), \Theta_{M_i}\right)$ is:

$\Theta_{M_i} = u\left(\Theta, X_{obs}(i)\right) = \left(p_{m_{i1}}, p_{m_{i2}}, \dots, p_{m_{i|M_i|}}, P_{obs}(i) = \sum_{j=1}^{|\bar{M}_i|} p_{\bar{m}_{ij}}\right)^T$   (5.2.51)

Therefore, equation 5.2.51, which extracts $\Theta_{M_i}$ from Θ given Xobs(i), is an instance of equation 5.2.15.
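As a quick illustration of equations 5.2.50 and 5.2.51, the sketch below is not from the tutorial; the function names, the toy probabilities, and the 0-based index convention are our own. It extracts $\Theta_{M_i}$ from Θ for one observation and evaluates the conditional PDF of its missing cells.

```python
import math
import numpy as np

# Sketch of equation 5.2.51: the p_mj of the missing cells plus P_obs(i).
def extract_theta_Mi(p, mis_idx):
    p = np.asarray(p, dtype=float)
    obs_idx = [j for j in range(len(p)) if j not in mis_idx]
    return p[mis_idx], p[obs_idx].sum()              # (p_m1..p_m|Mi|, P_obs(i))

# Sketch of equation 5.2.50: multinomial over the missing cells with one lumped
# "observed" cell of probability P_obs(i) occurring K_obs(i) times.
def conditional_pdf_mis(x_mis, p_mis, P_obs, K_obs, K):
    coeff = math.factorial(K) / (math.factorial(K_obs) *
                                 np.prod([math.factorial(int(x)) for x in x_mis]))
    return coeff * np.prod(np.asarray(p_mis) ** np.asarray(x_mis)) * P_obs ** K_obs

# Toy usage: n = 4 cells, K = 10 trials, cells 2 and 4 missing (0-based indices 1 and 3).
p = [0.1, 0.2, 0.3, 0.4]
p_mis, P_obs = extract_theta_Mi(p, mis_idx=[1, 3])
print(p_mis, P_obs)                                  # [0.2 0.4] and 0.1 + 0.3 = 0.4
print(conditional_pdf_mis(x_mis=[2, 4], p_mis=p_mis, P_obs=P_obs, K_obs=4, K=10))
```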
It is easy to check that

$\sum_{j=1}^{|M_i|} x_{m_{ij}} + K_{obs}(i) = K_{mis}(i) + K_{obs}(i) = K$

$\sum_{j=1}^{|M_i|} p_{m_{ij}} + P_{obs}(i) = \sum_{j=1}^{|M_i|} p_{m_{ij}} + \sum_{j=1}^{|\bar{M}_i|} p_{\bar{m}_{ij}} = \sum_{j=1}^{n} p_j = 1$

At the E-step of some tth iteration, given the current parameter Θ(t) = (p1(t), p2(t),…, pn(t))^T, the sufficient statistic of X is calculated according to equation 5.2.22. Let

$\tau^{(t)} = \frac{1}{N}\sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}^{(t)}\right)\right\}$

The sufficient statistic of Xobs(i) is:

$\tau\left(X_{obs}(i)\right) = \left(x_{i\bar{m}_{i1}}, x_{i\bar{m}_{i2}}, \dots, x_{i\bar{m}_{i|\bar{M}_i|}}\right)^T$

The sufficient statistic of Xmis(i) with regard to $f\left(X_{mis}(i)\,\middle|\,X_{obs}(i), \Theta_{M_i}\right)$ is:

$\tau\left(X_{mis}(i)\right) = \left(x_{im_{i1}}, x_{im_{i2}}, \dots, x_{im_{i|M_i|}}, \sum_{j=1}^{|\bar{M}_i|} x_{\bar{m}_{ij}}\right)^T$

We also have:

$E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}^{(t)}\right) = \int_{X_{mis}} f\left(X_{mis}\,\middle|\,X_{obs}, \Theta_{M_i}^{(t)}\right)\tau(X_{mis})\,\mathrm{d}X_{mis} = \left(Kp_{m_{i1}}, Kp_{m_{i2}}, \dots, Kp_{m_{i|M_i|}}, \sum_{j=1}^{|\bar{M}_i|} Kp_{\bar{m}_{ij}}\right)^T$

Therefore, the sufficient statistic of X at the E-step of some tth iteration given the current parameter Θ(t) = (p1(t), p2(t),…, pn(t))^T is defined as follows:

$\tau^{(t)} = \left(\bar{x}_1^{(t)}, \bar{x}_2^{(t)}, \dots, \bar{x}_n^{(t)}\right)^T, \quad \bar{x}_j^{(t)} = \frac{1}{N}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} \quad \forall j$   (5.2.52)

Equation 5.2.52 is an instance of equation 5.2.11, which composes τ(X) from τ(Xobs) and τ(Xmis) when f(X|Θ) is a multinomial PDF.
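Before moving to the M-step, here is a small sketch of the E-step of equation 5.2.52. It is not from the tutorial; the function name and the use of np.nan to mark missing cells are our own conventions. Each observed cell keeps its count, each missing cell is replaced by its expected count $Kp_j^{(t)}$, and the results are averaged over the N observations.

```python
import numpy as np

# Sketch of equation 5.2.52 (assumed encoding: an (N, n) array with np.nan for "?").
def e_step_sufficient_statistic(X, p, K):
    """Return tau^(t) = (x_bar_1, ..., x_bar_n)."""
    X = np.asarray(X, dtype=float)
    p = np.asarray(p, dtype=float)
    filled = np.where(np.isnan(X), K * p, X)   # missing cell j gets K * p_j^(t)
    return filled.mean(axis=0)

# Data of Example 5.2.2 below, with K = 10 trials and a uniform initial parameter.
X = [[1, np.nan, 3, np.nan],
     [np.nan, 2, np.nan, 4]]
print(e_step_sufficient_statistic(X, p=[0.25, 0.25, 0.25, 0.25], K=10))
# -> [1.75 2.25 2.75 3.25]; dividing by K gives the M-step update of equation 5.2.53 below
```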
At the M-step of some tth iteration, we need to maximize Q1(Θ′|Θ) subject to the constraint $\sum_{j=1}^{n} p_j = 1$. According to equation 5.2.19, we have:

$Q_1(\Theta'|\Theta) = \sum_{i=1}^{N} E\left(\log\left(b(X_{obs}(i), X_{mis})\right)\,\middle|\,\Theta_{M_i}\right) + (\Theta')^T\sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}\right)\right\} - N\log\left(a(\Theta')\right)$

where the quantities b(Xobs(i), Xmis) and a(Θ′) belong to the PDF f(X|Θ) of X. Because of the constraint $\sum_{j=1}^{n} p_j = 1$, we use the Lagrange duality method to maximize Q1(Θ′|Θ). The Lagrange function la(Θ′, λ|Θ) is the sum of Q1(Θ′|Θ) and the constraint:

$la(\Theta', \lambda|\Theta) = Q_1(\Theta'|\Theta) + \lambda\left(1 - \sum_{j=1}^{n} p_j'\right)$

where Θ′ = (p1′, p2′,…, pn′)^T and λ ≥ 0 is called the Lagrange multiplier. Of course, la(Θ′, λ|Θ) is a function of Θ′ and λ. The next parameter Θ(t+1) that maximizes Q1(Θ′|Θ) is the solution of the equation formed by setting the first-order derivatives of the Lagrange function with regard to Θ′ and λ to zero. The first-order partial derivative of la(Θ′, λ|Θ) with regard to Θ′ is:

$\frac{\partial la(\Theta', \lambda|\Theta)}{\partial \Theta'} = \sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}\right)\right\}^T - N\log'\left(a(\Theta')\right) - (\lambda, \lambda, \dots, \lambda)^T$

By referring to table 1.2, we have:

$\log'\left(a(\Theta')\right) = \left(E(\tau(X)|\Theta')\right)^T = \int_X f(X|\Theta')\left(\tau(X)\right)^T \mathrm{d}X$

Thus,

$\frac{\partial la(\Theta', \lambda|\Theta)}{\partial \Theta'} = \sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}\right)\right\}^T - N\left(E(\tau(X)|\Theta')\right)^T - (\lambda, \lambda, \dots, \lambda)^T$

The first-order partial derivative of la(Θ′, λ|Θ) with regard to λ is:

$\frac{\partial la(\Theta', \lambda|\Theta)}{\partial \lambda} = 1 - \sum_{j=1}^{n} p_j'$

Therefore, at the M-step of some tth iteration, given the current parameter Θ(t) = (p1(t), p2(t),…, pn(t))^T, the next parameter Θ(t+1) is the solution of the following system:

$\begin{cases}\displaystyle\sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}^{(t)}\right)\right\}^T - N\left(E(\tau(X)|\Theta)\right)^T - (\lambda, \lambda, \dots, \lambda)^T = \mathbf{0}^T \\ 1 - \displaystyle\sum_{j=1}^{n} p_j = 0\end{cases}$

This implies:

$\begin{cases}E(\tau(X)|\Theta) = \tau^{(t)} - \left(\lambda/N, \lambda/N, \dots, \lambda/N\right)^T \\ \displaystyle\sum_{j=1}^{n} p_j = 1\end{cases}$

where $\tau^{(t)} = \frac{1}{N}\sum_{i=1}^{N}\left\{\tau\left(X_{obs}(i)\right), E\left(\tau(X_{mis})\,\middle|\,\Theta_{M_i}^{(t)}\right)\right\}$. Due to

$E(\tau(X)|\Theta) = \int_X \tau(X)f(X|\Theta)\,\mathrm{d}X = (Kp_1, Kp_2, \dots, Kp_n)^T$

and, by equation 5.2.52, $\tau^{(t)} = \left(\bar{x}_1^{(t)}, \bar{x}_2^{(t)}, \dots, \bar{x}_n^{(t)}\right)^T$, we obtain the n equations $Kp_j = -\lambda/N + \bar{x}_j^{(t)}$ together with the constraint $\sum_{j=1}^{n} p_j = 1$. Therefore, we have:

$p_j = -\frac{\lambda}{KN} + \frac{1}{KN}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} \quad \forall j$

Summing the n equations above, we have:

$1 = \sum_{j=1}^{n} p_j = -\frac{n\lambda}{KN} + \frac{1}{KN}\sum_{j=1}^{n}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} = -\frac{n\lambda}{KN} + \frac{1}{KN}\sum_{i=1}^{N}\left(\sum_{j=1}^{|\bar{M}_i|} x_{i\bar{m}_{ij}} + \sum_{j=1}^{|M_i|} Kp_{m_{ij}}^{(t)}\right)$

Suppose every missing value $x_{im_{ij}}$ is estimated by $Kp_{m_{ij}}^{(t)}$, so that

$\sum_{j=1}^{|M_i|} x_{im_{ij}} = \sum_{j=1}^{|M_i|} Kp_{m_{ij}}^{(t)}$

We obtain:

$1 = -\frac{n\lambda}{KN} + \frac{1}{KN}\sum_{i=1}^{N}\left(\sum_{j=1}^{|\bar{M}_i|} x_{i\bar{m}_{ij}} + \sum_{j=1}^{|M_i|} x_{im_{ij}}\right) = -\frac{n\lambda}{KN} + \frac{1}{KN}\sum_{i=1}^{N} K = -\frac{n\lambda}{KN} + 1$

This implies λ = 0, such that

$p_j = \frac{1}{KN}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} \quad \forall j$

Therefore, at the M-step of some tth iteration, given the current parameter Θ(t) = (p1(t), p2(t),…, pn(t))^T, the next parameter Θ(t+1) is specified by the following equation:

$p_j^{(t+1)} = \frac{1}{KN}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} \quad \forall j$   (5.2.53)

In general, given a sample 𝒳 = {X1, X2,…, XN} whose Xi are iid MCAR data and f(X|Θ) is a multinomial PDF of K trials, GEM for handling missing data is summarized in table 5.2.3.

  M-step: Given τ(t) and Θ(t) = (p1(t), p2(t),…, pn(t))^T, the next parameter Θ(t+1) is specified by equation 5.2.53:
  $p_j^{(t+1)} = \frac{1}{KN}\sum_{i=1}^{N}\begin{cases} x_{ij} & \text{if } j \notin M_i \\ Kp_j^{(t)} & \text{if } j \in M_i \end{cases} \quad \forall j$

Table 5.2.3. E-step and M-step of GEM algorithm for handling missing data given multinomial PDF

In table 5.2.3, the E-step is implied in how the M-step is performed. As aforementioned, in practice we can stop GEM after its first iteration, which is reasonable enough for handling missing data.

Example 5.2.2. Given a sample of size two, 𝒳 = {X1, X2}, in which X1 = (x11=1, x12=?, x13=3, x14=?)^T and X2 = (x21=?, x22=2, x23=?, x24=4)^T are iid:

        x1   x2   x3   x4
  X1    1    ?    3    ?
  X2    ?    2    ?    4

Of course, we have Xobs(1) = (x11=1, x13=3)^T, Xmis(1) = (x12=?, x14=?)^T, Xobs(2) = (x22=2, x24=4)^T and Xmis(2) = (x21=?, x23=?)^T. We also have M1 = {m11=2, m12=4}, M̄1 = {m̄11=1, m̄12=3}, M2 = {m21=1, m22=3}, and M̄2 = {m̄21=2, m̄22=4}. Let X be the random variable representing every Xi and suppose f(X|Θ) is a multinomial PDF of 10 trials. We will estimate Θ = (p1, p2, p3, p4)^T. The parameters p1, p2, p3, and p4 are initialized arbitrarily to 0.25:

$\Theta^{(1)} = \left(p_1^{(1)} = 0.25, p_2^{(1)} = 0.25, p_3^{(1)} = 0.25, p_4^{(1)} = 0.25\right)^T$

At the 1st iteration, M-step, we have:

$p_1^{(2)} = \frac{1}{10 \cdot 2}(1 + 10 \cdot 0.25) = 0.175, \quad p_2^{(2)} = \frac{1}{10 \cdot 2}(10 \cdot 0.25 + 2) = 0.225$
$p_3^{(2)} = \frac{1}{10 \cdot 2}(3 + 10 \cdot 0.25) = 0.275, \quad p_4^{(2)} = \frac{1}{10 \cdot 2}(10 \cdot 0.25 + 4) = 0.325$

We stop GEM after the first iteration, which results in the estimate Θ(2) = Θ* = (p1*, p2*, p3*, p4*)^T as follows:

$p_1^* = 0.175, \quad p_2^* = 0.225, \quad p_3^* = 0.275, \quad p_4^* = 0.325$

In general, GEM is a powerful tool for handling missing data. The method is not so difficult; the most important point is how to extract the parameter ΘM of the conditional PDF f(Xmis | Xobs, ΘM) from the whole parameter Θ of the PDF f(X|Θ), noting that only f(X|Θ) is defined first and f(Xmis | Xobs, ΘM) is then derived from f(X|Θ). Therefore, equation 5.2.15 is the cornerstone of this method. Note that equations 5.2.35 and 5.2.51 are instances of equation 5.2.15 when f(X|Θ) is a multinormal PDF or a multinomial PDF, respectively.
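To make the procedure of table 5.2.3 concrete, the following sketch is ours and not part of the tutorial; it iterates equation 5.2.53 and reproduces the estimate of Example 5.2.2. The array encoding of missing cells and the function name are assumptions of this sketch.

```python
import numpy as np

# Sketch of the GEM loop of table 5.2.3 for MCAR multinomial data (equation 5.2.53).
def gem_multinomial(X, K, p0, iterations=1):
    """X is an (N, n) array with np.nan for missing cells; returns the estimate of Theta."""
    X = np.asarray(X, dtype=float)
    p = np.asarray(p0, dtype=float)
    N = X.shape[0]
    for _ in range(iterations):
        filled = np.where(np.isnan(X), K * p, X)   # E-step: expected counts K * p_j
        p = filled.sum(axis=0) / (K * N)           # M-step: equation 5.2.53
    return p

# Example 5.2.2: K = 10 trials, two observations, uniform initialization, one iteration.
X = [[1, np.nan, 3, np.nan],
     [np.nan, 2, np.nan, 4]]
print(gem_multinomial(X, K=10, p0=[0.25, 0.25, 0.25, 0.25], iterations=1))
# -> [0.175 0.225 0.275 0.325], matching p* of Example 5.2.2
```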
5.3 Learning hidden Markov model

The basic idea of the EM algorithm was kindled by learning the hidden Markov model (HMM) through an iterative improvement process, but EM is more general. After EM was popularized, it was conversely used to clarify and explain how an HMM is learned.

There are many real-world phenomena (so-called states) that we would like to model in order to explain our observations. Often, given a sequence of observation symbols, there is a demand to discover the real states. For example, there are some states of weather: sunny, cloudy, rainy (Fosler-Lussier, 1998, p. 1). Suppose you are in a room and do not know the weather outside, but someone else reports observations such as wind speed, atmospheric pressure, humidity, and temperature to you. Based on these observations, it is possible for you to forecast the weather by using a hidden Markov model (HMM). Before discussing the HMM, we should glance over the definition of the Markov model (MM). First, MM is the statistical model used to model a stochastic process. MM is defined as below (Schmolze, 2001):

- Given a finite set of states S = {s1, s2,…, sn} whose cardinality is n, let ∏ be the initial state distribution, where πi ∈ ∏ represents the probability that the stochastic process begins in state si. In other words, πi is the initial probability of state si, where $\sum_{s_i \in S} \pi_i = 1$.
- The stochastic process being modeled takes exactly one state from S at every time point. This stochastic process is defined as a finite vector X = (x1, x2,…, xT) whose element xt is a state at time point t. The process X is called the state stochastic process, and xt ∈ S equals some state si ∈ S. Note that X is also called the state sequence. A time point can be in terms of seconds, minutes, hours, days, months, years, etc. It is easy to infer that the initial probability πi = P(x1=si), where x1 is the first state of the stochastic process.
- The state stochastic process X must fully meet the Markov property: given the previous state xt–1 of process X, the conditional probability of the current state xt depends only on the previous state xt–1 and is not relevant to any further past state (xt–2, xt–3,…, x1). In other words, P(xt | xt–1, xt–2, xt–3,…, x1) = P(xt | xt–1); note that P(.) also denotes probability in this research. Such a process is called a first-order Markov process.
- At each time point, the process changes to the next state based on the transition probability distribution aij, which depends only on the previous state. So aij is the probability that the stochastic process changes from current state si to next state sj, which means aij = P(xt=sj | xt–1=si) = P(xt+1=sj | xt=si). The total probability of transitioning from any given state to some next state is 1: $\forall s_i \in S, \sum_{s_j \in S} a_{ij} = 1$. All transition probabilities aij constitute the transition probability matrix A. Note that A is an n by n matrix because there are n distinct states. It is easy to infer that matrix A represents the state stochastic process X. The initial probability matrix ∏ can be understood as a degenerate case of matrix A.

Briefly, MM is the triple ⟨S, A, ∏⟩. In a typical MM, states are observed directly by users, and the transition probabilities (A and ∏) are the only parameters. In contrast, the hidden Markov model (HMM) is similar to MM except that the underlying states become hidden from the observer; they are hidden parameters. HMM adds further output parameters, which are called observations. Each state (hidden parameter) has a conditional probability distribution over such observations. HMM is responsible for discovering hidden parameters (states) from output parameters (observations), given the stochastic process. The HMM has further properties as below (Schmolze, 2001):

- Suppose there is a finite set of possible observations Φ = {φ1, φ2,…, φm} whose cardinality is m. There is a second stochastic process which produces observations correlating with the hidden states. This process is called the observable stochastic process, defined as a finite vector O = (o1, o2,…, oT) whose element ot is an observation at time point t. Note that ot ∈ Φ equals some φk. The process O is often known as the observation sequence.
- There is a probability distribution of producing a given observation in each state. Let bi(k) be the probability of observation φk when the state stochastic process is in state si, that is, bi(k) = bi(ot=φk) = P(ot=φk | xt=si). The sum of the probabilities of all observations produced in a certain state is 1: $\forall s_i \in S, \sum_{\varphi_k \in \Phi} b_i(k) = 1$. All observation probabilities bi(k) constitute the observation probability matrix B. It is convenient to use the notation bik instead of bi(k). Note that B is an n by m matrix because there are n distinct states and m distinct observations. While matrix A represents the state stochastic process X, matrix B represents the observable stochastic process O.

Thus, HMM is the 5-tuple ∆ = ⟨S, Φ, A, B, ∏⟩. Note that the components S, Φ, A, B, and ∏ are often called the parameters of the HMM, in which A, B, and ∏ are the essential parameters. Going back to the weather example, suppose you need to predict whether the weather tomorrow is sunny, cloudy, or rainy, while you know only observations about the humidity: dry, dryish, damp, soggy. The HMM is totally determined by its parameters S, Φ, A, B, and ∏. According to the weather example, we have S = {s1=sunny, s2=cloudy, s3=rainy} and Φ = {φ1=dry, φ2=dryish, φ3=damp, φ4=soggy}. The transition probability matrix A is shown in table 5.3.1.

                                      Weather current day (time point t)
                                      sunny       cloudy      rainy
  Weather previous day    sunny       a11=0.50    a12=0.25    a13=0.25
  (time point t-1)        cloudy      a21=0.30    a22=0.40    a23=0.30
                          rainy       a31=0.25    a32=0.25    a33=0.50

Table 5.3.1. Transition probability matrix A

From table 5.3.1, we have a11+a12+a13=1, a21+a22+a23=1, a31+a32+a33=1.
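As a small illustration of the Markov model ⟨S, A, ∏⟩ just defined, the sketch below is ours and not from the tutorial; it encodes the transition matrix of table 5.3.1 and computes the probability of one state sequence using the Markov property. The uniform initial distribution is an assumption made here for the example.

```python
import numpy as np

# Sketch: the weather Markov model with A from table 5.3.1 and an assumed uniform Pi.
states = {"sunny": 0, "cloudy": 1, "rainy": 2}
A = np.array([[0.50, 0.25, 0.25],
              [0.30, 0.40, 0.30],
              [0.25, 0.25, 0.50]])
pi = np.array([1/3, 1/3, 1/3])
assert np.allclose(A.sum(axis=1), 1.0)      # every row of A is a probability distribution

def sequence_probability(seq):
    """P(x_1, ..., x_T) = pi_{x_1} * prod_t a_{x_{t-1}, x_t} (first-order Markov property)."""
    idx = [states[s] for s in seq]
    prob = pi[idx[0]]
    for prev, cur in zip(idx, idx[1:]):
        prob *= A[prev, cur]
    return prob

print(sequence_probability(["sunny", "sunny", "cloudy", "rainy"]))
# 1/3 * 0.50 * 0.25 * 0.30 = 0.0125
```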
The initial state distribution, specified as a uniform distribution, is shown in table 5.3.2.

  sunny        cloudy       rainy
  π1=0.33      π2=0.33      π3=0.33

Table 5.3.2. Uniform initial state distribution ∏

From table 5.3.2, we have π1+π2+π3=1. The observation probability matrix B is shown in table 5.3.3.

                          Humidity
                          dry         dryish      damp        soggy
              sunny       b11=0.60    b12=0.20    b13=0.15    b14=0.05
  Weather     cloudy      b21=0.25    b22=0.25    b23=0.25    b24=0.25
              rainy       b31=0.05    b32=0.10    b33=0.35    b34=0.50

Table 5.3.3. Observation probability matrix B

From table 5.3.3, we have b11+b12+b13+b14=1, b21+b22+b23+b24=1, b31+b32+b33+b34=1. The whole weather HMM is depicted in figure 5.3.1.

Figure 5.3.1. HMM of weather forecast (hidden states are shaded)

There are three problems of HMM (Schmolze, 2001) (Rabiner, 1989, pp. 262-266):

- Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to calculate the probability P(O|∆) of this observation sequence. The probability P(O|∆) indicates how well the HMM ∆ explains the sequence O. This is the evaluation problem or explanation problem. Note that it is possible to denote O = {o1 → o2 →…→ oT}, and the sequence O is the aforementioned observable stochastic process.
- Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to find the sequence of states X = {x1, x2,…, xT} where xt ∈ S such that X is most likely to have produced the observation sequence O. This is the uncovering problem. Note that the sequence X is the aforementioned state stochastic process.
- Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to adjust the parameters of ∆, namely the initial state distribution ∏, the transition probability matrix A, and the observation probability matrix B, so that the quality of HMM ∆ is enhanced. This is the learning problem.

This sub-section focuses on the third problem, the learning problem, because HMM learning relates to the EM algorithm. Before mentioning the learning problem, we need to comprehend the important concept of the forward-backward procedure, which relates to the evaluation problem; therefore, this sub-section also covers the evaluation problem. Indeed, the evaluation problem is solved by the forward-backward procedure.
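Before turning to the forward-backward procedure, the following sketch (ours, not from the tutorial) collects the weather HMM ∆ = ⟨S, Φ, A, B, ∏⟩ of tables 5.3.1, 5.3.2 and 5.3.3 into arrays and uses it generatively, which is one way to picture how hidden states produce the observation sequence. Variable names and the random seed are arbitrary choices of this sketch.

```python
import numpy as np

# Sketch: the weather HMM used as a generator -- states evolve by A, each state emits
# an observation by its row of B.
rng = np.random.default_rng(1)

states = ["sunny", "cloudy", "rainy"]               # S
symbols = ["dry", "dryish", "damp", "soggy"]        # Phi
pi = np.array([1/3, 1/3, 1/3])                      # table 5.3.2 (0.33 rounded)
A = np.array([[0.50, 0.25, 0.25],                   # table 5.3.1
              [0.30, 0.40, 0.30],
              [0.25, 0.25, 0.50]])
B = np.array([[0.60, 0.20, 0.15, 0.05],             # table 5.3.3
              [0.25, 0.25, 0.25, 0.25],
              [0.05, 0.10, 0.35, 0.50]])

def generate(T):
    """Sample a hidden state sequence X and an observation sequence O of length T."""
    x = rng.choice(3, p=pi)
    X, O = [x], [rng.choice(4, p=B[x])]
    for _ in range(T - 1):
        x = rng.choice(3, p=A[x])
        X.append(x)
        O.append(rng.choice(4, p=B[x]))
    return [states[i] for i in X], [symbols[k] for k in O]

print(generate(5))   # the observer only sees the second list; the first stays hidden
```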
According to (Rabiner, 1989, pp. 262-263), there is a so-called forward-backward procedure that decreases the computational cost of determining the probability P(O|Δ). Let αt(i) be the joint probability of the partial observation sequence {o1, o2,…, ot} and the state xt=si, where 1 ≤ t ≤ T, specified by equation 5.3.1:

$\alpha_t(i) = P(o_1, o_2, \dots, o_t, x_t = s_i \mid \Delta)$   (5.3.1)

The joint probability αt(i) is also called the forward variable at time point t and state si. The product αt(i)aij, where aij is the transition probability from state i to state j, accounts for the probability of the joint event that the partial observation sequence {o1, o2,…, ot} exists and the state si at time point t changes to sj at time point t+1:

$\alpha_t(i)a_{ij} = P(o_1, o_2, \dots, o_t, x_t = s_i \mid \Delta)\,P(x_{t+1} = s_j \mid x_t = s_i)$
$= P(o_1, o_2, \dots, o_t \mid x_t = s_i)\,P(x_t = s_i)\,P(x_{t+1} = s_j \mid x_t = s_i)$ (due to the multiplication rule)
$= P(o_1, o_2, \dots, o_t, x_{t+1} = s_j \mid x_t = s_i)\,P(x_t = s_i)$ (because the partial observation sequence {o1, o2,…, ot} is independent from the next state xt+1 given the current state xt)
$= P(o_1, o_2, \dots, o_t, x_t = s_i, x_{t+1} = s_j)$ (due to the multiplication rule)

Summing the product αt(i)aij over all n possible states of xt produces the probability of the joint event that the partial observation sequence {o1, o2,…, ot} exists and the next state is xt+1=sj, regardless of the state xt:

$\sum_{i=1}^{n}\alpha_t(i)a_{ij} = \sum_{i=1}^{n}P(o_1, o_2, \dots, o_t, x_t = s_i, x_{t+1} = s_j) = P(o_1, o_2, \dots, o_t, x_{t+1} = s_j)$

The forward variable at time point t+1 and state sj is then calculated from the αt(i) as follows:

$\alpha_{t+1}(j) = P(o_1, o_2, \dots, o_t, o_{t+1}, x_{t+1} = s_j \mid \Delta)$
$= P(o_{t+1} \mid o_1, o_2, \dots, o_t, x_{t+1} = s_j)\,P(o_1, o_2, \dots, o_t, x_{t+1} = s_j)$ (due to the multiplication rule)
$= P(o_{t+1} \mid x_{t+1} = s_j)\,P(o_1, o_2, \dots, o_t, x_{t+1} = s_j)$ (because the observations are mutually independent given the states)
$= b_j(o_{t+1})\sum_{i=1}^{n}\alpha_t(i)a_{ij}$

where bj(ot+1) is the probability of observation ot+1 when the state stochastic process is in state sj (see the observation probability matrix in table 5.3.3 for an example). In brief, note the recurrence property of the forward variable specified by equation 5.3.2:

$\alpha_{t+1}(j) = \left(\sum_{i=1}^{n}\alpha_t(i)a_{ij}\right)b_j(o_{t+1})$   (5.3.2)

The construction of the forward recurrence equation 5.3.2 essentially builds up a Markov chain, illustrated by figure 5.3.2 (Rabiner, 1989, p. 262).

Figure 5.3.2. Construction of recurrence equation for forward variable

According to the forward recurrence equation 5.3.2, given the observation sequence O = {o1, o2,…, oT}, we have:

$\alpha_T(i) = P(o_1, o_2, \dots, o_T, x_T = s_i \mid \Delta)$

The probability P(O|Δ) is the sum of αT(i) over all n possible states of xT, specified by equation 5.3.3:

$P(O|\Delta) = P(o_1, o_2, \dots, o_T) = \sum_{i=1}^{n}P(o_1, o_2, \dots, o_T, x_T = s_i \mid \Delta) = \sum_{i=1}^{n}\alpha_T(i)$   (5.3.3)

The forward-backward procedure to calculate the probability P(O|Δ), based on the forward equations 5.3.2 and 5.3.3, includes the three steps shown in table 5.3.4 (Rabiner, 1989, p. 262).

  Initialization step: Initialize α1(i) = bi(o1)πi for all 1 ≤ i ≤ n.
  Recurrence step: Calculate αt+1(j) for all 1 ≤ j ≤ n and 1 ≤ t ≤ T-1 according to equation 5.3.2:
  $\alpha_{t+1}(j) = \left(\sum_{i=1}^{n}\alpha_t(i)a_{ij}\right)b_j(o_{t+1})$
  Evaluation step: Calculate the probability $P(O|\Delta) = \sum_{i=1}^{n}\alpha_T(i)$ according to equation 5.3.3.

Table 5.3.4. Forward-backward procedure based on forward variable to calculate the probability P(O|Δ)

Thus, the evaluation problem is solved by the forward-backward procedure shown in table 5.3.4. Interestingly, the forward-backward procedure can also be implemented based on the so-called backward variable. Let βt(i) be the backward variable, which is the conditional probability of the partial observation sequence {ot, ot+1,…, oT} given the state xt=si, where 1 ≤ t ≤ T, specified by equation 5.3.4.
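The three steps of table 5.3.4 translate directly into a few lines of array code. The sketch below is ours and not from the tutorial; it evaluates P(O|∆) for the weather HMM, and the observation encoding 0=dry, 1=dryish, 2=damp, 3=soggy is our own convention.

```python
import numpy as np

# Sketch: the forward procedure of table 5.3.4 applied to the weather HMM of
# tables 5.3.1-5.3.3.
pi = np.array([0.33, 0.33, 0.33])
A = np.array([[0.50, 0.25, 0.25],
              [0.30, 0.40, 0.30],
              [0.25, 0.25, 0.50]])
B = np.array([[0.60, 0.20, 0.15, 0.05],
              [0.25, 0.25, 0.25, 0.25],
              [0.05, 0.10, 0.35, 0.50]])

def evaluate(O):
    """Return P(O | Delta) computed via the forward variables alpha_t(i)."""
    alpha = pi * B[:, O[0]]            # initialization: alpha_1(i) = b_i(o_1) * pi_i
    for o in O[1:]:
        alpha = (alpha @ A) * B[:, o]  # recurrence: equation 5.3.2
    return alpha.sum()                 # evaluation: equation 5.3.3

print(evaluate([0, 1, 3]))   # probability of observing dry -> dryish -> soggy
```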