
Keywords: natural language processing, Regina Barzilay, ocw.mit.edu


DOCUMENT INFORMATION

Basic information

Format
Pages: 43
Size: 438.63 KB

Contents


6.864: Lecture 3 (September 15, 2005)
Smoothed Estimation, and Language Modeling

Overview

• The language modeling problem
• Smoothed "n-gram" estimates

The Language Modeling Problem

• We have some vocabulary, say V = {the, a, man, telescope, Beckham, two, ...}
• We have an (infinite) set of strings V*, for example:

  the
  a
  the fan
  the fan saw Beckham
  the fan saw saw
  the fan saw Beckham play for Real Madrid
  ...

The Language Modeling Problem (Continued)

• We have a training sample of example sentences in English
• We need to "learn" a probability distribution P̂, i.e., P̂ is a function that satisfies

  Σ_{x ∈ V*} P̂(x) = 1,   P̂(x) ≥ 0 for all x ∈ V*

  For example:

  P̂(the) = 10^{-12}
  P̂(the fan) = 10^{-8}
  P̂(the fan saw Beckham) = 2 × 10^{-8}
  P̂(the fan saw saw) = 10^{-15}
  P̂(the fan saw Beckham play for Real Madrid) = 2 × 10^{-9}

• Usual assumption: the training sample is drawn from some underlying distribution P, and we want P̂ to be "as close" to P as possible

Why on earth would we want to do this?!

• Speech recognition was the original motivation. (Related problems are optical character recognition and handwriting recognition.)
• The estimation techniques developed for this problem will be VERY useful for other problems in NLP

Deriving a Trigram Probability Model

Step 1: Expand using the chain rule:

  P(w_1, w_2, ..., w_n) = P(w_1 | START)
                          × P(w_2 | START, w_1)
                          × P(w_3 | START, w_1, w_2)
                          × P(w_4 | START, w_1, w_2, w_3)
                          × ...
                          × P(w_n | START, w_1, w_2, ..., w_{n-1})
                          × P(STOP | START, w_1, w_2, ..., w_{n-1}, w_n)

For example:

  P(the, dog, laughs) = P(the | START)
                        × P(dog | START, the)
                        × P(laughs | START, the, dog)
                        × P(STOP | START, the, dog, laughs)

Deriving a Trigram Probability Model

Step 2: Make Markov independence assumptions:

  P(w_1, w_2, ..., w_n) = P(w_1 | START)
                          × P(w_2 | START, w_1)
                          × P(w_3 | w_1, w_2)
                          × ...
                          × P(w_n | w_{n-2}, w_{n-1})
                          × P(STOP | w_{n-1}, w_n)

General assumption:

  P(w_i | START, w_1, w_2, ..., w_{i-2}, w_{i-1}) = P(w_i | w_{i-2}, w_{i-1})

For example:

  P(the, dog, laughs) = P(the | START)
                        × P(dog | START, the)
                        × P(laughs | the, dog)
                        × P(STOP | dog, laughs)

The Trigram Estimation Problem

Remaining estimation problem:

  P(w_i | w_{i-2}, w_{i-1})

For example:

  P(laughs | the, dog)

A natural estimate (the "maximum likelihood estimate"):

  P_ML(w_i | w_{i-2}, w_{i-1}) = Count(w_{i-2}, w_{i-1}, w_i) / Count(w_{i-2}, w_{i-1})

  P_ML(laughs | the, dog) = Count(the, dog, laughs) / Count(the, dog)

Evaluating a Language Model

• We have some test data, n sentences S_1, S_2, S_3, ..., S_n
• We could look at the probability under our model, Π_{i=1..n} P(S_i), or more conveniently the log probability:

  log Π_{i=1..n} P(S_i) = Σ_{i=1..n} log P(S_i)

• In fact, the usual evaluation measure is perplexity:

  Perplexity = 2^{-x}   where   x = (1/W) Σ_{i=1..n} log2 P(S_i)

  and W is the total number of words in the test data

Some Intuition about Perplexity

• Say we have a vocabulary V of size N = |V|, and a model that predicts P(w) = 1/N for all w ∈ V
• Easy to calculate the perplexity in this case:

  Perplexity = 2^{-x}   where   x = log2 (1/N)

  ⇒ Perplexity = N

• Perplexity is a measure of effective "branching factor"
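
To make the trigram factorization and the maximum-likelihood estimate above concrete, here is a minimal Python sketch (not from the lecture; the toy corpus, function names, and START/STOP padding convention are illustrative assumptions). It pads each sentence with two START tokens so every word has a two-word history, which is a slight simplification of the slides' P(w_1 | START), P(w_2 | START, w_1) notation, then forms P_ML(w | u, v) = Count(u, v, w) / Count(u, v) and multiplies the conditionals out along a sentence.

from collections import defaultdict

START, STOP = "<START>", "<STOP>"

def train_trigram_ml(corpus):
    # Collect Count(u, v, w) and Count(u, v) for the ML estimate
    # P_ML(w | u, v) = Count(u, v, w) / Count(u, v).
    trigram_counts = defaultdict(int)
    history_counts = defaultdict(int)
    for sentence in corpus:
        words = [START, START] + sentence + [STOP]
        for i in range(2, len(words)):
            u, v, w = words[i - 2], words[i - 1], words[i]
            trigram_counts[(u, v, w)] += 1
            history_counts[(u, v)] += 1
    return trigram_counts, history_counts

def p_ml(w, u, v, trigram_counts, history_counts):
    # Maximum-likelihood estimate; zero if the history (u, v) was never seen.
    if history_counts[(u, v)] == 0:
        return 0.0
    return trigram_counts[(u, v, w)] / history_counts[(u, v)]

def sentence_probability(sentence, trigram_counts, history_counts):
    # Markov-factorized chain rule:
    # P(w_1 ... w_n) = prod_i P(w_i | w_{i-2}, w_{i-1}), including the STOP term.
    words = [START, START] + sentence + [STOP]
    prob = 1.0
    for i in range(2, len(words)):
        prob *= p_ml(words[i], words[i - 2], words[i - 1],
                     trigram_counts, history_counts)
    return prob

# Toy usage with two made-up training sentences.
corpus = [["the", "dog", "laughs"], ["the", "fan", "saw", "Beckham"]]
tri, hist = train_trigram_ml(corpus)
print(sentence_probability(["the", "dog", "laughs"], tri, hist))  # 0.5 on this corpus

Note that any sentence containing an unseen trigram gets probability zero under P_ML; this sparse-data problem is exactly what the smoothed "n-gram" estimates in the lecture's overview are meant to address.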
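
A similarly minimal sketch of the perplexity calculation from the evaluation slides, again an illustration rather than the lecture's own code (N and W below are made-up numbers): it averages base-2 log probabilities over the W test words and returns 2^{-x}. The last lines check the intuition above that a uniform model with P(w) = 1/N yields perplexity N.

import math

def perplexity(sentence_log2_probs, total_words):
    # Perplexity = 2^{-x} with x = (1/W) * sum_i log2 P(S_i).
    x = sum(sentence_log2_probs) / total_words
    return 2.0 ** (-x)

# Uniform sanity check: one test "sentence" of W words, each assigned
# probability 1/N, should give perplexity N.
N, W = 1000, 50
log2_probs = [W * math.log2(1.0 / N)]
print(perplexity(log2_probs, W))  # approximately 1000.0, i.e. N

Working with log probabilities directly is the usual design choice here: multiplying many small sentence probabilities would underflow long before the test set is exhausted.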

Date posted: 27/11/2022, 21:16