
Probabilistic Language Modeling and N-grams
Dr. Douglas Patton, United States, Teacher
Published: 26-07-2017
Language Modeling: Introduction to N-grams (from Dan Jurafsky's lecture)

Probabilistic Language Models

• Today's goal: assign a probability to a sentence.
• Machine translation:
  P(high winds tonite) > P(large winds tonite)
• Spell correction:
  "The office is about fifteen minuets from my house"
  P(about fifteen minutes from) > P(about fifteen minuets from)
• Speech recognition:
  P(I saw a van) >> P(eyes awe of an)
• Plus summarization, question answering, etc.

Probabilistic Language Modeling

• Goal: compute the probability of a sentence or sequence of words:
  P(W) = P(w_1, w_2, w_3, w_4, w_5, …, w_n)
• Related task: probability of an upcoming word:
  P(w_5 | w_1, w_2, w_3, w_4)
• A model that computes either of these, P(W) or P(w_n | w_1, w_2, …, w_{n-1}), is called a language model.
• A better name would be "the grammar," but "language model" (or LM) is standard.

How to compute P(W)

• How to compute this joint probability:
  P(its, water, is, so, transparent, that)
• Intuition: rely on the chain rule of probability.

Reminder: The Chain Rule

• Recall the definition of conditional probability:
  P(A | B) = P(A, B) / P(B)
  Rewriting: P(A, B) = P(A | B) P(B)
• More variables:
  P(A, B, C, D) = P(A) P(B | A) P(C | A, B) P(D | A, B, C)
• The chain rule in general:
  P(x_1, x_2, x_3, …, x_n) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2) … P(x_n | x_1, …, x_{n-1})

The Chain Rule applied to the joint probability of words in a sentence

  P(w_1 w_2 … w_n) = ∏_i P(w_i | w_1 w_2 … w_{i-1})

  P("its water is so transparent") =
    P(its) × P(water | its) × P(is | its water) × P(so | its water is) × P(transparent | its water is so)

How to estimate these probabilities

• Could we just count and divide?
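The chain-rule decomposition above can be sketched as a running product. This is a minimal illustrative fragment: the function name and the conditional probabilities in `toy` are made up for the example, not estimates from any corpus.

```python
def sentence_probability(words, cond_prob):
    """Chain rule: P(w_1 .. w_n) = prod_i P(w_i | w_1 .. w_{i-1})."""
    p = 1.0
    for i, w in enumerate(words):
        p *= cond_prob(w, words[:i])  # multiply in P(w_i | full history)
    return p

# Made-up conditional probabilities, for illustration only.
toy = {
    ("its", ()): 0.1,
    ("water", ("its",)): 0.2,
    ("is", ("its", "water")): 0.5,
}

p = sentence_probability(["its", "water", "is"],
                         lambda w, h: toy[(w, tuple(h))])
# 0.1 * 0.2 * 0.5 ≈ 0.01
```

Note that each factor conditions on the entire history so far; the Markov assumption below exists precisely because these full-history conditionals cannot be estimated from data.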
  P(the | its water is so transparent that) =
    Count(its water is so transparent that the) / Count(its water is so transparent that)

• No! There are too many possible sentences.
• We'll never see enough data to estimate these.

Markov Assumption (Andrei Markov)

• Simplifying assumption:
  P(the | its water is so transparent that) ≈ P(the | that)
• Or maybe:
  P(the | its water is so transparent that) ≈ P(the | transparent that)

Markov Assumption

  P(w_1 w_2 … w_n) ≈ ∏_i P(w_i | w_{i-k} … w_{i-1})

• In other words, we approximate each component in the product:
  P(w_i | w_1 w_2 … w_{i-1}) ≈ P(w_i | w_{i-k} … w_{i-1})

Simplest case: Unigram model

  P(w_1 w_2 … w_n) ≈ ∏_i P(w_i)

Some automatically generated sentences from a unigram model:

  fifth, an, of, futures, the, an, incorporated, a, a, the, inflation, most, dollars, quarter, in, is, mass
  thrift, did, eighty, said, hard, 'm, july, bullish
  that, or, limited, the

Bigram model

• Condition on the previous word:
  P(w_i | w_1 w_2 … w_{i-1}) ≈ P(w_i | w_{i-1})

Some automatically generated sentences from a bigram model:

  texaco, rose, one, in, this, issue, is, pursuing, growth, in, a, boiler, house, said, mr., gurria, mexico, 's, motion, control, proposal, without, permission, from, five, hundred, fifty, five, yen
  outside, new, car, parking, lot, of, the, agreement, reached
  this, would, be, a, record, november

N-gram models

• We can extend to trigrams, 4-grams, 5-grams.
• In general this is an insufficient model of language, because language has long-distance dependencies:
  "The computer which I had just put into the machine room on the fifth floor crashed."
• But we can often get away with N-gram models.

Language Modeling: Estimating N-gram Probabilities

Estimating bigram probabilities

• The maximum likelihood estimate (writing c for count):
  P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1})

An example training corpus:
  <s> I am Sam </s>
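The maximum likelihood estimate can be sketched with two counters over the three-sentence corpus above. The function name `p_mle` is mine; the corpus and the formula are from the text.

```python
from collections import Counter

# The three-sentence training corpus from the example above.
corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigram_counts, bigram_counts = Counter(), Counter()
for sentence in corpus:
    tokens = sentence.split()
    unigram_counts.update(tokens)
    bigram_counts.update(zip(tokens, tokens[1:]))

def p_mle(w, prev):
    """MLE bigram estimate: c(prev, w) / c(prev)."""
    return bigram_counts[(prev, w)] / unigram_counts[prev]

p_mle("I", "<s>")   # c(<s> I) / c(<s>) = 2/3
p_mle("am", "I")    # c(I am) / c(I)   = 2/3
p_mle("Sam", "am")  # c(am Sam) / c(am) = 1/2
```

Counting pairs with `zip(tokens, tokens[1:])` makes the `<s>` and `</s>` markers ordinary tokens, so sentence-initial and sentence-final probabilities fall out of the same formula.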
  <s> Sam I am </s>
  <s> I do not like green eggs and ham </s>

Applying P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}) to this corpus gives, for example, P(I | <s>) = 2/3 and P(am | I) = 2/3.

More examples: Berkeley Restaurant Project sentences

• can you tell me about any good cantonese restaurants close by
• mid priced thai food is what i'm looking for
• tell me about chez panisse
• can you give me a listing of the kinds of food that are available
• i'm looking for a good place to eat breakfast
• when is caffe venezia open during the day

Raw bigram counts

• Out of 9222 sentences. (The count table from the original slide is not reproduced here.)

Raw bigram probabilities

• Normalize the bigram counts by the unigram counts. (The probability table from the original slide is not reproduced here.)

Bigram estimates of sentence probabilities

  P(<s> I want english food </s>)
    = P(I | <s>) × P(want | I) × P(english | want) × P(food | english) × P(</s> | food)
    = .000031
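The final multiplication can be sketched as follows. Since the Berkeley Restaurant count tables are not reproduced in this text, the individual bigram probabilities below should be treated as illustrative values (they are the ones commonly quoted for this example and reproduce the .000031 result).

```python
import math

# Illustrative bigram probabilities for the example sentence; the
# underlying count tables are not reproduced in this text.
bigram_p = {
    ("<s>", "I"): 0.25,
    ("I", "want"): 0.33,
    ("want", "english"): 0.0011,
    ("english", "food"): 0.5,
    ("food", "</s>"): 0.68,
}

tokens = ["<s>", "I", "want", "english", "food", "</s>"]
p = math.prod(bigram_p[pair] for pair in zip(tokens, tokens[1:]))
# p ≈ 0.000031, matching the figure above

# In practice the factors are summed in log space rather than
# multiplied, to avoid floating-point underflow on long sentences.
log_p = sum(math.log(bigram_p[pair]) for pair in zip(tokens, tokens[1:]))
```

Even this five-word sentence has probability on the order of 10^-5, which is why real systems work with log probabilities throughout.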
