
Natural Language Processing
Vasile Rus
http://www.cs.memphis.edu/~vrus/teaching/nlp

Outline
- Announcements
- Probabilistic CFG
- Learning the parameters
- Dependency Grammar
- Parser Evaluation

Parsing with Probabilistic CFGs
- Develop a probabilistic model
- Assign probabilities to parse trees
- Get the probabilities for the model

- Parse with probabilities
- A slight modification to the dynamic programming approach
- The task is to find the maximum-probability tree for an input

Probability Model
- Attach probabilities to grammar rules
- The expansions for a given non-terminal sum to 1:

  VP -> Verb        .55
  VP -> Verb NP     .40
  VP -> Verb NP NP  .05

Probability Model (cont'd)
- A derivation (tree) consists of the set of grammar rules used in the tree
- The probability of a derivation (tree) is just the product of the probabilities of the rules in the derivation
- The probability of a word sequence (sentence) is:
  - the probability of its tree, in the unambiguous case
  - the sum of the probabilities of its trees, in the ambiguous case
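As a toy illustration of the product rule, here is a minimal sketch; only the VP probabilities come from the slide above, while the S and NP rules (with probability 1.0) are invented for the example:

```python
# Toy PCFG: only the VP probabilities come from the slide above;
# the S and NP rules (probability 1.0) are assumed for illustration.
rule_prob = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Pronoun",)): 1.0,
    ("VP", ("Verb",)): 0.55,
    ("VP", ("Verb", "NP")): 0.40,
    ("VP", ("Verb", "NP", "NP")): 0.05,
}

def tree_prob(rules_used):
    """Probability of a derivation = product of its rule probabilities."""
    p = 1.0
    for rule in rules_used:
        p *= rule_prob[rule]
    return p

# Derivation of a sentence like "I eat apples":
deriv = [("S", ("NP", "VP")), ("NP", ("Pronoun",)), ("VP", ("Verb", "NP"))]
print(tree_prob(deriv))  # 1.0 * 1.0 * 0.40 = 0.4
```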

Getting the Probabilities
- From an annotated database (a treebank)
  - Treebanks are annotated manually
  - English: Penn Treebank (about 1.6 million words)
  - Chinese: Chinese Treebank (about 500K words)
  - German: German Treebank (size ??)
- Learned from a raw corpus

Learning from a Treebank
- The easiest and most accurate way to learn probabilities
- Get a large collection of parsed sentences
- Collect counts for each non-terminal rule expansion in the collection
- Normalize
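The count-and-normalize step can be sketched as follows, over a made-up list of rule occurrences collected from parsed sentences:

```python
from collections import Counter, defaultdict

# Made-up rule occurrences collected from a tiny "treebank".
treebank_rules = [
    ("VP", ("Verb",)),
    ("VP", ("Verb", "NP")),
    ("VP", ("Verb", "NP")),
    ("NP", ("Det", "Noun")),
]

counts = Counter(treebank_rules)   # count each rule expansion
lhs_totals = defaultdict(int)      # total expansions per non-terminal
for (lhs, rhs), c in counts.items():
    lhs_totals[lhs] += c

# Normalize: P(lhs -> rhs) = count(lhs -> rhs) / count(lhs)
rule_prob = {rule: c / lhs_totals[rule[0]] for rule, c in counts.items()}
print(rule_prob[("VP", ("Verb", "NP"))])  # 2 of the 3 VP expansions
```

After normalization the expansions of each non-terminal sum to 1, as required by the probability model.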

- Done

Learning
- What if you don't have a treebank (and can't get one)?
- Take a large collection of text and parse it
- For syntactically ambiguous sentences, collect all the possible parses
- Prorate the rule statistics gathered for rules in the ambiguous case by their probability
- Proceed as you did with a treebank

Learning How?
- If you don't have the probabilities, how can you prorate the ambiguous parses based on their probabilities?

Magic
- Guess some random probabilities for the rules
- Use those to assign probabilities to the trees

Learning
- Would that work? If you just made up the numbers, how could they possibly help?
- Well, they constitute a prediction
- We can re-adjust them incrementally until the predictions start to look right

Assumptions
- We're assuming that there is a grammar to parse with
- We're assuming the existence of a large, robust dictionary with parts of speech
- We're assuming the ability to parse (i.e. a parser)
- Given all that, we can parse probabilistically

Typical Approach
- Bottom-up dynamic programming approach
- Assign probabilities to constituents as they are completed and placed in the table
- Use the max probability for each constituent going up

What's that last bullet mean?
- Say we're talking about a final part of a parse: S -> NP [0,i] VP [i,j]
- The probability of the S is P(S -> NP VP) * P(NP) * P(VP)
- P(NP) and P(VP) are already known, since we are doing bottom-up parsing
- Max P(NP) is known
- What if there are multiple NPs for the span of text in question (0 to i)? Take the max
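The bottom-up, max-probability table filling can be sketched as a probabilistic CKY parser; the tiny grammar in Chomsky normal form and its probabilities below are invented for illustration:

```python
from collections import defaultdict

# Probabilistic CKY sketch; the grammar and probabilities are invented.
lexical = {                     # preterminal -> word
    ("NP", "I"): 0.5,
    ("NP", "apples"): 0.5,
    ("Verb", "eat"): 1.0,
}
binary = {                      # A -> B C
    ("S", "NP", "VP"): 1.0,
    ("VP", "Verb", "NP"): 1.0,
}

def cky_max(words):
    n = len(words)
    # table[i][j][cat] = max probability of a cat-constituent over words[i:j]
    table = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for (cat, word), p in lexical.items():
            if word == w:
                table[i][i + 1][cat] = max(table[i][i + 1][cat], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                # split point
                for (a, b, c), p in binary.items():
                    cand = p * table[i][k][b] * table[k][j][c]
                    if cand > table[i][j][a]:        # keep only the max
                        table[i][j][a] = cand
    return table[0][n]

chart = cky_max("I eat apples".split())
print(chart["S"])  # 1.0 * 0.5 * (1.0 * 1.0 * 0.5) = 0.25
```

Note that only the best analysis per category per span is kept, which is exactly the "take the max" step above.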

- This does not mean that other kinds of constituents for the same span are ignored (i.e. they might be in the solution)

Problems
- The probability model discussed so far is based only on the rules in the derivation
- It doesn't use the words in any real way
- It doesn't take into account where in the derivation a rule is used

Solution
- Add lexical dependencies to the scheme
- Infiltrate the influence of particular words into the probabilities in the derivation
- i.e. condition on actual words
- All the words? No, only the right ones

Heads
- To do that, we're going to make use of the notion of the head of a phrase
  - The head of an NP is its noun
  - The head of a VP is its verb
  - The head of a PP is its preposition
- A way of achieving some generalization

- Replace phrases with their heads

Heads
- Finding basic phrases (NP, VP, PP) and their heads:
  - Shallow parsing: http://l2r.cs.uiuc.edu/~cogcomp
  - Heuristics
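A rough sketch of such head-finding heuristics over POS-tagged chunks; the rules below are simplified assumptions for illustration, not the lecture's:

```python
# Simplified head-finding heuristics (assumed, for illustration).
def head_of(phrase_type, tagged_words):
    """tagged_words: list of (word, POS) pairs for one chunk."""
    if phrase_type == "NP":                 # head: rightmost noun
        nouns = [w for w, t in tagged_words if t.startswith("NN")]
        if nouns:
            return nouns[-1]
    if phrase_type == "VP":                 # head: first verb
        for w, t in tagged_words:
            if t.startswith("VB"):
                return w
    if phrase_type == "PP":                 # head: the preposition
        for w, t in tagged_words:
            if t == "IN":
                return w
    return tagged_words[-1][0]              # fallback: last word

print(head_of("NP", [("the", "DT"), ("sacks", "NNS")]))             # sacks
print(head_of("PP", [("in", "IN"), ("the", "DT"), ("bin", "NN")]))  # in
```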

Example (right)
Example (wrong)

How?
- We used to have: VP -> V NP PP

- with P(r | VP); that's the count of this rule divided by the number of VPs in a treebank
- Now we have: VP(dumped) -> V(dumped) NP(sacks) PP(in)
- with P(r | VP && dumped is the verb && sacks is the head of the NP && in is the head of the PP)
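To see why such fully conditioned estimates are problematic, here is a sketch of the count over an invented list of observed VPs, each recorded as (rule, verb head, NP head, PP head):

```python
# Invented observations; None means the VP has no PP daughter.
observations = [
    (("VP", ("V", "NP", "PP")), "dumped", "sacks", "into"),
    (("VP", ("V", "NP")),       "dumped", "sacks", None),
    (("VP", ("V", "NP", "PP")), "dumped", "cats",  "into"),
]

def p_rule_given_heads(rule, verb, np_head, pp_head):
    """P(r | VP && heads): count of this rule with exactly these heads,
    divided by the count of VPs with these heads. With a real treebank
    most such tuples occur zero or one times."""
    num = sum(1 for o in observations if o == (rule, verb, np_head, pp_head))
    den = sum(1 for o in observations if o[1:] == (verb, np_head, pp_head))
    return num / den if den else 0.0

print(p_rule_given_heads(("VP", ("V", "NP", "PP")), "dumped", "sacks", "into"))
```

Here the estimate rests on a single observation, which is exactly the sparsity problem the next slide raises.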

- Not likely to have significant counts in any treebank

Declare Independence
- When stuck, exploit independence and collect the statistics you can
- We'll focus on capturing two things:
- Verb subcategorization: particular verbs have affinities for particular VPs

- Objects' affinities for their predicates (mostly their mothers and grandmothers): some objects fit better with some predicates than others

Subcategorization
- Condition particular VP rules on their head, so for r: VP -> V NP PP, P(r | VP) becomes P(r | VP && dumped)
- What's the count? How many times this rule was used with dump, divided by the total number of VPs that dump appears in

Preferences

- Subcategorization captures the affinity between VP heads (verbs) and the VP rules they go with
- What about the affinity between VP heads and the heads of the other daughters of the VP?

Preferences (cont'd)
- The issue here is the attachment of the PP, so the affinities we care about are the ones between dumped and into vs. sacks and into
- So count the places where dumped is the head of a constituent that has a PP daughter with into as its head, and normalize
- Vs. the situation where sacks is the head of a constituent with into as the head of a PP daughter

Preferences (cont'd)

- Consider the VPs:
  - ate spaghetti with gusto
  - ate spaghetti with marinara
- The affinity of gusto for eat is much larger than its affinity for spaghetti
- On the other hand, the affinity of marinara for spaghetti is much higher than its affinity for eat
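The count-and-normalize scheme for these attachment preferences can be sketched as follows; the co-occurrence counts are invented for illustration:

```python
from collections import Counter

# Invented counts of (constituent head, PP head) co-occurrences.
head_pp_counts = Counter({
    ("eat", "gusto"): 40, ("spaghetti", "gusto"): 2,
    ("eat", "marinara"): 3, ("spaghetti", "marinara"): 35,
})

def affinity(head, pp_head):
    """Relative frequency of pp_head among PPs attached under head."""
    total = sum(c for (h, _), c in head_pp_counts.items() if h == head)
    return head_pp_counts[(head, pp_head)] / total

print(affinity("eat", "gusto") > affinity("spaghetti", "gusto"))        # True
print(affinity("spaghetti", "marinara") > affinity("eat", "marinara"))  # True
```

Under these counts the parser would attach "with gusto" to the verb and "with marinara" to the noun, matching the intuition above.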

Moving away from CFGs
- To dependency grammars

Dependency Grammars
- Model binary dependencies between words
- For instance:
  - I eat
  - eat apples
- Find the set of dependencies that best fits the input words (i.e. with no contradictions)

Dependency Grammars (cont'd)
- Link Grammar (CMU)
  - A lexicon
  - A set of rules indicating the type of links a word can participate in
- Example: I: S+   eat: S-, O+   apple: O-
- Match these links (using a dynamic programming approach)
- http://bobo.link.cs.cmu.edu/link/

Evaluating Parsers
- Precision: # of correct phrases/dependencies in the parse / total # of phrases/dependencies in the parse
- Recall: # of correct phrases/dependencies in the parse / total # of phrases/dependencies in the treebank (gold standard)

Evaluating Parsers (cont'd)
- Cross-brackets:

- # of crossing brackets between the parse and the treebank
- E.g. (A (B C)) vs. ((A B) C) has one crossing bracket

Summary
- Probabilistic CFG
- Learning the parameters
- Dependency Grammar
- Parser Evaluation

Next Time
- Unification Grammar
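The precision/recall definitions from the evaluation slides can be sketched over labeled constituent spans (label, start, end); the example spans below are invented:

```python
# Invented constituent spans: (label, start, end) over a 3-word sentence.
parse = {("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3), ("NP", 1, 3)}
gold  = {("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3), ("NP", 2, 3)}

correct = len(parse & gold)         # constituents found in both
precision = correct / len(parse)    # correct / total in the parse
recall = correct / len(gold)        # correct / total in the gold standard
print(precision, recall)  # 0.75 0.75
```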
