Range concatenation grammar

Range concatenation grammar (RCG) is a grammar formalism developed by Pierre Boullier [1] in 1998 as an attempt to characterize a number of phenomena of natural language, such as Chinese numbers and German word order scrambling, which are outside the bounds of the mildly context-sensitive languages.[2]

From a theoretical point of view, the class of languages that can be parsed in polynomial time is exactly the class of languages described by the subset of RCGs called positive range concatenation grammars.[4]

Though intended as a variant on Groenink's literal movement grammars (LMGs), RCGs treat the grammatical process more as a proof than as a production. Whereas LMGs produce a terminal string from a start predicate, RCGs aim to reduce a start predicate applied to a terminal string to the empty string, which constitutes a proof of the terminal string's membership in the language.

Description


Formal definition


A Positive Range Concatenation Grammar (PRCG) is a tuple $G = (N, T, V, S, P)$ (a small data-structure sketch follows the definition), where:

  • $N$, $T$ and $V$ are disjoint finite sets of (respectively) predicate names, terminal symbols and variable names. Each predicate name has an associated arity given by the function $\dim\colon N \to \mathbb{N} \setminus \{0\}$.
  • $S \in N$ is the start predicate name and satisfies $\dim(S) = 1$.
  • $P$ is a finite set of clauses of the form $\psi_0 \to \psi_1 \ldots \psi_m$, where the $\psi_i$ are predicates of the form $A_i(\alpha_1, \ldots, \alpha_{\dim(A_i)})$ with $A_i \in N$ and $\alpha_j \in (T \cup V)^*$.
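As a concrete illustration of this definition, a PRCG can be encoded with a few data structures. The sketch below is purely illustrative: the names Pred, Clause and RCG are ad hoc, not part of Boullier's formalism, and each argument is represented as a string in which every character is either a terminal from T or a variable from V. The grammar shown is the one used in the Example section below.

```python
# Illustrative encoding only: names and representation are ad hoc assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pred:
    name: str        # a predicate name from N
    args: tuple      # dim(name) arguments, each a string over T ∪ V

@dataclass(frozen=True)
class Clause:
    lhs: Pred        # psi_0
    rhs: tuple       # psi_1 ... psi_m; the empty tuple stands for the empty string

@dataclass(frozen=True)
class RCG:
    terminals: frozenset   # T
    variables: frozenset   # V
    start: str             # S, with dim(S) = 1
    clauses: tuple         # P

# The grammar for {www | w in {a,b}*} from the Example section below:
G = RCG(
    terminals=frozenset("ab"),
    variables=frozenset("xyz"),
    start="S",
    clauses=(
        Clause(Pred("S", ("xyz",)), (Pred("A", ("x", "y", "z")),)),
        Clause(Pred("A", ("ax", "ay", "az")), (Pred("A", ("x", "y", "z")),)),
        Clause(Pred("A", ("bx", "by", "bz")), (Pred("A", ("x", "y", "z")),)),
        Clause(Pred("A", ("", "", "")), ()),
    ),
)
```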

A Negative Range Concatenation Grammar (NRCG) is defined like a PRCG, but with the addition that some predicates occurring in the right-hand side of a clause can have the form $\overline{A(\alpha_1, \ldots, \alpha_{\dim(A)})}$. Such predicates are called negative predicates.

A Range Concatenation Grammar is a positive or a negative one. Although PRCGs are technically NRCGs, the terms are used to highlight the absence (PRCG) or presence (NRCG) of negative predicates.

A range in a word $w \in T^*$ is a pair $(l, r)$, with $0 \le l \le r \le n$, where $n$ is the length of $w$. Variables bind to ranges, not to arbitrary terminal strings. Two ranges $(l_1, r_1)$ and $(l_2, r_2)$ can be concatenated iff $r_1 = l_2$, and we then have $(l_1, r_1) \cdot (l_2, r_2) = (l_1, r_2)$. When instantiating a clause, where an argument consists of multiple elements from $T \cup V$, their ranges must concatenate.

For a word $w = w_1 \ldots w_n$, with $n = |w|$, the dotted notation for ranges is: $(l, r) = w_1 \ldots w_l \bullet w_{l+1} \ldots w_r \bullet w_{r+1} \ldots w_n$.
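A minimal sketch of these two notions, assuming ranges are represented as plain Python pairs of integers (the helper names concat and dotted are illustrative, not standard notation):

```python
# Range concatenation and dotted notation, following the definitions above.

def concat(r1, r2):
    """Concatenate ranges (l1, u1) and (l2, u2); defined only when u1 == l2."""
    (l1, u1), (l2, u2) = r1, r2
    if u1 != l2:
        return None               # the ranges are not adjacent
    return (l1, u2)

def dotted(w, rng):
    """Dotted notation: w1..wl • w(l+1)..wr • w(r+1)..wn."""
    l, r = rng
    return f"{w[:l]}\u2022{w[l:r]}\u2022{w[r:]}"

w = "abbabbabb"
print(concat((0, 3), (3, 6)))     # (0, 6)
print(concat((0, 3), (4, 6)))     # None: not adjacent
print(dotted(w, (3, 6)))          # abb•abb•abb
```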

Recognition of strings


The strings of predicates being rewritten represent constraints that the string being tested has to satisfy (if positive), or in the case of negative predicates not satisfy. The order of predicates is irrelevant. Rewrite steps amount to replacing one constraint by zero or more simpler constraints.

Like LMGs, RCG clauses have the general schema $A(x_1, \ldots, x_n) \to \alpha$, where in an RCG, $\alpha$ is either the empty string or a string of predicates. The arguments $x_i$ consist of strings of terminal symbols and/or variable symbols, which pattern match against actual argument values like in LMG. Adjacent variables constitute a family of matches against partitions, so that the argument $xy$, with two variables, matches the literal string $ab$ in three different ways: $x = \varepsilon,\ y = ab$; $x = a,\ y = b$; $x = ab,\ y = \varepsilon$. These would give rise to three different instantiations of the clause containing that argument $xy$.
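The family of matches can be enumerated mechanically; the sketch below (the function name matches is an illustrative assumption) lists every binding of a run of adjacent variables against a literal string, reproducing the three matches of $xy$ against $ab$.

```python
# Enumerate every way a run of adjacent variables can match a literal string:
# each variable takes one consecutive piece of the string.
from itertools import combinations_with_replacement

def matches(variables, s):
    """All bindings of the given variables to consecutive pieces of s."""
    n, k = len(s), len(variables)
    for cuts in combinations_with_replacement(range(n + 1), k - 1):
        bounds = (0,) + cuts + (n,)
        yield {v: s[bounds[i]:bounds[i + 1]] for i, v in enumerate(variables)}

# The argument xy against the literal string ab:
for binding in matches(["x", "y"], "ab"):
    print(binding)
# {'x': '', 'y': 'ab'}, {'x': 'a', 'y': 'b'}, {'x': 'ab', 'y': ''}
```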

Predicate terms come in two forms, positive (which produce the empty string on success), and negative (which produce the empty string on failure, i.e. if the corresponding positive term does not produce the empty string). Negative terms are denoted the same as positive terms, with an overbar, as in $\overline{A(x_1, \ldots, x_n)}$.

The rewrite semantics for RCGs is rather simple, identical to the corresponding semantics of LMGs. Given a predicate string $A(\sigma_1, \ldots, \sigma_n)$, where the symbols $\sigma_i$ are terminal strings, if there is a rule $A(x_1, \ldots, x_n) \to \alpha$ in the grammar that the predicate string matches, the predicate string is replaced by $\alpha$, substituting for the matched variables in each predicate of $\alpha$.

For example, given the rule $A(x, ayb) \to B(axb, y)$, where $x$ and $y$ are variable symbols and $a$ and $b$ are terminal symbols, the predicate string $A(a, abb)$ can be rewritten as $B(aab, b)$, because $A(a, abb)$ matches $A(x, ayb)$ when $x = a$ and $y = b$. Similarly, if there were a rule $A(x, ayb) \to B(x, y)\, B(y, x)$, $A(a, abb)$ could be rewritten as $B(a, b)\, B(b, a)$.

A proof/recognition of a string $w$ is done by showing that $S(w)$ produces the empty string. For the individual rewrite steps, when multiple alternative variable matches are possible, any rewrite which could lead the whole proof to succeed is considered. Thus, if there is at least one way to produce the empty string from the initial string $S(w)$, the proof is considered a success, regardless of how many other ways to fail exist.
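As a minimal sketch of this semantics (not Boullier's own parsing algorithm), the following naive recognizer rewrites a predicate string by matching clause left-hand sides and succeeds when a goal reduces to the empty predicate string. It assumes single-character terminals and variables, represents arguments as plain strings rather than ranges, and its depth-first search is not guaranteed to terminate for every grammar, though it does for the example grammar of the next section. All names are ad hoc.

```python
# Naive sketch: predicates are (name, args) pairs, clauses are (lhs, rhs) pairs.

def match_arg(pattern, s, env, terminals):
    """All extensions of env matching one LHS argument pattern against string s."""
    if not pattern:
        if not s:
            yield env
        return
    head, rest = pattern[0], pattern[1:]
    if head in terminals:                              # terminal: literal match
        if s.startswith(head):
            yield from match_arg(rest, s[1:], env, terminals)
    elif head in env:                                  # variable already bound
        if s.startswith(env[head]):
            yield from match_arg(rest, s[len(env[head]):], env, terminals)
    else:                                              # fresh variable: try every prefix
        for i in range(len(s) + 1):
            yield from match_arg(rest, s[i:], {**env, head: s[:i]}, terminals)

def match_lhs(patterns, values, env, terminals):
    """Match every LHS argument against the corresponding actual argument."""
    if not patterns:
        yield env
        return
    for env2 in match_arg(patterns[0], values[0], env, terminals):
        yield from match_lhs(patterns[1:], values[1:], env2, terminals)

def subst(arg, env):
    """Replace variables in an RHS argument by the strings they are bound to."""
    return "".join(env.get(c, c) for c in arg)

def provable(goals, clauses, terminals):
    """True iff the predicate string `goals` can be rewritten to the empty string."""
    if not goals:
        return True
    (name, args), rest = goals[0], goals[1:]
    for (lname, largs), rhs in clauses:
        if lname != name or len(largs) != len(args):
            continue
        for env in match_lhs(largs, args, {}, terminals):
            new = tuple((rname, tuple(subst(a, env) for a in rargs))
                        for rname, rargs in rhs)
            if provable(new + rest, clauses, terminals):
                return True
    return False

# The grammar for {www | w in {a,b}*} from the next section:
CLAUSES = (
    (("S", ("xyz",)), (("A", ("x", "y", "z")),)),
    (("A", ("ax", "ay", "az")), (("A", ("x", "y", "z")),)),
    (("A", ("bx", "by", "bz")), (("A", ("x", "y", "z")),)),
    (("A", ("", "", "")), ()),
)
print(provable((("S", ("abbabbabb",)),), CLAUSES, set("ab")))   # True
print(provable((("S", ("abbabbab",)),), CLAUSES, set("ab")))    # False
```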

Example


RCGs are capable of recognizing the non-linear indexed language $\{\, www \mid w \in \{a, b\}^* \,\}$ as follows:

Letting x, y, and z be variable symbols:

$S(xyz) \to A(x, y, z)$
$A(ax, ay, az) \to A(x, y, z)$
$A(bx, by, bz) \to A(x, y, z)$
$A(\varepsilon, \varepsilon, \varepsilon) \to \varepsilon$

The proof for abbabbabb is then

$S(abbabbabb) \to A(abb, abb, abb) \to A(bb, bb, bb) \to A(b, b, b) \to A(\varepsilon, \varepsilon, \varepsilon) \to \varepsilon$

Or, using the more correct dotted notation for ranges:

$S(\bullet abbabbabb \bullet) \to A(\bullet abb \bullet abbabb,\ abb \bullet abb \bullet abb,\ abbabb \bullet abb \bullet)$
$\to A(a \bullet bb \bullet abbabb,\ abba \bullet bb \bullet abb,\ abbabba \bullet bb \bullet)$
$\to A(ab \bullet b \bullet abbabb,\ abbab \bullet b \bullet abb,\ abbabbab \bullet b \bullet)$
$\to A(abb \bullet\bullet abbabb,\ abbabb \bullet\bullet abb,\ abbabbabb \bullet\bullet)$
$\to \varepsilon$

For a string of $3n$ letters, there are $O(n^2)$ different instantiations of that first clause, but only the one which makes $x$, $y$ and $z$ all $n$ letters each allows the derivation to reach $\varepsilon$.
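To make the count concrete, the following sketch (purely illustrative) enumerates every instantiation of the first clause for the 9-letter string abbabbabb and checks which of them can complete the derivation; in this grammar the predicate $A$ succeeds exactly when its three arguments are equal.

```python
# Enumerate every instantiation of S(xyz) -> A(x, y, z) for w = abbabbabb.
# A(x, y, z) in this grammar succeeds exactly when x = y = z.
w = "abbabbabb"
splits = [(w[:i], w[i:j], w[j:])
          for i in range(len(w) + 1)
          for j in range(i, len(w) + 1)]
successful = [s for s in splits if s[0] == s[1] == s[2]]
print(len(splits))      # 55 instantiations for this 9-letter string
print(successful)       # [('abb', 'abb', 'abb')] -- the only one that succeeds
```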

Properties


Every context-free grammar (CFG) can be converted into a range concatenation grammar (see the sketch after this list):

  • For every nonterminal $X$ of the CFG, the RCG has an arity $1$ predicate $X$.
  • For every CFG rule $X \to Y_1 \ldots Y_m$ (where the $Y_i$ are nonterminals), the RCG has the clause $X(y_1 \cdots y_m) \to Y_1(y_1) \ldots Y_m(y_m)$, with $y_1, \ldots, y_m$ distinct variables.
  • For every CFG rule $X \to t$ (where $t$ is a terminal), the RCG has the clause $X(t) \to \varepsilon$.
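A sketch of this conversion, under the assumption that CFG rules are supplied as (left-hand side, right-hand side) pairs with uppercase nonterminals and lowercase terminals; the function name cfg_to_rcg and the clause encoding are illustrative only, and rules mixing terminals and nonterminals are handled by copying the terminals directly into the clause's argument.

```python
def cfg_to_rcg(cfg_rules):
    """Turn CFG rules (uppercase = nonterminal, lowercase = terminal) into RCG clauses."""
    clauses = []
    for lhs, rhs in cfg_rules:
        arg, preds = "", []
        for i, sym in enumerate(rhs):
            if sym.isupper():               # nonterminal: fresh variable + arity-1 predicate
                var = f"x{i}"
                arg += var
                preds.append((sym, (var,)))
            else:                           # terminal: copied literally into the argument
                arg += sym
        clauses.append(((lhs, (arg,)), tuple(preds)))
    return clauses

# S -> aSb | ε, a CFG for {a^n b^n}:
for clause in cfg_to_rcg([("S", "aSb"), ("S", "")]):
    print(clause)
# (('S', ('ax1b',)), (('S', ('x1',)),))
# (('S', ('',)), ())
```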

The intersection and union of two range concatenation languages are trivially range concatenation languages (see the sketch after this list):

  • For $L_1 \cap L_2$, the intersection of $L_1$ and $L_2$, you have the clause $S(x) \to S_1(x)\, S_2(x)$, where $S$ is a new start predicate and $S_1$ and $S_2$ are the start predicates of the grammars for $L_1$ and $L_2$.
  • For $L_1 \cup L_2$, the union of $L_1$ and $L_2$, you have the clauses $S(x) \to S_1(x)$ and $S(x) \to S_2(x)$.
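In the same ad hoc clause encoding as the sketches above, and assuming the two grammars have been renamed apart so that their predicate names do not clash and their start predicates are S1 and S2, the closure constructions amount to the following clauses.

```python
# S is a new start predicate; S1 and S2 are the (renamed-apart) start
# predicates of grammars for L1 and L2. Encoding is illustrative only.
INTERSECTION = (
    (("S", ("x",)), (("S1", ("x",)), ("S2", ("x",)))),   # S(x) -> S1(x) S2(x)
)
UNION = (
    (("S", ("x",)), (("S1", ("x",)),)),                  # S(x) -> S1(x)
    (("S", ("x",)), (("S2", ("x",)),)),                  # S(x) -> S2(x)
)
```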

Range concatenation languages (allowing negative predicates) are also closed under set complement.

A consequence of the above is that it is undecidable whether a (positive) range concatenation language is nonempty, because it is undecidable whether the intersection of two context-free languages is nonempty. Hence range concatenation grammars are not generative.

References

  1. ^ Boullier, Pierre (Jan 1998). Proposal for a Natural Language Processing Syntactic Backbone (PDF) (Technical report). Vol. 3342. INRIA Rocquencourt (France).
  2. ^ Pierre Boullier (1999). "Chinese Numbers, MIX, Scrambling, and Range Concatenation Grammars" (PDF). Proc. EACL. pp. 53–60. Archived from the original (PDF) on 2003-05-15.
  3. ^ Eberhard Bertsch and Mark-Jan Nederhof (Oct 2001). "On the complexity of some extensions of RCG parsing" (PDF). Proceedings of the Seventh International Workshop on Parsing Technologies (Beijing). pp. 66–77.
  4. ^ Laura Kallmeyer (2010). Parsing Beyond Context-Free Grammars. Springer Science & Business Media. p. 37. ISBN 978-3-642-14846-0. Citing Bertsch & Nederhof (2001).[3]