Automatic Assignment of Section Structure to Texts of Dutch Court Judgments

Inferring a Section Hierarchy

Introduction

After we have labeled a sequence of text elements, we wish to infer the section hierarchy. That is, we need a procedure for creating a tree structure in which these tagged text elements are the leaf nodes, and may be children of 'section' nodes. This problem is very similar to constituency parsing for natural languages, which is why we approach it as parsing a token sequence with a Probabilistic Context-Free Grammar (PCFG).

In this chapter, we introduce PCFGs and the Cocke–Younger–Kasami (CYK) algorithm, a deterministic algorithm for finding the best parse tree in cubic time and quadratic space. We conclude with an evaluation of the results.

Methods

Probabilistic Context-Free Grammars

Context-Free Grammars (CFGs) are grammars where each rule is of the form

$A \rightarrow \alpha$

where $A$ is a single non-terminal symbol and $\alpha$ is any string of terminals and non-terminals, including the empty string $\epsilon$.

A Probabilistic Context-Free Grammar (PCFG) is then a Context-Free Grammar in which each rule has a probability assigned to it. A derivation of a sequence with a PCFG has a probability score attached to it, which is the product of the probabilities of all of the applied rules.

In our discussions, we assume probability scores to be real numbers between $0$ and $1$, with the common operations of multiplication and addition, but in implementation we use the log semiring to avoid arithmetic underflow.
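To make the underflow issue concrete, here is a minimal Python sketch (the rule probabilities are invented for illustration) of how a long derivation's score collapses to zero as a plain product, but remains usable as a sum of logarithms:

import math

# Invented probabilities of the rules applied in one long derivation.
rule_probs = [0.8, 0.4, 0.9] * 1000  # 3000 rule applications

# Naive product: collapses to 0.0 through floating-point underflow.
product = 1.0
for p in rule_probs:
    product *= p
print(product)  # 0.0

# Log semiring: multiplication becomes addition, which does not underflow.
log_score = sum(math.log(p) for p in rule_probs)
print(log_score)  # approx. -1244.8, still usable for comparing parses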

CFGs are said to be in Chomsky Normal Form (CNF) if all rules are of one of the following forms:

$A \rightarrow BC$
$A \rightarrow t$

where $A$, $B$ and $C$ are non-terminal types, and $t$ is a terminal type. In the following, we use an extension of CNF with unary rules. In this extension, $t$ may be either a terminal or a non-terminal type.
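Concretely, a grammar in this extended CNF contains three shapes of rules. In the sketches below (Python; the tuple representation is our own choice, not a fixed library format), we write each rule as a tuple that ends in its probability:

# The three rule shapes of CNF extended with unary rules,
# each written as a tuple ending in the rule's probability.
terminal_rules = [("A", "t", 1.0)]       # A -> t,   t a terminal type
unary_rules    = [("A", "B", 1.0)]       # A -> B,   B a non-terminal
binary_rules   = [("A", "B", "C", 1.0)]  # A -> B C, ordinary CNF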

Much work has been done on parsing (P)CFGs, both in natural language processing and in parsing programming languages. More recently, PCFGs have been used for other applications, such as modeling RNA structures, as in Sakakibara et al. (, pp. 5112‑5120).

Listing 1 shows a simplified version of the grammar that we use to create the section hierarchy.

Terminal rules
  Text → text                                      (1.0)
  Text → newline                                   (1.0)
  Numbering → numbering                            (1.0)
  TitleText → section-title                        (1.0)
Non-terminal rules
  Document → Header DocumentContent                (1.0)
  Document → DocumentContent                       (1.0)
  DocumentContent → Sections                       (1.0)
  DocumentContent → Text Sections                  (0.8)
  DocumentContent → Sections Text                  (0.8)
  DocumentContent → Text Sections Text             (0.8)
  Text → Text Text                                 (1.0)
  Sections → Sections Sections                     (0.4; +0.6 if numberings in sequence)
  Sections → Section                               (1.0)
  Sections → Section Text                          (0.9)
  Sections → Text Section                          (0.8)
  Section → SectionTitle SectionContent            (1.0)
  SectionTitle → Numbering                         (1.0)
  SectionTitle → TitleText                         (1.0)
  SectionTitle → Numbering TitleText               (1.0)
  SectionContent → Text                            (1.0)
  SectionContent → Sections                        (1.0)
  SectionContent → SectionContent SectionContent   (1.0)
Listing 1. Simplified grammar for creating the section hierarchy, in CNF with unary rules.
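Most rules in Listing 1 carry a fixed probability, but Sections → Sections Sections scores higher when the numberings of the two constituents run consecutively. One way to implement such a rule, sketched below in Python with a deliberately simplified representation of our own (real numberings such as '3.1' need richer handling), is to let the rule's probability be a function of the two constituents being combined:

from dataclasses import dataclass

@dataclass
class SectionsNode:
    first_number: int  # numbering of the first section in this constituent
    last_number: int   # numbering of the last section in this constituent

def numberings_in_sequence(left: SectionsNode, right: SectionsNode) -> bool:
    # True when the right constituent's first numbering directly
    # follows the left constituent's last numbering.
    return right.first_number == left.last_number + 1

def sections_sections_prob(left: SectionsNode, right: SectionsNode) -> float:
    # Sections -> Sections Sections: 0.4 base probability,
    # plus a 0.6 bonus when the numberings continue each other.
    return 0.4 + (0.6 if numberings_in_sequence(left, right) else 0.0)

# Merging sections 1-2 with sections 3-4 scores 1.0 instead of 0.4.
print(sections_sections_prob(SectionsNode(1, 2), SectionsNode(3, 4)))  # 1.0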

CYK Algorithm

The Cocke–Younger–Kasami (CYK) algorithm is an algorithm for parsing context-free grammars that was independently discovered by Kasami (), Younger (, pp. 189‑208) and Cocke (). The algorithm has a time complexity of $\Theta(n^3 \cdot |G|)$, where $n$ is the length of the input string and $|G|$ is the size of the grammar.

The standard version of the CYK algorithm is defined for ordinary context-free grammars given in Chomsky normal form (CNF), but it is easy to extend with support for probabilistic and unary rules, as we do in this section. Note that any CFG may be transformed into an equivalent grammar in Chomsky normal form, and this also holds for probabilistic CFGs (Huang & Fu (, pp. 201‑224)). Also note that converting a grammar to CNF is not without cost: the increase in grammar size is $O(|G|^2)$ for the best algorithm. The increase is linear if we use a variation of the CYK algorithm that works on grammars in binary normal form (2NF); see Lange and Leiß (, pp. 2008‑2010).
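As an illustration of such a transformation, the ternary rule DocumentContent → Text Sections Text from Listing 1 can be binarized by introducing a fresh intermediate non-terminal (the primed name below is our own):

  DocumentContent → Text DocumentContent'   (0.8)
  DocumentContent' → Sections Text          (1.0)

Assigning probability 1.0 to the new rule preserves the score of every derivation that used the original rule.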

The CYK algorithm is a bottom-up parsing algorithm. The standard algorithm considers every substring of length $1$ to $n$, and keeps a list of all possible types for those substrings, along with their probabilities.

For substrings of length $1$ (individual words), we use the terminal rules in the grammar. For substrings of length $l > 1$ (word sequences), we apply the production rules to every possible way of splitting the substring into two shorter substrings. This works because CNF mandates that every non-terminal production rule has exactly two non-terminals on its right-hand side. Every time we apply a rule, we multiply the probability attached to that rule with the probabilities of the two constituent substrings.
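The following is a minimal Python sketch of this bottom-up procedure for the terminal and binary rules, using the tuple representation introduced earlier and plain probabilities rather than log scores for readability:

from collections import defaultdict

def cyk_parse(tokens, terminal_rules, binary_rules):
    # chart[i, j] maps each non-terminal to the best probability
    # of deriving tokens[i:j] from it (Viterbi scores).
    n = len(tokens)
    chart = defaultdict(dict)

    # Substrings of length 1: apply the terminal rules to each token.
    for i, token in enumerate(tokens):
        for lhs, terminal, p in terminal_rules:
            if terminal == token and p > chart[i, i + 1].get(lhs, 0.0):
                chart[i, i + 1][lhs] = p

    # Longer substrings: combine two shorter substrings at every split.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for split in range(i + 1, j):
                left, right = chart[i, split], chart[split, j]
                for lhs, b, c, p in binary_rules:
                    if b in left and c in right:
                        score = p * left[b] * right[c]
                        if score > chart[i, j].get(lhs, 0.0):
                            chart[i, j][lhs] = score
    return chart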

In addition to binary production rules, we also allow unary rules of the form $A \rightarrow B$ in our grammar, where $A$ and $B$ are both non-terminals. Extending the algorithm is simple: after the ordinary type assignment for a substring, we also add the types that result from applicable unary rules, whenever they produce a non-terminal that is not yet in the cell with at least as much probability. We repeat this until the cell no longer changes.
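This fixpoint iteration over one chart cell can be sketched as follows; in the full algorithm it runs on every cell, directly after the terminal step and after each binary combination, so that chains of unary rules are also found:

def apply_unary_rules(cell, unary_rules):
    # cell is a dict {non-terminal: best probability}, updated in place.
    # Keep applying unary rules A -> B until nothing improves.
    changed = True
    while changed:
        changed = False
        for lhs, rhs, p in unary_rules:
            if rhs in cell and p * cell[rhs] > cell.get(lhs, 0.0):
                cell[lhs] = p * cell[rhs]
                changed = True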

A visual example of the resulting chart can be found in Figure 10.

Span length 1:
  (1,1) "fish":   N 20%, V 60%, NP 14%, VP 6%, S 0.6%
  (2,2) "people": N 50%, V 10%, NP 35%, VP 1%, S 0.1%
  (3,3) "fish":   N 20%, V 60%, NP 14%, VP 6%, S 0.6%
  (4,4) "tanks":  N 20%, V 30%, NP 14%, VP 3%, S 0.3%
Span length 2:
  (1,2) "fish people": NP 0.49%, VP 10.5%, S 1.05%
  (2,3) "people fish": NP 0.49%, VP 0.7%, S 1.89%
  (3,4) "fish tanks":  NP 0.196%, VP 4.2%, S 0.42%
Span length 3:
  (1,3) "fish people fish":  NP 0.0069%, VP 0.147%, S 0.09%
  (2,4) "people fish tanks": NP 0.007%, VP 0.010%, S 1.323%
Span length 4:
  (1,4) "fish people fish tanks": VP 0.002%, NP 0.00001%, S 0.019%
Fig 10. An example parse chart for the sentence "fish people fish tanks", based on the grammar in Listing 2. Each cell lists the candidate non-terminals for a substring along with their best probabilities. The (1,4) cell represents the substring from 1 to 4, i.e. the entire sentence. We can derive S by combining the substring from 1 to 2 (fish people) and the substring from 3 to 4 (fish tanks) using the rule S → NP VP.
  S → NP VP    (0.9)
  S → VP       (0.1)
  VP → V NP    (0.5)
  VP → V       (0.1)
  NP → NP NP   (0.1)
  NP → N       (0.7)
  N → fish     (0.2)
  N → people   (0.5)
  N → tanks    (0.2)
  V → people   (0.1)
  V → fish     (0.6)
  V → tanks    (0.3)
Listing 2. Simple natural language grammar for putting noun phrases (NP) and verb phrases (VP) together to create a sentence (S).
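To make the chart values concrete, consider the VP entry for the span "fish tanks" in Figure 10. Using N → tanks (0.2) and the unary rule NP → N (0.7), the word "tanks" gets NP probability 0.7 × 0.2 = 0.14; the rule VP → V NP then yields

  VP("fish tanks") = 0.5 × 0.6 × 0.14 = 0.042 = 4.2%,

matching the chart. The S entry in the top cell follows the same pattern:

  S("fish people fish tanks") = 0.9 × NP("fish people") × VP("fish tanks") = 0.9 × 0.0049 × 0.042 ≈ 0.000185 ≈ 0.019%.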

Results

PARSEVAL

Evaluating the performance of a parser is not as straightforward as it is for classification. As in the previous chapter, we evaluate our grammar using an F-score, but the notions of precision and recall are harder to define for constituency trees. To evaluate the parser, we use a metric known as PARSEVAL (due to Abney et al. (, pp. 306‑311)) with labeled precision and labeled recall as in Collins (, pp. 16‑23).

In this metric, precision and recall are defined as follows:

  • Precision is the fraction of correct constituents out of the total number of constituents in the candidate parse
  • Recall is the fraction of correct constituents out of the total number of constituents in the correct parse

Here, a constituent in the candidate parse counts as 'correct' if the correct parse contains a non-terminal node with the same label and the same yield, where the yield of a node is the ordered list of leaf nodes it covers.
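In code, labeled precision and recall reduce to computing the overlap between two sets of labeled spans. The following is a minimal sketch, representing each constituent as a (label, start, end) tuple of our own devising, where (start, end) delimits the node's yield:

def parseval(candidate, gold):
    # candidate, gold: sets of (label, start, end) tuples,
    # one per non-terminal node in the respective parse trees.
    correct = len(candidate & gold)
    precision = correct / len(candidate) if candidate else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

Note that a set representation collapses duplicate constituents (e.g. the same label over the same span occurring twice through a unary chain); an implementation that counts such duplicates would use multisets instead.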

Results

Over a set of 10 random documents, we report an average F1-score of 0.92 (precision 0.93; recall 0.92).

Delving deeper into problematic parses, we see a number of recurring error types in our parsing grammar. Firstly, it often occurs that subsections are not preceded by an explicit numbering for their parent section. For example, consider a section numbering sequence such as the following:

1.
2.
3.1
3.2

Our grammar assumes that section 3.1 is a subsection of section 2, since section 2 is the nearest preceding supersection of 3.1. However, this is not the desired result: we would rather wrap the 3.x subsections in a section that represents section 3, even though there is no explicit numbering for section 3. This could be achieved with an extension to the section grammar.

Another issue is that the grammar has difficulty in deciding whether non-numbered sections should be subsections or not. Indeed, this can be difficult to determine based purely on typography.

Discussion

Although we report promising results, there is room for improvement of the parser.

One way to improve parse quality is to incorporate more domain-specific knowledge in the grammar. For example, sections with titles like 'OVERWEGINGEN' (considerations) and 'CONCLUSIE' (conclusion) almost always appear as the first level of sectioning.

Another way to improve the grammar is to make it recognize different 'modes': sibling sections often share a typography that is consistent within a document, but not across documents. For example, in one document all sections are bold and capitalized, sub-sections are italic and sub-sub-sections are underlined, while another document might have no formatting at all.

Owing to the brittleness of the current grammar, we might benefit from implementing a Conditional Probabilistic Context-Free Grammar (Conditional PCFG), as introduced in McCallum & Sutton (). Conditional PCFGs are similar to Conditional Random Fields in that we describe a conditional model instead of a generative one (so the probability distribution $P(\mathbf{y} \mid \mathbf{x})$ instead of $P(\mathbf{x}, \mathbf{y})$), and this allows us to use a large feature set.

Another possibility is to implement a probabilistic version of the Earley parsing algorithm. The Earley algorithm is a top-down parser that makes it easy to intervene during parsing when unexpected input is encountered. It has a worst-case complexity of $O(n^3)$, but parses unambiguous grammars in $O(n^2)$ and left-recursive grammars in $O(n)$, and so can be faster than CYK. In our experiments, CYK starts to become noticeably slow for documents with more than 500 tokens, even after optimizing the algorithm for resource re-use and parallelizing the calculation of the table cells.