An unsupervised dependency parser.
This is an implementation of the unsupervised dependency parser described by Søgaard (2012). The parser is language independent and does not need any training data.
The parser operates in two stages. First, it constructs a directed graph from the words in a sentence. The resulting graph structure is used to rank the words with the PageRank algorithm (Brin and Page, 1998). In the second stage, the parser constructs a dependency tree from that ranked list of words. If part-of-speech information is available, the parser can additionally make use of universal dependency rules (Naseem et al., 2010).
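As a rough illustration of the ranking stage, here is a minimal power-iteration PageRank over a toy word graph. This is only a sketch: the edge set, damping factor, and iteration count are assumptions for the example, not the parser's actual graph-construction heuristics.

```python
def pagerank(edges, nodes, damping=0.85, iterations=50):
    """Simple power-iteration PageRank over a directed graph.

    edges: list of (source, target) pairs; an edge from source is a
    vote for the target's importance.
    """
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_degree = {n: 0 for n in nodes}
    for src, _ in edges:
        out_degree[src] += 1
    for _ in range(iterations):
        # Each node keeps a base share of rank and receives a share of
        # the rank of every node that links to it.
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src, tgt in edges:
            new_rank[tgt] += damping * rank[src] / out_degree[src]
        rank = new_rank
    return rank

# Toy graph: the content word "need" receives the most incoming edges,
# so it ends up ranked highest; the function word "our" receives none.
words = ["people", "need", "help", "our"]
edges = [("our", "help"), ("people", "need"),
         ("help", "need"), ("our", "people")]
ranks = pagerank(edges, words)
ranking = sorted(words, key=ranks.get, reverse=True)
```

In the real parser the highest-ranked words end up closest to the root of the dependency tree.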
Note: The list of function words is extracted from the whole input text by applying a variant of Mihalcea and Tarau’s (2004) TextRank algorithm.

Note: The parser relies on a universal part-of-speech tagset (Petrov et al., 2012). The language-dependent input tags are mapped to that universal tagset using the mappings provided here.
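The mapping from a language-specific tagset to the universal one is a plain table lookup. The sketch below uses a small, hypothetical excerpt of a Penn-Treebank-to-universal mapping; the actual mapping files are those of Petrov et al. (2012).

```python
# Hypothetical excerpt of a fine-grained-to-universal tag mapping
# (the real, complete mappings come from Petrov et al., 2012).
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN",
    "VB": "VERB", "VBP": "VERB",
    "JJ": "ADJ",
    "PRP$": "PRON",
    ".": ".",
}

def to_universal(tags):
    """Map a list of language-specific tags to universal tags.

    Tags missing from the table fall back to "X" (other).
    """
    return [PTB_TO_UNIVERSAL.get(t, "X") for t in tags]
```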
Usurper can be easily installed using pip:
pip install Usurper
You can use the parser as a standalone program from the command line. Your input text has to be either in CoNLL-X format or in a simple format with one token per line and an empty line between sentences. If your data is part-of-speech tagged, the tags should be separated from the tokens by a tab:
Many	JJ
people	NNS
need	VBP
our	PRP$
help	NN
.	.

Please	UH
continue	VB
our	PRP$
important	JJ
partnership	NN
.	.
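A reader for this format fits in a few lines. The function below is an illustration of the format only, not the package's actual loader:

```python
def read_tagged_sentences(text):
    """Split one-token-per-line input into (tokens, tags) pairs.

    Sentences are separated by empty lines; tokens and tags by a tab.
    For untagged input, the tag field is simply an empty string.
    """
    sentences = []
    tokens, tags = [], []
    for line in text.splitlines():
        if not line.strip():
            # Blank line: close the current sentence, if any.
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, _, tag = line.partition("\t")
        tokens.append(token)
        tags.append(tag)
    if tokens:
        sentences.append((tokens, tags))
    return sentences
```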
General usage information, including a list of supported part-of-speech tagsets, is available via the -h option:

usrpr -h
If you want to use the full parser, i.e. you have part-of-speech tagged input data and you want to use the universal dependency rules, you can invoke the parser like this:
usrpr -t <tag-set> [--conll] <file>
If you do not want to use the universal dependency rules, you can use the --no-rules option:
usrpr --no-rules -t <tag-set> [--conll] <file>
If your data is untagged or you want to ignore the tags, simply omit the -t option (in that case it is not possible to make use of the universal dependency rules):
usrpr [--conll] <file>
Note that the parser tries to identify function words automatically. If your input file is too small, this cannot be done reliably, which may hurt parsing performance.
You can easily incorporate the parser into your own Python projects. All you have to do is import usurper.soegaard:
from usurper import soegaard

parse = soegaard.parse_sentence(tokens, function_words, no_rules, tags, tagset)
The parse_sentence function returns a networkx DiGraph object. You can convert it into a nested list representation using the export_to_conll_format function in usurper.utils.conll.
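The gist of that conversion is walking the tree and emitting one row per word with its index, form, and head index. The sketch below illustrates the idea with a simplified, hypothetical column layout; the real export_to_conll_format function produces full CoNLL columns.

```python
def to_conll_rows(tokens, heads):
    """Emit simplified CoNLL-style rows: [id, form, head].

    heads[i] is the 1-based index of token i's head (0 = root).
    This is an illustration, not the package's actual exporter.
    """
    return [[i + 1, tok, heads[i]] for i, tok in enumerate(tokens)]

# "need" is the root; "People" and "help" attach to it.
rows = to_conll_rows(["People", "need", "help"], [2, 0, 2])
```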
The function’s docstring gives more detailed information about the arguments it takes:
parse_sentence(tokens, function_words, no_rules, tags=None, tagset=None)
    Parse sentence using the algorithm by Søgaard (2012).

    Args:
        tokens: list of tokens
        function_words: set of function words
        no_rules: boolean; True if universal dependency rules should not be used
        tags: list of tags, if available; the nth element of tags should be the
            part-of-speech tag associated with the nth element of tokens
        tagset: string identifying one of the supported tagsets

    Returns:
        A networkx DiGraph representing the dependency structure.
Here is a table giving unlabeled attachment scores (ignoring punctuation) for a couple of languages. Test data for most of the languages is available from the CoNLL-X Shared Task website. Performance for English was evaluated on section 23 of the Penn Treebank.
| Language | no tags | no rules | full parser |
|----------|---------|----------|-------------|