
Python package for creating labeled examples from wiki dumps


Wikipedia NER
-------------

A tool for training on, and extracting labeled named entity recognition (NER)
examples from, Wikipedia dumps.

Usage in [IPython notebook](http://nbviewer.ipython.org/github/JonathanRaiman/wikipedia_ner/blob/master/Wikipedia%20to%20Named%20Entity%20Recognition.ipynb) (*nbviewer* link).

## Usage

Here is an example using the first 200 articles from the English Wikipedia dump (dated late 2013):

```python
import wikipedia_ner

# Parse the first 200 articles from a compressed dump
parseresult = wikipedia_ner.parse_dump("enwiki.bz2",
                                       max_articles=200)

# Most frequently seen category across the parsed pages
most_common_category = wikipedia_ner.ParsedPage.categories_counter.most_common(1)[0][0]

# Resolve the category's child page indices back to their names
most_common_category_children = [
    parseresult.index2target[child]
    for child in wikipedia_ner.ParsedPage.categories[most_common_category].children
]

"In '%s' the children are %r" % (
    most_common_category,
    ", ".join(most_common_category_children)
)

#=> "In 'Category : Member states of the United Nations' the children are 'Afghanistan, Algeria, Andorra, Antigua and Barbuda, Azerbaijan, Angola, Albania'"
```
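The `categories_counter.most_common(1)[0][0]` idiom above appears to rely on `categories_counter` behaving like a standard `collections.Counter`. A minimal, self-contained sketch of that pattern, with made-up category counts standing in for a real parsed dump:

```python
from collections import Counter

# Hypothetical stand-in for ParsedPage.categories_counter: a Counter
# mapping category names to how often they were seen while parsing.
categories_counter = Counter({
    "Category:Member states of the United Nations": 193,
    "Category:Living people": 87,
    "Category:American films": 42,
})

# most_common(1) returns a list of (key, count) pairs sorted by count,
# so [0][0] extracts the name of the single most frequent category.
most_common_category = categories_counter.most_common(1)[0][0]
print(most_common_category)
# → Category:Member states of the United Nations
```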



Download files

Source distribution: wikipedia-ner-0.0.21.tar.gz (76.8 kB)

File details

Hashes for wikipedia-ner-0.0.21.tar.gz:

- SHA256: 0bfedf55b02f14225269b81bf4acc3857d8973ebb89e15422a4ef3c0fb845807
- MD5: 03bfb96544f2755cdf48bb7f060e5f3d
- BLAKE2b-256: b4c1db0c1024c3e8310c2613eb5d4f5fc6b538026f6bd8660691651fb48d7ee3

