
Create import CSVs for a Neo4j Wikipedia Page graph


wiki2neo


Produce Neo4j import CSVs from Wikipedia database dumps to build a graph of links between Wikipedia pages.

Installation

$ pip install wiki2neo

Usage

Usage: wiki2neo [OPTIONS] [WIKI_XML_INFILE]

  Parse Wikipedia pages-articles-multistream.xml dump into two Neo4j import
  CSV files:

      Node (Page) import, headers=["title:ID", "id"]
      Relationships (Links) import, headers=[":START_ID", ":END_ID"]

  Reads from stdin by default, pass [WIKI_XML_INFILE] to read from file.

Options:
  -p, --pages-outfile FILENAME  Node (Pages) CSV output file  [default: pages.csv]
  -l, --links-outfile FILENAME  Relationships (Links) CSV output file  [default: links.csv]
  --help                        Show this message and exit.
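
For illustration, the first few rows of each output file might look like this (the titles and page ids below are made-up examples; only the headers are prescribed by the tool):

pages.csv:

    title:ID,id
    Albert Einstein,736
    Physics,22939

links.csv:

    :START_ID,:END_ID
    Albert Einstein,Physics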

Import the resulting CSVs into Neo4j:
$ neo4j-admin import --nodes:Page pages.csv \
        --relationships:LINKS_TO links.csv \
        --ignore-duplicate-nodes --ignore-missing-nodes --multiline-fields
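
Once the import completes, a quick Cypher query (e.g. in cypher-shell or the Neo4j Browser) can sanity-check the graph; the :Page label and :LINKS_TO relationship type match the flags used above:

    MATCH (p:Page)-[:LINKS_TO]->(q:Page)
    RETURN p.title, q.title
    LIMIT 10;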

Wikipedia dumps are distributed as compressed .xml.bz2 files. The simplest usage is to pipe the decompressed output straight into wiki2neo:

$ bzcat pages-articles-multistream.xml.bz2 | wiki2neo
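
To avoid storing the decompressed XML at all, the dump can also be streamed straight from Wikimedia's servers; the URL below is an example pointing at the latest English Wikipedia dump (substitute the wiki and snapshot you need):

$ curl -sL https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream.xml.bz2 \
        | bzcat | wiki2neo -p pages.csv -l links.csv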


Download files


Files for wiki2neo, version 0.0.3:

    Filename                              Size    File type  Python version
    wiki2neo-0.0.3-py2.py3-none-any.whl   2.1 kB  Wheel      py2.py3
    wiki2neo-0.0.3.tar.gz                 2.7 kB  Source     None
