Create import CSVs for a Neo4j Wikipedia Page graph
Project description
wiki2neo
Produce Neo4j import CSVs from Wikipedia database dumps to build a graph of links between Wikipedia pages.
Installation
$ pip install wiki2neo
Usage
Usage: wiki2neo [OPTIONS] [WIKI_XML_INFILE]
Parse Wikipedia pages-articles-multistream.xml dump into two Neo4j import
CSV files:
Node (Page) import, headers=["title:ID", "id"]
Relationships (Links) import, headers=[":START_ID", ":END_ID"]
Reads from stdin by default, pass [WIKI_XML_INFILE] to read from file.
Options:
-p, --pages-outfile FILENAME Node (Pages) CSV output file [default: pages.csv]
-l, --links-outfile FILENAME Relationships (Links) CSV output file [default: links.csv]
--help Show this message and exit.
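The conversion the help text describes can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration (wiki2neo's real internals are not shown on this page): stream the dump with `iterparse`, emit one node row per `<page>`, and one relationship row per `[[wikilink]]` found in its text.

```python
import csv
import io
import re
import xml.etree.ElementTree as ET

# Capture a wikilink's target title: "[[Target|label]]" -> "Target"
LINK_RE = re.compile(r"\[\[([^\]|#]+)")

def parse_dump(xml_file, pages_out, links_out):
    pages = csv.writer(pages_out)
    links = csv.writer(links_out)
    pages.writerow(["title:ID", "id"])        # node header
    links.writerow([":START_ID", ":END_ID"])  # relationship header
    title = page_id = text = None
    for _event, elem in ET.iterparse(xml_file):
        tag = elem.tag.rsplit("}", 1)[-1]  # drop the MediaWiki XML namespace
        if tag == "title":
            title = elem.text
        elif tag == "id" and page_id is None:
            page_id = elem.text  # first <id> under <page> is the page id
        elif tag == "text":
            text = elem.text or ""
        elif tag == "page":
            pages.writerow([title, page_id])
            for target in LINK_RE.findall(text):
                links.writerow([title, target])
            title = page_id = text = None
            elem.clear()  # keep memory flat on multi-GB dumps
```

Note that links reference pages by title, not by numeric id, which is why the node header declares `title:ID` as the identifier column.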
Import resulting CSVs into Neo4j:
$ neo4j-admin import --nodes:Page pages.csv \
--relationships:LINKS_TO links.csv \
--ignore-duplicate-nodes --ignore-missing-nodes --multiline-fields
Downloads from Wikipedia are in compressed xml.bz2 format. The simplest usage is to pipe the extraction output straight into wiki2neo:

$ bzcat pages-articles-multistream.xml.bz2 | wiki2neo
Hashes for wiki2neo-0.0.3-py2.py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | be81aa623ad52ead3415a26bb2fcaa3ad1e72173c516f9f2512485a163c01090
MD5 | c56855c07b4a7202f87ae7ae39a1401c
BLAKE2b-256 | a95a93dc8634b60808a00fa2b8eab2e54e48fa396817c27c1e653ed11fbf7885