
Project description


wikiscraper

Easy scraper that extracts data from Wikipedia articles using their URL slug: title, images, abstract, section paragraphs, and sidebar info.

Developed by Alexandre MEYER

This work is licensed under a Creative Commons Attribution 4.0 International License.


Installation

$ pip install wikiscraper

Initialization

Import

import wikiscraper as ws

Main request

# Set the Wikipedia language edition for the query
# (ISO 639-1 code; defaults to "en" for English)
ws.lang("fr")
# Search for and fetch an article by its URL slug
# (example: https://fr.wikipedia.org/wiki/Paris)
result = ws.searchBySlug("Paris")
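
The language call is optional; with the default English setting, a minimal end-to-end request looks like this (a sketch against https://en.wikipedia.org/wiki/Paris):

import wikiscraper as ws

# Default language is English, so ws.lang() can be skipped
article = ws.searchBySlug("Paris")
print(article.getTitle())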

Examples

Title H1 & URL

# Get the article's title
result.getTitle()
# Get the article's URL
result.getURL()

Sidebar

# Get the value of a sidebar information label
result.getSideInfo("Gentilé")
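
"Gentilé" is the French sidebar label for the demonym; the label must match the sidebar text of the queried article. If you are not sure the label exists, a defensive sketch (assuming a failed lookup raises an exception, which is not documented above):

# Assumption: an unknown label raises; the exact exception class
# is not documented, so a broad except is used in this sketch
try:
    print(result.getSideInfo("Gentilé"))
except Exception:
    print("Label not found in the sidebar")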

Abstract

# Get all paragraphs of the abstract
print(result.getAbstract())
# Get the second paragraph of the abstract
print(result.getAbstract()[1])
# Optional: get only the first x paragraphs
print(result.getAbstract(2))
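
Since getAbstract() returns the paragraphs as a list, it can also be iterated directly:

# Print each abstract paragraph with its index
for i, paragraph in enumerate(result.getAbstract()):
    print(i, paragraph)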

Images

# Get all illustration images
img = result.getImage()
# Get a specific image by its position in the page
print(img[0]) # Main image
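
If the entries are direct image URLs (an assumption, not confirmed above), they can be downloaded with the standard library; a minimal sketch:

from urllib.request import Request, urlopen

# Assumption: img[0] is a direct URL to the main image file;
# Wikipedia's servers may reject requests without a User-Agent
req = Request(img[0], headers={"User-Agent": "wikiscraper-example"})
with urlopen(req) as response, open("main_image.jpg", "wb") as out:
    out.write(response.read())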

Sections

# Get the table of contents
# Only first-level headlines
print(result.getContentsTable())
# All headlines (first and second levels)
print(result.getContentsTable(subcontents=True))
# Get paragraphs from a specific section by its parent headers' titles
# All args optional: .getSection(h2Title, h3Title, h4Title)
# Example: https://fr.wikipedia.org/wiki/Paris#Politique_et_administration
print(result.getSection('Politique et administration', 'Statut et organisation administrative', 'Historique')[0])
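
The two calls combine naturally: walk the table of contents, then fetch a section by its headline. A sketch assuming getContentsTable() returns a list of headline strings and that the deeper section titles can be omitted:

# List first-level headlines, then pull one section's paragraphs
for headline in result.getContentsTable():
    print(headline)

# h3 and h4 titles are optional, so a top-level lookup also works
print(result.getSection("Politique et administration")[0])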

Errors

"Unable to find the requested query: please check the spelling of the slug"

  • Check if the spelling of the slug is correct
  • Check if the article exists
  • Check if the language set for the query matches the article's language (by default the search targets English Wikipedia); a defensive handling sketch follows below
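
To handle this error programmatically, a sketch assuming the failed lookup surfaces as a raised exception (the exact exception class is not documented here):

import wikiscraper as ws

ws.lang("fr")
try:
    result = ws.searchBySlug("Pariss")  # misspelled slug
except Exception as err:
    print(err)  # "Unable to find the requested query: ..."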

Versions

  • 1.1.0: error handling
  • 1.0.0: initial release

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

wikiscraper-1.1.9.tar.gz (10.6 kB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

wikiscraper-1.1.9-py3-none-any.whl (12.1 kB)


File details

Details for the file wikiscraper-1.1.9.tar.gz.

File metadata

  • Download URL: wikiscraper-1.1.9.tar.gz
  • Size: 10.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for wikiscraper-1.1.9.tar.gz

  • SHA256: 9790c6a07e1b36f4579aeb46adbf013a067365b4e14d29f5edcaddc9b714318a
  • MD5: f1d5108cf1e460fb2de1de582eeec88d
  • BLAKE2b-256: 61063b996e96e8400f84d8d1e075b41ae5917fbfeab087ba931820bf744b4396

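To verify a downloaded archive against the published SHA256 digest above, a short standard-library sketch:

import hashlib

EXPECTED = "9790c6a07e1b36f4579aeb46adbf013a067365b4e14d29f5edcaddc9b714318a"

# Compare the archive's SHA256 digest to the published value
with open("wikiscraper-1.1.9.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "hash mismatch: do not install")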

File details

Details for the file wikiscraper-1.1.9-py3-none-any.whl.

File metadata

  • Download URL: wikiscraper-1.1.9-py3-none-any.whl
  • Size: 12.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for wikiscraper-1.1.9-py3-none-any.whl

  • SHA256: 0c1847ef6d62fe33dda762398c2121ebfba338f4bb94491be1b8eaaa7c772efa
  • MD5: c6eafddffdb32bd452c4e5805a20b7bd
  • BLAKE2b-256: 452ddcd1498fa900a97d8b0800b94cb0dacef96e3ec1663aa6d61789831e7898

