📚 wikipya
A simple async python library for search pages and images in wikis
🛠 Usage
# Import wikipya
from wikipya import Wikipya
# Create a Wikipya object with the Wikipedia methods
wiki = Wikipya(lang="en").get_instance()
# ...or use another MediaWiki server (other services exist, but they aren't fully supported yet).
# For example, Lurkmore (a Russian wiki), simple and fast:
lurkmore = Wikipya(url="https://ipv6.lurkmo.re/api.php", lurk=True, prefix="").get_instance()
# Get a list of pages from a search
search = await wiki.search("test")
# Get a list of pages from an opensearch
opensearch = await wiki.opensearch("test")
# Get a page object.
# wiki.page() accepts a search item, a page title, or a page id.
# A search item (ONLY items returned by wiki.search are supported):
page = await wiki.page(search[0])
# Page title
page = await wiki.page("git")
# Page id
page = await wiki.page(800543)
print(page.html) # Get the page HTML
print(page.parsed) # Get the HTML stripped of links and other non-formatting tags
# Get image
image = await wiki.image(page.title) # May not work on non-Wikipedia services; check that the prefix is correct, or open an issue
print(image.source) # Image url
print(image.width) # Image width
print(image.height) # Image height
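All wikipya methods shown above are coroutines, so in a plain script they have to run inside an event loop. A minimal sketch of tying the snippets together, assuming wikipya is installed and a network connection is available:

```python
import asyncio


async def main() -> None:
    # Imported here so the sketch stays readable even without wikipya installed.
    from wikipya import Wikipya

    wiki = Wikipya(lang="en").get_instance()
    results = await wiki.search("git")
    page = await wiki.page(results[0])
    print(page.parsed)  # HTML stripped of links and other non-formatting tags


if __name__ == "__main__":
    asyncio.run(main())
```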
🎉 Features
- Fully async
- Supports other MediaWiki instances
- Supports HTML cleaning with TgHTML
- Uses pydantic models
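The attributes used in the usage section (`page.html`, `page.parsed`, `image.source`, `image.width`, `image.height`) come from pydantic models. A rough stdlib-only approximation of the image model's shape, purely illustrative and not wikipya's actual class:

```python
from dataclasses import dataclass


@dataclass
class Image:
    # Mirrors the attributes shown in the usage example;
    # the real wikipya model is a pydantic class, not this dataclass.
    source: str  # image URL
    width: int
    height: int


img = Image(source="https://example.org/a.png", width=640, height=480)
print(img.source, img.width, img.height)
```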
🚀 Install
To install, run this command:
pip install wikipya