spider-py
The fastest web crawler, written in Rust and ported to Python (incomplete port, WIP).
Getting Started
pip install spider_rs
import asyncio
from spider_rs import crawl

async def main():
    website = await crawl("https://choosealicense.com")
    print(website.links)
    # print(website.pages)

asyncio.run(main())
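The `links` collection returned above can be post-processed with ordinary Python. A minimal sketch that groups URLs by their first path segment, using a hard-coded list as a stand-in for `website.links` (the sample URLs and the `first_segment` helper are illustrative, not part of spider_rs):

```python
from urllib.parse import urlparse

# Stand-in for website.links as returned by spider_rs.crawl (sample data).
links = [
    "https://choosealicense.com/licenses/mit/",
    "https://choosealicense.com/licenses/apache-2.0/",
    "https://choosealicense.com/about/",
]

def first_segment(url: str) -> str:
    """Return the first path segment of a URL, or '' for the root."""
    path = urlparse(url).path.strip("/")
    return path.split("/")[0] if path else ""

# Group the crawled links by their leading path segment.
groups: dict[str, list[str]] = {}
for url in links:
    groups.setdefault(first_segment(url), []).append(url)

print(groups)  # {'licenses': [...], 'about': [...]}
```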
Development
Install maturin (pipx install maturin) and Python, then build the module locally:
maturin develop
Todo
- Add thread-safe callback handling for crawl/scrape.
- Add cron callback.
- Add subscription callback.
Once these items are done, the base of the module should be complete. Most of the code comes from the initial Node.js port.
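The first todo item could look roughly like the following. This is a hypothetical sketch of thread-safe callback dispatch, not the module's actual API; `CallbackRegistry` and its methods are invented for illustration:

```python
import threading

class CallbackRegistry:
    """Hypothetical thread-safe callback holder for crawl/scrape events."""

    def __init__(self):
        self._lock = threading.Lock()
        self._callbacks = []

    def subscribe(self, fn):
        # Guard the list so callbacks can be added from any thread.
        with self._lock:
            self._callbacks.append(fn)

    def emit(self, page_url):
        # Snapshot under the lock, then call outside it so a slow
        # callback cannot block other subscribers from registering.
        with self._lock:
            callbacks = list(self._callbacks)
        for fn in callbacks:
            fn(page_url)

registry = CallbackRegistry()
seen = []
registry.subscribe(seen.append)
registry.emit("https://choosealicense.com/")
print(seen)  # ['https://choosealicense.com/']
```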
Download files
Source Distribution
spider_rs-0.0.1.tar.gz (27.7 kB)
Built Distribution
spider_rs-0.0.1-cp39-cp39-macosx_11_0_arm64.whl
Hashes for spider_rs-0.0.1-cp39-cp39-macosx_11_0_arm64.whl
Algorithm | Hash digest
---|---
SHA256 | 59d9b9f0c381ce27fdec75724100a564f4267c99ca26653bd3cfb1c0676bc23c
MD5 | 653fc06c55cda397bccd89b64d6fd16f
BLAKE2b-256 | 456556c931100f6fcb7738cb7d7033d06f23a4f71a078780d252e846e7244701