The fastest web crawler, written in Rust and ported to Python.
spider-py
The spider project ported to Python.
Getting Started
```
pip install spider_rs
```
```python
import asyncio

from spider_rs import crawl

async def main():
    website = await crawl("https://choosealicense.com")
    print(website.links)
    # print(website.pages)

asyncio.run(main())
```
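Since `website.links` is a plain list of URL strings, you can post-process the crawl result with the standard library alone. As a sketch (the helper name is ours, not part of spider_rs), here is one way to count how many links belong to each host:

```python
from collections import Counter
from urllib.parse import urlparse

def links_by_host(links):
    """Count how many crawled links belong to each host."""
    return Counter(urlparse(link).netloc for link in links)

# Static data for illustration; in practice pass the `website.links`
# returned by the crawl shown above.
counts = links_by_host([
    "https://choosealicense.com/licenses/",
    "https://choosealicense.com/about/",
    "https://github.com/github/choosealicense.com",
])
print(counts.most_common(1))  # [('choosealicense.com', 2)]
```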
Use the Website class to build the crawler you need.
```python
import asyncio

from spider_rs import Website

async def main():
    website = Website("https://choosealicense.com", False)
    website.crawl()
    print(website.get_links())

asyncio.run(main())
```
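`get_links()` likewise returns URL strings, so standard-library post-processing applies here too. A minimal sketch (the helper is illustrative, not part of spider_rs) that drops URL fragments and duplicates before further processing:

```python
from urllib.parse import urldefrag

def unique_links(links):
    """Strip URL fragments and drop duplicates, keeping first-seen order."""
    seen = set()
    out = []
    for link in links:
        url, _fragment = urldefrag(link)
        if url not in seen:
            seen.add(url)
            out.append(url)
    return out

# Static data for illustration; in practice pass `website.get_links()`.
print(unique_links([
    "https://choosealicense.com/licenses/#gpl",
    "https://choosealicense.com/licenses/",
    "https://choosealicense.com/about/",
]))
```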
Development
Install Python and maturin (`pipx install maturin`), then build and install the extension into your environment:

```
maturin develop
```
Todo
- Fix custom HTTP header assignment.
- Add better docs.
Once these items are done, the base of the module should be complete. Most of the code comes from the initial Node.js port.