
Crawls and indexes websites

Project description

SmolCrawl

A lightweight web crawler and indexer for creating searchable document collections from websites.

Overview

SmolCrawl is a Python-based tool that helps you:

  • Crawl websites and extract content
  • Convert HTML content to readable markdown
  • Index pages for efficient searching
  • Query indexed content with relevance scoring

Perfect for creating local knowledge bases, documentation search, or personal research collections.

Features

  • Simple Web Crawling: Easily crawl and extract content from target websites
  • Content Extraction: Automatically extracts meaningful content from HTML using readability algorithms
  • Markdown Conversion: Converts HTML content to clean, readable markdown format
  • Fast Indexing: Uses Tantivy (Rust-based search library) for performant full-text search
  • Caching: Implements disk-based caching to avoid redundant crawling
  • CLI Interface: Simple command-line interface for all operations

Installation

# Clone the repository
git clone https://github.com/yourusername/smolcrawl.git
cd smolcrawl

# Install the package
pip install -e .

Requirements

  • Python 3.11 or higher
  • Dependencies are automatically installed with the package

Usage

Crawl a Website

smolcrawl crawl https://example.com

Index a Website

smolcrawl index https://example.com my_index_name

List Available Indices

smolcrawl list_indices

Query an Index

smolcrawl query my_index_name "your search query" --limit 10 --score_threshold 0.5
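
The `--limit` and `--score_threshold` flags behave like a post-filter over scored hits. A rough sketch of that logic (hypothetical helper, shown for illustration only):

```python
def filter_hits(hits: list[tuple[float, str]], limit: int = 10,
                score_threshold: float = 0.5) -> list[tuple[float, str]]:
    """Keep only hits at or above the threshold, best first, capped at `limit`."""
    kept = [h for h in hits if h[0] >= score_threshold]
    kept.sort(key=lambda h: h[0], reverse=True)
    return kept[:limit]
```
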

Delete an Index

smolcrawl delete_index my_index_name

Configuration

SmolCrawl uses environment variables for configuration:

  • STORAGE_PATH: Path to store data (default: ./data)
  • CACHE_PATH: Path for caching (default: ./data/cache)

You can set these in a .env file in the project root.
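
Reading these variables with their documented defaults might look like the following (a hypothetical helper; SmolCrawl may load its configuration differently, e.g. via python-dotenv):

```python
import os
from pathlib import Path

def get_storage_paths() -> tuple[Path, Path]:
    # Fall back to the documented defaults when the variables are unset.
    storage = Path(os.environ.get("STORAGE_PATH", "./data"))
    cache = Path(os.environ.get("CACHE_PATH", "./data/cache"))
    return storage, cache
```
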

Project Structure

smolcrawl/
├── src/smolcrawl/
│   ├── __init__.py    # CLI and entry points
│   ├── crawl.py       # Web crawling functionality
│   ├── db.py          # Indexing and search functionality
│   └── utils.py       # Utility functions
├── data/              # Storage for indices and cache (gitignored)
├── .gitignore
└── pyproject.toml     # Project metadata and dependencies

How It Works

  1. Crawling: Uses BeautifulSoupCrawler to fetch web pages and extract links
  2. Content Processing: Extracts meaningful content using ReadabiliPy and converts to markdown
  3. Indexing: Stores extracted content in a Tantivy index for efficient searching
  4. Searching: Performs full-text search on indexed content with relevance ranking
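
The link-extraction part of step 1 can be illustrated with only the standard library. SmolCrawl itself uses BeautifulSoupCrawler; this stdlib sketch just shows the idea of collecting absolute URLs from a fetched page:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from <a href="..."> tags."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's URL.
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html: str, base_url: str) -> list[str]:
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links
```

A breadth-first crawl then repeats: fetch a page, extract its links, and enqueue any not yet visited.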

License

[Your License Choice]

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Download files

Download the file for your platform.

Source Distribution

smolcrawl-0.1.0.tar.gz (23.2 kB)


Built Distribution


smolcrawl-0.1.0-py3-none-any.whl (7.1 kB)


File details

Details for the file smolcrawl-0.1.0.tar.gz.

File metadata

  • Download URL: smolcrawl-0.1.0.tar.gz
  • Upload date:
  • Size: 23.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.10

File hashes

Hashes for smolcrawl-0.1.0.tar.gz:

  • SHA256: b2030b17b78692672460f752b41ecf3c0c069927e2035813a27da180eed9dcc8
  • MD5: f599691dea5dda8671d8eddcda20e8f3
  • BLAKE2b-256: aee0e6c5949885c98af110a7fa69187367912e453a78e63a4a3a2048f01ce86a


File details

Details for the file smolcrawl-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: smolcrawl-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 7.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.10

File hashes

Hashes for smolcrawl-0.1.0-py3-none-any.whl:

  • SHA256: 6e936e35e1011490a27f7abc17283e751e56fc51eb78f29e585578f9be109858
  • MD5: 585d3d07ec23678afdbd56bf0d8ee73e
  • BLAKE2b-256: d9f2d77d4b1ef7e6d96c55b24068d8eb744b07ae79cebd851d585234f74f1cbe

