
A Python package for efficiently checking whether a URL is part of a large whitelist or blacklist of URLs and domain names.

Project description

url-is-in

Python 3.8+ · License: Apache 2.0

A Python package for efficiently checking whether URLs are part of large whitelists or blacklists. Built for speed and scalability, url-is-in selects a matching algorithm based on dataset size and supports both URL and SURT-based matching.

Features

  • 🌐 URL Normalization: Uses SURT (Sort-friendly URI Reordering Transform) for consistent URL comparison
  • 🔍 Subdomain Matching: Optional subdomain matching for domain-based filtering
  • 📊 Scalable: Efficiently handles large URL lists using trie matching (tested with more than 1M URLs)
  • 🎯 Flexible: Support for both URL and SURT-based matching
  • 🐍 Python 3.8+: Modern Python support with type hints
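SURT reorders a URL's host labels so that related URLs sort (and prefix-match) together. A minimal pure-Python sketch of the idea, not the `surt` library's full canonicalization (which also lowercases, strips default ports, and sorts query parameters):

```python
from urllib.parse import urlsplit

def to_surt(url: str) -> str:
    """Rough SURT sketch: reverse the host labels, join them with
    commas, then append ')' and the path."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    reversed_host = ",".join(reversed(host.split(".")))
    path = parts.path or "/"
    return f"{reversed_host}){path}"

print(to_surt("https://www.example.com/some/path"))
# com,example,www)/some/path
```

Because the registered domain now comes first, all URLs under the same domain share a common string prefix, which is what makes prefix-based matching work.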

Installation

Using pip

pip install url-is-in

Using uv (recommended for development)

uv add url-is-in

From source

git clone https://github.com/commoncrawl/url-is-in.git
cd url-is-in
pip install -e .

Requirements

  • Python: 3.8 or higher
  • Dependencies:
    • surt - For URL normalization and SURT conversion

Quick Start

Basic URL Matching

from url_is_in import URLMatcher

# Create a matcher with a list of URLs
urls = [
    'https://example.com',
    'https://test.org/specific/path',
    'https://github.com/user/repo'
]

matcher = URLMatcher(urls)

# Check if URLs match
print(matcher.is_in('https://example.com/any/path'))  # True
print(matcher.is_in('https://test.org/specific/path/file.html'))  # True
print(matcher.is_in('https://other.com'))  # False

Subdomain Matching

from url_is_in import URLMatcher

# Enable subdomain matching (default: True)
matcher = URLMatcher(['https://example.com'], match_subdomains=True)

print(matcher.is_in('https://www.example.com'))      # True
print(matcher.is_in('https://api.example.com'))      # True
print(matcher.is_in('https://sub.example.com/path')) # True
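In SURT form the registered domain comes first, so subdomain matching reduces to a string-prefix check on the reversed host. A hypothetical illustration of that reduction (not the package's internals):

```python
def host_matches(candidate_surt_host: str, base_surt_host: str) -> bool:
    """'com,example,www' matches 'com,example' because a subdomain
    only extends the reversed host with further comma-separated labels."""
    return (candidate_surt_host == base_surt_host
            or candidate_surt_host.startswith(base_surt_host + ","))

print(host_matches("com,example,www", "com,example"))  # True
print(host_matches("com,examplefoo", "com,example"))   # False (not a subdomain)
```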

SURT-based Matching

For advanced use cases, you can work directly with SURT strings:

from url_is_in import SURTMatcher

# Work with SURT strings directly
surts = [
    'com,example)/',
    'org,test)/specific/path',
    'com,github)/user/repo'
]

matcher = SURTMatcher(surts)

# Check SURT strings
print(matcher.is_in('com,example)/any/path'))  # True
print(matcher.is_in('org,test)/other'))        # False

Algorithm Selection

The package automatically selects a matching algorithm suited to the list size:

from url_is_in import URLMatcher

# Automatic selection (default)
matcher = URLMatcher(urls, mode="auto")  # Trie for >100 URLs, tuple for ≤100

# Manual selection
fast_matcher = URLMatcher(urls, mode="trie")    # Always use trie
simple_matcher = URLMatcher(urls, mode="tuple") # Always use tuple
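A trie keyed on the characters of each entry makes every lookup cost proportional to the query length rather than the list size, which is why it wins for large lists, while a plain tuple scan is cheaper to build for small ones. A toy character-trie prefix matcher illustrating the idea (an assumption-laden sketch, not url-is-in's actual implementation):

```python
class TrieMatcher:
    """Toy prefix trie: stores each entry, then tests whether any
    stored entry is a prefix of a query string."""
    _END = object()  # sentinel marking the end of a stored entry

    def __init__(self, entries):
        self.root = {}
        for entry in entries:
            node = self.root
            for ch in entry:
                node = node.setdefault(ch, {})
            node[self._END] = True

    def is_in(self, query: str) -> bool:
        node = self.root
        for ch in query:
            if self._END in node:   # a stored entry ends here: prefix match
                return True
            if ch not in node:
                return False
            node = node[ch]
        return self._END in node

m = TrieMatcher(["com,example)/", "org,test)/specific/path"])
print(m.is_in("com,example)/any/path"))  # True
print(m.is_in("org,test)/other"))        # False
```

For a handful of entries, building this structure costs more than it saves; a linear scan over a tuple of prefixes is simpler and fast enough, which is consistent with the `mode="auto"` threshold above.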

Setting up development environment

# Clone the repository
git clone https://github.com/commoncrawl/url-is-in.git
cd url-is-in

# Install with development dependencies
uv sync --extra dev

# Run tests
pytest

# Run linting
ruff check .
ruff format .

Running tests

# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-fail-under=95

Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.



Download files


Source Distribution

url_is_in-0.1.0.tar.gz (86.9 kB)


Built Distribution


url_is_in-0.1.0-py3-none-any.whl (10.3 kB)


File details

Details for the file url_is_in-0.1.0.tar.gz.

File metadata

  • Download URL: url_is_in-0.1.0.tar.gz
  • Upload date:
  • Size: 86.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for url_is_in-0.1.0.tar.gz:

  • SHA256: b234672921a0ec0708756f703c646f10e15ea2e04bad1bae79e635e2823e2ba9
  • MD5: 03a71c7aded6eb642db8d8f355a9b174
  • BLAKE2b-256: cfae34f73362ad47cd41725f0fe1f893c6a5b2c3aa6c63a35fd0c6355558f985


Provenance

The following attestation bundles were made for url_is_in-0.1.0.tar.gz:

Publisher: publish.yml on commoncrawl/url-is-in

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file url_is_in-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: url_is_in-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 10.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for url_is_in-0.1.0-py3-none-any.whl:

  • SHA256: 31e634bd1b2f21001d0e96be08925051357661db16ccfd1a2cf05d0f7e59f345
  • MD5: 386fa066379fbfb8e8937e64059cd4ad
  • BLAKE2b-256: f48675948f90fe60c827dcfc469e03862db60489e985cfdea4dc36de41a25ed7


Provenance

The following attestation bundles were made for url_is_in-0.1.0-py3-none-any.whl:

Publisher: publish.yml on commoncrawl/url-is-in

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
