
Web Scraping Tool for Swarmauri

Project description



Swarmauri Tool · Web Scraping

A Swarmauri-compatible scraper that fetches HTML with requests, parses it via BeautifulSoup, and extracts content with CSS selectors. Ideal for lightweight data collection, compliance checks, or enriching agent answers with live webpage snippets.

  • Accepts any valid URL and CSS selector; returns joined text content from the matching nodes.
  • Handles HTTP/network failures gracefully by surfacing structured error messages.
  • Integrates with Swarmauri agents so scraping can be triggered through natural-language prompts.

Requirements

  • Python 3.10 – 3.13.
  • requests and beautifulsoup4 (installed automatically with the package).
  • Respect site terms of service, robots.txt directives, and rate limits when scraping.
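
As a concrete illustration of the robots.txt point above, Python's standard-library urllib.robotparser can check whether a given path may be fetched before you scrape it. This is a minimal sketch; the robots.txt content below is invented for the example, and in practice you would call rp.set_url(...) and rp.read() to fetch the live file from the target site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, supplied inline for the example.
rules = """
User-agent: *
Disallow: /private/
""".strip().splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/public/page"))   # True
print(rp.can_fetch("*", "https://example.com/private/data"))  # False
```

Running this check (and honoring any Crawl-delay directive) before each scrape keeps the tool within a site's stated policy.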

Installation

Use your preferred packaging workflow; each of the commands below installs the dependencies listed above.

pip

pip install swarmauri_tool_webscraping

Poetry

poetry add swarmauri_tool_webscraping

uv

# Add to the current project and update uv.lock
uv add swarmauri_tool_webscraping

# or install into the active environment without editing pyproject.toml
uv pip install swarmauri_tool_webscraping

Tip: In containerized or restricted environments, ensure outbound HTTPS traffic is permitted; requests needs network access to reach target sites.

Quick Start

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
result = scraper(url="https://example.com", selector="h1")

if "extracted_text" in result:
    print(result["extracted_text"])
else:
    print(result["error"])

The extracted_text value joins the text of each matching element with newlines. When no elements match the selector, the tool returns an empty string.
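
To make that return shape concrete, here is a rough standard-library sketch of the same behavior: collect the text of every matching tag and join the pieces with newlines. The dict key mirrors the one shown above; the tag-based matching is a simplification for illustration, not the tool's actual CSS-selector implementation.

```python
from html.parser import HTMLParser

class TagTextCollector(HTMLParser):
    """Collects the text content of every occurrence of one tag name."""

    def __init__(self, tag):
        super().__init__()
        self.tag = tag
        self.depth = 0
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == self.tag:
            self.depth += 1
            self.texts.append("")

    def handle_endtag(self, tag):
        if tag == self.tag and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.texts[-1] += data

html = "<h2>First headline</h2><p>body</p><h2>Second headline</h2>"
collector = TagTextCollector("h2")
collector.feed(html)

# Mirrors the tool's result shape: matches joined by newlines,
# an empty string when nothing matches.
result = {"extracted_text": "\n".join(collector.texts)}
print(result["extracted_text"])
```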

Usage Scenarios

Monitor Site Copy for Compliance

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
result = scraper(
    url="https://status.vendor.com",
    selector=".uptime-banner"
)

if "error" in result:
    raise RuntimeError(result["error"])

if "maintenance" in result["extracted_text"].lower():
    print("Maintenance notice detected – alert the ops team.")
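
Because this check raises on any error key, transient network hiccups would trip it. A thin retry wrapper with exponential backoff can smooth those over; the sketch below is illustrative (the scrape_with_retries helper and the fake scraper are not part of the package; in real use the callable would wrap a WebScrapingTool call).

```python
import time

def scrape_with_retries(scrape, attempts=3, base_delay=1.0):
    """Call scrape() until it returns a result without an 'error' key,
    sleeping base_delay * 2**n between failed attempts."""
    last = {}
    for n in range(attempts):
        last = scrape()
        if "error" not in last:
            return last
        if n < attempts - 1:
            time.sleep(base_delay * 2 ** n)
    return last

# Fake scraper that fails twice, then succeeds (stands in for the tool).
calls = {"n": 0}
def fake_scrape():
    calls["n"] += 1
    if calls["n"] < 3:
        return {"error": "Request error: timeout"}
    return {"extracted_text": "All systems operational"}

print(scrape_with_retries(fake_scrape, base_delay=0.01))
```

Only retry on errors you believe are transient; retrying an HTTP 403 or 404 just delays the inevitable.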

Inject Live Data Into a Swarmauri Agent Response

from swarmauri_core.agent.Agent import Agent
from swarmauri_core.messages.HumanMessage import HumanMessage
from swarmauri_standard.tools.registry import ToolRegistry
from swarmauri_tool_webscraping import WebScrapingTool

registry = ToolRegistry()
registry.register(WebScrapingTool())
agent = Agent(tool_registry=registry)

message = HumanMessage(content="Check the headline on https://example.com")
response = agent.run(message)
print(response)

Batch Collect Headlines From Multiple Pages

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
urls = [
    "https://news.example.com/tech",
    "https://news.example.com/business",
]

for url in urls:
    result = scraper(url=url, selector="h2.article-title")
    print(url)
    print(result.get("extracted_text", result.get("error")))
    print("---")
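
When looping over many pages on the same host, it is polite (and often required by the site's terms) to space out requests. A minimal throttling helper, sketched with the standard library only (the Throttle class is illustrative, not part of the package):

```python
import time

class Throttle:
    """Enforces a minimum interval between successive calls,
    e.g. between requests to the same host."""

    def __init__(self, min_interval):
        self.min_interval = min_interval
        self.last = None

    def wait(self):
        now = time.monotonic()
        if self.last is not None:
            remaining = self.min_interval - (now - self.last)
            if remaining > 0:
                time.sleep(remaining)
        self.last = time.monotonic()

throttle = Throttle(min_interval=0.05)
start = time.monotonic()
for _ in range(3):
    throttle.wait()  # in the loop above: call this before each scraper(...)
elapsed = time.monotonic() - start
print(elapsed >= 0.10)  # at least two full intervals elapsed
```

Calling throttle.wait() at the top of the batch loop guarantees at least min_interval seconds between scrapes without slowing the first request.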

Troubleshooting

  • Request error – Network failures, DNS issues, and HTTP 4xx/5xx responses all surface as Request error messages. Verify connectivity, and supply any headers or authentication the site requires.
  • Empty extracted_text – The selector may not match any nodes. Use browser dev tools to confirm the CSS selector or adjust the parser to target the correct element.
  • SSL certificate problems – Update the CA certificates on the host first. Passing verify=False (by forking or extending the tool) disables certificate validation entirely, so do that only when you fully trust the target.

License

swarmauri_tool_webscraping is released under the Apache 2.0 License. See LICENSE for full details.

Project details


Release history Release notifications | RSS feed

Download files


Source Distribution

swarmauri_tool_webscraping-0.10.0.dev2.tar.gz (8.1 kB)


Built Distribution

swarmauri_tool_webscraping-0.10.0.dev2-py3-none-any.whl (9.2 kB)

File details

Details for the file swarmauri_tool_webscraping-0.10.0.dev2.tar.gz.

File metadata

  • Download URL: swarmauri_tool_webscraping-0.10.0.dev2.tar.gz
  • Size: 8.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.12

File hashes

Hashes for swarmauri_tool_webscraping-0.10.0.dev2.tar.gz:

  • SHA256: 42c17c4ec10444fc579f50553543cc9aaea8af6cc7c511a1f08888912144fe2b
  • MD5: 3eb448cfcab838a8cbd419729dfe072c
  • BLAKE2b-256: 7c3dccc0cd1be6ad21ccb5b93485f64f3366e227e8c8ffb607d356b2270530cf


File details

Details for the file swarmauri_tool_webscraping-0.10.0.dev2-py3-none-any.whl.

File metadata

  • Download URL: swarmauri_tool_webscraping-0.10.0.dev2-py3-none-any.whl
  • Size: 9.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.10.12

File hashes

Hashes for swarmauri_tool_webscraping-0.10.0.dev2-py3-none-any.whl:

  • SHA256: b0a72b5a87c9b8b8f18be1c15e2b4cc4c17447d1bf7e966f852d9c327da46279
  • MD5: dd0ee6b92867040a52f8a4f5baf517a4
  • BLAKE2b-256: 807bd74afd77ffea9719b106fe6d88f84921898d8fb8c227edf64ba5b3c43e8f
