Web Scraping Tool for Swarmauri



Swarmauri Tool · Web Scraping

A Swarmauri-compatible scraper that fetches HTML with requests, parses it via BeautifulSoup, and extracts content with CSS selectors. Ideal for lightweight data collection, compliance checks, or enriching agent answers with live webpage snippets.

  • Accepts any valid URL and CSS selector; returns joined text content from the matching nodes.
  • Handles HTTP/network failures gracefully by surfacing structured error messages.
  • Integrates with Swarmauri agents so scraping can be triggered through natural-language prompts.

Requirements

  • Python 3.10 – 3.13.
  • requests and beautifulsoup4 (installed automatically with the package).
  • Respect site terms of service, robots.txt directives, and rate limits when scraping.
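A robots.txt check can be sketched with the standard library's urllib.robotparser before pointing the tool at a site. The rules and the "my-bot" user-agent string below are illustrative; in practice you would fetch the live file from the target host:

```python
from urllib import robotparser

# Illustrative robots.txt rules; in real use, load them from the site
# (e.g. rp.set_url("https://example.com/robots.txt") followed by rp.read()).
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("my-bot", "https://example.com/private/report"))  # False
print(rp.can_fetch("my-bot", "https://example.com/public/report"))   # True
```

Running this check before each scrape keeps the tool within the site's published crawling policy.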

Installation

Use your preferred packaging workflow; each command installs the package together with the dependencies listed above.

pip

pip install swarmauri_tool_webscraping

Poetry

poetry add swarmauri_tool_webscraping

uv

# Add to the current project and update uv.lock
uv add swarmauri_tool_webscraping

# or install into the active environment without editing pyproject.toml
uv pip install swarmauri_tool_webscraping

Tip: In containerized or restricted environments, make sure outbound HTTPS traffic is permitted; requests needs network access to reach target sites.

Quick Start

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
result = scraper(url="https://example.com", selector="h1")

if "extracted_text" in result:
    print(result["extracted_text"])
else:
    print(result["error"])

extracted_text joins the text of each matching node with newlines. When no elements match the selector, the tool returns an empty string.
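The joining behavior described above can be illustrated with BeautifulSoup directly; the HTML snippet here is made up, standing in for a fetched page:

```python
from bs4 import BeautifulSoup

# Stand-in for fetched HTML; the tool's described behavior is the same
# select-and-join pattern over the nodes matched by the CSS selector.
html = "<main><h1>Example Domain</h1><h1>Second Heading</h1></main>"
soup = BeautifulSoup(html, "html.parser")

extracted_text = "\n".join(el.get_text(strip=True) for el in soup.select("h1"))
print(extracted_text)
# Example Domain
# Second Heading

print(soup.select("h1.missing"))  # [] -> an empty extracted_text
```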

Usage Scenarios

Monitor Site Copy for Compliance

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
result = scraper(
    url="https://status.vendor.com",
    selector=".uptime-banner"
)

if "error" in result:
    raise RuntimeError(result["error"])

if "maintenance" in result["extracted_text"].lower():
    print("Maintenance notice detected – alert the ops team.")

Inject Live Data Into a Swarmauri Agent Response

from swarmauri_core.agent.Agent import Agent
from swarmauri_core.messages.HumanMessage import HumanMessage
from swarmauri_standard.tools.registry import ToolRegistry
from swarmauri_tool_webscraping import WebScrapingTool

registry = ToolRegistry()
registry.register(WebScrapingTool())
agent = Agent(tool_registry=registry)

message = HumanMessage(content="Check the headline on https://example.com")
response = agent.run(message)
print(response)

Batch Collect Headlines From Multiple Pages

from swarmauri_tool_webscraping import WebScrapingTool

scraper = WebScrapingTool()
urls = [
    "https://news.example.com/tech",
    "https://news.example.com/business",
]

for url in urls:
    result = scraper(url=url, selector="h2.article-title")
    print(url)
    print(result.get("extracted_text", result.get("error")))
    print("---")

Troubleshooting

  • Request error – Network failures, DNS issues, and HTTP 4xx/5xx responses surface as Request error messages. Verify connectivity and supply any headers or authentication the site requires.
  • Empty extracted_text – The selector may not match any nodes. Use browser dev tools to confirm the CSS selector targets the intended elements, then adjust it as needed.
  • SSL certificate problems – Disable verification (for example, by passing verify=False in a fork or extension of the tool) only when you fully trust the target; otherwise update the CA certificates on the host.
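For the empty-result case, the selector can be sanity-checked offline against saved HTML before re-running the tool. A minimal sketch with BeautifulSoup; the markup and the candidate selectors are made up:

```python
from bs4 import BeautifulSoup

# Snippet saved from the target page (e.g. via "View Source" in a browser).
html = '<div class="uptime-banner">All systems operational</div>'
soup = BeautifulSoup(html, "html.parser")

for candidate in (".uptime-banner", ".status-banner"):
    count = len(soup.select(candidate))
    print(f"{candidate}: {count} match(es)")
# .uptime-banner: 1 match(es)
# .status-banner: 0 match(es)
```

A selector that reports zero matches here will also produce an empty extracted_text from the tool.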

License

swarmauri_tool_webscraping is released under the Apache 2.0 License. See LICENSE for full details.
