
Library for extracting structured data from websites using ScrapeGraphAI

Project description

🕷️🦜 langchain-scrapegraph


Supercharge your LangChain agents with AI-powered web scraping capabilities. LangChain-ScrapeGraph provides a seamless integration between LangChain and ScrapeGraph AI, enabling your agents to extract structured data from websites using natural language.

🔗 ScrapeGraph API & SDKs

If you are looking for a quick way to integrate ScrapeGraph into your system, check out our powerful API.

We offer SDKs in both Python and Node.js, making it easy to integrate into your projects. Check them out below:

SDK Language GitHub Link
Python SDK Python scrapegraph-py
Node.js SDK Node.js scrapegraph-js

📦 Installation

pip install langchain-scrapegraph

🛠️ Available Tools

📝 MarkdownifyTool

Convert any webpage into clean, formatted markdown.

from langchain_scrapegraph.tools import MarkdownifyTool

tool = MarkdownifyTool()
markdown = tool.invoke({"website_url": "https://example.com"})

print(markdown)
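Since the tool returns a plain Markdown string, it can be persisted or handed to any downstream text-processing step. A minimal sketch using a stand-in string (no API call is made here):

```python
from pathlib import Path

# Stand-in for the string returned by MarkdownifyTool.invoke(...)
markdown = "# Example Domain\n\nThis domain is for use in illustrative examples."

# Persist the scraped page for later processing
out = Path("example.md")
out.write_text(markdown, encoding="utf-8")

first_heading = out.read_text(encoding="utf-8").splitlines()[0]
print(first_heading)  # -> "# Example Domain"
```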

🔍 SmartScraperTool

Extract structured data from any webpage using natural language prompts.

from langchain_scrapegraph.tools import SmartScraperTool

# Initialize the tool (uses SGAI_API_KEY from environment)
tool = SmartScraperTool()

# Extract information using natural language
result = tool.invoke({
    "website_url": "https://www.example.com",
    "user_prompt": "Extract the main heading and first paragraph"
})

print(result)

🌐 SearchScraperTool

Search and extract structured information from the web using natural language prompts.

from langchain_scrapegraph.tools import SearchScraperTool

# Initialize the tool (uses SGAI_API_KEY from environment)
tool = SearchScraperTool()

# Search and extract information using natural language
result = tool.invoke({
    "user_prompt": "What are the key features and pricing of ChatGPT Plus?"
})

print(result)
# {
#     "product": {
#         "name": "ChatGPT Plus",
#         "description": "Premium version of ChatGPT..."
#     },
#     "features": [...],
#     "pricing": {...},
#     "reference_urls": [
#         "https://openai.com/chatgpt",
#         ...
#     ]
# }
🔍 Using Output Schemas with SearchScraperTool

You can define the structure of the output using Pydantic models:

from typing import Any, Dict, List
from pydantic import BaseModel, Field
from langchain_scrapegraph.tools import SearchScraperTool

class ProductInfo(BaseModel):
    name: str = Field(description="Product name")
    features: List[str] = Field(description="List of product features")
    pricing: Dict[str, Any] = Field(description="Pricing information")
    reference_urls: List[str] = Field(description="Source URLs for the information")

# Initialize with schema
tool = SearchScraperTool(llm_output_schema=ProductInfo)

# The output will conform to the ProductInfo schema
result = tool.invoke({
    "user_prompt": "What are the key features and pricing of ChatGPT Plus?"
})

print(result)
# {
#     "name": "ChatGPT Plus",
#     "features": [
#         "GPT-4 access",
#         "Faster response speed",
#         ...
#     ],
#     "pricing": {
#         "amount": 20,
#         "currency": "USD",
#         "period": "monthly"
#     },
#     "reference_urls": [
#         "https://openai.com/chatgpt",
#         ...
#     ]
# }
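Because `ProductInfo` is an ordinary Pydantic model, you can also validate a raw result dict against it yourself; the sample data below is made up for illustration:

```python
from typing import Any, Dict, List
from pydantic import BaseModel, Field

# Mirrors the schema defined above
class ProductInfo(BaseModel):
    name: str = Field(description="Product name")
    features: List[str] = Field(description="List of product features")
    pricing: Dict[str, Any] = Field(description="Pricing information")
    reference_urls: List[str] = Field(description="Source URLs for the information")

# Hypothetical raw result, e.g. from SearchScraperTool used without a schema
raw = {
    "name": "ChatGPT Plus",
    "features": ["GPT-4 access", "Faster response speed"],
    "pricing": {"amount": 20, "currency": "USD", "period": "monthly"},
    "reference_urls": ["https://openai.com/chatgpt"],
}

info = ProductInfo.model_validate(raw)  # raises ValidationError on bad data
print(info.pricing["amount"])  # -> 20
```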

🌟 Key Features

  • 🐦 LangChain Integration: Seamlessly works with LangChain agents and chains
  • 🔍 AI-Powered Extraction: Use natural language to describe what data to extract
  • 📊 Structured Output: Get clean, structured data ready for your agents
  • 🔄 Flexible Tools: Choose from multiple specialized scraping tools
  • ⚡ Async Support: Built-in support for async operations
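The async support mentioned above follows LangChain's standard tool interface, where `ainvoke` is the asynchronous counterpart of `invoke`. The sketch below uses a stub coroutine in place of a real tool call so it runs without an API key:

```python
import asyncio

# Stand-in for tool.ainvoke({...}) -- a stub is used here so the sketch
# runs without a network round trip or an SGAI_API_KEY
async def scrape(url: str) -> str:
    await asyncio.sleep(0)  # placeholder for the API call
    return f"markdown for {url}"

async def main() -> list:
    urls = ["https://example.com", "https://example.org"]
    # Fire the scrapes concurrently instead of one after another
    return await asyncio.gather(*(scrape(u) for u in urls))

results = asyncio.run(main())
print(results)
```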

💡 Use Cases

  • 📖 Research Agents: Create agents that gather and analyze web data
  • 📊 Data Collection: Automate structured data extraction from websites
  • 📝 Content Processing: Convert web content into markdown for further processing
  • 🔍 Information Extraction: Extract specific data points using natural language

🤖 Example Agent

from langchain.agents import initialize_agent, AgentType
from langchain_scrapegraph.tools import SmartScraperTool
from langchain_openai import ChatOpenAI

# Initialize tools
tools = [
    SmartScraperTool(),
]

# Create an agent
agent = initialize_agent(
    tools=tools,
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Use the agent
response = agent.run("""
    Visit example.com, summarize the content, and extract the main heading and first paragraph
""")

⚙️ Configuration

Set your ScrapeGraph API key in your environment:

export SGAI_API_KEY="your-api-key-here"

Or set it programmatically:

import os
os.environ["SGAI_API_KEY"] = "your-api-key-here"
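A small guard at startup can make a missing key fail fast instead of surfacing later as an opaque API error. The helper name below is our own, not part of the library:

```python
import os

def require_sgai_key() -> str:
    """Return the ScrapeGraph API key, or raise early if it is not set."""
    key = os.environ.get("SGAI_API_KEY")
    if not key:
        raise RuntimeError(
            "SGAI_API_KEY is not set; export it or set os.environ before "
            "constructing any langchain-scrapegraph tools."
        )
    return key

os.environ["SGAI_API_KEY"] = "your-api-key-here"  # demo value only
print(require_sgai_key())
```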

📚 Documentation

💬 Support & Feedback

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

This project is built on top of:


Made with ❤️ by ScrapeGraph AI

Project details


Download files

Download the file for your platform.

Source Distribution

langchain_scrapegraph-1.6.0.tar.gz (13.1 kB)

Uploaded Source

Built Distribution


langchain_scrapegraph-1.6.0-py3-none-any.whl (20.2 kB)

Uploaded Python 3

File details

Details for the file langchain_scrapegraph-1.6.0.tar.gz.

File metadata

  • Download URL: langchain_scrapegraph-1.6.0.tar.gz
  • Size: 13.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.18

File hashes

Hashes for langchain_scrapegraph-1.6.0.tar.gz
Algorithm Hash digest
SHA256 9d82106226fcaf8efd96753a095682d0c6a19dd6efafa28d7e6092edac886b84
MD5 f1c598d34b3fb57fd79646f5835672d5
BLAKE2b-256 5af3888b01e0c59bb0c1b89b5f306d08c466da13127f21b2d24d41586c966969


File details

Details for the file langchain_scrapegraph-1.6.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_scrapegraph-1.6.0-py3-none-any.whl
Algorithm Hash digest
SHA256 f3999f17280442c5a187fc5e2fff2f5406cea3b0da16cabf95594a93c9dbad3f
MD5 fc442d820a4fd1f2d4bd09963a788d4c
BLAKE2b-256 0e19dddf7a9ef9998cba46fcadc1a067bcbe050605f5ba9c66f302af78aed3a1

