
A web scraping library based on LangChain that uses LLMs and direct graph logic to create scraping pipelines.

Project description

🕷️ ScrapeGraphAI: You Only Scrape Once

English | 中文 | 日本語 | 한국어 | Русский


ScrapeGraphAI is a Python web scraping library that uses LLMs and direct graph logic to create scraping pipelines for websites and local documents (XML, HTML, JSON, Markdown, etc.).

Just say which information you want to extract and the library will do it for you!


🚀 Quick install

The reference page for ScrapeGraphAI is available on PyPI.

pip install scrapegraphai

Note: it is recommended to install the library in a virtual environment to avoid conflicts with other libraries 🐱

🔍 Demo

Official Streamlit demo:


Try it directly on the web using Google Colab:

Open In Colab

📖 Documentation

The documentation for ScrapeGraphAI can be found here.

Also check out the Docusaurus documentation here.

💻 Usage

There are multiple standard scraping pipelines that can be used to extract information from a website (or local file):

  • SmartScraperGraph: single-page scraper that only needs a user prompt and an input source;

  • SearchGraph: multi-page scraper that extracts information from the top n search results of a search engine;

  • SpeechGraph: single-page scraper that extracts information from a website and generates an audio file;

  • ScriptCreatorGraph: single-page scraper that extracts information from a website and generates a Python script;

  • SmartScraperMultiGraph: multi-page scraper that extracts information from multiple pages given a single prompt and a list of sources;

  • ScriptCreatorMultiGraph: multi-page scraper that generates a Python script for extracting information from multiple pages given a single prompt and a list of sources.

It is possible to use different LLMs through APIs, such as OpenAI, Groq, Azure, and Gemini, or local models using Ollama.

Case 1: SmartScraper using Local Models

Remember to have Ollama installed and to download the models you need with the ollama pull command.

from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 0,
        "format": "json",  # Ollama needs the format to be specified explicitly
        "base_url": "http://localhost:11434",  # set Ollama URL
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",  # set Ollama URL
    },
    "verbose": True,
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the projects with their descriptions",
    # also accepts a string with the already downloaded HTML code
    source="https://perinim.github.io/projects",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)

The output will be a list of projects with their descriptions like the following:

{'projects': [{'title': 'Rotary Pendulum RL', 'description': 'Open Source project aimed at controlling a real life rotary pendulum using RL algorithms'}, {'title': 'DQN Implementation from scratch', 'description': 'Developed a Deep Q-Network algorithm to train a simple and double pendulum'}, ...]}
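Since the graph returns a plain Python dict, the result can be post-processed directly. Here is a minimal sketch, assuming the output has the shape shown above (the key names come from the example, not from a guarantee by the library):

```python
# Post-process a SmartScraperGraph-style result dict.
# The structure below mirrors the example output; real runs may differ.
result = {
    "projects": [
        {"title": "Rotary Pendulum RL",
         "description": "Open Source project aimed at controlling a real life rotary pendulum using RL algorithms"},
        {"title": "DQN Implementation from scratch",
         "description": "Developed a Deep Q-Network algorithm to train a simple and double pendulum"},
    ]
}

# Extract just the titles for a quick overview; .get() keeps the code
# safe if the "projects" key happens to be missing.
titles = [project["title"] for project in result.get("projects", [])]
print(titles)
```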

Case 2: SearchGraph using Mixed Models

We use Groq for the LLM and Ollama for the embeddings.

from scrapegraphai.graphs import SearchGraph

# Define the configuration for the graph
graph_config = {
    "llm": {
        "model": "groq/gemma-7b-it",
        "api_key": "GROQ_API_KEY",
        "temperature": 0
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",  # set ollama URL arbitrarily
    },
    "max_results": 5,
}

# Create the SearchGraph instance
search_graph = SearchGraph(
    prompt="List me all the traditional recipes from Chioggia",
    config=graph_config
)

# Run the graph
result = search_graph.run()
print(result)

The output will be a list of recipes like the following:

{'recipes': [{'name': 'Sarde in Saòre'}, {'name': 'Bigoli in salsa'}, {'name': 'Seppie in umido'}, {'name': 'Moleche frite'}, {'name': 'Risotto alla pescatora'}, {'name': 'Broeto'}, {'name': 'Bibarasse in Cassopipa'}, {'name': 'Risi e bisi'}, {'name': 'Smegiassa Ciosota'}]}
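Because the result is a regular dict, it can be persisted with the standard json module. A short sketch, using a result in the shape shown above (a real run's keys may differ):

```python
import json

# Example result shaped like the SearchGraph output above.
result = {"recipes": [{"name": "Sarde in Saòre"}, {"name": "Bigoli in salsa"}]}

# Write the scraped data to disk as UTF-8 JSON; ensure_ascii=False keeps
# non-ASCII recipe names readable instead of escaping them.
with open("recipes.json", "w", encoding="utf-8") as f:
    json.dump(result, f, ensure_ascii=False, indent=2)

# Read it back to confirm a lossless round trip.
with open("recipes.json", encoding="utf-8") as f:
    loaded = json.load(f)
print(loaded["recipes"][0]["name"])
```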

Case 3: SpeechGraph using OpenAI

You just need to pass the OpenAI API key and the model name.

from scrapegraphai.graphs import SpeechGraph

graph_config = {
    "llm": {
        "api_key": "OPENAI_API_KEY",
        "model": "gpt-3.5-turbo",
    },
    "tts_model": {
        "api_key": "OPENAI_API_KEY",
        "model": "tts-1",
        "voice": "alloy"
    },
    "output_path": "audio_summary.mp3",
}

# ************************************************
# Create the SpeechGraph instance and run it
# ************************************************

speech_graph = SpeechGraph(
    prompt="Make a detailed audio summary of the projects.",
    source="https://perinim.github.io/projects/",
    config=graph_config,
)

result = speech_graph.run()
print(result)

The output will be an audio file with the summary of the projects on the page.
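Rather than hardcoding API keys in the config dicts as the snippets above do for brevity, it is safer to read them from the environment. A minimal sketch, assuming the key is stored in a variable named OPENAI_API_KEY (a common convention, not something the library requires):

```python
import os

# Read the key from the environment instead of embedding it in source code.
# The placeholder fallback only keeps this sketch runnable; in practice you
# would want a missing key to be an error.
api_key = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

graph_config = {
    "llm": {"api_key": api_key, "model": "gpt-3.5-turbo"},
    "tts_model": {"api_key": api_key, "model": "tts-1", "voice": "alloy"},
    "output_path": "audio_summary.mp3",
}
print(sorted(graph_config))
```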

Sponsors

🤝 Contributing

Feel free to contribute and join our Discord server to discuss improvements with us and give us suggestions!

Please see the contributing guidelines.


📈 Roadmap

We are working on the following features! If you are interested in collaborating, right-click on a feature and open it in a new tab to file a PR. If you have doubts and want to discuss them with us, just contact us on Discord or open a Discussion here on GitHub!

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#5C4B9B', 'edgeLabelBackground':'#ffffff', 'tertiaryColor': '#ffffff', 'primaryBorderColor': '#5C4B9B', 'fontFamily': 'Arial', 'fontSize': '16px', 'textColor': '#5C4B9B' }}}%%
graph LR
    A[DeepSearch Graph] --> F[Use Existing Chromium Instances]
    F --> B[Page Caching]
    B --> C[Screenshot Scraping]
    C --> D[Handle Dynamic Content]
    D --> E[New Webdrivers]

    style A fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10
    style F fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10
    style B fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10
    style C fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10
    style D fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10
    style E fill:#ffffff,stroke:#5C4B9B,stroke-width:2px,rx:10,ry:10

    click A href "https://github.com/VinciGit00/Scrapegraph-ai/issues/260" "Open DeepSearch Graph Issue"
    click F href "https://github.com/VinciGit00/Scrapegraph-ai/issues/329" "Open Chromium Instances Issue"
    click B href "https://github.com/VinciGit00/Scrapegraph-ai/issues/197" "Open Page Caching Issue"
    click C href "https://github.com/VinciGit00/Scrapegraph-ai/issues/197" "Open Screenshot Scraping Issue"
    click D href "https://github.com/VinciGit00/Scrapegraph-ai/issues/279" "Open Handle Dynamic Content Issue"
    click E href "https://github.com/VinciGit00/Scrapegraph-ai/issues/171" "Open New Webdrivers Issue"

❤️ Contributors

Contributors

🎓 Citations

If you have used our library for research purposes, please cite us with the following reference:

  @misc{scrapegraph-ai,
  author = {Marco Perini and Lorenzo Padoan and Marco Vinciguerra},
    title = {Scrapegraph-ai},
    year = {2024},
    url = {https://github.com/VinciGit00/Scrapegraph-ai},
    note = {A Python library for scraping leveraging large language models}
  }

Authors


Contact Info
Marco Vinciguerra (LinkedIn)
Marco Perini (LinkedIn)
Lorenzo Padoan (LinkedIn)

📜 License

ScrapeGraphAI is licensed under the MIT License. See the LICENSE file for more information.

Acknowledgements

  • We would like to thank all the contributors to the project and the open-source community for their support.
  • ScrapeGraphAI is meant to be used for data exploration and research purposes only. We are not responsible for any misuse of the library.

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

scrapegraphai-1.9.0b6.tar.gz (3.2 MB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

scrapegraphai-1.9.0b6-py3-none-any.whl (118.9 kB view details)

Uploaded Python 3

File details

Details for the file scrapegraphai-1.9.0b6.tar.gz.

File metadata

  • Download URL: scrapegraphai-1.9.0b6.tar.gz
  • Upload date:
  • Size: 3.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for scrapegraphai-1.9.0b6.tar.gz

  • SHA256: 45589bb534a21bd626ae3f8153a99de895fac42d6029ea9e5ac49754a596814b
  • MD5: 21dde85389fa9422c1916d27307073f8
  • BLAKE2b-256: 23e4bbb5ac57e5f37fa8f4cd7064a1265bdb5d4d9c7dcbe9d5610fc361404b9f

See more details on using hashes here.

File details

Details for the file scrapegraphai-1.9.0b6-py3-none-any.whl.

File metadata

  • Download URL: scrapegraphai-1.9.0b6-py3-none-any.whl
  • Upload date:
  • Size: 118.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for scrapegraphai-1.9.0b6-py3-none-any.whl

  • SHA256: f52096a98a4b096f646fe9763e4dca95955c05ecfaf3530429ba4d90323886da
  • MD5: 174bad1e0056381dd43542b0d148cf28
  • BLAKE2b-256: 6f0d51c0f81d02470c0c64dbc4c0fef20cd46559874ccc3c5f3c9e80319d10e7

See more details on using hashes here.
