A web crawler for GPTs to build knowledge bases

Project description

Introduction

GPT-Web-Crawler is a web crawler based on Python and Puppeteer. It crawls web pages and extracts their content, including each page's title, URL, keywords, description, full text, images, and a screenshot. It takes only a few lines of code to use, which makes it well suited to people who are unfamiliar with web crawling and want to extract content from web pages.

The spider's output is a JSON file, which can easily be converted to a CSV file, imported into a database, or used to build an AI agent.
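As a sketch of the JSON-to-CSV conversion mentioned above, the following uses only the Python standard library. The field names and file paths in the test are illustrative assumptions; the actual keys are whatever fields the spider writes to its JSON output.

```python
import csv
import json

def json_to_csv(json_path: str, csv_path: str) -> None:
    """Convert a spider's JSON output (a list of page records) to a CSV file."""
    with open(json_path, encoding="utf-8") as f:
        pages = json.load(f)
    if not pages:
        return  # nothing to write
    # Use the keys of the first record as the CSV header.
    fieldnames = list(pages[0].keys())
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(pages)
```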

Getting Started

Step1. Install the package.

pip install gpt-web-crawler

Step2. Copy config_template.py and rename it to config.py. Then edit config.py to set your OpenAI API key and other settings if you want ProSpider to use AI to extract content from web pages. If you don't need AI-assisted extraction, you can leave config.py unchanged.
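The authoritative field names are in config_template.py in the package; the sketch below is only an illustration of the kind of setting involved, and the key name shown is an assumption, not the template's verbatim contents.

```python
# config.py -- illustrative sketch only; copy config_template.py from the
# package for the real field names. The key below is an assumed placeholder.
OPENAI_API_KEY = "sk-..."  # only needed if ProSpider's AI extraction is used
```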

Step3. Run the following code to start a spider.

from gpt_web_crawler import run_spider, NoobSpider

run_spider(NoobSpider,
           max_page_count=10,
           start_urls="https://www.jiecang.cn/",
           output_file="test_pakages.json",
           extract_rules=r'.*\.html')
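The extract_rules argument above looks like a URL filter. Assuming it is an ordinary Python regular expression (an assumption, not confirmed by this page), this is how a pattern like r'.*\.html' would select crawled URLs ending in .html:

```python
import re

# Assuming extract_rules behaves like a standard Python regex:
pattern = re.compile(r".*\.html")

urls = [
    "https://www.jiecang.cn/product/123.html",
    "https://www.jiecang.cn/img/logo.png",
]
# Keep only URLs the pattern fully matches.
matched = [u for u in urls if pattern.fullmatch(u)]
```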

Spiders

| Spider Type | Description |
| --- | --- |
| NoobSpider | Basic web page scraping |
| CatSpider | Web page scraping with screenshots |
| ProSpider | Web page scraping with AI-extracted content |
| LionSpider | Web page scraping with all images extracted |

Cat Spider

The Cat spider takes screenshots of web pages. It is based on the Noob spider and uses Puppeteer to simulate browser operations, capturing a screenshot of the entire page and saving it as an image. To use the Cat spider, you therefore need to install Puppeteer first.

npm install puppeteer

TODO

  • Support running without configuring config.py

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gpt-web-crawler-0.0.2.tar.gz (16.1 kB)

Uploaded Source

Built Distribution

gpt_web_crawler-0.0.2-py3-none-any.whl (21.8 kB)

Uploaded Python 3
