
A Python utility for downloading Common Crawl data.

Project description

comcrawl


comcrawl is a Python package for easily querying and downloading pages from commoncrawl.org.

Introduction

I was inspired to make comcrawl by reading this article.

Note: I made this for personal projects and for fun. This package is therefore intended for small to medium projects, because it is not optimized for handling gigabytes or terabytes of data. In such cases you might want to check out cdx-toolkit or cdx-index-client.

What is Common Crawl?

The Common Crawl project is an "open repository of web crawl data that can be accessed and analyzed by anyone". It contains billions of web pages and is often used for NLP projects to gather large amounts of text data.

Common Crawl provides a search index, which you can use to search for certain URLs in their crawled data. Each search result contains a link and a byte offset pointing to a specific location in their AWS S3 buckets, from which the page can be downloaded.
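As a rough, minimal sketch of what this looks like under the hood (assuming the public CDX index endpoint and the data.commoncrawl.org download host; index name and field names are illustrative), a single lookup and page fetch could be done by hand roughly like this:

import gzip
import json
import requests

# query one index for a URL pattern; each response line is a JSON record
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2019-51-index",
    params={"url": "reddit.com/r/MachineLearning/*", "output": "json"},
)
record = json.loads(resp.text.splitlines()[0])

# fetch only the bytes of this record from the WARC file via an HTTP range request
start = int(record["offset"])
end = start + int(record["length"]) - 1
warc = requests.get(
    "https://data.commoncrawl.org/" + record["filename"],
    headers={"Range": f"bytes={start}-{end}"},
)

# the returned slice is a gzipped WARC record containing the HTTP headers and HTML
print(gzip.decompress(warc.content).decode("utf-8", errors="replace")[:500])

comcrawl handles roughly this kind of index lookup and range request for you, so you don't have to deal with offsets and WARC records yourself.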

What does comcrawl offer?

comcrawl simplifies this process of searching and downloading from Common Crawl by offering a simple API you can use in your Python program.

Installation

comcrawl is available on PyPI.

Install it via pip by running the following command from your terminal:

pip install comcrawl

Usage

Basic

After calling the download method, the HTML for each page is available as a string under the 'html' key of each result dictionary.

from comcrawl import IndexClient

client = IndexClient()

client.search("reddit.com/r/MachineLearning/*")
client.download()

first_page_html = client.results[0]["html"]

Multithreading

You can leverage multithreading while searching or downloading by specifying the number of threads you want to use.

Please don't overdo this, so you don't put too much stress on the Common Crawl servers (have a look at the Code of Conduct section).

from comcrawl import IndexClient

client = IndexClient()

client.search("reddit.com/r/MachineLearning/*", threads=4)
client.download(threads=4)

Removing duplicates & Saving

You can easily combine this package with the pandas library to filter out duplicate results and persist them to disk:

from comcrawl import IndexClient
import pandas as pd

client = IndexClient()
client.search("reddit.com/r/MachineLearning/*")

client.results = (pd.DataFrame(client.results)
                  .sort_values(by="timestamp")
                  .drop_duplicates("urlkey", keep="last")
                  .to_dict("records"))

client.download()

pd.DataFrame(client.results).to_csv("results.csv")

The urlkey alone might not be sufficient here, so you might want to write a function that computes a custom id from the results' properties for removing duplicates, as sketched below.
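For example, a hedged sketch of such a custom id could combine the url and digest fields (this assumes the raw index record fields are present in each result dictionary; the combination is just one possible choice):

import pandas as pd

df = pd.DataFrame(client.results)

# hypothetical custom id: treat results with the same URL and content digest as duplicates
df["custom_id"] = df["url"] + "-" + df["digest"]

client.results = (df
                  .sort_values(by="timestamp")
                  .drop_duplicates("custom_id", keep="last")
                  .to_dict("records"))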

Searching subsets of Indexes

By default, when instantiated, the IndexClient fetches a list of the currently available Common Crawl indexes to search. You can also restrict the search to certain Common Crawl indexes by specifying them as a list.

from comcrawl import IndexClient

client = IndexClient(["2019-51", "2019-47"])
client.search("reddit.com/r/MachineLearning/*")
client.download()

Logging HTTP requests

When debugging your code, you can enable logging of all HTTP requests that are made.

from comcrawl import IndexClient

client = IndexClient(verbose=True)
client.search("reddit.com/r/MachineLearning/*")
client.download()

Code of Conduct

When accessing Common Crawl, please be mindful of these guidelines posted by one of the Common Crawl maintainers:

https://groups.google.com/forum/#!msg/common-crawl/3QmQjFA_3y4/vTbhGqIBBQAJ

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

comcrawl-1.0.2.tar.gz (8.9 kB)

Uploaded Source

Built Distribution

comcrawl-1.0.2-py3-none-any.whl (10.0 kB)

Uploaded Python 3

File details

Details for the file comcrawl-1.0.2.tar.gz.

File metadata

  • Download URL: comcrawl-1.0.2.tar.gz
  • Upload date:
  • Size: 8.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.3 Linux/5.3.0-1032-azure

File hashes

Hashes for comcrawl-1.0.2.tar.gz

  • SHA256: a542db1b7cc05f65bfcef012d0dbf838331aebd7876fb9764734da788d608433
  • MD5: a67d1d3efc74ddfb7a6d20982b556e43
  • BLAKE2b-256: 7c460c519595db0a5e217ab43b0755f7d8d3be305e0da98caee31df0454d20b5


File details

Details for the file comcrawl-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: comcrawl-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 10.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.3 Linux/5.3.0-1032-azure

File hashes

Hashes for comcrawl-1.0.2-py3-none-any.whl

  • SHA256: 2a9d299f88b8deb877ede94fa67cc8ecf300d1206fd225488037f8b70cd28ca8
  • MD5: ae35b6a43b11802b887c87d056ce7fc0
  • BLAKE2b-256: df1911fac3419c0da637abc9e99b2f18953813f7d77c9e5916e93d7a5aba3845

