
A basic but fast, persistent and thread-safe caching system

Project description

This package lets you efficiently retrieve web pages by caching the results of your requests.

Basic commands

First, import the required module:

from webscrapetools import urlcaching

Initializing the cache:

urlcaching.set_cache_path('.wst_cache')

The optional expiry_days parameter sets the cache expiry period; the default is 10 days.
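For example, to expire cached entries after three days (the parameter name comes from the note above; the value is illustrative):

urlcaching.set_cache_path('.wst_cache', expiry_days=3)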

This step is required: without it, responses to URL calls are simply not cached. Cache data is stored in the specified folder, so re-using the same path makes the cache persistent across runs. The folder is created on the fly if it does not exist. The following command empties the cache, making sure we start with no prior data:

urlcaching.empty_cache()

Opening a URL with the following command stores the response content behind the scenes, so that subsequent calls for the same URL will not hit the network.

urlcaching.open_url('http://www.google.com')
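
The call also returns the page content, so a cached fetch can stand in for a plain one (a minimal sketch; that open_url returns the response body is inferred from the custom-client example below, where the call function returns the content):

content = urlcaching.open_url('http://www.google.com')
print(content[:100])  # first 100 characters of the response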

Full example

from webscrapetools import urlcaching
import time

# Initializing the cache
urlcaching.set_cache_path('.wst_cache')

# Making sure we start from scratch
urlcaching.empty_cache()

# Demo with 5 identical calls... only the first one is delayed, all the
# others hit the cache (deelay.me adds a 5-second delay to the target URL)
for count_calls in range(1, 6):
    start_time = time.time()
    urlcaching.open_url('http://deelay.me/5000/http://www.google.com')
    duration = time.time() - start_time
    print('duration for call {}: {:0.2f}'.format(count_calls, duration))

# Cleaning up
urlcaching.empty_cache()

The code above outputs something like this (only the first call pays the 5-second delay; the rest are served from the cache):

duration for call 1: 6.74
duration for call 2: 0.00
duration for call 3: 0.00
duration for call 4: 0.00
duration for call 5: 0.00

Example plugging in a custom client

The framework lets you customize how the web is accessed, which makes it possible, for example, to drive a browser via Selenium.

from webscrapetools import urlcaching

# max_node_files and rebalancing_limit control how cached entries are
# distributed across the cache folder tree
urlcaching.set_cache_path('./output/tests', max_node_files=10, rebalancing_limit=100)

# the init function creates the client (here, none is needed)
def dummy_client():
    return None

# the call function receives the client and the key, and returns
# a (content, key) tuple
def dummy_call(_, key):
    return '{:d}'.format(int(key)) * int(key), key

# simulating calls using the dummy client
keys = ('{:05d}'.format(count) for count in range(500))
for key in keys:
    urlcaching.open_url(key, init_client_func=dummy_client, call_client_func=dummy_call)

urlcaching.empty_cache()
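
Building on the same hooks, a browser-based client might look like the sketch below. This is an assumption for illustration, not part of the package: it presumes selenium is installed and follows the init_client_func / call_client_func contract inferred from the dummy example, where the call function receives the client and the key (here a URL) and returns a (content, key) tuple.

from webscrapetools import urlcaching
from selenium import webdriver

urlcaching.set_cache_path('.wst_cache')

def browser_client():
    # returns the client instance that will be passed to the call function
    return webdriver.Firefox()

def browser_call(browser, url):
    # fetch the page with the browser, return its content together with the key
    browser.get(url)
    return browser.page_source, url

urlcaching.open_url('http://www.google.com',
                    init_client_func=browser_client,
                    call_client_func=browser_call)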

