
Never scrape the html off of a site twice, ever again.


# Librarian

The goal of this package is to be something like training wheels for web scraping.

A good example is recursively trying to visit all of the links on a site such as:

http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2

If you mess up your scrape and have taken no preemptive measures, you lose all of the work you've done so far. `Librarian` saves that html for later, so you never have to redo a scrape you've already done. That lets you be much nicer to the servers you're requesting from, saves you time, and makes the whole scraping experience much smoother.

Let's outline an example:

Take a look at the site's html via Inspect Element; you will see that all of the names and links are under `<div class="divhead">`, and all of the blurbs are under `<div class="divline">`. Now, I would probably do this:

```python3
from urllib.request import urlopen
from bs4 import BeautifulSoup

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

# Fetch the raw html and parse it.
resp = urlopen(alink)
html = resp.read()
soup = BeautifulSoup(html, 'lxml')

# Print the text of every divhead entry to eyeball the selector.
for elem in soup.select('div.divhead'):
    print(elem.get_text())
```

I'd run that to check that I was correct and nothing above the entries matched the same css selector, and to see whether I had to change my headers using `urllib.request.Request`, or something else.
Then I'd check the same for `div.divline`, for the first reason above. Then, for each of the linked sites, I'd have to recursively visit them and grab their html.
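For concreteness, the manual version of that recursive step might look something like the sketch below (the `div.divhead a` selector and the single level of depth are assumptions for this particular page):

```python3
from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'
soup = BeautifulSoup(urlopen(alink).read(), 'lxml')

# Visit every link under a divhead entry, one raw request per page.
# If this loop dies halfway through, everything fetched so far is lost.
pages = {}
for anchor in soup.select('div.divhead a'):
    href = anchor.get('href')
    if href:
        full = urljoin(alink, href)
        pages[full] = urlopen(full).read()
```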

If I screw anything up, or if some pages have different html, then it will take a long time to get back to where I was, making the process of scraping a site more painful than it has to be.

If we use `Librarian`, we can instead do this:

```python3
from bs4 import BeautifulSoup
from librarian import Librarian

lib = Librarian()

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

# First call fetches over the network and stores the html for later.
html = lib.get(alink)
soup = BeautifulSoup(html, 'lxml')
```
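The recursive step from before becomes the same loop with `lib.get` swapped in, so every page is stored as soon as it is fetched (again a sketch; the selector is an assumption):

```python3
from urllib.parse import urljoin
from bs4 import BeautifulSoup
from librarian import Librarian

lib = Librarian()
alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'
soup = BeautifulSoup(lib.get(alink), 'lxml')

pages = {}
for anchor in soup.select('div.divhead a'):
    href = anchor.get('href')
    if href:
        full = urljoin(alink, href)
        # Each page is stored on first fetch, so a crash here
        # costs nothing that has already been downloaded.
        pages[full] = lib.get(full)
```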

Now, if I ever ask for that same website again (provided that `htmlLibrary` and the pickled files under `librarianTools` haven't been tampered with), the `Librarian` will find it and pull it out of your `htmlLibrary` so you can use it instantly.
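A quick way to see the cache at work is to time two consecutive calls (a sketch; the second call should come back near-instantly from disk):

```python3
import time
from librarian import Librarian

lib = Librarian()
alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

t0 = time.perf_counter()
lib.get(alink)                                  # over the network on a cold cache
print('first get: ', time.perf_counter() - t0)

t0 = time.perf_counter()
lib.get(alink)                                  # read back from htmlLibrary
print('second get:', time.perf_counter() - t0)
```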

If you need updated html, just remove the stale copy:

```python3
from librarian import Librarian

lib = Librarian()

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

# remove() returns a truthy value on success, so we can assert it.
removed = lib.remove(alink)
assert removed
```

`lib.remove(alink)` removes the link `alink` from your `htmlLibrary` and `linkMap`, so the next time you call `lib.get(alink)`, the `Librarian` will fetch the html again.
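Using only `get` and `remove`, a "force refresh" helper is a two-liner (`refresh` here is a hypothetical convenience function, not part of the package):

```python3
from librarian import Librarian

def refresh(lib, link):
    # Drop any cached copy, then fetch fresh html (which gets cached again).
    lib.remove(link)
    return lib.get(link)

lib = Librarian()
alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'
fresh_html = refresh(lib, alink)
```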


This project is in its infancy, so if you want any features added, create an issue and I will get on it.


Thank you to `kennethreitz` for creating `samplemod`, which I largely copied for the project structure.
