
Never scrape the html off of a site more than once, ever again.


# Librarian

The goal of this package is to be something like a set of training wheels for web scraping.

A good example is recursively trying to visit all of the links on a site such as:

http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2

If you mess up your scrape and have taken no preemptive measures, you lose all of the work you've done so far. `Librarian` aims to save that html for later, so you never have to redo a scrape you've already done: you're much kinder to the sites you're requesting from, you save time, and the whole scraping experience goes much more smoothly.

Let's outline an example:

Take a look at the html of the site above via Inspect Element; you will see that all of the names and links are under `<div class="divhead">`, and all of the blurbs are under `<div class="divline">`. Now, I would probably do this:

```python3
from urllib.request import urlopen
from bs4 import BeautifulSoup

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'
resp = urlopen(alink)
html = resp.read()
soup = BeautifulSoup(html, 'lxml')

# Print the text of every matching header element.
for elem in soup.select('div.divhead'):
    print(elem.get_text())
```

I'd run this to check that I had the right selector and that nothing else on the page matched the same css selector, and to see whether I needed to change my headers using `urllib.request.Request`, or something else.
Then I'd run the same check for `div.divline`. Finally, for each of the linked sites, I'd have to recursively visit them and grab their html, as sketched below.
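
A minimal sketch of that recursive step, assuming each `div.divhead` wraps an `<a>` tag whose `href` points at the page to visit (the `crawl` helper, the depth limit, and the `seen` set are illustrative, not part of any API):

```python3
from urllib.parse import urljoin
from urllib.request import urlopen

from bs4 import BeautifulSoup

def crawl(url, depth=0, max_depth=1, seen=None):
    """Fetch a page, print its divhead text, and recurse into its links."""
    seen = set() if seen is None else seen
    if url in seen or depth > max_depth:
        return
    seen.add(url)
    soup = BeautifulSoup(urlopen(url).read(), 'lxml')
    for head in soup.select('div.divhead'):
        print(head.get_text())
        link = head.find('a')
        if link and link.get('href'):
            crawl(urljoin(url, link['href']), depth + 1, max_depth, seen)

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'
crawl(alink)
```

Note that every one of those requests goes straight over the network, and nothing is saved locally.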

If I screw anything up, or if some pages have different html, then it will take a long time to get back to where I was, making the process of scraping a site more painful than it has to be.

If we use `Librarian`, we can instead do this:

```python3
from bs4 import BeautifulSoup
from librarian import Librarian

lib = Librarian()

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

html = lib.get(alink)
soup = BeautifulSoup(html, 'lxml')
```

Now, if I ever ask for that same website again (provided that `htmlLibrary` and the pickled files under `librarianTools` haven't been tampered with), the `Librarian` will find it and pull it out of your `htmlLibrary` so you can use it instantly.
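
For instance, continuing from the snippet above (a minimal sketch; the second call should be served from your local `htmlLibrary` rather than the network):

```python3
html_first = lib.get(alink)   # fetched over the network and saved
html_again = lib.get(alink)   # pulled straight out of htmlLibrary
assert html_first == html_again
```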

If you need updated html, just remove the cached copy:

```python3
from librarian import Librarian

lib = Librarian()

alink = 'http://web.archive.org/web/20080827084856/http://www.nanowerk.com:80/nanotechnology/nanomaterial/commercial_all.php?page=2'

removed = lib.remove(alink)
assert removed
```

`lib.remove(alink)` will remove the link `alink` from your `htmlLibrary` and `linkMap`, so the next time you call `lib.get(alink)` with the same link, the `Librarian` will fetch the html afresh.
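
Putting the two calls together gives a simple refresh idiom (a sketch built only from the `get` and `remove` methods shown above):

```python3
def refresh(lib, link):
    """Drop any cached copy of `link`, then fetch and re-cache it."""
    lib.remove(link)       # forget the stored html, if any
    return lib.get(link)   # re-request the page and save the new copy

html = refresh(lib, alink)
```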


This project is in its infancy, so if you want any features added, create an issue and I will get on it.


Thank you to `kennethreitz` for creating `samplemod`, which I largely copied for this project's structure.



## Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

### Source Distribution

`html_librarian-0.0.1.tar.gz` (4.2 kB)

Uploaded: Source

### Built Distribution

`html_librarian-0.0.1-py3-none-any.whl` (7.9 kB)

Uploaded: Python 3

## File details

Details for the file `html_librarian-0.0.1.tar.gz`.

Hashes for `html_librarian-0.0.1.tar.gz`:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `b0f21f6bfbb1f0b826ea95419a708619faef523c975e8f046b9ed64ebb3320aa` |
| MD5 | `6a1344d6f90b41f5c27d84ded7fd9b98` |
| BLAKE2b-256 | `823aa2ab2e3c37a37eee23371cfd5962f51f87243fcff074eab102188d4c2974` |
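
If you want to check a download against the table above, something like this works (a minimal sketch; the filename assumes the sdist sits in the current directory):

```python3
import hashlib

expected = "b0f21f6bfbb1f0b826ea95419a708619faef523c975e8f046b9ed64ebb3320aa"

# Hash the downloaded archive and compare it to the published digest.
with open("html_librarian-0.0.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "file does not match the published SHA256"
```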

Details for the file `html_librarian-0.0.1-py3-none-any.whl`.

Hashes for `html_librarian-0.0.1-py3-none-any.whl`:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `e6a3b2a522cd01e613e1e5ed6aa59803772c645fd1711d56bf1c39d389379eda` |
| MD5 | `13a5428461f5ef0c1f8373fe8e5ef60b` |
| BLAKE2b-256 | `62e895e1e26f806bf10f132697cbccacf292cfe493dd50e56c326907fb217074` |
