
Project description

AutoSpider: A Smart, Automatic, Fast Web Spider for Python

This project automates web scraping by learning the scraping rules for you. Given a URL (or the HTML content of a page) and a list of sample data you want to scrape from that page (text, URLs, or any HTML tag values), it learns the scraping rules and returns the similar elements. You can then reuse the learned scraper on new URLs to fetch similar content, or the exact same elements, from those pages.
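To illustrate the idea of learning a rule from a sample value, here is a minimal, self-contained sketch using only the standard library. It is not AutoSpider's actual implementation; it only shows the concept of inferring an element's signature from a sample and collecting everything that matches:

```python
# Sketch of the core idea (NOT AutoSpider's code): find the tag/attribute
# signature of the element containing a sample value, then collect every
# text under a matching signature.
from html.parser import HTMLParser

class RuleScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []   # open-tag signatures
        self.texts = []   # (signature, text) pairs

    def handle_starttag(self, tag, attrs):
        self.stack.append((tag, tuple(sorted(attrs))))

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if self.stack and data.strip():
            self.texts.append((self.stack[-1], data.strip()))

def similar(html, sample):
    parser = RuleScraper()
    parser.feed(html)
    # "Learn" the rule: the signature(s) of elements holding the sample.
    rules = {sig for sig, text in parser.texts if text == sample}
    # "Apply" the rule: return every text under a matching signature.
    return [text for sig, text in parser.texts if sig in rules]

html = """
<ul>
  <li class="post">What are metaclasses in Python?</li>
  <li class="post">How to call an external command?</li>
  <li class="ad">Sponsored link</li>
</ul>
"""
print(similar(html, "What are metaclasses in Python?"))
# ['What are metaclasses in Python?', 'How to call an external command?']
```

Matching on the sample's tag and attributes keeps the similar posts and drops the ad, which is the behavior AutoSpider generalizes to real pages.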

Installation

AutoSpider is compatible with Python 3.

  • Install the latest version from the Git repository using pip:
$ pip install git+https://github.com/khulnasoft-lab/autospider.git
  • Install from PyPI:
$ pip install autospider
  • Install from source:
$ python setup.py install

How to use

Getting similar results

Say we want to fetch all related post titles on a Stack Overflow page:

from autospider import AutoSpider

url = 'https://stackoverflow.com/questions/2081586/web-scraping-with-python'

# We can add one or more sample values here.
# You can also put URLs here to retrieve URLs.
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoSpider()
result = scraper.build(url, wanted_list)
print(result)

Here's the output:

[
    'How do I merge two dictionaries in a single expression in Python (taking union of dictionaries)?', 
    'How to call an external command?', 
    'What are metaclasses in Python?', 
    'Does Python have a ternary conditional operator?', 
    'How do you remove duplicates from a list whilst preserving order?', 
    'Convert bytes to a string', 
    'How to get line count of a large file cheaply in Python?', 
    "Does Python have a string 'contains' substring method?", 
    'Why is “1000000000000000 in range(1000000000000001)” so fast in Python 3?'
]

Now you can use the scraper object to get the related topics of any Stack Overflow page:

scraper.get_result_similar('https://stackoverflow.com/questions/606191/convert-bytes-to-a-string')

Getting exact result

Say we want to scrape live stock prices from Yahoo Finance:

from autospider import AutoSpider

url = 'https://finance.yahoo.com/quote/AAPL/'

wanted_list = ["124.81"]

scraper = AutoSpider()

# Here we can also pass html content via the html parameter instead of the url (html=html_content)
result = scraper.build(url, wanted_list)
print(result)

Note that you should update wanted_list if you copy this code, as the content of the page changes dynamically.

You can also pass any custom requests module parameters; for example, you may want to use proxies or custom headers:

proxies = {
    "http": 'http://127.0.0.1:8001',
    "https": 'https://127.0.0.1:8001',
}

result = scraper.build(url, wanted_list, request_args=dict(proxies=proxies))

Now we can get the price of any symbol:

scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')

You may want to get other info as well. For example, if you want to get the market cap too, just append it to the wanted list. The get_result_exact method retrieves the data in the same order as the wanted list.
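Because results come back in the same order as the wanted list, you can pair them with field names directly. The values below are stand-ins for what the scraper would return for another symbol, not live data:

```python
# Sample values used to build the scraper (price, then market cap).
wanted_list = ["124.81", "2.08T"]

# Stand-in for scraper.get_result_exact('https://finance.yahoo.com/quote/MSFT/')
result = ["310.11", "2.30T"]

# get_result_exact preserves wanted_list order, so zip gives a stable mapping.
fields = dict(zip(["price", "market_cap"], result))
print(fields)  # {'price': '310.11', 'market_cap': '2.30T'}
```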

Another example: say we want to scrape the About text, the number of stars, and the link to issues from GitHub repository pages:

from autospider import AutoSpider

url = 'https://github.com/khulnasoft-lab/autospider'

wanted_list = [
    'A Smart, Automatic, Fast and Lightweight Web Scraper for Python',
    '2.5k',
    'https://github.com/khulnasoft-lab/autospider/issues',
]

scraper = AutoSpider()
scraper.build(url, wanted_list)

Simple, right?

Saving the model

We can now save the built model to use it later. To save:

# Give it a file path
scraper.save('yahoo-finance')

And to load:

scraper.load('yahoo-finance')
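Conceptually, save and load just serialize the learned rules to a file and read them back. A minimal sketch of that round trip with a stand-in rule set (this is not AutoSpider's actual file format):

```python
import json
import os
import tempfile

# Stand-in for the rules a built scraper would have learned.
rules = {"stack_list": [["li", {"class": "post"}]]}

path = os.path.join(tempfile.gettempdir(), "yahoo-finance.json")

with open(path, "w") as f:
    json.dump(rules, f)      # analogous to scraper.save('yahoo-finance')

with open(path) as f:
    loaded = json.load(f)    # analogous to scraper.load('yahoo-finance')

print(loaded == rules)  # True
```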

Issues

Feel free to open an issue if you have any problem using the module.

Happy Coding ♥️

Project details


Download files


Source Distribution

autospider-1.1.11.tar.gz (10.9 kB)

Uploaded Source

Built Distribution

autospider-1.1.11-py3-none-any.whl (9.8 kB)

Uploaded Python 3

File details

Details for the file autospider-1.1.11.tar.gz.

File metadata

  • Download URL: autospider-1.1.11.tar.gz
  • Upload date:
  • Size: 10.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for autospider-1.1.11.tar.gz:

  • SHA256: 13e3d47f40cf0b108e612f0eed608cd51cc3514110c536ae65f5b5655f01bebd
  • MD5: 41bc81cf42b264d2327375df5417a214
  • BLAKE2b-256: 4522a7a5c032f74063e3d36a8e0078ab8f68ae3a4e6d30f4e4e000c36389b1c9


File details

Details for the file autospider-1.1.11-py3-none-any.whl.

File metadata

  • Download URL: autospider-1.1.11-py3-none-any.whl
  • Upload date:
  • Size: 9.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.3

File hashes

Hashes for autospider-1.1.11-py3-none-any.whl:

  • SHA256: df30ee496d1aa59b926ad6af1bba96e98a2115629b5a3236c3370f6ae4dc8e12
  • MD5: 0427eae8fbad5a4a58ea58dd8b8c4fca
  • BLAKE2b-256: eeb853ac0511f09d7ed9b417688e3ad8ca312e276f8c6acb6343e84ef6d17eb4

