The ecoindex-scraper module provides a way to scrape data from a given website while simulating a real web browser.

Ecoindex Scraper

This module provides a simple interface to get the Ecoindex of a given webpage, using the ecoindex-compute module.

Requirements

  • Python ^3.10 with pip

Install

pip install ecoindex-scraper

Use

Get a page analysis

You can run a page analysis by calling the function get_page_analysis():

(function) get_page_analysis: (url: AnyHttpUrl, window_size: WindowSize | None = WindowSize(width=1920, height=1080), wait_before_scroll: int | None = 1, wait_after_scroll: int | None = 1) -> Coroutine[Any, Any, Result]

Example:

import asyncio
from pprint import pprint

from ecoindex.scraper import EcoindexScraper

pprint(
    asyncio.run(
        EcoindexScraper(url="http://ecoindex.fr").get_page_analysis()
    )
)

Result example:

Result(width=1920, height=1080, url=AnyHttpUrl('http://ecoindex.fr'), size=549.253, nodes=52, requests=12, grade='A', score=90.0, ges=1.2, water=1.8, ecoindex_version='5.0.0', date=datetime.datetime(2022, 9, 12, 10, 54, 46, 773443), page_type=None)
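
The returned Result appears to be a pydantic model, like the request models shown further below (note the AnyHttpUrl field). Assuming that, individual fields can be read as attributes and the whole result dumped as a dict:

import asyncio

from ecoindex.scraper import EcoindexScraper

result = asyncio.run(EcoindexScraper(url="http://ecoindex.fr").get_page_analysis())
print(result.grade, result.score)  # e.g. A 90.0
print(result.model_dump())         # full result as a dict (assumes a pydantic model)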

Default behaviour: the page analysis simulates:

  • A window size of 1920x1080 pixels (set with the window_size parameter)
  • A 1-second wait once the page has loaded (set with the wait_before_scroll parameter)
  • A scroll to the bottom of the page (when possible)
  • A 1-second wait after scrolling to the bottom (set with the wait_after_scroll parameter); see the sketch below for overriding these defaults
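
A minimal sketch of overriding these defaults, assuming the parameters from the signature above are accepted by the EcoindexScraper constructor alongside url (as url is in the examples), and that WindowSize is importable from ecoindex.models, where the ScreenShot model below lives:

import asyncio
from pprint import pprint

from ecoindex.models import WindowSize  # assumed import path, mirroring ScreenShot
from ecoindex.scraper import EcoindexScraper

pprint(
    asyncio.run(
        EcoindexScraper(
            url="http://ecoindex.fr",
            window_size=WindowSize(width=375, height=667),  # simulate a small viewport
            wait_before_scroll=3,  # wait 3 seconds once the page has loaded
            wait_after_scroll=3,   # wait 3 seconds after scrolling to the bottom
        ).get_page_analysis()
    )
)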

Get a page analysis and generate a screenshot

It is possible to generate a screenshot of the analyzed page by adding a ScreenShot property to the EcoindexScraper object. You have to define an id (any string works, but a unique id such as a UUID is recommended) and a folder for the screenshot file (the folder is created if it does not exist).

import asyncio
from pprint import pprint
from uuid import uuid1

from ecoindex.models import ScreenShot
from ecoindex.scraper import EcoindexScraper

pprint(
    asyncio.run(
        EcoindexScraper(
            url="http://www.ecoindex.fr/",
            screenshot=ScreenShot(id=str(uuid1()), folder="./screenshots"),
        )
        .get_page_analysis()
    )
)
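
After the run, the screenshot is written under the configured folder. A small sketch that simply lists whatever was produced there (the exact filename is chosen by the library, so this only inspects the folder):

from pathlib import Path

# List everything under the folder configured in the ScreenShot above
for path in Path("./screenshots").rglob("*"):
    print(path)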

Async analysis

You can also run several analyses concurrently, for instance with a thread pool:

import asyncio
from concurrent.futures import ThreadPoolExecutor, as_completed

from ecoindex.scraper import EcoindexScraper

def run_page_analysis(url):
    return asyncio.run(
        EcoindexScraper(url=url)
        .get_page_analysis()
    )


with ThreadPoolExecutor(max_workers=8) as executor:
    future_to_analysis = {}

    url = "https://www.ecoindex.fr"

    for i in range(10):
        future_to_analysis[
            executor.submit(
                run_page_analysis,
                url,
            )
        ] = url

    for future in as_completed(future_to_analysis):
        try:
            print(future.result())
        except Exception as e:
            print(e)
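
Since get_page_analysis() is a coroutine, the same fan-out can also run on a single event loop with asyncio.gather. A sketch, assuming the scraper instances are safe to run concurrently:

import asyncio

from ecoindex.scraper import EcoindexScraper

async def analyze_all(urls):
    # Gather all analyses concurrently on one event loop
    return await asyncio.gather(
        *(EcoindexScraper(url=url).get_page_analysis() for url in urls),
        return_exceptions=True,  # collect errors instead of failing the whole batch
    )

results = asyncio.run(analyze_all(["https://www.ecoindex.fr"] * 10))
for result in results:
    print(result)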

Get requests details from an analysis

You can get the details of the requests made by the page by calling the function get_all_requests(), and get the requests aggregated by category by calling the function get_requests_by_category():

import asyncio
from pprint import pprint

from ecoindex.scraper import EcoindexScraper

scraper = EcoindexScraper(url="http://www.ecoindex.fr")

result = asyncio.run(scraper.get_page_analysis())
all_requests = asyncio.run(scraper.get_all_requests())
requests_by_category = asyncio.run(scraper.get_requests_by_category())

pprint([request.model_dump() for request in all_requests])
# [{'category': 'html',
#   'mime_type': 'text/html; charset=iso-8859-1',
#   'size': 475.0,
#   'status': 301,
#   'url': 'http://www.ecoindex.fr/'},
#  {'category': 'html',
#   'mime_type': 'text/html',
#   'size': 7772.0,
#   'status': 200,
#   'url': 'https://www.ecoindex.fr/'},
#  {'category': 'css',
#   'mime_type': 'text/css',
#   'size': 9631.0,
#   'status': 200,
#   'url': 'https://www.ecoindex.fr/css/bundle.min.d38033feecefa0352173204171412aec01f58eee728df0ac5c917a396ca0bc14.css'},
#  {'category': 'javascript',
#   'mime_type': 'application/javascript',
#   'size': 9823.0,
#   'status': 200,
#   'url': 'https://www.ecoindex.fr/fr/js/bundle.8781a9ae8d87b4ebaa689167fc17b7d71193cf514eb8bb40aac9bf4548e14533.js'},
#  {'category': 'other',
#   'mime_type': 'x-unknown',
#   'size': 892.0,
#   'status': 200,
#   'url': 'https://www.ecoindex.fr/images/logo-neutral-it.webp'},
#  {'category': 'image',
#   'mime_type': 'image/svg+xml',
#   'size': 3298.0,
#   'status': 200,
#   'url': 'https://www.ecoindex.fr/images/logo-greenit.svg'}]

pprint(requests_by_category.model_dump())
# {'css': {'total_count': 1, 'total_size': 9631.0},
#  'font': {'total_count': 0, 'total_size': 0.0},
#  'html': {'total_count': 2, 'total_size': 8247.0},
#  'image': {'total_count': 1, 'total_size': 3298.0},
#  'javascript': {'total_count': 1, 'total_size': 9823.0},
#  'other': {'total_count': 1, 'total_size': 892.0},
#  'video': {'total_count': 0, 'total_size': 0.0}}
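
As a follow-up sketch, the per-category aggregates shown above can be summed to get the page's total request count and transferred size (judging from the output, sizes are in bytes):

# Totals across categories, computed from the aggregates above
categories = requests_by_category.model_dump()
total_count = sum(item["total_count"] for item in categories.values())
total_size = sum(item["total_size"] for item in categories.values())
print(f"{total_count} requests, {total_size:.0f} bytes transferred")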
