Ecoindex-Cli

`ecoindex-cli` is a CLI tool that lets you run Ecoindex tests on given pages.
This tool provides an easy way to analyze websites with Ecoindex from your local computer, using multi-threading. You can:
- Run the analysis on multiple pages
- Define multiple screen resolutions
- Run a recursive analysis from a given website
This CLI is built on top of ecoindex-python with Typer.
The output is a CSV or JSON file with the results of the analysis.
Requirements
- Python ^3.10
- pip
Setup
pip install --user -U ecoindex-cli
Use case
The CLI provides two commands, `analyze` and `report`, which can be used separately:
ecoindex-cli --help
Usage: ecoindex-cli [OPTIONS] COMMAND [ARGS]...
Ecoindex cli to make analysis of webpages
Options:
--install-completion [bash|zsh|fish|powershell|pwsh]
Install completion for the specified shell.
--show-completion [bash|zsh|fish|powershell|pwsh]
Show completion for the specified shell, to
copy it or customize the installation.
--help Show this message and exit.
Commands:
analyze Make an ecoindex analysis of given webpages or website.
report If you already performed an ecoindex analysis and have your...
Make a simple analysis
You provide just one web URL:
ecoindex-cli analyze --url https://www.ecoindex.fr
Result
Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`
There are 1 url(s), do you want to process? [Y/n]:
1 urls for 1 window size with 8 maximum workers
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1/1 • 0:00:10 • 0:00:00
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Total analysis ┃ Success ┃ Failed ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ 1              │ 1       │ 0      │
└────────────────┴─────────┴────────┘
File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_140013/results.csv written !
By default, this runs the analysis with a screen resolution of 1920x1080px and the last known version of chromedriver. You can change those settings with the options `--window-size` and `--chrome-version`.
You can add multiple URLs to analyze with repeated `--url` options. For example:
ecoindex-cli analyze --url https://www.ecoindex.fr --url https://www.ecoindex.fr/a-propos/
Provide urls from a file
You can use a file with the URLs that you want to analyze, one URL per line. This is helpful if you want to replay the same scenario regularly.
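Such a file is a plain list of URLs, one per line. For example, an `input/ecoindex.csv` file matching the two-URL run below could look like this (the URLs are taken from the examples in this README; the exact file name is up to you):

```
https://www.ecoindex.fr
https://www.ecoindex.fr/a-propos/
```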
ecoindex-cli analyze --urls-file input/ecoindex.csv
Result
Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`
There are 2 url(s), do you want to process? [Y/n]:
2 urls for 1 window size with 8 maximum workers
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2/2 • 0:00:14 • 0:00:00
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Total analysis ┃ Success ┃ Failed ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ 2              │ 2       │ 0      │
└────────────────┴─────────┴────────┘
File /tmp/ecoindex-cli/output/www.ecoindex.fr.csv/2023-14-04_140853/results.csv written !
Make a recursive analysis
You can run a recursive analysis of a given website. The app will try to discover all the pages of the website and launch an analysis on each of them. ⚠️ This can run for a very long time! Use it at your own risk!
ecoindex-cli analyze --url https://www.ecoindex.fr --recursive
Result
You are about to perform a recursive website scraping. This can take a long time. Are you sure to want to proceed? [Y/n]:
Crawling root url https://www.ecoindex.fr -> Wait a minute!
2023-04-14 14:09:38 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot)
2023-04-14 14:09:38 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0], pyOpenSSL 23.0.0 (OpenSSL 3.0.8 7 Feb 2023), cryptography 39.0.2, Platform Linux-5.15.0-67-generic-x86_64-with-glibc2.31
2023-04-14 14:09:38 [scrapy.crawler] INFO: Overridden settings:
{'LOG_ENABLED': False}
Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`
There are 7 url(s), do you want to process? [Y/n]:
7 urls for 1 window size with 8 maximum workers
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7 • 0:00:25 • 0:00:00
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Total analysis ┃ Success ┃ Failed ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ 7              │ 7       │ 0      │
└────────────────┴─────────┴────────┘
File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141011/results.csv written !
Generate an HTML report
You can easily generate an HTML report at the end of the analysis. Just add the option `--html-report`:
ecoindex-cli analyze --url https://www.ecoindex.fr --recursive --html-report
Result
You are about to perform a recursive website scraping. This can take a long time. Are you sure to want to proceed? [Y/n]:
Crawling root url https://www.ecoindex.fr -> Wait a minute!
2023-04-14 14:16:13 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: scrapybot)
2023-04-14 14:16:13 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.9.14, cssselect 1.2.0, parsel 1.7.0, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.3 (main, Apr 5 2023, 14:15:06) [GCC 9.4.0], pyOpenSSL 23.0.0 (OpenSSL 3.0.8 7 Feb 2023), cryptography 39.0.2, Platform Linux-5.15.0-67-generic-x86_64-with-glibc2.31
2023-04-14 14:16:13 [scrapy.crawler] INFO: Overridden settings:
{'LOG_ENABLED': False}
Urls recorded in file `/tmp/ecoindex-cli/input/www.ecoindex.fr.csv`
There are 7 url(s), do you want to process? [Y/n]:
7 urls for 1 window size with 8 maximum workers
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7 • 0:00:28 • 0:00:00
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Total analysis ┃ Success ┃ Failed ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ 7              │ 7       │ 0      │
└────────────────┴─────────┴────────┘
File /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141645/results.csv written !
Amazing! A report has been generated to /tmp/ecoindex-cli/output/www.ecoindex.fr/2023-14-04_141645/index.html
When generating an HTML report, the results are written to a CSV file and you cannot specify the result file location, so the options `--export-format` and `--output-file` are ignored.
Other features
Set the output file
You can define the CSV output file:
ecoindex-cli analyze --url https://www.ecoindex.fr --output-file ~/ecoindex-results/ecoindex.csv
Export to JSON file
By default, the results are exported to a CSV file, but you can choose to export them to a JSON file instead.
ecoindex-cli analyze --url https://www.ecoindex.fr --export-format json
Change wait before / after scroll
By default, the scenario waits 3 seconds before and after scrolling to the bottom of the page, so that the analysis results conform to the methodology of the main Ecoindex API.
You can change these values with the options `--wait-before-scroll` and `--wait-after-scroll` to fit your needs.
ecoindex-cli analyze --url https://www.ecoindex.fr --wait-before-scroll 1 --wait-after-scroll 1
Using a specific Chrome version
You can use a specific Chrome version for the analysis. This is useful if you run an older Chrome version; just provide the major Chrome version number.
ecoindex-cli analyze --url https://www.ecoindex.fr --chrome-version 107
Or, if you do not know the Chrome version number, you can use this one-line command:
ecoindex-cli analyze --url https://www.ecoindex.fr --chrome-version $(google-chrome --version | grep -oP '(?<=\s)\d{3}')
Using multi-threading
You can use multi-threading to speed up the analysis when you have a lot of websites to analyze. In this case, you can define the maximum number of workers to use:
ecoindex-cli analyze --url https://www.ecoindex.fr --url https://www.greenit.fr/ --max-workers 10
By default, the maximum number of workers is set to the CPU count.
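The worker model described above is the usual thread-pool pattern. As a generic illustration (this is not ecoindex-cli's actual code; `run_all` and `analyze` are hypothetical names), capping concurrency at `--max-workers` looks roughly like this:

```python
import os
from concurrent.futures import ThreadPoolExecutor


def run_all(urls, analyze, max_workers=None):
    """Run `analyze` on every URL, with at most `max_workers` threads.

    Like ecoindex-cli's default, fall back to the CPU count when no
    explicit worker limit is given.
    """
    max_workers = max_workers or os.cpu_count()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order, so results line up with urls.
        return list(pool.map(analyze, urls))
```

A pool larger than the number of URLs simply leaves some workers idle; the practical ceiling is how many browser sessions your machine can drive at once.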
Disable console interaction
You can disable confirmation prompts and force the app to answer yes to all of them. This is useful if you start the app from another script, or if you do not want to wait for it to finish.
ecoindex-cli analyze --url https://www.ecoindex.fr --recursive --no-interaction
Only generate a report from existing result file
If you already performed an analysis and, for example, forgot to generate the HTML report, you do not need to re-run a full analysis; you can simply generate a report from your result file:
ecoindex-cli report "/tmp/ecoindex-cli/output/www.ecoindex.fr/2021-05-06_191355/results.csv" "www.synchrone.fr"
Result
Amazing! A report has been generated to /tmp/ecoindex-cli/output/www.ecoindex.fr/2021-05-06_191355/index.html
Results example
The result of the analysis is a CSV or JSON file which can be easily used for further analysis:
CSV example
width,height,url,size,nodes,requests,grade,score,ges,water,date,page_type
1920,1080,https://www.ecoindex.fr,521.54,45,68,B,75.0,1.5,2.25,2022-05-03 22:28:49.280479,
1920,1080,https://www.greenit.fr,1374.641,666,167,E,32.0,2.36,3.54,2022-05-03 22:28:51.176216,website
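A results file like the one above is easy to post-process. This standard-library sketch (the `average_score` helper is illustrative, not part of ecoindex-cli; the column names are taken from the CSV example above) computes the mean Ecoindex score across all analyzed pages:

```python
import csv


def average_score(path):
    """Return the mean `score` column of an ecoindex-cli results CSV."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return sum(float(row["score"]) for row in rows) / len(rows)
```

For the two example rows above, the scores 75.0 and 32.0 average to 53.5.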
JSON example
[
{
"width": 1920,
"height": 1080,
"url": "https://www.ecoindex.fr",
"size": 521.54,
"nodes": 45,
"requests": 68,
"grade": "B",
"score": 75.0,
"ges": 1.5,
"water": 2.25,
"date": "2022-05-03 22:25:01.016749",
"page_type": null
},
{
"width": 1920,
"height": 1080,
"url": "https://www.greenit.fr",
"size": 1163.386,
"nodes": 666,
"requests": 148,
"grade": "E",
"score": 34.0,
"ges": 2.32,
"water": 3.48,
"date": "2022-05-03 22:25:04.516676",
"page_type": "website"
}
]
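The JSON export is just as easy to consume. As a sketch (the `pages_needing_work` helper is illustrative, not an ecoindex-cli API), this filters out the pages whose grade is D or worse; since grades run alphabetically from A (best) to G (worst), plain string comparison orders them correctly:

```python
import json


def pages_needing_work(path, threshold="D"):
    """Return URLs whose Ecoindex grade is `threshold` or worse.

    Grades A..G sort alphabetically from best to worst, so a simple
    string comparison is enough.
    """
    with open(path) as f:
        results = json.load(f)
    return [r["url"] for r in results if r["grade"] >= threshold]
```

Applied to the example output above, only https://www.greenit.fr (grade E) would be returned.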
Docker
You can use this application in a Docker container; simply run it with the following command:
docker run -it --rm -v /tmp/ecoindex-cli:/tmp/ecoindex-cli vvatelot/ecoindex-cli:latest ecoindex-cli analyze --url https://www.ecoindex.fr --recursive --html-report
Fields description
- `width` is the screen width used for the page analysis (in pixels)
- `height` is the screen height used for the page analysis (in pixels)
- `url` is the analysed page URL
- `size` is the size of the page and of the downloaded elements of the page, in KB
- `nodes` is the number of DOM elements in the page
- `requests` is the number of external requests made by the page
- `grade` is the corresponding Ecoindex grade of the page (from A to G)
- `score` is the corresponding Ecoindex score of the page (0 to 100)
- `ges` is the equivalent greenhouse gas emission (in gCO2e) of the page
- `water` is the equivalent water consumption (in cl) of the page
- `date` is the datetime of the page analysis
- `page_type` is the type of the page, based on the opengraph type tag
Testing
In order to develop or test, you have to use Poetry: install the dependencies and start a poetry shell:
poetry install && \
poetry shell
We use Pytest to run unit tests for this project. The test suite is in the `tests` folder. Just execute:
pytest --cov-report term-missing:skip-covered --cov=. --cov-config=.coveragerc tests
This runs pytest and also generates a coverage report (terminal and HTML).
Disclaimer
The LCA values used by ecoindex_cli to evaluate environmental impacts are not under free license - © Frédéric Bordage. Please also refer to the mentions provided in the code files for specifics on the IP regime.