Creating a scraper for multiple fantasy football sites
Project description
fantasy-scraper
Scrapes all of the data from our fantasy league on NFL.com, since we are moving to Sleeper. It might end up including MyFantasyLeague as well.
Pipenv and Poetry
Setup
curl -sSL https://install.python-poetry.org | python3 -
pipenv install --python=/usr/local/bin/python3.10
pipenv shell
poetry completions bash >> ~/.bash_completion
#export PIP_PYTHON_PATH="$VIRTUAL_ENV/bin/python3"
poetry new nfl_scraper
#pipenv install --index=pip
#pipenv install --index=distutils
poetry add requests
poetry add html5lib
poetry add bs4
#pip uninstall -y setuptools
#exit
#deactivate
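With requests, html5lib, and bs4 added above, the scraping flow is fetch-then-parse. Below is a minimal, hedged sketch of that flow, not the project's actual code; the league URL is a placeholder and real NFL.com pages require an authenticated session.

import requests
from bs4 import BeautifulSoup

def fetch_page(url: str, session: requests.Session | None = None) -> BeautifulSoup:
    # Fetch a page and hand the HTML to BeautifulSoup using the html5lib parser.
    session = session or requests.Session()
    response = session.get(url, timeout=30)
    response.raise_for_status()
    return BeautifulSoup(response.text, "html5lib")

if __name__ == "__main__":
    # Placeholder league id; the real scraper would log in first.
    soup = fetch_page("https://fantasy.nfl.com/league/123456")
    print(soup.title.get_text(strip=True) if soup.title else "no <title> found")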
Running as Non-Dev
poetry install --without dev --sync
poetry run python -V
# Help
poetry run python main.py -h
# Sub out the params (see the CLI sketch at the end of this section)
poetry run python main.py -e <email> -p <password> -i <id> -n <name>
# Tests: these need to be beefed up (see the pytest sketch below)
poetry run pytest
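The flags above imply main.py takes an email, password, league id, and league name. Here is a hedged sketch of that argument parsing; the long option names and help text are assumptions, only -e/-p/-i/-n come from the command above.

import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Scrape fantasy league data from NFL.com")
    parser.add_argument("-e", "--email", required=True, help="NFL.com account email")
    parser.add_argument("-p", "--password", required=True, help="NFL.com account password")
    parser.add_argument("-i", "--id", required=True, help="League id")
    parser.add_argument("-n", "--name", required=True, help="League name")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Scraping league {args.id} ({args.name}) as {args.email}")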
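For beefing up the tests, the usual pattern is small parsing functions exercised against canned HTML. This is a sketch of that shape; parse_roster and the playerName CSS class are hypothetical, not code from this repo.

from bs4 import BeautifulSoup

def parse_roster(html: str) -> list[str]:
    # Hypothetical helper: pull player names out of table cells.
    soup = BeautifulSoup(html, "html5lib")
    return [td.get_text(strip=True) for td in soup.select("td.playerName")]

def test_parse_roster_extracts_player_names():
    html = "<table><tr><td class='playerName'>Justin Jefferson</td></tr></table>"
    assert parse_roster(html) == ["Justin Jefferson"]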
Running as Dev
poetry check
poetry build
#poetry update #gets latest package version
Running in CI/CD
poetry check
# output version
poetry version -s
poetry version major|minor|patch --dry-run
Project details
Download files
Download the file for your platform.
Source Distribution
nfl_scraper-1.0.1.tar.gz (64.7 kB)
Built Distribution
nfl_scraper-1.0.1-py3-none-any.whl
Hashes for nfl_scraper-1.0.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | fb575b3c25aba13508ae7b6ab6ce64b0deb18b406cd8082e2763ed744f171aad
MD5 | ec87ea3c198b980f174866524dbdf522
BLAKE2b-256 | 76333e431d6e4e9dbc6de9cb10b66f5c909d48d2393207c924d4618788cbd785
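To verify a downloaded artifact against the digests above, hashlib can recompute the SHA256 locally. The snippet below is a sketch; it assumes the wheel sits in the current directory.

import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large artifacts never sit fully in memory.
    digest = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "fb575b3c25aba13508ae7b6ab6ce64b0deb18b406cd8082e2763ed744f171aad"
print(sha256_of("nfl_scraper-1.0.1-py3-none-any.whl") == expected)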