scrapy-athlinks: web scraper for race results hosted on Athlinks
Introduction
scrapy-athlinks provides the RaceSpider class.
This spider crawls through all results pages from a race hosted on athlinks.com, building and following links to each athlete's individual results page, where it collects their split data. It also collects some metadata about the race itself.
By default, the spider returns one race metadata object (RaceItem) and one AthleteItem per participant. Each AthleteItem consists of some basic athlete info and a list of RaceSplitItem objects containing data from each split they recorded.
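The relationship between these item types can be pictured with illustrative records like the following. Note that the field names here are placeholders chosen for the sketch, not the actual schemas defined by the item classes:

```python
# Illustrative shapes only -- the real field names come from the RaceItem,
# AthleteItem, and RaceSplitItem classes in scrapy_athlinks.
race = {"name": "Leadville Trail 100 Run", "date": "2022-08-20"}

# One AthleteItem per participant; its splits list holds one
# RaceSplitItem-style record per split the athlete recorded.
athlete = {
    "name": "Jane Doe",
    "splits": [
        {"split_name": "May Queen", "time_seconds": 8100},
        {"split_name": "Finish", "time_seconds": 104400},
    ],
}

print(len(athlete["splits"]))  # 2
```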
How to use this package
Option 1: In python scripts
Scrapy can be operated entirely from python scripts. See the scrapy documentation for more info.
Installation
The package is available on PyPI and can be installed with pip:
pip install scrapy-athlinks
Example usage
A demo script is included in this repo.
from scrapy.crawler import CrawlerProcess
from scrapy_athlinks import RaceSpider, AthleteItem, RaceItem
settings = {
    'FEEDS': {
        # Athlete data. Inside this file will be a list of dicts containing
        # data about each athlete's race and splits.
        'athletes.json': {
            'format': 'json',
            'overwrite': True,
            'item_classes': [AthleteItem],
        },
        # Race metadata. Inside this file will be a list with a single dict
        # containing info about the race itself.
        'metadata.json': {
            'format': 'json',
            'overwrite': True,
            'item_classes': [RaceItem],
        },
    }
}
process = CrawlerProcess(settings=settings)
# Crawl results for the 2022 Leadville Trail 100 Run
process.crawl(RaceSpider, 'https://www.athlinks.com/event/33913/results/Event/1018673/')
process.start()
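When the crawl completes, the feed exports are ordinary JSON files that can be read back with the standard library. A minimal sketch follows; the sample data written here stands in for a real crawl's output so the snippet runs on its own:

```python
import json

# Stand-in for a real crawl's athletes.json output.
sample = [{"name": "Jane Doe", "splits": []}]
with open("athletes.json", "w") as f:
    json.dump(sample, f)

# Reading the feed back is plain JSON parsing.
with open("athletes.json") as f:
    athletes = json.load(f)

print(f"{len(athletes)} athlete record(s)")
```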
Option 2: Command line
Alternatively, you may clone this repo and use it like a typical Scrapy project that you might create on your own.
Installation
git clone https://github.com/aaron-schroeder/athlinks-scraper-scrapy
cd athlinks-scraper-scrapy
pip install -r requirements.txt
Example usage
Run a RaceSpider:
cd scrapy_athlinks
scrapy crawl race -a url=https://www.athlinks.com/event/33913/results/Event/1018673 -O out.json
Dependencies
All that is required is Scrapy (and its dependencies).
Testing
make test
License
This project is licensed under the MIT License. See the LICENSE file for details.
Contact
You can get in touch with me at the following places:
- Website: trailzealot.com
- LinkedIn: linkedin.com/in/aarondschroeder
- GitHub: github.com/aaron-schroeder
File details
Details for the file scrapy-athlinks-0.0.1.tar.gz.
File metadata
- Download URL: scrapy-athlinks-0.0.1.tar.gz
- Upload date:
- Size: 10.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8e2a6d541f16438ca75f256c8ca3a66b2ef0112b591bc58cc1b546b7754b3312
MD5 | 8241ee2335c635bdcb240bddb2bfb882
BLAKE2b-256 | 3a786dcf45c068f31b57e6305bd3a834687694526ecc3d09a1cf75d3edf96953
File details
Details for the file scrapy_athlinks-0.0.1-py3-none-any.whl.
File metadata
- Download URL: scrapy_athlinks-0.0.1-py3-none-any.whl
- Upload date:
- Size: 9.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 474a04d10e070197d9c97ffcf40f83bef90ca2e995c3e7a2723f272ebced6760
MD5 | 62115dcbadf195b2c1bd3608b6f6c0a4
BLAKE2b-256 | 0877c152c17f5b395cf375189d8eb53aaaf66172c61e0c79c6cd6a983ae34f0b