
scrapy-athlinks: web scraper for race results hosted on Athlinks


Introduction

scrapy-athlinks provides the RaceSpider class.

This spider crawls through all results pages from a race hosted on athlinks.com, building and following links to each athlete's individual results page, where it collects their split data. It also collects some metadata about the race itself.

By default, the spider returns one race metadata object (RaceItem), and one AthleteItem per participant. Each AthleteItem consists of some basic athlete info and a list of RaceSplitItem containing data from each split they recorded.
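As a rough sketch, the emitted items serialize to dicts along these lines. The field names below are illustrative assumptions, not the package's exact schema; the actual fields are defined by RaceItem, AthleteItem, and RaceSplitItem in scrapy_athlinks.

```python
# Hypothetical serialized forms of the scraped items.
# Field names here are placeholders for illustration only.
race_item = {
    "name": "Leadville Trail 100 Run",
    "date": "2022-08-20",
    "location": "Leadville, CO",
}
athlete_item = {
    "name": "Jane Doe",
    "bib": "123",
    # One entry per RaceSplitItem the athlete recorded.
    "splits": [
        {"name": "May Queen", "time": "2:05:00"},
        {"name": "Finish", "time": "24:59:59"},
    ],
}
```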

How to use this package

Option 1: In Python scripts

Scrapy can be operated entirely from Python scripts. See the Scrapy documentation for more info.

Installation

The package is available on PyPI and can be installed with pip:

pip install scrapy-athlinks

Example usage

A demo script is included in this repo.

from scrapy.crawler import CrawlerProcess
from scrapy_athlinks import RaceSpider, AthleteItem, RaceItem


settings = {
  'FEEDS': {
    # Athlete data. Inside this file will be a list of dicts containing
    # data about each athlete's race and splits.
    'athletes.json': {
      'format':'json',
      'overwrite': True,
      'item_classes': [AthleteItem],
    },
    # Race metadata. Inside this file will be a list with a single dict
    # containing info about the race itself.
    'metadata.json': {
      'format':'json',
      'overwrite': True,
      'item_classes': [RaceItem],
    },
  }
}
process = CrawlerProcess(settings=settings)

# Crawl results for the 2022 Leadville Trail 100 Run
process.crawl(RaceSpider, 'https://www.athlinks.com/event/33913/results/Event/1018673/')
process.start()
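Once the crawl finishes, the feed files are ordinary JSON and can be read back with the standard library. A minimal sketch, using an inline string in place of athletes.json and illustrative field names (the real keys come from AthleteItem):

```python
import json

# Stand-in for the contents of athletes.json; the actual field
# names depend on AthleteItem and may differ from these.
raw = '[{"name": "Jane Doe", "splits": [{"name": "Finish", "time": "19:42:10"}]}]'

athletes = json.loads(raw)  # with a real file: json.load(open("athletes.json"))
for athlete in athletes:
    print(athlete["name"], "recorded", len(athlete["splits"]), "split(s)")
```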

Option 2: Command line

Alternatively, you can clone this repo and use it like a typical Scrapy project.

Installation

git clone https://github.com/aaron-schroeder/athlinks-scraper-scrapy
cd athlinks-scraper-scrapy
pip install -r requirements.txt

Example usage

Run a RaceSpider:

cd scrapy_athlinks
scrapy crawl race -a url=https://www.athlinks.com/event/33913/results/Event/1018673 -O out.json

Dependencies

All that is required is Scrapy (and its dependencies).

Testing

make test

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contact

You can get in touch with me through the project's GitHub repository.

