
Scrapers for Texas elections results

Project description

Hi. This is just me fooling around, trying to come up with a better way to scrape election results. The tricky logic has been refined in other Texas Tribune projects, but there it was deeply tied to other code.

The idea is to split the process up into multiple logical steps that other people might find useful:

  1. Ingest results: Typically with curl, cat, or anything else that pipes output to stdout.
  2. Serialize the output HTML as JSON: Does not attempt to extract information; it just separates the data from the HTML. This is the hard part, the one scrapers usually have trouble with.
  3. Interpret the serialized output: Turns the raw serialized data into something you might expect to see from a nice API.
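The serialize step (number 2 above) might be sketched with the stdlib html.parser, pulling every table cell out of a results page as raw strings without interpreting any of them. The names TableSerializer and serialize here are illustrative, not the package's actual API.

```python
import json
from html.parser import HTMLParser


class TableSerializer(HTMLParser):
    """Collect every <td>/<th> cell from an HTML page as raw strings.

    No interpretation happens here: the output is just the page's
    tabular data, separated from its markup.
    """

    def __init__(self):
        super().__init__()
        self.rows = []          # completed rows of raw cell text
        self._row = None        # row currently being built
        self._in_cell = False   # True while inside a td/th tag

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())


def serialize(html_text):
    """Serialize an HTML page's tables to JSON, untouched."""
    parser = TableSerializer()
    parser.feed(html_text)
    return json.dumps(parser.rows)


html_page = (
    "<table><tr><th>Candidate</th><th>Votes</th></tr>"
    "<tr><td>A. Smith</td><td>1,234</td></tr></table>"
)
print(serialize(html_page))
# → [["Candidate", "Votes"], ["A. Smith", "1,234"]]
```

Note that "1,234" stays a string with its comma: cleaning it up is deliberately left to the interpret step.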

In an extract, transform, load (ETL) process, this covers only the extraction step, with support for minor transformation.
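The interpret step could then look something like the sketch below: it takes the serialized JSON rows and turns them into API-style records. The field names and the numeric cleanup are assumptions for illustration, not the package's actual schema.

```python
import json


def interpret(serialized):
    """Turn raw serialized rows into API-style records.

    Assumes the first row is a header and that numeric-looking
    cells ("1,234") should become ints.
    """
    rows = json.loads(serialized)
    header = [h.lower() for h in rows[0]]
    records = []
    for row in rows[1:]:
        record = {}
        for key, cell in zip(header, row):
            cleaned = cell.replace(",", "")
            # Keep non-numeric cells as-is; convert vote counts to ints.
            record[key] = int(cleaned) if cleaned.isdigit() else cell
        records.append(record)
    return records


raw = '[["Candidate", "Votes"], ["A. Smith", "1,234"], ["B. Jones", "987"]]'
print(interpret(raw))
# → [{'candidate': 'A. Smith', 'votes': 1234}, {'candidate': 'B. Jones', 'votes': 987}]
```

Because the serialized JSON is the only input, this step can be rerun or swapped out without touching the scraping logic at all.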


Download files


Files for tx_elections_scrapers, version 0.4.1:

  - tx_elections_scrapers-0.4.1-py2-none-any.whl (19.5 kB): Wheel, Python 2.7
  - tx_elections_scrapers-0.4.1.tar.gz (13.0 kB): Source
