
Package for extracting software repository metadata


# Scraper

Scraper is a tool for scraping and visualizing open source data from various
code hosting platforms, such as GitHub.com, GitHub Enterprise, GitLab.com,
hosted GitLab, and Bitbucket Server.

## Getting Started

[Code.gov](https://code.gov) is a newly launched website of the US Federal
Government that allows the public to access metadata about the government's
custom-developed software. The site requires this metadata to function, and this
Python library can help produce it!

To get started, you will need a GitHub personal access token to make requests
to the GitHub API. This should be set in your environment or shell ``rc`` file
with the name ``GITHUB_API_TOKEN``:


```shell
$ echo "export GITHUB_API_TOKEN=XYZ" >> ~/.bashrc
```
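Code that talks to the GitHub API can then pick the token up from the
environment. A minimal sketch (illustrative only; how Scraper consumes the
variable internally may differ):

```python
import os

def github_headers(env=os.environ):
    """Build GitHub API auth headers from the GITHUB_API_TOKEN variable.

    Fails early with a clear message if the token is missing, rather
    than hitting unauthenticated API rate limits later.
    """
    token = env.get("GITHUB_API_TOKEN")
    if token is None:
        raise RuntimeError(
            "GITHUB_API_TOKEN is not set; export it in your shell rc file"
        )
    # GitHub accepts personal access tokens in the Authorization header.
    return {"Authorization": "token %s" % token}
```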

Additionally, to perform the labor hours estimation, you will need to install
``cloc`` into your environment. This is typically done with a package manager
such as ``npm`` or ``homebrew``.
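For intuition, labor estimates of this kind are commonly derived from the
source line counts that ``cloc`` reports, for example via the basic COCOMO
organic-mode effort equation. This is an illustrative assumption, not
necessarily the exact model Scraper uses:

```python
def estimate_labor_hours(sloc, hours_per_person_month=152.0):
    """Estimate labor hours from source lines of code (SLOC) using the
    basic COCOMO organic-mode effort equation:

        effort (person-months) = 2.4 * KSLOC ** 1.05

    The hours-per-person-month conversion factor is an assumption.
    """
    ksloc = sloc / 1000.0
    person_months = 2.4 * ksloc ** 1.05
    return person_months * hours_per_person_month
```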

Then, to generate a ``code.json`` file for your agency, you will need a
``config.json`` file specifying the platforms you will connect to and scrape
data from. An example config file can be found in [demo.json](/demo.json). Once
you have your config file, you are ready to install and run the scraper!

```shell
# Install Scraper
$ pip install -e .

# Run Scraper with your config file ``config.json``
$ scraper --config config.json
```
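Before running, it can help to sanity-check the config file. A minimal sketch
using the top-level key names documented below (this helper is illustrative,
not part of Scraper's API):

```python
import json

# Top-level keys the config structure below expects.
REQUIRED_KEYS = {"agency", "organization", "contact_email"}
PLATFORM_KEYS = {"GitHub", "GitLab", "Bitbucket"}

def check_config(config):
    """Return a list of problems found in a parsed config dict."""
    problems = [key for key in REQUIRED_KEYS if key not in config]
    if not PLATFORM_KEYS & set(config):
        problems.append("no platform section (GitHub / GitLab / Bitbucket)")
    return problems

def load_config(path):
    """Load and parse a config.json file."""
    with open(path) as f:
        return json.load(f)
```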

A full example of the resulting ``code.json`` file can be found in the project
repository.

## Config File Options

The configuration file is a JSON file that specifies which repository platforms
to pull projects from, as well as some settings that can be used to override
incomplete or inaccurate data returned by the scraping.

The basic structure is:

```
{
    "contact_email": "...",  # Used when the contact email cannot be found otherwise

    "agency": "...",         # Your agency abbreviation here
    "organization": "...",   # The organization within the agency
    "permissions": { ... },  # Object containing default values for usageType and exemptionText

    # Platform configurations, described in more detail below
    "GitHub": [ ... ],
    "GitLab": [ ... ],
    "Bitbucket": [ ... ]
}
```

```
"GitHub": [
    {
        "url": "https://github.com",  # GitHub.com or GitHub Enterprise URL to inventory
        "token": null,                # Private token for accessing this GitHub instance
        "public_only": true,          # Only inventory public repositories

        "orgs": [ ... ],    # List of organizations to inventory
        "repos": [ ... ],   # List of single repositories to inventory
        "exclude": [ ... ]  # List of organizations / repositories to exclude from inventory
    }
]
```

```
"GitLab": [
    {
        "url": "https://gitlab.com",  # GitLab.com or hosted GitLab instance URL to inventory
        "token": null,                # Private token for accessing this GitLab instance

        "orgs": [ ... ],    # List of organizations to inventory
        "repos": [ ... ],   # List of single repositories to inventory
        "exclude": [ ... ]  # List of groups / repositories to exclude from inventory
    }
]
```

```
"Bitbucket": [
    {
        "url": "https://bitbucket.internal",  # Base URL for a Bitbucket Server instance
        "username": "",                       # Username to authenticate with
        "password": "",                       # Password to authenticate with

        "exclude": [ ... ]  # List of projects / repositories to exclude from inventory
    }
]
```
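Putting the pieces together, a complete minimal config could be assembled and
written out as below. All values are placeholders, and this sketch simply
mirrors the structure shown above:

```python
import json

# Placeholder values; substitute your agency's real details.
config = {
    "contact_email": "opensource@example.gov",
    "agency": "EXAMPLE",
    "organization": "Example Lab",
    "permissions": {"usageType": "openSource", "exemptionText": None},
    "GitHub": [
        {
            "url": "https://github.com",
            "token": None,        # or a personal access token string
            "public_only": True,
            "orgs": ["example-org"],
            "repos": [],
            "exclude": [],
        }
    ],
}

# json.dump serializes Python None as JSON null, matching the
# "token": null shown in the examples above.
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```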

## License

Scraper is released under the MIT license. For more details, see the LICENSE
file in the repository.


Download files

Download the file for your platform.

| Filename | Size | File type | Python version | Upload date |
| --- | --- | --- | --- | --- |
| llnl_scraper-0.5.1-py2.py3-none-any.whl | 24.9 kB | Wheel | py2.py3 | Aug 30, 2018 |
| llnl-scraper-0.5.1.tar.gz | 21.8 kB | Source | None | Aug 30, 2018 |
