
Random sampling of GitHub repositories


Repo Roulette 🎲

Spin the wheel and see which GitHub repositories you get!

License: MIT

Randomly sample repositories from GitHub.

🌟 Features

  • Multiple sampling methods to suit different research needs
  • Avoids the 1,000-result cap imposed by GitHub's Search API

🚀 Installation

# Using pip
pip install reporoulette

# From source
git clone https://github.com/username/reporoulette.git
cd reporoulette
pip install -e .

📖 Sampling Methods

RepoRoulette provides three distinct methods for random GitHub repository sampling:

1. 🎯 ID-Based Sampling

Uses GitHub's sequential repository ID system to generate truly random samples by probing random IDs from the valid ID range.

from reporoulette import IDSampler

# Initialize the sampler
sampler = IDSampler(token="your_github_token")

# Get 50 random repositories
repos = sampler.sample(n_samples=50)

# Print basic stats
print(f"Success rate: {sampler.success_rate:.2f}%")
print(f"Samples collected: {len(repos)}")

Advantages:

  • Most unbiased sampling method
  • Covers the entire GitHub ecosystem
  • Simple approach with minimal API calls

Limitations:

  • Lower hit rate (many IDs are invalid)
  • No control over repository characteristics
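The probing idea can be sketched independently of the library: draw candidate IDs uniformly at random, then check each one against GitHub's by-ID repository endpoint and keep the hits. This is a minimal illustration, not RepoRoulette's actual implementation; the helper names and the choice of upper bound for the ID range are assumptions.

```python
import json
import random
import urllib.error
import urllib.request

def random_ids(n, max_id, seed=None):
    """Draw n distinct candidate repository IDs uniformly from [1, max_id]."""
    rng = random.Random(seed)
    return rng.sample(range(1, max_id + 1), n)

def probe(ids, token=None):
    """Return full_name for each candidate ID that resolves to a live repository."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    hits = []
    for repo_id in ids:
        req = urllib.request.Request(
            f"https://api.github.com/repositories/{repo_id}", headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                hits.append(json.load(resp)["full_name"])
        except urllib.error.HTTPError:
            pass  # most random IDs 404: deleted, private, or never assigned
    return hits
```

The low hit rate mentioned above falls out of this design: the ID space is sparse, so many probes return 404 and each successful sample costs several API calls.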

2. ⏱️ Temporal Sampling

Randomly selects time points (date/hour combinations) within a specified range and then retrieves repositories updated during those periods.

from reporoulette import TemporalSampler
from datetime import datetime, timedelta

# Define a date range (last 3 months)
end_date = datetime.now()
start_date = end_date - timedelta(days=90)

# Initialize the sampler
sampler = TemporalSampler(
    token="your_github_token",
    start_date=start_date,
    end_date=end_date
)

# Get 100 random repositories
repos = sampler.sample(n_samples=100)

# Get repositories with specific characteristics
filtered_repos = sampler.sample(
    n_samples=50,
    min_stars=10,
    languages=["python", "javascript"]
)

Advantages:

  • Higher hit rate than ID-based sampling
  • Can filter by repository characteristics
  • Allows for stratified sampling by time periods

Limitations:

  • Limited to repositories with recent activity
  • Some time periods may have fewer repositories
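The temporal strategy reduces to two steps: pick a uniformly random hour-long window inside the date range, then ask GitHub's Search API for repositories active in that window (e.g. via a `pushed:` qualifier). A minimal sketch under those assumptions; the helper names are illustrative, not RepoRoulette's API:

```python
import random
from datetime import datetime, timedelta

def random_hour_window(start, end, seed=None):
    """Pick a uniformly random one-hour window between two datetimes."""
    rng = random.Random(seed)
    total_hours = int((end - start).total_seconds() // 3600)
    offset = rng.randrange(total_hours)
    window_start = start + timedelta(hours=offset)
    return window_start, window_start + timedelta(hours=1)

def search_qualifier(window_start, window_end):
    """Format the window as a `pushed:` qualifier for the GitHub Search API."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return f"pushed:{window_start.strftime(fmt)}..{window_end.strftime(fmt)}"
```

Because each query is scoped to a narrow window, the results stay well under the Search API's 1,000-result cap, which is why this method's hit rate is higher than blind ID probing.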

3. 🔍 BigQuery Sampling

Leverages Google BigQuery's GitHub dataset for high-volume, efficient sampling. Perfect for research requiring large samples or specific criteria.

from reporoulette import BigQuerySampler

# Initialize the sampler (requires GCP credentials)
sampler = BigQuerySampler(
    credentials_path="path/to/credentials.json"
)

# Sample 1,000 repositories created in the last year
repos = sampler.sample(
    n_samples=1000,
    created_after="2023-01-01",
    sample_by="created_at"
)

# Sample repositories with multiple criteria
specialty_repos = sampler.sample(
    n_samples=500,
    min_stars=100,
    min_forks=50,
    languages=["rust", "go"],
    has_license=True
)

Advantages:

  • Handles large sample sizes efficiently
  • Powerful filtering and stratification options
  • Not limited by GitHub API rate limits
  • Access to historical data

Limitations:

  • Query costs can add up, since BigQuery bills by data scanned
  • Requires a Google Cloud Platform account with billing enabled
  • Dataset may have a slight delay (typically 24-48 hours)
  • Requires knowledge of SQL for custom queries
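The SQL behind this kind of sampling is typically a filter, a shuffle with `RAND()`, and a `LIMIT`. A sketch of that pattern as a query builder; this is an illustration of the general technique, not the library's internal query, and the column name assumes a table like `bigquery-public-data.github_repos.sample_repos`:

```python
def build_sample_query(table, n_samples, where=None):
    """Compose a BigQuery statement for a uniform random sample:
    filter, shuffle with RAND(), then truncate with LIMIT."""
    parts = [f"SELECT repo_name FROM `{table}`"]
    if where:
        parts.append(f"WHERE {where}")
    parts.append("ORDER BY RAND()")
    parts.append(f"LIMIT {n_samples}")
    return "\n".join(parts)
```

The resulting string can be run with `google.cloud.bigquery.Client().query(...)`. Note that `ORDER BY RAND()` scans the whole table, which is where the cost caveat above comes from; a `WHERE RAND() < fraction` prefilter is a common cheaper alternative for very large tables.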

📊 Example Use Cases

  • Academic Research: Study coding practices across different languages and communities
  • Learning Resources: Discover diverse code examples for education
  • Data Science: Build datasets for machine learning models about code patterns
  • Trend Analysis: Identify emerging technologies and practices
  • Security Research: Find vulnerability patterns across repository types

🛠️ Configuration

Create a .reporoulette.yml file in your home directory or project root:

github:
  token: "your_github_token"
  rate_limit_safety: 100  # Stop when this many requests remain

bigquery:
  credentials_path: "path/to/credentials.json"
  project_id: "your-gcp-project"

sampling:
  default_method: "temporal"
  cache_results: true
  cache_path: "~/.reporoulette/cache"
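The "home directory or project root" lookup can be sketched as a small resolver; the precedence order here (project root wins over home) is an assumption, not something the library documents, and loading the located file with `yaml.safe_load` would be the usual next step:

```python
from pathlib import Path

def find_config(project_dir=None):
    """Locate .reporoulette.yml, checking the project directory first
    and falling back to the user's home directory."""
    candidates = [
        Path(project_dir or Path.cwd()) / ".reporoulette.yml",
        Path.home() / ".reporoulette.yml",
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None
```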

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by [Your Name/Organization]

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

reporoulette-0.1.0.tar.gz (12.2 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

reporoulette-0.1.0-py3-none-any.whl (12.9 kB)

Uploaded Python 3

File details

Details for the file reporoulette-0.1.0.tar.gz.

File metadata

  • Download URL: reporoulette-0.1.0.tar.gz
  • Upload date:
  • Size: 12.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for reporoulette-0.1.0.tar.gz
Algorithm Hash digest
SHA256 e1e333a8afc7db3d8d4eec0459db3e0a64352adeec318e97c5b025dff1cd3024
MD5 aa35d1ea2404466def8cb7875920be71
BLAKE2b-256 b716d9789da81b9172c24c59ad5187096b30ad77c6ebd8c058df218c368fb817

See more details on using hashes here.
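To check a downloaded archive against the digests listed above, the file can be streamed through Python's `hashlib`; a minimal sketch:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the lowercase hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing `sha256_of("reporoulette-0.1.0.tar.gz")` against the SHA256 value published above confirms the download was not corrupted or tampered with.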

File details

Details for the file reporoulette-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: reporoulette-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 12.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for reporoulette-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 3bc39b953784f8e7ba969a8f8d551714282d7a9a4a1ec49d4f22f7ebe91e9d91
MD5 ad456d1c64dba732a0452a805e1b8984
BLAKE2b-256 7bfca57cd2a42a461fac7fb2161ab0075e17b1ede37f297d8ba4394cfbe1e779

See more details on using hashes here.
