# lobstrio

Command-line interface for the Lobstr.io scraping API. Run web scrapers, manage squids, and download results, all from your terminal.
## Demo
## Installation

```shell
pip install lobstrio
```

Requires Python 3.10+.
## Quick start

```shell
# Save your API token (find it at https://app.lobstr.io/dashboard/api)
lobstr config set-token YOUR_TOKEN

# One-command scrape: create squid, add tasks, run, download
lobstr go google-maps-leads-scraper "https://maps.google.com/maps/place/..." -o leads.csv

# Multiple URLs
lobstr go google-maps-leads-scraper url1 url2 url3 -o results.csv

# From a file
lobstr go google-maps-leads-scraper --file urls.txt -o results.csv
```
## Commands

### Account & config

```shell
lobstr whoami                        # Show account info and balance
lobstr config set-token TOKEN        # Save API token
lobstr config show                   # Show current config
lobstr config set-alias maps SQUID   # Create alias for a squid
```
### Crawlers — browse the scraper catalog

```shell
lobstr crawlers ls                                 # List all available crawlers
lobstr crawlers show google-maps-leads-scraper     # Show crawler details
lobstr crawlers search "Google Maps"               # Search by name
lobstr crawlers params google-maps-leads-scraper   # Show crawler parameters
lobstr crawlers attrs google-maps-leads-scraper    # Show result attributes
```
### Squids — manage scraper instances

```shell
lobstr squid create google-maps-leads-scraper --name "My Scraper"
lobstr squid ls                  # List your squids
lobstr squid show SQUID          # Show details
lobstr squid update SQUID --concurrency 5 --param max_results=200
lobstr squid empty SQUID         # Remove all tasks
lobstr squid rm SQUID --force    # Delete squid
```
### Tasks — manage input URLs and keywords

```shell
lobstr task add SQUID url1 url2               # Add tasks
lobstr task add SQUID "pizza" --key keyword   # Keyword-based crawlers
lobstr task ls SQUID                          # List tasks
lobstr task show FULL_TASK_HASH               # Show task details
lobstr task rm FULL_TASK_HASH                 # Delete task
lobstr task upload SQUID tasks.csv            # Bulk upload from CSV
lobstr task upload-status UPLOAD_ID           # Check upload progress
```
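For bulk uploads, `task upload` takes a CSV of tasks. A minimal sketch of generating such a file with Python's `csv` module — the single `url` column is an assumption; check the columns your crawler expects with `lobstr crawlers params` before uploading:

```python
import csv

# Hypothetical header: many URL-based crawlers take a single "url" column.
urls = [
    "https://maps.google.com/maps/place/example-1",
    "https://maps.google.com/maps/place/example-2",
]

with open("tasks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url"])              # header row (assumed column name)
    writer.writerows([u] for u in urls)   # one task per row
```

The resulting file can then be pushed with `lobstr task upload SQUID tasks.csv`.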
### Runs — start, monitor, and download

```shell
lobstr run start SQUID                          # Start a run
lobstr run start SQUID --wait                   # Start and wait for completion
lobstr run start SQUID --download results.csv   # Start, wait, download
lobstr run ls SQUID                             # List runs
lobstr run show FULL_RUN_HASH                   # Show run details
lobstr run stats FULL_RUN_HASH                  # Show run statistics
lobstr run tasks FULL_RUN_HASH                  # List tasks in a run
lobstr run watch FULL_RUN_HASH                  # Live progress bar
lobstr run abort FULL_RUN_HASH                  # Stop a run
lobstr run download FULL_RUN_HASH               # Download results CSV
```
### Results — fetch scraped data

```shell
lobstr results get SQUID                 # Fetch results (JSON)
lobstr results get SQUID --format csv    # Fetch as CSV
lobstr results get SQUID -o data.json    # Save to file
```
### Accounts — manage connected platform accounts

```shell
lobstr accounts ls                                               # List all accounts
lobstr accounts show ACCOUNT                                     # Show account details
lobstr accounts types                                            # List available account types
lobstr accounts sync --type google --cookies-file cookies.json   # Sync account
lobstr accounts sync-status SYNC_ID                              # Check sync progress
lobstr accounts update ACCOUNT --param daily_limit=100
lobstr accounts rm ACCOUNT --force                               # Delete account
```
### Delivery — configure result delivery

```shell
lobstr delivery email SQUID --email you@example.com
lobstr delivery googlesheet SQUID --url "https://docs.google.com/..."
lobstr delivery s3 SQUID --bucket my-bucket --target-path scrapes/
lobstr delivery webhook SQUID --url "https://your-server.com/hook"
lobstr delivery sftp SQUID --host ftp.example.com --username user --password pass

# Test connectivity
lobstr delivery test-email --email you@example.com
lobstr delivery test-s3 --bucket my-bucket
```
### Go — full workflow in one command

```shell
# Basic usage
lobstr go google-maps-leads-scraper "https://maps.google.com/..." -o results.csv

# Keyword-based crawler
lobstr go google-search-scraper "pizza delivery" --key keyword

# With crawler parameters
lobstr go google-maps-leads-scraper url1 --param max_results=200 --param language=English

# Set concurrency
lobstr go google-maps-leads-scraper url1 --concurrency 3

# Start without waiting for download
lobstr go google-maps-leads-scraper url1 --no-download

# Reuse existing squid by name
lobstr go google-maps-leads-scraper url1 --name "My Leads"

# Clear old tasks when reusing squid
lobstr go google-maps-leads-scraper url1 --name "My Leads" --empty

# Delete squid after completion
lobstr go google-maps-leads-scraper url1 --delete

# Custom output file
lobstr go google-maps-leads-scraper url1 -o my_leads.csv
```
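Repeated `--param` flags take `key=value` pairs. A sketch of how such flags can be folded into a parameter dict — an illustration of the convention, not the CLI's actual implementation:

```python
def parse_params(pairs: list[str]) -> dict[str, str]:
    """Turn repeated --param key=value strings into a dict.

    Only the first "=" splits the pair, so values may contain "=".
    """
    params: dict[str, str] = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"expected key=value, got {pair!r}")
        params[key] = value
    return params

print(parse_params(["max_results=200", "language=English"]))
# → {'max_results': '200', 'language': 'English'}
```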
## Global flags

| Flag | Description |
|---|---|
| `--json` | Output raw JSON (for piping/scripting) |
| `--quiet` | Suppress non-essential output |
| `--verbose` | Show HTTP request details |
| `--token TOKEN` | Override API token for this command |
| `--version` | Show version |
## Aliases

Create shortcuts for frequently used squids:

```shell
lobstr config set-alias maps abc123def456...
lobstr task ls @maps
lobstr run start @maps
```
## Identifier resolution

| Resource | Resolution order | Example |
|---|---|---|
| Crawlers | Hash prefix → Slug (exact/prefix) → Name (exact/substring) | `google-maps`, `4734d096`, `"Google Maps"` |
| Squids | `@alias` → Hash prefix → Name (exact/substring) | `@maps`, `abc1`, `"My Scraper"` |
| Accounts | Hash prefix → Username (exact/substring) | `f9a2`, `"john@gmail.com"` |
| Runs & Tasks | Full 32-character hash only | `a1b2c3d4e5f6...` |
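The squid resolution order above can be sketched as a lookup function. This is an illustration of the matching rules, not the CLI's internal code, and the mapping of hashes to names is a hypothetical data shape:

```python
def resolve_squid(query: str, aliases: dict[str, str], squids: dict[str, str]) -> str:
    """Resolve a squid identifier: @alias, then hash prefix, then name match.

    `aliases` maps alias -> full hash; `squids` maps full hash -> display name.
    """
    if query.startswith("@"):
        return aliases[query[1:]]                     # 1. alias lookup
    prefix = [h for h in squids if h.startswith(query)]
    if len(prefix) == 1:
        return prefix[0]                              # 2. unique hash prefix
    exact = [h for h, name in squids.items() if name == query]
    if len(exact) == 1:
        return exact[0]                               # 3. exact name
    substr = [h for h, name in squids.items() if query in name]
    if len(substr) == 1:
        return substr[0]                              # 4. unique substring
    raise LookupError(f"ambiguous or unknown squid: {query!r}")
```

Ambiguity raises rather than guessing, which matches the table's "unique match" spirit.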
## Configuration

Config is stored at `~/.config/lobstr/config.toml`. The API token can also be set via the `LOBSTR_TOKEN` environment variable.
## CLI vs SDK

| | CLI (`pip install lobstrio`) | SDK (`pip install lobstrio-sdk`) |
|---|---|---|
| Use case | Terminal workflows, quick scrapes, cron jobs | Scripts, pipelines, applications |
| Interface | Shell commands | Python API |
| Output | Rich tables, progress bars, CSV files | Typed dataclass models |
| Async | No | Yes (`AsyncLobstrClient`) |
| Pagination | Manual (`--page`, `--limit`) | Auto (`client.squids.iter()`) |

For programmatic access, see `lobstrio-sdk`.
## FAQ

### Where do I get an API token?

Go to **Dashboard → API** to find your token. It's always available there, pre-generated.
### How do I use keyword-based crawlers?

Some crawlers accept keywords instead of URLs. Use the `--key` flag:

```shell
lobstr go google-search-scraper "pizza delivery" --key keyword
```

Use `lobstr crawlers params <crawler>` to see what parameters a crawler accepts.
### Can I use short hashes for runs and tasks?

No. Run and task endpoints require the full 32-character hash. Use `lobstr run ls SQUID` or `lobstr task ls SQUID` to see full hashes. Crawlers and squids support prefix matching.
### How do I pipe results to other tools?

Use `--json` for machine-readable output:

```shell
lobstr --json results get SQUID | jq '.[].email'
lobstr --json crawlers ls | jq '.[] | .name'
```
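If `jq` is not available, the same extraction works in a few lines of Python fed by the CLI's `--json` output. The `email` and `name` fields in the sample below are illustrative; the actual fields depend on the crawler:

```python
import json

def extract_emails(raw: str) -> list[str]:
    """Pull the `email` field out of a JSON array of result objects."""
    return [row["email"] for row in json.loads(raw) if row.get("email")]

# Illustrative stand-in for what `lobstr --json results get SQUID` might print.
sample = '[{"name": "Pizza Place", "email": "hi@pizza.example"}, {"name": "No Mail"}]'
print(extract_emails(sample))
# → ['hi@pizza.example']
```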
### Can I run scrapes in the background?

Yes. Use `--no-download` with `go`, or start a run without `--wait`:

```shell
lobstr go google-maps-leads-scraper --file urls.txt --no-download
lobstr run start SQUID       # returns immediately
lobstr run watch RUN_HASH    # check progress later
```
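From a script, checking on a background run amounts to polling until a terminal state. A generic sketch: the status fetcher is a stand-in you would supply yourself (e.g. by shelling out to `lobstr --json run show FULL_RUN_HASH`), and the state names are assumptions:

```python
import time
from typing import Callable

def wait_for_run(fetch_status: Callable[[], str],
                 poll_seconds: float = 5.0,
                 timeout: float = 3600.0) -> str:
    """Poll `fetch_status` until it reports a terminal state or we time out."""
    terminal = {"done", "failed", "aborted"}  # assumed state names
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)  # back off between checks
    raise TimeoutError("run did not finish in time")
```

A few seconds between polls is plenty; scrape runs are minutes long, not milliseconds.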
## Contributing

Contributions are welcome! See CONTRIBUTING.md for development setup, code style, and versioning guidelines.

## Changelog

See CHANGELOG.md for release history.

## License