

Parallel-Web-Tools

CLI and data enrichment utilities for the Parallel API.

Note: This package provides the parallel-cli command-line tool and data enrichment utilities. It depends on parallel-web, the official Parallel Python SDK, but does not bundle it; install parallel-web separately if you need direct SDK access.

Features

  • CLI for Humans & AI Agents - Works interactively or fully via command-line arguments
  • Web Search - AI-powered search with domain filtering and date ranges
  • Content Extraction - Extract clean markdown from any URL
  • Data Enrichment - Enrich CSV, DuckDB, and BigQuery data with AI
  • AI-Assisted Planning - Use natural language to define what data you want
  • Multiple Integrations - Polars, DuckDB, Snowflake, BigQuery, Spark

Installation

Standalone CLI (Recommended)

Install the standalone parallel-cli binary for search, extract, enrichment, and deep research (no Python required):

curl -fsSL https://raw.githubusercontent.com/parallel-web/parallel-web-tools/main/install-cli.sh | bash

This automatically detects your platform (macOS/Linux, x64/arm64) and installs to ~/.local/bin.

Note: The standalone binary includes the core CLI features. For deployment commands (enrich deploy), install via pip instead: pip install parallel-web-tools[snowflake] or parallel-web-tools[bigquery].

Python Package

For programmatic usage or data enrichment integrations:

# Full install with CLI and all connectors
pip install parallel-web-tools[all]

# Library only (minimal dependencies)
pip install parallel-web-tools

# With specific connectors
pip install parallel-web-tools[cli]          # CLI only
pip install parallel-web-tools[polars]       # Polars DataFrame
pip install parallel-web-tools[duckdb]       # DuckDB
pip install parallel-web-tools[bigquery]     # BigQuery
pip install parallel-web-tools[spark]        # Apache Spark

CLI Overview

parallel-cli
├── auth                    # Check authentication status
├── login                   # OAuth login (or use PARALLEL_API_KEY env var)
├── logout                  # Remove stored credentials
├── search                  # Web search
├── extract                 # Extract content from URLs
└── enrich                  # Data enrichment commands
    ├── run                 # Run enrichment
    ├── plan                # Create YAML config
    ├── suggest             # AI suggests output columns
    └── deploy              # Deploy to cloud systems (requires pip install)

Quick Start

1. Authenticate

# Interactive OAuth login
parallel-cli login

# Or set environment variable
export PARALLEL_API_KEY=your_api_key

2. Search the Web

# Natural language search
parallel-cli search "What is Anthropic's latest AI model?" --json

# Keyword search with filters
parallel-cli search -q "bitcoin price" --after-date 2024-01-01 --json

# Search specific domains
parallel-cli search "SEC filings for Apple" --include-domains sec.gov --json

3. Extract Content from URLs

# Extract content as markdown
parallel-cli extract https://example.com --json

# Extract with a specific focus
parallel-cli extract https://company.com --objective "Find pricing info" --json

# Get full page content
parallel-cli extract https://example.com --full-content --json

4. Enrich Data

# Let AI suggest what columns to add
parallel-cli enrich suggest "Find the CEO and annual revenue" --json

# Create a config file (interactive)
parallel-cli enrich plan -o config.yaml

# Create a config file (non-interactive, for AI agents)
parallel-cli enrich plan -o config.yaml \
    --source-type csv \
    --source companies.csv \
    --target enriched.csv \
    --source-columns '[{"name": "company", "description": "Company name"}]' \
    --intent "Find the CEO and annual revenue"

# Run enrichment from config
parallel-cli enrich run config.yaml

# Run enrichment directly (no config file needed)
parallel-cli enrich run \
    --source-type csv \
    --source companies.csv \
    --target enriched.csv \
    --source-columns '[{"name": "company", "description": "Company name"}]' \
    --intent "Find the CEO and annual revenue"
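The --source-columns flag (like --enriched-columns) takes a JSON array of objects with name and description keys. When scripting these calls, building that argument with Python's json.dumps avoids hand-quoting mistakes; a minimal sketch (the column spec is illustrative):

```python
import json

# Column specs in the shape shown above: one object per column,
# each with a "name" and a human-readable "description".
source_columns = [
    {"name": "company", "description": "Company name"},
]

# Serialize to the single JSON string the CLI flag expects.
arg = json.dumps(source_columns)
print(arg)  # [{"name": "company", "description": "Company name"}]
```

The resulting string can be passed verbatim as the value of --source-columns.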

5. Deploy to Cloud Systems

# Deploy to BigQuery for SQL-native enrichment
parallel-cli enrich deploy --system bigquery --project my-gcp-project

Non-Interactive Mode (for AI Agents & Scripts)

All commands support --json output and can be fully controlled via CLI arguments:

# Search with JSON output
parallel-cli search "query" --json

# Extract with JSON output
parallel-cli extract https://url.com --json

# Suggest columns with JSON output
parallel-cli enrich suggest "Find CEO" --json

# Plan without prompts (provide all args)
parallel-cli enrich plan -o config.yaml \
    --source-type csv \
    --source input.csv \
    --target output.csv \
    --source-columns '[{"name": "company", "description": "Company name"}]' \
    --enriched-columns '[{"name": "ceo", "description": "CEO name"}]'

# Or use --intent to let AI determine the columns
parallel-cli enrich plan -o config.yaml \
    --source-type csv \
    --source input.csv \
    --target output.csv \
    --source-columns '[{"name": "company", "description": "Company name"}]' \
    --intent "Find CEO, revenue, and headquarters"
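For agents and scripts, the flag-driven calls above can also be assembled programmatically, which sidesteps shell quoting entirely. A sketch that only builds the argument vector; executing it (e.g. with subprocess.run) of course requires parallel-cli to be installed:

```python
import json

def plan_command(source, target, source_columns, intent, config="config.yaml"):
    """Build the argv list for a non-interactive `enrich plan` call."""
    return [
        "parallel-cli", "enrich", "plan", "-o", config,
        "--source-type", "csv",
        "--source", source,
        "--target", target,
        # json.dumps produces the JSON array string the flag expects.
        "--source-columns", json.dumps(source_columns),
        "--intent", intent,
    ]

argv = plan_command(
    "input.csv", "output.csv",
    [{"name": "company", "description": "Company name"}],
    "Find CEO, revenue, and headquarters",
)
# Pass argv to subprocess.run(argv, capture_output=True) to execute it.
```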

Integrations

Integration | Type             | Install                                    | Documentation
Polars      | Python DataFrame | pip install parallel-web-tools[polars]     | Setup Guide
DuckDB      | SQL + Python     | pip install parallel-web-tools[duckdb]     | Setup Guide
Snowflake   | SQL UDF          | pip install parallel-web-tools[snowflake]  | Setup Guide
BigQuery    | Cloud Function   | pip install parallel-web-tools[bigquery]   | Setup Guide
Spark       | SQL UDF          | pip install parallel-web-tools[spark]      | Demo Notebook

Quick Integration Examples

Polars:

import polars as pl
from parallel_web_tools.integrations.polars import parallel_enrich

df = pl.DataFrame({"company": ["Google", "Microsoft"]})
result = parallel_enrich(
    df,
    input_columns={"company_name": "company"},
    output_columns=["CEO name", "Founding year"],
)
print(result.result)

DuckDB:

import duckdb
from parallel_web_tools.integrations.duckdb import enrich_table

conn = duckdb.connect()
conn.execute("CREATE TABLE companies AS SELECT 'Google' as name")
result = enrich_table(
    conn,
    source_table="companies",
    input_columns={"company_name": "name"},
    output_columns=["CEO name", "Founding year"],
)
print(result.result.fetchdf())

Programmatic Usage

from parallel_web_tools import run_enrichment, run_enrichment_from_dict

# From YAML file
run_enrichment("config.yaml")

# From dictionary
run_enrichment_from_dict({
    "source": "data.csv",
    "target": "enriched.csv",
    "source_type": "csv",
    "source_columns": [{"name": "company", "description": "Company name"}],
    "enriched_columns": [{"name": "ceo", "description": "CEO name"}]
})

YAML Configuration Format

source: input.csv
target: output.csv
source_type: csv  # csv, duckdb, or bigquery
processor: core-fast  # lite, base, core, pro, ultra (add -fast for speed)

source_columns:
  - name: company_name
    description: The name of the company

enriched_columns:
  - name: ceo
    description: The CEO of the company
    type: str  # str, int, float, bool
  - name: revenue
    description: Annual revenue in USD
    type: float

Environment Variables

Variable         | Description
PARALLEL_API_KEY | API key for authentication (alternative to parallel-cli login)
DUCKDB_FILE      | Default DuckDB file path
BIGQUERY_PROJECT | Default BigQuery project ID
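One way to wire these variables into a larger script is to resolve them up front with explicit fallbacks. A sketch of that pattern; the fallback values here are illustrative choices, not defaults of the package:

```python
import os

def resolve_settings(env=os.environ):
    """Read the documented variables, failing fast when the key is missing."""
    api_key = env.get("PARALLEL_API_KEY")
    if not api_key:
        raise RuntimeError(
            "Set PARALLEL_API_KEY or authenticate with `parallel-cli login`."
        )
    return {
        "api_key": api_key,
        # Illustrative fallback: an in-memory DuckDB database.
        "duckdb_file": env.get("DUCKDB_FILE", ":memory:"),
        # None here means "supply the project via a CLI flag instead".
        "bigquery_project": env.get("BIGQUERY_PROJECT"),
    }

settings = resolve_settings({"PARALLEL_API_KEY": "example-key"})
```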

Related Packages

  • parallel-web - Official Parallel Python SDK (this package depends on it)

Development

git clone https://github.com/parallel-web/parallel-web-tools.git
cd parallel-web-tools
uv sync --all-extras
uv run pytest tests/ -v

License

MIT

