Modern async-first Python SDK for Bright Data APIs

Bright Data Python SDK

The official Python SDK for Bright Data APIs. Scrape any website, get SERP results, bypass bot detection and CAPTCHAs, and access 100+ ready-made datasets.

Installation

pip install brightdata-sdk

Configuration

Get your API Token from the Bright Data Control Panel:

export BRIGHTDATA_API_TOKEN="your_api_token_here"
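You can read the variable back in Python to confirm it is visible to your process (plain stdlib; the client presumably picks it up the same way):

```python
import os

# The SDK reads the token from the environment; checking it before
# constructing a client lets you fail fast with a clear message.
token = os.environ.get("BRIGHTDATA_API_TOKEN", "")
if not token:
    print("warning: BRIGHTDATA_API_TOKEN is not set")
```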

Quick Start

This SDK is async-native. A sync client is also available (see Sync Client).

import asyncio
from brightdata import BrightDataClient

async def main():
    async with BrightDataClient() as client:
        result = await client.scrape_url("https://example.com")
        print(result.data)

asyncio.run(main())

Usage Examples

Web Scraping

async with BrightDataClient() as client:
    result = await client.scrape_url("https://example.com")
    print(result.data)

Web Scraping Async Mode

For non-blocking web scraping, use mode="async". The SDK triggers the request, receives a response_id, and polls the result endpoint automatically until the data is ready:

async with BrightDataClient() as client:
    # Triggers request → gets response_id → polls until ready
    result = await client.scrape_url(
        url="https://example.com",
        mode="async",
        poll_interval=5,    # Check every 5 seconds
        poll_timeout=180    # Web Unlocker async can take ~2 minutes
    )
    print(result.data)

    # Batch scraping multiple URLs concurrently
    urls = ["https://example.com", "https://example.org", "https://example.net"]
    results = await client.scrape_url(url=urls, mode="async", poll_timeout=180)

How it works:

  1. Sends request to /unblocker/req → returns response_id immediately
  2. Polls /unblocker/get_result?response_id=... until ready or timeout
  3. Returns the scraped data

When to use async mode:

  • Batch scraping with many URLs
  • Background processing while continuing other work

Performance note: Web Unlocker async mode typically takes ~2 minutes to complete. For faster results on single URLs, use the default sync mode (no mode parameter).
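The trigger-then-poll loop the SDK runs for you looks roughly like this. This is a stdlib-only sketch of the pattern, not SDK code: `fetch_result` is a stand-in for the GET to /unblocker/get_result, and the fake backend below simulates a result that becomes ready on the third poll.

```python
import asyncio

async def poll_until_ready(fetch_result, response_id,
                           poll_interval=5, poll_timeout=180):
    """Repeatedly call fetch_result(response_id) until it returns data."""
    deadline = asyncio.get_running_loop().time() + poll_timeout
    while True:
        data = await fetch_result(response_id)
        if data is not None:  # result is ready
            return data
        if asyncio.get_running_loop().time() >= deadline:
            raise TimeoutError(f"no result for {response_id} after {poll_timeout}s")
        await asyncio.sleep(poll_interval)

# Demo with a fake backend: ready on the third poll.
calls = {"n": 0}

async def fake_fetch(response_id):
    calls["n"] += 1
    return "<html>done</html>" if calls["n"] >= 3 else None

result = asyncio.run(poll_until_ready(fake_fetch, "abc123", poll_interval=0))
```

The real client adds error handling and request signing on top, but the control flow — trigger, sleep, re-check, time out — is the same loop.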

Search Engines (SERP)

async with BrightDataClient() as client:
    result = await client.search.google(query="python scraping", num_results=10)
    for item in result.data:
        print(item)

SERP Async Mode

For non-blocking SERP requests, use mode="async":

async with BrightDataClient() as client:
    # Non-blocking - polls for results
    result = await client.search.google(
        query="python programming",
        mode="async",
        poll_interval=2,   # Check every 2 seconds
        poll_timeout=30    # Give up after 30 seconds
    )

    for item in result.data:
        print(item['title'], item['link'])

When to use async mode:

  • Batch operations with many queries
  • Background processing while continuing other work
  • When scraping may take longer than usual

Note: Async mode uses the same zones and returns the same data structure as sync mode - no extra configuration needed!

Web Scraper API

The SDK includes ready-to-use scrapers for popular websites: Amazon, LinkedIn, Instagram, Facebook, and more.

Pattern: client.scrape.<platform>.<method>(url)

Example: Amazon

async with BrightDataClient() as client:
    # Product details
    result = await client.scrape.amazon.products(url="https://amazon.com/dp/B0CRMZHDG8")

    # Reviews
    result = await client.scrape.amazon.reviews(url="https://amazon.com/dp/B0CRMZHDG8")

    # Sellers
    result = await client.scrape.amazon.sellers(url="https://amazon.com/dp/B0CRMZHDG8")

Available scrapers:

  • client.scrape.amazon - products, reviews, sellers
  • client.scrape.linkedin - profiles, companies, jobs, posts
  • client.scrape.instagram - profiles, posts, comments, reels
  • client.scrape.facebook - posts, comments, reels

Browser API

Cloud-hosted Chrome instances accessible via the Chrome DevTools Protocol (CDP). The SDK builds the connection URL — you drive the browser with Playwright, Puppeteer, or Selenium.

from brightdata import BrightDataClient
from playwright.async_api import async_playwright

client = BrightDataClient(
    browser_username="brd-customer-<id>-zone-<zone>",
    browser_password="<password>",
)

url = client.browser.get_connect_url(country="us")  # country is optional

async with async_playwright() as pw:
    browser = await pw.chromium.connect_over_cdp(url)
    page = await browser.new_page()
    await page.goto("https://example.com")
    html = await page.content()
    await browser.close()

When to use: sites that require full browser automation — JS rendering, login flows, interactive clicks. For plain HTML fetches, prefer client.scrape_url().

Datasets API

Access 100+ ready-made datasets from Bright Data — pre-collected, structured data from popular platforms.

async with BrightDataClient() as client:
    # Filter a dataset — returns a snapshot_id
    snapshot_id = await client.datasets.imdb_movies(
        filter={"name": "title", "operator": "includes", "value": "black"},
        records_limit=5
    )

    # Download when ready (polls until snapshot is complete)
    data = await client.datasets.imdb_movies.download(snapshot_id)
    print(f"Got {len(data)} records")

    # Quick sample: .sample() auto-discovers fields, no filter needed
    # Works on any dataset
    snapshot_id = await client.datasets.imdb_movies.sample(records_limit=5)

Export results to file:

from brightdata.datasets import export

export(data, "results.json")   # JSON
export(data, "results.csv")    # CSV
export(data, "results.jsonl")  # JSONL

Available dataset categories:

  • E-commerce: Amazon, Walmart, Shopee, Lazada, Zalando, Zara, H&M, Shein, IKEA, Sephora, and more
  • Business intelligence: ZoomInfo, PitchBook, Owler, Slintel, VentureRadar, Manta
  • Jobs & HR: Glassdoor (companies, reviews, jobs), Indeed (companies, jobs), Xing
  • Reviews: Google Maps, Yelp, G2, Trustpilot, TrustRadius
  • Social media: Pinterest (posts, profiles), Facebook Pages
  • Real estate: Zillow, Airbnb, and 8+ regional platforms
  • Luxury brands: Chanel, Dior, Prada, Balenciaga, Hermes, YSL, and more
  • Entertainment: IMDB, NBA, Goodreads

Discover available fields:

metadata = await client.datasets.imdb_movies.get_metadata()
for name, field in metadata.fields.items():
    print(f"{name}: {field.type}")

Async Usage

Run multiple requests concurrently:

import asyncio
from brightdata import BrightDataClient

async def main():
    async with BrightDataClient() as client:
        urls = ["https://example.com/page1", "https://example.com/page2", "https://example.com/page3"]
        tasks = [client.scrape_url(url) for url in urls]
        results = await asyncio.gather(*tasks)

asyncio.run(main())
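With very large batches, a plain gather() fires every request at once. A stdlib sketch of capping in-flight requests with asyncio.Semaphore (the `gather_limited` helper is not part of the SDK, and `fake_scrape` stands in for `client.scrape_url`):

```python
import asyncio

async def gather_limited(coro_fns, limit=5):
    """Run zero-argument coroutine factories with at most `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def run(fn):
        async with sem:
            return await fn()

    # gather() preserves input order in its results
    return await asyncio.gather(*(run(fn) for fn in coro_fns))

# Demo with a stand-in scraper instead of a real client call
async def fake_scrape(url):
    await asyncio.sleep(0)
    return f"scraped:{url}"

urls = [f"https://example.com/page{i}" for i in range(10)]
results = asyncio.run(
    gather_limited([lambda u=u: fake_scrape(u) for u in urls], limit=3)
)
```

In real code, replace `fake_scrape` with `client.scrape_url` inside the `async with BrightDataClient()` block.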

Manual Trigger/Poll/Fetch

For long-running scrapes:

async with BrightDataClient() as client:
    # Trigger
    job = await client.scrape.amazon.products_trigger(url="https://amazon.com/dp/B123")

    # Wait for completion
    await job.wait(timeout=180)

    # Fetch results
    data = await job.fetch()

Sync Client

For simpler use cases, use SyncBrightDataClient:

from brightdata import SyncBrightDataClient

with SyncBrightDataClient() as client:
    result = client.scrape_url("https://example.com")
    print(result.data)

    # All methods work the same
    result = client.scrape.amazon.products(url="https://amazon.com/dp/B123")
    result = client.search.google(query="python")

See docs/sync_client.md for details.

Troubleshooting

RuntimeError: SyncBrightDataClient cannot be used inside async context

# Wrong - using sync client in async function
async def main():
    with SyncBrightDataClient() as client:  # Error!
        ...

# Correct - use async client
async def main():
    async with BrightDataClient() as client:
        result = await client.scrape_url("https://example.com")

RuntimeError: BrightDataClient not initialized

# Wrong - forgot context manager
client = BrightDataClient()
result = await client.scrape_url("...")  # Error!

# Correct - use context manager
async with BrightDataClient() as client:
    result = await client.scrape_url("...")

License

MIT License
