
Evomi Python Client

A production-ready Python client for the Evomi API, providing both synchronous and asynchronous clients.

Features

  • Async & Sync Support - Use EvomiClient for async or EvomiClientSync for synchronous operations
  • Full API Coverage - All Evomi endpoints supported
  • Type Hints - Complete type annotations for IDE support
  • Minimal Dependencies - Only requires httpx

Installation

pip install evomi-client

Quick Start

Async Client

import asyncio
from evomi_client import EvomiClient

async def main():
    # Initialize with API key (or set EVOMI_API_KEY env var)
    client = EvomiClient(api_key="your-api-key")
    
    # Scrape a URL
    result = await client.scrape("https://example.com")
    print(result)
    
    # Get markdown output
    result = await client.scrape(
        "https://example.com",
        output="markdown",
        mode="auto"  # auto-detect JS requirement
    )
    
    # AI-powered extraction
    result = await client.scrape(
        "https://example.com/products",
        ai_enhance=True,
        ai_prompt="Extract product names and prices"
    )

asyncio.run(main())

Sync Client

from evomi_client import EvomiClientSync

# Initialize with API key (or set EVOMI_API_KEY env var)
client = EvomiClientSync(api_key="your-api-key")

# Scrape a URL
result = client.scrape("https://example.com")
print(result)

Proxy String Builder

The client includes a proxy string builder for constructing proxy URLs with targeting parameters.

Basic Usage

from evomi_client import ProxyConfig, ProxyType, ProxyProtocol, ResidentialMode

# Create a proxy configuration
config = ProxyConfig(
    proxy_type=ProxyType.RESIDENTIAL,
    protocol=ProxyProtocol.HTTP,
    country="US",
    username="your-username",
    password="your-password"
)

# Build the proxy string
proxy_string = config.build_proxy_string()
print(proxy_string)
# Output: http://your-username:your-password_country-US@rp.evomi.com:1000
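Judging from the output above, the proxy string embeds targeting parameters as underscore-separated key-value pairs appended to the password. A rough, self-contained sketch of that format (the host, port, and parameter names mirror the example; the real builder is ProxyConfig.build_proxy_string):

```python
def sketch_proxy_string(username, password, host, port, protocol="http", **params):
    """Append targeting params to the password as _key-value pairs."""
    auth = password + "".join(f"_{key}-{value}" for key, value in params.items())
    return f"{protocol}://{username}:{auth}@{host}:{port}"

print(sketch_proxy_string("your-username", "your-password",
                          "rp.evomi.com", 1000, country="US"))
# http://your-username:your-password_country-US@rp.evomi.com:1000
```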

Advanced Configuration

# Geographic targeting
config = ProxyConfig(
    proxy_type=ProxyType.RESIDENTIAL,
    country="US",
    city="New York",      # Becomes city-new.york
    region="California",  # Becomes region-california
    continent="north.america",
    username="user",
    password="pass"
)

# Session management
config = ProxyConfig(
    proxy_type=ProxyType.RESIDENTIAL,
    session="abc12345",   # 6-10 alphanumeric chars
    lifetime=30,          # Session lifetime in minutes (max 120)
    username="user",
    password="pass"
)

# Residential modes
config = ProxyConfig(
    proxy_type=ProxyType.RESIDENTIAL,
    mode=ResidentialMode.SPEED,  # or ResidentialMode.QUALITY
    username="user",
    password="pass"
)

# Expert settings
config = ProxyConfig(
    proxy_type=ProxyType.RESIDENTIAL,
    latency=100,         # Max latency in ms
    fraudscore=20,       # Max fraud score
    device="windows",    # Device type
    http3=True,          # Enable HTTP3/QUIC
    username="user",
    password="pass"
)

# Datacenter proxies
config = ProxyConfig(
    proxy_type=ProxyType.DATACENTER,
    protocol=ProxyProtocol.HTTP,
    country="DE",
    username="user",
    password="pass"
)
# Output: http://user:pass_country-DE@dcp.evomi.com:2000

# Mobile proxies with SOCKS5
config = ProxyConfig(
    proxy_type=ProxyType.MOBILE,
    protocol=ProxyProtocol.SOCKS5,
    country="JP",
    username="user",
    password="pass"
)
# Output: socks5h://user:pass_country-JP@mp.evomi.com:3002

Validation

config = ProxyConfig(
    session="invalid!@#",  # Invalid: special chars
    username="user",
    password="pass"
)

errors = config.validate()
print(errors)  # ['Session ID must be 6-10 alphanumeric characters']
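The rule quoted in the error message ("6-10 alphanumeric characters") can be checked up front before building a config; a minimal sketch of that one validation, assuming the rule is exactly as the message states:

```python
import re

def valid_session(session: str) -> bool:
    # 6-10 alphanumeric characters, per the validation message above
    return re.fullmatch(r"[A-Za-z0-9]{6,10}", session) is not None

print(valid_session("abc12345"))    # True
print(valid_session("invalid!@#"))  # False
```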

Client Helper Methods

The client provides helper methods that automatically fetch credentials from the Public API:

from evomi_client import EvomiClient, ProxyType

client = EvomiClient(api_key="your-api-key", public_api_key="your-public-key")

# Build a proxy config with credentials from API
config = await client.build_proxy_config(
    proxy_type=ProxyType.RESIDENTIAL,
    country="US",
    session="test123456",
)
print(f"Username: {config.username}, Password: {config.password}")

# Build a proxy string directly
proxy_string = await client.build_proxy_string(
    country="US",
    city="Los Angeles",
    latency=100,
    fraudscore=20,
)
print(proxy_string)
# Output: http://user:pass_country-US_city-los.angeles_latency-100_fraudscore-20@rp.evomi.com:1000

Sync Client Usage

from evomi_client import EvomiClientSync, ProxyConfig, ProxyType

client = EvomiClientSync(api_key="your-api-key", public_api_key="your-public-key")

# Build proxy config (sync)
config = client.build_proxy_config(country="US")
proxy_string = client.build_proxy_string(country="US", city="Los Angeles")
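The result is a standard proxy URL whose credentials carry the targeting parameters, so it decomposes with the standard library; a sketch using a string that mirrors the formats shown above (useful for sanity-checking before handing it to an HTTP client):

```python
from urllib.parse import urlsplit

# Example string following the documented format (illustrative values)
proxy_string = "http://user:pass_country-US_city-los.angeles@rp.evomi.com:1000"
parts = urlsplit(proxy_string)

print(parts.hostname, parts.port)  # rp.evomi.com 1000
print(parts.username)              # user
print(parts.password)              # pass_country-US_city-los.angeles
```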

Proxy Endpoints Reference

Proxy Type     Endpoint        HTTP   HTTPS   SOCKS5
Residential    rp.evomi.com    1000   1001    1002
Datacenter     dcp.evomi.com   2000   2001    2002
Mobile         mp.evomi.com    3000   3001    3002
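The table above maps directly to a small lookup; a self-contained sketch (values copied from the table, with proxy type and protocol assumed to be the only inputs needed):

```python
# (host, port-per-protocol) for each proxy type, from the endpoints table
ENDPOINTS = {
    "residential": ("rp.evomi.com",  {"http": 1000, "https": 1001, "socks5": 1002}),
    "datacenter":  ("dcp.evomi.com", {"http": 2000, "https": 2001, "socks5": 2002}),
    "mobile":      ("mp.evomi.com",  {"http": 3000, "https": 3001, "socks5": 3002}),
}

def endpoint_for(proxy_type: str, protocol: str) -> tuple[str, int]:
    host, ports = ENDPOINTS[proxy_type]
    return host, ports[protocol]

print(endpoint_for("mobile", "socks5"))  # ('mp.evomi.com', 3002)
```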

Parameter Reference

Parameter    Description                        Example
country      2-letter ISO country code          "US", "GB", "DE"
city         City name (spaces become dots)     "New York" → city-new.york
region       State/region name                  "California" → region-california
continent    Continent name                     "north.america", "europe"
isp          ISP shortcode                      "att", "comcast"
session      Sticky session ID (6-10 chars)     "abc12345"
hardsession  Hard session ID                    "xyz98765"
lifetime     Session duration (max 120 min)     30
mode         Residential mode                   "", "speed", "quality"
latency      Max latency in ms                  100
fraudscore   Max fraud score                    20
device       Device type                        "windows", "unix", "apple"
http3        Enable HTTP3/QUIC                  True/False
localdns     Local DNS resolution               True/False
udp          UDP support (Enterprise)           True/False
extended     Extended pool                      True/False
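Several parameters above lowercase their values and replace spaces with dots ("New York" → city-new.york). A sketch of that normalization, assuming lowercase-plus-dots is the whole rule:

```python
def normalize_geo(value: str) -> str:
    # Lowercase and replace spaces with dots, as in "New York" -> "new.york"
    return value.lower().replace(" ", ".")

print(normalize_geo("New York"))       # new.york
print(normalize_geo("California"))     # california
print(normalize_geo("North America"))  # north.america
```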

API Reference

Scraping Operations

scrape(url, ...)

Scrape a single URL with configurable options.

Parameter            Type   Default              Description
url                  str    required             URL to scrape
mode                 str    "auto"               Scraping mode: "request" (fast), "browser" (JS), "auto" (detect)
output               str    "markdown"           Output format: "html", "markdown", "screenshot", "pdf"
device               str    "windows"            Device type: "windows", "macos", "android"
proxy_type           str    "residential"        Proxy type: "datacenter", "residential"
proxy_country        str    "US"                 Two-letter country code
proxy_session_id     str    None                 Proxy session ID (6-8 chars)
wait_until           str    "domcontentloaded"   Wait condition
ai_enhance           bool   False                Enable AI enhancement
ai_prompt            str    None                 Prompt for AI extraction
ai_source            str    None                 AI source: "markdown", "screenshot"
js_instructions      list   None                 JS actions: click, wait, fill, wait_for
execute_js           str    None                 Raw JavaScript to execute
screenshot           bool   False                Capture screenshot
pdf                  bool   False                Capture PDF
wait_seconds         int    0                    Seconds to wait after load
excluded_tags        list   None                 HTML tags to remove
excluded_selectors   list   None                 CSS selectors to remove
block_resources      list   None                 Resource types to block
additional_headers   dict   None                 Extra HTTP headers
capture_headers      bool   False                Capture response headers
network_capture      list   None                 Network capture filters
async_mode           bool   False                Return immediately with task ID
config_id            str    None                 Saved config ID
scheme_id            str    None                 Saved extraction schema ID
extract_scheme       list   None                 Inline extraction schema
storage_id           str    None                 Storage config ID
use_default_storage  bool   False                Use default storage
no_html              bool   False                Exclude HTML from response

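With this many optional parameters, a client typically sends only the ones you explicitly set; a self-contained sketch of that pattern (the payload keys here are illustrative, not the confirmed wire format):

```python
def build_payload(url: str, **options) -> dict:
    # Keep only options that were explicitly set (drop None and False defaults)
    payload = {"url": url}
    payload.update({key: value for key, value in options.items()
                    if value is not None and value is not False})
    return payload

payload = build_payload("https://example.com", mode="auto",
                        ai_enhance=True, ai_prompt=None)
print(payload)  # {'url': 'https://example.com', 'mode': 'auto', 'ai_enhance': True}
```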
crawl(domain, ...)

Crawl a website to discover and scrape multiple pages.

result = await client.crawl(
    domain="example.com",
    max_urls=100,
    depth=2,
    url_pattern="/products/.*",  # Regex filter
    async_mode=True  # Returns task_id
)
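The url_pattern regex filter above can be sketched on its own; a self-contained example of filtering discovered URLs (applying re.search over the full URL is an assumption about how the filter works):

```python
import re

def filter_urls(urls, pattern):
    # Keep URLs matching the regex, as with url_pattern="/products/.*"
    return [url for url in urls if re.search(pattern, url)]

urls = ["https://example.com/products/shoe-1",
        "https://example.com/about",
        "https://example.com/products/hat-2"]
print(filter_urls(urls, r"/products/.*"))
# ['https://example.com/products/shoe-1', 'https://example.com/products/hat-2']
```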

map_website(domain, ...)

Discover URLs from a website.

result = await client.map_website(
    domain="example.com",
    sources=["sitemap", "commoncrawl"],
    max_urls=500
)

search_domains(query, ...)

Find domains by searching the web.

result = await client.search_domains(
    query="best e-commerce sites",
    max_urls=20,
    region="us-en"
)

agent_request(message)

Send a natural language request to the AI agent.

result = await client.agent_request(
    "Scrape example.com and extract all product prices"
)

get_task_status(task_id, task_type)

Check the status of an async task.

result = await client.get_task_status(
    task_id="abc123",
    task_type="scrape"  # or "crawl", "map", "config_generate", "schema"
)
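Async tasks are typically polled until they reach a terminal state; a self-contained sketch of that loop (the status values "pending", "completed", and "failed" are assumptions about the response shape, and the injected fetch function stands in for client.get_task_status):

```python
import time

def poll_until_done(fetch, interval=0.0, max_attempts=30):
    """Call fetch() until it reports a terminal status, then return the result."""
    for _ in range(max_attempts):
        result = fetch()
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")

# Stub standing in for client.get_task_status(task_id, task_type)
responses = iter([{"status": "pending"}, {"status": "pending"},
                  {"status": "completed"}])
print(poll_until_done(lambda: next(responses)))  # {'status': 'completed'}
```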

Config Management

# List configs
configs = await client.list_configs()

# Create config
config = await client.create_config(
    name="My Scraper",
    config={"mode": "browser", "output": "markdown"}
)

# Get config
config = await client.get_config("cfg_abc123")

# Update config
config = await client.update_config("cfg_abc123", name="New Name")

# Delete config
await client.delete_config("cfg_abc123")

# Generate config from natural language
config = await client.generate_config(
    name="Amazon Scraper",
    prompt="Scrape product title, price, and reviews from Amazon"
)

Schema Management

# List schemas
schemas = await client.list_schemas()

# Create schema with extraction rules
schema = await client.create_schema(
    name="Product Schema",
    config={
        "url": "https://example.com/product",
        "extract_scheme": [
            {"label": "title", "type": "content", "selector": "h1"},
            {"label": "price", "type": "content", "selector": ".price"}
        ]
    },
    test=True  # Test the schema
)

# Get schema status (for async testing)
status = await client.get_schema_status("sch_abc123")

Schedule Management

# Create a scheduled job
schedule = await client.create_schedule(
    name="Daily Price Check",
    config_id="cfg_abc123",
    interval_minutes=1440,  # Daily
    start_time="09:00"  # UTC
)

# List schedules
schedules = await client.list_schedules(active_only=True)

# Toggle schedule
await client.toggle_schedule("sched_abc123")

# Get execution history
runs = await client.list_schedule_runs("sched_abc123")

Storage Management

# Create S3 storage config
storage = await client.create_storage_config(
    name="My S3",
    storage_type="s3_compatible",
    config={
        "bucket": "my-bucket",
        "region": "us-east-1",
        "access_key": "...",
        "secret_key": "..."
    },
    set_as_default=True
)

# List storage configs
configs = await client.list_storage_configs()

Account Info

info = await client.get_account_info()
print(f"Credits remaining: {info.get('credits', 'N/A')}")

Pricing & Credits

All operations consume credits:

  • Base cost: 1 credit per request
  • Browser mode: 5x multiplier
  • Residential proxy: 2x multiplier
  • AI enhancement: +30 credits
  • Screenshot/PDF: +1 credit each

Credit information is returned in response headers and in the _credits_used and _credits_remaining response fields.
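Under the price list above, a rough per-request cost can be estimated; a sketch (how the multipliers combine with the flat add-ons is an assumption — here multipliers apply first, then flat costs are added):

```python
def estimate_credits(mode="request", proxy_type="datacenter",
                     ai_enhance=False, screenshot=False, pdf=False) -> int:
    credits = 1  # base cost per request
    if mode == "browser":
        credits *= 5          # browser mode: 5x multiplier
    if proxy_type == "residential":
        credits *= 2          # residential proxy: 2x multiplier
    if ai_enhance:
        credits += 30         # AI enhancement: +30 credits
    credits += int(screenshot) + int(pdf)  # +1 credit each
    return credits

print(estimate_credits())                                  # 1
print(estimate_credits(mode="browser", proxy_type="residential",
                       ai_enhance=True, screenshot=True))  # 41
```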

Error Handling

import httpx

from evomi_client import EvomiClient

client = EvomiClient(api_key="your-key")

try:
    result = await client.scrape("https://example.com")
except httpx.HTTPStatusError as e:
    print(f"HTTP error: {e.response.status_code}")
    print(f"Response: {e.response.text}")
except httpx.RequestError as e:
    print(f"Request error: {e}")

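For transient failures like httpx.RequestError, retrying with exponential backoff is a common next step; a self-contained sketch (the retry helper and delays are generic, not an evomi_client feature):

```python
import time

def retry(fn, attempts=3, base_delay=0.0, exceptions=(Exception,)):
    """Retry fn() with exponential backoff, re-raising after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that fails twice with a transient error, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(retry(flaky))  # ok
```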
Configuration

Set your API key via environment variable:

export EVOMI_API_KEY="your-api-key"

Or pass it directly:

client = EvomiClient(api_key="your-api-key")

Public API Key (for Proxy Endpoints)

For accessing Public API endpoints (proxy generation, rotation, etc.), you can optionally provide a separate public API key:

export EVOMI_PUBLIC_API_KEY="your-public-api-key"

Or pass it directly:

client = EvomiClient(
    api_key="your-api-key",
    public_api_key="your-public-api-key"  # Falls back to api_key if not provided
)

If public_api_key is not provided, it defaults to the standard api_key.

Custom Base URL

client = EvomiClient(
    api_key="your-api-key",
    base_url="https://custom.evomi.com"
)
