
Residential proxy SDK with full scraping capabilities — render JS, bypass protection, extract data


IPLoop Python SDK

Residential proxy SDK — one-liner web fetching through millions of real IPs.

pip install iploop

Quick Start

from iploop import IPLoop

ip = IPLoop("your-api-key")

# Fetch any URL through a residential proxy
response = ip.get("https://httpbin.org/ip")
print(response.text)

# Target a specific country
response = ip.get("https://example.com", country="DE")

# POST request
response = ip.post("https://api.example.com/data", json={"key": "value"})

Smart Headers

Headers are automatically matched to the target country — correct language, timezone, and User-Agent:

ip = IPLoop("key", country="JP")  # Japanese Chrome headers automatically
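For intuition, locale-matched headers can be sketched roughly like this. This is a minimal illustration with made-up locale data and header names, not the SDK's internal header tables:

```python
# Illustrative mapping of country code -> (Accept-Language, timezone hint).
COUNTRY_LOCALES = {
    "JP": ("ja-JP,ja;q=0.9,en;q=0.8", "Asia/Tokyo"),
    "DE": ("de-DE,de;q=0.9,en;q=0.8", "Europe/Berlin"),
    "US": ("en-US,en;q=0.9", "America/New_York"),
}

def locale_headers(country: str) -> dict:
    """Build browser-like headers matched to the target country."""
    lang, tz = COUNTRY_LOCALES.get(country, COUNTRY_LOCALES["US"])
    return {
        "Accept-Language": lang,
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/124.0.0.0 Safari/537.36"
        ),
        # Timezone is normally observable via JS; a header hint is shown
        # here purely for illustration.
        "X-Timezone": tz,
    }

print(locale_headers("JP")["Accept-Language"])  # ja-JP,ja;q=0.9,en;q=0.8
```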

Sticky Sessions

Keep the same IP across multiple requests:

s = ip.session(country="US", city="newyork")
page1 = s.fetch("https://site.com/page1")  # same IP
page2 = s.fetch("https://site.com/page2")  # same IP
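Residential proxy providers commonly pin an exit IP by embedding a session ID in the proxy credentials. A hypothetical sketch of that general pattern follows; the username format, host, and port below are made up and are not IPLoop's actual scheme:

```python
import secrets

def sticky_proxy_url(user, password, country="US",
                     host="proxy.example.com", port=8000):
    """Build a proxy URL whose session ID pins the same exit IP.

    Reusing the same session ID across requests keeps the same IP;
    generating a new one rotates to a fresh IP.
    """
    session_id = secrets.token_hex(4)
    return (
        f"http://{user}-country-{country}-session-{session_id}"
        f":{password}@{host}:{port}"
    )
```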

Auto-Retry

Failed requests (403, 502, 503, timeouts) automatically retry with a fresh IP:

# Retries up to 5 times, each with a fresh IP
response = ip.get("https://tough-site.com", retries=5)
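The retry behavior described above follows a common pattern: retry transient failures, getting a fresh proxy IP on each attempt. A minimal sketch, with a hypothetical `fetch` callable standing in for the SDK's proxy request:

```python
import time

# Status codes worth retrying with a different IP.
RETRYABLE = {403, 502, 503}

def fetch_with_retry(fetch, url, retries=3, backoff=1.0):
    """Call fetch(url) up to retries+1 times.

    Each call to fetch is assumed to go out through a fresh proxy IP.
    Returns the last response (an object with .status_code).
    """
    last = None
    for attempt in range(retries + 1):
        last = fetch(url)
        if last.status_code not in RETRYABLE:
            return last
        # Exponential backoff between attempts.
        time.sleep(backoff * (2 ** attempt))
    return last
```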

Async Support

import asyncio
from iploop import AsyncIPLoop

async def main():
    async with AsyncIPLoop("key") as ip:
        results = await asyncio.gather(
            ip.get("https://site1.com"),
            ip.get("https://site2.com"),
            ip.get("https://site3.com"),
        )
        for r in results:
            print(r.status_code)

asyncio.run(main())

Support API

ip.usage()     # Check bandwidth quota
ip.status()    # Service status
ip.ask("how do I handle captchas?")  # Ask support
ip.countries() # List available countries

Data Extraction (v1.2.0)

Auto-extract structured data from popular sites:

# eBay — extract product listings
products = ip.ebay.search("laptop", extract=True)["products"]
# [{"title": "MacBook Pro 16", "price": "$1,299.00"}, ...]

# Nasdaq — extract stock quotes
quote = ip.nasdaq.quote("AAPL", extract=True)
# {"price": "$185.50", "change": "+2.30", "pct_change": "+1.25%"}

# Google — extract search results
results = ip.google.search("best proxy service", extract=True)["results"]
# [{"title": "...", "url": "..."}, ...]

# Twitter — extract profile info
profile = ip.twitter.profile("elonmusk", extract=True)
# {"name": "Elon Musk", "handle": "elonmusk", ...}

# YouTube — extract video metadata
video = ip.youtube.video("dQw4w9WgXcQ", extract=True)
# {"title": "...", "channel": "...", "views": 1234567}

Smart Rate Limiting

Built-in per-site rate limiting prevents blocks automatically:

# These calls auto-delay to respect site limits
for q in ["laptop", "phone", "tablet"]:
    ip.ebay.search(q)  # 15s delay between requests
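Per-site rate limiting of this kind is typically a minimum-interval gate keyed by hostname. A self-contained sketch of that idea, illustrative rather than the SDK's implementation:

```python
import time
from urllib.parse import urlparse

class PerSiteLimiter:
    """Enforce a minimum delay between requests to the same host."""

    def __init__(self, min_interval=15.0):
        self.min_interval = min_interval
        self._last = {}  # hostname -> monotonic time of last request

    def wait(self, url):
        host = urlparse(url).hostname
        last = self._last.get(host)
        if last is not None:
            remaining = self.min_interval - (time.monotonic() - last)
            if remaining > 0:
                time.sleep(remaining)  # delay only same-host requests
        self._last[host] = time.monotonic()
```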

LinkedIn (New)

ip.linkedin.profile("satyanadella")
ip.linkedin.company("microsoft")

Concurrent Fetching (v1.3.0)

Batch fetch up to 25 URLs in parallel:

# Concurrent fetching (safe up to 25)
batch = ip.batch(max_workers=10)
results = batch.fetch_all([
    "https://ebay.com/sch/i.html?_nkw=laptop",
    "https://ebay.com/sch/i.html?_nkw=phone",
    "https://ebay.com/sch/i.html?_nkw=tablet"
], country="US")

# Multi-country comparison
prices = batch.fetch_multi_country("https://ebay.com/sch/i.html?_nkw=iphone", ["US", "GB", "DE"])
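Batch fetching like this can be sketched with a standard thread pool, assuming a per-URL `fetch` callable. This is an illustration of the pattern, not the SDK's internal code:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(fetch, urls, max_workers=10):
    """Fetch URLs concurrently, preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map returns results in the same order as the input URLs,
        # even though the underlying requests run in parallel.
        return list(pool.map(fetch, urls))
```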

Chrome Fingerprinting (v1.3.0)

Every request automatically applies a 14-header Chrome desktop fingerprint, so traffic resembles a real Chrome browser:

# Auto fingerprinting — no setup needed
html = ip.fetch("https://ebay.com", country="US")  # fingerprinted automatically

# Get fingerprint headers directly
headers = ip.fingerprint("DE")  # 14 headers for German Chrome

Stats Tracking (v1.3.0)

# After making requests...
print(ip.stats)
# {"requests": 10, "success": 9, "errors": 1, "total_time": 23.5, "avg_time": 2.35, "success_rate": 90.0}
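The derived fields follow directly from the counters. A small illustrative helper (not SDK code) shows the arithmetic behind `avg_time` and `success_rate`:

```python
def derive_stats(requests, success, total_time):
    """Compute the derived stats fields from the raw counters."""
    return {
        "requests": requests,
        "success": success,
        "errors": requests - success,
        "total_time": total_time,
        "avg_time": round(total_time / requests, 2) if requests else 0.0,
        "success_rate": round(100.0 * success / requests, 1) if requests else 0.0,
    }

print(derive_stats(10, 9, 23.5))
# {'requests': 10, 'success': 9, 'errors': 1, 'total_time': 23.5, 'avg_time': 2.35, 'success_rate': 90.0}
```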

Debug Mode

ip = IPLoop("key", debug=True)
# Logs: GET https://example.com → 200 (0.45s) country=US session=abc123

Exceptions

from iploop import AuthError, QuotaExceeded, ProxyError, TimeoutError

try:
    response = ip.get("https://example.com")
except QuotaExceeded:
    print("Upgrade at https://iploop.io/pricing")
except ProxyError:
    print("Proxy connection failed")
except TimeoutError:
    print("Request timed out")

