
Desync Search — "API to the Internet"

Motto: The easiest way to scrape and retrieve web data without aggressive rate limits or heavy detection.


Key Features

  • No Rate Limiting: We allow you to scale concurrency without punishing usage. You can open many parallel searches; we’ll only throttle if the underlying cloud providers themselves are saturated.
  • Extremely Low Detection Rates: Our “stealth_search” uses advanced methods for a “human-like” page visit. While we cannot guarantee 100% evasion, most websites pass under the radar, and CAPTCHAs—when they do appear—are often circumvented by a second pass.
  • Competitive, Pay-as-You-Go Pricing: No forced subscriptions or huge minimum monthly costs. You pick how much you spend. Our per-search cost is typically half of what big competitors charge (who often require $1,000+ per month).
  • First 1,000 Searches Free: Not convinced? Try it yourself, risk-free. We’ll spot you 1,000 searches when you sign up. Check out desync.ai for more info.

Installation

Install via PyPI using:

pip install desync_search

This library requires Python 3.6+ and the requests package (installed automatically).


Basic Usage

You’ll need a user API key (like "totallynotarealapikeywithactualcreditsonit").
A best practice is to store that key in an environment variable (e.g. DESYNC_API_KEY) to avoid embedding secrets in code:

export DESYNC_API_KEY="YOUR_ACTUAL_KEY"

Then in your Python code:

import os
from desync_search.core import DesyncClient

user_api_key = os.environ.get("DESYNC_API_KEY", "")
client = DesyncClient(user_api_key)

Here, the client automatically targets our production endpoint:

https://nycv5sx75joaxnzdkgvpx5mcme0butbo.lambda-url.us-east-1.on.aws/

Searching for Data

1) Performing a Search

By default, search(...) does a stealth search (cost: 10 credits). If you want a test search (cost: 1 credit), pass search_type="test_search".

# Stealth Search (default)
page_data = client.search("https://www.137ventures.com/portfolio")

print("URL:", page_data.url)
print("Text length:", len(page_data.text_content))

# Test Search
test_response = client.search(
    "https://www.python.org", 
    search_type="test_search"
)
print("Test search type:", test_response.search_type)

Both calls return a PageData object if success=True. For stealth, you’ll typically see fields like .text_content, .internal_links, and .external_links. Example:

print(page_data)
# <PageData url=https://www.137ventures.com/portfolio search_type=stealth_search timestamp=... complete=True>

print(page_data.text_content[:200])  # first 200 chars of text

You can pass scrape_full_html=True to get the entire HTML, or remove_link_duplicates=False to keep duplicates:

stealth_response = client.search(
    "https://www.137ventures.com/portfolio",
    scrape_full_html=True,
    remove_link_duplicates=False
)
print(len(stealth_response.html_content), "HTML chars")
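If you request the full HTML, you can hand html_content to any parser you like. As a minimal sketch (not part of this library), the following counts anchor tags with Python's built-in html.parser, reusing stealth_response from the snippet above:

from html.parser import HTMLParser

class AnchorCounter(HTMLParser):
    """Counts <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.count += 1

counter = AnchorCounter()
counter.feed(stealth_response.html_content)
print(counter.count, "anchor tags in the page")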

Example: Visit All Internal Links

A simple spider approach: after you search a page, gather its internal links, check which ones you haven't visited, and recursively fetch them. For example:

visited = set()

def crawl(client, url):
    if url in visited:
        return
    visited.add(url)

    page_data = client.search(url)  # stealth by default
    print("Scraped:", url, "Found", len(page_data.internal_links), "internal links")

    # For each new internal link, crawl again
    for link in page_data.internal_links:
        if link not in visited:
            crawl(client, link)

# Start from a seed URL
crawl(client, "https://www.137ventures.com/portfolio")

Note: Keep an eye on your credit usage if you do large-scale crawling.
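Because every stealth search costs 10 credits, it can help to cap how many pages a crawl will visit. Here is a minimal sketch of the same idea written iteratively, with a max_pages limit (the limit is an illustrative cap of ours, not a library parameter):

from collections import deque

def bounded_crawl(client, seed_url, max_pages=25):
    """Breadth-first crawl that stops after max_pages stealth searches."""
    visited = set()
    queue = deque([seed_url])
    pages = []

    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)

        page_data = client.search(url)  # stealth by default, 10 credits each
        pages.append(page_data)

        for link in page_data.internal_links:
            if link not in visited:
                queue.append(link)

    return pages

pages = bounded_crawl(client, "https://www.137ventures.com/portfolio", max_pages=25)
print("Crawled", len(pages), "pages")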


Retrieving Past Results

2) Listing Available Results

Use list_available() to get minimal data for each past search:

records = client.list_available()
for r in records:
    print(r.id, r.url, r.search_type, r.created_at)

Each r is a PageData with minimal fields (omitting large text/html for bandwidth savings).
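Since each record still carries its url and search_type, you can filter the list locally before deciding which records to pull in full. A small sketch that collects the IDs of past stealth searches:

records = client.list_available()

# Each record here is minimal; pull_data(record_id=...) fetches the full fields later.
stealth_ids = [r.id for r in records if r.search_type == "stealth_search"]
print("Found", len(stealth_ids), "stealth search records:", stealth_ids)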

3) Pulling Detailed Data

If you want all fields (including text, HTML, links, etc.), call pull_data(...). For example, pull a single record by its ID:

# e.g. pull by record_id
details = client.pull_data(record_id=10)
if details:
    first = details[0]
    print(first.url, len(first.text_content), "chars of text")

You can also pass a url_filter (if your installed version supports it) to restrict results to a given URL:

details = client.pull_data(url_filter="https://example.org")

4) Checking Your Credits Balance

Retrieve your current credits balance:

balance_info = client.pull_credits_balance()
print(balance_info)
# e.g. { "success": true, "credits_balance": 240 }

We store the user’s credits on our server, so you can see how many searches remain.
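One practical use is checking the balance before starting a large job. A rough sketch, assuming the 10-credits-per-stealth-search pricing described above and the credits_balance key shown in the example response:

STEALTH_COST = 10  # credits per stealth search, per the pricing above

balance_info = client.pull_credits_balance()
remaining = balance_info.get("credits_balance", 0)

planned_searches = 200
if remaining < planned_searches * STEALTH_COST:
    print("Not enough credits for", planned_searches, "stealth searches; balance is", remaining)
else:
    print("Enough credits, proceeding")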


Additional Notes

  • Attribution: This package relies on open-source libraries like requests.
  • Rate Limits: We do not impose user-level concurrency throttles, but large-scale usage could be slowed if the underlying cloud environment is heavily utilized (see the parallel-search sketch after this list).
  • Your First 1,000 Searches: On new accounts, we credit 1,000 searches automatically, so you can test stealth or test calls with zero upfront cost.
  • For more advanced usage (like admin ops, account creation, adding credits) see desync.ai or contact support.
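Since concurrency is not throttled at the user level, you can fan searches out in parallel. A minimal sketch with concurrent.futures; it assumes a single DesyncClient can be shared across threads (if in doubt, create one client per worker):

import os
from concurrent.futures import ThreadPoolExecutor

from desync_search.core import DesyncClient

client = DesyncClient(os.environ.get("DESYNC_API_KEY", ""))

urls = [
    "https://www.python.org",
    "https://www.137ventures.com/portfolio",
    "https://www.example.org",
]

# Run several stealth searches concurrently
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(client.search, urls))

for page in results:
    print(page.url, len(page.text_content), "chars of text")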

License

This project is licensed under the MIT License.


Happy scraping with Desync Search—the next-level “API to the Internet”! We look forward to your feedback and contributions.





Download files

Download the file for your platform.

Source Distribution

desync_search-0.2.16.tar.gz (9.1 kB)

Built Distribution

desync_search-0.2.16-py3-none-any.whl (9.6 kB)

File details

Details for the file desync_search-0.2.16.tar.gz.

File metadata

  • Download URL: desync_search-0.2.16.tar.gz
  • Upload date:
  • Size: 9.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for desync_search-0.2.16.tar.gz
  • SHA256: 6e54f8b40bc9f2cfc320b8ac3578aecec621db6414967dc6591aea43cf1f8a0d
  • MD5: ae1a4b54a834f236536a431ec781d29b
  • BLAKE2b-256: d3bcc1242c5149017d58a98dabf54693a289fd9c9112156f41ee29e401d192b7


File details

Details for the file desync_search-0.2.16-py3-none-any.whl.

File metadata

  • Download URL: desync_search-0.2.16-py3-none-any.whl
  • Upload date:
  • Size: 9.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.7

File hashes

Hashes for desync_search-0.2.16-py3-none-any.whl
  • SHA256: b4e0b198cb8c263e70a61f965886d511ce11b007bfec95657cd54e1f34998b66
  • MD5: c3bc671af6c4b86ef473b1b9a7579348
  • BLAKE2b-256: fbb1832612e3bc157ad0308bcf7c79f66fe10442e16760b58f4bdcb96fad7615

