Desearch
The official Python SDK for the Desearch API — AI-powered search, X (Twitter) data retrieval, web search, and crawling.
Table of Contents
- Installation
- Quick Start
- AI Contextual Search
- AI Web Links Search
- AI X Posts Links Search
- X Search
- Fetch Posts by URLs
- Retrieve Post by ID
- Search X Posts by User
- Get Retweeters of a Post
- Get X Posts by Username
- Fetch User's Tweets and Replies
- Retrieve Replies for a Post
- SERP Web Search
- Crawl a URL
Installation
```bash
pip install desearch-py
```
Quick Start
```python
import asyncio

from desearch_py import Desearch


async def main():
    async with Desearch(api_key="your_api_key") as desearch:
        result = await desearch.ai_search(
            prompt="Bittensor",
            tools=["web", "twitter"],
        )
        print(result)


asyncio.run(main())
```
AI Contextual Search
ai_search
AI-powered multi-source contextual search. Searches across web, X (Twitter), Reddit, YouTube, HackerNews, Wikipedia, and arXiv, returning results with optional AI-generated summaries. Supports streaming responses.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | str | Yes | — | Search query prompt |
| tools | List[str] | Yes | — | List of tools to search with (e.g. web, twitter, reddit, hackernews, youtube, wikipedia, arxiv) |
| start_date | Optional[str] | No | None | Start date in UTC (YYYY-MM-DDTHH:MM:SSZ) |
| end_date | Optional[str] | No | None | End date in UTC (YYYY-MM-DDTHH:MM:SSZ) |
| date_filter | Optional[str] | No | PAST_24_HOURS | Predefined date filter for search results |
| result_type | Optional[str] | No | LINKS_WITH_FINAL_SUMMARY | Result type (ONLY_LINKS or LINKS_WITH_FINAL_SUMMARY) |
| system_message | Optional[str] | No | None | System message for the search |
| scoring_system_message | Optional[str] | No | None | System message for scoring the response |
| count | Optional[int] | No | None | Number of results per source (10–200) |
```python
result = await desearch.ai_search(
    prompt="Bittensor",
    tools=["web", "hackernews", "reddit", "wikipedia", "youtube", "twitter", "arxiv"],
    date_filter="PAST_24_HOURS",
    result_type="LINKS_WITH_FINAL_SUMMARY",
    count=20,
)
```
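The start_date and end_date strings must be UTC timestamps in YYYY-MM-DDTHH:MM:SSZ form. A small stdlib helper to produce them (an illustration, not part of the SDK):

```python
from datetime import datetime, timedelta, timezone


def utc_stamp(dt: datetime) -> str:
    """Format an aware datetime as the YYYY-MM-DDTHH:MM:SSZ string the API expects."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")


# e.g. a 24-hour window ending now
end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)
print(utc_stamp(start), utc_stamp(end))
```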
AI Web Links Search
ai_web_links_search
Search for raw links across web sources (web, HackerNews, Reddit, Wikipedia, YouTube, arXiv). Returns structured link results without AI summaries.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | str | Yes | — | Search query prompt |
| tools | List[str] | Yes | — | List of web tools to search with (e.g. web, hackernews, reddit, wikipedia, youtube, arxiv) |
| count | Optional[int] | No | None | Number of results per source (10–200) |
```python
result = await desearch.ai_web_links_search(
    prompt="What are the recent sport events?",
    tools=["web", "hackernews", "reddit", "wikipedia", "youtube", "arxiv"],
    count=20,
)
```
AI X Posts Links Search
ai_x_links_search
Search for X (Twitter) post links matching a prompt using AI-powered models. Returns tweet objects from the miner network.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | str | Yes | — | Search query prompt |
| count | Optional[int] | No | None | Number of results to return (10–200) |
```python
result = await desearch.ai_x_links_search(
    prompt="What are the recent sport events?",
    count=20,
)
```
X Search
x_search
X (Twitter) search with extensive filtering options: date range, user, language, verification status, media type (image/video/quote), and engagement thresholds (min likes, retweets, replies). Sort by Top or Latest.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | str | Yes | — | Advanced search query |
| sort | Optional[str] | No | Top | Sort by Top or Latest |
| user | Optional[str] | No | None | User to search for |
| start_date | Optional[str] | No | None | Start date in UTC (YYYY-MM-DD) |
| end_date | Optional[str] | No | None | End date in UTC (YYYY-MM-DD) |
| lang | Optional[str] | No | None | Language code (e.g. en, es, fr) |
| verified | Optional[bool] | No | None | Filter for verified users |
| blue_verified | Optional[bool] | No | None | Filter for blue checkmark verified users |
| is_quote | Optional[bool] | No | None | Include only tweets with quotes |
| is_video | Optional[bool] | No | None | Include only tweets with videos |
| is_image | Optional[bool] | No | None | Include only tweets with images |
| min_retweets | Optional[Union[int, str]] | No | None | Minimum number of retweets |
| min_replies | Optional[Union[int, str]] | No | None | Minimum number of replies |
| min_likes | Optional[Union[int, str]] | No | None | Minimum number of likes |
| count | Optional[int] | No | 20 | Number of tweets to retrieve (1–100) |
```python
result = await desearch.x_search(
    query="What's going on with Bittensor",
    sort="Top",
    user="elonmusk",
    start_date="2024-12-01",
    end_date="2025-02-25",
    lang="en",
    verified=True,
    blue_verified=True,
    count=20,
)
```
Fetch Posts by URLs
x_posts_by_urls
Fetch full post data for a list of X (Twitter) post URLs. Returns metadata, content, and engagement metrics for each URL.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | List[str] | Yes | — | List of tweet URLs to retrieve |
```python
result = await desearch.x_posts_by_urls(
    urls=["https://x.com/RacingTriple/status/1892527552029499853"],
)
```
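Post URLs follow the pattern https://x.com/&lt;user&gt;/status/&lt;id&gt;, so the numeric ID used by x_post_by_id below can be recovered from a URL with the stdlib. A small helper for that (an illustration, not part of the SDK):

```python
from urllib.parse import urlparse


def post_id_from_url(url: str) -> str:
    """Extract the numeric post ID from an x.com status URL."""
    # Path looks like /<user>/status/<id>
    parts = urlparse(url).path.rstrip("/").split("/")
    return parts[parts.index("status") + 1]


print(post_id_from_url("https://x.com/RacingTriple/status/1892527552029499853"))
# 1892527552029499853
```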
Retrieve Post by ID
x_post_by_id
Fetch a single X (Twitter) post by its unique ID. Returns metadata, content, and engagement metrics.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | str | Yes | — | The unique ID of the post |
```python
result = await desearch.x_post_by_id(
    id="1892527552029499853",
)
```
Search X Posts by User
x_posts_by_user
Search X (Twitter) posts by a specific user, with optional keyword filtering.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| user | str | Yes | — | User to search for |
| query | Optional[str] | No | None | Advanced search query |
| count | Optional[int] | No | None | Number of tweets to retrieve (1–100) |
```python
result = await desearch.x_posts_by_user(
    user="elonmusk",
    query="What's going on with Bittensor",
    count=20,
)
```
Get Retweeters of a Post
x_post_retweeters
Retrieve the list of users who retweeted a specific post by its ID. Supports cursor-based pagination.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| id | str | Yes | — | The ID of the post to get retweeters for |
| cursor | Optional[str] | No | None | Cursor for pagination |
```python
result = await desearch.x_post_retweeters(
    id="1982770537081532854",
)
```
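x_post_retweeters pages with an opaque cursor. A generic driver for that pattern is sketched below; the response shape (an "items" list and a "next_cursor" field) is an assumption for illustration, so adapt the field names to the actual payload. With the SDK you would wrap desearch.x_post_retweeters(id=..., cursor=cursor) as the fetch callable.

```python
import asyncio
from typing import Any, Awaitable, Callable, Optional


async def paginate(
    fetch: Callable[[Optional[str]], Awaitable[dict]],
    max_pages: int = 5,
) -> list:
    """Call fetch(cursor) until no next cursor remains, collecting items.

    Assumes each page is a dict with "items" and an optional "next_cursor".
    """
    items: list = []
    cursor: Optional[str] = None
    for _ in range(max_pages):
        page = await fetch(cursor)
        items.extend(page.get("items", []))
        cursor = page.get("next_cursor")
        if not cursor:
            break
    return items


# Demo with a fake two-page fetcher.
async def fake_fetch(cursor):
    pages = {
        None: {"items": [1, 2], "next_cursor": "a"},
        "a": {"items": [3], "next_cursor": None},
    }
    return pages[cursor]


print(asyncio.run(paginate(fake_fetch)))  # [1, 2, 3]
```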
Get X Posts by Username
x_user_posts
Retrieve a user's timeline posts by their username. Fetches the latest tweets posted by that user. Supports cursor-based pagination.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| username | str | Yes | — | Username to fetch posts for |
| cursor | Optional[str] | No | None | Cursor for pagination |
```python
result = await desearch.x_user_posts(
    username="elonmusk",
)
```
Fetch User's Tweets and Replies
x_user_replies
Fetch tweets and replies posted by a specific user, with optional keyword filtering.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| user | str | Yes | — | The username of the user to search for |
| count | Optional[int] | No | None | The number of tweets to fetch (1–100) |
| query | Optional[str] | No | None | Advanced search query |
```python
result = await desearch.x_user_replies(
    user="elonmusk",
    query="latest news on AI",
    count=20,
)
```
Retrieve Replies for a Post
x_post_replies
Fetch replies to a specific X (Twitter) post by its post ID.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| post_id | str | Yes | — | The ID of the post to search for |
| count | Optional[int] | No | None | The number of tweets to fetch (1–100) |
| query | Optional[str] | No | None | Advanced search query |
```python
result = await desearch.x_post_replies(
    post_id="1234567890",
    query="latest news on AI",
    count=20,
)
```
SERP Web Search
web_search
SERP web search. Returns paginated web search results, replicating a typical search engine experience.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | str | Yes | — | The search query string |
| start | Optional[int] | No | 0 | Number of results to skip for pagination |
```python
result = await desearch.web_search(
    query="latest news on AI",
    start=10,
)
```
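Since start is a skip offset, walking result pages means increasing it by the page size on each call. The page size of 10 below is an assumption for illustration, not something the SDK documents:

```python
def page_offsets(pages: int, page_size: int = 10) -> list:
    """Offsets to pass as `start` for the first `pages` result pages."""
    return [page * page_size for page in range(pages)]


# Pass each offset to web_search(query=..., start=offset) in turn.
print(page_offsets(3))  # [0, 10, 20]
```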
Crawl a URL
web_crawl
Crawl a URL and return its content as plain text or HTML.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | str | Yes | — | URL to crawl |
| format | Optional[str] | No | text | Format of content (html or text) |
```python
result = await desearch.web_crawl(
    url="https://en.wikipedia.org/wiki/Artificial_intelligence",
    format="html",
)
```
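If you request format="html", the stdlib html.parser can reduce the returned markup back to plain text for downstream processing. A minimal sketch (fed a literal string here; how the crawl result exposes the raw HTML is an assumption to check against the actual response):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect the text content of an HTML document, dropping tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-whitespace text nodes.
        if data.strip():
            self.chunks.append(data.strip())

    def text(self):
        return " ".join(self.chunks)


extractor = TextExtractor()
extractor.feed("<h1>Artificial intelligence</h1><p>AI is ...</p>")
print(extractor.text())  # Artificial intelligence AI is ...
```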