Free Python client for DuckDuckGo AI Chat (duck.ai). Sync, streaming, image generation, image edit, multimodal vision, web search. Auto-retry on challenge failures. No API key required.
p2d-duck
Free, no-API-key Python client for DuckDuckGo AI Chat (duck.ai).
- Single sync client — built on `httpx`. No async twin to drift out of sync.
- Auto-retry on challenge failures — fresh `x-vqd-hash-1` solve, exponential backoff with jitter. Works on the first call, not the third.
- Session warm-up — cookies are seeded on construction so the very first chat request looks like a real browser session.
- 6 chat models + image generation: GPT-4o mini, GPT-5 mini, Claude Haiku 4.5, Llama 4 Scout, Mistral Small, GPT-OSS 120B.
- Reasoning effort switching (`fast` vs `reasoning`).
- Image generation (`image-generation` model).
- Image edit (caption + source image → edited image, same model).
- Image upload / multimodal (vision-capable chats).
- Web search tool opt-in per call on supported models (GPT-4o mini, GPT-5 mini, Claude Haiku 4.5).
- Built-in solver for the `x-vqd-hash-1` JS challenge via `mini-racer`.
- CLI: `p2d-duck`.
- No account, no API key, no server, no fee.
Install
```shell
pip install p2d-duck
```
The Python import name is still `duck_ai`, e.g. `from duck_ai import DuckChat`.
Latest release: v1.2.0 — see the full changelog on the release page.
v1.2.0: chat history is now off by default to match a clean single-turn API call. Pass `history=True` (or `--history` on the CLI, or `/history on` in the REPL) to opt back into multi-turn memory. A reference Telegram bot (`bot.py`) is shipped alongside the library.
Quickstart
```python
from duck_ai import DuckChat, gpt4

with DuckChat(model=gpt4) as duck:
    print(duck.ask("Explain quantum tunneling in one sentence."))
```
Switch models by passing an alias:
```python
from duck_ai import DuckChat

with DuckChat(model="claude") as duck:
    print(duck.ask("Hi Claude!"))
```
You can pass any of: "gpt4", "gpt5_mini", "claude", "llama", "mistral", "gpt-oss", "image", or any raw model id.
Streaming
```python
from duck_ai import DuckChat

with DuckChat() as duck:
    for chunk in duck.stream("Write a 4-line haiku about ducks."):
        print(chunk, end="", flush=True)
```
Multi-turn conversation
History is off by default in v1.2.0+. Each call to `ask` / `stream` is treated as a fresh single-turn request unless you opt in.
Three ways to enable it:
```python
from duck_ai import DuckChat, claude

# 1) Per-client: turn it on at construction time
with DuckChat(model=claude, history=True) as duck:
    duck.ask("My name is Alice. Remember it.")
    print(duck.ask("What is my name?"))  # -> Alice
    duck.reset()  # clear the buffered turns

# 2) Toggle live without recreating the client
with DuckChat(model=claude) as duck:
    duck.enable_history()
    duck.ask("Pick a number between 1 and 10.")
    print(duck.ask("What number did you pick?"))
    duck.disable_history()  # also clears the buffer

# 3) Per-call override
with DuckChat() as duck:
    duck.ask("hi", remember=True)          # buffer this turn
    duck.ask("ignore me", remember=False)  # do not buffer
```
In the CLI:

- `p2d-duck --history chat` — start with history on

Inside the REPL:

- `/history on` / `/history off` to toggle, `/reset` to clear
The toggle is purely client-side. duck.ai's chat endpoint is stateless — multi-turn memory is implemented by replaying previous turns in the request, so disabling history also reduces token usage and latency.
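Because the endpoint is stateless, "memory" is just the client replaying buffered turns in each request. A minimal sketch of that mechanism (a hypothetical `HistoryBuffer` helper for illustration, not the library's internals):

```python
# Sketch of replay-based multi-turn memory: the server is stateless,
# so every request must carry all previously buffered turns.
class HistoryBuffer:
    def __init__(self, enabled: bool = False):
        self.enabled = enabled
        self.turns: list[dict] = []

    def build_payload(self, prompt: str) -> list[dict]:
        """Messages actually sent: buffered turns plus the new prompt."""
        return self.turns + [{"role": "user", "content": prompt}]

    def record(self, prompt: str, reply: str) -> None:
        # With history disabled, nothing is buffered and every request
        # stays single-turn (fewer tokens, lower latency).
        if self.enabled:
            self.turns.append({"role": "user", "content": prompt})
            self.turns.append({"role": "assistant", "content": reply})


buf = HistoryBuffer(enabled=True)
buf.record("My name is Alice.", "Nice to meet you, Alice!")
print(len(buf.build_payload("What is my name?")))  # 3 messages replayed
```

This is also why disabling history shrinks the payload: the buffer is simply never replayed.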
Reasoning vs Fast mode
```python
from duck_ai import DuckChat, gpt5_mini, claude

with DuckChat(model=gpt5_mini, effort="fast") as duck:
    print(duck.ask("Quick: 2+2?"))

with DuckChat(model=claude, effort="reasoning") as duck:
    print(duck.ask("Solve: I speak without a mouth..."))
```
| Model | Supports fast / reasoning? | Default effort |
|---|---|---|
| `gpt4` (gpt-4o-mini) | no | — |
| `gpt5_mini` | yes | minimal |
| `claude` (Haiku 4.5) | yes | low |
| `gpt_oss` (gpt-oss 120B) | yes | low |
| `llama` (Llama 4 Scout) | no | — |
| `mistral` (Small 2603) | no | — |
Image generation
```python
from duck_ai import DuckChat, image_generation

with DuckChat(model=image_generation) as duck:
    duck.generate_image(
        "a cute rubber duck wearing a wizard hat, digital art",
        save_to="duck_wizard.jpg",
    )
```
Image edit
Edit an existing image with a caption. This uses the same image-generation model and the same endpoint as plain generation, but the user message carries both a text caption and an `ImagePart`:
```python
from duck_ai import DuckChat, image_generation

with DuckChat(model=image_generation) as duck:
    duck.edit_image(
        "make the duck wear a tiny chef hat",
        "duck_wizard.jpg",
        save_to="duck_chef.jpg",
    )
```
Web search
Web search is an opt-in tool, off by default. Pass web_search=True per call.
It is only sent to models that actually support it (GPT-4o mini, GPT-5 mini,
Claude Haiku 4.5); the flag is silently ignored for the others.
```python
from duck_ai import DuckChat, gpt5_mini, model_supports_web_search

assert model_supports_web_search(gpt5_mini)

with DuckChat(model=gpt5_mini) as duck:
    print(duck.ask(
        "What did Apple announce at WWDC this year?",
        web_search=True,
    ))
```
CLI:

```shell
p2d-duck -m gpt5_mini chat "Latest SpaceX launch?" --web-search
```
Image upload (multimodal)
```python
from duck_ai import DuckChat, ImagePart

with DuckChat() as duck:
    print(duck.ask_with_image("What is in this image?", "photo.jpg"))
    print(duck.ask([
        "Compare these two images:",
        ImagePart.from_path("a.png"),
        ImagePart.from_path("b.png"),
    ]))
```
If your selected model has no vision capability, multimodal requests are
automatically routed to a vision-capable model (gpt-5-mini).
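The routing rule above can be sketched as a one-line decision. This is an illustrative stand-in, not the library's code; the set of vision-capable model ids and the `route_model` helper are assumptions based only on the fallback described in this section:

```python
# Hypothetical sketch of vision fallback routing: requests that carry an
# image but target a non-vision model are redirected to gpt-5-mini.
VISION_MODELS = {"gpt-5-mini"}  # assumption: the documented fallback

def route_model(model: str, has_image: bool,
                fallback: str = "gpt-5-mini") -> str:
    if has_image and model not in VISION_MODELS:
        return fallback
    return model

print(route_model("mistral-small-2603", has_image=True))  # gpt-5-mini
```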
CLI
```shell
p2d-duck                                   # interactive REPL
p2d-duck chat "Hello, who are you?"
p2d-duck -m claude chat "Hi Claude!"
p2d-duck -m gpt5_mini -e reasoning chat "Solve x^2 - 5x + 6 = 0"
p2d-duck chat "Describe this" --image cat.jpg
p2d-duck image "a watercolor moon over a lake" -o moon.jpg
p2d-duck edit "make the cat wear sunglasses" --image cat.jpg -o cat_cool.jpg
p2d-duck -m claude chat "Top news today" --web-search
p2d-duck models                            # list known models
```
The legacy `duck-ai` command is also installed for backwards compatibility, so existing scripts keep working.
Reliability — what changed
The previous version of this client raised on the very first 418 `ERR_CHALLENGE`, leaving callers to retry manually 2-3 times. This rewrite:

- Warms the HTTP session by hitting the duck.ai homepage on construction so cookies are present before the first chat request.
- Wraps every chat call in a retry loop. On `ChallengeError`, `RemoteProtocolError`, transient `RateLimitError`, dropped streams, or an empty SSE response, it re-fetches the `x-vqd-hash-1` challenge and tries again with exponential backoff + jitter.
- Treats `ConversationLimitError` as terminal so we don't burn retries on a permanent failure.
- Refuses to fall back to a fake RSA key for durable streams. If `cryptography` isn't installed we raise immediately instead of sending a garbage public key the server will reject.
You can tune retries with `DuckChat(max_retries=4, backoff_base=0.6)`.
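To make the knobs concrete, here is a sketch of an "exponential backoff with jitter" schedule. The parameter names mirror the `DuckChat` kwargs, but the exact delay formula is an assumption for illustration, not the library's code:

```python
import random

# Hedged sketch: exponential backoff with jitter. Each attempt doubles
# the base delay, then adds a random component so concurrent clients
# don't retry in lockstep. Seeded here for a reproducible demo.
def backoff_delays(max_retries: int = 4, backoff_base: float = 0.6,
                   seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_retries):
        delay = backoff_base * (2 ** attempt)  # 0.6, 1.2, 2.4, 4.8, ...
        delay += rng.uniform(0, delay * 0.5)   # jitter: up to +50%
        delays.append(round(delay, 3))
    return delays

print(backoff_delays())
```

The jitter matters in practice: without it, every throttled client would hammer the challenge endpoint at the same instant.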
Models
```python
from duck_ai import (
    DuckChat,
    gpt4, gpt5_mini, claude, llama, mistral, gpt_oss, image_generation,
)
```
| Alias | Resolved model id |
|---|---|
| `gpt4` / `gpt4o_mini` | `gpt-4o-mini` |
| `gpt5` / `gpt5_mini` | `gpt-5-mini` |
| `claude` / `claude_haiku` | `claude-haiku-4-5` |
| `llama` / `llama4_scout` | `meta-llama/Llama-4-Scout-17B-16E-Instruct` |
| `mistral` / `mistral_small` | `mistral-small-2603` |
| `gpt_oss` / `gpt_oss_120b` | `tinfoil/gpt-oss-120b` |
| `image_generation` / `image` | `image-generation` |
You can also pass any model string directly: `DuckChat(model="gpt-4o-mini")`.
How it works
DuckDuckGo's AI Chat backend (`duck.ai/duckchat/v1/*`) requires a per-request proof-of-work challenge encoded in the `x-vqd-hash-1` header. The server returns an obfuscated JavaScript snippet that must be evaluated against a browser-like environment to compute valid client hashes.
p2d-duck ships with:

- A minimal browser-DOM JavaScript shim (`_stubs.js`).
- An embedded V8 isolate via `mini-racer` to execute the challenge.
- SHA-256 hashing of the resulting fingerprint values.
- A real RSA-OAEP public key for resumable streams (durable streams).

No external Node.js install is required.
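The hashing step in the list above is plain SHA-256 over the fingerprint strings the evaluated JS produces. A minimal sketch of that step, assuming base64-encoded digests (the exact input strings and output encoding are assumptions; they come from the obfuscated challenge, not from anything documented here):

```python
import base64
import hashlib

# Hedged sketch: SHA-256 a fingerprint value and base64-encode the
# digest, roughly the shape of the "client hashes" the challenge expects.
def hash_fingerprint(value: str) -> str:
    digest = hashlib.sha256(value.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example with a made-up fingerprint string:
print(hash_fingerprint("Mozilla/5.0 (X11; Linux x86_64)"))
```

The heavy lifting is the JS evaluation in the V8 isolate; the hashing itself is a one-liner once the fingerprint values exist.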
Exceptions
| Exception | When |
|---|---|
| `DuckChatError` | Generic error; base class. |
| `ChallengeError` | Couldn't solve the JS challenge. |
| `RateLimitError` | HTTP 429 from the server. |
| `ConversationLimitError` | Too many turns in one session (terminal). |
| `APIError` | Any other non-200 response (`.status_code`, `.body`). |
If you hit HTTP 418 ERR_CHALLENGE after the retry budget is exhausted,
your IP is being throttled by duck.ai's anti-abuse system. Wait 30-60 seconds
between consecutive requests.
License
MIT. See LICENSE.
Disclaimer
This is an unofficial reverse-engineered client. It is not affiliated with or endorsed by DuckDuckGo. Use at your own risk and respect duck.ai's terms of service. The DuckDuckGo backend may change at any time and break this library.
Project details
File details

Details for the file p2d_duck-1.2.0.tar.gz.

File metadata

- Download URL: p2d_duck-1.2.0.tar.gz
- Size: 24.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c6f2490d975b881a29ff0c3d34ba84a6462bcabb5eee5062476da4a41ef08198` |
| MD5 | `30f95d7f866ef410f9892cf656f1ffbc` |
| BLAKE2b-256 | `834aaad0232e2452f21557e207bb4ea16867000d71eea77aaedb2ad6034e42e9` |
Provenance

The following attestation bundles were made for p2d_duck-1.2.0.tar.gz:

Publisher: publish.yml on pooraddyy/p2d-duck

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: p2d_duck-1.2.0.tar.gz
- Subject digest: c6f2490d975b881a29ff0c3d34ba84a6462bcabb5eee5062476da4a41ef08198
- Sigstore transparency entry: 1399425019
- Permalink: pooraddyy/p2d-duck@fc7f21b9df9f4ec0d42ba514f0d71d2ac3e09b60
- Branch / Tag: refs/tags/v1.2.0
- Owner: https://github.com/pooraddyy
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@fc7f21b9df9f4ec0d42ba514f0d71d2ac3e09b60
- Trigger Event: push
File details

Details for the file p2d_duck-1.2.0-py3-none-any.whl.

File metadata

- Download URL: p2d_duck-1.2.0-py3-none-any.whl
- Size: 26.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `446feeaf7cc622caa065d2e0448aed6ae1dd79e285fea5b6e56a96cbbdf2a263` |
| MD5 | `0da8119fb56634a09c37b482ec1db0a1` |
| BLAKE2b-256 | `f52a3f3bfe59b399ad15f532089e349376f20b216e1011866dcaf6f99412d4d5` |
Provenance

The following attestation bundles were made for p2d_duck-1.2.0-py3-none-any.whl:

Publisher: publish.yml on pooraddyy/p2d-duck

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: p2d_duck-1.2.0-py3-none-any.whl
- Subject digest: 446feeaf7cc622caa065d2e0448aed6ae1dd79e285fea5b6e56a96cbbdf2a263
- Sigstore transparency entry: 1399425054
- Permalink: pooraddyy/p2d-duck@fc7f21b9df9f4ec0d42ba514f0d71d2ac3e09b60
- Branch / Tag: refs/tags/v1.2.0
- Owner: https://github.com/pooraddyy
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@fc7f21b9df9f4ec0d42ba514f0d71d2ac3e09b60
- Trigger Event: push