
langchain-doubleword

A LangChain integration package for Doubleword.

This package wires Doubleword's OpenAI-compatible inference API (https://api.doubleword.ai/v1) into LangChain and LangGraph as both real-time chat / embedding models and transparently-batched variants powered by autobatcher.

The batched variants are required to access models that Doubleword exposes only via the batch API, and they cut cost on workloads that fan out many concurrent calls — typically the case in LangGraph agents.

Installation

pip install langchain-doubleword

Authentication

Three resolution paths, in precedence order:

  1. Explicit constructor argument:

    ChatDoubleword(model="...", api_key="sk-...")
    
  2. Environment variable:

    export DOUBLEWORD_API_KEY=sk-...
    
  3. ~/.dw/credentials.toml — the same file written by Doubleword's CLI tooling. The active account is selected by ~/.dw/config.toml's active_account field, and inference_key from that account is used.

    # ~/.dw/config.toml
    active_account = "work"
    
    # ~/.dw/credentials.toml
    [accounts.work]
    inference_key = "sk-..."
    

    To use a non-active account from your credentials file, set DOUBLEWORD_API_KEY directly to that account's inference_key — there is no account= selector on the model itself.

Chat models

ChatDoubleword (real-time)

Drop-in chat model. Use this in any LangChain or LangGraph workflow that expects a BaseChatModel.

from langchain_doubleword import ChatDoubleword

llm = ChatDoubleword(model="your-model-name")

response = llm.invoke("Explain bismuth in three sentences.")
print(response.content)

ChatDoublewordBatch (transparently batched)

Same interface, but every concurrent .ainvoke() call is collected by autobatcher and submitted via Doubleword's batch endpoint. Async-only — sync .invoke() raises.

Use this when:

  • The model you want is batch-only (some Doubleword-hosted models do not expose a real-time chat endpoint).
  • You're running a LangGraph workflow with parallel branches and want ~50% cost savings via batch pricing.

import asyncio
from langchain_doubleword import ChatDoublewordBatch

llm = ChatDoublewordBatch(model="batch-only-model")

async def main():
    # Concurrent calls collected into a single batch under the hood.
    results = await asyncio.gather(*[
        llm.ainvoke(f"Summarize chapter {i}") for i in range(50)
    ])
    for r in results:
        print(r.content)

asyncio.run(main())

Tuning autobatcher

Four autobatcher.BatchOpenAI knobs are exposed as constructor arguments:

Argument               Default  Purpose
batch_size             1000     Submit a batch once this many requests are queued.
batch_window_seconds   10.0     Submit a batch after this many seconds even if the size cap is not reached.
poll_interval_seconds  5.0      How often autobatcher polls for batch completion.
completion_window      "24h"    Doubleword batch completion window. "1h" is more expensive but faster.

llm = ChatDoublewordBatch(
    model="your-model",
    batch_size=250,           # smaller batches for fast-turnaround LangGraph nodes
    batch_window_seconds=2.5, # don't make latency-sensitive calls wait 10s
    completion_window="1h",   # pay more, finish quicker
)

The same arguments are available on DoublewordEmbeddingsBatch.
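The interplay between batch_size and batch_window_seconds can be illustrated with a toy collector. This is a simplified stand-in for autobatcher, not its real implementation: a batch is flushed when either the size cap fills or the window timer fires, whichever comes first.

```python
import asyncio

class ToyBatcher:
    """Simplified stand-in for autobatcher's collection logic: flush when
    batch_size requests are queued OR batch_window_seconds elapse."""

    def __init__(self, batch_size: int = 1000, batch_window_seconds: float = 10.0):
        self.batch_size = batch_size
        self.batch_window = batch_window_seconds
        self.pending: list[tuple[str, asyncio.Future]] = []
        self.flushed_batches: list[list[str]] = []
        self._timer: asyncio.Task | None = None

    async def submit(self, prompt: str) -> str:
        fut = asyncio.get_running_loop().create_future()
        self.pending.append((prompt, fut))
        if len(self.pending) >= self.batch_size:
            self._flush()  # size cap reached: flush immediately
        elif self._timer is None:
            self._timer = asyncio.create_task(self._window())
        return await fut

    async def _window(self) -> None:
        try:
            await asyncio.sleep(self.batch_window)
        except asyncio.CancelledError:
            return  # a size-cap flush beat the timer
        self._timer = None
        self._flush()  # window expired: flush whatever is queued

    def _flush(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None
        batch, self.pending = self.pending, []
        self.flushed_batches.append([p for p, _ in batch])
        for prompt, fut in batch:  # pretend the batch API answered
            fut.set_result(f"response to {prompt!r}")
```

With batch_size=3 and five concurrent submits, the first three flush at the size cap and the remaining two flush when the window expires, which is why smaller windows suit latency-sensitive LangGraph nodes.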

Embeddings

from langchain_doubleword import DoublewordEmbeddings, DoublewordEmbeddingsBatch

embed = DoublewordEmbeddings(model="your-embedding-model")
vec = embed.embed_query("hello world")

# Or, transparently batched:
batch_embed = DoublewordEmbeddingsBatch(model="your-embedding-model")
# vecs = await batch_embed.aembed_documents([...])
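Both classes return plain lists of floats (the standard LangChain Embeddings contract), so ordinary vector math applies downstream. A minimal similarity helper, with the Doubleword calls shown only as hypothetical usage in comments:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors
    (plain lists of floats, as returned by embed_query)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# With real Doubleword embeddings this would rank documents by relevance:
#   vecs = embed.embed_documents(docs)
#   scores = [cosine_similarity(embed.embed_query(query), v) for v in vecs]
```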

Use with LangGraph

ChatDoubleword and ChatDoublewordBatch are standard BaseChatModel implementations, so they slot into any LangGraph node:

from langgraph.graph import StateGraph, MessagesState, END
from langchain_doubleword import ChatDoublewordBatch

llm = ChatDoublewordBatch(model="your-model")

async def call_model(state: MessagesState):
    response = await llm.ainvoke(state["messages"])
    return {"messages": [response]}

# MessagesState gives the graph a proper schema whose "messages" key
# appends (rather than overwrites) on each node's return value.
graph = StateGraph(MessagesState)
graph.add_node("model", call_model)
graph.set_entry_point("model")
graph.add_edge("model", END)
app = graph.compile()

When several model nodes execute in parallel (e.g. via Send or fan-out edges), autobatcher collects their requests into a single batch.

Configuration

Argument  Env var              Default
api_key   DOUBLEWORD_API_KEY   (required)
base_url  DOUBLEWORD_API_BASE  https://api.doubleword.ai/v1
model     (none)               (required)

All other arguments accepted by langchain_openai.ChatOpenAI are forwarded unchanged (temperature, max_tokens, model_kwargs, timeout, etc.).

License

MIT
