dspy-lm-auth
Pi-style LM authentication helpers for DSPy.
dspy-lm-auth makes it easy to reuse Pi-style credentials with dspy.LM, including ChatGPT Codex subscription auth.
What it does
- reuses Pi credentials from `~/.pi/agent/auth.json`
- resolves provider config values from:
  - literal strings
  - environment variable names
  - `!shell command` lookups
- supports OAuth login and token refresh flows for subscription-backed providers
- patches `dspy.LM` so model aliases and alternate auth routes work out of the box
Current support
- OpenAI Codex / ChatGPT Plus or Pro subscription
Install
pip install dspy-lm-auth
Or with uv:
uv pip install dspy-lm-auth
Quick start
import dspy
import dspy_lm_auth
# Optional: patch dspy.LM in place.
dspy_lm_auth.install()
# Reuse Pi's ChatGPT Codex login from ~/.pi/agent/auth.json.
lm = dspy.LM("codex/gpt-5.4")
dspy.configure(lm=lm)
print(lm("hello")[0]["text"])
You can also keep the original model string and apply the Codex auth route explicitly:
import dspy_lm_auth
lm = dspy_lm_auth.LM("openai/gpt-5.4", auth_provider="codex")
print(lm("hello")[0]["text"])
A very cheap laptop stack: uv + llama-cpp + baguettotron + dspy
People often ask for a free stack to try DSPy locally on a laptop.
A nice minimal stack is:
- `uv` for env management
- `llama-cpp-python[server]` for OpenAI-compatible local serving
- Baguettotron-GGUF as the local model
- DSPy for the program
- `dspy-lm-auth` for the Codex reflection model when you want a stronger optimizer / teacher model
Start a local OpenAI-compatible server with llama-cpp:
uv venv
source .venv/bin/activate
uv pip install "llama-cpp-python[server]" huggingface-hub dspy dspy-lm-auth
uv run python -m llama_cpp.server \
--host 127.0.0.1 \
--port 8000 \
--hf_model_repo_id P1eIAS/Baguettotron-GGUF \
--model Baguettotron-BF16.gguf
If that GGUF is too large for your machine, swap in a smaller GGUF from the same repo.
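For example, you can keep the same server command and point `--model` at a quantized file instead. The filename below is illustrative; check which quantizations the P1eIAS/Baguettotron-GGUF repo actually lists:
# Same server, pointing --model at a smaller (hypothetical) quantized file.
uv run python -m llama_cpp.server \
--host 127.0.0.1 \
--port 8000 \
--hf_model_repo_id P1eIAS/Baguettotron-GGUF \
--model Baguettotron-Q4_K_M.gguf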
Then connect to it from DSPy:
import dspy
local_lm = dspy.LM(
"openai/local-model",
api_base="http://localhost:8000/v1",
api_key="",
model_type="chat",
)
dspy.configure(lm=local_lm, adapter=dspy.JSONAdapter())
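As a quick sanity check that the local server responds (a minimal sketch; it assumes the llama-cpp server from the step above is still running on port 8000):
# Throwaway module just to exercise the configured local model.
qa = dspy.Predict("question -> answer")
print(qa(question="What is the capital of France?").answer)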
Tiny GEPA demo: optimize a French→English translator
This is a small self-contained demo that:
- uses a local `llama-cpp` server running Baguettotron as the student model
- uses your ChatGPT Codex subscription via `dspy-lm-auth` as the GEPA reflection model
- optimizes a tiny translator on just 10 French→English examples
import dspy
import dspy_lm_auth
# Patch dspy.LM so `codex/...` works.
dspy_lm_auth.install()
# Student model: local llama-cpp server.
student_lm = dspy.LM(
"openai/local-model",
api_base="http://localhost:8000/v1",
api_key="",
model_type="chat",
)
# Reflection model: stronger model used by GEPA to improve the prompt.
reflection_lm = dspy.LM("codex/gpt-5.4")
# DSPy program inference uses the cheap local model.
dspy.configure(lm=student_lm, adapter=dspy.JSONAdapter())
class TranslateFrenchToEnglish(dspy.Signature):
    """Translate the French input into short, natural English."""

    french: str = dspy.InputField(desc="French sentence")
    english: str = dspy.OutputField(desc="Natural English translation")
translator = dspy.Predict(TranslateFrenchToEnglish)
pairs = [
("bonjour", "hello"),
("merci beaucoup", "thank you very much"),
("où est la gare ?", "where is the train station?"),
("je suis fatigué", "I am tired"),
("il fait très chaud aujourd'hui", "it is very hot today"),
("je ne comprends pas", "I do not understand"),
("pouvez-vous m'aider ?", "can you help me?"),
("j'aime apprendre le français", "I like learning French"),
("nous arrivons demain matin", "we are arriving tomorrow morning"),
("combien ça coûte ?", "how much does it cost?"),
]
examples = [
dspy.Example(french=fr, english=en).with_inputs("french")
for fr, en in pairs
]
trainset = examples[:8]
valset = examples[8:]
def metric(gold, pred, trace=None, pred_name=None, pred_trace=None):
    guess = pred.english.strip()
    target = gold.english.strip()
    exact = guess.lower() == target.lower()
    score = 1.0 if exact else 0.0
    if exact:
        feedback = (
            "Exact match. Keep translations short, natural, and direct. "
            "Do not add explanations."
        )
    else:
        feedback = (
            f"Expected {target!r} but got {guess!r}. "
            "Prefer direct, idiomatic English. Preserve tense, pronouns, and politeness. "
            "Do not explain the translation or add extra words."
        )
    return dspy.Prediction(score=score, feedback=feedback)
gepa = dspy.GEPA(
metric=metric,
reflection_lm=reflection_lm,
auto="light",
track_stats=True,
)
optimized = gepa.compile(translator, trainset=trainset, valset=valset)
print("Optimized instruction:\n")
print(optimized.signature.instructions)
print()
print(optimized(french="je ne comprends pas").english)
print(optimized(french="combien ça coûte ?").english)
Notes:
- `student_lm` is the cheap local model you serve yourself.
- `reflection_lm` is the stronger model GEPA uses to improve the prompt.
- `auto="light"` keeps the demo small enough for a laptop workflow.
- if your local server requires a specific model name, replace `openai/local-model` with the one exposed by your `llama-cpp` server.
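For a rough before/after comparison on the validation split, `dspy.Evaluate` can reuse the same metric through a thin wrapper that returns just the score (a sketch, assuming the `metric`, `translator`, `optimized`, and `valset` objects defined above):
# dspy.Evaluate expects a numeric score, so unwrap the
# dspy.Prediction(score=..., feedback=...) that the GEPA metric returns.
def score_only(gold, pred, trace=None):
    return metric(gold, pred, trace).score

evaluate = dspy.Evaluate(devset=valset, metric=score_only, display_progress=True)
print("baseline :", evaluate(translator))
print("optimized:", evaluate(optimized))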
Login
If you do not already have credentials stored in Pi's auth file:
import dspy_lm_auth
# Starts the OAuth flow and writes credentials to ~/.pi/agent/auth.json.
dspy_lm_auth.login("codex")
Credential resolution
API key credentials can be stored as:
- a literal value
- an environment variable name
- a shell lookup prefixed with `!`
Examples:
{
  "some-provider": {
    "type": "api_key",
    "key": "OPENAI_API_KEY"
  }
}
{
  "some-provider": {
    "type": "api_key",
    "key": "!op read op://Private/openai/api_key --no-newline"
  }
}
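A literal value can be stored the same way; the key below is a placeholder shown only for illustration:
{
  "some-provider": {
    "type": "api_key",
    "key": "sk-your-literal-api-key"
  }
}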
Development
uv sync --extra dev
uv run pytest
uv run ruff check src tests
Roadmap
The package is structured so more Pi-like providers can be added later, for example:
- Anthropic subscription auth
- GitHub Copilot
- Gemini CLI
- Antigravity
License
MIT