# lc-agent-factory
A thin, config-driven wrapper around LangChain's `create_agent`.

Our `create_agent(config, **kwargs)` accepts a `config` parameter in addition to the keyword arguments of LangChain's `create_agent`, letting you declare complex object arguments (e.g. model, middleware, tools) in a plain dict or an external file instead of wiring them up in code. The returned agent is exactly what LangChain's `create_agent` returns. No magic, no lock-in.
Note: Not all `create_agent` parameters are configurable yet; pass unsupported ones directly as `kwargs`. When a parameter appears in both, `kwargs` takes priority for scalar values, while lists are merged.
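The precedence rule above can be sketched as follows (an illustrative model of the documented behavior, not the library's actual code):

```python
def merge_param(config_value, kwarg_value):
    """Sketch of the documented precedence: kwargs win for scalars, lists merge."""
    if kwarg_value is None:
        return config_value          # no kwarg given: keep the config value
    if isinstance(config_value, list) and isinstance(kwarg_value, list):
        return config_value + kwarg_value  # lists from both sources are merged
    return kwarg_value               # scalars: the kwarg takes priority

print(merge_param(0.7, 0.2))                 # scalar: kwarg wins -> 0.2
print(merge_param(["search"], ["calc"]))     # lists merged -> ['search', 'calc']
```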
```python
from lc_agent_factory import create_agent

agent = create_agent(config)  # config is just a dict
```
create_agent() accepts a plain dict. Load it however fits your stack — YAML file, JSON, environment variable, database, or constructed directly in code.
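For example, the same config could arrive as a JSON string, say from an environment variable (the variable name `LC_AGENT_CONFIG` here is arbitrary, chosen for illustration):

```python
import json
import os

# Fall back to an inline JSON document when the variable is unset.
raw = os.environ.get("LC_AGENT_CONFIG") or """
{
  "models": {
    "primary": {"model_provider": "openai", "model": "gpt-4o", "temperature": 0}
  }
}
"""
config = json.loads(raw)  # a plain dict, ready to pass to create_agent()
print(config["models"]["primary"]["model"])
```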
## Install

```shell
pip install lc-agent-factory
```

Then install your LLM provider:

```shell
pip install langchain-openai        # OpenAI
pip install langchain-anthropic     # Anthropic
pip install langchain-google-genai  # Google GenAI
# see https://python.langchain.com/docs/integrations/chat/ for all providers
```
## Quickstart

1. Create a config file:

```yaml
# config.yaml
models:
  primary:
    model_provider: google_genai
    model: gemini-2.5-flash
    temperature: 0
middleware:
  prebuilt:
    - ModelCallLimitMiddleware:
        run_limit: 10
        exit_behavior: 'end'
```
2. Set your API key:

```shell
export GOOGLE_API_KEY="..."
```
3. Run:

```python
import yaml

from lc_agent_factory import create_agent

with open("config.yaml") as f:
    config = yaml.safe_load(f)

agent = create_agent(config)
res = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
print(res["messages"][-1].content)
```
## Configuration reference

### `globals` (optional)

```yaml
globals:
  set_debug: false    # default: false
  set_verbose: false  # default: false
```
### `models`

Named model configurations. `primary` is required. Any additional models (e.g. `fallback`) are referenced by name in middleware.

```yaml
models:
  primary:
    model_provider: openai  # any init_chat_model provider
    model: gpt-4o
    temperature: 0
    timeout: 60
  fallback:
    model_provider: openai
    model: gpt-4o-mini
    temperature: 0
```

All keys under a model entry are passed as-is to LangChain's `init_chat_model`; refer to its docs for the options available per provider.
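In other words, a model entry is splatted into `init_chat_model` as keyword arguments, roughly like this (illustrative sketch; the actual call requires the provider package to be installed, so it is shown commented out):

```python
# The `primary` entry from the config above, as a plain dict.
primary = {
    "model_provider": "openai",
    "model": "gpt-4o",
    "temperature": 0,
    "timeout": 60,
}

# Conceptually the factory does the equivalent of:
#   from langchain.chat_models import init_chat_model
#   model = init_chat_model(**primary)
# i.e. every key in the entry becomes a keyword argument of init_chat_model.
print(sorted(primary))
```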
### `middleware.prebuilt` (optional)

A list of LangChain built-in middleware. Each entry is a single-key dict: `{MiddlewareClassName: {kwargs}}`.

```yaml
middleware:
  prebuilt:
    # Stop after N model calls
    - ModelCallLimitMiddleware:
        run_limit: 10
        exit_behavior: 'end'
    # Stop after N total tool calls
    - ToolCallLimitMiddleware:
        run_limit: 20
    # Per-tool call limit
    - ToolCallLimitMiddleware:
        tool_name: 'web_search'
        run_limit: 5
    # Retry failed model calls
    - ModelRetryMiddleware:
        max_retries: 3
        backoff_factor: 2.0
        initial_delay: 1.0
    # Retry failed tool calls
    - ToolRetryMiddleware:
        max_retries: 2
        backoff_factor: 2.0
        initial_delay: 1.0
    # Automatically switch to the fallback model on failure
    - ModelFallbackMiddleware:
        first_model: 'fallback'
    # Trim old tool messages when the context grows too large
    - ContextEditingMiddleware:
        token_count_method: 'approximate'
        edits:
          - ClearToolUsesEdit:
              trigger: 50000
              keep: 3
              clear_tool_inputs: false
              placeholder: '[cleared]'
```

Kwargs are passed as-is to the LangChain middleware constructors; refer to the LangChain middleware docs for the full list of classes and their options.
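The `{ClassName: {kwargs}}` convention maps to constructor calls roughly as follows (an illustrative sketch with a stand-in class and registry, not the library's code):

```python
# Hypothetical stand-in for a LangChain middleware class.
class ModelCallLimitMiddleware:
    def __init__(self, run_limit, exit_behavior="end"):
        self.run_limit = run_limit
        self.exit_behavior = exit_behavior

# Registry of available middleware classes, keyed by name.
REGISTRY = {"ModelCallLimitMiddleware": ModelCallLimitMiddleware}

def build(entry):
    """Turn a single-key dict {ClassName: {kwargs}} into an instance."""
    (name, kwargs), = entry.items()       # exactly one key per entry
    return REGISTRY[name](**kwargs)       # kwargs go straight to the constructor

mw = build({"ModelCallLimitMiddleware": {"run_limit": 10, "exit_behavior": "end"}})
print(mw.run_limit)  # 10
```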
### `middleware.custom` (optional)

Same structure as `middleware.prebuilt`, for your own `AgentMiddleware` subclasses.
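For example, assuming you have a subclass named `MyLoggingMiddleware` (a hypothetical name for illustration), it would be declared the same way:

```yaml
middleware:
  custom:
    - MyLoggingMiddleware:   # your AgentMiddleware subclass
        level: 'info'        # kwargs passed to its constructor
```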
## Adding tools

Pass tools at call time; they are merged and deduplicated with any tools returned by internal loaders:

```python
from langchain.tools import tool

@tool
def web_search(query: str) -> str:
    """Search the web."""
    ...

agent = create_agent(config, tools=[web_search])
```
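The merge-and-dedup step can be modeled like this (an illustrative sketch assuming deduplication by tool name with the first occurrence kept, not the library's actual code):

```python
def merge_tools(config_tools, kwarg_tools):
    """Merge two tool lists, keeping the first tool seen for each name."""
    merged = {}
    for t in config_tools + kwarg_tools:
        merged.setdefault(t["name"], t)   # later duplicates are dropped
    return list(merged.values())

config_tools = [{"name": "web_search"}]
kwarg_tools = [{"name": "web_search"}, {"name": "calculator"}]
print([t["name"] for t in merge_tools(config_tools, kwarg_tools)])
# ['web_search', 'calculator']
```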
## Async

The returned agent is a compiled LangGraph and supports all four invocation modes out of the box:

```python
# sync
agent.invoke({"messages": [...]})

# async
await agent.ainvoke({"messages": [...]})

# streaming
for chunk in agent.stream({"messages": [...]}):
    print(chunk)

# async streaming
async for chunk in agent.astream({"messages": [...]}):
    print(chunk)
```
## License
MIT — free to use for any purpose, personal or commercial. See LICENSE.