agenticblocks 🧱
A composable building block library for AI agent workflows. / Uma biblioteca componível para construir fluxos de agentes de IA.
🇺🇸 English
Philosophy
A library to build agent workflows like Lego blocks. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via Pydantic and natively concurrent execution using AsyncIO and NetworkX graphs.
- Strong typing: Pydantic validates connections and prevents unmatched dependencies between LLM tool calls.
- Standardized connections: Blocks only know their own inputs and outputs. Thus, entire workflows can act as single blocks later.
- Smart Parallelism (Waves): The asyncio engine fires simultaneous tasks (waves) whenever dependencies are resolved, maximizing API speed.
- Native Cycles: Declare bounded feedback loops directly in the graph with `add_cycle()`. The executor handles iteration, feedback propagation, and exit conditions automatically.
- Functions as Tools: Any plain Python function (sync or async) becomes a block with `@as_tool`; no class boilerplate required.
- Focus on local open-source models: small models run well with this library, thanks to ready-made blocks that handle their limitations, such as `HeuristicLLMAgentBlock`, which heuristically extracts JSON-formatted tool calls from plain text and executes them transparently. See docs/heuristicagent.md for details.
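The "waves" idea can be sketched with plain asyncio and a dependency map (a toy illustration of the scheduling concept, not the library's actual engine; `run_waves` is a hypothetical name):

```python
# Toy sketch of wave-based scheduling: every node whose dependencies
# are all resolved runs concurrently as one "wave".
import asyncio

async def run_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """deps maps each node to the set of nodes it depends on.
    Returns the waves in execution order."""
    done: set[str] = set()
    waves: list[list[str]] = []
    while len(done) < len(deps):
        # a node is ready once all of its dependencies are done
        ready = [n for n in deps if n not in done and deps[n] <= done]
        if not ready:
            raise ValueError("cycle detected")
        # all ready nodes are awaited together, as a single wave
        await asyncio.gather(*(asyncio.sleep(0) for _ in ready))
        done.update(ready)
        waves.append(sorted(ready))
    return waves

waves = asyncio.run(run_waves({"a": set(), "b": set(), "c": {"a", "b"}}))
print(waves)  # [['a', 'b'], ['c']]
```

Here `a` and `b` have no dependencies, so they form the first wave; `c` waits for both and runs alone in the second.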
Getting Started
Install the module locally for development:
```bash
pip install -e .
```
1. Define Input and Output Models
```python
from pydantic import BaseModel

class HelloInput(BaseModel):
    name: str

class HelloOutput(BaseModel):
    greeting: str
```
2. Create the Logic Block
```python
from agenticblocks.core.block import Block

class HelloWorldBlock(Block[HelloInput, HelloOutput]):
    name: str = "say_hello"

    async def run(self, input: HelloInput) -> HelloOutput:
        msg = f"Hello, {input.name}! Welcome to agenticblocks."
        return HelloOutput(greeting=msg)
```
3. Connect and Execute
```python
import asyncio

from agenticblocks.core.graph import WorkflowGraph
from agenticblocks.runtime.executor import WorkflowExecutor

async def main():
    graph = WorkflowGraph()
    graph.add_block(HelloWorldBlock(name="say_hello"))
    executor = WorkflowExecutor(graph)
    ctx = await executor.run(initial_input={"name": "Alice"})
    print(ctx.get_output("say_hello").greeting)

asyncio.run(main())
```
4. Functions as Tools
Any Python function can be registered as a block with `@as_tool`. Both sync and async functions are supported; sync functions run in a thread pool automatically.
```python
from agenticblocks import as_tool
from agenticblocks.blocks.llm.agent import LLMAgentBlock

@as_tool
async def fetch_weather(city: str) -> str:
    """Returns the current weather for a city."""
    return f"Sunny in {city}."

agent = LLMAgentBlock(
    name="assistant",
    model="gpt-4o-mini",
    tools=[fetch_weather],  # same interface as any Block
)
```
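If you are curious how a decorator can make a sync function awaitable, here is a minimal sketch of the thread-pool idea (an illustration, not the library's actual `@as_tool` implementation; `as_async` and `slow_lookup` are hypothetical names):

```python
# Sketch: wrap a sync function so it can be awaited without blocking
# the event loop; async functions pass through unchanged.
import asyncio
import functools
import inspect

def as_async(func):
    """Return an awaitable version of func; sync functions are
    offloaded to a worker thread via asyncio.to_thread."""
    if inspect.iscoroutinefunction(func):
        return func

    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        return await asyncio.to_thread(func, *args, **kwargs)

    return wrapper

@as_async
def slow_lookup(city: str) -> str:
    return f"Sunny in {city}."  # imagine a blocking HTTP call here

result = asyncio.run(slow_lookup("Lisbon"))
print(result)  # Sunny in Lisbon.
```

The key point is that the caller always `await`s the same interface, regardless of whether the underlying function is sync or async.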
5. LLM Agent Autonomy & A2A
LLMAgentBlock is a ready-to-use orchestrator that transparently exposes your other Blocks to the model as tools (Agent-to-Agent).
- Bounded tool loop: `max_tool_calls` prevents runaway loops.
- A2A bridging: sub-agents are called as tools transparently; the parent LLM receives only the text response, not raw JSON metadata.
- Connection Pooling: Pass any `litellm_kwargs` (HTTP clients, timeouts, etc.) to optimize API performance.
6. Advanced Flow Control & Heuristics
- PromptBuilderBlock: Merges outputs from multiple predecessors into a single formatted `AgentInput` prompt using Python format strings. Useful for "diamond" graph patterns (e.g., feeding both the original topic and a search report to a final summarizer).
- HeuristicLLMAgentBlock: A specialized agent for models with weak native tool support (such as smaller local models). It heuristically parses hallucinated JSON tool calls out of plain-text responses and executes them transparently.
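The core trick behind heuristic tool-call parsing can be sketched in a few lines using the standard library (a simplified illustration of the approach; the real HeuristicLLMAgentBlock is more robust, and `extract_tool_calls` is a hypothetical name):

```python
# Sketch: scan free-form model output for embedded JSON objects that
# look like tool calls ({"tool": ..., "args": ...}).
import json

def extract_tool_calls(text: str) -> list[dict]:
    decoder = json.JSONDecoder()
    calls = []
    i = text.find("{")
    while i != -1:
        try:
            # raw_decode parses one JSON value starting at index i
            obj, end = decoder.raw_decode(text, i)
        except json.JSONDecodeError:
            i = text.find("{", i + 1)
            continue
        if isinstance(obj, dict) and "tool" in obj and "args" in obj:
            calls.append(obj)
        i = text.find("{", end)
    return calls

reply = 'On it! {"tool": "fetch_weather", "args": {"city": "Lisbon"}} Calling now.'
calls = extract_tool_calls(reply)
print(calls)  # [{'tool': 'fetch_weather', 'args': {'city': 'Lisbon'}}]
```

Each extracted call can then be dispatched to the matching tool block, with the surrounding chatter discarded.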
7. Native Feedback Cycles
Declare validator loops directly in the graph without any wrapper block:
```python
from agenticblocks import as_tool
from agenticblocks.core.graph import WorkflowGraph

@as_tool
def validate_output(content: str) -> dict:
    ok = len(content.split()) >= 100
    return {"is_valid": ok, "feedback": "Too short." if not ok else ""}

graph = WorkflowGraph()
graph.add_block(writer)  # writer: an LLM producer block defined elsewhere
graph.add_block(validate_output)
graph.add_cycle(
    name="refine",
    edges=[("writer", "validate_output")],
    condition_block="validate_output",
    max_iterations=3,
)

# Downstream nodes connect to the cycle output as a normal node
graph.connect("refine", "publisher")
```
The executor runs the cycle, propagates feedback to the producer on each rejection, and stores the result in ctx under the cycle name.
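The cycle semantics can be mimicked with a plain loop (a toy sketch of producer + validator + bounded iterations, not the executor's real code; `run_cycle` and the sample functions are hypothetical):

```python
# Sketch: run the producer until the validator accepts or the
# iteration budget is spent; rejections feed back into the producer.
def run_cycle(produce, validate, max_iterations: int = 3):
    feedback = ""
    for _ in range(max_iterations):
        content = produce(feedback)
        verdict = validate(content)
        if verdict["is_valid"]:
            return content
        feedback = verdict["feedback"]
    return content  # budget exhausted: return the last attempt

attempts = []

def produce(feedback: str) -> str:
    attempts.append(feedback)
    return "word " * (50 * len(attempts))  # output grows each iteration

def validate(content: str) -> dict:
    ok = len(content.split()) >= 100
    return {"is_valid": ok, "feedback": "" if ok else "Too short."}

result = run_cycle(produce, validate)
print(len(attempts))  # 2: the first draft is rejected, the second accepted
```

Note that the feedback string from each rejection reaches the producer on the next pass, which is what lets the draft improve.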
Examples & Model Recommendations
To run the examples locally, install Ollama and pull the granite4:1b model (`ollama run granite4:1b`). Alternatively, modify the examples to use a commercial API such as Gemini (`gemini/gemini-2.0-flash`) or OpenAI.
| Example | Description |
|---|---|
| `01_hello_world.py` | Minimal block + graph + executor setup |
| `02_llm_pipeline.py` | Parallel wave execution with multiple blocks |
| `03_mcp_a2a_agent.py` | MCP bridge + Agent-to-Agent (A2A) tool delegation |
| `04_mcp_python_native.py` | Native Python MCP server |
| `05_basic_blocks.py` | Overhead benchmarking |
| `06_functionastool.py` | `@as_tool` decorator for plain functions |
| `07_validator_loop.py` | Native graph cycle with producer + validator feedback loop |
Note: Quantized or small models like `granite` may produce lower-quality reasoning and struggle with native tool calling. For reliable local tool usage, use `llama3.1` or `mistral-nemo`. If using `granite4`, prefer the `HeuristicLLMAgentBlock` to capture hallucinated JSON tool calls. Large commercial models (OpenAI, Gemini, Anthropic) yield excellent results but require an API key.
🇧🇷 Português
Filosofia
Uma biblioteca para construir fluxos de agentes no estilo Lego. Cada passo do seu pipeline agêntico é um bloco auto-contido, com entradas e saídas rigorosamente tipadas via Pydantic e execução simultânea usando AsyncIO e grafos do NetworkX.
- Forte tipagem: Pydantic valida os encaixes e previne dependências não satisfeitas.
- Encaixes padronizados: Blocos só conhecem as próprias entradas e saídas. Workflows inteiros funcionam como blocos únicos.
- Paralelismo Inteligente (Ondas): O motor dispara tarefas simultâneas sempre que as dependências de um bloco são resolvidas.
- Ciclos Nativos: Declare loops de feedback diretamente no grafo com `add_cycle()`. O executor gerencia iteração, propagação de feedback e condição de saída automaticamente.
- Funções como Ferramentas: Qualquer função Python (síncrona ou async) vira um bloco com `@as_tool`, sem boilerplate de classe.
- Foco em modelos locais open-source: modelos pequenos rodam bem com esta biblioteca, pois provemos blocos prontos que lidam com suas limitações, como `HeuristicLLMAgentBlock`, que extrai heuristicamente chamadas de ferramenta em formato JSON do texto plano e as executa de forma transparente. Veja mais em docs/heuristicagent.md.
Primeiros Passos
Instale o módulo de forma local editável:
```bash
pip install -e .
```
1–3. Blocos, Grafo e Execução
A estrutura básica é idêntica ao tutorial acima (seção em inglês): defina modelos Pydantic, crie um Block, adicione ao WorkflowGraph e execute com WorkflowExecutor.
4. Funções como Ferramentas
Qualquer função pode ser registrada como bloco com @as_tool. Funções síncronas rodam em thread pool automaticamente.
```python
from agenticblocks import as_tool

@as_tool
def buscar_clima(cidade: str) -> str:
    """Retorna o clima atual de uma cidade."""
    return f"Ensolarado em {cidade}."
```
5. Autonomia com Agentes LLM & A2A
O LLMAgentBlock abstrai e converte sub-blocos em ferramentas nativas (A2A). Destaques:
- `max_tool_calls`: Limita o loop de ferramentas para evitar execuções infinitas.
- A2A transparente: Agentes subordinados são chamados como ferramentas; o agente pai recebe apenas o texto da resposta, sem metadados JSON brutos.
- Connection Pooling: Aceite sessões HTTP e parâmetros estendidos via `litellm_kwargs`.
6. Controle de Fluxo Avançado & Heurísticas
- PromptBuilderBlock: Mescla saídas de múltiplos predecessores em um único prompt `AgentInput` formatado. Ideal para padrões de grafo em "diamante".
- HeuristicLLMAgentBlock: Agente especializado para modelos com suporte fraco a chamadas de ferramentas (como modelos locais pequenos). Ele extrai heuristicamente chamadas de ferramenta em formato JSON do texto plano e as executa de forma transparente.
7. Ciclos de Feedback Nativos
Declare um loop validador diretamente no grafo — sem bloco orquestrador especial:
```python
from agenticblocks import as_tool
from agenticblocks.core.graph import WorkflowGraph

@as_tool
def validar(content: str) -> dict:
    ok = len(content.split()) >= 100
    return {"is_valid": ok, "feedback": "Muito curto." if not ok else ""}

graph = WorkflowGraph()
graph.add_block(escritor)  # escritor: bloco LLM produtor definido em outro lugar
graph.add_block(validar)
graph.add_cycle(
    name="refinar",
    edges=[("escritor", "validar")],
    condition_block="validar",
    max_iterations=3,
)

graph.connect("refinar", "publicador")
```
O executor itera automaticamente, injeta o feedback no prompt do produtor a cada rejeição e disponibiliza o resultado final em ctx.get_output("refinar").
Exemplos & Modelos
Recomenda-se instalar o Ollama com o modelo granite4:1b para testar localmente. Alternativamente, use uma API comercial como Gemini ou OpenAI.
| Exemplo | Descrição |
|---|---|
| `01_hello_world.py` | Setup mínimo: bloco + grafo + executor |
| `02_llm_pipeline.py` | Execução paralela em waves |
| `03_mcp_a2a_agent.py` | Bridge MCP + delegação A2A entre agentes |
| `04_mcp_python_native.py` | Servidor MCP nativo em Python |
| `05_basic_blocks.py` | Benchmark de overhead |
| `06_functionastool.py` | Decorator `@as_tool` para funções simples |
| `07_validator_loop.py` | Ciclo nativo no grafo: produtor + validador com feedback |
Atenção: Modelos quantizados ou menores podem produzir resultados abaixo do esperado e ter dificuldade com chamadas nativas de ferramentas. Para uso local confiável de ferramentas, prefira `llama3.1` ou `mistral-nemo`. Caso use `granite4`, utilize o `HeuristicLLMAgentBlock`. Modelos comerciais grandes geram excelentes resultados, mas exigem configuração de API key.