
agenticblocks 🧱

A composable building block library for AI agent workflows.


Philosophy

A library to build agent workflows like Lego blocks. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via Pydantic and natively concurrent execution using AsyncIO and NetworkX graphs.

  • Strong typing: Pydantic validates connections and catches mismatched dependencies between blocks and LLM tool calls before execution.
  • Standardized connections: Blocks only know their own inputs and outputs, so an entire workflow can later be composed as a single block.
  • Smart Parallelism (Waves): The asyncio engine fires simultaneous tasks (waves) as soon as their dependencies are resolved, maximizing API throughput.
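The wave mechanic can be sketched without the library. The hypothetical mini-scheduler below (plain dicts in place of agenticblocks' NetworkX graphs; all names are invented for illustration) collects every block whose dependencies are already satisfied and runs that wave concurrently with asyncio.gather:

```python
import asyncio

async def run_waves(deps, tasks):
    """deps: {block: set of blocks it waits on}; tasks: {block: async fn}."""
    done, waves = set(), []
    while len(done) < len(deps):
        # A wave is every not-yet-run block whose dependencies are all done.
        wave = [b for b in deps if b not in done and deps[b] <= done]
        if not wave:
            raise ValueError("dependency cycle detected")
        await asyncio.gather(*(tasks[b]() for b in wave))
        done.update(wave)
        waves.append(wave)
    return waves

# Diamond graph a -> (b, c) -> d: b and c land in the same wave.
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

async def noop():
    pass

tasks = {b: noop for b in deps}
waves = asyncio.run(run_waves(deps, tasks))
print(waves)  # [['a'], ['b', 'c'], ['d']]
```

Blocks b and c wait only on a, so they run in the same gather call rather than sequentially.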

Getting Started

Install the module locally for development:

pip install -e .

1. Define Input and Output Models

from pydantic import BaseModel

class HelloInput(BaseModel):
    name: str

class HelloOutput(BaseModel):
    greeting: str

2. Create the Logic Block

from agenticblocks.core.block import Block

class HelloWorldBlock(Block[HelloInput, HelloOutput]):
    name: str = "say_hello"
    
    async def run(self, input: HelloInput) -> HelloOutput:
        msg = f"Hello, {input.name}! Welcome to agenticblocks."
        return HelloOutput(greeting=msg)
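If the Block[HelloInput, HelloOutput] subscript looks unfamiliar, it is standard Python generics. A minimal stand-in (dataclasses instead of Pydantic, and not the library's actual base class) shows the pattern and runs without agenticblocks installed:

```python
import asyncio
from dataclasses import dataclass
from typing import Generic, TypeVar

TIn = TypeVar("TIn")
TOut = TypeVar("TOut")

class Block(Generic[TIn, TOut]):
    """Stand-in base class: a typed async step (illustrative only)."""
    name: str = "block"

    async def run(self, input: TIn) -> TOut:
        raise NotImplementedError

@dataclass
class HelloInput:
    name: str

@dataclass
class HelloOutput:
    greeting: str

class HelloWorldBlock(Block[HelloInput, HelloOutput]):
    name = "say_hello"

    async def run(self, input: HelloInput) -> HelloOutput:
        return HelloOutput(greeting=f"Hello, {input.name}!")

out = asyncio.run(HelloWorldBlock().run(HelloInput(name="Alice")))
print(out.greeting)  # Hello, Alice!
```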

3. Connect and Execute

import asyncio
from agenticblocks.core.graph import WorkflowGraph
from agenticblocks.runtime.executor import WorkflowExecutor

async def main():
    graph = WorkflowGraph()
    graph.add_block(HelloWorldBlock(name="say_hello"))

    executor = WorkflowExecutor(graph)
    ctx = await executor.run(initial_input={"name": "Alice"})
    
    print(ctx.get_output("say_hello").greeting)

asyncio.run(main())
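The "workflows act as single blocks" claim from the Philosophy section can be illustrated with a standalone sketch (Pipeline, Upper, and Exclaim are invented for illustration, not part of agenticblocks): anything exposing an async run(input) method composes, so a pipeline nests inside another pipeline like any other block.

```python
import asyncio

class Upper:
    async def run(self, input: str) -> str:
        return input.upper()

class Exclaim:
    async def run(self, input: str) -> str:
        return input + "!"

class Pipeline:
    """A whole workflow that is itself usable as a single block."""
    def __init__(self, *blocks):
        self.blocks = blocks

    async def run(self, input):
        for block in self.blocks:      # run each stage in order
            input = await block.run(input)
        return input

inner = Pipeline(Upper(), Exclaim())   # a two-block workflow...
outer = Pipeline(inner, Exclaim())     # ...nested inside a bigger one
print(asyncio.run(outer.run("hello")))  # HELLO!!
```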

4. LLM Agent Autonomy & A2A

The library ships LLMAgentBlock, a ready-to-use orchestrator that dynamically exposes your other Blocks to the LLM as tools, enabling agent-to-agent (A2A) delegation.

  • Unbounded Reasoning Loop: Defaults to max_iterations=None so long autonomous tasks are not cut off abruptly; set an explicit limit per agent when needed.
  • Connection Pooling & Advanced API Parameters: Pass an HTTP client instance (e.g., httpx.AsyncClient()) or any provider-specific argument via litellm_kwargs to reuse connections and skip repeated TLS handshake delays.
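The README does not show how the Block-to-tool translation works internally, but the general idea behind such a layer can be sketched with stdlib introspection: derive an OpenAI-style function-calling schema from the fields of a block's typed input model. Everything below (block_to_tool, SearchInput, the type map) is invented for illustration, not the library's API.

```python
from dataclasses import dataclass, fields

# Map Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

@dataclass
class SearchInput:        # a block's input model (dataclass stand-in)
    query: str
    max_results: int

def block_to_tool(name, description, input_cls):
    """Derive a function-calling tool schema from the input model's fields."""
    props = {f.name: {"type": PY_TO_JSON[f.type]} for f in fields(input_cls)}
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": props,
                "required": [f.name for f in fields(input_cls)],
            },
        },
    }

tool = block_to_tool("search", "Search the web", SearchInput)
print(tool["function"]["parameters"]["properties"])
# {'query': {'type': 'string'}, 'max_results': {'type': 'integer'}}
```

With a translation like this, an agent loop can hand every block's schema to the model and dispatch tool calls back to the matching block's run() method.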

Check the examples/ directory for full demos.

Examples & Model Recommendations

It is recommended to install Ollama with the model granite4:1b (ollama run granite4:1b) to test the examples locally. Alternatively, you can modify the examples to use a commercial API, such as Gemini (gemini/gemini-3.1-flash-lite-preview) or OpenAI.

Note: Quantized or smaller models such as granite may fail or produce weaker-than-expected reasoning. Large commercial models almost always yield excellent results but require extra configuration (setting the API key environment variable), and they may still fail under free-tier restrictions and rate limits; a paid tier is a solid option for stable operation.


The complete interactive demos live in the examples/ directory:

  • 01_hello_world.py: A clean, minimal version of the introductory tutorial.
  • 03_mcp_a2a_agent.py: The framework automatically bridging Blocks to LLMs and delegating calls between two looping LLM agents (A2A).
  • 05_basic_blocks.py: A strict native test for measuring overheads.
