
A building block library for composed agent workflows

Project description

agenticblocks 🧱

A composable building block library for AI agent workflows.


Philosophy

A library to build agent workflows like Lego blocks. Each step in your agentic pipeline is a self-contained block, with strictly typed inputs and outputs via Pydantic and natively concurrent execution using AsyncIO and NetworkX graphs.

  • Strong typing: Pydantic validates connections and catches mismatched dependencies between blocks and LLM tool calls.
  • Standardized connections: Blocks only know their own inputs and outputs, so entire workflows can later be composed as single blocks.
  • Smart Parallelism (Waves): The asyncio engine fires concurrent tasks (waves) as soon as their dependencies are resolved, maximizing throughput against external APIs (see the sketch after this list).
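
The wave idea can be illustrated with plain asyncio and NetworkX, independent of the library's own WorkflowGraph/WorkflowExecutor classes. This is only a sketch of the scheduling pattern (the block names and the run_block helper are made up for the example): nodes whose dependencies are already satisfied form a generation and are awaited together.

import asyncio
import networkx as nx

async def run_block(name: str) -> str:
    # Stand-in for a real Block: pretend to call a slow API.
    await asyncio.sleep(0.1)
    return f"{name} done"

async def main():
    # fetch_a and fetch_b are independent; merge depends on both.
    g = nx.DiGraph()
    g.add_edges_from([("fetch_a", "merge"), ("fetch_b", "merge")])

    # Each topological generation is a "wave" of nodes whose
    # dependencies are already resolved; run the whole wave at once.
    for wave in nx.topological_generations(g):
        results = await asyncio.gather(*(run_block(n) for n in wave))
        print(wave, "->", results)

asyncio.run(main())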

Getting Started

Install the module locally for development:

pip install -e .
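
If you are installing the released package from PyPI instead, note that the distribution is published as agenticblocks_io (see the files listed further down this page), while the import name remains agenticblocks:

pip install agenticblocks_io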

1. Define Input and Output Models

from pydantic import BaseModel

class HelloInput(BaseModel):
    name: str

class HelloOutput(BaseModel):
    greeting: str
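
Because these are ordinary Pydantic models, invalid inputs are rejected before a block ever runs. A quick check using the HelloInput model above (this is standard Pydantic behaviour, not library-specific):

from pydantic import ValidationError

try:
    HelloInput()              # 'name' is required, so this raises
except ValidationError as err:
    print(err)                # reports the missing 'name' field

print(HelloInput(name="Alice"))  # a valid instance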

2. Create the Logic Block

from agenticblocks.core.block import Block

class HelloWorldBlock(Block[HelloInput, HelloOutput]):
    name: str = "say_hello"
    
    async def run(self, input: HelloInput) -> HelloOutput:
        msg = f"Hello, {input.name}! Welcome to agenticblocks."
        return HelloOutput(greeting=msg)
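
The generic parameters in Block[HelloInput, HelloOutput] tie the block to its Pydantic models, which is presumably what lets the graph validate that connected blocks exchange compatible types (see the Philosophy section above).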

3. Connect and Execute

import asyncio
from agenticblocks.core.graph import WorkflowGraph
from agenticblocks.runtime.executor import WorkflowExecutor

async def main():
    graph = WorkflowGraph()
    graph.add_block(HelloWorldBlock(name="say_hello"))

    executor = WorkflowExecutor(graph)
    ctx = await executor.run(initial_input={"name": "Alice"})
    
    print(ctx.get_output("say_hello").greeting)

asyncio.run(main())
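
Running this script should print the greeting produced by the block:

Hello, Alice! Welcome to agenticblocks.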

4. LLM Agent Autonomy & A2A

The library ships with LLMAgentBlock, a ready-to-use orchestrator that dynamically exposes your other Blocks as tools, enabling smooth Agent-to-Agent (A2A) delegation.

  • Unbounded Reasoning Loop: max_iterations=None by default, so long autonomous tasks are never cut off abruptly; set an explicit limit per agent if needed.
  • Connection Pooling & Advanced API Parameters: Pass HTTP client instances (e.g., httpx.AsyncClient()) or any provider-specific API argument via litellm_kwargs to reuse connections and avoid repeated TLS handshake latency (see the sketch below).
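
A minimal sketch of wiring an agent with these options. The constructor parameters shown here (model, tools, max_iterations, litellm_kwargs) and the import path are assumptions based on the description above, not the confirmed API, so check the library's docstrings and the examples/ directory for the real signature:

import httpx
from agenticblocks.agents.llm_agent import LLMAgentBlock  # import path is a guess

# Reuse one HTTP client across all LLM calls to avoid repeated TLS handshakes.
http_client = httpx.AsyncClient()

agent = LLMAgentBlock(
    name="planner",
    model="ollama/granite4:1b",                 # litellm-style model string
    tools=[HelloWorldBlock(name="say_hello")],  # other Blocks exposed as tools
    max_iterations=10,                          # or None for an unbounded loop
    litellm_kwargs={
        "client": http_client,   # key name assumed from the pooling note above
        "temperature": 0.2,
    },
)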

Check the examples/ directory for full demos:

  • 01_hello_world.py: A clean, basic run of the introductory tutorial.
  • 03_mcp_a2a_agent.py: The framework automatically bridging Blocks to LLMs and handling call delegation between two looping LLM agents (A2A).
  • 05_basic_blocks.py: A strict native test for measuring overheads.

Examples & Model Recommendations

It is recommended to install Ollama with the model granite4:1b (ollama run granite4:1b) to test the examples locally. Alternatively, you can modify the examples to use a commercial API, such as Gemini (gemini/gemini-3.1-flash-lite-preview) or OpenAI.

Note: Quantized or smaller models such as granite may fail or reason worse than expected. Large commercial models almost always yield excellent results, but they require extra configuration (setting the API key environment variable), and free tiers may hit rate limits and other restrictions; a paid plan is the most stable option.
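
With litellm-style model strings, commercial providers typically read their API key from an environment variable (for example GEMINI_API_KEY for gemini/* models and OPENAI_API_KEY for OpenAI). A minimal way to set it before running an example, assuming the examples accept the model names shown above:

import os

# Set the key before any LLM call is made (or export it in your shell instead).
os.environ["GEMINI_API_KEY"] = "your-key-here"
# os.environ["OPENAI_API_KEY"] = "sk-..."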





Download files

Download the file for your platform.

Source Distribution

agenticblocks_io-0.6.0.tar.gz (16.0 kB)

Uploaded Source

Built Distribution


agenticblocks_io-0.6.0-py3-none-any.whl (20.2 kB)

Uploaded Python 3

File details

Details for the file agenticblocks_io-0.6.0.tar.gz.

File metadata

  • Download URL: agenticblocks_io-0.6.0.tar.gz
  • Upload date:
  • Size: 16.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for agenticblocks_io-0.6.0.tar.gz

  • SHA256: 0db636d1ec43b55a4a2fbbb685af9ccfd4d7b8ca0a298fd35d7d80783ebfbac3
  • MD5: 8b39990d6d573931358e88ba257745a8
  • BLAKE2b-256: ac049e972eb9ef6a7474d3009971f456bef94ff95d5a5086e46397a994944422


Provenance

The following attestation bundles were made for agenticblocks_io-0.6.0.tar.gz:

Publisher: workflow.yml on gilzamir18/agenticblocks


File details

Details for the file agenticblocks_io-0.6.0-py3-none-any.whl.

File metadata

File hashes

Hashes for agenticblocks_io-0.6.0-py3-none-any.whl

  • SHA256: cab21e4fbf40f3d0ae59dc8f29909f45f6d966b5168d1150b4c59d24ac46e93d
  • MD5: da5119e6f61e595496ca99d38c066bb9
  • BLAKE2b-256: b2e23515f4d89d4c1a038c48ef2fa1525b8ac77677a3427d2c2d7835b1a5b971


Provenance

The following attestation bundles were made for agenticblocks_io-0.6.0-py3-none-any.whl:

Publisher: workflow.yml on gilzamir18/agenticblocks

