A framework to create agents, tasks, and workflows.

IO Intelligence Agent Framework

IMPORTANT: This project is in beta and under rapid development; it may not be stable for production use.

This repository provides a flexible system for building and orchestrating agents and workflows. It offers two modes:

  • Client Mode: tasks call out to a remote API client (e.g., your client.py functions).
  • Local Mode: tasks run directly in the local environment, using run_agents(...) and local logic.

It also supports loading YAML or JSON workflows to define multi-step tasks.


Table of Contents

  1. Overview
  2. Installation
  3. Concepts
  4. Usage
  5. Examples
  6. API Endpoints
  7. License

Overview

The framework distills agent orchestration into three distinct pieces:

  • Agents
  • Tasks
  • Workflows

The Agent can be configured with:

  • Model Provider (e.g., OpenAI, Llama, etc.)
  • Tools (e.g., specialized functions)

Users can define tasks (like sentiment, translate_text, etc.) in local or client mode. They can also upload workflows (in YAML or JSON) to orchestrate multiple steps in sequence.


Installation

  1. Install the latest release:
pip install --upgrade iointel
  2. For UI features (Gradio interface), install with UI dependencies:
pip install --upgrade "iointel[ui]"
  3. Set the required environment variable:

    • OPENAI_API_KEY or IO_API_KEY for the default OpenAI-based ChatOpenAI.
  4. Optional environment variables:

    • AGENT_LOGGING_LEVEL to configure logging verbosity: DEBUG, INFO, etc.
    • OPENAI_API_BASE_URL or IO_API_BASE_URL to point at an OpenAI-compatible API implementation, such as https://api.intelligence.io.solutions/api/v1
    • OPENAI_API_MODEL or IO_API_MODEL to pick the specific LLM model used as the "agent brain", such as openai/gpt-oss-120b
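A typical shell setup combining these variables might look like the sketch below. The key, endpoint, and model values are placeholders; substitute your own.

```shell
# Placeholder values — substitute your real key, endpoint, and model.
export IO_API_KEY="io-your-key-here"
export IO_API_BASE_URL="https://api.intelligence.io.solutions/api/v1"
export IO_API_MODEL="openai/gpt-oss-120b"
export AGENT_LOGGING_LEVEL="INFO"
```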

Concepts

Agents

  • They can have a custom model (e.g., OpenAIModel, a Llama-based model, etc.).
  • Agents can have tools attached, which are specialized functions accessible during execution.
  • Agents can have a custom Persona Profile configured.

Tasks

  • A task is a single step in a workflow, e.g., schedule_reminder, sentiment, translate_text, etc.
  • Tasks are managed by the Workflow class in workflow.py.
  • Tasks can be chained for multi-step logic into a workflow (e.g., await Workflow(objective="...").translate_text().sentiment().run_tasks()).

Client Mode vs Local Mode

  • Local Mode: The system calls run_agents(...) directly in your local environment.
  • Client Mode: The system calls out to remote endpoints in a separate API.
    • In client_mode=True, each task (e.g. sentiment) triggers a client function (sentiment_analysis(...)) instead of local logic.

This allows you to switch between running tasks locally or delegating them to a server.
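Conceptually, the mode switch is just a dispatch on the client_mode flag. The sketch below is illustrative only: run_remote and run_local are hypothetical stand-ins for the client functions and run_agents(...), not iointel APIs.

```python
# Illustrative sketch of the client/local dispatch idea.
# run_remote and run_local are hypothetical stand-ins, not iointel APIs.

def run_remote(task_name: str, text: str) -> str:
    # Stand-in for a client function such as sentiment_analysis(...)
    return f"remote:{task_name}:{text}"

def run_local(task_name: str, text: str) -> str:
    # Stand-in for local execution via run_agents(...)
    return f"local:{task_name}:{text}"

def run_task(task_name: str, text: str, client_mode: bool) -> str:
    # client_mode=True delegates to the remote API; False runs locally.
    return run_remote(task_name, text) if client_mode else run_local(task_name, text)
```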

Workflows (YAML/JSON)

Note: this part is under active development and might not always function!

  • You can define multi-step workflows in YAML or JSON.
  • The endpoint /run-file accepts a file (via multipart form data).
    • First tries parsing the payload as JSON.
    • If that fails, it tries parsing the payload as YAML.
  • The file is validated against a WorkflowDefinition Pydantic model.
  • Each step has a type (e.g., "sentiment", "custom") and optional parameters (like agents, target_language, etc.).
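The JSON-first, YAML-fallback parsing described above can be sketched as follows. This is a simplified illustration, not the server's actual code; the YAML branch assumes PyYAML is installed.

```python
import json

def parse_workflow(payload: str) -> dict:
    """Parse a workflow payload: try JSON first, then fall back to YAML."""
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        pass
    # Fall back to YAML (requires PyYAML; imported lazily so the
    # JSON path works without it installed).
    import yaml
    return yaml.safe_load(payload)
```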

Usage

Creating Agents

from iointel import Agent

my_agent = Agent(
    name="MyAgent",
    instructions="You are a helpful agent.",
    # one can also pass custom model using pydantic_ai.models.openai.OpenAIModel
    # or pass args to OpenAIModel() as kwargs to Agent()
)

Creating an Agent with a Persona

from iointel import PersonaConfig, Agent


my_persona = PersonaConfig(
    name="Elandria the Arcane Scholar",
    age=164,
    role="an ancient elven mage",
    style="formal and slightly archaic",
    domain_knowledge=["arcane magic", "elven history", "ancient runes"],
    quirks="often references centuries-old events casually",
    bio="Once studied at the Grand Academy of Runic Arts",
    lore="Elves in this world can live up to 300 years",
    personality="calm, wise, but sometimes condescending",
    conversation_style="uses 'thee' and 'thou' occasionally",
    description="Tall, silver-haired, wearing intricate robes with arcane symbols",
    emotional_stability=0.85,
    friendliness=0.45,
    creativity=0.68,
    curiosity=0.95,
    formality=0.1,
    empathy=0.57,
    humor=0.99,
)

agent = Agent(
    name="ArcaneScholarAgent",
    instructions="You are an assistant specialized in arcane knowledge.",
    persona=my_persona
)

print(agent.instructions)

Building a Workflow

In Python code, you can create tasks by instantiating the Workflow class and chaining methods:

from iointel import Workflow

tasks = Workflow(objective="This is the text to analyze", client_mode=False)
(
  tasks
    .sentiment(agents=[my_agent])
    .translate_text(target_language="french")   # a second step
)

results = await tasks.run_tasks()
print(results)

Because client_mode=False, everything runs locally.

Running a Local Workflow

tasks = Workflow(objective="Breaking news: local sports team wins!", client_mode=False)
await tasks.summarize_text(max_words=50).run_tasks()

Running a Remote Workflow (Client Mode)

tasks = Workflow(objective="Breaking news: local sports team wins!", client_mode=True)
await tasks.summarize_text(max_words=50).run_tasks()

Now, summarize_text calls the client function (e.g., summarize_task(...)) instead of local logic.

Uploading YAML/JSON Workflows

Note: this part is under active development and might not always function!

1.	Create a YAML or JSON file specifying the workflow:
name: "My YAML Workflow"
text: "Large text to analyze"
workflow:
  - type: "sentiment"
  - type: "summarize_text"
    max_words: 20
  - type: "moderation"
    threshold: 0.7
  - type: "custom"
    name: "special-step"
    objective: "Analyze the text"
    instructions: "Use advanced analysis"
    context:
      extra_info: "some metadata"
2.	Upload via the /run-file endpoint (multipart file upload).

The server reads it as JSON or YAML and runs the tasks sequentially in local mode.

Examples

Simple Summarize Task

tasks = Workflow("Breaking news: new Python release!", client_mode=False)
await tasks.summarize_text(max_words=30).run_tasks()

Returns a summarized result.

Chainable Workflows

tasks = Workflow("Tech giant acquires startup for $2B", client_mode=False)
(tasks
   .translate_text(target_language="spanish")
   .sentiment()
)
results = await tasks.run_tasks()

This will:

1.	Translate the text to Spanish,
2.	Run sentiment analysis on the translation.

Custom Workflow

tasks = Workflow("Analyze this special text", client_mode=False)
tasks.custom(
    name="my-unique-step",
    objective="Perform advanced analysis",
    instructions="Focus on entity extraction and sentiment",
    agents=[my_agent],
    **{"extra_context": "some_val"}
)
results = await tasks.run_tasks()

A "custom" task can reference a custom function in the CUSTOM_WORKFLOW_REGISTRY or fall back to a default behavior.
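The registry-or-fallback lookup might be sketched like this. Apart from CUSTOM_WORKFLOW_REGISTRY, which the text names, every identifier here is hypothetical.

```python
# Illustrative sketch of the "custom" task dispatch described above.
# Only CUSTOM_WORKFLOW_REGISTRY is named in the docs; the rest is hypothetical.
CUSTOM_WORKFLOW_REGISTRY = {}

def register_custom(name):
    """Register a function to handle a named custom step."""
    def decorator(fn):
        CUSTOM_WORKFLOW_REGISTRY[name] = fn
        return fn
    return decorator

def default_custom(**kwargs):
    # Fallback behavior when no custom function is registered.
    return {"status": "default", **kwargs}

def run_custom_step(name, **kwargs):
    # Look up the registered handler, or fall back to the default.
    handler = CUSTOM_WORKFLOW_REGISTRY.get(name, default_custom)
    return handler(**kwargs)
```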

Loading From a YAML File

Note: this part is under active development and might not always function!

curl -X POST "https://api.intelligence.io.solutions/api/v1/workflows/run-file" \
     -F "yaml_file=@path/to/workflow.yaml"

API Endpoints

Please refer to the [IO.net documentation](https://docs.io.net/docs/exploring-ai-agents) for the available endpoints and their documentation.

License

See the LICENSE file for license rights and limitations (Apache 2.0).

IOIntel: Agentic Tools with Beautiful UI

Features

  • Agentic tool use: Agents can call Python tools, return results, and chain reasoning.
  • Rich tool call visualization: Tool calls and results are rendered as beautiful, gold-accented "pills" in both CLI (with rich) and Gradio UI.
  • Dynamic UI: Agents can generate forms (textboxes, sliders, etc.) on the fly in the Gradio app.
  • Live CSS theming: Agents can change the UI theme at runtime.
  • Jupyter compatible: The Gradio UI can be launched in a notebook cell.

Quickstart: CLI Usage

from iointel import Agent, register_tool

@register_tool
def add(a: float, b: float) -> float:
    return a + b

agent = Agent(
    name="Solar",
    instructions="You are a helpful assistant.",
    model="gpt-4o",
    api_key="sk-...",
    tools=[add],
    show_tool_calls=True,  # Pretty rich tool call output!
)

import asyncio
async def main():
    result = await agent.run("What is 2 + 2?", pretty=True)
    # Tool calls/results are shown in rich formatting!

asyncio.run(main())

Multimodal Support

iointel supports multimodal inputs through various content types:

from iointel import Agent, ImageUrl, BinaryContent, DocumentUrl, AudioUrl, VideoUrl

agent = Agent(
    name="VisionAgent",
    instructions="You are a helpful vision assistant.",
    model="openai/gpt-oss-120b",
    api_key="io-...",
)

# Images
result = await agent.run([
    "What's in this image?",
    ImageUrl(url="https://example.com/image.png")
])

# Local images with binary content
with open("local_image.png", "rb") as f:
    image_data = f.read()

result = await agent.run([
    "Describe this image",
    BinaryContent(data=image_data, media_type="image/png")
])

# Documents
result = await agent.run([
    "Summarize this document",
    DocumentUrl(url="https://example.com/document.pdf")
])

# Audio/Video (model dependent)
result = await agent.run([
    "Transcribe this audio",
    AudioUrl(url="https://example.com/audio.mp3")
])

Supported Media Types: The specific media types supported depend on your LLM model provider:

  • Images: PNG, JPEG, GIF, WebP
  • Documents: PDF, TXT
  • Audio/Video: MP3, MP4, WAV (varies by provider)

Check your model provider's documentation for specific format support and limitations.

Quickstart: Gradio UI

Note: To use the Gradio UI, install with UI dependencies: pip install "iointel[ui]"

from iointel import Agent, register_tool

@register_tool
def get_weather(city: str) -> dict:
    return {"temp": 72, "condition": "Sunny"}

agent = Agent(
    name="GradioSolar",
    instructions="You are a helpful assistant.",
    model="openai/gpt-oss-120b",
    api_key="io-...",
    tools=[get_weather],
    show_tool_calls=True,
)

# Launch the beautiful Gradio Chat UI (works in Jupyter too!)
await agent.launch_chat_ui(interface_title="Iointel Gradio Solar")

# Or, for more control across different agents:
# from iointel.src.ui.io_gradio_ui import IOGradioUI
# ui = IOGradioUI(agent, interface_title="Iointel GradioSolar")
# await ui.launch(share=True)


  • Tool calls are rendered as beautiful, gold-trimmed panels in the chat.
  • Dynamic UI: If your agent/tool returns a UI spec, it will be rendered live.
  • Works in Jupyter: Just run the above in a notebook cell!
