
Aiexec Executor - A lightweight CLI tool for executing and serving Aiexec AI flows


wfx - Aiexec Executor

wfx is a command-line tool for running Aiexec workflows. It provides two main commands: serve and run.

Installation

From PyPI (recommended)

# Install globally
uv pip install wfx

# Or run without installing using uvx
uvx wfx serve my_flow.json
uvx wfx run my_flow.json "input"

From source (development)

# Clone and run in workspace
git clone https://github.com/khulnasoft/aiexec
cd aiexec/src/wfx
uv run wfx serve my_flow.json

Key Features

Flattened Component Access

wfx now supports simplified component imports for a better developer experience:

Before (old import style):

from wfx.components.agents.agent import AgentComponent
from wfx.components.data.url import URLComponent
from wfx.components.input_output import ChatInput, ChatOutput

Now (new flattened style):

from wfx import components as cp

# Direct access to all components
chat_input = cp.ChatInput()
agent = cp.AgentComponent()
url_component = cp.URLComponent()
chat_output = cp.ChatOutput()

Benefits:

  • Simpler imports: One import line instead of multiple deep imports
  • Better discovery: All components accessible via cp.ComponentName
  • Helpful error messages: Clear guidance when dependencies are missing
  • Backward compatible: Traditional imports still work

Commands

wfx serve - Run flows as an API

Serve an Aiexec workflow as a REST API.

Important: You must set the AIEXEC_API_KEY environment variable before running the serve command.

export AIEXEC_API_KEY=your-secret-key
uv run wfx serve my_flow.json --port 8000

This creates a FastAPI server with your flow available at /flows/{flow_id}/run. The actual flow ID will be displayed when the server starts.

Options:

  • --host, -h: Host to bind server (default: 127.0.0.1)
  • --port, -p: Port to bind server (default: 8000)
  • --verbose, -v: Show diagnostic output
  • --env-file: Path to .env file
  • --log-level: Set logging level (debug, info, warning, error, critical)
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)
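Instead of exporting variables in the shell, you can keep them in a file and load them with `--env-file`. A minimal sketch of such a file (the `OPENAI_API_KEY` entry is only needed if your flow uses an OpenAI-backed component, as in the agent example below):

```shell
# .env — load with: wfx serve my_flow.json --env-file .env
AIEXEC_API_KEY=your-secret-key

# Provider keys used by flow components can live here too:
OPENAI_API_KEY=your-openai-api-key-here
```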

Example:

# Set API key (required)
export AIEXEC_API_KEY=your-secret-key

# Start server
uv run wfx serve simple_chat.json --host 0.0.0.0 --port 8000

# The server will display the flow ID, e.g.:
# Flow ID: af9edd65-6393-58e2-9ae5-d5f012e714f4

# Call API using the displayed flow ID
curl -X POST http://localhost:8000/flows/af9edd65-6393-58e2-9ae5-d5f012e714f4/run \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-secret-key" \
  -d '{"input_value": "Hello, world!"}'

wfx run - Run flows directly

Execute an Aiexec workflow and get results immediately.

uv run wfx run my_flow.json "What is AI?"

Options:

  • --format, -f: Output format (json, text, message, result) (default: json)
  • --verbose: Show diagnostic output
  • --input-value: Input value to pass to the graph (alternative to positional argument)
  • --flow-json: Inline JSON flow content as a string
  • --stdin: Read JSON flow from stdin
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)

Examples:

# Basic execution
uv run wfx run simple_chat.json "Tell me a joke"

# JSON output (default)
uv run wfx run simple_chat.json "input text" --format json

# Text output only
uv run wfx run simple_chat.json "Hello" --format text

# Using --input-value flag
uv run wfx run simple_chat.json --input-value "Hello world"

# From stdin (requires --input-value for input)
echo '{"data": {"nodes": [...], "edges": [...]}}' | uv run wfx run --stdin --input-value "Your message"

# Inline JSON
uv run wfx run --flow-json '{"data": {"nodes": [...], "edges": [...]}}' --input-value "Test"
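If the flow JSON is built programmatically, serializing it with `json.dumps` and passing the command as an argument list sidesteps shell-quoting pitfalls with `--flow-json`. A sketch (the flow dict here is a placeholder, not a runnable flow):

```python
import json
import subprocess

# Placeholder structure; a real flow has populated nodes and edges.
flow = {"data": {"nodes": [], "edges": []}}

cmd = [
    "wfx", "run",
    "--flow-json", json.dumps(flow),  # safe quoting regardless of flow content
    "--input-value", "Test",
    "--format", "json",
]

# Requires wfx to be installed:
#   result = subprocess.run(cmd, capture_output=True, text=True, check=True)
#   print(result.stdout)
```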

Complete Agent Example

Here's a step-by-step example of creating and running an agent workflow with dependencies:

Step 1: Create the agent script

Create a file called simple_agent.py:

"""A simple agent flow example for Aiexec.

This script demonstrates how to set up a conversational agent using Aiexec's
Agent component with web search capabilities.

Features:
- Uses the new flattened component access (cp.AgentComponent instead of deep imports)
- Configures logging to 'aiexec.log' at INFO level
- Creates an agent with OpenAI GPT model
- Provides web search tools via URLComponent
- Connects ChatInput → Agent → ChatOutput

Usage:
    uv run wfx run simple_agent.py "How are you?"
"""

import os
from pathlib import Path

# Using the new flattened component access
from wfx import components as cp
from wfx.graph import Graph
from wfx.log.logger import LogConfig

log_config = LogConfig(
    log_level="INFO",
    log_file=Path("aiexec.log"),
)

# Showcase the new flattened component access - no need for deep imports!
chat_input = cp.ChatInput()
agent = cp.AgentComponent()
url_component = cp.URLComponent()
tools = url_component.to_toolkit()

agent.set(
    model_name="gpt-4.1-mini",
    agent_llm="OpenAI",
    api_key=os.getenv("OPENAI_API_KEY"),
    input_value=chat_input.message_response,
    tools=tools,
)
chat_output = cp.ChatOutput().set(input_value=agent.message_response)

graph = Graph(chat_input, chat_output, log_config=log_config)

Step 2: Install dependencies

# Install wfx (if not already installed)
uv pip install wfx

# Install additional dependencies required for the agent
uv pip install langchain-community langchain beautifulsoup4 lxml langchain-openai

Step 3: Set up environment

# Set your OpenAI API key
export OPENAI_API_KEY=your-openai-api-key-here

Step 4: Run the agent

# Run with verbose output to see detailed execution
uv run wfx run simple_agent.py "How are you?" --verbose

# Run with different questions
uv run wfx run simple_agent.py "What's the weather like today?"
uv run wfx run simple_agent.py "Search for the latest news about AI"

This creates an intelligent agent that can:

  • Answer questions using the GPT model
  • Search the web for current information
  • Process and respond to natural language queries

The --verbose flag shows detailed execution information including timing and component details.

Input Sources

Both commands support multiple input sources:

  • File path: uv run wfx serve my_flow.json
  • Inline JSON: uv run wfx serve --flow-json '{"data": {"nodes": [...], "edges": [...]}}'
  • Stdin: uv run wfx serve --stdin
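The stdin source is also convenient when driving wfx from another program: write the flow JSON to the subprocess's stdin rather than building a long argument string. A sketch with a placeholder flow:

```python
import json
import subprocess

flow = {"data": {"nodes": [], "edges": []}}  # placeholder flow
flow_json = json.dumps(flow)

cmd = ["wfx", "run", "--stdin", "--input-value", "Your message"]

# Requires wfx to be installed:
#   result = subprocess.run(cmd, input=flow_json, capture_output=True, text=True)
#   print(result.stdout)
```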

Development

# Install development dependencies
make dev

# Run tests
make test

# Format code
make format

License

MIT License. See LICENSE for details.
