Aiexec Executor - A lightweight CLI tool for executing and serving Aiexec AI flows

wfx - Aiexec Executor

wfx is a command-line tool for running Aiexec workflows. It provides two main commands: serve and run.

Installation

From PyPI (recommended)

# Install globally
uv pip install wfx

# Or run without installing using uvx
uvx wfx serve my_flow.json
uvx wfx run my_flow.json "input"

From source (development)

# Clone and run in workspace
git clone https://gitlab.com/khulnasoft/aiexec
cd aiexec/src/wfx
uv run wfx serve my_flow.json

Key Features

Flattened Component Access

wfx now supports simplified component imports for a better developer experience:

Before (old import style):

from wfx.components.agents.agent import AgentComponent
from wfx.components.data.url import URLComponent
from wfx.components.input_output import ChatInput, ChatOutput

Now (new flattened style):

from wfx import components as cp

# Direct access to all components
chat_input = cp.ChatInput()
agent = cp.AgentComponent()
url_component = cp.URLComponent()
chat_output = cp.ChatOutput()

Benefits:

  • Simpler imports: One import line instead of multiple deep imports
  • Better discovery: All components accessible via cp.ComponentName
  • Helpful error messages: Clear guidance when dependencies are missing
  • Backward compatible: Traditional imports still work

Commands

wfx serve - Run flows as an API

Serve an Aiexec workflow as a REST API.

Important: You must set the AIEXEC_API_KEY environment variable before running the serve command.

export AIEXEC_API_KEY=your-secret-key
uv run wfx serve my_flow.json --port 8000

This creates a FastAPI server with your flow available at /flows/{flow_id}/run. The actual flow ID will be displayed when the server starts.

Options:

  • --host, -h: Host to bind the server to (default: 127.0.0.1)
  • --port, -p: Port to bind the server to (default: 8000)
  • --verbose, -v: Show diagnostic output
  • --env-file: Path to a .env file
  • --log-level: Set the logging level (debug, info, warning, error, critical)
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)
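
For --env-file, a minimal .env could hold the required server key plus any variables your flow reads (both variable names below are the ones used elsewhere in this document; the values are placeholders):

```shell
# .env -- load with: wfx serve my_flow.json --env-file .env
AIEXEC_API_KEY=your-secret-key
OPENAI_API_KEY=your-openai-api-key-here
```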

Example:

# Set API key (required)
export AIEXEC_API_KEY=your-secret-key

# Start server
uv run wfx serve simple_chat.json --host 0.0.0.0 --port 8000

# The server will display the flow ID, e.g.:
# Flow ID: af9edd65-6393-58e2-9ae5-d5f012e714f4

# Call API using the displayed flow ID
curl -X POST http://localhost:8000/flows/af9edd65-6393-58e2-9ae5-d5f012e714f4/run \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-secret-key" \
  -d '{"input_value": "Hello, world!"}'
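
The same call can be made from Python using only the standard library. This is a sketch, not part of wfx itself: the base URL, flow ID, and API key are the placeholders from the curl example above, and the response shape depends on your flow.

```python
import json
import urllib.request

# Placeholders from the serve example above -- substitute the flow ID and
# API key that your own server prints at startup.
BASE_URL = "http://localhost:8000"
FLOW_ID = "af9edd65-6393-58e2-9ae5-d5f012e714f4"
API_KEY = "your-secret-key"


def build_run_request(base_url: str, flow_id: str, api_key: str, input_value: str) -> urllib.request.Request:
    """Build the POST request that the `wfx serve` endpoint expects."""
    body = json.dumps({"input_value": input_value}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/flows/{flow_id}/run",
        data=body,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )


# With the server from `wfx serve` running locally, you would then do:
#   with urllib.request.urlopen(build_run_request(BASE_URL, FLOW_ID, API_KEY, "Hello, world!")) as resp:
#       print(json.load(resp))
```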

wfx run - Run flows directly

Execute an Aiexec workflow and get the results immediately.

uv run wfx run my_flow.json "What is AI?"

Options:

  • --format, -f: Output format (json, text, message, result) (default: json)
  • --verbose: Show diagnostic output
  • --input-value: Input value to pass to the graph (alternative to positional argument)
  • --flow-json: Inline JSON flow content as a string
  • --stdin: Read JSON flow from stdin
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)

Examples:

# Basic execution
uv run wfx run simple_chat.json "Tell me a joke"

# JSON output (default)
uv run wfx run simple_chat.json "input text" --format json

# Text output only
uv run wfx run simple_chat.json "Hello" --format text

# Using --input-value flag
uv run wfx run simple_chat.json --input-value "Hello world"

# From stdin (requires --input-value for input)
echo '{"data": {"nodes": [...], "edges": [...]}}' | uv run wfx run --stdin --input-value "Your message"

# Inline JSON
uv run wfx run --flow-json '{"data": {"nodes": [...], "edges": [...]}}' --input-value "Test"
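
If you want to invoke wfx run from other Python code, a minimal subprocess wrapper might look like the sketch below. It uses only the flags documented above; the structure of the JSON a flow prints depends on the flow itself, and wfx must already be installed for the runner function to work.

```python
import json
import subprocess


def build_run_command(flow_path: str, input_value: str, fmt: str = "json") -> list[str]:
    """Assemble the argv for a `wfx run` invocation using the documented flags."""
    return ["uv", "run", "wfx", "run", flow_path, "--input-value", input_value, "--format", fmt]


def run_flow(flow_path: str, input_value: str):
    """Run a flow and parse its JSON output (requires wfx to be installed)."""
    proc = subprocess.run(
        build_run_command(flow_path, input_value),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)


# Example (with wfx installed):
#   result = run_flow("simple_chat.json", "Tell me a joke")
```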

Complete Agent Example

Here's a step-by-step example of creating and running an agent workflow that requires additional dependencies:

Step 1: Create the agent script

Create a file called simple_agent.py:

"""A simple agent flow example for Aiexec.

This script demonstrates how to set up a conversational agent using Aiexec's
Agent component with web search capabilities.

Features:
- Uses the new flattened component access (cp.AgentComponent instead of deep imports)
- Configures logging to 'aiexec.log' at INFO level
- Creates an agent with OpenAI GPT model
- Provides web search tools via URLComponent
- Connects ChatInput → Agent → ChatOutput

Usage:
    uv run wfx run simple_agent.py "How are you?"
"""

import os
from pathlib import Path

# Using the new flattened component access
from wfx import components as cp
from wfx.graph import Graph
from wfx.log.logger import LogConfig

log_config = LogConfig(
    log_level="INFO",
    log_file=Path("aiexec.log"),
)

# Showcase the new flattened component access - no need for deep imports!
chat_input = cp.ChatInput()
agent = cp.AgentComponent()
url_component = cp.URLComponent()
tools = url_component.to_toolkit()

agent.set(
    model_name="gpt-4.1-mini",
    agent_llm="OpenAI",
    api_key=os.getenv("OPENAI_API_KEY"),
    input_value=chat_input.message_response,
    tools=tools,
)
chat_output = cp.ChatOutput().set(input_value=agent.message_response)

graph = Graph(chat_input, chat_output, log_config=log_config)

Step 2: Install dependencies

# Install wfx (if not already installed)
uv pip install wfx

# Install additional dependencies required for the agent
uv pip install langchain-community langchain beautifulsoup4 lxml langchain-openai

Step 3: Set up environment

# Set your OpenAI API key
export OPENAI_API_KEY=your-openai-api-key-here

Step 4: Run the agent

# Run with verbose output to see detailed execution
uv run wfx run simple_agent.py "How are you?" --verbose

# Run with different questions
uv run wfx run simple_agent.py "What's the weather like today?"
uv run wfx run simple_agent.py "Search for the latest news about AI"

This creates an intelligent agent that can:

  • Answer questions using the GPT model
  • Search the web for current information
  • Process and respond to natural language queries

The --verbose flag shows detailed execution information including timing and component details.

Input Sources

Both commands support multiple input sources:

  • File path: uv run wfx serve my_flow.json
  • Inline JSON: uv run wfx serve --flow-json '{"data": {"nodes": [...], "edges": [...]}}'
  • Stdin: uv run wfx serve --stdin
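
As a sketch of the stdin path from Python: the snippet below pipes a flow into wfx run --stdin via subprocess. The empty nodes/edges lists are a placeholder skeleton standing in for a real flow exported from Aiexec, and wfx must be installed for the runner function to work. Because --stdin consumes the flow itself, the input text has to be passed with --input-value.

```python
import json
import subprocess


def flow_to_stdin_payload(flow: dict) -> str:
    """Serialize a flow dict for piping to `wfx run --stdin`."""
    return json.dumps(flow)


def run_flow_via_stdin(flow: dict, message: str) -> str:
    """Pipe flow JSON into `wfx run --stdin` (requires wfx to be installed)."""
    proc = subprocess.run(
        ["uv", "run", "wfx", "run", "--stdin", "--input-value", message],
        input=flow_to_stdin_payload(flow),
        text=True,
        capture_output=True,
        check=True,
    )
    return proc.stdout


# Example (with wfx installed; a real flow would have populated nodes/edges):
#   skeleton = {"data": {"nodes": [], "edges": []}}
#   print(run_flow_via_stdin(skeleton, "Your message"))
```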

Development

# Install development dependencies
make dev

# Run tests
make test

# Format code
make format

License

MIT License. See LICENSE for details.
