
Langflow Executor - A lightweight CLI tool for executing and serving Langflow AI flows


lfx - Langflow Executor

lfx is a command-line tool for running Langflow workflows. It provides two main commands: serve, which exposes a flow as a REST API, and run, which executes a flow once and prints the result.

Installation

From PyPI (recommended)

# Install globally
uv pip install lfx

# Or run without installing using uvx
uvx lfx serve my_flow.json
uvx lfx run my_flow.json "input"

From source (development)

# Clone and run in workspace
git clone https://github.com/langflow-ai/langflow
cd langflow/src/lfx
uv run lfx serve my_flow.json

Commands

lfx serve - Run flows as an API

Serve a Langflow workflow as a REST API.

Important: You must set the LANGFLOW_API_KEY environment variable before running the serve command.

export LANGFLOW_API_KEY=your-secret-key
uv run lfx serve my_flow.json --port 8000

This creates a FastAPI server with your flow available at /flows/{flow_id}/run. The actual flow ID will be displayed when the server starts.

Options:

  • --host, -h: Host to bind the server to (default: 127.0.0.1)
  • --port, -p: Port to bind the server to (default: 8000)
  • --verbose, -v: Show diagnostic output
  • --env-file: Path to .env file
  • --log-level: Set logging level (debug, info, warning, error, critical)
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)

Example:

# Set API key (required)
export LANGFLOW_API_KEY=your-secret-key

# Start server
uv run lfx serve simple_chat.json --host 0.0.0.0 --port 8000

# The server will display the flow ID, e.g.:
# Flow ID: af9edd65-6393-58e2-9ae5-d5f012e714f4

# Call API using the displayed flow ID
curl -X POST http://localhost:8000/flows/af9edd65-6393-58e2-9ae5-d5f012e714f4/run \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-secret-key" \
  -d '{"input_value": "Hello, world!"}'
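The curl call above can also be scripted from Python. The sketch below only assembles the URL, headers, and body in the shape shown in the example (POST to /flows/{flow_id}/run with an x-api-key header); the base URL, flow ID, and key are placeholders from the example, not values the tool guarantees.

```python
import json


def build_run_request(base_url: str, flow_id: str, api_key: str, input_value: str):
    """Build the URL, headers, and JSON body for a flow-run call.

    Mirrors the curl example: POST /flows/{flow_id}/run with an
    x-api-key header and an input_value payload.
    """
    url = f"{base_url}/flows/{flow_id}/run"
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,
    }
    body = json.dumps({"input_value": input_value})
    return url, headers, body


url, headers, body = build_run_request(
    "http://localhost:8000",
    "af9edd65-6393-58e2-9ae5-d5f012e714f4",
    "your-secret-key",
    "Hello, world!",
)
print(url)  # http://localhost:8000/flows/af9edd65-6393-58e2-9ae5-d5f012e714f4/run
```

From here the request can be sent with any HTTP client (for example urllib.request or requests) against a running server.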

lfx run - Run flows directly

Execute a Langflow workflow and get results immediately.

uv run lfx run my_flow.json "What is AI?"

Options:

  • --format, -f: Output format (json, text, message, result) (default: json)
  • --verbose: Show diagnostic output
  • --input-value: Input value to pass to the graph (alternative to positional argument)
  • --flow-json: Inline JSON flow content as a string
  • --stdin: Read JSON flow from stdin
  • --check-variables/--no-check-variables: Check global variables for environment compatibility (default: check)

Examples:

# Basic execution
uv run lfx run simple_chat.json "Tell me a joke"

# JSON output (default)
uv run lfx run simple_chat.json "input text" --format json

# Text output only
uv run lfx run simple_chat.json "Hello" --format text

# Using --input-value flag
uv run lfx run simple_chat.json --input-value "Hello world"

# From stdin (requires --input-value for input)
echo '{"data": {"nodes": [...], "edges": [...]}}' | uv run lfx run --stdin --input-value "Your message"

# Inline JSON
uv run lfx run --flow-json '{"data": {"nodes": [...], "edges": [...]}}' --input-value "Test"

Complete Agent Example

Here's a step-by-step example of creating and running an agent workflow with dependencies:

Step 1: Create the agent script

Create a file called simple_agent.py:

"""A simple agent flow example for Langflow.

This script demonstrates how to set up a conversational agent using Langflow's
Agent component with web search capabilities.

Features:
- Configures logging to 'langflow.log' at INFO level
- Creates an agent with OpenAI GPT model
- Provides web search tools via URLComponent
- Connects ChatInput → Agent → ChatOutput

Usage:
    uv run lfx run simple_agent.py "How are you?"
"""

import os
from pathlib import Path

from lfx.components.agents.agent import AgentComponent
from lfx.components.data.url import URLComponent
from lfx.components.input_output import ChatInput, ChatOutput
from lfx.graph import Graph
from lfx.lfx_logging.logger import LogConfig

log_config = LogConfig(
    log_level="INFO",
    log_file=Path("langflow.log"),
)
chat_input = ChatInput()
agent = AgentComponent()
url_component = URLComponent()
tools = url_component.to_toolkit()
agent.set(
    model_name="gpt-4.1-mini",
    agent_llm="OpenAI",
    api_key=os.getenv("OPENAI_API_KEY"),
    input_value=chat_input.message_response,
    tools=tools,
)
chat_output = ChatOutput().set(input_value=agent.message_response)

graph = Graph(chat_input, chat_output, log_config=log_config)

Step 2: Install dependencies

# Install lfx (if not already installed)
uv pip install lfx

# Install additional dependencies required for the agent
uv pip install langchain-community langchain beautifulsoup4 lxml langchain-openai

Step 3: Set up environment

# Set your OpenAI API key
export OPENAI_API_KEY=your-openai-api-key-here

Step 4: Run the agent

# Run with verbose output to see detailed execution
uv run lfx run simple_agent.py "How are you?" --verbose

# Run with different questions
uv run lfx run simple_agent.py "What's the weather like today?"
uv run lfx run simple_agent.py "Search for the latest news about AI"

This creates an intelligent agent that can:

  • Answer questions using the GPT model
  • Search the web for current information
  • Process and respond to natural language queries

The --verbose flag shows detailed execution information including timing and component details.

Input Sources

Both commands support multiple input sources:

  • File path: uv run lfx serve my_flow.json
  • Inline JSON: uv run lfx serve --flow-json '{"data": {"nodes": [...], "edges": [...]}}'
  • Stdin: uv run lfx serve --stdin
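The stdin source pairs naturally with programmatically generated flows. The sketch below serializes a flow document and shows where it would be piped in; the empty node and edge lists are purely illustrative (real flows have populated lists exported from Langflow), and the subprocess call is commented out because it requires lfx to be installed.

```python
import json
import subprocess

# A skeleton flow document; real flows have populated node and edge
# lists exported from Langflow -- empty lists here are illustrative only.
flow = {"data": {"nodes": [], "edges": []}}

payload = json.dumps(flow)
print(payload)

# Equivalent of: echo '<json>' | uv run lfx run --stdin --input-value "Hi"
# subprocess.run(
#     ["uv", "run", "lfx", "run", "--stdin", "--input-value", "Hi"],
#     input=payload, text=True, check=True,
# )
```

Passing input=payload with text=True is the standard subprocess way to feed a string to a child process's stdin.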

Development

# Install development dependencies
make dev

# Run tests
make test

# Format code
make format

License

MIT License. See LICENSE for details.
