

IGNIS LFX - Langflow Executor


IGNIS LFX is a Python package for executing Langflow flows programmatically, with built-in component management, memory handling, and web-service integration.

Features

  • 🚀 Flow Execution: Execute Langflow flows programmatically with full control
  • 💾 Memory Management: Built-in session-based memory management for conversational flows
  • 🔧 Component System: Access an extensive component library for flow building
  • 📊 MCP Integration: Model Context Protocol support for advanced integrations
  • 🛡️ Type Safety: Full type hints for better IDE support and development experience
  • ⚡ FastAPI Integration: Ready-to-use FastAPI integration for web services
  • 🔌 Multi-LLM Support: Support for OpenAI, Ollama, IBM (via langchain-ibm), and more

Installation

Install ignis_lfx from PyPI:

pip install ignis_lfx

Or with optional dependencies:

# Development tools and testing
pip install ignis_lfx[dev]

# Documentation tools
pip install ignis_lfx[docs]

# All optional dependencies
pip install ignis_lfx[dev,docs]

Quick Start

Basic Flow Execution

from ignis_lfx import execute_flow

# Load and execute a flow
result = execute_flow(
    flow_name="my_flow.json",
    input_data={"question": "What is Python?"}
)

print(result)

FastAPI Integration

from fastapi import FastAPI
from ignis_lfx import execute_flow

app = FastAPI()

@app.post("/execute")
async def run_flow(input_data: dict):
    result = await execute_flow(
        flow_name="assistant.json",
        input_data=input_data
    )
    return {"result": result}

Memory-Based Chat

from ignis_lfx.memory import SessionMemory

# Initialize session memory
memory = SessionMemory(session_id="user_123")

# Store conversation history
memory.save("user", "Hello, how are you?")
memory.save("assistant", "I'm doing great! How can I help?")

# Load conversation history
history = memory.load("user_123")
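As a quick mental model of what session-scoped memory does, here is a minimal, dependency-free stand-in written in plain Python. It is illustrative only and does not use the ignis_lfx API; the class name and storage layout are assumptions made for the sketch:

```python
from collections import defaultdict

class InMemorySessionStore:
    """Illustrative session-scoped message store (not part of ignis_lfx)."""

    def __init__(self):
        # Maps session_id -> ordered list of (role, content) pairs
        self._sessions = defaultdict(list)

    def save(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append((role, content))

    def load(self, session_id: str) -> list:
        # Unknown sessions yield an empty history rather than an error
        return list(self._sessions[session_id])

store = InMemorySessionStore()
store.save("user_123", "user", "Hello, how are you?")
store.save("user_123", "assistant", "I'm doing great! How can I help?")
print(store.load("user_123"))
```

The key property to notice is isolation: each `session_id` gets its own ordered history, so concurrent conversations never bleed into each other.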

Configuration

Environment Variables

# LFX Configuration
LANGFLOW_DEV=false
LFX_API_KEY=your_api_key_here
LFX_BASE_URL=http://localhost:7860

# LLM Configuration
OPENAI_API_KEY=sk-...
OLLAMA_BASE_URL=http://localhost:11434
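These variables can be read in application code with the standard library. The sketch below uses the variable names from the list above; the fallback defaults and the helper name `lfx_settings` are assumptions for illustration, not part of the package:

```python
import os

def lfx_settings() -> dict:
    """Collect LFX-related settings from the environment, with fallbacks."""
    return {
        "dev_mode": os.environ.get("LANGFLOW_DEV", "false").lower() == "true",
        "api_key": os.environ.get("LFX_API_KEY", ""),
        "base_url": os.environ.get("LFX_BASE_URL", "http://localhost:7860"),
    }

os.environ["LFX_BASE_URL"] = "http://localhost:7860"
print(lfx_settings()["base_url"])  # → http://localhost:7860
```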

Configuration File (ignis_lfx_config.json)

{
  "LFX_URL": "http://localhost:7860",
  "LFX_API_KEY": "your-api-key",
  "DEFAULT_FLOW": "default.json",
  "MEMORY_TYPE": "session",
  "DEBUG": false
}
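The config file is plain JSON, so it can be parsed with the standard library. How ignis_lfx itself loads and merges it is not shown here; this sketch merely demonstrates a sensible defaults-plus-overrides pattern for the file shape above (the `DEFAULTS` values are taken from the example):

```python
import json
from pathlib import Path

# Defaults mirroring the example ignis_lfx_config.json above
DEFAULTS = {
    "LFX_URL": "http://localhost:7860",
    "DEFAULT_FLOW": "default.json",
    "MEMORY_TYPE": "session",
    "DEBUG": False,
}

def load_config(path: str = "ignis_lfx_config.json") -> dict:
    """Return DEFAULTS with any keys from the config file layered on top."""
    config = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        config.update(json.loads(p.read_text()))
    return config
```

Keys present in the file override the defaults; a missing file simply yields the defaults unchanged.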

Project Structure

ignis_lfx/
├── __init__.py           # Package initialization
├── core/                 # Core functionality
│   ├── flow.py          # Flow execution engine
│   ├── executor.py      # Flow executor
│   └── schema.py        # Data schemas
├── components/          # Component library
│   ├── __init__.py
│   └── base.py          # Base component class
├── memory/              # Memory management
│   ├── __init__.py
│   ├── base.py          # Base memory class
│   └── session.py       # Session memory implementation
├── integrations/        # External integrations
│   ├── fastapi.py       # FastAPI integration
│   ├── mcp.py           # MCP protocol support
│   └── llm.py           # LLM provider support
├── cli/                 # Command-line interface
│   ├── __init__.py
│   └── commands.py      # CLI commands
└── utils/               # Utility functions
    ├── __init__.py
    ├── logger.py        # Logging configuration
    └── validators.py    # Input validation

Dependencies

Core Dependencies

  • fastapi (≥0.128.0) - Web framework
  • pydantic (≥2.0.0) - Data validation
  • langchain-core (≥0.3.0) - LangChain core library
  • orjson (≥3.10.0) - Fast JSON serialization

LLM Provider Support

  • langchain-openai - OpenAI API support
  • langchain-ollama - Ollama local LLM support
  • langchain-ibm - IBM watsonx support

CLI & HTTP

  • typer (≥0.12.0) - CLI framework
  • httpx (≥0.25.0) - Async HTTP client

Advanced Usage

Custom Memory Backend

from ignis_lfx.memory import BaseMemory

class CustomMemory(BaseMemory):
    def save(self, session_id: str, role: str, content: str) -> None:
        # Implement custom save logic
        pass
    
    def load(self, session_id: str) -> list:
        # Implement custom load logic
        pass

# Use custom memory
from ignis_lfx import execute_flow
result = execute_flow(
    flow_name="my_flow.json",
    memory_backend=CustomMemory()
)

Component Development

from ignis_lfx.components import BaseComponent
from pydantic import Field

class MyCustomComponent(BaseComponent):
    name: str = "MyComponent"
    description: str = "A custom component"
    
    input_param: str = Field(..., description="Input parameter")
    
    def run(self, **kwargs) -> dict:
        # Implement component logic
        return {"result": f"Processed: {self.input_param}"}

Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository at Infogain-GenAI/ignis-lfx
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request

Development Setup

# Clone repository
git clone https://github.com/Infogain-GenAI/ignis-lfx.git
cd ignis-lfx

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black .

# Lint code
ruff check . --fix

Testing

# Run all tests
pytest

# Run tests with coverage
pytest --cov=ignis_lfx --cov-report=html

# Run specific test file
pytest tests/test_execution.py

# Run tests matching pattern
pytest -k "memory" -v

Troubleshooting

Common Issues

Issue: ImportError: cannot import name 'execute_flow'

  • Solution: Ensure ignis_lfx is properly installed: pip install --upgrade ignis_lfx

Issue: Connection refused to LFX server

  • Solution: Verify LFX server is running and LFX_URL is correctly configured
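A quick way to verify reachability from Python is a standard-library probe like the one below. This is a diagnostic sketch, not part of ignis_lfx; the function name and the decision to treat any HTTP response (even an error status) as "server is up" are assumptions:

```python
import urllib.request
import urllib.error

def lfx_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url, False otherwise."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, even if with an error status, so it is up
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server unreachable
        return False

print(lfx_reachable("http://localhost:7860"))
```

If this prints False, check that the LFX server process is running and that LFX_URL / LFX_BASE_URL points at the right host and port.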

Issue: API Key authentication failed

  • Solution: Check your API key in the configuration file or environment variable

Documentation

For detailed documentation, examples, and the API reference, see the GitHub repository: https://github.com/Infogain-GenAI/ignis-lfx

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For questions and bug reports, please open an issue on the GitHub repository (Infogain-GenAI/ignis-lfx).

Changelog

Version 0.1.0 (Initial Release)

  • Initial release of ignis_lfx
  • Core flow execution engine
  • Memory management system
  • FastAPI integration
  • MCP protocol support
  • Comprehensive documentation

Acknowledgments

Built by the Infogain GenAI team using the Langflow framework.


Made with ❤️ by Infogain
