IGNIS LFX - Langflow Executor for advanced flow execution and component management
IGNIS LFX - Langflow Executor
IGNIS LFX is a powerful Python package that provides execution capabilities for Langflow flows with advanced component management, memory handling, and flow execution features.
Features
- 🚀 Flow Execution: Execute Langflow flows programmatically with full control
- 💾 Memory Management: Built-in session-based memory management for conversational flows
- 🔧 Component System: Access and utilize an extensive component library for flow building
- 🔌 MCP Integration: Model Context Protocol support for advanced integrations
- 🛡️ Type Safety: Full type hints for better IDE support and development experience
- ⚡ FastAPI Integration: Ready-to-use FastAPI integration for web services
- 🌐 Multi-LLM Support: Support for OpenAI, Ollama, IBM LangChain, and more
Installation
Install ignis_lfx from PyPI:

```shell
pip install ignis_lfx
```

Or with optional dependencies:

```shell
# Development tools and testing
pip install ignis_lfx[dev]

# Documentation tools
pip install ignis_lfx[docs]

# All optional dependencies
pip install ignis_lfx[dev,docs]
```
Quick Start
Basic Flow Execution
```python
from ignis_lfx import execute_flow

# Load and execute a flow
result = execute_flow(
    flow_name="my_flow.json",
    input_data={"question": "What is Python?"}
)
print(result)
```
FastAPI Integration
```python
from fastapi import FastAPI
from ignis_lfx import execute_flow

app = FastAPI()

@app.post("/execute")
async def run_flow(input_data: dict):
    result = await execute_flow(
        flow_name="assistant.json",
        input_data=input_data
    )
    return {"result": result}
```
Memory-Based Chat
```python
from ignis_lfx.memory import SessionMemory

# Initialize session memory
memory = SessionMemory(session_id="user_123")

# Store conversation history
memory.save("user", "Hello, how are you?")
memory.save("assistant", "I'm doing great! How can I help?")

# Load conversation history
history = memory.load("user_123")
```
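Internally, session memory amounts to an ordered, per-session message log. The sketch below is illustrative only (the actual `SessionMemory` in ignis_lfx may differ, e.g. it binds `session_id` at construction); it shows the shape of the save/load contract with a plain in-process store:

```python
# Illustrative sketch: an in-process session store mirroring the
# save/load interface above. NOT the real ignis_lfx implementation.
from collections import defaultdict

class InMemorySessionStore:
    def __init__(self):
        # Maps session_id -> ordered list of {"role", "content"} messages
        self._sessions = defaultdict(list)

    def save(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def load(self, session_id: str) -> list:
        # Unknown sessions yield an empty history rather than an error
        return list(self._sessions[session_id])

store = InMemorySessionStore()
store.save("user_123", "user", "Hello, how are you?")
store.save("user_123", "assistant", "I'm doing great! How can I help?")
history = store.load("user_123")
```

The key design point is that each session's history is isolated by `session_id`, so concurrent conversations never see each other's messages.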
Configuration
Environment Variables
```shell
# LFX Configuration
LANGFLOW_DEV=false
LFX_API_KEY=your_api_key_here
LFX_BASE_URL=http://localhost:7860

# LLM Configuration
OPENAI_API_KEY=sk-...
OLLAMA_BASE_URL=http://localhost:11434
```
Configuration File (ignis_lfx_config.json)
```json
{
  "LFX_URL": "http://localhost:7860",
  "LFX_API_KEY": "your-api-key",
  "DEFAULT_FLOW": "default.json",
  "MEMORY_TYPE": "session",
  "DEBUG": false
}
```
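A common resolution order when both sources are present (an assumption here, since the package's actual precedence rules are not stated above) is: environment variables override the config file, which overrides built-in defaults. A stdlib-only sketch of that pattern:

```python
# Illustrative sketch: resolve settings from ignis_lfx_config.json with
# environment variables taking precedence. The actual ignis_lfx loading
# order is an assumption, not documented behavior.
import json
import os

DEFAULTS = {"LFX_URL": "http://localhost:7860", "DEBUG": False}

def load_config(path="ignis_lfx_config.json"):
    config = dict(DEFAULTS)
    # Config file overrides defaults (missing file falls back to defaults)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    # Environment variables override file values
    for key in ("LFX_URL", "LFX_API_KEY", "DEFAULT_FLOW"):
        if key in os.environ:
            config[key] = os.environ[key]
    return config
```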
Project Structure
```
ignis_lfx/
├── __init__.py          # Package initialization
├── core/                # Core functionality
│   ├── flow.py          # Flow execution engine
│   ├── executor.py      # Flow executor
│   └── schema.py        # Data schemas
├── components/          # Component library
│   ├── __init__.py
│   └── base.py          # Base component class
├── memory/              # Memory management
│   ├── __init__.py
│   ├── base.py          # Base memory class
│   └── session.py       # Session memory implementation
├── integrations/        # External integrations
│   ├── fastapi.py       # FastAPI integration
│   ├── mcp.py           # MCP protocol support
│   └── llm.py           # LLM provider support
├── cli/                 # Command-line interface
│   ├── __init__.py
│   └── commands.py      # CLI commands
└── utils/               # Utility functions
    ├── __init__.py
    ├── logger.py        # Logging configuration
    └── validators.py    # Input validation
```
Dependencies
Core Dependencies
- fastapi (≥0.128.0) - Web framework
- pydantic (≥2.0.0) - Data validation
- langchain-core (≥0.3.0) - LangChain core library
- orjson (≥3.10.0) - Fast JSON serialization
LLM Provider Support
- langchain-openai - OpenAI API support
- langchain-ollama - Ollama local LLM support
- langchain-ibm - IBM LangChain support
CLI & HTTP
- typer (≥0.12.0) - CLI framework
- httpx (≥0.25.0) - Async HTTP client
Advanced Usage
Custom Memory Backend
```python
from ignis_lfx.memory import BaseMemory

class CustomMemory(BaseMemory):
    def save(self, session_id: str, role: str, content: str) -> None:
        # Implement custom save logic
        pass

    def load(self, session_id: str) -> list:
        # Implement custom load logic
        return []

# Use custom memory
from ignis_lfx import execute_flow

result = execute_flow(
    flow_name="my_flow.json",
    memory_backend=CustomMemory()
)
```
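For a concrete backend, the same save/load contract can be persisted to disk. The class below is a standalone illustration with a hypothetical name (`JSONFileMemory`); it omits the `BaseMemory` import so it runs on its own, but in real use you would subclass `ignis_lfx.memory.BaseMemory` as shown above:

```python
# Illustrative standalone sketch of a file-backed memory implementing the
# same save/load contract as BaseMemory. Hypothetical class, not part of
# ignis_lfx.
import json
import os
import tempfile

class JSONFileMemory:
    def __init__(self, path: str):
        self.path = path

    def _read(self) -> dict:
        # Missing file means no history yet
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def save(self, session_id: str, role: str, content: str) -> None:
        data = self._read()
        data.setdefault(session_id, []).append({"role": role, "content": content})
        with open(self.path, "w") as f:
            json.dump(data, f)

    def load(self, session_id: str) -> list:
        return self._read().get(session_id, [])

path = os.path.join(tempfile.mkdtemp(), "memory.json")
mem = JSONFileMemory(path)
mem.save("s1", "user", "hi")
```

Note the read-modify-write in `save` is not safe under concurrent writers; a production backend would add file locking or use a database.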
Component Development
```python
from ignis_lfx.components import BaseComponent
from pydantic import Field

class MyCustomComponent(BaseComponent):
    name: str = "MyComponent"
    description: str = "A custom component"
    input_param: str = Field(..., description="Input parameter")

    def run(self, **kwargs) -> dict:
        # Implement component logic
        return {"result": f"Processed: {self.input_param}"}
```
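The `run()` contract reduces to: declared inputs in, plain dict out. A dependency-free sketch of that contract using dataclasses (the real `BaseComponent` uses pydantic validation; `UppercaseComponent` is a hypothetical example):

```python
# Illustrative sketch of the component run() contract using a dataclass
# instead of BaseComponent/pydantic, so it runs standalone.
from dataclasses import dataclass

@dataclass
class UppercaseComponent:
    # Mirrors the declared fields of the pydantic-based component above
    name: str = "UppercaseComponent"
    description: str = "Upper-cases its input"
    input_param: str = ""

    def run(self, **kwargs) -> dict:
        # Components return a plain dict that downstream nodes consume
        return {"result": f"Processed: {self.input_param.upper()}"}

comp = UppercaseComponent(input_param="hello")
output = comp.run()
```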
Contributing
We welcome contributions! Please follow these steps:
1. Fork the repository at Infogain-GenAI/ignis-lfx
2. Create a feature branch: git checkout -b feature/amazing-feature
3. Commit your changes: git commit -m 'Add amazing feature'
4. Push to the branch: git push origin feature/amazing-feature
5. Open a Pull Request
Development Setup
```shell
# Clone repository
git clone https://github.com/Infogain-GenAI/ignis-lfx.git
cd ignis-lfx

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black .

# Lint code
ruff check . --fix
```
Testing
```shell
# Run all tests
pytest

# Run tests with coverage
pytest --cov=ignis_lfx --cov-report=html

# Run specific test file
pytest tests/test_execution.py

# Run tests matching pattern
pytest -k "memory" -v
```
Troubleshooting
Common Issues
Issue: ImportError: cannot import name 'execute_flow'
- Solution: Ensure ignis_lfx is properly installed:
pip install --upgrade ignis_lfx
Issue: Connection refused to LFX server
- Solution: Verify the LFX server is running and LFX_URL is correctly configured
Issue: API Key authentication failed
- Solution: Check your API key in the configuration file or environment variable
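The last two issues can often be caught before executing any flow. Below is a small, illustrative pre-flight check (`preflight` is a hypothetical helper, not part of ignis_lfx) that validates the server URL shape and API-key presence:

```python
# Illustrative pre-flight check for the connection and API-key issues
# above. Hypothetical helper, not part of ignis_lfx.
import os
from urllib.parse import urlparse

def preflight(url, api_key):
    problems = []
    parsed = urlparse(url or "")
    # A usable server URL needs a scheme and a host
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        problems.append(f"LFX_URL looks malformed: {url!r}")
    if not api_key:
        problems.append("LFX_API_KEY is not set")
    return problems

issues = preflight(os.environ.get("LFX_BASE_URL", "http://localhost:7860"),
                   os.environ.get("LFX_API_KEY"))
```

Running this at startup and logging the returned list gives a clearer failure message than a connection refused or 401 deep inside a flow run.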
Documentation
For detailed documentation, examples, and the API reference, see the project repository at Infogain-GenAI/ignis-lfx.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
- 📧 Email: nishanth.p@infogain.com
- 🐛 Issue Tracker: GitHub Issues
- 💬 Discussions: GitHub Discussions
Changelog
Version 0.1.0 (Initial Release)
- Initial release of ignis_lfx
- Core flow execution engine
- Memory management system
- FastAPI integration
- MCP protocol support
- Comprehensive documentation
Acknowledgments
Built by the Infogain GenAI team using the Langflow framework.
Made with ❤️ by Infogain
File details
Details for the file ignis_lfx-0.1.0.tar.gz.
File metadata
- Download URL: ignis_lfx-0.1.0.tar.gz
- Upload date:
- Size: 7.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 08f8fe2e995241ec8cdfac8f9ec54d41dc7e0514907deff25c780af2243af6eb |
| MD5 | 29e534117ffe917e3b03998cb628c756 |
| BLAKE2b-256 | 69847fa4b4960b54fcdf94edcaf376a9824eb7da6c9440fa055b87d4b26ff6ac |
File details
Details for the file ignis_lfx-0.1.0-py3-none-any.whl.
File metadata
- Download URL: ignis_lfx-0.1.0-py3-none-any.whl
- Upload date:
- Size: 6.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 879b92bd43b529582e2fa7f8509701dfb9af81518c7449f7f62dc97d2de4598a |
| MD5 | e285dba9cd4688776795c785c8f1fd6a |
| BLAKE2b-256 | dcb1e80cfcd5c96b9e623aaa5594c9c6b97138a2b4edbb63e6e4f3edeef0b440 |