# AutoMagik Spark - Automation Engine with LangFlow Integration

**Because magic shouldn't be complicated.**

Spark is an automation engine that integrates seamlessly with multiple LangFlow instances. Deploy AI-driven flows, schedule one-time or recurring tasks, and monitor everything with minimal fuss—no coding required.
## 🔗 Ecosystem
- AutoMagik Agents: Develop production-level AI agents
- AutoMagik UI: Create agents using natural language with our dedicated UI
## 🚀 Installation

Spark provides two setup options.

### Prerequisites

- Linux-based system (Ubuntu/Debian recommended)
- Docker and Docker Compose (installed automatically on Ubuntu/Debian if not present)
### Local Production Setup

For a production-ready local environment:

```bash
./scripts/setup_local.sh
```

### Development Setup

For development with PostgreSQL and Redis Docker containers:

```bash
./scripts/setup_dev.sh
```
### What Happens During Setup
Both setup scripts will:
- Create necessary environment files
- Install Docker if needed (on Ubuntu/Debian)
- Set up all required services
- Install the CLI tool (optional)
- Guide you through the entire process
### After Installation

You'll have access to:

- Spark API: running at http://localhost:8883
- PostgreSQL Database: available at localhost:15432
- Worker Service: running and ready to process tasks
- CLI Tool: installed (if chosen during setup)
### Verifying Your Installation

The setup automatically verifies all services, but you can also check manually:

```bash
# Access API documentation
open http://localhost:8883/api/v1/docs   # Interactive Swagger UI
open http://localhost:8883/api/v1/redoc  # ReDoc documentation

# List flows (requires CLI installation)
source .venv/bin/activate
automagik-spark flow list
```
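If you prefer to script the check, here is a minimal sketch using only the Python standard library. The hosts and ports are the defaults from the setup above; adjust them if you changed your configuration.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default service endpoints from the setup scripts.
services = {
    "Spark API": ("localhost", 8883),
    "PostgreSQL": ("localhost", 15432),
}

for name, (host, port) in services.items():
    status = "up" if port_open(host, port) else "down"
    print(f"{name}: {status}")
```

This only confirms that the ports accept connections; for a full health check, use the API documentation endpoints listed above.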
## 🧩 System Components
- API Server: Handles all HTTP requests and core logic
- Worker: Processes tasks and schedules
- Database: PostgreSQL with all required tables automatically created
- LangFlow (optional): Visual flow editor for creating AI workflows
- CLI Tool (optional): Command-line interface for managing flows and tasks
## 🏗️ System Architecture

```mermaid
flowchart LR
    subgraph Services
        DB[PostgreSQL]
        LF1[LangFlow Instance 1]
        LF2[LangFlow Instance 2]
    end
    subgraph Spark
        CLI[CLI]
        API[API Server]
        CW[Celery Worker]
        W[Worker]
    end
    API -- uses --> DB
    API -- triggers --> CW
    W -- processes --> API
    API -- integrates with --> LF1
    API -- integrates with --> LF2
    CLI -- controls --> API
    API -- has UI --> UI[Automagik UI]
```
### Core Components Explained
- API: Core service handling requests and business logic
- Worker: Processes tasks and schedules
- CLI: Command-line tool for managing flows and tasks
- PostgreSQL: Stores flows, tasks, schedules, and other data
- LangFlow: Optional service for creating and editing flows
## 📚 API Documentation
For complete API documentation, visit:
- Swagger UI: http://localhost:8883/api/v1/docs
- ReDoc: http://localhost:8883/api/v1/redoc
## 🛠️ Next Steps

- If you installed LangFlow, visit http://localhost:17860 to create your first flow
- Use the API at http://localhost:8883/api/v1/docs to manage your flows and tasks
- Try out the CLI commands with `automagik-spark --help`
- Monitor task execution through logs and API endpoints
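You can also drive the API programmatically. The sketch below uses only the standard library; the `flows` route name is an assumption for illustration, so confirm the actual paths in the Swagger UI at http://localhost:8883/api/v1/docs.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8883/api/v1"

def endpoint(path: str, base_url: str = BASE_URL) -> str:
    """Join the API base URL with an endpoint path."""
    return f"{base_url}/{path.lstrip('/')}"

def get_json(path: str):
    """GET an endpoint and parse the JSON body."""
    with urllib.request.urlopen(endpoint(path), timeout=5) as resp:
        return json.loads(resp.read().decode())

# Example (requires the API to be running; route name is assumed):
#   for flow in get_json("flows"):
#       print(flow)
```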
## 📊 Telemetry
Spark collects anonymous usage analytics to help improve the project. This data helps us understand which features are most useful and prioritize development efforts.
### What We Collect
- Command usage and performance metrics
- API endpoint usage patterns
- Workflow execution statistics
- System information (OS, Python version)
- Error rates and types
### What We DON'T Collect
- Personal information or credentials
- Actual workflow data or content
- File paths or environment variables
- Database connection strings or API keys
### How to Disable Telemetry

Environment variable:

```bash
export AUTOMAGIK_SPARK_DISABLE_TELEMETRY=true
```
CLI commands:

```bash
# Disable permanently
automagik-spark telemetry disable

# Check status
automagik-spark telemetry status

# See what data is collected
automagik-spark telemetry info

# Use the --no-telemetry flag for a single session
automagik-spark --no-telemetry <command>
```
Opt-out file:

```bash
touch ~/.automagik-no-telemetry
```
Telemetry is automatically disabled in CI/testing environments.
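The precedence of these opt-out mechanisms can be illustrated with a short sketch. This is illustrative only, not Spark's actual implementation, and the detection of CI environments via a `CI` variable is an assumption (the docs only state that telemetry is disabled there).

```python
import os
from pathlib import Path

OPTOUT_FILE = Path.home() / ".automagik-no-telemetry"
TRUTHY = ("1", "true", "yes")

def telemetry_disabled(env=None, optout_file: Path = OPTOUT_FILE) -> bool:
    """Check the documented opt-out mechanisms in order."""
    env = os.environ if env is None else env
    # 1. Environment variable opt-out
    if env.get("AUTOMAGIK_SPARK_DISABLE_TELEMETRY", "").lower() in TRUTHY:
        return True
    # 2. Opt-out file in the home directory
    if optout_file.exists():
        return True
    # 3. CI/testing environments (heuristic; see lead-in note)
    if env.get("CI", "").lower() in TRUTHY:
        return True
    return False
```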
## 🗺️ Roadmap
Spark's future development focuses on:
- TBA
**Spark: Bringing AI Automation to Life**