# GenAItor

An agent-based orchestration framework for generative AI workflows: a platform for generating AI agents and agent-based products.

## Overview

GenAItor is a platform designed to generate AI agents and related products that help automate complex tasks and processes. It leverages state-of-the-art machine learning libraries and tools to deliver flexible and scalable AI solutions.
## Installation

To install the required dependencies, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/enterpriselm/genaitor.git
   cd genaitor
   ```

2. Create a virtual environment (optional but recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

   **Important:** this project is best run with Python 3.12 to ensure compatibility and avoid potential errors.

3. Install the required packages:

   ```bash
   pip install -e .
   ```

4. Add an `API_KEY` for your LLM (Gemini is the default, but Anthropic, OpenAI, DeepSeek, Grok, Ollama, or a custom LLM model can also be used):

   ```bash
   echo "API_KEY=your_gemini_api_key" >> .env
   ```
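The `.env` file above is plain `KEY=value` lines. As a minimal sketch of how such a file can be parsed (how GenAItor loads it internally is an assumption here; this only illustrates the expected file format):

```python
def load_env(path=".env"):
    """Parse a simple KEY=value .env file into a dict."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks and comments; split on the first '='
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

# Create the file the same way the install step does
with open(".env", "w") as f:
    f.write("API_KEY=your_gemini_api_key\n")

config = load_env()
print(config["API_KEY"])  # → your_gemini_api_key
```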
## General Framework Architecture
## Features

- Generate AI agents for a variety of use cases.
- Modular architecture with components such as `core`, `llm`, `utils`, and `presets`.
- Support for multiple data processing and communication protocols.
- Integration with popular libraries like Transformers, LangChain, and more.
## Usage

### Basic Example

Here's a simple example of how to create an agent that answers questions using a generative model:

```python
from genaitor.core import Agent, Task
from genaitor.llm import GeminiProvider, GeminiConfig

# Define a custom task
class QuestionAnsweringTask(Task):
    def __init__(self, description: str, goal: str, output_format: str, llm_provider):
        super().__init__(description, goal, output_format)
        self.llm = llm_provider

    def execute(self, input_data: str):
        prompt = f"""
        Task: {self.description}
        Goal: {self.goal}
        Question: {input_data}

        Please provide a response following the format:
        {self.output_format}
        """
        return self.llm.generate(prompt)

# Configure the LLM provider
llm_provider = GeminiProvider(GeminiConfig(api_key="your_api_key"))

# Create an agent
agent = Agent(
    name="QA Agent",
    task=QuestionAnsweringTask(
        "Answering questions",
        "Provide accurate answers",
        "Text format",
        llm_provider,
    ),
)

# Execute a task
result = agent.task.execute("What is AI?")
print(result)
```
### Multi-Agent Example

Here's a simple example of how to create a flow using multiple agents:

```python
import asyncio

from genaitor.core import (
    Agent, Task, Orchestrator, Flow,
    ExecutionMode, AgentRole, TaskResult
)
from genaitor.llm import GeminiProvider, GeminiConfig

# Define a base task (you could use different tasks for each agent)
class LLMTask(Task):
    def __init__(self, description: str, goal: str, output_format: str, llm_provider):
        super().__init__(description, goal, output_format)
        self.llm = llm_provider

    def execute(self, input_data: str) -> TaskResult:
        prompt = f"""
        Task: {self.description}
        Goal: {self.goal}
        Input: {input_data}

        Please provide a response following the format:
        {self.output_format}
        """
        try:
            response = self.llm.generate(prompt)
            return TaskResult(
                success=True,
                content=response,
                metadata={"task_type": self.description}
            )
        except Exception as e:
            return TaskResult(
                success=False,
                content=None,
                error=str(e)
            )

# Configure the LLM provider
llm_provider = GeminiProvider(GeminiConfig(api_key="your_api_key"))

# Create two specific tasks
qa_task = LLMTask(
    description="Question Answering",
    goal="Provide clear and accurate responses",
    output_format="Concise and informative",
    llm_provider=llm_provider
)

summarization_task = LLMTask(
    description="Text Summarization",
    goal="Summarize lengthy content into key points",
    output_format="Bullet points or short paragraph",
    llm_provider=llm_provider
)

# Create agents
qa_agent = Agent(
    role=AgentRole.SPECIALIST,
    tasks=[qa_task],
    llm_provider=llm_provider
)

summarization_agent = Agent(
    role=AgentRole.SUMMARIZER,
    tasks=[summarization_task],
    llm_provider=llm_provider
)

# Wire the agents into a sequential flow
orchestrator = Orchestrator(
    agents={"qa_agent": qa_agent, "summarization_agent": summarization_agent},
    flows={
        "default_flow": Flow(
            agents=["qa_agent", "summarization_agent"],
            context_pass=[True, True]
        )
    },
    mode=ExecutionMode.SEQUENTIAL
)

# process_request is a coroutine, so run it with asyncio
result_process = orchestrator.process_request(
    "What is the impact of AI on modern healthcare?",
    flow_name="default_flow"
)
result = asyncio.run(result_process)
print(result)
```
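With `context_pass=[True, True]`, each agent's output is fed into the next agent's context. As a plain-Python sketch of that sequential control pattern (the names and stub agents here are illustrative, not the framework's internals):

```python
def run_sequential_flow(agents, request):
    """Run callables in order, feeding each one's output to the next."""
    context = request
    outputs = []
    for name, agent in agents:
        context = agent(context)      # pass the previous output forward
        outputs.append((name, context))
    return outputs

# Stub callables standing in for the QA and summarization agents above
qa = lambda text: f"Answer to: {text}"
summarize = lambda text: f"Summary of: {text}"

results = run_sequential_flow(
    [("qa_agent", qa), ("summarization_agent", summarize)],
    "What is the impact of AI on modern healthcare?",
)
for name, output in results:
    print(name, "->", output)
```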
## Demo Videos

Here are some demo videos showcasing GenAItor in action:
## FAQ

**Why should I use this framework over others like LangChain, LangGraph, CrewAI, or LlamaIndex?**

While popular frameworks like LangChain, LangGraph, CrewAI, and LlamaIndex are powerful, they are primarily designed as general-purpose agentic frameworks. Our framework is specifically optimized for Scientific Machine Learning (SciML) applications and offers the following key advantages:

- **Specific focus on Scientific Machine Learning:** unlike generalist frameworks, we prioritize workflows tailored for scientific and physics-based AI tasks, where agent behavior often requires structured reasoning and domain-specific knowledge handling.
- **Greater control and transparency:** our design provides developers with direct access to agent modeling and lifecycle management. You are not tied to predefined abstractions or "black-box" architectures, allowing full customization to match scientific workflows.
- **Reduced learning curve:** our framework minimizes unnecessary complexity. Users can build efficient agents with a much simpler and more intuitive interface, without needing to dive deep into multiple layers of abstraction before achieving results.

**Is this framework compatible with LangChain or LlamaIndex?**

Our framework is independent but compatible with most libraries from the ecosystem. You can integrate components like LlamaIndex for document retrieval or LangChain tools if needed, while still maintaining full control over the agent lifecycle inside our framework.

**What kind of Scientific Machine Learning tasks is this framework suited for?**

This framework is designed for tasks such as:

- Physics-informed problem solving
- Scientific reasoning and simulation control
- AI-driven research assistants for scientific domains
- Autonomous agents for data-driven discovery processes
- Interaction with physical simulation APIs, datasets, and analytical tools

If your use case involves structured reasoning, scientific models, or physics-based tasks, this framework provides the flexibility and precision you need.
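As an illustration of the physics-informed use case, a task in the style of `LLMTask` above might embed the governing equation and boundary conditions directly in the prompt. The snippet below is a self-contained sketch: `StubProvider` and `build_pinn_prompt` are hypothetical stand-ins, not part of the framework's API.

```python
class StubProvider:
    """Stand-in for an LLM provider exposing a generate() method."""
    def generate(self, prompt: str) -> str:
        return f"[model response to {len(prompt)} chars of prompt]"

def build_pinn_prompt(equation: str, boundary: str, question: str) -> str:
    # Prompt in the same style as LLMTask.execute above,
    # specialized for a physics-informed problem
    return (
        "Task: Physics-informed problem solving\n"
        f"Governing equation: {equation}\n"
        f"Boundary conditions: {boundary}\n"
        f"Question: {question}\n"
        "Please answer with the reasoning steps, then the result."
    )

llm = StubProvider()
prompt = build_pinn_prompt(
    equation="u_t = alpha * u_xx",   # 1D heat equation
    boundary="u(0,t) = u(1,t) = 0",
    question="How does the solution decay over time?",
)
print(llm.generate(prompt))
```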
## Contribution Guidelines

We welcome contributions! To contribute:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-name`).
3. Make your changes and commit (`git commit -m 'Add new feature'`).
4. Push to the branch (`git push origin feature-name`).
5. Create a pull request.
## License

This project is licensed under the MIT License.
## Contact

For any questions or suggestions, feel free to open an issue, contact the maintainers at enterpriselearningmachines@gmail.com, or reach the main author, Yan Barros, at https://www.linkedin.com/in/yan-barros-yan.

You can also check our landing page for more news: enterpriselm.github.io/home