Fabriq is a Python SDK for quickly developing low-code Generative AI solutions.
This project has been archived by its maintainers. No new releases are expected.
Fabriq
Fabriq is a powerful, modular framework for building low-code AI solutions, letting you build and deploy conversational AI agents with minimal effort.
NOTICE: This package is currently under active development. The API and functionality are subject to significant changes.
Table of Contents
- Features
- Installation
- Core Components
- Quick Start
- Configuration Guide
- Advanced Features
- Examples
- Troubleshooting
- Contributing
- License
- Support
- Author
Features
✅ Multi-provider LLM Support: OpenAI, Azure OpenAI, HuggingFace, Gemini, Bedrock, Ollama, Groq, Mistral, and more
✅ Comprehensive Document Processing: PDF, Word, Excel, images, audio, and video with OCR support
✅ Advanced RAG Pipeline: Query rewriting, small talk detection, relevance checking, and optional reranking
✅ Multiple Vector Stores: ChromaDB, FAISS, and PGVector support
✅ Agent Framework: Build complex agent workflows with sequential or hierarchical processing
✅ Evaluation Suite: Metrics for answer relevancy, contextual precision, recall, faithfulness, and hallucination
✅ Modular Design: Easy to customize and extend components
✅ Tracing Support: MLflow integration for monitoring and debugging
✅ Low-Code Solutions: Quick deployment with CLI and UI interfaces
Installation
Prerequisites
- Python 3.10, 3.11, or 3.12
- pip
- (Optional) CUDA for GPU acceleration
Installation Steps
Install the package with desired features:
# For all features
pip install fabriq[all]
# For chatbot only
pip install fabriq[chat]
# For agents only
pip install fabriq[agents]
# For document loader only
pip install fabriq[doc-loader]
# For indexing only
pip install fabriq[index]
# For rag pipeline only
pip install fabriq[rag]
# For tools only
pip install fabriq[tools]
# For evaluations only
pip install fabriq[evals]
# For tracing only
pip install fabriq[trace]
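Extras can also be combined in a single command; quoting the requirement keeps shells such as zsh from interpreting the brackets:

```shell
# Install several optional feature sets at once
pip install "fabriq[chat,rag,index]"
```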
Configuration
1. Create a .env file in the project root.
2. Edit the .env file with your desired API keys:
OPENAI_API_KEY=your-openai-key
AZURE_OPENAI_KEY=your-azure-key
MISTRAL_API_KEY=your-mistral-api-key
...
3. Configure the config.yaml file (see Configuration Guide for details).
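As a rough starting point, a minimal config.yaml might look like the sketch below. The section and parameter names shown here (llm, embedding_model, vector_store, and their params) are assumptions for illustration; consult the Configuration Guide for the authoritative schema:

```yaml
# Hypothetical minimal configuration; key names are illustrative assumptions.
llm:
  type: openai
  params:
    model: gpt-4o-mini
    temperature: 0.2

embedding_model:
  type: openai
  params:
    model: text-embedding-3-small

vector_store:
  type: chromadb
  params:
    persist_directory: ./chroma_db
```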
Quick Start
Basic RAG Pipeline
from fabriq.config import ConfigParser
from fabriq.pipelines import RAGPipeline
# Initialize config and RAG pipeline
config = ConfigParser("config.yaml")
rag = RAGPipeline(config)
# Get response
response = rag.get_response("What are the main components of Fabriq?")
print(response["text"])
# Get sources
for chunk in response["chunks"]:
    print(f"Source: {chunk.metadata['source']}")
Chat Interface
Fabriq provides two chat interfaces:
Terminal-based CLI:
fabriq-chat-cli
- Chat Commands:
  - /help: Show help
  - /clear: Clear conversation history
  - /history: Show conversation history
  - /upload <directory>: Upload documents from a directory
  - /exit or /quit: Exit chatbot
Web-based UI (requires Streamlit):
fabriq-chat-ui
Advanced Features
Multimodal Processing
The document loader can process images, tables, audio, and other embedded content within documents:
# Enable in config.yaml
document_loader:
  params:
    multimodal: true
Custom Tools
Create custom tools for agents:
class CustomTool:
    def __init__(self, api_key):
        self.api_key = api_key
        self.description = "Detailed Tool Description"

    def run(self, query):
        # Implement tool logic
        return "Tool result"
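Assuming the tool interface shown above (a description attribute plus a run(query) method — the exact contract Fabriq expects is an assumption here), a concrete tool can be defined and exercised directly before wiring it into an agent:

```python
# Hypothetical tool following the assumed interface above: a description
# attribute and a run(query) method returning a string.
class WeatherTool:
    def __init__(self, api_key):
        self.api_key = api_key
        self.description = "Returns a canned weather report for a city"

    def run(self, query):
        # A real tool would call an external API using self.api_key;
        # a fixed string keeps this sketch self-contained.
        return f"Weather report for {query}: sunny"

tool = WeatherTool(api_key="dummy-key")
print(tool.run("Paris"))  # -> Weather report for Paris: sunny
```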
Examples
Building a Research Assistant
from fabriq.config import ConfigParser
from fabriq.pipelines import RAGPipeline
from fabriq.indexers import DocumentIndexer
# Initialize components
config = ConfigParser("config.yaml")
indexer = DocumentIndexer(config)
rag = RAGPipeline(config)
# Index research papers
indexer.index_documents([
    "paper1.pdf",
    "paper2.pdf",
    "report.docx"
])
# Ask questions
response = rag.get_response("What are the latest advancements in NLP?")
print(response["text"])
# Get sources
for chunk in response["chunks"]:
    print(f"Source: {chunk.metadata['source']}")
Creating a Multi-Agent System
# config.yaml
agent_builder:
  process: hierarchical
  params:
    agents:
      - name: researcher
        role: Research Analyst
        goal: Find relevant information
        backstory: Expert in information retrieval
        tools: [WebSearchTool]
      - name: analyst
        role: Data Analyst
        goal: Analyze information
        backstory: Skilled in data interpretation
      - name: writer
        role: Technical Writer
        goal: Create comprehensive reports
        backstory: Experienced technical communicator
    tasks:
      - name: research
        description: >
          Research the topic: {topic_name}
        expected_output: Research notes
        agent: researcher
      - name: analyze
        description: Analyze research findings
        expected_output: Analysis report
        agent: analyst
        context: [research]
      - name: write
        description: Write final report
        expected_output: Complete report
        agent: writer
        context: [analyze]
from fabriq.config import ConfigParser
from fabriq.agents import AgentBuilder
config = ConfigParser("config.yaml")
agent_builder = AgentBuilder(config)
# Execute the workflow
result = agent_builder.run(inputs={"topic_name": "Artificial Intelligence"})
print(result)
For more details, see the wardrobe directory and example notebooks in the config folder.
Troubleshooting
Common Issues and Solutions
1. Configuration Errors
- Symptom: ValueError: Unsupported LLM model type
- Solution: Verify config.yaml contains valid model types and all required parameters.
2. Document Loading Failures
- Symptom: OCR errors when loading documents
- Solution:
- Ensure Tesseract OCR is installed
- Check file permissions
- Verify document integrity
3. Vector Store Connection Issues
- Symptom: Connection errors with PGVector
- Solution:
- Verify PostgreSQL is running
- Check connection string in config
- Ensure pgvector extension is installed
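If the extension turns out to be missing, it can be enabled once per database from psql (this assumes you have a superuser or sufficiently privileged role):

```sql
-- Enable the pgvector extension (run once per database)
CREATE EXTENSION IF NOT EXISTS vector;
```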
4. LLM API Errors
- Symptom: Rate limit exceeded errors
- Solution:
- Add retry logic in model_kwargs
- Reduce batch size
- Verify API key validity
5. Memory Issues
- Symptom: Out of memory errors with large documents
- Solution:
- Reduce chunk_size in text splitter
- Process documents in smaller batches
- Use smaller embedding models
Debugging Tips
- Enable verbose logging:
import logging
logging.basicConfig(level=logging.DEBUG)
- Test components individually:
# Test LLM
llm = LLM(config)
print(llm.generate("Test prompt"))
# Test embeddings
embeddings = EmbeddingModel(config)
print(embeddings.embed_query("Test query"))
- Use MLflow tracing:
# config.yaml
llm:
  params:
    tracing_enabled: true
    tracing_uri: "http://localhost:5000"
License
Fabriq is released under the MIT License.
Support
For questions and support:
- Open an issue on GitHub
Author