A Python project for FloTorch


🚀 FloTorch-core

FloTorch-core is a modular and extensible Python framework for building LLM-powered RAG (Retrieval-Augmented Generation) pipelines. It offers plug-and-play components for embeddings, chunking, retrieval, gateway-based LLM calls, and RAG evaluation.


✨ Features

  • 🧩 Text Chunking (Fixed-size, Hierarchical)
  • 🧠 Embedding Models (Titan, Cohere, Bedrock)
  • 🔍 Document Retrieval (OpenSearch + Vector Storage)
  • 💻 Inferencers (Bedrock, SageMaker, Gateway)
  • 🔌 Unified LLM Gateway (OpenAI, Bedrock, Ollama, etc.)
  • 📏 RAG Evaluation (RAGAS Metrics)
  • ☁️ AWS Integration (S3, DynamoDB, Lambda)
  • 🧢 Built-in Testing Support
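
The fixed-size chunking strategy listed above can be sketched conceptually. This is an illustrative sketch of the technique only, not FloTorch's implementation; the function name and signature are made up:

```python
def fixed_size_chunks(text: str, size: int, overlap: int = 0) -> list[str]:
    """Split text into chunks of `size` characters, each overlapping
    the previous chunk by `overlap` characters."""
    if not 0 <= overlap < size:
        raise ValueError("overlap must be non-negative and smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Overlap preserves context across chunk boundaries, which generally improves retrieval quality at the cost of some index redundancy.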

📆 Installation

pip install FloTorch-core

To install development dependencies:

pip install "FloTorch-core[dev]"

📂 Project Structure

flotorch/
├── inferencer/         # LLM inferencers (Bedrock, SageMaker, gateway)
├── embedding/          # Embedding models
├── chunking/           # Text chunking logic
├── evaluator/          # RAG evaluation (RAGAS)
├── storage/            # Vector DB, S3, DynamoDB
├── util/               # Utilities and helpers
├── rerank/             # Document reranking
├── guardrails/         # Guardrail integrations
└── reader/             # Readers for JSON/PDF

📖 Usage Example

Reader

from flotorch_core.reader.json_reader import JSONReader
from flotorch_core.storage.s3_storage import S3StorageProvider

json_reader = JSONReader(S3StorageProvider(<S3 bucket>))
json_reader.read(<path>)

Embedding

from flotorch_core.embedding.embedding_registry import embedding_registry

# Example model ids: amazon.titan-text-express-v1,
# amazon.titan-embed-text-v2:0, cohere.embed-multilingual-v3
embedding_class = embedding_registry.get_model(<model id>)
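
A registry like this typically maps model ids to classes via a decorator. The following is a conceptual sketch of that pattern, not flotorch_core's actual code; all names here are hypothetical:

```python
class ModelRegistry:
    """Minimal sketch of a model registry: maps model ids to classes."""

    def __init__(self):
        self._models = {}

    def register(self, model_id):
        # Decorator that records the class under the given model id
        def decorator(cls):
            self._models[model_id] = cls
            return cls
        return decorator

    def get_model(self, model_id):
        return self._models[model_id]


registry = ModelRegistry()

@registry.register("amazon.titan-embed-text-v2:0")
class TitanEmbedding:
    pass
```

Looking a model up by id then returns the registered class, ready to be instantiated with whatever constructor arguments it needs.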

Vector storage (OpenSearch)

from flotorch_core.storage.db.vector.open_search import OpenSearchClient

vector_storage_object = OpenSearchClient(
    <opensearch_host>, 
    <opensearch_port>, 
    <opensearch_username>, 
    <opensearch_password>, 
    <index_id>, 
    <embedding object>
)

Vector storage (Bedrock Knowledge Base)

from flotorch_core.storage.db.vector.bedrock_knowledgebase_storage import BedrockKnowledgeBaseStorage

vector_storage_object = BedrockKnowledgeBaseStorage(
    knowledge_base_id=<knowledge_base_id>,
    region=<aws_region>
)

Guardrails over vector storage

from flotorch_core.storage.db.vector.guardrails_vector_storage import GuardRailsVectorStorage

# BedrockGuardrail is provided by flotorch_core's guardrails module
base_guardrails = BedrockGuardrail(<guardrail_id>, <guardrail_version>, <aws_region>)
vector_storage_object = GuardRailsVectorStorage(
    vector_storage_object, 
    base_guardrails,
    <enable_prompt_guardrails(True/False)>,
    <enable_context_guardrails(True/False)>
)
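
GuardRailsVectorStorage follows the decorator pattern: it wraps an existing storage object and screens inputs before delegating to it. A minimal sketch of that idea, with hypothetical names (this is not flotorch_core's API):

```python
class GuardedStorage:
    """Wraps a storage backend and applies a guardrail check before each search."""

    def __init__(self, inner, is_allowed):
        self.inner = inner
        self.is_allowed = is_allowed  # callable: query -> bool

    def search(self, query):
        if not self.is_allowed(query):
            raise PermissionError("query blocked by guardrail")
        return self.inner.search(query)


class FakeStorage:
    """Stand-in backend for illustration."""

    def search(self, query):
        return [f"doc matching {query!r}"]


# Block queries mentioning sensitive terms; allow everything else
store = GuardedStorage(FakeStorage(), is_allowed=lambda q: "ssn" not in q.lower())
```

Because the wrapper exposes the same interface as the wrapped object, callers do not need to know whether guardrails are enabled.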

Inferencer

from flotorch_core.inferencer.bedrock_inferencer import BedrockInferencer
from flotorch_core.inferencer.gateway_inferencer import GatewayInferencer
from flotorch_core.inferencer.sagemaker_inferencer import SageMakerInferencer

# Choose the inferencer that matches your deployment:

inferencer = BedrockInferencer(
    <model_id>,
    <region>,
    <n_shot_prompts>,
    <temperature>,
    <n_shot_prompt_guide_obj>
)

inferencer = GatewayInferencer(
    model_id=<model_id>,
    api_key=<api_key>,
    base_url=<base_url>,
    n_shot_prompts=<n_shot_prompts>,
    n_shot_prompt_guide_obj=<n_shot_prompt_guide_obj>
)

inferencer = SageMakerInferencer(
    <model_id>,
    <region>,
    <arn_role>,
    <n_shot_prompts>,
    <temperature>,
    <n_shot_prompt_guide_obj>
)

Guardrails over inferencer

from flotorch_core.inferencer.guardrails.guardrails_inferencer import GuardRailsInferencer

# Wraps any inferencer; base_guardrails is the BedrockGuardrail instance from above
inferencer = GuardRailsInferencer(inferencer, base_guardrails)

📬 Maintainer

Shiva Krishna
📧 Email: shiva.krishnaah@gmail.com

Adil Raza
📧 Email: adilraza.9752@gmail.com


📄 License

This project is licensed under the MIT License.

