Project description

tock-genai-core

Core generative AI components: models, factories, and error handling used in Tock's Python Gen AI components.

Architecture

The project is structured around the following main components:

  • Models: definitions of the classes and data models used in the application
  • Services/Factories: business logic and the functions used by the routes

Technologies

  • Backend: Python
  • Database:
    • pgvector for the vector database
  • LLM & RAG:
    • LangChain for orchestration
    • Guardrails support
    • Reranker for improving retrieval results
    • Langfuse for monitoring

Available providers

  • LLMProvider (LLM)

    • TGI = "HuggingFaceTextGenInference"
    • OpenAI = "OpenAI"
    • Vllm = "Vllm"
  • GuardrailProvider (Guardrail)

    • BloomZ = "BloomzGuardrail"
  • EMProvider (Embedding)

    • BloomZ = "BloomzEmbeddings"
    • OpenAI = "OpenAI"
    • Vllm = "Vllm"
  • VectorDBProvider (Database)

    • OpenSearch = "OPENSEARCH"
    • PGVector = "PGVECTOR"
  • ContextualCompressorProvider (Contextual Compressor)

    • BloomZ = "BloomzRerank"
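
The provider lists above map naturally to string-valued enums. The following is a minimal sketch, with enum and member names assumed from this page (the actual definitions live inside tock-genai-core and may differ):

```python
from enum import Enum

# Sketch of the provider enums listed above (names assumed from this README).
class LLMProvider(str, Enum):
    TGI = "HuggingFaceTextGenInference"
    OpenAI = "OpenAI"
    Vllm = "Vllm"

class GuardrailProvider(str, Enum):
    BloomZ = "BloomzGuardrail"

class EMProvider(str, Enum):
    BloomZ = "BloomzEmbeddings"
    OpenAI = "OpenAI"
    Vllm = "Vllm"

class VectorDBProvider(str, Enum):
    OpenSearch = "OPENSEARCH"
    PGVector = "PGVECTOR"

class ContextualCompressorProvider(str, Enum):
    BloomZ = "BloomzRerank"

# A str-mixin Enum accepts lookup by raw string value, which is how a setting
# can be built from provider="PGVECTOR" as in the example later on this page.
print(VectorDBProvider("PGVECTOR") is VectorDBProvider.PGVector)  # True
```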

Settings

  • Embedding

    • Parent class

      BaseEMSetting:
          provider: EMProvider
          model: Optional[str]
          api_key: Optional[SecretKey]
          api_base: str
          pooling: Optional[str]
          space_type: Optional[str]
      
    • Child classes

      BloomZEMSetting(BaseEMSetting):
          provider: Literal[EMProvider.BloomZ]
      
      VLLMEMSetting(BaseEMSetting):
          provider: Literal[EMProvider.Vllm]
          model: str
      
      OpenAIEMSetting(BaseEMSetting):
          provider: Literal[EMProvider.OpenAI]
          api_base: str
          api_version: str
          deployment: str
      
  • Contextual compressor

    • Parent class

      BaseCompressorSetting:
          provider: ContextualCompressorProvider
          endpoint: str
          api_key: Optional[SecretKey]
      
    • Child class

      BloomZCompressorSetting(BaseCompressorSetting):
          provider: Literal[ContextualCompressorProvider.BloomZ]
          min_score: float
          max_documents: Optional[int]
          label: Optional[str]
      
  • Database

    • Parent class

      BaseVectorDBSetting:
          index: Optional[str]
          provider: VectorDBProvider
          db_url: str
      
    • Child classes

      OpenSearchSetting(BaseVectorDBSetting):
          provider: Literal[VectorDBProvider.OpenSearch]
          username: SecretKey
          password: SecretKey
          use_ssl: bool
          verify_certs: bool
      
      PGVectorSetting(BaseVectorDBSetting):
          provider: Literal[VectorDBProvider.PGVector]
          username: SecretKey
          password: SecretKey 
          db_name: str
          sslmode: Optional[str]
          namespace: str
      
  • Guardrail

    • Parent class

      BaseGuardrailSetting:
          provider: GuardrailProvider
          api_base: str
          max_score: Optional[float]
          api_key: Optional[SecretKey]
      
    • Child class

      BloomZGuardrailSetting(BaseGuardrailSetting):
          provider: Literal[GuardrailProvider.BloomZ]
      
  • Langfuse

    LangfuseSetting:
        host: Optional[str]
        public_key: Optional[SecretKey]
        secret_key: Optional[SecretKey]
        app_name: Optional[str]
        user_id: Optional[str]
        session_id: Optional[str]
    
  • LLM

    • Parent class

      BaseLLMSetting:
          provider: LLMProvider
          model: Optional[str]
          api_key: Optional[SecretKey]
          temperature: float
      
    • Child classes

      OpenAILLMSetting(BaseLLMSetting):
          provider: Literal[LLMProvider.OpenAI]
          api_base: str
          api_version: str
          deployment: str
      
      HuggingFaceTextGenInferenceLLMSetting(BaseLLMSetting):
          provider: Literal[LLMProvider.TGI]
          repetition_penalty: float
          max_new_tokens: int
          api_base: str
          streaming: bool
      
      VllmSetting(BaseLLMSetting):
          provider: Literal[LLMProvider.Vllm]
          api_base: str
          max_new_tokens: int
          additional_model_kwargs: Optional[Dict[str, Any]]
      

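To make the field listings concrete, here is a hedged sketch of building an OpenAI LLM setting. Plain dataclasses stand in for the library's real models (which appear to be Pydantic-style, given the Literal and Optional annotations); field names come from the listing above, and every concrete value is invented for illustration:

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Stand-ins for the real classes; field names taken from the listing above.
@dataclass
class BaseLLMSetting:
    provider: str                              # an LLMProvider value, e.g. "OpenAI"
    temperature: float
    model: Optional[str] = None
    api_key: Optional[Dict[str, str]] = None   # SecretKey, e.g. {"type": "Raw", "value": "..."}

@dataclass
class OpenAILLMSetting(BaseLLMSetting):
    api_base: str = ""
    api_version: str = ""
    deployment: str = ""

# Illustrative values only.
llm_settings = OpenAILLMSetting(
    provider="OpenAI",
    temperature=0.2,
    model="model_name",
    api_key={"type": "Raw", "value": "sk-..."},
    api_base="https://example-endpoint/v1",
    api_version="2024-02-01",
    deployment="my-deployment",
)
```

The `provider: Literal[...]` field on each child class in the real models acts as a discriminator, letting a factory pick the right implementation from the settings object alone.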
How it works

Each tool used (database, embedding, llm, langfuse, ...) requires a number of parameters, which are defined in the models (the settings classes above).

These classes are then consumed by services and factories to fulfil each use case.

Example: get_vector_db_factory creates a vector-database factory from the application name and the provided embedding settings.

from tock_genai_core import get_vector_db_factory
from tock_genai_core import PGVectorSetting, VLLMEMSetting
from tock_genai_core import DBSetting, EMSetting


db_settings = PGVectorSetting(
    index="first_index",
    provider="PGVECTOR",
    db_url="127.0.0.1:XXXX",
    db_name="rag_sandbox_db",
    sslmode="disable",
    username={
        "type": "Raw",
        "value": "admin"
    },
    password={
        "type": "Raw",
        "value": "example"
    },
    namespace="test-name"
)

em_settings = VLLMEMSetting(
    provider="Vllm",
    model="model_name",
    api_base="https://continue.com/v1"
)



def function_name(db_settings: DBSetting, em_settings: EMSetting):

    # do something

    # Build the vector-database factory from the settings
    vector_db_factory = get_vector_db_factory(db_settings, em_settings)

    # do something
