
A Python package to get responses from different LLMs through a single interface

Project description

🔷 Kynex

Kynex is a modular, pluggable Python framework that simplifies integrating multiple LLM providers such as Google Gemini, Groq, and Ollama through a unified, flexible interface.

Whether you're building AI workflows, chatbots, or prompt-based tools, Kynex lets you integrate different LLMs seamlessly, all through a single API.


🚀 Features

  • Multi-LLM Support: Easily switch between Gemini, Groq, and Ollama with a single interface.
  • Dynamic Inputs: Accept LLM type, model name, API keys, and host dynamically at runtime.
  • LangChain Prompt Templates: Built-in LangChain PromptTemplate for clean prompt formatting.
  • Pluggable Architecture: Easily extend to new LLMs in the future.
  • Ready for Local & Remote Deployments: Supports local Ollama and remote LLM services.
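The pluggable architecture described above boils down to a dispatch table that maps an llm_type value to a provider-specific handler, so adding a provider means adding one entry. Kynex's actual internals aren't shown on this page; the following is a minimal sketch of the pattern with entirely hypothetical handler names, not Kynex's real code:

```python
# Sketch of a pluggable LLM dispatch table. All names here are
# hypothetical illustrations, not Kynex's actual internals.

def _call_gemini(prompt, model_name, **kwargs):
    return f"[gemini:{model_name}] {prompt}"

def _call_groq(prompt, model_name, **kwargs):
    return f"[groq:{model_name}] {prompt}"

def _call_ollama(prompt, model_name, host=None, **kwargs):
    return f"[ollama:{model_name}@{host}] {prompt}"

# Extending to a new LLM is one new function plus one dict entry.
_HANDLERS = {
    "gemini": _call_gemini,
    "groq": _call_groq,
    "ollama": _call_ollama,
}

def get_llm_response(prompt, model_name, llm_type=None, **kwargs):
    # llm_type may be None; fall back to a default provider.
    handler = _HANDLERS.get(llm_type or "gemini")
    if handler is None:
        raise ValueError(f"Unsupported llm_type: {llm_type!r}")
    return handler(prompt, model_name, **kwargs)

print(get_llm_response("hello", "llama3", llm_type="ollama", host="localhost"))
```

The dictionary lookup is what makes the design "pluggable": callers never branch on the provider themselves, they just pass a type tag and the table routes the call.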

📦 Installation

pip install kynex

Example Usage:

Create a Python file and import the connector:

from kynex.LLMTools import LLMConnector

Google Gemini Example:


if __name__ == "__main__":
    # Simulated input from a frontend
    request = {
        "prompt": "what is fastapi",
        "model_name": "your_model",   # e.g. gemini-1.5-flash
        "api_key": "your_api_key",
        "llm_type": LLMConnector.LLM_GEMINI  # pass the constant, not the string "LLMConnector.LLM_GEMINI"
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")  # can be None; will default to Gemini
    )

    print("\n🔹 Response:\n")
    print(response)



Groq Example:

from kynex.LLMTools import LLMConnector


if __name__ == "__main__":
    request = {
        "prompt": "your_prompt",
        "model_name": "your_model",  # Groq model name
        "api_key": "your_api_key",
        "llm_type": "groq"
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")
    )

    print("\n🔹 Groq Response:\n")
    print(response)

Ollama Example (Local or Remote):

from kynex.LLMTools import LLMConnector

response = LLMConnector.get_llm_response(
    prompt="your_prompt",
    model_name="your_model",  # e.g. llama3
    llm_type=LLMConnector.LLM_OLLAMA,
    host="your_host"  # local host, or a remote URL if exposed via proxy
)

print(response)
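All three examples above feed the same request-dict shape into get_llm_response. When that dict really does come from a frontend, validating it before the call turns opaque provider errors into clear ones. A minimal sketch of such a check; the helper and its rules are our illustration, not part of Kynex:

```python
# Hypothetical helper: validate a frontend request dict before passing it
# to LLMConnector.get_llm_response. Not part of the Kynex API.

REQUIRED_FIELDS = ("prompt", "model_name")
KNOWN_TYPES = {"gemini", "groq", "ollama", None}  # None falls back to the default

def validate_request(request: dict) -> dict:
    missing = [f for f in REQUIRED_FIELDS if not request.get(f)]
    if missing:
        raise ValueError(f"Missing required field(s): {', '.join(missing)}")
    llm_type = request.get("llm_type")
    if llm_type not in KNOWN_TYPES:
        raise ValueError(f"Unsupported llm_type: {llm_type!r}")
    # Of the providers shown here, only local Ollama needs no API key.
    if llm_type != "ollama" and not request.get("api_key"):
        raise ValueError(f"api_key is required for {llm_type or 'the default provider'}")
    return request

req = validate_request({
    "prompt": "what is fastapi",
    "model_name": "llama3",
    "llm_type": "ollama",
})
```

With the dict validated, it can be unpacked into get_llm_response exactly as in the examples above.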

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

kynex-0.2.3.tar.gz (6.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

kynex-0.2.3-py3-none-any.whl (7.7 kB)

Uploaded Python 3

File details

Details for the file kynex-0.2.3.tar.gz.

File metadata

  • Download URL: kynex-0.2.3.tar.gz
  • Upload date:
  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.3.tar.gz
  • SHA256: 4c1edecdc3c5a62d2a52339cb45a9e6275ade5206a295008d652f1a47ec937b2
  • MD5: 9ff916dbb4964df5cf3c233c28343f51
  • BLAKE2b-256: 17b3ea64a717b8605b9d3b4f2aa5354efd937850a6cea8be81c26d559ab564db


File details

Details for the file kynex-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: kynex-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 7.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.3-py3-none-any.whl
  • SHA256: f2f11b68a0d22e913c7e52459763b1878b133e095d5a76a0c8e7c5c79aba3ed5
  • MD5: d33931fb3b6b7db084e97e28967e5791
  • BLAKE2b-256: a7f6ce650f97ebb9f2d24beb517ab3bcb184c58ca56e068bfcb61458c26e435f

