A Python package to get responses from different LLMs

Project description

🔷 Kynex

Kynex is a modular, pluggable Python framework that simplifies integrating multiple LLM providers such as Google Gemini, Groq, and Ollama through a unified, flexible interface.

Whether you're building AI workflows, chatbots, or prompt-based tools, Kynex allows seamless integration with different LLMs — all through a single API.


🚀 Features

  • Multi-LLM Support: Easily switch between Gemini, Groq, and Ollama with a single interface.
  • Dynamic Inputs: Accept LLM type, model name, API keys, and host dynamically at runtime.
  • LangChain Prompt Templates: Built-in LangChain PromptTemplate for clean prompt formatting.
  • Pluggable Architecture: Easily extend to new LLMs in the future.
  • Ready for Local & Remote Deployments: Supports local Ollama and remote LLM services.
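
The single-interface idea above can be sketched as a small provider registry: each backend implements the same `generate` method, and one dispatch function picks the backend at runtime. The classes and the default-to-Gemini behaviour below are illustrative stand-ins, not Kynex's actual internals:

```python
# Minimal sketch of the pluggable-dispatch idea behind a multi-LLM
# connector. The provider classes are hypothetical stubs, not
# Kynex's real backends.

class GeminiProvider:
    def generate(self, prompt, model_name, **kwargs):
        return f"[gemini:{model_name}] {prompt}"

class OllamaProvider:
    def generate(self, prompt, model_name, **kwargs):
        return f"[ollama:{model_name}] {prompt}"

# Registry mapping an llm_type string to a provider instance;
# adding a new LLM means registering one more entry here.
PROVIDERS = {
    "gemini": GeminiProvider(),
    "ollama": OllamaProvider(),
}

def get_llm_response(prompt, model_name, llm_type=None, **kwargs):
    # Fall back to Gemini when no provider is specified, mirroring
    # the default described in the examples below.
    provider = PROVIDERS[llm_type or "gemini"]
    return provider.generate(prompt, model_name, **kwargs)

print(get_llm_response("hello", "llama3", llm_type="ollama"))
# prints "[ollama:llama3] hello"
```

Because callers only ever touch the dispatch function, switching providers is a one-argument change rather than a code rewrite.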

📦 Installation

pip install kynex

Example Usage:

Create a Python file and import the connector:

from kynex.LLMTools import LLMConnector

Google Gemini Example:


if __name__ == "__main__":
    # Simulated input from a frontend
    request = {
        "prompt": "what is FastAPI",
        "model_name": "your_model",   # e.g. "gemini-1.5-flash"
        "api_key": "your_api_key",
        "llm_type": LLMConnector.LLM_GEMINI
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")  # Can be None — will default to gemini
    )

    print("\n🔹 Response:\n")
    print(response)



Groq Example:

from kynex.LLMTools import LLMConnector


if __name__ == "__main__":
    request = {
        "prompt": "your_prompt",
        "model_name": "your_model",  # Groq model name
        "api_key": "your_api_key",
        "llm_type": "groq"
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")
    )

    print("\n🔹 Groq Response:\n")
    print(response)

Ollama Example (Local or Remote):

from kynex.LLMTools import LLMConnector

response = LLMConnector.get_llm_response(
    prompt="your_prompt",
    model_name="your_model",  # e.g. "llama3"
    llm_type=LLMConnector.LLM_OLLAMA,
    host="your_host"  # or remote URL if exposed via proxy
)

print(response)

Project details


Download files

Download the file for your platform.

Source Distribution

kynex-0.2.4.tar.gz (6.8 kB view details)

Uploaded Source

Built Distribution

kynex-0.2.4-py3-none-any.whl (7.9 kB view details)

Uploaded Python 3

File details

Details for the file kynex-0.2.4.tar.gz.

File metadata

  • Download URL: kynex-0.2.4.tar.gz
  • Upload date:
  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.4.tar.gz
  • SHA256: 594018ddd7a75924bc45fdf328f07b1ef72bacf6fbaaffd67868b5389c69b613
  • MD5: f8e69138e4e0338060e489291c8c696f
  • BLAKE2b-256: 928a08a89b54e64ded699e45340cd5087617556056c440b9c98680539ba0c335


File details

Details for the file kynex-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: kynex-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 7.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.4-py3-none-any.whl
  • SHA256: f60b5570d65440b3a8f8ccfb3ee58e5a76da274f3f892f12de20b74b7062c3c1
  • MD5: dda21c12773882c46d8ef91d43a09e97
  • BLAKE2b-256: ca32c14e7ae8681e0d89b910e2610df24a01dc8099dd3bc8ae6008ce97c892cc

