A Python package to get responses from different LLMs

Project description

🔷 Kynex

Kynex is a modular, pluggable Python framework that simplifies integrating multiple LLM providers such as Google Gemini, Groq, and Ollama through a unified, flexible interface.

Whether you're building AI workflows, chatbots, or prompt-based tools, Kynex allows seamless integration with different LLMs — all through a single API.


🚀 Features

  • Multi-LLM Support: Easily switch between Gemini, Groq, and Ollama with a single interface.
  • Dynamic Inputs: Accept LLM type, model name, API keys, and host dynamically at runtime.
  • LangChain Prompt Templates: Built-in LangChain PromptTemplate for clean prompt formatting.
  • Pluggable Architecture: Easily extend to new LLMs in the future.
  • Ready for Local & Remote Deployments: Supports local Ollama and remote LLM services.
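The pluggable design described above can be pictured as a small provider registry keyed by `llm_type`. The sketch below is illustrative only and uses hypothetical names (`register`, `_PROVIDERS`, `echo_provider`); it is not the actual Kynex internals:

```python
# Illustrative sketch of a pluggable provider registry — NOT the Kynex
# internals; all names here are hypothetical, shown only to convey the idea.
from typing import Callable, Dict

# Each provider is a callable that turns a prompt string into a response string.
_PROVIDERS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that registers a provider function under a string key."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        _PROVIDERS[name] = fn
        return fn
    return wrap

@register("echo")
def echo_provider(prompt: str) -> str:
    # Stand-in for a real Gemini/Groq/Ollama backend call.
    return f"echo: {prompt}"

def get_llm_response(prompt: str, llm_type: str = "echo") -> str:
    """Dispatch the prompt to whichever provider matches llm_type."""
    try:
        provider = _PROVIDERS[llm_type]
    except KeyError:
        raise ValueError(f"unknown llm_type: {llm_type!r}")
    return provider(prompt)

print(get_llm_response("hello"))  # echo: hello
```

Under this scheme, supporting a new LLM is a matter of adding one decorated function, which is the sense in which the architecture is "easily extended".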

📦 Installation

pip install kynex

Example Usage:

Create a Python file and import the connector:

from kynex.LLMTools import LLMConnector

Google Gemini Example:


if __name__ == "__main__":
    # Simulated input from a frontend
    request = {
        "prompt": "What is FastAPI?",
        "model_name": "your_model",  # e.g. gemini-1.5-flash
        "api_key": "your_api_key",
        "llm_type": LLMConnector.LLM_GEMINI  # the constant, not the quoted string "LLMConnector.LLM_GEMINI"
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")  # can be None; defaults to Gemini
    )

    print("\n🔹 Response:\n")
    print(response)



Groq Example:

from kynex.LLMTools import LLMConnector


if __name__ == "__main__":
    request = {
        "prompt": "your_prompt",
        "model_name": "your_model",  # a Groq model name
        "api_key": "your_api_key",
        "llm_type": "groq"
    }

    response = LLMConnector.get_llm_response(
        prompt=request["prompt"],
        model_name=request["model_name"],
        api_key=request["api_key"],
        llm_type=request.get("llm_type")
    )

    print("\n🔹 Groq Response:\n")
    print(response)

Ollama Example (Local or Remote):

from kynex.LLMTools import LLMConnector

response = LLMConnector.get_llm_response(
    prompt="your_prompt",
    model_name="your_model",  # e.g. llama3
    llm_type=LLMConnector.LLM_OLLAMA,
    host="your_host"  # local Ollama host, or a remote URL if exposed via proxy
)

print(response)
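By the time a prompt reaches `get_llm_response`, it is a plain string, so callers can pre-format it however they like. As a stand-in for the LangChain `PromptTemplate` formatting mentioned in the features list, here is a minimal standard-library sketch (the template text and `build_prompt` helper are illustrative, not part of Kynex):

```python
# Minimal prompt-templating sketch using only the standard library.
# Kynex uses LangChain's PromptTemplate internally; this stand-in just
# shows the idea of separating the template from its variables.
from string import Template

PROMPT_TEMPLATE = Template(
    "You are a helpful assistant.\n"
    "Answer the following question concisely:\n"
    "$question"
)

def build_prompt(question: str) -> str:
    # Substitute the runtime variable into the fixed template.
    return PROMPT_TEMPLATE.substitute(question=question)

prompt = build_prompt("What is FastAPI?")
print(prompt.splitlines()[-1])  # What is FastAPI?
```

The resulting string can be passed directly as the `prompt` argument to any of the provider examples above.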

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

kynex-0.2.1.tar.gz (6.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

kynex-0.2.1-py3-none-any.whl (7.4 kB)

Uploaded Python 3

File details

Details for the file kynex-0.2.1.tar.gz.

File metadata

  • Download URL: kynex-0.2.1.tar.gz
  • Upload date:
  • Size: 6.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.1.tar.gz

  • SHA256: b122509591ab36651b8b36b5c9966c6c2c8a45a7387cebb6e6c5ed96797c0033
  • MD5: f6696b10de5e4b89b6f292758dc05c14
  • BLAKE2b-256: 02be7cb64f0b39214fc2bec5c87c83b5324e2b7baf1a81b4b2ff1d0170c2c980

See more details on using hashes here.

File details

Details for the file kynex-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: kynex-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 7.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.11

File hashes

Hashes for kynex-0.2.1-py3-none-any.whl

  • SHA256: 573840308b1337eee6135d78c08c52669e12d1a2090310717ed41fa5450dc04d
  • MD5: 5400a81a39c685c46c9fecac8e0e2792
  • BLAKE2b-256: 4bcd76e8a2be5c0c24f5a24628ae85c616e80ffccac9dd13d71407f040fe4f46

See more details on using hashes here.
