Katonic SDK
A Python SDK for interacting with Katonic's AI/ML platform, providing easy access to language models, vision models, and comprehensive request logging capabilities.
Features
- 🤖 Language Model Integration: Generate completions from various LLM models
- 👁️ Vision Model Support: Process text and image inputs with vision models
- 📊 Request Logging: Comprehensive logging with token usage and cost tracking
- ⚡ Simple API: Easy-to-use Python functions instead of raw API calls
- 🔍 Monitoring: Track and analyze LLM requests across your applications
Installation
pip install katonic
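After installation, a quick import check (using the same import paths shown later in this document) confirms the package is available:
# Sanity check: these imports should succeed after a successful installation
from katonic.llm import generate_completion
from katonic.llm.log_requests import log_request_to_platform
print("katonic SDK is ready to use")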
Model Completion
Import
from katonic.llm import generate_completion
generate_completion
Generates a completion from a specified model.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model_id | str | ✅ | Unique identifier of the model (found in My Model Library under LLM Management) |
| data | dict | ✅ | Input payload containing query and optional image_url |
| data.query | str | ✅ | The prompt or question |
| data.image_url | str | ❌ | Image URL for vision models |
Returns
- Type: str
- Description: Model-generated response as plain text
Finding Your Model ID
To find your model ID:
- Navigate to My Model Library under LLM Management in the Katonic UI
- Copy the model ID from the interface
- Use this ID in the model_id parameter
Examples
Text Model
from katonic.llm import generate_completion
result = generate_completion(
    model_id="688b552061aa55897ae98fdc",
    data={"query": "Tell me a fun fact about space."}
)
print(result)
# Output: "Space is completely silent because there is no atmosphere to carry sound waves."
Vision Model (Text + Image)
from katonic.llm import generate_completion
result = generate_completion(
    model_id="688b552061aa55897ae98fdc",
    data={
        "query": "Describe what is in this image.",
        "image_url": "https://example.com/photo.jpg"
    }
)
print(result)
Response Format
The response will contain the model's generated text.
Example:
"Space is completely silent because there is no atmosphere to carry sound waves."
Request Logging
Import
from katonic.llm.log_requests import log_request_to_platform
log_request_to_platform
Logs user queries, responses, token usage, and cost details for monitoring and analysis.
The model_name value can be found in the Katonic platform UI.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| input_query | str | ✅ | The original user query |
| response | str | ✅ | The LLM's response to the query |
| user_name | str | ✅ | The user's email or unique identifier |
| model_name | str | ✅ | The LLM model name (e.g., "Openai/gpt-5-nano", "Anthropic/claude") |
| product_type | str | ✅ | Type of product (e.g., "Ace") |
| product_name | str | ✅ | Name of the product where the query was made |
| project_name | str | ✅ | The project associated with the query |
| latency | float | ✅ | API latency in seconds |
| status | str | ✅ | Request status (e.g., "success", "failed") |
| answer_validity | bool | ❌ | Whether the response is valid/usable (default: False) |
| embedding_model_name | str | ❌ | Name of the embedding model, if applicable |
Returns
- Success: Returns a message_id (string) for tracking the logged request
- Failure: Returns None
Example
from katonic.llm.log_requests import log_request_to_platform
# Log a request with comprehensive details
message_id = log_request_to_platform(
    input_query="tell me about katonic",
    response="Katonic is a modern MLOps platform that helps enterprises manage AI/ML workflows efficiently.",
    user_name="developer@company.com",
    model_name="Openai/gpt-5-nano",
    product_type="Ace",
    product_name="Ace",
    project_name="Ace",
    latency=0.42,  # API response time in seconds
    status="success",  # status of the request
    embedding_model_name=None
)
print(f"Message logged with ID: {message_id}")
Response Format
On successful logging:
✅ Cost has been added successfully.
Message logged with ID: 650e95d2a8c7b123f5c123ab
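Because log_request_to_platform returns None when logging fails, it is worth checking the return value before relying on it. A minimal sketch, reusing the message_id from the example above:
# Sketch: handle the case where logging failed and no message_id was returned
if message_id is None:
    print("Request logging failed; check the console warning for details.")
else:
    print(f"Track this request later with message ID: {message_id}")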
Complete Workflow Example
Here's how to use both methods together:
from katonic.llm import generate_completion
from katonic.llm.log_requests import log_request_to_platform
import time
# Step 1: Generate completion
start_time = time.time()
query = "Explain quantum computing in simple terms"
model_id = "688b552061aa55897ae98fdc"
result = generate_completion(
    model_id=model_id,
    data={"query": query}
)
# Step 2: Calculate latency
latency = time.time() - start_time
# Step 3: Log the request
message_id = log_request_to_platform(
    input_query=query,
    response=result,
    user_name="developer@company.com",
    model_name="Openai/gpt-4",
    product_type="Research",
    product_name="QA Assistant",
    project_name="Science Education",
    latency=latency,
    status="success"
)
print(f"Response: {result}")
print(f"Logged with ID: {message_id}")
Notes
Model Completion
- Use the correct model_id provided in the Katonic UI
- For image-based models, ensure the image_url is accessible over the internet (a quick reachability check is sketched below)
- The SDK handles API calls internally, so you only need to focus on inputs and outputs
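One way to confirm an image URL is reachable before passing it to a vision model is a lightweight HEAD request. This is an illustrative sketch using the Python standard library, not part of the SDK:
from urllib.request import Request, urlopen

def image_url_is_reachable(url, timeout=5.0):
    """Return True if the URL answers a HEAD request with a non-error status."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as response:
            return response.status < 400
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

image_url = "https://example.com/photo.jpg"
if image_url_is_reachable(image_url):
    print("Image URL is reachable; safe to pass to generate_completion.")
else:
    print("Image URL is not reachable; the vision model call may fail.")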
Request Logging
- The SDK automatically calculates token usage and cost (if pricing data is available)
- If the request fails, you will see a warning message in the console
- Use the returned message_id to track logged requests
Requirements
- Python 3.7+
- Internet connection for API calls
- Valid Katonic platform credentials
Support
For issues, questions, or feature requests, please contact the Katonic support team or refer to the official Katonic platform documentation.
License
Please refer to your Katonic platform agreement for licensing terms.