
A package for interacting with the Mlchain Service-API


mlchain-client

A Mlchain App Service-API client for building web applications that call the Service-API.

Usage

First, install the mlchain-client Python SDK package:

pip install mlchain-client

Then write your code with the SDK:

  • Completion generation with blocking response_mode
from mlchain_client import CompletionClient

api_key = "your_api_key"

# Initialize CompletionClient
completion_client = CompletionClient(api_key)

# Create Completion Message using CompletionClient
completion_response = completion_client.create_completion_message(inputs={"query": "What's the weather like today?"},
                                                                  response_mode="blocking", user="user_id")
completion_response.raise_for_status()

result = completion_response.json()

print(result.get('answer'))
  • Completion using a vision model, such as gpt-4-vision
from mlchain_client import CompletionClient

api_key = "your_api_key"

# Initialize CompletionClient
completion_client = CompletionClient(api_key)

files = [{
    "type": "image",
    "transfer_method": "remote_url",
    "url": "your_image_url"
}]

# files = [{
#     "type": "image",
#     "transfer_method": "local_file",
#     "upload_file_id": "your_file_id"
# }]

# Create Completion Message using CompletionClient
completion_response = completion_client.create_completion_message(inputs={"query": "Describe the picture."},
                                                                  response_mode="blocking", user="user_id", files=files)
completion_response.raise_for_status()

result = completion_response.json()

print(result.get('answer'))
  • Chat generation with streaming response_mode
import json
from mlchain_client import ChatClient

api_key = "your_api_key"

# Initialize ChatClient
chat_client = ChatClient(api_key)

# Create Chat Message using ChatClient
chat_response = chat_client.create_chat_message(inputs={}, query="Hello", user="user_id", response_mode="streaming")
chat_response.raise_for_status()

for line in chat_response.iter_lines(decode_unicode=True):
    # Each streamed event line looks like "data: {...json...}"; strip the prefix.
    line = line.split('data:', 1)[-1]
    if line.strip():
        chunk = json.loads(line.strip())
        print(chunk.get('answer'))
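If you capture the streamed lines rather than printing them, the answer fragments can be joined into one reply. Below is a minimal sketch of that parsing step as a pure helper, assuming the `data: {...}` line format shown in the loop above; `extract_answer_chunks` and the sample lines are illustrative, not part of the SDK.

```python
import json

def extract_answer_chunks(sse_lines):
    """Collect the 'answer' fragments from streamed event lines into one string.

    Assumes each non-empty line looks like 'data: {...}', as in the
    streaming example above; empty keep-alive lines are skipped.
    """
    chunks = []
    for line in sse_lines:
        payload = line.split('data:', 1)[-1].strip()
        if not payload:
            continue
        event = json.loads(payload)
        answer = event.get('answer')
        if answer:
            chunks.append(answer)
    return ''.join(chunks)

# Example with captured stream lines:
sample = [
    'data: {"answer": "Hello"}',
    '',
    'data: {"answer": ", world"}',
]
print(extract_answer_chunks(sample))  # Hello, world
```

The same helper works with `chat_response.iter_lines(decode_unicode=True)` as its input.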
  • Chat using a vision model, such as gpt-4-vision
from mlchain_client import ChatClient

api_key = "your_api_key"

# Initialize ChatClient
chat_client = ChatClient(api_key)

files = [{
    "type": "image",
    "transfer_method": "remote_url",
    "url": "your_image_url"
}]

# files = [{
#     "type": "image",
#     "transfer_method": "local_file",
#     "upload_file_id": "your_file_id"
# }]

# Create Chat Message using ChatClient
chat_response = chat_client.create_chat_message(inputs={}, query="Describe the picture.", user="user_id",
                                                response_mode="blocking", files=files)
chat_response.raise_for_status()

result = chat_response.json()

print(result.get("answer"))
  • Uploading a file for use with a vision model
from mlchain_client import MlchainClient

api_key = "your_api_key"

# Initialize Client
mlchain_client = MlchainClient(api_key)

file_path = "your_image_file_path"
file_name = "panda.jpeg"
mime_type = "image/jpeg"

with open(file_path, "rb") as file:
    files = {
        "file": (file_name, file, mime_type)
    }
    response = mlchain_client.file_upload("user_id", files)

    result = response.json()
    print(f'upload_file_id: {result.get("id")}')
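The returned `upload_file_id` plugs into the `local_file` variant of the `files` payload shown (commented out) in the vision examples above. A minimal sketch of that wiring, where `files_from_upload` is an illustrative helper rather than an SDK function:

```python
def files_from_upload(upload_file_id):
    """Build the 'files' payload for a previously uploaded image.

    Mirrors the local_file variant from the vision examples; the id
    comes from the file_upload response.
    """
    return [{
        "type": "image",
        "transfer_method": "local_file",
        "upload_file_id": upload_file_id,
    }]

files = files_from_upload("your_file_id")
# Then pass it to a chat or completion call, e.g.:
# chat_response = chat_client.create_chat_message(
#     inputs={}, query="Describe the picture.", user="user_id",
#     response_mode="blocking", files=files)
```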
  • Other APIs
from mlchain_client import ChatClient

api_key = "your_api_key"

# Initialize Client
client = ChatClient(api_key)

# Get App parameters
parameters = client.get_application_parameters(user="user_id")
parameters.raise_for_status()

print('[parameters]')
print(parameters.json())

# Get Conversation List (only for chat)
conversations = client.get_conversations(user="user_id")
conversations.raise_for_status()

print('[conversations]')
print(conversations.json())

# Get Message List (only for chat)
messages = client.get_conversation_messages(user="user_id", conversation_id="conversation_id")
messages.raise_for_status()

print('[messages]')
print(messages.json())

# Rename Conversation (only for chat)
rename_conversation_response = client.rename_conversation(conversation_id="conversation_id",
                                                          name="new_name", user="user_id")
rename_conversation_response.raise_for_status()

print('[rename result]')
print(rename_conversation_response.json())
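A common next step is to pull conversation ids out of the `get_conversations` response so you can fetch or rename them. The sketch below assumes the body is shaped like `{"data": [{"id": ...}, ...]}`; that shape is an assumption about the Service-API response and may differ across versions, and `conversation_ids` is an illustrative helper, not an SDK function.

```python
def conversation_ids(conversations_json):
    """Pull conversation ids out of a get_conversations response body.

    Assumes the body looks like {"data": [{"id": ...}, ...]}; adjust
    if your service version returns a different shape.
    """
    return [item.get("id") for item in conversations_json.get("data", [])]

sample = {"data": [{"id": "c1", "name": "chat one"}, {"id": "c2"}]}
print(conversation_ids(sample))  # ['c1', 'c2']
```

With a live response this would be `conversation_ids(conversations.json())`, after which each id can be passed to `get_conversation_messages` or `rename_conversation`.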
