
A library for managing LLMs


Description

ModelhubClient: A Python client for the Modelhub API

Installation

pip install puyuan_modelhub --user

Usage

OpenAI Client

from openai import OpenAI

client = OpenAI(api_key="xxx", base_url="xxxx")

client.chat.xxxxx
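
If your Modelhub deployment exposes an OpenAI-compatible endpoint, the official openai SDK can be pointed at it directly. A minimal sketch, assuming the placeholder key, base URL, and model name are replaced with your own values:

from openai import OpenAI

client = OpenAI(api_key="xxx", base_url="xxxx")

# standard chat-completions call; the model name depends on what your deployment serves
response = client.chat.completions.create(
    model="ChatGLM3",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)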

ModelhubClient

Initialize a client

from modelhub import ModelhubClient

client = ModelhubClient(
    host="https://xxxx.com/api/",
    user_name="xxxx",
    user_password="xxxx",
    model="xxx", # Optional
)

Get supported models

client.supported_models

Get supported parameters for a model

client.get_supported_params("Minimax")
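
For example, you might list the available models and check which parameters a model accepts before passing them to chat (assuming supported_models and get_supported_params return plain lists):

print(client.supported_models)                 # e.g. ["ChatGLM3", "Minimax", "m3e", ...]
print(client.get_supported_params("Minimax"))  # parameter names accepted by Minimax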

Perform a chat query

response = client.chat(
    query,
    model="xxx", # Optional(use model in client construction)
    history=history,
    parameters=dict(
        key1=value1,
        key2=value2
    )
)
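
For example, a minimal call might look like the following (temperature is an assumed parameter name; check get_supported_params for what your model accepts):

response = client.chat(
    "Introduce yourself in one sentence.",
    model="ChatGLM3",
    parameters=dict(temperature=0.7),
)
print(response["generated_text"])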

Get model embeddings

client.get_embeddings(["你好", "Hello"], model="m3e")

gemini-pro embeddings require extra parameters

The gemini-pro backend uses the embed_content method to generate embeddings; it supports the following task types (task_type):

Task Type            Description
RETRIEVAL_QUERY      Specifies the given text is a query in a search/retrieval setting.
RETRIEVAL_DOCUMENT   Specifies the given text is a document in a search/retrieval setting. Using this task type requires a title.
SEMANTIC_SIMILARITY  Specifies the given text will be used for Semantic Textual Similarity (STS).
CLASSIFICATION       Specifies that the embeddings will be used for classification.
CLUSTERING           Specifies that the embeddings will be used for clustering.
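
A sketch of passing these extra parameters through get_embeddings, assuming the client forwards a parameters dict (with task_type and, for RETRIEVAL_DOCUMENT, a title) to the gemini-pro backend:

embeddings = client.get_embeddings(
    ["ModelhubClient is a Python client for the Modelhub API"],
    model="gemini-pro",
    parameters=dict(
        task_type="RETRIEVAL_DOCUMENT",   # one of the task types listed above
        title="Modelhub documentation",   # RETRIEVAL_DOCUMENT requires a title
    ),
)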

Response structure

generated_text: the text returned by the model
history: the updated conversation history
details: generation details (tokens used, request duration, ...)

History is currently only supported with ChatGLM3.

BaseMessage is the unit of history.

# import some pre-defined message types
from modelhub.common.types import SystemMessage, AIMessage, UserMessage
# construct history of your own
history = [
    SystemMessage(content="xxx", other_value="xxxx"),
    UserMessage(content="xxx", other="xxxx"),
]
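
The constructed history can be passed to chat, and the history returned in the response can be carried into the next turn (ChatGLM3 only, as noted above):

response = client.chat("Hello", model="ChatGLM3", history=history)
print(response["generated_text"])

# reuse the returned history for the next turn
history = response["history"]
response = client.chat("What can you do?", model="ChatGLM3", history=history)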

VLMClient

Initialize a VLM client

from modelhub import VLMClient
client = VLMClient(...)
client.chat(prompt=..., image_path=..., parameters=...)

Chat with a model

VLMClient.chat adds an image_path parameter on top of ModelhubClient.chat; all other parameters are the same.

client.chat("Hello?", image_path="xxx", model="m3e")

Examples

Use ChatGLM3 for tool calling

import json

from modelhub import ModelhubClient
from modelhub.common.types import SystemMessage

client = ModelhubClient(
    host="https://xxxxx/api/",
    user_name="xxxxx",
    user_password="xxxxx",
)
tools = [
    {
        "name": "track",
        "description": "追踪指定股票的实时价格",  # track the real-time price of a given stock
        "parameters": {
            "type": "object",
            "properties": {"symbol": {"description": "需要追踪的股票代码"}},  # the stock symbol to track
            "required": ["symbol"],
        },
    },
    {
        "name": "text-to-speech",
        "description": "将文本转换为语音",  # convert text to speech
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"description": "需要转换成语音的文本"},  # the text to convert
                "voice": {"description": "要使用的语音类型(男声、女声等)"},  # the voice to use (male, female, ...)
                "speed": {"description": "语音的速度(快、中等、慢等)"},  # the speech speed (fast, medium, slow, ...)
            },
            "required": ["text"],
        },
    },
]

# construct system history
history = [
    SystemMessage(
        content="Answer the following questions as best as you can. You have access to the following tools:",
        tools=tools,
    )
]
query = "帮我查询股票10111的价格"  # "Look up the price of stock 10111 for me"

# call ChatGLM3
response = client.chat(query, model="ChatGLM3", history=history)
history = response["history"]
print(response["generated_text"])
Output:
{"name": "track", "parameters": {"symbol": "10111"}}
# generate a fake result for track function call

result = {"price": 1232}

res = client.chat(
    json.dumps(result),
    parameters=dict(role="observation"), # Tell ChatGLM3 this is a function call result
    model="ChatGLM3",
    history=history,
)
print(res["generated_text"])
Output:
根据API调用结果,我得知当前股票的价格为1232。请问您需要我为您做什么?
(Based on the API call result, the current stock price is 1232. Is there anything else I can do for you?)
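
In a real application, the tool call emitted in the first response would be parsed and executed before sending the observation back. A sketch, assuming the model replies with a single JSON object in the format shown above:

def track(symbol):
    # hypothetical implementation: look up the real price of `symbol` somewhere
    return {"price": 1232}

tool_registry = {"track": track}

# parse the tool call from the model output and dispatch it
call = json.loads(response["generated_text"])
result = tool_registry[call["name"]](**call["parameters"])
# `result` is then sent back with parameters=dict(role="observation"), as shown above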

Contact
