# hwhkit

Packaging tools for personal use.
## Main features

- Connection
- MQTT
- LLM
## Connection

### MQTT
```python
import asyncio

from hwhkit import MQTTAsyncClient, mqtt_subscribe

# Configure the MQTT client
client = MQTTAsyncClient(broker="broker.hivemq.com", port=1883, client_id="my_client")
client.start()

@mqtt_subscribe("topic/test1")
async def handle_message_1(message: str):
    print(f"Received message from topic 1: {message}")

@mqtt_subscribe("topic/test2")
async def handle_message_2(message: str):
    print(f"Received message from topic 2: {message}")

async def send_messages():
    while True:
        await asyncio.sleep(2)
        client.publish("topic/test1", "Hello from topic 1!")
        client.publish("topic/test2", "Hello from topic 2!")

async def main():
    await asyncio.gather(
        send_messages(),
        asyncio.sleep(3600),
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### LLM

Three steps to use the models.

**Step 1: write `llm_config.yaml`**

Points to note:

- `A_custom_model_name` is the identifier passed to `models.get_model_instance()`
- `A_custom_model_name.name` must be a model name supported by the corresponding provider
```yaml
models:
  openai:
    A_custom_model_name:
      name: "gpt-4o"
      short_name: "OIG4"
      company: "openai"
      max_input_token: 8100
      max_output_token: 2048
      top_p: 0.5
      top_k: 1
      temperature: 0.5
      input_token_fee_pm: 30.0
      output_token_fee_pm: 60.0
      train_token_fee_pm: 0.0
      keys:
        - name: "openai_key1"
        - name: "openai_key2"
  siliconflow:
    qw-72b-p:
      name: "Qwen/QVQ-72B-Preview"
      short_name: "QW-72B-P"
      company: "siliconflow"
      max_input_token: 8100
      max_output_token: 2048
      top_p: 0.5
      top_k: 1
      temperature: 0.5
      input_token_fee_pm: 30.0
      output_token_fee_pm: 60.0
      train_token_fee_pm: 0.0
      keys:
        - name: "siliconflow_1"
```
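Reading the `*_fee_pm` fields as cost per million tokens (an interpretation based on the field names, not confirmed by the package), a per-call cost estimate can be derived directly from the config values:

```python
# Values taken from the llm_config.yaml example above; the
# "per million tokens" reading of *_fee_pm is an assumption.
input_fee_pm = 30.0   # input_token_fee_pm
output_fee_pm = 60.0  # output_token_fee_pm

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Cost = tokens consumed, scaled to millions, times the per-million fee.
    return (input_tokens / 1_000_000) * input_fee_pm \
         + (output_tokens / 1_000_000) * output_fee_pm

# A call at the configured limits (8100 input, 2048 output tokens):
print(round(estimate_cost(8100, 2048), 4))
```

This is why `max_input_token` and `max_output_token` also bound the worst-case cost of a single request under this config.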
**Step 2: write `llm_keys.yaml`**

- Each key name listed under a model in `llm_config.yaml` must have a matching entry in `llm_keys.yaml`
```yaml
keys:
  openai_key1: "xx"
  openai_key2: "xx"
  anthropic_key1: "your_anthropic_api_key_1"
  anthropic_key2: "your_anthropic_api_key_2"
```
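The one-to-one correspondence between the two files can be sketched as a simple lookup. This is an illustrative model of the loader's job, not hwhkit's actual code; the dict literals stand in for the parsed YAML, and `resolve_keys` is a hypothetical helper name.

```python
# Parsed stand-ins for the two YAML files above.
model_cfg = {"keys": [{"name": "openai_key1"}, {"name": "openai_key2"}]}
all_keys = {
    "openai_key1": "xx",
    "openai_key2": "xx",
    "anthropic_key1": "your_anthropic_api_key_1",
}

def resolve_keys(model_cfg: dict, all_keys: dict) -> list:
    # Every key name the model references must exist in llm_keys.yaml;
    # failing loudly here catches a mismatch before any API call is made.
    missing = [k["name"] for k in model_cfg["keys"] if k["name"] not in all_keys]
    if missing:
        raise KeyError(f"keys missing from llm_keys.yaml: {missing}")
    return [all_keys[k["name"]] for k in model_cfg["keys"]]

print(resolve_keys(model_cfg, all_keys))  # ['xx', 'xx']
```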
**Step 3: load the models**
```python
import asyncio

from hwhkit.llm.config import load_models_from_yaml

async def main():
    models = load_models_from_yaml(config_file="llm_config.yaml", keys_file="llm_keys.yaml")
    print(models.list_models())
    # Pass the custom model name defined in llm_config.yaml
    resp = await models.get_model_instance("gpt-4o").chat("who r u?")
    print(resp)

if __name__ == '__main__':
    asyncio.run(main())
```
## Download files

Source distribution: hwhkit-1.0.3.tar.gz (8.8 kB)

Built distribution: hwhkit-1.0.3-py3-none-any.whl (17.7 kB)
## File details: hwhkit-1.0.3.tar.gz

File metadata:

- Download URL: hwhkit-1.0.3.tar.gz
- Upload date:
- Size: 8.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.10.10

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 68e1c8a218519f8035b6209e525c375e41248aad6b5728ff38db51aea0676825 |
| MD5 | b75d9b378fef051144c888bc825a7fe9 |
| BLAKE2b-256 | e0b4fd714f891d51c7c4931657c56c3aa7fdedbcf1a80da444ac8daa39085b2f |
## File details: hwhkit-1.0.3-py3-none-any.whl

File metadata:

- Download URL: hwhkit-1.0.3-py3-none-any.whl
- Upload date:
- Size: 17.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.10.10

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 45e6790065e7b65d5cfa8c1eda1b5580dc19e0c32e0382db15152a65329a6707 |
| MD5 | cd9c1d0995ab6174aa95781bc14dc161 |
| BLAKE2b-256 | 15dec80b8f471d159c0dba815c311c67453d8084e1cad6f5546dbaaf5c6cbda8 |