A collection of tools packaged for personal use.
Project description
hwhkit
Main features
- Connection
  - MQTT
- LLM
Connection
MQTT
import asyncio

from hwhkit import MQTTAsyncClient, mqtt_subscribe

# Configure and start the MQTT client
client = MQTTAsyncClient(broker="broker.hivemq.com", port=1883, client_id="my_client")
client.start()

@mqtt_subscribe("topic/test1")
async def handle_message_1(message: str):
    print(f"Received message from topic 1: {message}")

@mqtt_subscribe("topic/test2")
async def handle_message_2(message: str):
    print(f"Received message from topic 2: {message}")

async def send_messages():
    while True:
        await asyncio.sleep(2)
        client.publish("topic/test1", "Hello from topic 1!")
        client.publish("topic/test2", "Hello from topic 2!")

async def main():
    await asyncio.gather(
        send_messages(),
        asyncio.sleep(3600),
    )

if __name__ == '__main__':
    asyncio.run(main())
LLM
Three steps to use the models.
Step 1: llm_config.yaml
Notes:
- A_custom_model_name is the key you pass to models.get_model_instance()
- A_custom_model_name.name must be the exact model name supported by that provider (company)
models:
  openai:
    A_custom_model_name:
      name: "gpt-4o"
      short_name: "OIG4"
      company: "openai"
      max_input_token: 8100
      max_output_token: 2048
      top_p: 0.5
      top_k: 1
      temperature: 0.5
      input_token_fee_pm: 30.0
      output_token_fee_pm: 60.0
      train_token_fee_pm: 0.0
      keys:
        - name: "openai_key1"
        - name: "openai_key2"
  siliconflow:
    qw-72b-p:
      name: "Qwen/QVQ-72B-Preview"
      short_name: "QW-72B-P"
      company: "siliconflow"
      max_input_token: 8100
      max_output_token: 2048
      top_p: 0.5
      top_k: 1
      temperature: 0.5
      input_token_fee_pm: 30.0
      output_token_fee_pm: 60.0
      train_token_fee_pm: 0.0
      keys:
        - name: "siliconflow_1"
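Assuming the `*_fee_pm` fields are prices per million tokens (an assumption; confirm against your provider's pricing), the cost of a single call can be estimated from the config values:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_fee_pm=30.0, output_fee_pm=60.0):
    """Estimate the cost of one request, assuming fees are per million tokens.

    The default fee values mirror the example llm_config.yaml above.
    """
    return (input_tokens * input_fee_pm + output_tokens * output_fee_pm) / 1_000_000

cost = estimate_cost(1000, 500)
print(cost)  # 0.06
```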
Step 2: llm_keys.yaml
- Each key name listed under keys in llm_config.yaml must have a matching entry in llm_keys.yaml
keys:
  openai_key1: "xx"
  openai_key2: "xx"
  anthropic_key1: "your_anthropic_api_key_1"
  anthropic_key2: "your_anthropic_api_key_2"
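The one-to-one correspondence between the two files amounts to a dictionary lookup: each `name` under `keys` in llm_config.yaml is resolved against the mapping in llm_keys.yaml. A minimal sketch, with the data mirroring the YAML examples above (not hwhkit's actual loader):

```python
# Key references as they appear under a model in llm_config.yaml
model_keys = [{"name": "openai_key1"}, {"name": "openai_key2"}]

# Secrets as they appear in llm_keys.yaml
secrets = {"openai_key1": "xx", "openai_key2": "xx"}

# Resolve each referenced key name to its secret value;
# a missing entry raises KeyError, surfacing a config/keys mismatch early.
resolved = [secrets[k["name"]] for k in model_keys]
print(resolved)  # ['xx', 'xx']
```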
Step 3: load the models
import asyncio

from hwhkit.llm.config import load_models_from_yaml

async def main():
    models = load_models_from_yaml(config_file="llm_config.yaml", keys_file="llm_keys.yaml")
    print(models.list_models())
    # Look up the model by its custom name from llm_config.yaml
    resp = await models.get_model_instance("A_custom_model_name").chat("who r u?")
    print(resp)

if __name__ == '__main__':
    asyncio.run(main())
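Since the config allows several API keys per model, a natural strategy is to rotate requests across them. The sketch below shows a simple round-robin rotation with `itertools.cycle`; this is a hypothetical illustration of how multiple keys could be used, not necessarily how hwhkit schedules them:

```python
import itertools

# Key names taken from the example llm_config.yaml above
keys = ["openai_key1", "openai_key2"]

# cycle() yields the keys in order, repeating forever
rotation = itertools.cycle(keys)

# Pick a key for each of four consecutive requests
picked = [next(rotation) for _ in range(4)]
print(picked)  # ['openai_key1', 'openai_key2', 'openai_key1', 'openai_key2']
```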