Monsterapi v2
A Python client for interacting with Monster API v2.
Installation
pip install monsterapi
Note: For detailed documentation, please visit here
Supports the following MonsterAPI services:

- Pre-hosted AI models: state-of-the-art models such as Whisper, SDXL, Bark, Pix2Pix, txt2img, etc. are pre-hosted and can be accessed using the client.
  a. How to use the client? here
  b. Which models are supported? here
- QuickServe API: a new service from MonsterAPI to deploy popular LLM models into MonsterAPI compute infrastructure with a single request.
  a. How to use the client to launch and manage a QuickServe deployment? here

Additional information: here
Code Documentation:
Client module code documentation can be found here
Basic Usage to Access Hosted AI Models

Import the module:

from monsterapi import client

Set the MONSTER_API_KEY environment variable to your API key:

import os
os.environ["MONSTER_API_KEY"] = "<your_api_key>"

client = client()  # Initialize client

or pass the api_key parameter to the client constructor:

client = client("<your_api_key>")  # pass api_key as a parameter
Use the generate method:

result = client.generate(model='falcon-7b-instruct', data={
    "prompt": "Your prompt here",
    # ... other parameters
})
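The same generate call works for the other pre-hosted models. Below is a hedged sketch of invoking an image model; the model identifier 'sdxl-base' and the 'samples' parameter are assumptions for illustration, not confirmed names, so check the supported-models list linked above for the exact identifiers.

# Sketch: calling a pre-hosted image model through the same interface.
# "sdxl-base" and "samples" are assumed names; consult the supported-models
# documentation for the real identifiers and parameters.
image_result = client.generate(model='sdxl-base', data={
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "samples": 1,
})
print(image_result)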
Quick Serve LLM
Launch a llama2-7b model using the QuickServe API.

Prepare and send the payload to launch an LLM deployment. Choose per_gpu_vram and gpu_count based on your model size and batch size; please see here for a detailed matrix of supported models and infrastructure.
launch_payload = {
    "basemodel_path": "meta-llama/Llama-2-7b-chat",
    "loramodel_path": "",
    "prompt_template": "{prompt}{completion}",
    "api_auth_token": "b6a97d3b-35d0-4720-a44c-59ee33dbc25b",
    "per_gpu_vram": 24,
    "gpu_count": 1
}
# Launch a deployment
ret = client.deploy("llm", launch_payload)
deployment_id = ret.get("deployment_id")
print(ret)
# Get deployment status
status_ret = client.get_deployment_status(deployment_id)
print(status_ret)
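A deployment takes a while to become ready, so it is convenient to poll its status before sending traffic. The sketch below assumes the status payload carries a "status" field and that "live" marks a ready deployment; both are assumptions here, so inspect the actual response for the real field names and states.

import time

# Poll until the deployment reports as ready. The "status" key and the
# "live" value are assumed for illustration; check status_ret above for
# the actual fields returned by the API.
while client.get_deployment_status(deployment_id).get("status") != "live":
    time.sleep(30)  # back off between polls to avoid hammering the API
print("Deployment is live")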
# Get deployment logs
logs_ret = client.get_deployment_logs(deployment_id)
print(logs_ret)
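Once the deployment is live, prompts can be sent to it over HTTP. The sketch below is hypothetical: the "URL" field in the status payload, the /generate route, and the request schema are assumptions rather than confirmed API details; the api_auth_token from the launch payload is used as the bearer token.

import requests

# Hypothetical request to a live deployment. The "URL" field, the
# /generate route, and the JSON body are assumed for illustration only.
status_ret = client.get_deployment_status(deployment_id)
base_url = status_ret.get("URL")  # assumed field name
resp = requests.post(
    f"{base_url}/generate",
    headers={"Authorization": "Bearer b6a97d3b-35d0-4720-a44c-59ee33dbc25b"},
    json={"prompt": "Summarize the plot of Hamlet in two sentences."},
)
print(resp.json())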
# Terminate Deployment
terminate_return = client.terminate_deployment(deployment_id)
print(terminate_return)
Run tests
Install test dependencies
pip install monsterapi[tests]
Run functional tests (these call the real API and require a valid key):

export MONSTER_API_KEY=<your_api_key>
python3 -m pytest tests/  # Run all tests, including functional tests that use an actual API key
Run unit tests
export MONSTER_API_KEY="dummy"
python3 -m pytest tests/ -m "not slow" # Run only unit tests
PyPI package push instructions
pip install --upgrade setuptools wheel
python setup.py sdist bdist_wheel
pip install twine
twine upload dist/*
About us
Check us out at monsterapi.ai
Check out our new no-code fine-tuning service here
Check out our Monster-SD Stable Diffusion v1.5 vs XL comparison space here
Check out our Monster API LLM comparison space here