
Monsterapi v2

A Python client for interacting with Monster API v2.

Installation

pip install monsterapi
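
To confirm the package is importable after installation, a quick check (this uses the same import shown in the usage section below):

python -c "from monsterapi import client; print(client)"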

Note: For detailed documentation, please visit here

Supports the following MonsterAPI services:

  1. Pre-hosted AI models: state-of-the-art models like Whisper, SDXL, Bark, Pix2Pix, txt2img, etc. are pre-hosted and can be accessed using the client.

    a. How to use the client? here

    b. Which models are supported? here

  2. QuickServe API: a new MonsterAPI service that deploys popular LLM models onto MonsterAPI compute infrastructure with a single request.

    a. How to use the client to launch and manage a QuickServe deployment? here

Additional information: here

Code Documentation:

Client module code documentation can be found here

Basic usage to access hosted AI models

Import the module:

from monsterapi import client

Set the MONSTER_API_KEY environment variable to your API key:

import os

os.environ["MONSTER_API_KEY"] = "<your_api_key>"
client = client()  # Initialize the client

or

pass the api_key parameter to the client constructor:

client = client(api_key="<your_api_key>")  # pass the API key as a parameter

Use the generate method:

result = client.generate(model='falcon-7b-instruct', data={
    "prompt": "Your prompt here",
    # ... other parameters
})
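
Putting the pieces together, a minimal end-to-end sketch; the prompt is illustrative, and it assumes MONSTER_API_KEY is already set in your environment:

from monsterapi import client

client = client()  # picks up MONSTER_API_KEY from the environment

result = client.generate(
    model="falcon-7b-instruct",
    data={"prompt": "Write a one-line summary of what an LLM is."},
)
print(result)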

Quick Serve LLM

Launch a Llama 2 7B model using the QuickServe API

Prepare and send a payload to launch an LLM deployment. Choose per_gpu_vram and gpu_count based on your model size and batch size; see here for a detailed matrix of supported models and infrastructure.

launch_payload = {
    "basemodel_path": "meta-llama/Llama-2-7b-chat",  # base model to serve
    "loramodel_path": "",                            # optional LoRA adapter path; empty for none
    "prompt_template": "{prompt}{completion}",       # how prompt and completion are combined
    "api_auth_token": "b6a97d3b-35d0-4720-a44c-59ee33dbc25b",  # sample token; replace with your own
    "per_gpu_vram": 24,                              # VRAM per GPU (GB)
    "gpu_count": 1                                   # number of GPUs
}

# Launch a deployment
ret = client.deploy("llm", launch_payload) 
deployment_id = ret.get("deployment_id")
print(ret)

# Get deployment status
status_ret = client.get_deployment_status(deployment_id)
print(status_ret)
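
While a launch is in progress, you may want to poll the status until the deployment settles before fetching logs or sending traffic. A minimal polling sketch using only the calls shown here; the exact status strings are an assumption, so inspect the returned payload for the real values:

import time

# NOTE: the status values below are assumptions; check the dict
# returned by get_deployment_status() for the actual strings.
while True:
    status_ret = client.get_deployment_status(deployment_id)
    print(status_ret)
    if status_ret.get("status") in ("live", "failed", "terminated"):
        break
    time.sleep(30)  # deployments can take a few minutes to boot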

# Get deployment logs
logs_ret = client.get_deployment_logs(deployment_id)
print(logs_ret)

# Terminate Deployment
terminate_return = client.terminate_deployment(deployment_id)
print(terminate_return)

Run tests

Install test dependencies

pip install monsterapi[tests]

Run functional tests (these call the live API and require a real key):

export MONSTER_API_KEY=<your_api_key>
python3 -m pytest tests/  # Runs all tests, including functional tests that use the actual API key

Run unit tests

export MONSTER_API_KEY="dummy"
python3 -m pytest tests/ -m "not slow" # Run only unit tests

PyPI package publish instructions

pip install --upgrade setuptools wheel

python setup.py sdist bdist_wheel

pip install twine

twine upload dist/*
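
Equivalently, the sdist and wheel can be produced with the PEP 517 build frontend instead of invoking setup.py directly; a sketch assuming the project's existing packaging metadata:

pip install build twine

python -m build  # writes the sdist and wheel into dist/

twine upload dist/*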

About us

Check us out at monsterapi.ai

Check out our new no-code fine-tuning service here

Check out our Monster-SD Stable Diffusion v1.5 vs. XL comparison space here

Check out our Monster API LLM comparison space here
