Python API client for Bytez service
API Documentation
Introduction
Welcome to the Bytez API documentation! This API provides access to various machine learning models for serverless operation. Below, you will find examples demonstrating how to interact with the API using our Python client library.
Getting Your Key
To use this API, you need an API key. Obtain your key by joining the Bytez Discord. If you prefer not to use Discord, email us at team@bytez.com.
Boot Times and Billing
Cold Boot Times
Expect the following boot times for models:
- Smallest model: ~12 minutes.
- Largest model: ~15 minutes.
We are working on reducing these boot times to under 5 minutes.
Billing
Billing starts with a minimum charge for the first 60 seconds of use; usage beyond that is rounded to the nearest minute. Charges are $0.0000166667 per GB-second on GPUs. By default, a model instance expires after 30 minutes.
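To make the billing arithmetic concrete, here is a small sketch that estimates the cost of a run from the figures above. This is only an illustration: the exact rounding Bytez applies, and the RAM figure for any given model, are assumptions here.

```python
# Illustrative cost estimate based on the published rate:
# $0.0000166667 per GB-second, 60-second minimum, rounded to the nearest minute.
RATE_PER_GB_SECOND = 0.0000166667

def estimate_cost(ram_gb, seconds):
    # Bill at least 60 seconds, then round to the nearest whole minute.
    billed_seconds = max(60, round(seconds / 60) * 60)
    return ram_gb * billed_seconds * RATE_PER_GB_SECOND

# e.g. a hypothetical 16 GB model running for 10 minutes:
print(f"${estimate_cost(16, 600):.2f}")  # ≈ $0.16
```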
Python Client Library Usage Examples
Authentication
Always include your API key when initializing the client:
from bytez import Bytez
client = Bytez('YOUR_API_KEY')
List Available Models
Lists the currently available models and provides basic information about each one, such as the RAM required.
models = client.list_models()
print(models)
List Serverless Instances
List your serverless instances.
instances = client.list_instances()
print(instances)
Make a Model Serverless
Make a HuggingFace model serverless and available on this API. Running this command queues a job; you'll receive an email when the model is ready.
@param modelId The HuggingFace modelId, for example openai-community/gpt2
model_id = 'openai-community/gpt2'
job_status = client.process(model_id)
print(job_status)
Get a Model
Get a model so you can check its status, load it, run it, or shut it down.
@param modelId The HuggingFace modelId, for example openai-community/gpt2
model = client.model('openai-community/gpt2')
Start the model
Convenience method that runs model.start() and then waits for the model to be ready.
@param options Serverless configuration
results = model.load({'concurrency': 1, 'timeout': 300})
print(results)
# Concurrency
# Number of serverless instances.
#
# For example, if you set to `3`, then you can do 3 parallel inferences.
#
# If you set to `1`, then you can do 1 inference at a time.
#
# Default: `1`
# Timeout
# Seconds to wait before serverless instance auto-shuts down.
#
# By default, if an instance doesn't receive a request after `300` seconds, then it shuts down.
#
# Receiving a request resets this timer.
#
# Default: `300`
Check Model Status
Check the status of the model to see whether it's deploying, running, or stopped.
status = model.status()
print(status)
Run a Model
Run inference
output = model.run("Once upon a time there was a")
print(output)
Run a Model with HuggingFace params
Run inference with HuggingFace parameters.
output = model.run("Once upon a time there was a", model_params={"max_new_tokens":1,"min_new_tokens":1})
print(output)
Stream the response
Streaming text
output = model.run("Once upon a time there was a", stream=True)
for chunk in output:
print(chunk)
Shutdown a Model
Serverless models shut down automatically, but you can stop one early with this method.
model.stop()
Feedback
We value your feedback to improve our documentation and services. If you have any suggestions, please join our Discord or contact us via email.