Regolo.ai Python Client

A simple Python client for interacting with Regolo.ai's LLM-based API.

Installation

Ensure you have the regolo module installed. If not, install it using:

  pip install regolo

Basic Usage

1. Import the regolo module

import regolo

2. Set Up Default API Key and Model

To avoid manually passing the API key and model in every request, you can set them globally:

regolo.default_key = "<EXAMPLE_KEY>"
regolo.default_model = "Llama-3.3-70B-Instruct"

This ensures that all RegoloClient instances and static functions will use the specified API key and model.

You can still override these per call by passing the model and API key directly to the relevant methods.

3. Perform a basic request

Completion:

print(regolo.static_completions(prompt="Tell me something about Rome."))

Chat completion:

print(regolo.static_chat_completions(messages=[{"role": "user", "content": "Tell me something about Rome."}]))

Loading envs

If you want to configure this client through environment variables, the following variables are supported:

Default values

  • "API_KEY"

You can use this environment variable to set the default_key. Load it after importing regolo with regolo.key_load_from_env_if_exists(); this is equivalent to assigning regolo.default_key directly.

  • "LLM"

You can use this environment variable to set the default_model. Load it after importing regolo with regolo.default_model_load_from_env_if_exists(); this is equivalent to assigning regolo.default_model directly.

  • "IMAGE_MODEL"

You can use this environment variable to set the default_image_model. Load it after importing regolo with regolo.default_image_load_from_env_if_exists(); this is equivalent to assigning regolo.default_image_model directly.

  • "EMBEDDER_MODEL"

You can use this environment variable to set the default_embedder_model. Load it after importing regolo with regolo.default_embedder_load_from_env_if_exists(); this is equivalent to assigning regolo.default_embedder_model directly.

[!TIP] All "default" environment variables can be loaded together through regolo.try_loading_from_env(), which simply runs all the load_from_env methods at once.

Endpoints

  • "REGOLO_URL"

You can use this env variable to set the default base_url used by regolo client and its static methods.

  • "COMPLETIONS_URL_PATH"

You can use this env variable to set the completions endpoint used by regolo client and its static methods.

  • "CHAT_COMPLETIONS_URL_PATH"

You can use this env variable to set the chat completions endpoint used by regolo client and its static methods.

  • "IMAGE_GENERATION_URL_PATH"

You can use this env variable to set the image generation endpoint used by regolo client and its static methods.

  • "EMBEDDINGS_URL_PATH"

You can use this env variable to set the embedding generation endpoint used by regolo client and its static methods.

[!TIP] The "endpoints" environment variables can be changed during execution, since the client reads them directly.

However, you are unlikely to want to change them, since they are tied to how Regolo.ai serves its endpoints.
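For illustration only, overriding an endpoint just means setting the variable before the client uses it. The URL and path below are made-up placeholders, not the real defaults:

```python
import os

# Hypothetical values; the real defaults are managed by the client.
os.environ["REGOLO_URL"] = "https://example.invalid"
os.environ["CHAT_COMPLETIONS_URL_PATH"] = "/v1/chat/completions"

# The client reads these variables directly, so later requests pick up the change.
base = os.environ["REGOLO_URL"] + os.environ["CHAT_COMPLETIONS_URL_PATH"]
print(base)  # https://example.invalid/v1/chat/completions
```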


Other usages

Handling streams

With full output:

import regolo
regolo.default_key = "<EXAMPLE_KEY>"
regolo.default_model = "Llama-3.3-70B-Instruct"

# Completions

client = regolo.RegoloClient()
response = client.completions("Tell me about Rome in a concise manner", full_output=True, stream=True)

while True:
    try:
        print(next(response))
    except StopIteration:
        break

# Chat completions

client = regolo.RegoloClient()
response = client.run_chat(user_prompt="Tell me about Rome in a concise manner", full_output=True, stream=True)


while True:
    try:
        print(next(response))
    except StopIteration:
        break

Without full output:

import regolo
regolo.default_key = "<EXAMPLE_KEY>"
regolo.default_model = "Llama-3.3-70B-Instruct"

# Completions

client = regolo.RegoloClient()
response = client.completions("Tell me about Rome in a concise manner", full_output=False, stream=True)

while True:
    try:
        print(next(response), end='', flush=True)
    except StopIteration:
        break

# Chat completions

client = regolo.RegoloClient()
response = client.run_chat(user_prompt="Tell me about Rome in a concise manner", full_output=False, stream=True)

while True:
    try:
        res = next(response)
        if res[0]:
            print(res[0] + ":")
        print(res[1], end="", flush=True)
    except StopIteration:
        break

Handling chat through add_prompt_to_chat()

import regolo

regolo.default_key = "<EXAMPLE_KEY>"
regolo.default_model = "Llama-3.3-70B-Instruct"

client = regolo.RegoloClient()

# Make a request

client.add_prompt_to_chat(role="user", prompt="Tell me about rome!")

print(client.run_chat())

# Continue the conversation

client.add_prompt_to_chat(role="user", prompt="Tell me something more about it!")

print(client.run_chat())

# You can print the whole conversation if needed

print(client.instance.get_conversation())

Note that passing the user_prompt parameter to run_chat() is equivalent to adding a prompt with role="user" through add_prompt_to_chat().

Handling image models

Without client:

from io import BytesIO

import regolo
from PIL import Image

regolo.default_image_model = "FLUX.1-dev"
regolo.default_key = "<EXAMPLE_KEY>"

img_bytes = regolo.static_image_create(prompt="a cat")[0]

image = Image.open(BytesIO(img_bytes))

image.show()

With client:

from io import BytesIO

import regolo
from PIL import Image
client = regolo.RegoloClient(image_model="FLUX.1-dev", api_key="<EXAMPLE_KEY>")

img_bytes = client.create_image(prompt="A cat in Rome")[0]

image = Image.open(BytesIO(img_bytes))

image.show()

Handling embedder models

Without client:

import regolo

regolo.default_key = "<EXAMPLE_KEY>"
regolo.default_embedder_model = "gte-Qwen2"


embeddings = regolo.static_embeddings(input_text=["test", "test1"])

print(embeddings)

With client:

import regolo

client = regolo.RegoloClient(api_key="<EXAMPLE_KEY>", embedder_model="gte-Qwen2")

embeddings = client.embeddings(input_text=["test", "test1"])

print(embeddings)
