
Client for Humanloop API

Project description

humanloop@0.5.0a2

Requirements

Python >=3.7

Installing

pip install humanloop==0.5.0a2

Getting Started

from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    ai21_api_key="YOUR_AI21_API_KEY",
    mock_api_key="YOUR_MOCK_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

try:
    # Chat
    chat_response = humanloop.chat(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
        stream=False,
    )
    pprint(chat_response.body)
    pprint(chat_response.body["project_id"])
    pprint(chat_response.body["data"][0])
    pprint(chat_response.body["provider_responses"])
    pprint(chat_response.headers)
    pprint(chat_response.status)
    pprint(chat_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .chat: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Complete
    complete_response = humanloop.complete(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
        stream=False,
    )
    pprint(complete_response.body)
    pprint(complete_response.body["project_id"])
    pprint(complete_response.body["data"][0])
    pprint(complete_response.body["provider_responses"])
    pprint(complete_response.headers)
    pprint(complete_response.status)
    pprint(complete_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .complete: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Feedback
    feedback_response = humanloop.feedback(
        type="rating",
        value="good",
        data_id="data_[...]",
        user="user@example.com",
    )
    pprint(feedback_response.body)
    pprint(feedback_response.headers)
    pprint(feedback_response.status)
    pprint(feedback_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .feedback: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Log
    log_response = humanloop.log(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        output="Llamas can be friendly and curious if they are trained to be around people, but if they are treated too much like pets when they are young, they can become difficult to handle when they grow up. This means they might spit, kick, and wrestle with their necks.",
        source="sdk",
        config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            "type": "model",
        },
    )
    pprint(log_response.body)
    pprint(log_response.headers)
    pprint(log_response.status)
    pprint(log_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .log: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)
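In the examples above, the `{{persona}}` and `{{text}}` placeholders in `chat_template` and `prompt_template` are filled from the `inputs` dictionary. A minimal sketch of that interpolation, for intuition only — the real substitution is performed server-side by Humanloop, and `render_template` is an illustrative helper, not part of the SDK:

```python
import re


def render_template(template: str, inputs: dict) -> str:
    """Replace {{variable}} placeholders with values from `inputs`.

    Illustrative only: Humanloop performs this substitution in the API,
    not in the client.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(inputs[m.group(1)]),
        template,
    )


prompt = render_template(
    "You are a helpful assistant who replies in the style of {{persona}}.",
    {"persona": "the pirate Blackbeard"},
)
# → "You are a helpful assistant who replies in the style of the pirate Blackbeard."
```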

Async

Async support is available by prepending `a` to any method name (for example, `humanloop.acomplete` instead of `humanloop.complete`).

import asyncio
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    ai21_api_key="YOUR_AI21_API_KEY",
    mock_api_key="YOUR_MOCK_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    try:
        complete_response = await humanloop.acomplete(
            project="sdk-example",
            inputs={
                "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
            },
            model_config={
                "model": "gpt-3.5-turbo",
                "max_tokens": -1,
                "temperature": 0.7,
                "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            },
            stream=False,
        )
        pprint(complete_response.body)
        pprint(complete_response.body["project_id"])
        pprint(complete_response.body["data"][0])
        pprint(complete_response.body["provider_responses"])
        pprint(complete_response.headers)
        pprint(complete_response.status)
        pprint(complete_response.round_trip_time)
    except ApiException as e:
        print("Exception when calling .complete: %s\n" % e)
        pprint(e.body)
        if e.status == 422:
            pprint(e.body["detail"])
        pprint(e.headers)
        pprint(e.status)
        pprint(e.reason)
        pprint(e.round_trip_time)


asyncio.run(main())
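Because the `a`-prefixed methods are coroutines, several calls can be fanned out concurrently with `asyncio.gather`. A hedged sketch of a generic bounded fan-out helper — `fan_out` and the concurrency limit are illustrative choices, not part of the SDK:

```python
import asyncio


async def fan_out(fn, items, limit=5):
    """Run `fn(item)` for each item concurrently, at most `limit` at a time."""
    sem = asyncio.Semaphore(limit)

    async def run_one(item):
        async with sem:
            return await fn(item)

    # gather preserves input order in its results
    return await asyncio.gather(*(run_one(i) for i in items))


# Usage against the SDK might look like (requires valid API keys):
#
#   async def summarize(text):
#       return await humanloop.acomplete(
#           project="sdk-example",
#           inputs={"text": text},
#           model_config={...},
#           stream=False,
#       )
#
#   responses = asyncio.run(fan_out(summarize, texts))
```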

Streaming

Streaming support is available by appending `_stream` to a chat or complete method (for example, `chat_stream`).

import asyncio
from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    ai21_api_key="YOUR_AI21_API_KEY",
    mock_api_key="YOUR_MOCK_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    response = await humanloop.chat_stream(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
    )
    async for token in response.content:
        print(token)


asyncio.run(main())
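The stream yields tokens incrementally; to reassemble the full response you can accumulate them as they arrive. A small helper that works with any async iterator of string chunks — illustrative only, and the exact shape of each streamed item from `response.content` may differ:

```python
import asyncio


async def collect(stream) -> str:
    """Accumulate string chunks from an async iterator into one string."""
    chunks = []
    async for token in stream:
        chunks.append(token)
    return "".join(chunks)


# e.g. inside main(): full_text = await collect(response.content)
```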

Documentation for API Endpoints

All URIs are relative to https://api.humanloop.com/v4

Tag Method HTTP request Description
Chats create POST /chat Get a chat response by providing details of the model configuration in the request.
Chats create_deployed POST /chat-deployed Get a chat response using the project's active deployment. The active deployment can be a specific model configuration or an experiment.
Chats create_experiment POST /chat-experiment Get a chat response for a specific experiment.
Chats create_model_config POST /chat-model-config Get a chat response for a specific model configuration.
Completions create POST /completion Create a completion by providing details of the model configuration in the request.
Completions create_deployed POST /completion-deployed Create a completion using the project's active deployment. The active deployment can be a specific model configuration or an experiment.
Completions create_experiment POST /completion-experiment Create a completion for a specific experiment.
Completions create_model_config POST /completion-model-config Create a completion for a specific model configuration.
Evaluations create POST /projects/{project_id}/evaluations Create an evaluation.
Evaluations get GET /evaluations/{id} Get evaluation by ID.
Evaluations get_all_for_project GET /projects/{project_id}/evaluations Get all the evaluations associated with your project.
Evaluations get_testcases GET /evaluations/{id}/testcases Get testcases by evaluation ID.
Evaluators create POST /evaluation-functions Create an evaluator within your organization.
Evaluators delete DELETE /evaluation-functions/{id} Delete an evaluator within your organization.
Evaluators get_all GET /evaluation-functions Get all evaluators within your organization.
Evaluators update PATCH /evaluation-functions/{id} Update an evaluator within your organization.
Experiments create POST /projects/{project_id}/experiments Create an experiment for your project. You can optionally specify IDs of your project's model configs to include in the experiment, along with a set of labels to consider as positive feedback and whether the experiment should be set as active.
Experiments delete DELETE /experiments/{experiment_id} Delete the experiment with the specified ID.
Experiments get_all GET /projects/{project_id}/experiments Get an array of experiments associated with your project.
Experiments sample GET /experiments/{experiment_id}/model-config Samples a model config from the experiment's active model configs.
Experiments update PATCH /experiments/{experiment_id} Update your experiment, including registering and de-registering model configs.
Feedback feedback POST /feedback Submit an array of feedback for existing `data_ids`.
Logs log POST /logs Log a datapoint or array of datapoints to your Humanloop project.
Logs update PATCH /logs/{id} Update a logged datapoint in your Humanloop project.
Logs update_by_ref PATCH /logs Update a logged datapoint by its reference ID. The `reference_id` query parameter must be provided, and refers to the `reference_id` of a previously-logged datapoint.
Model Configs get GET /model-configs/{id} Get a specific model config by ID.
Model Configs register POST /model-configs Register a model config to a project and optionally add it to an experiment. If the project provided does not exist, a new project is created automatically. If an experiment name is provided, the experiment must already exist; otherwise, an error is raised. If the model config is the first to be associated with the project, it is set as the active model config.
Projects create POST /projects Create a new project.
Projects deactivate_config DELETE /projects/{id}/active-config Remove the project's active config, if set. This has no effect if the project does not have an active model config set.
Projects deactivate_experiment DELETE /projects/{id}/active-experiment Remove the project's active experiment, if set. This has no effect if the project does not have an active experiment set.
Projects delete_deployed_config DELETE /projects/{project_id}/deployed-config/{environment_id} Remove the model config deployed to an environment. This has no effect if the project does not have an active model config set.
Projects deploy_config PATCH /projects/{project_id}/deploy-config Deploy a model config to an environment. If the environment already has a model config deployed, it will be replaced.
Projects export POST /projects/{id}/export Export all logged datapoints associated with your project. Results are paginated and sorted by `created_at` in descending order.
Projects get GET /projects/{id} Get a specific project.
Projects get_active_config GET /projects/{id}/active-config Retrieves a config to use to execute your model. A config will be selected based on the project's active config/experiment settings.
Projects get_all GET /projects Get a paginated list of projects.
Projects get_configs GET /projects/{id}/configs Get an array of configs associated with your project.
Projects get_deployed_configs GET /projects/{id}/deployed-configs Get an array of environments with the deployed configs associated with your project.
Projects update PATCH /projects/{id} Update a specific project. Set the project's active model config/experiment by passing either `active_experiment_id` or `active_model_config_id`. These will be set for the Default environment unless a list of environments is also passed, specifically detailing which environments to assign the active config or experiment to. Set the feedback labels to be treated as positive user feedback used in calculating top-level project metrics by passing a list of labels in `positive_labels`.
Projects update_feedback_types PATCH /projects/{id}/feedback-types Update feedback types. Allows creation of the default feedback types and setting status of feedback types/categorical values. This behaves like an upsert; any feedback categorical values that do not already exist in the project will be created.
Sessions create POST /sessions Create a new session. Returns a session ID that can be used to log datapoints to the session.
Sessions get GET /sessions/{id} Get a session by ID.
Sessions get_all GET /sessions Get a page of sessions.
Testcases delete DELETE /testcases Delete a list of testcases by their IDs.
Testcases get GET /testcases/{id} Get a testcase by ID.
Testcases update PATCH /testcases/{id} Edit the input, messages and criteria fields of a testcase. The fields passed in the request are the ones edited. Passing `null` as a value for a field will delete that field. In order to signify not changing a field, it should be omitted from the request body.
Testsets create POST /projects/{project_id}/testsets Create a new testset for a project.
Testsets create_testcase POST /testsets/{testset_id}/testcases Create a new testcase for a testset.
Testsets delete DELETE /testsets/{id} Delete a testset by ID.
Testsets get GET /testsets/{id} Get a single testset by ID.
Testsets get_all_for_project GET /projects/{project_id}/testsets Get all testsets for a project.
Testsets get_testcases GET /testsets/{testset_id}/testcases Get testcases for a testset.
Testsets update PATCH /testsets/{id} Update a testset by ID.
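The `{…}` segments in the HTTP requests above are path parameters, resolved against the base URL. Turning a row of the table into a concrete URL can be sketched as follows — the `endpoint` helper and the `pr_abc123` ID are illustrative placeholders; the generated SDK builds these URLs for you:

```python
BASE_URL = "https://api.humanloop.com/v4"


def endpoint(path_template: str, **path_params) -> str:
    """Fill {param} placeholders in an endpoint path and join with the base URL."""
    return BASE_URL + path_template.format(**path_params)


url = endpoint("/projects/{project_id}/experiments", project_id="pr_abc123")
# → "https://api.humanloop.com/v4/projects/pr_abc123/experiments"
```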

Author

This Python package is automatically generated by Konfig.


Download files

Download the file for your platform.

Source Distribution

humanloop-0.5.0a2.tar.gz (197.0 kB)


Built Distribution

humanloop-0.5.0a2-py3-none-any.whl (918.5 kB)


File details

Details for the file humanloop-0.5.0a2.tar.gz.

File metadata

  • Download URL: humanloop-0.5.0a2.tar.gz
  • Size: 197.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.16

File hashes

Hashes for humanloop-0.5.0a2.tar.gz
Algorithm Hash digest
SHA256 3def4ee88c0dfb2a0107b2930d7ee64041464b8c83ce16a254a6b7c8fe810a8f
MD5 3d44f4db9c9c44cfa2274ab1aa4c873f
BLAKE2b-256 c55ba22bceaa1cbd0ef3be731feaaadde14452932d3237f07881e1a2d54ebf45


File details

Details for the file humanloop-0.5.0a2-py3-none-any.whl.

File metadata

  • Download URL: humanloop-0.5.0a2-py3-none-any.whl
  • Size: 918.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.7.16

File hashes

Hashes for humanloop-0.5.0a2-py3-none-any.whl
Algorithm Hash digest
SHA256 f1aabaa331e450df32e8c181181a0941f669d9b37ee64875b5010ac49d4fde22
MD5 28975097506a6b8b2d9f0499d71df6b5
BLAKE2b-256 4e48dba69043804ca727e2159f60959be90693a0ea7441b723b037c1fec7a2a5

