
Client for Humanloop API



Humanloop


[!WARNING] This SDK has breaking changes in versions >= 0.6.0. All methods now return Pydantic models.

Before (< 0.6.0)

Previously, you had to use the [] syntax to access response values. This required a little more code for every property access.

chat_response = humanloop.chat(
        # parameters
    )
print(chat_response.body["project_id"])

After (>= 0.6.0)

With Pydantic-based response values, you can use the . syntax to access response fields. This is slightly less verbose and more Pythonic.

chat_response = humanloop.chat(
        # parameters
    )
print(chat_response.project_id)

To reuse existing implementations from < 0.6.0, use the .raw namespace as specified in the Raw HTTP Response section.
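
For example, the pre-0.6.0 chat snippet above keeps working if you switch to the raw namespace, since the raw response body still supports [] access (a minimal sketch based on the Raw HTTP Response section below):

chat_response = humanloop.chats.raw.create(
        # parameters
    )
print(chat_response.body["project_id"])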

Table of Contents

Requirements

Python >=3.7

Installation

pip install humanloop==0.7.0-beta.31

Getting Started

from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

try:
    # Chat
    chat_response = humanloop.chat(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
        stream=False,
    )
    print(chat_response)
except ApiException as e:
    print("Exception when calling .chat: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Complete
    complete_response = humanloop.complete(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
        stream=False,
    )
    print(complete_response)
except ApiException as e:
    print("Exception when calling .complete: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Feedback
    feedback_response = humanloop.feedback(
        type="rating",
        value="good",
        data_id="data_[...]",
        user="user@example.com",
    )
    print(feedback_response)
except ApiException as e:
    print("Exception when calling .feedback: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Log
    log_response = humanloop.log(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        output="Llamas can be friendly and curious if they are trained to be around people, but if they are treated too much like pets when they are young, they can become difficult to handle when they grow up. This means they might spit, kick, and wrestle with their necks.",
        source="sdk",
        config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            "type": "model",
        },
    )
    print(log_response)
except ApiException as e:
    print("Exception when calling .log: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

Async

Async support is available by prepending an a to any method name (for example, complete becomes acomplete).

import asyncio
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    try:
        complete_response = await humanloop.acomplete(
            project="sdk-example",
            inputs={
                "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
            },
            model_config={
                "model": "gpt-3.5-turbo",
                "max_tokens": -1,
                "temperature": 0.7,
                "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            },
            stream=False,
        )
        print(complete_response)
    except ApiException as e:
        print("Exception when calling .complete: %s\n" % e)
        pprint(e.body)
        if e.status == 422:
            pprint(e.body["detail"])
        pprint(e.headers)
        pprint(e.status)
        pprint(e.reason)
        pprint(e.round_trip_time)


asyncio.run(main())

Raw HTTP Response

To access raw HTTP response values, use the .raw namespace.

from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    openai_api_key="OPENAI_API_KEY",
    openai_azure_api_key="OPENAI_AZURE_API_KEY",
    openai_azure_endpoint_api_key="OPENAI_AZURE_ENDPOINT_API_KEY",
    anthropic_api_key="ANTHROPIC_API_KEY",
    cohere_api_key="COHERE_API_KEY",
    api_key="YOUR_API_KEY",
)

try:
    # Chat
    create_response = humanloop.chats.raw.create(
        messages=[
            {
                "role": "user",
            }
        ],
        model_config={
            "provider": "openai",
            "model": "model_example",
            "max_tokens": -1,
            "temperature": 1,
            "top_p": 1,
            "presence_penalty": 0,
            "frequency_penalty": 0,
            "endpoint": "complete",
        },
        project="string_example",
        project_id="string_example",
        session_id="string_example",
        session_reference_id="string_example",
        parent_id="string_example",
        parent_reference_id="string_example",
        inputs={},
        source="string_example",
        metadata={},
        save=True,
        source_datapoint_id="string_example",
        provider_api_keys={},
        num_samples=1,
        stream=False,
        user="string_example",
        seed=1,
        return_inputs=True,
        tool_choice="string_example",
        tool_call="string_example",
        response_format={
            "type": "json_object",
        },
    )
    pprint(create_response.body)
    pprint(create_response.body["data"])
    pprint(create_response.body["provider_responses"])
    pprint(create_response.body["project_id"])
    pprint(create_response.body["num_samples"])
    pprint(create_response.body["logprobs"])
    pprint(create_response.body["suffix"])
    pprint(create_response.body["user"])
    pprint(create_response.body["usage"])
    pprint(create_response.body["metadata"])
    pprint(create_response.body["provider_request"])
    pprint(create_response.body["session_id"])
    pprint(create_response.body["tool_choice"])
    pprint(create_response.headers)
    pprint(create_response.status)
    pprint(create_response.round_trip_time)
except ApiException as e:
    print("Exception when calling ChatsApi.create: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

Streaming

Streaming support is available by suffixing a chat or complete method with _stream.

import asyncio
from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    response = await humanloop.chat_stream(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
    )
    async for token in response.content:
        print(token)


asyncio.run(main())
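
Completions can be streamed the same way. A minimal sketch, assuming complete_stream mirrors chat_stream and reusing the model_config shape from Getting Started:

import asyncio
from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
)


async def main():
    # complete_stream is assumed to behave like chat_stream above
    response = await humanloop.complete_stream(
        project="sdk-example",
        inputs={
            "text": "Llamas are curious and usually easy to halter-train.",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
    )
    async for token in response.content:
        print(token)


asyncio.run(main())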

Reference

humanloop.chat

Get a chat response by providing details of the model configuration in the request.

🛠️ Usage

create_response = humanloop.chat(
    messages=[
        {
            "role": "user",
        }
    ],
    model_config={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
    },
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    tool_choice="string_example",
    tool_call="string_example",
    response_format={
        "type": "json_object",
    },
)

⚙️ Parameters

messages: List[ChatMessageWithToolCall]

The messages passed to the provider chat endpoint.

model_config: ModelConfigChatRequest

The model configuration used to create a chat response.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

tool_choice: Union[str, str, ToolChoice]

Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function, as sketched after this parameter list.

tool_call: Union[str, Dict[str, str]]

NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.
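
A hypothetical sketch of forcing a named tool: the tool_choice shape below comes from the description above, while the tools entry inside model_config and the get_weather tool are assumptions for illustration only; consult ModelConfigChatRequest for the exact schema.

create_response = humanloop.chat(
    project="sdk-example",
    messages=[
        {
            "role": "user",
            "content": "What's the weather in London?",
        }
    ],
    model_config={
        "model": "gpt-3.5-turbo",
        "max_tokens": -1,
        "temperature": 0.7,
        # Assumed tool definition; field names are illustrative only
        "tools": [
            {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
    # Force the model to call the hypothetical get_weather tool
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)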

⚙️ Request Body

ChatRequest

🔄 Return

ChatResponse

🌐 Endpoint

/chat post

🔙 Back to Table of Contents


humanloop.chat_deployed

Get a chat response using the project's active deployment.

The active deployment can be a specific model configuration or an experiment.

🛠️ Usage

create_deployed_response = humanloop.chat_deployed(
    messages=[
        {
            "role": "user",
        }
    ],
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    tool_choice="string_example",
    tool_call="string_example",
    response_format={
        "type": "json_object",
    },
    environment="string_example",
)

⚙️ Parameters

messages: List[ChatMessageWithToolCall]

The messages passed to the provider chat endpoint.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

tool_choice: Union[str, str, ToolChoice]

Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

tool_call: Union[str, Dict[str, str]]

NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.

environment: str

The environment name used to create a chat response. If not specified, the default environment will be used.

⚙️ Request Body

ChatDeployedRequest

🔄 Return

ChatResponse

🌐 Endpoint

/chat-deployed post

🔙 Back to Table of Contents


humanloop.chat_experiment

Get a chat response for a specific experiment.

🛠️ Usage

create_experiment_response = humanloop.chat_experiment(
    messages=[
        {
            "role": "user",
        }
    ],
    experiment_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    tool_choice="string_example",
    tool_call="string_example",
    response_format={
        "type": "json_object",
    },
)

⚙️ Parameters

messages: List[ChatMessageWithToolCall]

The messages passed to the provider chat endpoint.

experiment_id: str

If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of chat responses, where each chat response will use a model configuration sampled from the experiment.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

tool_choice: Union[str, str, ToolChoice]

Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

tool_call: Union[str, Dict[str, str]]

NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.

⚙️ Request Body

ChatExperimentRequest

🔄 Return

ChatResponse

🌐 Endpoint

/chat-experiment post

🔙 Back to Table of Contents


humanloop.chat_model_config

Get chat response for a specific model configuration.

🛠️ Usage

create_model_config_response = humanloop.chat_model_config(
    messages=[
        {
            "role": "user",
        }
    ],
    model_config_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    tool_choice="string_example",
    tool_call="string_example",
    response_format={
        "type": "json_object",
    },
)

⚙️ Parameters

messages: List[ChatMessageWithToolCall]

The messages passed to the provider chat endpoint.

model_config_id: str

Identifies the model configuration used to create a chat response.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

tool_choice: Union[str, str, ToolChoice]

Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

tool_call: Union[str, Dict[str, str]]

NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.

⚙️ Request Body

ChatModelConfigRequest

🔄 Return

ChatResponse

🌐 Endpoint

/chat-model-config post

🔙 Back to Table of Contents


humanloop.complete

Create a completion by providing details of the model configuration in the request.

🛠️ Usage

create_response = humanloop.complete(
    model_config={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
        "prompt_template": "{{question}}",
    },
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    logprobs=1,
    suffix="string_example",
)

⚙️ Parameters

model_config: ModelConfigCompletionRequest

The model configuration used to generate.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: int

Include the log probabilities of the top n tokens in the provider_response

suffix: str

The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

⚙️ Request Body

CompletionRequest

🔄 Return

CompletionResponse

🌐 Endpoint

/completion post

🔙 Back to Table of Contents


humanloop.complete_deployed

Create a completion using the project's active deployment.

The active deployment can be a specific model configuration or an experiment.

🛠️ Usage

create_deployed_response = humanloop.complete_deployed(
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    logprobs=1,
    suffix="string_example",
    environment="string_example",
)

⚙️ Parameters

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: int

Include the log probabilities of the top n tokens in the provider_response

suffix: str

The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

environment: str

The environment name used to create a completion. If not specified, the default environment will be used.

⚙️ Request Body

CompletionDeployedRequest

🔄 Return

CompletionResponse

🌐 Endpoint

/completion-deployed post

🔙 Back to Table of Contents


humanloop.complete_experiment

Create a completion for a specific experiment.

🛠️ Usage

create_experiment_response = humanloop.complete_experiment(
    experiment_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    logprobs=1,
    suffix="string_example",
)

⚙️ Parameters

experiment_id: str

If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of chat responses, where each chat response will use a model configuration sampled from the experiment.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: int

Include the log probabilities of the top n tokens in the provider_response

suffix: str

The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

⚙️ Request Body

CompletionExperimentRequest

🔄 Return

CompletionResponse

🌐 Endpoint

/completion-experiment post

🔙 Back to Table of Contents


humanloop.complete_model_configuration

Create a completion for a specific model configuration.

🛠️ Usage

create_model_config_response = humanloop.complete_model_configuration(
    model_config_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    seed=1,
    return_inputs=True,
    logprobs=1,
    suffix="string_example",
)

⚙️ Parameters

model_config_id: str

Identifies the model configuration used to create a completion.

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint to.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: int

The number of generations.

stream: bool

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.

user: str

End-user ID passed through to provider call.

seed: int

Deprecated field: the seed is instead set as part of the request.config object.

return_inputs: bool

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: int

Include the log probabilities of the top n tokens in the provider_response

suffix: str

The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

⚙️ Request Body

CompletionModelConfigRequest

🔄 Return

CompletionResponse

🌐 Endpoint

/completion-model-config post

🔙 Back to Table of Contents


humanloop.datapoints.delete

Deprecated

Delete a list of datapoints by their IDs.

WARNING: This endpoint has been decommissioned and no longer works. Please use the v5 datasets API instead.

🛠️ Usage

humanloop.datapoints.delete()

🌐 Endpoint

/datapoints delete

🔙 Back to Table of Contents


humanloop.datapoints.get

Get a datapoint by ID.

🛠️ Usage

get_response = humanloop.datapoints.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of datapoint.

🔄 Return

DatapointResponse

🌐 Endpoint

/datapoints/{id} get

🔙 Back to Table of Contents


humanloop.datapoints.update

Deprecated

Edit the input, messages and criteria fields of a datapoint.

WARNING: This endpoint has been decommissioned and no longer works. Please use the v5 datasets API instead.

🛠️ Usage

update_response = humanloop.datapoints.update(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of datapoint.

🔄 Return

DatapointResponse

🌐 Endpoint

/datapoints/{id} patch

🔙 Back to Table of Contents


humanloop.datasets.create

Create a new dataset for a project.

🛠️ Usage

create_response = humanloop.datasets.create(
    description="string_example",
    name="string_example",
    project_id="project_id_example",
)

⚙️ Parameters

description: str

The description of the dataset.

name: str

The name of the dataset.

project_id: str

⚙️ Request Body

CreateDatasetRequest

🔄 Return

DatasetResponse

🌐 Endpoint

/projects/{project_id}/datasets post

🔙 Back to Table of Contents


humanloop.datasets.create_datapoint

Create a new datapoint for a dataset.

Here in the v4 API, this has the following behaviour:

  • Retrieve the current latest version of the dataset.
  • Construct a new version of the dataset with the new testcases added.
  • Store the new version as a committed version with an autogenerated commit message, and return the new datapoints.

🛠️ Usage

create_datapoint_response = humanloop.datasets.create_datapoint(
    body={
        "log_ids": ["log_ids_example"],
    },
    dataset_id="dataset_id_example",
    log_ids=["string_example"],
    inputs={
        "key": "string_example",
    },
    messages=[
        {
            "role": "user",
        }
    ],
    target={
        "key": "string_example",
    },
)

⚙️ Parameters

dataset_id: str

String ID of dataset. Starts with evts_.

requestBody: DatasetsCreateDatapointRequest

🔄 Return

DatasetsCreateDatapointResponse

🌐 Endpoint

/datasets/{dataset_id}/datapoints post

🔙 Back to Table of Contents


humanloop.datasets.delete

Delete a dataset by ID.

🛠️ Usage

delete_response = humanloop.datasets.delete(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of dataset. Starts with evts_.

🌐 Endpoint

/datasets/{id} delete

🔙 Back to Table of Contents


humanloop.datasets.get

Get a single dataset by ID.

🛠️ Usage

get_response = humanloop.datasets.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of dataset. Starts with evts_.

🔄 Return

DatasetResponse

🌐 Endpoint

/datasets/{id} get

🔙 Back to Table of Contents


humanloop.datasets.list

Get all Datasets for an organization.

🛠️ Usage

list_response = humanloop.datasets.list()

🔄 Return

DatasetsListResponse

🌐 Endpoint

/datasets get

🔙 Back to Table of Contents


humanloop.datasets.list_all_for_project

Deprecated

Get all datasets for a project.

🛠️ Usage

list_all_for_project_response = humanloop.datasets.list_all_for_project(
    project_id="project_id_example",
)

⚙️ Parameters

project_id: str

🔄 Return

DatasetsListAllForProjectResponse

🌐 Endpoint

/projects/{project_id}/datasets get

🔙 Back to Table of Contents


humanloop.datasets.list_datapoints

Get datapoints for a dataset.

🛠️ Usage

list_datapoints_response = humanloop.datasets.list_datapoints(
    dataset_id="dataset_id_example",
    page=0,
    size=50,
)

⚙️ Parameters

dataset_id: str

String ID of dataset. Starts with evts_.

page: int
size: int

🔄 Return

PaginatedDataDatapointResponse

🌐 Endpoint

/datasets/{dataset_id}/datapoints get

🔙 Back to Table of Contents


humanloop.datasets.update

Update a testset by ID.

🛠️ Usage

update_response = humanloop.datasets.update(
    id="id_example",
    description="string_example",
    name="string_example",
)

⚙️ Parameters

id: str

String ID of testset. Starts with evts_.

description: str

The description of the dataset.

name: str

The name of the dataset.

⚙️ Request Body

UpdateDatasetRequest

🔄 Return

DatasetResponse

🌐 Endpoint

/datasets/{id} patch

🔙 Back to Table of Contents


humanloop.evaluations.add_evaluators

Add evaluators to an existing evaluation run.

🛠️ Usage

add_evaluators_response = humanloop.evaluations.add_evaluators(
    id="id_example",
    evaluator_ids=["string_example"],
    evaluator_version_ids=["string_example"],
)

⚙️ Parameters

id: str

String ID of evaluation run. Starts with ev_.

evaluator_ids: AddEvaluatorsRequestEvaluatorIds
evaluator_version_ids: AddEvaluatorsRequestEvaluatorVersionIds

⚙️ Request Body

AddEvaluatorsRequest

🔄 Return

EvaluationResponse

🌐 Endpoint

/evaluations/{id}/evaluators patch

🔙 Back to Table of Contents


humanloop.evaluations.create

Create an evaluation.

🛠️ Usage

create_response = humanloop.evaluations.create(
    config_id="string_example",
    evaluator_ids=["string_example"],
    dataset_id="string_example",
    project_id="project_id_example",
    provider_api_keys={},
    hl_generated=True,
)

⚙️ Parameters

config_id: str

ID of the config to evaluate. Starts with config_.

evaluator_ids: CreateEvaluationRequestEvaluatorIds
dataset_id: str

ID of the dataset to use in this evaluation. Starts with evts_.

project_id: str

String ID of project. Starts with pr_.

provider_api_keys: ProviderApiKeys

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization. Ensure you provide an API key for the provider for the model config you are evaluating, or have one saved to your organization.

hl_generated: bool

Whether the log generations for this evaluation should be performed by Humanloop. If False, the log generations should be submitted by the user via the API.

⚙️ Request Body

CreateEvaluationRequest

🔄 Return

EvaluationResponse

🌐 Endpoint

/projects/{project_id}/evaluations post

🔙 Back to Table of Contents


humanloop.evaluations.get

Get evaluation by ID.

🛠️ Usage

get_response = humanloop.evaluations.get(
    id="id_example",
    evaluator_aggregates=True,
    evaluatee_id="string_example",
)

⚙️ Parameters

id: str

String ID of evaluation run. Starts with ev_.

evaluator_aggregates: bool

Whether to include evaluator aggregates in the response.

evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

🔄 Return

EvaluationResponse

🌐 Endpoint

/evaluations/{id} get

🔙 Back to Table of Contents


humanloop.evaluations.list

Get the evaluations associated with a project.

Sorting and filtering are supported through query params for categorical columns and the created_at timestamp.

Sorting is supported for the dataset, config, status and evaluator-{evaluator_id} columns. Specify sorting with the sort query param, with values {column}.{ordering}. E.g. ?sort=dataset.asc&sort=status.desc will yield a multi-column sort: first by dataset, then by status.

Filtering is supported for the id, dataset, config and status columns.

Specify filtering with the id_filter, dataset_filter, config_filter and status_filter query params.

E.g. ?dataset_filter=my_dataset&dataset_filter=my_other_dataset&status_filter=running will only show rows where the dataset is "my_dataset" or "my_other_dataset", and where the status is "running".

An additional date range filter is supported for the created_at column. Use the start_date and end_date query parameters to configure this.

🛠️ Usage

list_response = humanloop.evaluations.list(
    project_id="project_id_example",
    id=["string_example"],
    start_date="1970-01-01",
    end_date="1970-01-01",
    size=50,
    page=0,
    evaluatee_id="string_example",
)

⚙️ Parameters

project_id: str

String ID of project. Starts with pr_.

id: List[str]

A list of evaluation run ids to filter on. Starts with ev_.

start_date: date

Only return evaluations created after this date.

end_date: date

Only return evaluations created before this date.

size: int
page: int
evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

🔄 Return

PaginatedDataEvaluationResponse

🌐 Endpoint

/evaluations get

🔙 Back to Table of Contents


humanloop.evaluations.list_all_for_project

Deprecated

Get all the evaluations associated with your project.

Deprecated: This is a legacy unpaginated endpoint. Use /evaluations instead, with appropriate sorting, filtering and pagination options.

🛠️ Usage

list_all_for_project_response = humanloop.evaluations.list_all_for_project(
    project_id="project_id_example",
    evaluatee_id="string_example",
    evaluator_aggregates=True,
)

⚙️ Parameters

project_id: str

String ID of project. Starts with pr_.

evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

evaluator_aggregates: bool

Whether to include evaluator aggregates in the response.

🔄 Return

EvaluationsGetForProjectResponse

🌐 Endpoint

/projects/{project_id}/evaluations get

🔙 Back to Table of Contents


humanloop.evaluations.list_datapoints

Get testcases by evaluation ID.

🛠️ Usage

list_datapoints_response = humanloop.evaluations.list_datapoints(
    id="id_example",
    page=1,
    size=10,
    evaluatee_id="string_example",
)

⚙️ Parameters

id: str

String ID of evaluation. Starts with ev_.

page: int

Page to fetch. Starts from 1.

size: int

Number of evaluation results to retrieve.

evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

🔄 Return

PaginatedDataEvaluationDatapointSnapshotResponse

🌐 Endpoint

/evaluations/{id}/datapoints get

🔙 Back to Table of Contents


humanloop.evaluations.log

Log an external generation to an evaluation run for a datapoint.

The run must have status 'running'.

🛠️ Usage

log_response = humanloop.evaluations.log(
    datapoint_id="string_example",
    log={
        "save": True,
    },
    evaluation_id="evaluation_id_example",
    evaluatee_id="string_example",
)

⚙️ Parameters

datapoint_id: str

The datapoint for which a log was generated. Must be one of the datapoints in the dataset being evaluated.

log: LogRequest

The log generated for the datapoint.

evaluation_id: str

ID of the evaluation run. Starts with evrun_.

evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

⚙️ Request Body

CreateEvaluationLogRequest

🔄 Return

CreateLogResponse

🌐 Endpoint

/evaluations/{evaluation_id}/log post

🔙 Back to Table of Contents


humanloop.evaluations.result

Log an evaluation result to an evaluation run.

The run must have status 'running'. One of result or error must be provided.

🛠️ Usage

result_response = humanloop.evaluations.result(
    log_id="string_example",
    evaluator_id="string_example",
    evaluation_id="evaluation_id_example",
    result=True,
    error="string_example",
    evaluatee_id="string_example",
)

⚙️ Parameters

log_id: str

The log that was evaluated. Must have as its source_datapoint_id one of the datapoints in the dataset being evaluated.

evaluator_id: str

ID of the evaluator that evaluated the log. Starts with evfn_. Must be one of the evaluator IDs associated with the evaluation run being logged to.

evaluation_id: str

ID of the evaluation run. Starts with evrun_.

result: Union[bool, int, Union[int, float]]

The result value of the evaluation.

error: str

An error that occurred during evaluation.

evaluatee_id: str

String ID of evaluatee version to return. If not defined, the first evaluatee will be returned. Starts with evv_.

⚙️ Request Body

CreateEvaluationResultLogRequest

🔄 Return

EvaluationResultResponse

🌐 Endpoint

/evaluations/{evaluation_id}/result post

🔙 Back to Table of Contents


humanloop.evaluations.update_status

Update the status of an evaluation run.

Can only be used to update the status of an evaluation run that uses external or human evaluators. The evaluation must currently have status 'running' if switching to 'completed', or status 'completed' if switching back to 'running'.

🛠️ Usage

update_status_response = humanloop.evaluations.update_status(
    status="pending",
    id="id_example",
)

⚙️ Parameters

status: EvaluationStatus

The new status of the evaluation.

id: str

String ID of evaluation run. Starts with ev_.

⚙️ Request Body

UpdateEvaluationStatusRequest

🔄 Return

EvaluationResponse

🌐 Endpoint

/evaluations/{id}/status patch

🔙 Back to Table of Contents
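
Taken together, the evaluations endpoints above support a run where generations and results are produced outside Humanloop. A minimal sketch, using this document's placeholder-style IDs and assuming the create and log responses expose id fields:

# Create an evaluation whose generations will be submitted via the API
evaluation = humanloop.evaluations.create(
    project_id="project_id_example",
    config_id="string_example",
    dataset_id="string_example",
    evaluator_ids=["string_example"],
    hl_generated=False,  # we log the generations ourselves
)

# Log an externally generated output for a datapoint in the dataset
log_response = humanloop.evaluations.log(
    evaluation_id=evaluation.id,  # assumes the response exposes an id
    datapoint_id="string_example",
    log={
        "output": "externally generated output",
    },
)

# Record an evaluator result for that log
humanloop.evaluations.result(
    evaluation_id=evaluation.id,
    log_id=log_response.id,  # assumes CreateLogResponse exposes an id
    evaluator_id="string_example",
    result=True,
)

# Close the run once every datapoint has been logged and scored
humanloop.evaluations.update_status(
    id=evaluation.id,
    status="completed",
)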


humanloop.evaluators.create

Create an evaluator within your organization.

🛠️ Usage

create_response = humanloop.evaluators.create(
    description="string_example",
    name="a",
    arguments_type="target_free",
    return_type="boolean",
    type="python",
    code="string_example",
    model_config={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
        "prompt_template": "{{question}}",
    },
)

⚙️ Parameters

description: str

The description of the evaluator.

name: str

The name of the evaluator.

arguments_type: EvaluatorArgumentsType

Whether this evaluator is target-free or target-required.

return_type: EvaluatorReturnTypeEnum

The type of the return value of the evaluator.

type: EvaluatorType

The type of the evaluator.

code: str

The code for the evaluator. This code will be executed in a sandboxed environment.

model_config: ModelConfigCompletionRequest

The model configuration used to generate.

⚙️ Request Body

CreateEvaluatorRequest

🔄 Return

EvaluatorResponse

🌐 Endpoint

/evaluators post

🔙 Back to Table of Contents
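
The code parameter above is sent as a string and executed in Humanloop's sandbox. The exact function signature the sandbox expects is not documented in this README, so the snippet below is only an illustration of the kind of payload you might pass for a target-free, boolean Python evaluator.

# Hypothetical evaluator code payload; the expected function signature is an
# assumption, not documented in this README.
evaluator_code = """
def evaluator(log):
    # Pass when the model produced a non-empty output.
    return bool(log.get("output"))
"""

create_response = humanloop.evaluators.create(
    name="non-empty-output",
    description="Checks that the model returned some output.",
    arguments_type="target_free",
    return_type="boolean",
    type="python",
    code=evaluator_code,
)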


humanloop.evaluators.delete

Delete an evaluator within your organization.

🛠️ Usage

humanloop.evaluators.delete(
    id="id_example",
)

⚙️ Parameters

id: str

🌐 Endpoint

/evaluators/{id} delete

🔙 Back to Table of Contents


humanloop.evaluators.get

Get an evaluator within your organization.

🛠️ Usage

get_response = humanloop.evaluators.get(
    id="id_example",
)

⚙️ Parameters

id: str

🔄 Return

EvaluatorResponse

🌐 Endpoint

/evaluators/{id} get

🔙 Back to Table of Contents


humanloop.evaluators.list

Get all evaluators within your organization.

🛠️ Usage

list_response = humanloop.evaluators.list()

🔄 Return

EvaluatorsListResponse

🌐 Endpoint

/evaluators get

🔙 Back to Table of Contents


humanloop.evaluators.update

Update an evaluator within your organization.

🛠️ Usage

update_response = humanloop.evaluators.update(
    id="id_example",
    description="string_example",
    name="string_example",
    arguments_type="target_free",
    return_type="boolean",
    code="string_example",
    model_config={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
        "prompt_template": "{{question}}",
    },
)

⚙️ Parameters

id: str
description: str

The description of the evaluator.

name: str

The name of the evaluator.

arguments_type: EvaluatorArgumentsType

Whether this evaluator is target-free or target-required.

return_type: EvaluatorReturnTypeEnum

The type of the return value of the evaluator.

code: str

The code for the evaluator. This code will be executed in a sandboxed environment.

model_config: ModelConfigCompletionRequest

The model configuration used to generate.

⚙️ Request Body

UpdateEvaluatorRequest

🔄 Return

EvaluatorResponse

🌐 Endpoint

/evaluators/{id} patch

🔙 Back to Table of Contents


humanloop.experiments.create

Create an experiment for your project.

You can optionally specify IDs of your project's model configs to include in the experiment, along with a set of labels to consider as positive feedback and whether the experiment should be set as active.

🛠️ Usage

create_response = humanloop.experiments.create(
    name="string_example",
    positive_labels=[
        {
            "type": "type_example",
            "value": "value_example",
        }
    ],
    project_id="project_id_example",
    config_ids=["string_example"],
    set_active=False,
)

⚙️ Parameters

name: str

Name of experiment.

positive_labels: List[PositiveLabel]

Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.

project_id: str

String ID of project. Starts with pr_.

config_ids: CreateExperimentRequestConfigIds
set_active: bool

Whether to set the created experiment as the project's active experiment.

⚙️ Request Body

CreateExperimentRequest

🔄 Return

ExperimentResponse

🌐 Endpoint

/projects/{project_id}/experiments post

🔙 Back to Table of Contents


humanloop.experiments.delete

Delete the experiment with the specified ID.

🛠️ Usage

humanloop.experiments.delete(
    experiment_id="experiment_id_example",
)

⚙️ Parameters

experiment_id: str

String ID of experiment. Starts with exp_.

🌐 Endpoint

/experiments/{experiment_id} delete

🔙 Back to Table of Contents


humanloop.experiments.list

Get an array of experiments associated to your project.

🛠️ Usage

list_response = humanloop.experiments.list(
    project_id="project_id_example",
)

⚙️ Parameters

project_id: str

String ID of project. Starts with pr_.

🔄 Return

ExperimentsListResponse

🌐 Endpoint

/projects/{project_id}/experiments get

🔙 Back to Table of Contents


humanloop.experiments.sample

Samples a model config from the experiment's active model configs.

🛠️ Usage

sample_response = humanloop.experiments.sample(
    experiment_id="experiment_id_example",
)

⚙️ Parameters

experiment_id: str

String ID of experiment. Starts with exp_.

🔄 Return

GetModelConfigResponse

🌐 Endpoint

/experiments/{experiment_id}/model-config get

🔙 Back to Table of Contents
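
A common pattern is to sample a config from the active experiment, generate with it, and then log against the sampled trial via the trial_id parameter of humanloop.log. The config and trial_id attributes read from the response below are assumptions about GetModelConfigResponse; check the returned model for the exact field names.

# Hedged sketch: sample a config from an experiment, then log against its trial.
sample_response = humanloop.experiments.sample(
    experiment_id="exp_...",  # placeholder
)
sampled_config = sample_response.config  # assumed attribute
trial_id = sample_response.trial_id      # assumed attribute

# ... call your model provider using `sampled_config` ...

humanloop.log(
    project="sdk-example",
    trial_id=trial_id,  # associates this log with the sampled experiment trial
    inputs={"persona": "the pirate Blackbeard"},
    output="generated text",
)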


humanloop.experiments.update

Update your experiment, including registering and de-registering model configs.

🛠️ Usage

update_response = humanloop.experiments.update(
    experiment_id="experiment_id_example",
    name="string_example",
    positive_labels=[
        {
            "type": "type_example",
            "value": "value_example",
        }
    ],
    config_ids_to_register=["string_example"],
    config_ids_to_deregister=["string_example"],
)

⚙️ Parameters

experiment_id: str

String ID of experiment. Starts with exp_.

name: str

Name of experiment.

positive_labels: List[PositiveLabel]

Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.

config_ids_to_register: UpdateExperimentRequestConfigIdsToRegister
config_ids_to_deregister: UpdateExperimentRequestConfigIdsToDeregister

⚙️ Request Body

UpdateExperimentRequest

🔄 Return

ExperimentResponse

🌐 Endpoint

/experiments/{experiment_id} patch

🔙 Back to Table of Contents


humanloop.feedback

Submit an array of feedback for existing data_ids.

🛠️ Usage

feedback_response = humanloop.feedback(
    body=[
        {
            "type": "string_example",
        }
    ],
    type="string_example",
    value="string_example",
    data_id="string_example",
    user="string_example",
    created_at="1970-01-01T00:00:00.00Z",
    unset=True,
)

⚙️ Parameters

type: Union[FeedbackType, str]

The type of feedback. The default feedback types available are 'rating', 'action', 'issue', 'correction', and 'comment'.

value: str

The feedback value to be set. This field should be left blank when unsetting 'rating', 'correction' or 'comment', but is required otherwise.

data_id: str

ID to associate the feedback to a previously logged datapoint.

user: str

A unique identifier for the user who provided the feedback.

created_at: datetime

User defined timestamp for when the feedback was created.

unset: bool

If true, the value for this feedback is unset.
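
For example, to record a rating against a logged datapoint and later retract it, the same method can be called twice; the data_id below is a placeholder.

# Record a rating, then unset it using the same data_id.
humanloop.feedback(type="rating", value="good", data_id="data_...")
humanloop.feedback(type="rating", data_id="data_...", unset=True)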

⚙️ Request Body

FeedbackSubmitRequest

🔄 Return

FeedbackSubmitResponse

🌐 Endpoint

/feedback post

🔙 Back to Table of Contents


humanloop.logs.delete

Delete one or more logs by ID.

🛠️ Usage

humanloop.logs.delete(
    id=["string_example"],
)

⚙️ Parameters

id: List[str]

🌐 Endpoint

/logs delete

🔙 Back to Table of Contents


humanloop.logs.get

Retrieve a log by log id.

🛠️ Usage

get_response = humanloop.logs.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of log to return. Starts with data_.

🔄 Return

LogResponse

🌐 Endpoint

/logs/{id} get

🔙 Back to Table of Contents


humanloop.logs.list

Retrieve paginated logs from the server.

Sorting and filtering are supported through query params.

Sorting is supported for the source, model, timestamp, and feedback-{output_name} columns. Specify sorting with the sort query param, with values {column}.{ordering}. E.g. ?sort=source.asc&sort=model.desc will yield a multi-column sort: first by source, then by model.

Filtering is supported for the source, model, feedback-{output_name}, evaluator-{evaluator_external_id} columns.

Specify filtering with the source_filter, model_filter, feedback-{output.name}_filter and evaluator-{evaluator_external_id}_filter query params.

E.g. ?source_filter=AI&source_filter=user_1234&feedback-explicit_filter=good will only show rows where the source is "AI" or "user_1234", and where the latest feedback for the "explicit" output group is "good".

An additional date range filter is supported for the Timestamp column (i.e. Log.created_at) through the start_date and end_date query parameters. The date format can be either a date (YYYY-MM-DD, e.g. 2024-01-01) or a datetime (YYYY-MM-DD[T]HH:MM[:SS[.ffffff]][Z or [±]HH[:]MM], e.g. 2024-01-01T00:00:00Z).

Searching is supported for the model inputs and output. Specify a search term with the search query param. E.g. ?search=hello%20there will cause a case-insensitive search across model inputs and output.

🛠️ Usage

list_response = humanloop.logs.list(
    project_id="project_id_example",
    search="string_example",
    metadata_search="string_example",
    version_status="uncommitted",
    start_date="1970-01-01",
    end_date="1970-01-01",
    size=50,
    page=0,
)
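
The sort and *_filter query parameters described above are not listed among this method's parameters, so one option is to call the endpoint directly. In the sketch below, the base URL and the X-API-KEY header name are assumptions rather than values taken from this README; adjust them to match your account's API settings.

# Hedged sketch: multi-column sort and source filtering via raw query params.
import requests

response = requests.get(
    "https://api.humanloop.com/v4/logs",    # assumed base URL
    headers={"X-API-KEY": "YOUR_API_KEY"},  # assumed auth header name
    params=[
        ("project_id", "project_id_example"),
        ("sort", "source.asc"),
        ("sort", "model.desc"),
        ("source_filter", "AI"),
        ("source_filter", "user_1234"),
    ],
)
print(response.json())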

⚙️ Parameters

project_id: str
search: str
metadata_search: str
version_status: VersionStatus
start_date: Union[date, datetime]
end_date: Union[date, datetime]
size: int
page: int

🔄 Return

PaginatedDataLogResponse

🌐 Endpoint

/logs get

🔙 Back to Table of Contents


humanloop.log

Log a datapoint or array of datapoints to your Humanloop project.

🛠️ Usage

log_response = humanloop.log(
    body=[
        {
            "save": True,
        }
    ],
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    save=True,
    source_datapoint_id="string_example",
    reference_id="string_example",
    trial_id="string_example",
    messages=[
        {
            "role": "user",
        }
    ],
    output="string_example",
    judgment=True,
    config_id="string_example",
    config={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
        "type": "ModelConfigRequest",
    },
    environment="string_example",
    feedback={
        "type": "string_example",
        "value": 3.14,
    },
    created_at="1970-01-01T00:00:00.00Z",
    error="string_example",
    duration=3.14,
    output_message={
        "role": "user",
    },
    prompt_tokens=1,
    output_tokens=1,
    prompt_cost=3.14,
    output_cost=3.14,
    provider_request={},
    provider_response={},
)

⚙️ Parameters

project: str

Unique project name. If no project exists with this name, a new project will be created.

project_id: str

Unique ID of a project to associate to the log. Either this or project must be provided.

session_id: str

ID of the session to associate the datapoint.

session_reference_id: str

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id: str

ID associated to the parent datapoint in a session.

parent_reference_id: str

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

The inputs passed to the prompt template.

source: str

Identifies where the model was called from.

metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Any additional metadata to record.

save: bool

Whether the request/response payloads will be stored on Humanloop.

source_datapoint_id: str

ID of the source datapoint if this is a log derived from a datapoint in a dataset.

reference_id: str

A unique string to reference the datapoint. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a subsequent log request.

trial_id: str

Unique ID of an experiment trial to associate to the log.

messages: List[ChatMessageWithToolCall]

The messages passed to the provider chat endpoint.

output: str

Generated output from your model for the provided inputs. Can be None if logging an error, or if logging a parent datapoint with the intention of populating it later.

judgment: Union[bool, Union[int, float]]
config_id: str

Unique ID of a config to associate to the log.

config: Union[ModelConfigRequest, ToolConfigRequest]

The model config used for this generation. Required unless config_id or trial_id is provided.

environment: str

The environment name used to create the log.

feedback: Union[Feedback, List[Feedback]]

Optional parameter to provide feedback with your logged datapoint.

created_at: datetime

User defined timestamp for when the log was created.

error: str

Error message if the log is an error.

duration: Union[int, float]

Duration of the logged event in seconds.

output_message: ChatMessageWithToolCall

The message returned by the provider.

prompt_tokens: int

Number of tokens in the prompt used to generate the output.

output_tokens: int

Number of tokens in the output generated by the model.

prompt_cost: Union[int, float]

Cost in dollars associated to the tokens in the prompt.

output_cost: Union[int, float]

Cost in dollars associated to the tokens in the output.

provider_request: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Raw request sent to provider.

provider_response: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Raw response received from the provider.

⚙️ Request Body

LogDatapointRequest

🔄 Return

LogsLogResponse

🌐 Endpoint

/logs post

🔙 Back to Table of Contents
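
The session_reference_id, reference_id and parent_reference_id parameters above let you build up nested session logs keyed by IDs from your own system. A minimal sketch; the reference strings are arbitrary values you control.

# Log a parent datapoint and a nested child datapoint in the same session,
# using reference IDs generated by your own system.
humanloop.log(
    project="sdk-example",
    session_reference_id="my-session-001",
    reference_id="my-parent-datapoint-001",
    inputs={"question": "What is asynchronous programming?"},
    output="Parent answer",
)
humanloop.log(
    project="sdk-example",
    session_reference_id="my-session-001",
    parent_reference_id="my-parent-datapoint-001",  # the datapoint logged above
    inputs={"question": "A follow-up question"},
    output="Child answer",
)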


humanloop.logs.update

Update a logged datapoint in your Humanloop project.

🛠️ Usage

update_response = humanloop.logs.update(
    id="id_example",
    output="string_example",
    error="string_example",
    duration=3.14,
)

⚙️ Parameters

id: str

String ID of logged datapoint to return. Starts with data_.

output: str

Generated output from your model for the provided inputs.

error: str

Error message if the log is an error.

duration: Union[int, float]

Duration of the logged event in seconds.

⚙️ Request Body

UpdateLogRequest

🔄 Return

LogResponse

🌐 Endpoint

/logs/{id} patch

🔙 Back to Table of Contents


humanloop.logs.update_by_ref

Update a logged datapoint by its reference ID.

The reference_id query parameter must be provided, and refers to the reference_id of a previously-logged datapoint.

🛠️ Usage

update_by_ref_response = humanloop.logs.update_by_ref(
    reference_id="reference_id_example",
    output="string_example",
    error="string_example",
    duration=3.14,
)

⚙️ Parameters

reference_id: str

A unique string to reference the datapoint. Identifies the logged datapoint created with the same reference_id.

output: str

Generated output from your model for the provided inputs.

error: str

Error message if the log is an error.

duration: Union[int, float]

Duration of the logged event in seconds.

⚙️ Request Body

UpdateLogRequest

🔄 Return

LogResponse

🌐 Endpoint

/logs patch

🔙 Back to Table of Contents
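
Paired with the reference_id parameter of humanloop.log, this lets you log first and fill in the output later without tracking Humanloop's own log IDs. A minimal sketch:

# Log a datapoint under your own reference ID, then update it by that reference.
humanloop.log(
    project="sdk-example",
    reference_id="my-ref-123",
    inputs={"question": "What is asynchronous programming?"},
)
# ... once the generation completes ...
humanloop.logs.update_by_ref(
    reference_id="my-ref-123",
    output="Asynchronous programming lets work continue while waiting on I/O.",
    duration=1.2,
)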


humanloop.model_configs.deserialize

Deserialize a model config from a .prompt file format.

🛠️ Usage

deserialize_response = humanloop.model_configs.deserialize(
    config="string_example",
)

⚙️ Parameters

config: str

⚙️ Request Body

BodyModelConfigsDeserialize

🔄 Return

ModelConfigResponse

🌐 Endpoint

/model-configs/deserialize post

🔙 Back to Table of Contents


humanloop.model_configs.export

Export a model config to a .prompt file by ID.

🛠️ Usage

export_response = humanloop.model_configs.export(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of the model config. Starts with config_.

🌐 Endpoint

/model-configs/{id}/export post

🔙 Back to Table of Contents


humanloop.model_configs.get

Get a specific model config by ID.

🛠️ Usage

get_response = humanloop.model_configs.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of the model config. Starts with config_.

🔄 Return

ModelConfigResponse

🌐 Endpoint

/model-configs/{id} get

🔙 Back to Table of Contents


humanloop.model_configs.register

Register a model config to a project and optionally add it to an experiment.

If the project name provided does not exist, a new project will be created automatically.

If an experiment name is provided, the specified experiment must already exist. Otherwise, an error will be raised.

If the model config is the first to be associated to the project, it will be set as the active model config.

🛠️ Usage

register_response = humanloop.model_configs.register(
    model="string_example",
    description="string_example",
    name="string_example",
    provider="openai",
    max_tokens=-1,
    temperature=1,
    top_p=1,
    stop="string_example",
    presence_penalty=0,
    frequency_penalty=0,
    other={},
    seed=1,
    response_format={
        "type": "json_object",
    },
    project="string_example",
    project_id="string_example",
    experiment="string_example",
    prompt_template="string_example",
    chat_template=[
        {
            "role": "user",
        }
    ],
    endpoint="complete",
    tools=[
        {
            "id": "id_example",
            "source": "organization",
        }
    ],
)

⚙️ Parameters

model: str

The model instance used. E.g. text-davinci-002.

description: str

A description of the model config.

name: str

A friendly display name for the model config. If not provided, a name will be generated.

provider: ModelProviders

The company providing the underlying model service.

max_tokens: int

The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.

temperature: Union[int, float]

What sampling temperature to use when making a generation. Higher values mean the model will be more creative.

top_p: Union[int, float]

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

stop: Union[str, List[str]]

The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.

presence_penalty: Union[int, float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.

frequency_penalty: Union[int, float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.

other: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Other parameter values to be passed to the provider call.

seed: int

If specified, the model will make a best effort to sample deterministically, but this is not guaranteed.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.

project: str

Unique project name. If it does not exist, a new project will be created.

project_id: str

Unique project ID

experiment: str

If specified, the model config will be added to this experiment. Experiments are used for A/B testing and optimizing hyperparameters.

prompt_template: str

Prompt template that will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.

chat_template: List[ChatMessageWithToolCall]

Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the chat template should be specified with syntax: {{INPUT_NAME}}.

endpoint: ModelEndpoints

Which of the provider's model endpoints to use. For example Complete or Edit.

tools: ProjectModelConfigRequestTools

⚙️ Request Body

ProjectModelConfigRequest

🔄 Return

ProjectConfigResponse

🌐 Endpoint

/model-configs post

🔙 Back to Table of Contents
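
Registering a config and checking what the project will serve often go together. The sketch below only uses methods documented in this README and prints the whole responses rather than guessing at individual field names; the project ID is a placeholder.

# Register a model config against a project, then inspect the active config.
register_response = humanloop.model_configs.register(
    project="sdk-example",
    provider="openai",
    model="gpt-3.5-turbo",
    max_tokens=-1,
    temperature=0.7,
    endpoint="complete",
    prompt_template="Summarize this for a second-grade student:\n\n{{text}}",
)
print(register_response)

active_config_response = humanloop.projects.get_active_config(
    id="pr_...",  # placeholder project ID
)
print(active_config_response)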


humanloop.model_configs.serialize

Serialize a model config to a .prompt file format.

🛠️ Usage

serialize_response = humanloop.model_configs.serialize(
    body={
        "provider": "openai",
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "endpoint": "complete",
    },
    description="string_example",
    name="string_example",
    provider="openai",
    model="string_example",
    max_tokens=-1,
    temperature=1,
    top_p=1,
    stop="string_example",
    presence_penalty=0,
    frequency_penalty=0,
    other={},
    seed=1,
    response_format={
        "type": "json_object",
    },
    endpoint="complete",
    chat_template=[
        {
            "role": "user",
        }
    ],
    tools=[
        {
            "id": "id_example",
            "source": "organization",
        }
    ],
    prompt_template="{{question}}",
)

⚙️ Parameters

description: str

A description of the model config.

name: str

A friendly display name for the model config. If not provided, a name will be generated.

provider: ModelProviders

The company providing the underlying model service.

model: str

The model instance used. E.g. text-davinci-002.

max_tokens: int

The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.

temperature: Union[int, float]

What sampling temperature to use when making a generation. Higher values mean the model will be more creative.

top_p: Union[int, float]

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

stop: Union[str, List[str]]

The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.

presence_penalty: Union[int, float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.

frequency_penalty: Union[int, float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.

other: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]

Other parameter values to be passed to the provider call.

seed: int

If specified, the model will make a best effort to sample deterministically, but this is not guaranteed.

response_format: ResponseFormat

The format of the response. Only type json_object is currently supported for chat.

endpoint: ModelEndpoints

The provider model endpoint used.

chat_template: List[ChatMessageWithToolCall]

Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. Input variables within the template should be specified with syntax: {{INPUT_NAME}}.

tools: ModelConfigChatRequestTools
prompt_template: str

Prompt template that will take your specified inputs to form your final request to the model. Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.

⚙️ Request Body

ModelConfigsSerializeRequest

🌐 Endpoint

/model-configs/serialize post

🔙 Back to Table of Contents
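
serialize and deserialize are inverses over the .prompt format, so a config can be round-tripped. No return model is documented for serialize above, so how the serialized text is exposed on the response is an assumption in the sketch below.

# Hedged round-trip sketch between a model config and the .prompt format.
serialize_response = humanloop.model_configs.serialize(
    body={
        "provider": "openai",
        "model": "gpt-3.5-turbo",
        "max_tokens": -1,
        "temperature": 0.7,
        "endpoint": "complete",
        "prompt_template": "{{question}}",
    },
)
prompt_file_text = serialize_response.body  # assumed attribute holding the .prompt text

deserialize_response = humanloop.model_configs.deserialize(
    config=prompt_file_text,
)
print(deserialize_response)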


humanloop.projects.create

Create a new project.

🛠️ Usage

create_response = humanloop.projects.create(
    name="string_example",
    feedback_types=[
        {
            "type": "type_example",
            "_class": "select",
        }
    ],
    directory_id="string_example",
)

⚙️ Parameters

name: str

Unique project name.

feedback_types: List[FeedbackTypeRequest]

Feedback types to be created.

directory_id: str

ID of directory to assign project to. Starts with dir_. If not provided, the project will be created in the root directory.

⚙️ Request Body

CreateProjectRequest

🔄 Return

ProjectResponse

🌐 Endpoint

/projects post

🔙 Back to Table of Contents


humanloop.projects.create_feedback_type

Create Feedback Type

🛠️ Usage

create_feedback_type_response = humanloop.projects.create_feedback_type(
    type="string_example",
    id="id_example",
    values=[
        {
            "value": "value_example",
            "sentiment": "positive",
        }
    ],
    _class="select",
)

⚙️ Parameters

type: str

The type of feedback to update.

id: str

String ID of project. Starts with pr_.

values: List[FeedbackLabelRequest]

The feedback values to be available. This field should only be populated when updating a 'select' or 'multi_select' feedback class.

_class: FeedbackClass

The data type associated to this feedback type; whether it is a 'text'/'select'/'multi_select'. This is optional when updating the default feedback types (i.e. when type is 'rating', 'action' or 'issue').

⚙️ Request Body

FeedbackTypeRequest

🔄 Return

FeedbackTypeModel

🌐 Endpoint

/projects/{id}/feedback-types post

🔙 Back to Table of Contents


humanloop.projects.deactivate_config

Remove the project's active config, if set.

This has no effect if the project does not have an active model config set.

🛠️ Usage

deactivate_config_response = humanloop.projects.deactivate_config(
    id="id_example",
    environment="string_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

environment: str

Name for the environment. E.g. 'production'. If not provided, will delete the active config for the default environment.

🔄 Return

ProjectResponse

🌐 Endpoint

/projects/{id}/active-config delete

🔙 Back to Table of Contents


humanloop.projects.deactivate_experiment

Remove the project's active experiment, if set.

This has no effect if the project does not have an active experiment set.

🛠️ Usage

deactivate_experiment_response = humanloop.projects.deactivate_experiment(
    id="id_example",
    environment="string_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

environment: str

Name for the environment. E.g. 'production'. If not provided, will remove the active experiment for the default environment.

🔄 Return

ProjectResponse

🌐 Endpoint

/projects/{id}/active-experiment delete

🔙 Back to Table of Contents


humanloop.projects.delete

Delete a specific file.

🛠️ Usage

humanloop.projects.delete(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

🌐 Endpoint

/projects/{id} delete

🔙 Back to Table of Contents


humanloop.projects.delete_deployed_config

Remove the version deployed to the environment.

This has no effect if the project does not have an active version set.

🛠️ Usage

delete_deployed_config_response = humanloop.projects.delete_deployed_config(
    project_id="project_id_example",
    environment_id="environment_id_example",
)

⚙️ Parameters

project_id: str
environment_id: str

🌐 Endpoint

/projects/{project_id}/deployed-config/{environment_id} delete

🔙 Back to Table of Contents


humanloop.projects.deploy_config

Deploy a model config to an environment.

If the environment already has a model config deployed, it will be replaced.

🛠️ Usage

deploy_config_response = humanloop.projects.deploy_config(
    project_id="project_id_example",
    config_id="string_example",
    experiment_id="string_example",
    environments=[
        {
            "id": "id_example",
        }
    ],
)

⚙️ Parameters

project_id: str
config_id: str

Model config unique identifier generated by Humanloop.

experiment_id: str

String ID of experiment. Starts with exp_.

environments: List[EnvironmentRequest]

List of environments to associate with the model config.

⚙️ Request Body

EnvironmentProjectConfigRequest

🔄 Return

ProjectsDeployConfigToEnvironmentsResponse

🌐 Endpoint

/projects/{project_id}/deploy-config patch

🔙 Back to Table of Contents


humanloop.projects.export

Export all logged datapoints associated to your project.

Results are paginated and sorted by created_at in descending order.

🛠️ Usage

export_response = humanloop.projects.export(
    id="id_example",
    page=0,
    size=10,
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

page: int

Page offset for pagination.

size: int

Page size for pagination. Number of logs to export.

🔄 Return

PaginatedDataLogResponse

🌐 Endpoint

/projects/{id}/export post

🔙 Back to Table of Contents
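
Because results are paginated, exporting everything means walking the pages until an empty one comes back. The records attribute read from the response below is an assumption about PaginatedDataLogResponse; the project ID is a placeholder.

# Hedged pagination sketch over projects.export.
page = 0
while True:
    export_response = humanloop.projects.export(
        id="pr_...",
        page=page,
        size=200,
    )
    records = export_response.records  # assumed attribute
    if not records:
        break
    for datapoint in records:
        print(datapoint)
    page += 1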


humanloop.projects.get

Get a specific project.

🛠️ Usage

get_response = humanloop.projects.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

🔄 Return

ProjectResponse

🌐 Endpoint

/projects/{id} get

🔙 Back to Table of Contents


humanloop.projects.get_active_config

Retrieves a config to use to execute your model.

A config will be selected based on the project's active config/experiment settings.

🛠️ Usage

get_active_config_response = humanloop.projects.get_active_config(
    id="id_example",
    environment="string_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

environment: str

Name for the environment. E.g. 'production'. If not provided, will return the active config for the default environment.

🔄 Return

GetModelConfigResponse

🌐 Endpoint

/projects/{id}/active-config get

🔙 Back to Table of Contents


humanloop.projects.list

Get a paginated list of files.

🛠️ Usage

list_response = humanloop.projects.list(
    page=0,
    size=10,
    filter="string_example",
    user_filter="string_example",
    sort_by="created_at",
    order="asc",
)

⚙️ Parameters

page: int

Page offset for pagination.

size: int

Page size for pagination. Number of projects to fetch.

filter: str

Case-insensitive filter for project name.

user_filter: str

Case-insensitive filter for users in the project. This filter matches against both email address and name of users.

sort_by: ProjectSortBy

Field to sort projects by.

order: SortOrder

Direction to sort by.

🔄 Return

PaginatedDataProjectResponse

🌐 Endpoint

/projects get

🔙 Back to Table of Contents


humanloop.projects.list_configs

Get an array of versions associated to your file.

🛠️ Usage

list_configs_response = humanloop.projects.list_configs(
    id="id_example",
    evaluation_aggregates=True,
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

evaluation_aggregates: bool

🔄 Return

ProjectsGetConfigsResponse

🌐 Endpoint

/projects/{id}/configs get

🔙 Back to Table of Contents


humanloop.projects.list_deployed_configs

Get an array of environments with the deployed configs associated to your project.

🛠️ Usage

list_deployed_configs_response = humanloop.projects.list_deployed_configs(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

🔄 Return

ProjectsGetDeployedConfigsResponse

🌐 Endpoint

/projects/{id}/deployed-configs get

🔙 Back to Table of Contents


humanloop.projects.update

Update a specific project.

Set the project's active model config/experiment by passing either active_experiment_id or active_config_id. These will be set for the default environment unless a list of environments is also passed in, specifying which environments the active config or experiment should be assigned to.

Set the feedback labels to be treated as positive user feedback used in calculating top-level project metrics by passing a list of labels in positive_labels.

🛠️ Usage

update_response = humanloop.projects.update(
    id="id_example",
    name="string_example",
    active_experiment_id="string_example",
    active_config_id="string_example",
    positive_labels=[
        {
            "type": "type_example",
            "value": "value_example",
        }
    ],
    directory_id="string_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

name: str

The new unique project name. Caution: if you are using the project name as the unique identifier in your API calls, changing the name will break those calls.

active_experiment_id: str

ID for an experiment to set as the project's active deployment. Starts with 'exp_'. At most one of 'active_experiment_id' and 'active_config_id' can be set.

active_config_id: str

ID for a config to set as the project's active deployment. Starts with 'config_'. At most one of 'active_experiment_id' and 'active_config_id' can be set.

positive_labels: List[PositiveLabel]

The full list of labels to treat as positive user feedback.

directory_id: str

ID of directory to assign project to. Starts with dir_.

⚙️ Request Body

UpdateProjectRequest

🔄 Return

ProjectResponse

🌐 Endpoint

/projects/{id} patch

🔙 Back to Table of Contents


humanloop.projects.update_feedback_types

Update feedback types.

Allows enabling the available feedback types and setting status of feedback types/categorical values.

This behaves like an upsert; any feedback categorical values that do not already exist in the project will be created.

🛠️ Usage

update_feedback_types_response = humanloop.projects.update_feedback_types(
    body=[
        {
            "type": "type_example",
            "_class": "select",
        }
    ],
    id="id_example",
)

⚙️ Parameters

id: str

String ID of project. Starts with pr_.

requestBody: ProjectsUpdateFeedbackTypesRequest

🔄 Return

FeedbackTypes

🌐 Endpoint

/projects/{id}/feedback-types patch

🔙 Back to Table of Contents


humanloop.sessions.create

Create a new session.

Returns a session ID that can be used to log datapoints to the session.

🛠️ Usage

create_response = humanloop.sessions.create()

🔄 Return

CreateSessionResponse

🌐 Endpoint

/sessions post

🔙 Back to Table of Contents
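
The returned session ID can then be passed as session_id when logging datapoints. The id attribute read from CreateSessionResponse below is an assumption; check the response model for the exact field name.

# Create a session, then attach a subsequent log to it.
create_response = humanloop.sessions.create()
session_id = create_response.id  # assumed attribute on CreateSessionResponse

humanloop.log(
    project="sdk-example",
    session_id=session_id,
    inputs={"question": "What is asynchronous programming?"},
    output="Asynchronous programming lets work continue while waiting on I/O.",
)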


humanloop.sessions.get

Get a session by ID.

🛠️ Usage

get_response = humanloop.sessions.get(
    id="id_example",
)

⚙️ Parameters

id: str

String ID of session to return. Starts with sesh_.

🔄 Return

SessionResponse

🌐 Endpoint

/sessions/{id} get

🔙 Back to Table of Contents


humanloop.sessions.list

Get a page of sessions.

🛠️ Usage

list_response = humanloop.sessions.list(
    project_id="project_id_example",
    page=1,
    size=10,
)

⚙️ Parameters

project_id: str

String ID of project to return sessions for. Sessions that contain any datapoints associated to this project will be returned. Starts with pr_.

page: int

Page to fetch. Starts from 1.

size: int

Number of sessions to retrieve.

🔄 Return

PaginatedDataSessionResponse

🌐 Endpoint

/sessions get

🔙 Back to Table of Contents


Author

This Python package is automatically generated by Konfig
