Client for Humanloop API
[!WARNING] This SDK has breaking changes in versions >= 0.6.0. All methods now return Pydantic models.
Before (< 0.6.0)
Previously, you had to use the [] syntax to access response values. This required a little more code for every property access.
chat_response = humanloop.chat(
    # parameters
)
print(chat_response.body["project_id"])
After (>= 0.6.0)
With Pydantic-based response values, you can use the . syntax to access values. This is slightly less verbose and looks more Pythonic.
chat_response = humanloop.chat(
    # parameters
)
print(chat_response.project_id)
To reuse existing implementations from < 0.6.0, use the .raw namespace as specified in the Raw HTTP Response section.
Table of Contents
- Requirements
- Installation
- Getting Started
- Async
- Raw HTTP Response
- Streaming
- Reference
humanloop.chat
humanloop.chat_deployed
humanloop.chat_experiment
humanloop.chat_model_config
humanloop.complete
humanloop.complete_deployed
humanloop.complete_experiment
humanloop.complete_model_configuration
humanloop.datapoints.delete
humanloop.datapoints.get
humanloop.datapoints.update
humanloop.datasets.create
humanloop.datasets.create_datapoint
humanloop.datasets.delete
humanloop.datasets.get
humanloop.datasets.list_all_for_project
humanloop.datasets.list_datapoints
humanloop.datasets.update
humanloop.evaluations.add_evaluators
humanloop.evaluations.create
humanloop.evaluations.get
humanloop.evaluations.list
humanloop.evaluations.list_all_for_project
humanloop.evaluations.list_datapoints
humanloop.evaluations.log
humanloop.evaluations.result
humanloop.evaluations.update_status
humanloop.evaluators.create
humanloop.evaluators.delete
humanloop.evaluators.get
humanloop.evaluators.list
humanloop.evaluators.update
humanloop.experiments.create
humanloop.experiments.delete
humanloop.experiments.list
humanloop.experiments.sample
humanloop.experiments.update
humanloop.feedback
humanloop.finetunes.create
humanloop.finetunes.list_all_for_project
humanloop.finetunes.summary
humanloop.finetunes.update
humanloop.logs.delete
humanloop.logs.get
humanloop.logs.list
humanloop.log
humanloop.logs.update
humanloop.logs.update_by_ref
humanloop.model_configs.deserialize
humanloop.model_configs.export
humanloop.model_configs.get
humanloop.model_configs.register
humanloop.model_configs.serialize
humanloop.projects.create
humanloop.projects.create_feedback_type
humanloop.projects.deactivate_config
humanloop.projects.deactivate_experiment
humanloop.projects.delete
humanloop.projects.delete_deployed_config
humanloop.projects.deploy_config
humanloop.projects.export
humanloop.projects.get
humanloop.projects.get_active_config
humanloop.projects.list
humanloop.projects.list_configs
humanloop.projects.list_deployed_configs
humanloop.projects.update
humanloop.projects.update_feedback_types
humanloop.sessions.create
humanloop.sessions.get
humanloop.sessions.list
Requirements
Python >=3.7
Installation
pip install humanloop==0.7.0-beta.10
Getting Started
from pprint import pprint
from humanloop import Humanloop, ApiException
humanloop = Humanloop(
api_key="YOUR_API_KEY",
openai_api_key="YOUR_OPENAI_API_KEY",
anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)
try:
# Chat
chat_response = humanloop.chat(
project="sdk-example",
messages=[
{
"role": "user",
"content": "Explain asynchronous programming.",
}
],
model_config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"chat_template": [
{
"role": "system",
"content": "You are a helpful assistant who replies in the style of {{persona}}.",
},
],
},
inputs={
"persona": "the pirate Blackbeard",
},
stream=False,
)
print(chat_response)
except ApiException as e:
print("Exception when calling .chat: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
try:
# Complete
complete_response = humanloop.complete(
project="sdk-example",
inputs={
"text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
},
model_config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
},
stream=False,
)
print(complete_response)
except ApiException as e:
print("Exception when calling .complete: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
try:
# Feedback
feedback_response = humanloop.feedback(
type="rating",
value="good",
data_id="data_[...]",
user="user@example.com",
)
print(feedback_response)
except ApiException as e:
print("Exception when calling .feedback: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
try:
# Log
log_response = humanloop.log(
project="sdk-example",
inputs={
"text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
},
output="Llamas can be friendly and curious if they are trained to be around people, but if they are treated too much like pets when they are young, they can become difficult to handle when they grow up. This means they might spit, kick, and wrestle with their necks.",
source="sdk",
config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
"type": "model",
},
)
print(log_response)
except ApiException as e:
print("Exception when calling .log: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
Async
Async support is available by prepending a to any method; for example, humanloop.complete becomes humanloop.acomplete.
import asyncio
from pprint import pprint
from humanloop import Humanloop, ApiException
humanloop = Humanloop(
api_key="YOUR_API_KEY",
openai_api_key="YOUR_OPENAI_API_KEY",
anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)
async def main():
try:
complete_response = await humanloop.acomplete(
project="sdk-example",
inputs={
"text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
},
model_config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
},
stream=False,
)
print(complete_response)
except ApiException as e:
print("Exception when calling .complete: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
asyncio.run(main())
Raw HTTP Response
To access raw HTTP response values, use the .raw namespace.
from pprint import pprint
from humanloop import Humanloop, ApiException
humanloop = Humanloop(
openai_api_key="OPENAI_API_KEY",
openai_azure_api_key="OPENAI_AZURE_API_KEY",
openai_azure_endpoint_api_key="OPENAI_AZURE_ENDPOINT_API_KEY",
ai21_api_key="AI21_API_KEY",
mock_api_key="MOCK_API_KEY",
anthropic_api_key="ANTHROPIC_API_KEY",
cohere_api_key="COHERE_API_KEY",
api_key="YOUR_API_KEY",
)
try:
# Chat
create_response = humanloop.chats.raw.create(
messages=[
{
"role": "string_example",
}
],
model_config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
},
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "json_object",
},
)
pprint(create_response.body)
pprint(create_response.body["data"])
pprint(create_response.body["provider_responses"])
pprint(create_response.body["project_id"])
pprint(create_response.body["num_samples"])
pprint(create_response.body["logprobs"])
pprint(create_response.body["suffix"])
pprint(create_response.body["user"])
pprint(create_response.body["usage"])
pprint(create_response.body["metadata"])
pprint(create_response.body["provider_request"])
pprint(create_response.body["session_id"])
pprint(create_response.body["tool_choice"])
pprint(create_response.headers)
pprint(create_response.status)
pprint(create_response.round_trip_time)
except ApiException as e:
print("Exception when calling ChatsApi.create: %s\n" % e)
pprint(e.body)
if e.status == 422:
pprint(e.body["detail"])
pprint(e.headers)
pprint(e.status)
pprint(e.reason)
pprint(e.round_trip_time)
Streaming
Streaming support is available by suffixing a chat or complete method with _stream.
import asyncio
from humanloop import Humanloop
humanloop = Humanloop(
api_key="YOUR_API_KEY",
openai_api_key="YOUR_OPENAI_API_KEY",
anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)
async def main():
response = await humanloop.chat_stream(
project="sdk-example",
messages=[
{
"role": "user",
"content": "Explain asynchronous programming.",
}
],
model_config={
"model": "gpt-3.5-turbo",
"max_tokens": -1,
"temperature": 0.7,
"chat_template": [
{
"role": "system",
"content": "You are a helpful assistant who replies in the style of {{persona}}.",
},
],
},
inputs={
"persona": "the pirate Blackbeard",
},
)
async for token in response.content:
print(token)
asyncio.run(main())
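The same pattern should apply to completions. Assuming complete_stream mirrors chat_stream (the method name is implied by the naming convention above but is not shown in the reference below), a hedged sketch might look like:
import asyncio
from humanloop import Humanloop
humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
)
async def main():
    # Assumed to behave like chat_stream: awaitable, yielding tokens via response.content
    response = await humanloop.complete_stream(
        project="sdk-example",
        inputs={
            "text": "Llamas are very friendly and curious animals.",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
    )
    async for token in response.content:
        print(token)
asyncio.run(main())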
Reference
humanloop.chat
Get a chat response by providing details of the model configuration in the request.
🛠️ Usage
create_response = humanloop.chat(
messages=[
{
"role": "string_example",
}
],
model_config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
},
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "json_object",
},
)
⚙️ Parameters
messages: List[ChatMessageWithToolCall
]
The messages passed to the provider chat endpoint.
model_config: ModelConfigChatRequest
The model configuration used to create a chat response.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
tool_choice: Union[str
, str
, ToolChoice
]
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str
, Dict[str, str]
]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/chat
post
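To make the tool_choice options above concrete, here is a hedged sketch that forces the model to call a named function. The tools entry inside model_config and the get_weather function are illustrative assumptions; the exact tool schema expected by your provider is not documented in this reference.
create_response = humanloop.chat(
    project="sdk-example",
    messages=[
        {
            "role": "user",
            "content": "What is the weather in London?",
        }
    ],
    model_config={
        "model": "gpt-3.5-turbo",
        "max_tokens": -1,
        "temperature": 0.7,
        # Hypothetical tool definition; adjust the schema to what your provider expects.
        "tools": [
            {
                "name": "get_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
    # "none" and "auto" are also accepted; this form forces the named function.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
    stream=False,
)
print(create_response)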
humanloop.chat_deployed
Get a chat response using the project's active deployment. The active deployment can be a specific model configuration or an experiment.
🛠️ Usage
create_deployed_response = humanloop.chat_deployed(
messages=[
{
"role": "string_example",
}
],
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "json_object",
},
environment="string_example",
)
⚙️ Parameters
messages: List[ChatMessageWithToolCall
]
The messages passed to the provider chat endpoint.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
tool_choice: Union[str
, str
, ToolChoice
]
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str
, Dict[str, str]
]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
environment: str
The environment name used to create a chat response. If not specified, the default environment will be used.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/chat-deployed
post
humanloop.chat_experiment
Get a chat response for a specific experiment.
🛠️ Usage
create_experiment_response = humanloop.chat_experiment(
messages=[
{
"role": "string_example",
}
],
experiment_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "json_object",
},
)
⚙️ Parameters
messages: List[ChatMessageWithToolCall
]
The messages passed to the provider chat endpoint.
experiment_id: str
If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of chat responses, where each chat response will use a model configuration sampled from the experiment.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
tool_choice: Union[str
, str
, ToolChoice
]
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str
, Dict[str, str]
]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/chat-experiment
post
humanloop.chat_model_config
Get chat response for a specific model configuration.
🛠️ Usage
create_model_config_response = humanloop.chat_model_config(
messages=[
{
"role": "string_example",
}
],
model_config_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
tool_choice="string_example",
tool_call="string_example",
response_format={
"type": "json_object",
},
)
⚙️ Parameters
messages: List[ChatMessageWithToolCall
]
The messages passed to the provider chat endpoint.
model_config_id: str
Identifies the model configuration used to create a chat response.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
tool_choice: Union[str
, str
, ToolChoice
]
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
tool_call: Union[str
, Dict[str, str]
]
NB: Deprecated with new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; the default when no tools are provided as part of the model config. 'auto' the model can decide to call one of the provided tools; the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to use the provided tool of the same name.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/chat-model-config
post
humanloop.complete
Create a completion by providing details of the model configuration in the request.
🛠️ Usage
create_response = humanloop.complete(
model_config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"prompt_template": "{{question}}",
},
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
)
⚙️ Parameters
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
logprobs: int
Include the log probabilities of the top n tokens in the provider_response
suffix: str
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/completion
post
humanloop.complete_deployed
Create a completion using the project's active deployment. The active deployment can be a specific model configuration or an experiment.
🛠️ Usage
create_deployed_response = humanloop.complete_deployed(
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
environment="string_example",
)
⚙️ Parameters
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
logprobs: int
Include the log probabilities of the top n tokens in the provider_response
suffix: str
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
environment: str
The environment name used to create a completion. If not specified, the default environment will be used.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/completion-deployed
post
humanloop.complete_experiment
Create a completion for a specific experiment.
🛠️ Usage
create_experiment_response = humanloop.complete_experiment(
experiment_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
)
⚙️ Parameters
experiment_id: str
If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of completions, where each completion will use a model configuration sampled from the experiment.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
logprobs: int
Include the log probabilities of the top n tokens in the provider_response
suffix: str
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/completion-experiment
post
humanloop.complete_model_configuration
Create a completion for a specific model configuration.
🛠️ Usage
create_model_config_response = humanloop.complete_model_configuration(
model_config_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
provider_api_keys={},
num_samples=1,
stream=False,
user="string_example",
seed=1,
return_inputs=True,
logprobs=1,
suffix="string_example",
)
⚙️ Parameters
model_config_id: str
Identifies the model configuration used to create a completion.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
user: str
End-user ID passed through to provider call.
seed: int
Deprecated field: the seed is instead set as part of the request.config object.
return_inputs: bool
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
logprobs: int
Include the log probabilities of the top n tokens in the provider_response
suffix: str
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/completion-model-config
post
humanloop.datapoints.delete
Delete a list of datapoints by their IDs.
🛠️ Usage
humanloop.datapoints.delete(
body=["datapoints_delete_request_example"],
)
⚙️ Request Body
🌐 Endpoint
/datapoints
delete
humanloop.datapoints.get
Get a datapoint by ID.
🛠️ Usage
get_response = humanloop.datapoints.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of datapoint. Starts with evtc_
.
🔄 Return
🌐 Endpoint
/datapoints/{id}
get
humanloop.datapoints.update
Edit the input, messages and criteria fields of a datapoint. The fields passed in the request are the ones edited. Passing null
as a value for a field will delete that field. In order to signify not changing a field, it should be omitted from the request body.
🛠️ Usage
update_response = humanloop.datapoints.update(
id="id_example",
inputs={
"key": "string_example",
},
messages=[
{
"role": "string_example",
}
],
target={
"key": "string_example",
},
)
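To illustrate the omit-versus-null behaviour described above, a minimal sketch of a partial update that only touches target (inputs and messages are omitted, so they are left unchanged):
update_response = humanloop.datapoints.update(
    id="id_example",
    # inputs and messages are omitted and therefore left unchanged;
    # per the note above, passing null (None) for a field would delete it instead
    target={
        "key": "string_example",
    },
)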
⚙️ Parameters
id: str
String ID of datapoint. Starts with evtc_
.
inputs: UpdateDatapointRequestInputs
messages: List[ChatMessageWithToolCall
]
The chat messages for this datapoint.
target: UpdateDatapointRequestTarget
⚙️ Request Body
🔄 Return
🌐 Endpoint
/datapoints/{id}
patch
humanloop.datasets.create
Create a new dataset for a project.
🛠️ Usage
create_response = humanloop.datasets.create(
description="string_example",
name="string_example",
project_id="project_id_example",
)
⚙️ Parameters
description: str
The description of the dataset.
name: str
The name of the dataset.
project_id: str
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/datasets
post
humanloop.datasets.create_datapoint
Create a new testcase for a testset.
🛠️ Usage
create_datapoint_response = humanloop.datasets.create_datapoint(
body={
"log_ids": ["log_ids_example"],
},
dataset_id="dataset_id_example",
log_ids=["string_example"],
inputs={
"key": "string_example",
},
messages=[
{
"role": "string_example",
}
],
target={
"key": "string_example",
},
)
⚙️ Parameters
dataset_id: str
String ID of dataset. Starts with evts_
.
requestBody: DatasetsCreateDatapointRequest
🔄 Return
DatasetsCreateDatapointResponse
🌐 Endpoint
/datasets/{dataset_id}/datapoints
post
humanloop.datasets.delete
Delete a dataset by ID.
🛠️ Usage
delete_response = humanloop.datasets.delete(
id="id_example",
)
⚙️ Parameters
id: str
String ID of dataset. Starts with evts_
.
🔄 Return
🌐 Endpoint
/datasets/{id}
delete
humanloop.datasets.get
Get a single dataset by ID.
🛠️ Usage
get_response = humanloop.datasets.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of dataset. Starts with evts_
.
🔄 Return
🌐 Endpoint
/datasets/{id}
get
humanloop.datasets.list_all_for_project
Get all datasets for a project.
🛠️ Usage
list_all_for_project_response = humanloop.datasets.list_all_for_project(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
🔄 Return
DatasetsListAllForProjectResponse
🌐 Endpoint
/projects/{project_id}/datasets
get
humanloop.datasets.list_datapoints
Get datapoints for a dataset.
🛠️ Usage
list_datapoints_response = humanloop.datasets.list_datapoints(
dataset_id="dataset_id_example",
page=0,
size=50,
)
⚙️ Parameters
dataset_id: str
String ID of dataset. Starts with evts_
.
page: int
size: int
🔄 Return
PaginatedDataDatapointResponse
🌐 Endpoint
/datasets/{dataset_id}/datapoints
get
humanloop.datasets.update
Update a testset by ID.
🛠️ Usage
update_response = humanloop.datasets.update(
id="id_example",
description="string_example",
name="string_example",
)
⚙️ Parameters
id: str
String ID of testset. Starts with evts_
.
description: str
The description of the dataset.
name: str
The name of the dataset.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/datasets/{id}
patch
humanloop.evaluations.add_evaluators
Add evaluators to an existing evaluation run.
🛠️ Usage
add_evaluators_response = humanloop.evaluations.add_evaluators(
evaluator_ids=["string_example"],
id="id_example",
)
⚙️ Parameters
evaluator_ids: AddEvaluatorsRequestEvaluatorIds
id: str
String ID of evaluation run. Starts with ev_
.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluations/{id}/evaluators
patch
humanloop.evaluations.create
Create an evaluation.
🛠️ Usage
create_response = humanloop.evaluations.create(
config_id="string_example",
evaluator_ids=["string_example"],
dataset_id="string_example",
project_id="project_id_example",
provider_api_keys={},
max_concurrency=5,
hl_generated=True,
)
⚙️ Parameters
config_id: str
ID of the config to evaluate. Starts with config_
.
evaluator_ids: CreateEvaluationRequestEvaluatorIds
dataset_id: str
ID of the dataset to use in this evaluation. Starts with evts_
.
project_id: str
String ID of project. Starts with pr_
.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization. Ensure you provide an API key for the provider for the model config you are evaluating, or have one saved to your organization.
max_concurrency: int
The maximum number of concurrent generations to run. A higher value will result in faster completion of the evaluation but may place higher load on your provider rate-limits.
hl_generated: bool
Whether the log generations for this evaluation should be performed by Humanloop. If False
, the log generations should be submitted by the user via the API.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/evaluations
post
humanloop.evaluations.get
Get evaluation by ID.
🛠️ Usage
get_response = humanloop.evaluations.get(
id="id_example",
evaluator_aggregates=True,
)
⚙️ Parameters
id: str
String ID of evaluation run. Starts with ev_
.
evaluator_aggregates: bool
Whether to include evaluator aggregates in the response.
🔄 Return
🌐 Endpoint
/evaluations/{id}
get
humanloop.evaluations.list
Get the evaluations associated with a project. Sorting and filtering are supported through query params for categorical columns and the created_at
timestamp. Sorting is supported for the dataset
, config
, status
and evaluator-{evaluator_id}
columns. Specify sorting with the sort
query param, with values {column}.{ordering}
. E.g. ?sort=dataset.asc&sort=status.desc will yield a multi-column sort: first by dataset, then by status. Filtering is supported for the id
, dataset
, config
and status
columns. Specify filtering with the id_filter
, dataset_filter
, config_filter
and status_filter
query params. E.g. ?dataset_filter=my_dataset&dataset_filter=my_other_dataset&status_filter=running will only show rows where the dataset is "my_dataset" or "my_other_dataset", and where the status is "running". An additional date range filter is supported for the created_at
column. Use the start_date
and end_date
query parameters to configure this.
🛠️ Usage
list_response = humanloop.evaluations.list(
project_id="project_id_example",
id=["string_example"],
start_date="1970-01-01",
end_date="1970-01-01",
size=50,
page=0,
)
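The generated usage snippet above does not show the sorting and filtering query params described in the endpoint notes. Assuming they are exposed as keyword arguments of the same names (sort, dataset_filter, status_filter, and so on), which is an assumption rather than a documented signature, a filtered and sorted listing might look like:
list_response = humanloop.evaluations.list(
    project_id="project_id_example",
    # Assumed keyword arguments mirroring the documented query params
    dataset_filter=["my_dataset", "my_other_dataset"],
    status_filter=["running"],
    sort=["dataset.asc", "status.desc"],
    size=50,
    page=0,
)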
⚙️ Parameters
project_id: str
String ID of project. Starts with pr_
.
id: List[str
]
A list of evaluation run ids to filter on. Starts with ev_
.
start_date: date
Only return evaluations created after this date.
end_date: date
Only return evaluations created before this date.
size: int
page: int
🔄 Return
PaginatedDataEvaluationResponse
🌐 Endpoint
/evaluations
get
humanloop.evaluations.list_all_for_project
Get all the evaluations associated with your project. Deprecated: This is a legacy unpaginated endpoint. Use /evaluations
instead, with appropriate sorting, filtering and pagination options.
🛠️ Usage
list_all_for_project_response = humanloop.evaluations.list_all_for_project(
project_id="project_id_example",
evaluator_aggregates=True,
)
⚙️ Parameters
project_id: str
String ID of project. Starts with pr_
.
evaluator_aggregates: bool
Whether to include evaluator aggregates in the response.
🔄 Return
EvaluationsGetForProjectResponse
🌐 Endpoint
/projects/{project_id}/evaluations
get
humanloop.evaluations.list_datapoints
Get testcases by evaluation ID.
🛠️ Usage
list_datapoints_response = humanloop.evaluations.list_datapoints(
id="id_example",
page=1,
size=10,
)
⚙️ Parameters
id: str
String ID of evaluation. Starts with ev_
.
page: int
Page to fetch. Starts from 1.
size: int
Number of evaluation results to retrieve.
🔄 Return
PaginatedDataEvaluationDatapointSnapshotResponse
🌐 Endpoint
/evaluations/{id}/datapoints
get
humanloop.evaluations.log
Log an external generation to an evaluation run for a datapoint. The run must have status 'running'.
🛠️ Usage
log_response = humanloop.evaluations.log(
datapoint_id="string_example",
log={
"save": True,
},
evaluation_id="evaluation_id_example",
)
⚙️ Parameters
datapoint_id: str
The datapoint for which a log was generated. Must be one of the datapoints in the dataset being evaluated.
log: LogRequest
The log generated for the datapoint.
evaluation_id: str
ID of the evaluation run. Starts with evrun_
.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluations/{evaluation_id}/log
post
humanloop.evaluations.result
Log an evaluation result to an evaluation run. The run must have status 'running'. One of result
or error
must be provided.
🛠️ Usage
result_response = humanloop.evaluations.result(
log_id="string_example",
evaluator_id="string_example",
evaluation_id="evaluation_id_example",
result=True,
error="string_example",
)
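Since only one of result or error should be provided for a given log, a sketch of reporting an evaluator failure passes error and omits result:
result_response = humanloop.evaluations.result(
    log_id="string_example",
    evaluator_id="string_example",
    evaluation_id="evaluation_id_example",
    # result is omitted; error records why no result value could be produced
    error="Evaluator raised an exception while scoring this log",
)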
⚙️ Parameters
log_id: str
The log that was evaluated. Must have as its source_datapoint_id
one of the datapoints in the dataset being evaluated.
evaluator_id: str
ID of the evaluator that evaluated the log. Starts with evfn_
. Must be one of the evaluator IDs associated with the evaluation run being logged to.
evaluation_id: str
ID of the evaluation run. Starts with evrun_
.
result: Union[bool
, int
, Union[int, float]
]
The result value of the evaluation.
error: str
An error that occurred during evaluation.
⚙️ Request Body
CreateEvaluationResultLogRequest
🔄 Return
🌐 Endpoint
/evaluations/{evaluation_id}/result
post
humanloop.evaluations.update_status
Update the status of an evaluation run. Can only be used to update the status of an evaluation run that uses external or human evaluators. The evaluation must currently have status 'running' if switching to 'completed', or it must have status 'completed' if switching back to 'running'.
🛠️ Usage
update_status_response = humanloop.evaluations.update_status(
status="string_example",
id="id_example",
)
⚙️ Parameters
status: EvaluationStatus
The new status of the evaluation.
id: str
String ID of evaluation run. Starts with ev_
.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluations/{id}/status
patch
humanloop.evaluators.create
Create an evaluator within your organization.
🛠️ Usage
create_response = humanloop.evaluators.create(
description="string_example",
name="a",
arguments_type="string_example",
return_type="string_example",
type="string_example",
code="string_example",
model_config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"prompt_template": "{{question}}",
},
)
⚙️ Parameters
description: str
The description of the evaluator.
name: str
The name of the evaluator.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
type: EvaluatorType
The type of the evaluator.
code: str
The code for the evaluator. This code will be executed in a sandboxed environment.
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluators
post
humanloop.evaluators.delete
Delete an evaluator within your organization.
🛠️ Usage
humanloop.evaluators.delete(
id="id_example",
)
⚙️ Parameters
id: str
🌐 Endpoint
/evaluators/{id}
delete
humanloop.evaluators.get
Get an evaluator within your organization.
🛠️ Usage
get_response = humanloop.evaluators.get(
id="id_example",
)
⚙️ Parameters
id: str
🔄 Return
🌐 Endpoint
/evaluators/{id}
get
humanloop.evaluators.list
Get all evaluators within your organization.
🛠️ Usage
list_response = humanloop.evaluators.list()
🔄 Return
🌐 Endpoint
/evaluators
get
humanloop.evaluators.update
Update an evaluator within your organization.
🛠️ Usage
update_response = humanloop.evaluators.update(
id="id_example",
description="string_example",
name="string_example",
arguments_type="string_example",
return_type="string_example",
code="string_example",
model_config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"prompt_template": "{{question}}",
},
)
⚙️ Parameters
id: str
description: str
The description of the evaluator.
name: str
The name of the evaluator.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
code: str
The code for the evaluator. This code will be executed in a sandboxed environment.
model_config: ModelConfigCompletionRequest
The model configuration used to generate.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluators/{id}
patch
humanloop.experiments.create
Create an experiment for your project. You can optionally specify IDs of your project's model configs to include in the experiment, along with a set of labels to consider as positive feedback and whether the experiment should be set as active.
🛠️ Usage
create_response = humanloop.experiments.create(
name="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
project_id="project_id_example",
config_ids=["string_example"],
set_active=False,
)
⚙️ Parameters
name: str
Name of experiment.
positive_labels: List[PositiveLabel
]
Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.
project_id: str
String ID of project. Starts with pr_
.
config_ids: CreateExperimentRequestConfigIds
set_active: bool
Whether to set the created experiment as the project's active experiment.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/experiments
post
humanloop.experiments.delete
Delete the experiment with the specified ID.
🛠️ Usage
humanloop.experiments.delete(
experiment_id="experiment_id_example",
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_
.
🌐 Endpoint
/experiments/{experiment_id}
delete
humanloop.experiments.list
Get an array of experiments associated to your project.
🛠️ Usage
list_response = humanloop.experiments.list(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
String ID of project. Starts with pr_
.
🔄 Return
🌐 Endpoint
/projects/{project_id}/experiments
get
humanloop.experiments.sample
Samples a model config from the experiment's active model configs.
🛠️ Usage
sample_response = humanloop.experiments.sample(
experiment_id="experiment_id_example",
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_
.
🔄 Return
🌐 Endpoint
/experiments/{experiment_id}/model-config
get
humanloop.experiments.update
Update your experiment, including registering and de-registering model configs.
🛠️ Usage
update_response = humanloop.experiments.update(
experiment_id="experiment_id_example",
name="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
config_ids_to_register=["string_example"],
config_ids_to_deregister=["string_example"],
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_.
name: str
Name of experiment.
positive_labels: List[PositiveLabel]
Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.
config_ids_to_register: UpdateExperimentRequestConfigIdsToRegister
config_ids_to_deregister: UpdateExperimentRequestConfigIdsToDeregister
⚙️ Request Body
🔄 Return
🌐 Endpoint
/experiments/{experiment_id}
patch
humanloop.feedback
Submit an array of feedback for existing data_ids.
🛠️ Usage
feedback_response = humanloop.feedback(
body=[
{
"type": "string_example",
}
],
type="string_example",
value="string_example",
data_id="string_example",
user="string_example",
created_at="1970-01-01T00:00:00.00Z",
unset=True,
)
⚙️ Parameters
type: Union[FeedbackType, str]
The type of feedback. The default feedback types available are 'rating', 'action', 'issue', 'correction', and 'comment'.
value: str
The feedback value to be set. This field should be left blank when unsetting 'rating', 'correction' or 'comment', but is required otherwise.
data_id: str
ID to associate the feedback to a previously logged datapoint.
user: str
A unique identifier for the user who provided the feedback.
created_at: datetime
User defined timestamp for when the feedback was created.
unset: bool
If true, the value for this feedback is unset.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/feedback
post
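As a hedged, more realistic sketch for humanloop.feedback: feedback is typically attached to a datapoint returned from a previous log or chat call. The data_ ID below is a placeholder.
# Record a rating against a previously logged datapoint. Use the ID
# returned by humanloop.log or humanloop.chat as data_id.
humanloop.feedback(
    type="rating",
    value="good",
    data_id="data_1234567890",
    user="user@example.com",
)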
humanloop.finetunes.create
Trigger the fine-tuning process for a specific base model and data snapshot.
🛠️ Usage
create_response = humanloop.finetunes.create(
name="string_example",
dataset_id="string_example",
config={
"base_model": "base_model_example",
},
project_id="project_id_example",
metadata={},
provider_api_keys={},
)
⚙️ Parameters
name: str
User-defined friendly name for a fine-tuning run.
dataset_id: str
ID of the dataset used for fine-tuning.
config: FinetuneConfig
Configuration and hyper-parameters for the fine-tuning process.
project_id: str
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata that you would like to log for reference.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/finetunes
post
humanloop.finetunes.list_all_for_project
Get a list of all fine-tuned models associated to a given project.
🛠️ Usage
list_all_for_project_response = humanloop.finetunes.list_all_for_project(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
🔄 Return
FinetunesListAllForProjectResponse
🌐 Endpoint
/projects/{project_id}/finetunes
get
humanloop.finetunes.summary
Checks data for errors and generates finetune data summary. Does not actually trigger the finetuning process or persist the data.
🛠️ Usage
summary_response = humanloop.finetunes.summary(
name="string_example",
dataset_id="string_example",
config={
"base_model": "base_model_example",
},
project_id="project_id_example",
metadata={},
provider_api_keys={},
)
⚙️ Parameters
name: str
User-defined friendly name for a fine-tuning run.
dataset_id: str
ID of the dataset used for fine-tuning.
config: FinetuneConfig
Configuration and hyper-parameters for the fine-tuning process.
project_id: str
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata that you would like to log for reference.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/finetunes/summary
post
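A short sketch of using humanloop.finetunes.summary as a dry run before committing to a run; all IDs and the base model below are placeholders.
# Check the dataset for errors and inspect the summary before triggering
# an actual fine-tuning run with the same arguments.
summary_response = humanloop.finetunes.summary(
    name="support-bot-ft-v1",
    dataset_id="dataset_id_example",
    config={"base_model": "base_model_example"},
    project_id="project_id_example",
)
print(summary_response)
# If the summary looks correct, trigger the run:
# humanloop.finetunes.create(name=..., dataset_id=..., config=..., project_id=...)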
humanloop.finetunes.update
Update user-specified attributes of the specified finetuned models.
🛠️ Usage
update_response = humanloop.finetunes.update(
id="id_example",
project_id="project_id_example",
name="string_example",
)
⚙️ Parameters
id: str
project_id: str
name: str
⚙️ Request Body
🔄 Return
🌐 Endpoint
/finetunes/{id}
patch
humanloop.logs.delete
Delete one or more logs by ID.
🛠️ Usage
humanloop.logs.delete(
id=["string_example"],
)
⚙️ Parameters
id: List[str]
🌐 Endpoint
/logs
delete
humanloop.logs.get
Retrieve a log by log id.
🛠️ Usage
get_response = humanloop.logs.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of log to return. Starts with data_.
🔄 Return
🌐 Endpoint
/logs/{id}
get
humanloop.logs.list
Retrieve paginated logs from the server. Sorting and filtering are supported through query params. Sorting is supported for the source, model, timestamp, and feedback-{output_name} columns. Specify sorting with the sort query param, with values {column}.{ordering}; e.g. ?sort=source.asc&sort=model.desc yields a multi-column sort, first by source and then by model. Filtering is supported for the source, model, feedback-{output_name}, and evaluator-{evaluator_external_id} columns. Specify filtering with the source_filter, model_filter, feedback-{output_name}_filter and evaluator-{evaluator_external_id}_filter query params; e.g. ?source_filter=AI&source_filter=user_1234&feedback-explicit_filter=good will only show rows where the source is "AI" or "user_1234", and where the latest feedback for the "explicit" output group is "good". An additional date range filter is supported for the Timestamp column (i.e. Log.created_at) through the start_date and end_date query parameters. Searching is supported for the model inputs and output. Specify a search term with the search query param; e.g. ?search=hello%20there performs a case-insensitive search across model inputs and output.
🛠️ Usage
list_response = humanloop.logs.list(
project_id="project_id_example",
search="string_example",
metadata_search="string_example",
start_date="1970-01-01",
end_date="1970-01-01",
size=50,
page=0,
)
⚙️ Parameters
project_id: str
search: str
metadata_search: str
start_date: date
end_date: date
size: int
page: int
🔄 Return
🌐 Endpoint
/logs
get
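A sketch combining the documented search and date-range parameters of humanloop.logs.list. The project ID is a placeholder; date objects are used here to match the documented date type, while the usage above shows ISO strings.
from datetime import date

# Fetch one page of logs for January 2024 that mention "asynchronous".
list_response = humanloop.logs.list(
    project_id="pr_1234567890",
    search="asynchronous",
    start_date=date(2024, 1, 1),
    end_date=date(2024, 1, 31),
    size=20,
    page=0,
)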
humanloop.log
Log a datapoint or array of datapoints to your Humanloop project.
🛠️ Usage
log_response = humanloop.log(
body=[
{
"save": True,
}
],
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
save=True,
source_datapoint_id="string_example",
reference_id="string_example",
trial_id="string_example",
messages=[
{
"role": "string_example",
}
],
output="string_example",
config_id="string_example",
config={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"type": "ModelConfigRequest",
},
environment="string_example",
feedback={
"type": "string_example",
"value": 3.14,
},
created_at="1970-01-01T00:00:00.00Z",
error="string_example",
duration=3.14,
output_message={
"role": "string_example",
},
prompt_tokens=1,
output_tokens=1,
provider_request={},
provider_response={},
)
⚙️ Parameters
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
save: bool
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id: str
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
reference_id: str
A unique string to reference the datapoint. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a subsequent log request.
trial_id: str
Unique ID of an experiment trial to associate to the log.
messages: List[ChatMessageWithToolCall]
The messages passed to the provider chat endpoint.
output: str
Generated output from your model for the provided inputs. Can be None if logging an error, or if logging a parent datapoint with the intention to populate it later.
config_id: str
Unique ID of a config to associate to the log.
config: Union[ModelConfigRequest, ToolConfigRequest]
The model config used for this generation. Required unless config_id or trial_id is provided.
environment: str
The environment name used to create the log.
feedback: Union[Feedback, List[Feedback]]
Optional parameter to provide feedback with your logged datapoint.
created_at: datetime
User defined timestamp for when the log was created.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
output_message: ChatMessageWithToolCall
The message returned by the provider.
prompt_tokens: int
Number of tokens in the prompt used to generate the output.
output_tokens: int
Number of tokens in the output generated by the model.
provider_request: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Raw request sent to provider.
provider_response: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Raw response received from the provider.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs
post
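A minimal, more realistic sketch of humanloop.log; all values are illustrative and the config is an example ModelConfigRequest body.
# Log a single generation against a project, together with the config that
# produced it.
log_response = humanloop.log(
    project="sdk-example",
    inputs={"persona": "the Dread Pirate Roberts"},
    output="As you wish.",
    source="sdk",
    config={
        "model": "gpt-3.5-turbo",
        "prompt_template": "Reply in the style of {{persona}}.",
        "temperature": 0.7,
        "type": "ModelConfigRequest",
    },
)
# Inspect the response for the new datapoint's ID to use with humanloop.feedback.
print(log_response)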
humanloop.logs.update
Update a logged datapoint in your Humanloop project.
🛠️ Usage
update_response = humanloop.logs.update(
id="id_example",
output="string_example",
error="string_example",
duration=3.14,
)
⚙️ Parameters
id: str
String ID of the logged datapoint to update. Starts with data_.
output: str
Generated output from your model for the provided inputs.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs/{id}
patch
humanloop.logs.update_by_ref
Update a logged datapoint by its reference ID. The reference_id query parameter must be provided, and refers to the reference_id of a previously-logged datapoint.
🛠️ Usage
update_by_ref_response = humanloop.logs.update_by_ref(
reference_id="reference_id_example",
output="string_example",
error="string_example",
duration=3.14,
)
⚙️ Parameters
reference_id: str
A unique string to reference the datapoint. Identifies the logged datapoint created with the same reference_id.
output: str
Generated output from your model for the provided inputs.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs
patch
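A sketch of the intended reference-ID workflow for humanloop.logs.update_by_ref (values are illustrative): log a datapoint under your own ID first, then fill in the output once generation has finished.
# 1. Log the datapoint up front, keyed by an ID from your own system.
humanloop.log(
    project="sdk-example",
    reference_id="my-system-id-42",
    inputs={"persona": "a pirate"},
)

# 2. Later, update that same datapoint by its reference ID.
humanloop.logs.update_by_ref(
    reference_id="my-system-id-42",
    output="Arr, here be the answer...",
    duration=1.3,
)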
humanloop.model_configs.deserialize
Deserialize a model config from a .prompt file format.
🛠️ Usage
deserialize_response = humanloop.model_configs.deserialize(
config="string_example",
)
⚙️ Parameters
config: str
⚙️ Request Body
🔄 Return
🌐 Endpoint
/model-configs/deserialize
post
humanloop.model_configs.export
Export a model config to a .prompt file by ID.
🛠️ Usage
export_response = humanloop.model_configs.export(
id="id_example",
)
⚙️ Parameters
id: str
String ID of the model config. Starts with config_.
🌐 Endpoint
/model-configs/{id}/export
post
humanloop.model_configs.get
Get a specific model config by ID.
🛠️ Usage
get_response = humanloop.model_configs.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of the model config. Starts with config_.
🔄 Return
🌐 Endpoint
/model-configs/{id}
get
humanloop.model_configs.register
Register a model config to a project and optionally add it to an experiment. If the project name provided does not exist, a new project will be created automatically. If an experiment name is provided, the specified experiment must already exist. Otherwise, an error will be raised. If the model config is the first to be associated to the project, it will be set as the active model config.
🛠️ Usage
register_response = humanloop.model_configs.register(
model="string_example",
description="string_example",
name="string_example",
provider="string_example",
max_tokens=-1,
temperature=1,
top_p=1,
stop="string_example",
presence_penalty=0,
frequency_penalty=0,
other={},
seed=1,
response_format={
"type": "json_object",
},
project="string_example",
project_id="string_example",
experiment="string_example",
prompt_template="string_example",
chat_template=[
{
"role": "string_example",
}
],
endpoint="string_example",
tools=[
{
"id": "id_example",
"source": "organization",
}
],
)
⚙️ Parameters
model: str
The model instance used. E.g. text-davinci-002.
description: str
A description of the model config.
name: str
A friendly display name for the model config. If not provided, a name will be generated.
provider: ModelProviders
The company providing the underlying model service.
max_tokens: int
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
temperature: Union[int, float]
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
top_p: Union[int, float]
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
stop: Union[str, List[str]]
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
presence_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
frequency_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
other: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Other parameter values to be passed to the provider call.
seed: int
If specified, the model will make a best effort to sample deterministically, but it is not guaranteed.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
project: str
Unique project name. If it does not exist, a new project will be created.
project_id: str
Unique project ID
experiment: str
If specified, the model config will be added to this experiment. Experiments are used for A/B testing and optimizing hyperparameters.
prompt_template: str
Prompt template that will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
chat_template: List[ChatMessageWithToolCall]
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
endpoint: ModelEndpoints
Which of the provider's model endpoints to use. For example Complete or Edit.
tools: ProjectModelConfigRequestTools
⚙️ Request Body
🔄 Return
🌐 Endpoint
/model-configs
post
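A hedged, more realistic sketch for humanloop.model_configs.register; values are illustrative, and the provider value is assumed to be the lowercase provider name.
# Register a reusable config against a project; "sdk-example" is created
# if it does not already exist.
register_response = humanloop.model_configs.register(
    project="sdk-example",
    model="gpt-3.5-turbo",
    provider="openai",  # assumed ModelProviders value
    name="pirate-persona-v1",
    prompt_template="Reply to {{question}} in the style of {{persona}}.",
    temperature=0.7,
    max_tokens=-1,
)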
humanloop.model_configs.serialize
Serialize a model config to a .prompt file format.
🛠️ Usage
serialize_response = humanloop.model_configs.serialize(
body={
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
},
description="string_example",
name="string_example",
provider="string_example",
model="string_example",
max_tokens=-1,
temperature=1,
top_p=1,
stop="string_example",
presence_penalty=0,
frequency_penalty=0,
other={},
seed=1,
response_format={
"type": "json_object",
},
endpoint="string_example",
chat_template=[
{
"role": "string_example",
}
],
tools=[
{
"id": "id_example",
"source": "organization",
}
],
prompt_template="{{question}}",
)
⚙️ Parameters
description: str
A description of the model config.
name: str
A friendly display name for the model config. If not provided, a name will be generated.
provider: ModelProviders
The company providing the underlying model service.
model: str
The model instance used. E.g. text-davinci-002.
max_tokens: int
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
temperature: Union[int, float]
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
top_p: Union[int, float]
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
stop: Union[str, List[str]]
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
presence_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
frequency_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
other: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Other parameter values to be passed to the provider call.
seed: int
If specified, the model will make a best effort to sample deterministically, but it is not guaranteed.
response_format: ResponseFormat
The format of the response. Only type json_object is currently supported for chat.
endpoint: ModelEndpoints
The provider model endpoint used.
chat_template: List[ChatMessageWithToolCall]
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. Input variables within the template should be specified with syntax: {{INPUT_NAME}}.
tools: ModelConfigChatRequestTools
prompt_template: str
Prompt template that will take your specified inputs to form your final request to the model. Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
⚙️ Request Body
🌐 Endpoint
/model-configs/serialize
post
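A short sketch pairing serialize with deserialize. The exact response shape isn't documented in this reference, so the serialized .prompt text is simply printed for inspection; the body values are illustrative.
# Serialize an in-memory config to the .prompt format, then inspect it.
serialize_response = humanloop.model_configs.serialize(
    body={
        "model": "gpt-3.5-turbo",
        "max_tokens": 256,
        "temperature": 0.7,
        "prompt_template": "Answer the question: {{question}}",
    },
)
print(serialize_response)
# The .prompt text can later be turned back into a config via
# humanloop.model_configs.deserialize(config=...).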
humanloop.projects.create
Create a new project.
🛠️ Usage
create_response = humanloop.projects.create(
name="string_example",
feedback_types=[
{
"type": "type_example",
}
],
directory_id="string_example",
)
⚙️ Parameters
name: str
Unique project name.
feedback_types: List[FeedbackTypeRequest]
Feedback types to be created.
directory_id: str
ID of directory to assign project to. Starts with dir_. If not provided, the project will be created in the root directory.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects
post
humanloop.projects.create_feedback_type
Create a feedback type for the project.
🛠️ Usage
create_feedback_type_response = humanloop.projects.create_feedback_type(
type="string_example",
id="id_example",
values=[
{
"value": "value_example",
"sentiment": "positive",
}
],
_class="string_example",
)
⚙️ Parameters
type: str
The type of feedback to update.
id: str
String ID of project. Starts with pr_.
values: List[FeedbackLabelRequest]
The feedback values to be available. This field should only be populated when updating a 'select' or 'multi_select' feedback class.
_class: FeedbackClass
The data type associated to this feedback type; whether it is a 'text'/'select'/'multi_select'. This is optional when updating the default feedback types (i.e. when type is 'rating', 'action' or 'issue').
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{id}/feedback-types
post
humanloop.projects.deactivate_config
Remove the project's active config, if set. This has no effect if the project does not have an active model config set.
🛠️ Usage
deactivate_config_response = humanloop.projects.deactivate_config(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
environment: str
Name for the environment. E.g. 'production'. If not provided, will delete the active config for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-config
delete
humanloop.projects.deactivate_experiment
Remove the project's active experiment, if set. This has no effect if the project does not have an active experiment set.
🛠️ Usage
deactivate_experiment_response = humanloop.projects.deactivate_experiment(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
environment: str
Name for the environment. E.g. 'production'. If not provided, will remove the active experiment for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-experiment
delete
humanloop.projects.delete
Delete a specific file.
🛠️ Usage
humanloop.projects.delete(
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
🌐 Endpoint
/projects/{id}
delete
humanloop.projects.delete_deployed_config
Remove the version deployed to the environment. This has no effect if the project does not have an active version set.
🛠️ Usage
delete_deployed_config_response = humanloop.projects.delete_deployed_config(
project_id="project_id_example",
environment_id="environment_id_example",
)
⚙️ Parameters
project_id: str
environment_id: str
🌐 Endpoint
/projects/{project_id}/deployed-config/{environment_id}
delete
humanloop.projects.deploy_config
Deploy a model config to an environment. If the environment already has a model config deployed, it will be replaced.
🛠️ Usage
deploy_config_response = humanloop.projects.deploy_config(
project_id="project_id_example",
config_id="string_example",
experiment_id="string_example",
environments=[
{
"id": "id_example",
}
],
)
⚙️ Parameters
project_id: str
config_id: str
Model config unique identifier generated by Humanloop.
experiment_id: str
String ID of experiment. Starts with exp_.
environments: List[EnvironmentRequest]
List of environments to associate with the model config.
⚙️ Request Body
EnvironmentProjectConfigRequest
🔄 Return
ProjectsDeployConfigToEnvironmentsResponse
🌐 Endpoint
/projects/{project_id}/deploy-config
patch
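A minimal sketch for humanloop.projects.deploy_config; all IDs are placeholders, and environments with their deployed configs can be discovered with humanloop.projects.list_deployed_configs.
# Deploy a specific model config to a single environment.
deploy_config_response = humanloop.projects.deploy_config(
    project_id="pr_1234567890",
    config_id="config_1234567890",
    environments=[{"id": "id_example"}],  # placeholder environment ID
)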
humanloop.projects.export
Export all logged datapoints associated to your project. Results are paginated and sorted by created_at in descending order.
🛠️ Usage
export_response = humanloop.projects.export(
id="id_example",
page=0,
size=10,
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
page: int
Page offset for pagination.
size: int
Page size for pagination. Number of logs to export.
🔄 Return
🌐 Endpoint
/projects/{id}/export
post
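A sketch of paging through an export with humanloop.projects.export. The project ID is a placeholder, and the response shape isn't documented here, so inspect it to know when the pages are exhausted.
# Export logged datapoints 100 at a time, newest first.
page = 0
export_response = humanloop.projects.export(
    id="pr_1234567890",
    page=page,
    size=100,
)
# Increment `page` and repeat the call until the response comes back empty.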
humanloop.projects.get
Get a specific project.
🛠️ Usage
get_response = humanloop.projects.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
🔄 Return
🌐 Endpoint
/projects/{id}
get
humanloop.projects.get_active_config
Retrieves a config to use to execute your model. A config will be selected based on the project's active config/experiment settings.
🛠️ Usage
get_active_config_response = humanloop.projects.get_active_config(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
environment: str
Name for the environment. E.g. 'production'. If not provided, will return the active config for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-config
get
humanloop.projects.list
Get a paginated list of files.
🛠️ Usage
list_response = humanloop.projects.list(
page=0,
size=10,
filter="string_example",
user_filter="string_example",
sort_by="string_example",
order="string_example",
)
⚙️ Parameters
page: int
Page offset for pagination.
size: int
Page size for pagination. Number of projects to fetch.
filter: str
Case-insensitive filter for project name.
user_filter: str
Case-insensitive filter for users in the project. This filter matches against both email address and name of users.
sort_by: ProjectSortBy
Field to sort projects by.
order: SortOrder
Direction to sort by.
🔄 Return
🌐 Endpoint
/projects
get
humanloop.projects.list_configs
Get an array of versions associated to your file.
🛠️ Usage
list_configs_response = humanloop.projects.list_configs(
id="id_example",
evaluation_aggregates=True,
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
evaluation_aggregates: bool
🔄 Return
🌐 Endpoint
/projects/{id}/configs
get
humanloop.projects.list_deployed_configs
Get an array of environments with the deployed configs associated to your project.
🛠️ Usage
list_deployed_configs_response = humanloop.projects.list_deployed_configs(
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
🔄 Return
ProjectsGetDeployedConfigsResponse
🌐 Endpoint
/projects/{id}/deployed-configs
get
humanloop.projects.update
Update a specific project. Set the project's active model config/experiment by passing either active_experiment_id or active_model_config_id. These will be set to the Default environment unless a list of environments is also passed in, specifically detailing which environments to assign the active config or experiment. Set the feedback labels to be treated as positive user feedback used in calculating top-level project metrics by passing a list of labels in positive_labels.
🛠️ Usage
update_response = humanloop.projects.update(
id="id_example",
name="string_example",
active_experiment_id="string_example",
active_config_id="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
directory_id="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
name: str
The new unique project name. Caution, if you are using the project name as the unique identifier in your API calls, changing the name will break the calls.
active_experiment_id: str
ID for an experiment to set as the project's active deployment. Starts with 'exp_'. At most one of 'active_experiment_id' and 'active_model_config_id' can be set.
active_config_id: str
ID for a config to set as the project's active deployment. Starts with 'config_'. At most one of 'active_experiment_id' and 'active_config_id' can be set.
positive_labels: List[PositiveLabel]
The full list of labels to treat as positive user feedback.
directory_id: str
ID of directory to assign project to. Starts with dir_.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{id}
patch
humanloop.projects.update_feedback_types
Update feedback types. Allows enabling the available feedback types and setting status of feedback types/categorical values. This behaves like an upsert; any feedback categorical values that do not already exist in the project will be created.
🛠️ Usage
update_feedback_types_response = humanloop.projects.update_feedback_types(
body=[
{
"type": "type_example",
}
],
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_.
requestBody: ProjectsUpdateFeedbackTypesRequest
🔄 Return
🌐 Endpoint
/projects/{id}/feedback-types
patch
humanloop.sessions.create
Create a new session. Returns a session ID that can be used to log datapoints to the session.
🛠️ Usage
create_response = humanloop.sessions.create()
🔄 Return
🌐 Endpoint
/sessions
post
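A sketch of the typical session flow: create the session, then pass its ID when logging so related datapoints are grouped. The attribute holding the new session's ID is assumed here.
# Create a session and attach subsequent logs to it.
create_response = humanloop.sessions.create()
session_id = create_response.id  # assumed attribute; session IDs start with sesh_

humanloop.log(
    project="sdk-example",
    session_id=session_id,
    inputs={"persona": "a pirate"},
    output="Arr!",
)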
humanloop.sessions.get
Get a session by ID.
🛠️ Usage
get_response = humanloop.sessions.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of session to return. Starts with sesh_.
🔄 Return
🌐 Endpoint
/sessions/{id}
get
humanloop.sessions.list
Get a page of sessions.
🛠️ Usage
list_response = humanloop.sessions.list(
project_id="project_id_example",
page=1,
size=10,
)
⚙️ Parameters
project_id: str
String ID of project to return sessions for. Sessions that contain any datapoints associated to this project will be returned. Starts with pr_.
page: int
Page to fetch. Starts from 1.
size: int
Number of sessions to retrieve.
🔄 Return
🌐 Endpoint
/sessions
get
Author
This Python package is automatically generated by Konfig.
File details
Details for the file humanloop-0.7.0b10.tar.gz.
File metadata
- Download URL: humanloop-0.7.0b10.tar.gz
- Upload date:
- Size: 335.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8bf150908d2aeb0bdf537b7919973d0388872a771e3ff075159b5201fe7e2f18
MD5 | 9c6f064e6816b3965774efee1b0f5f23
BLAKE2b-256 | ffa09b1c87bd7dfce0a6ed6e98a72960aa2a6c214f4058754e9028c5975ca604
File details
Details for the file humanloop-0.7.0b10-py3-none-any.whl.
File metadata
- Download URL: humanloop-0.7.0b10-py3-none-any.whl
- Upload date:
- Size: 1.4 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 51afb283edaccce7118afb386c50d974747cf74c933a24cfa1ea332f6c2a49b8
MD5 | 48a2b1c616ede56c7551d83c3aeec163
BLAKE2b-256 | 3eccb08e5a838388f730a0d25edc712f50e35a23e633bc67a8a97a2f066d092c