# Client for Humanloop API
## Table of Contents

- Requirements
- Installing
- Getting Started
- Async
- Streaming
- Reference
  - `humanloop.chat`
  - `humanloop.chat_deployed`
  - `humanloop.chat_experiment`
  - `humanloop.chat_model_configuration`
  - `humanloop.complete`
  - `humanloop.complete_deployed`
  - `humanloop.complete_experiment`
  - `humanloop.complete_model_configuration`
  - `humanloop.datapoints.delete`
  - `humanloop.datapoints.get`
  - `humanloop.datapoints.update`
  - `humanloop.datasets.create`
  - `humanloop.datasets.create_datapoint`
  - `humanloop.datasets.delete`
  - `humanloop.datasets.get`
  - `humanloop.datasets.list_all_for_project`
  - `humanloop.datasets.list_datapoints`
  - `humanloop.datasets.update`
  - `humanloop.evaluations.create`
  - `humanloop.evaluations.get`
  - `humanloop.evaluations.list_all_for_project`
  - `humanloop.evaluations.list_datapoints`
  - `humanloop.evaluators.create`
  - `humanloop.evaluators.delete`
  - `humanloop.evaluators.list`
  - `humanloop.evaluators.update`
  - `humanloop.experiments.create`
  - `humanloop.experiments.delete`
  - `humanloop.experiments.list`
  - `humanloop.experiments.sample`
  - `humanloop.experiments.update`
  - `humanloop.feedback`
  - `humanloop.finetunes.create`
  - `humanloop.finetunes.list_all_for_project`
  - `humanloop.finetunes.summary`
  - `humanloop.finetunes.update`
  - `humanloop.log`
  - `humanloop.logs.update`
  - `humanloop.logs.update_by_ref`
  - `humanloop.model_configs.get`
  - `humanloop.model_configs.register`
  - `humanloop.projects.create`
  - `humanloop.projects.create_feedback_type`
  - `humanloop.projects.deactivate_config`
  - `humanloop.projects.deactivate_experiment`
  - `humanloop.projects.delete_deployed_config`
  - `humanloop.projects.deploy_config`
  - `humanloop.projects.export`
  - `humanloop.projects.get`
  - `humanloop.projects.get_active_config`
  - `humanloop.projects.list`
  - `humanloop.projects.list_configs`
  - `humanloop.projects.list_deployed_configs`
  - `humanloop.projects.update`
  - `humanloop.projects.update_feedback_types`
  - `humanloop.sessions.create`
  - `humanloop.sessions.get`
  - `humanloop.sessions.list`
## Requirements

Python >=3.7

## Installing

```shell
pip install humanloop==0.5.17
```
## Getting Started

```python
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)

try:
    # Chat
    chat_response = humanloop.chat(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
        stream=False,
    )
    pprint(chat_response.body)
    pprint(chat_response.body["project_id"])
    pprint(chat_response.body["data"][0])
    pprint(chat_response.body["provider_responses"])
    pprint(chat_response.headers)
    pprint(chat_response.status)
    pprint(chat_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .chat: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Complete
    complete_response = humanloop.complete(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
        },
        stream=False,
    )
    pprint(complete_response.body)
    pprint(complete_response.body["project_id"])
    pprint(complete_response.body["data"][0])
    pprint(complete_response.body["provider_responses"])
    pprint(complete_response.headers)
    pprint(complete_response.status)
    pprint(complete_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .complete: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Feedback
    feedback_response = humanloop.feedback(
        type="rating",
        value="good",
        data_id="data_[...]",
        user="user@example.com",
    )
    pprint(feedback_response.body)
    pprint(feedback_response.headers)
    pprint(feedback_response.status)
    pprint(feedback_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .feedback: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)

try:
    # Log
    log_response = humanloop.log(
        project="sdk-example",
        inputs={
            "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
        },
        output="Llamas can be friendly and curious if they are trained to be around people, but if they are treated too much like pets when they are young, they can become difficult to handle when they grow up. This means they might spit, kick, and wrestle with their necks.",
        source="sdk",
        config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            "type": "model",
        },
    )
    pprint(log_response.body)
    pprint(log_response.headers)
    pprint(log_response.status)
    pprint(log_response.round_trip_time)
except ApiException as e:
    print("Exception when calling .log: %s\n" % e)
    pprint(e.body)
    if e.status == 422:
        pprint(e.body["detail"])
    pprint(e.headers)
    pprint(e.status)
    pprint(e.reason)
    pprint(e.round_trip_time)
```
## Async

`async` support is available by prepending `a` to any method.
```python
import asyncio
from pprint import pprint
from humanloop import Humanloop, ApiException

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    try:
        complete_response = await humanloop.acomplete(
            project="sdk-example",
            inputs={
                "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
            },
            model_config={
                "model": "gpt-3.5-turbo",
                "max_tokens": -1,
                "temperature": 0.7,
                "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
            },
            stream=False,
        )
        pprint(complete_response.body)
        pprint(complete_response.body["project_id"])
        pprint(complete_response.body["data"][0])
        pprint(complete_response.body["provider_responses"])
        pprint(complete_response.headers)
        pprint(complete_response.status)
        pprint(complete_response.round_trip_time)
    except ApiException as e:
        print("Exception when calling .complete: %s\n" % e)
        pprint(e.body)
        if e.status == 422:
            pprint(e.body["detail"])
        pprint(e.headers)
        pprint(e.status)
        pprint(e.reason)
        pprint(e.round_trip_time)


asyncio.run(main())
```
## Streaming

Streaming support is available by suffixing a `chat` or `complete` method with `_stream`.
```python
import asyncio
from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
    anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
)


async def main():
    response = await humanloop.chat_stream(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
    )
    async for token in response.content:
        print(token)


asyncio.run(main())
```
## Reference
### humanloop.chat

Get a chat response by providing details of the model configuration in the request.

#### 🛠️ Usage

```python
create_response = humanloop.chat(
    messages=[
        {
            "role": "user",
        }
    ],
    model_config={
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
    },
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    tool_call="string_example",
)
```

#### ⚙️ Parameters

- `messages: List[ChatMessage]`
  The messages passed to the provider chat endpoint.
- `model_config: ModelConfigChatRequest`
  The model configuration used to create a chat response.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of chat responses.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `user: str`
  End-user ID passed through to the provider call.
- `tool_call: Union[str, Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]]`
  Controls how the model uses tools. Has the same behaviour as OpenAI's `function_call` parameter. The following options are supported: `'none'` forces the model to not call a tool (the default when no tools are provided as part of the model config); `'auto'` lets the model decide whether to call one of the provided tools (the default when tools are provided as part of the model config); providing `{'name': <TOOL_NAME>}` forces the model to use the provided tool of the same name.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/chat` `post`
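The `tool_call` contract above can be sketched as a small validator. This helper is purely illustrative (it is not part of the SDK), and `"get_weather"` in the usage note below is a hypothetical tool name:

```python
def is_valid_tool_call(value):
    """Check a value against the documented `tool_call` contract:
    the string 'none', the string 'auto', or a dict of the form
    {'name': <TOOL_NAME>}. Illustrative only; not SDK code."""
    if value in ("none", "auto"):
        return True
    # The forced-tool form must be a dict with a string 'name' key.
    return isinstance(value, dict) and isinstance(value.get("name"), str)
```

For example, `is_valid_tool_call({"name": "get_weather"})` and `is_valid_tool_call("auto")` both pass, while a bare string like `"get_weather"` does not.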
### humanloop.chat_deployed

Get a chat response using the project's active deployment. The active deployment can be a specific model configuration or an experiment.

#### 🛠️ Usage

```python
create_deployed_response = humanloop.chat_deployed(
    messages=[
        {
            "role": "user",
        }
    ],
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    tool_call="string_example",
    environment="string_example",
)
```

#### ⚙️ Parameters

- `messages: List[ChatMessage]`
  The messages passed to the provider chat endpoint.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of chat responses.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `user: str`
  End-user ID passed through to the provider call.
- `tool_call: Union[str, Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]]`
  Controls how the model uses tools. Has the same behaviour as OpenAI's `function_call` parameter. The following options are supported: `'none'` forces the model to not call a tool (the default when no tools are provided as part of the model config); `'auto'` lets the model decide whether to call one of the provided tools (the default when tools are provided as part of the model config); providing `{'name': <TOOL_NAME>}` forces the model to use the provided tool of the same name.
- `environment: str`
  The environment name used to create a chat response. If not specified, the default environment will be used.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/chat-deployed` `post`
### humanloop.chat_experiment

Get a chat response for a specific experiment.

#### 🛠️ Usage

```python
create_experiment_response = humanloop.chat_experiment(
    messages=[
        {
            "role": "user",
        }
    ],
    experiment_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    tool_call="string_example",
)
```

#### ⚙️ Parameters

- `messages: List[ChatMessage]`
  The messages passed to the provider chat endpoint.
- `experiment_id: str`
  If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of chat responses, where each chat response will use a model configuration sampled from the experiment.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `user: str`
  End-user ID passed through to the provider call.
- `tool_call: Union[str, Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]]`
  Controls how the model uses tools. Has the same behaviour as OpenAI's `function_call` parameter. The following options are supported: `'none'` forces the model to not call a tool (the default when no tools are provided as part of the model config); `'auto'` lets the model decide whether to call one of the provided tools (the default when tools are provided as part of the model config); providing `{'name': <TOOL_NAME>}` forces the model to use the provided tool of the same name.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/chat-experiment` `post`
### humanloop.chat_model_configuration

Get a chat response for a specific model configuration.

#### 🛠️ Usage

```python
create_model_config_response = humanloop.chat_model_configuration(
    messages=[
        {
            "role": "user",
        }
    ],
    model_config_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    stream=False,
    user="string_example",
    tool_call="string_example",
)
```

#### ⚙️ Parameters

- `messages: List[ChatMessage]`
  The messages passed to the provider chat endpoint.
- `model_config_id: str`
  Identifies the model configuration used to create a chat response.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of chat responses.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `user: str`
  End-user ID passed through to the provider call.
- `tool_call: Union[str, Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]]`
  Controls how the model uses tools. Has the same behaviour as OpenAI's `function_call` parameter. The following options are supported: `'none'` forces the model to not call a tool (the default when no tools are provided as part of the model config); `'auto'` lets the model decide whether to call one of the provided tools (the default when tools are provided as part of the model config); providing `{'name': <TOOL_NAME>}` forces the model to use the provided tool of the same name.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/chat-model-config` `post`
### humanloop.complete

Create a completion by providing details of the model configuration in the request.

#### 🛠️ Usage

```python
create_response = humanloop.complete(
    model_config={
        "model": "model_example",
        "max_tokens": -1,
        "temperature": 1,
        "top_p": 1,
        "presence_penalty": 0,
        "frequency_penalty": 0,
        "prompt_template": "{{question}}",
    },
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    logprobs=1,
    stream=False,
    suffix="string_example",
    user="string_example",
)
```

#### ⚙️ Parameters

- `model_config: ModelConfigCompletionRequest`
  The model configuration used to generate.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of generations.
- `logprobs: int`
  Include the log probabilities of the top n tokens in the `provider_response`.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `suffix: str`
  The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
- `user: str`
  End-user ID passed through to the provider call.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/completion` `post`
### humanloop.complete_deployed

Create a completion using the project's active deployment. The active deployment can be a specific model configuration or an experiment.

#### 🛠️ Usage

```python
create_deployed_response = humanloop.complete_deployed(
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    logprobs=1,
    stream=False,
    suffix="string_example",
    user="string_example",
    environment="string_example",
)
```

#### ⚙️ Parameters

- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of generations.
- `logprobs: int`
  Include the log probabilities of the top n tokens in the `provider_response`.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `suffix: str`
  The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
- `user: str`
  End-user ID passed through to the provider call.
- `environment: str`
  The environment name used to create the completion. If not specified, the default environment will be used.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/completion-deployed` `post`
### humanloop.complete_experiment

Create a completion for a specific experiment.

#### 🛠️ Usage

```python
create_experiment_response = humanloop.complete_experiment(
    experiment_id="string_example",
    project="string_example",
    project_id="string_example",
    session_id="string_example",
    session_reference_id="string_example",
    parent_id="string_example",
    parent_reference_id="string_example",
    inputs={},
    source="string_example",
    metadata={},
    provider_api_keys={},
    num_samples=1,
    logprobs=1,
    stream=False,
    suffix="string_example",
    user="string_example",
)
```

#### ⚙️ Parameters

- `experiment_id: str`
  If an experiment ID is provided, a model configuration will be sampled from the experiment's active model configurations.
- `project: str`
  Unique project name. If no project exists with this name, a new project will be created.
- `project_id: str`
  Unique ID of a project to associate to the log. Either this or `project` must be provided.
- `session_id: str`
  ID of the session to associate the datapoint to.
- `session_reference_id: str`
  A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same `session_reference_id` in subsequent log requests. Specify at most one of this or `session_id`.
- `parent_id: str`
  ID associated to the parent datapoint in a session.
- `parent_reference_id: str`
  A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as `parent_id` in a prior log request. Specify at most one of this or `parent_id`. Note that this cannot refer to a datapoint being logged in the same request.
- `inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  The inputs passed to the prompt template.
- `source: str`
  Identifies where the model was called from.
- `metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]`
  Any additional metadata to record.
- `provider_api_keys: ProviderApiKeys`
  API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
- `num_samples: int`
  The number of generations, where each generation will use a model configuration sampled from the experiment.
- `logprobs: int`
  Include the log probabilities of the top n tokens in the `provider_response`.
- `stream: bool`
  If true, tokens will be sent as data-only server-sent events. If `num_samples > 1`, samples are streamed back independently.
- `suffix: str`
  The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
- `user: str`
  End-user ID passed through to the provider call.

#### ⚙️ Request Body

#### 🔄 Return

#### 🌐 Endpoint

`/completion-experiment` `post`
humanloop.complete_model_configuration
Create a completion for a specific model configuration.
🛠️ Usage
create_model_config_response = humanloop.complete_model_configuration(
model_config_id="string_example",
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
provider_api_keys={},
num_samples=1,
logprobs=1,
stream=False,
suffix="string_example",
user="string_example",
)
⚙️ Parameters
model_config_id: str
Identifies the model configuration used to create a chat response.
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
num_samples: int
The number of generations.
logprobs: int
Include the log probabilities of the top n tokens in the provider_response.
stream: bool
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
suffix: str
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
user: str
End-user ID passed through to provider call.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/completion-model-config
post
humanloop.datapoints.delete
Delete a list of datapoints by their IDs.
🛠️ Usage
humanloop.datapoints.delete(
body=["datapoints_delete_request_example"],
)
⚙️ Request Body
🌐 Endpoint
/datapoints
delete
humanloop.datapoints.get
Get a datapoint by ID.
🛠️ Usage
get_response = humanloop.datapoints.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of datapoint. Starts with evtc_
.
🔄 Return
🌐 Endpoint
/datapoints/{id}
get
humanloop.datapoints.update
Edit the input, messages and criteria fields of a datapoint. The fields passed in the request are the ones edited. Passing null
as a value for a field will delete that field. In order to signify not changing a field, it should be omitted from the request body.
🛠️ Usage
update_response = humanloop.datapoints.update(
id="id_example",
inputs={
"key": "string_example",
},
messages=[
{
"role": "user",
}
],
target={
"key": {},
},
)
⚙️ Parameters
id: str
String ID of datapoint. Starts with evtc_
.
inputs: UpdateDatapointRequestInputs
messages: List[ChatMessage
]
The chat messages for this datapoint.
target: UpdateDatapointRequestTarget
⚙️ Request Body
🔄 Return
🌐 Endpoint
/datapoints/{id}
patch
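Because omitted fields are left unchanged, a partial edit only needs the fields you want to modify. A minimal sketch of that semantics (the helper and field values are illustrative, not part of the SDK):

```python
# Build a partial-update payload: fields left out are untouched by the API,
# while an explicit None (JSON null) would delete the field's value.
_OMIT = object()  # sentinel so that None can still mean "delete this field"

def build_update(inputs=_OMIT, messages=_OMIT, target=_OMIT):
    payload = {}
    for key, value in (("inputs", inputs), ("messages", messages), ("target", target)):
        if value is not _OMIT:
            payload[key] = value
    return payload

# Only "target" changes; "inputs" and "messages" stay as they were.
payload = build_update(target={"answer": "42"})
# humanloop.datapoints.update(id="evtc_...", **payload)
```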
humanloop.datasets.create
Create a new dataset for a project.
🛠️ Usage
create_response = humanloop.datasets.create(
description="string_example",
name="string_example",
project_id="project_id_example",
)
⚙️ Parameters
description: str
The description of the dataset.
name: str
The name of the dataset.
project_id: str
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/datasets
post
humanloop.datasets.create_datapoint
Create a new datapoint for a dataset.
🛠️ Usage
create_datapoint_response = humanloop.datasets.create_datapoint(
body={
"log_ids": ["log_ids_example"],
},
dataset_id="dataset_id_example",
log_ids=["string_example"],
inputs={
"key": "string_example",
},
messages=[
{
"role": "user",
}
],
target={
"key": {},
},
)
⚙️ Parameters
dataset_id: str
String ID of dataset. Starts with evts_
.
requestBody: DatasetsCreateDatapointRequest
🔄 Return
DatasetsCreateDatapointResponse
🌐 Endpoint
/datasets/{dataset_id}/datapoints
post
humanloop.datasets.delete
Delete a dataset by ID.
🛠️ Usage
delete_response = humanloop.datasets.delete(
id="id_example",
)
⚙️ Parameters
id: str
String ID of dataset. Starts with evts_
.
🔄 Return
🌐 Endpoint
/datasets/{id}
delete
humanloop.datasets.get
Get a single dataset by ID.
🛠️ Usage
get_response = humanloop.datasets.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of dataset. Starts with evts_
.
🔄 Return
🌐 Endpoint
/datasets/{id}
get
humanloop.datasets.list_all_for_project
Get all datasets for a project.
🛠️ Usage
list_all_for_project_response = humanloop.datasets.list_all_for_project(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
🔄 Return
DatasetsListAllForProjectResponse
🌐 Endpoint
/projects/{project_id}/datasets
get
humanloop.datasets.list_datapoints
Get datapoints for a dataset.
🛠️ Usage
list_datapoints_response = humanloop.datasets.list_datapoints(
dataset_id="dataset_id_example",
page=0,
size=50,
)
⚙️ Parameters
dataset_id: str
String ID of dataset. Starts with evts_
.
page: int
size: int
🌐 Endpoint
/datasets/{dataset_id}/datapoints
get
humanloop.datasets.update
Update a dataset by ID.
🛠️ Usage
update_response = humanloop.datasets.update(
id="id_example",
description="string_example",
name="string_example",
)
⚙️ Parameters
id: str
String ID of dataset. Starts with evts_
.
description: str
The description of the dataset.
name: str
The name of the dataset.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/datasets/{id}
patch
humanloop.evaluations.create
Create an evaluation.
🛠️ Usage
create_response = humanloop.evaluations.create(
config_id="string_example",
evaluator_ids=["string_example"],
dataset_id="string_example",
project_id="project_id_example",
provider_api_keys={},
)
⚙️ Parameters
config_id: str
ID of the config to evaluate. Starts with config_
.
evaluator_ids: CreateEvaluationRequestEvaluatorIds
dataset_id: str
ID of the dataset to use in this evaluation. Starts with evts_
.
project_id: str
String ID of project. Starts with pr_
.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization. Ensure you provide an API key for the provider for the model config you are evaluating, or have one saved to your organization.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/evaluations
post
humanloop.evaluations.get
Get evaluation by ID.
🛠️ Usage
get_response = humanloop.evaluations.get(
id="id_example",
evaluator_aggregates=True,
)
⚙️ Parameters
id: str
String ID of evaluation run. Starts with ev_
.
evaluator_aggregates: bool
Whether to include evaluator aggregates in the response.
🔄 Return
🌐 Endpoint
/evaluations/{id}
get
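Evaluations run asynchronously, so a common pattern is to create one and then poll this endpoint until it settles. A sketch, assuming the response exposes a status field with terminal values like 'completed' or 'failed' (the exact lifecycle values are an assumption, so the API call is abstracted behind a callable):

```python
import time

def wait_for_evaluation(fetch, done=("completed", "failed"), poll_s=5.0, max_polls=120):
    """Poll fetch() -- a stand-in for humanloop.evaluations.get(id=...) --
    until the evaluation reaches a terminal status or we give up."""
    for _ in range(max_polls):
        evaluation = fetch()
        if evaluation["status"] in done:
            return evaluation
        time.sleep(poll_s)
    raise TimeoutError("evaluation did not reach a terminal status")
```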
humanloop.evaluations.list_all_for_project
Get all the evaluations associated with your project.
🛠️ Usage
list_all_for_project_response = humanloop.evaluations.list_all_for_project(
project_id="project_id_example",
evaluator_aggregates=True,
)
⚙️ Parameters
project_id: str
String ID of project. Starts with pr_
.
evaluator_aggregates: bool
Whether to include evaluator aggregates in the response.
🔄 Return
EvaluationsGetForProjectResponse
🌐 Endpoint
/projects/{project_id}/evaluations
get
humanloop.evaluations.list_datapoints
Get datapoints by evaluation ID.
🛠️ Usage
list_datapoints_response = humanloop.evaluations.list_datapoints(
id="id_example",
page=1,
size=10,
)
⚙️ Parameters
id: str
String ID of evaluation. Starts with ev_
.
page: int
Page to fetch. Starts from 1.
size: int
Number of evaluation results to retrieve.
🌐 Endpoint
/evaluations/{id}/datapoints
get
humanloop.evaluators.create
Create an evaluator within your organization.
🛠️ Usage
create_response = humanloop.evaluators.create(
description="string_example",
name="string_example",
code="string_example",
arguments_type="string_example",
return_type="string_example",
)
⚙️ Parameters
description: str
The description of the evaluator.
name: str
The name of the evaluator.
code: str
The code for the evaluator. This code will be executed in a sandboxed environment.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluators
post
humanloop.evaluators.delete
Delete an evaluator within your organization.
🛠️ Usage
humanloop.evaluators.delete(
id="id_example",
)
⚙️ Parameters
id: str
🌐 Endpoint
/evaluators/{id}
delete
humanloop.evaluators.list
Get all evaluators within your organization.
🛠️ Usage
list_response = humanloop.evaluators.list()
🔄 Return
🌐 Endpoint
/evaluators
get
humanloop.evaluators.update
Update an evaluator within your organization.
🛠️ Usage
update_response = humanloop.evaluators.update(
id="id_example",
description="string_example",
name="string_example",
code="string_example",
arguments_type="string_example",
return_type="string_example",
)
⚙️ Parameters
id: str
description: str
The description of the evaluator.
name: str
The name of the evaluator.
code: str
The code for the evaluator. This code will be executed in a sandboxed environment.
arguments_type: EvaluatorArgumentsType
Whether this evaluator is target-free or target-required.
return_type: EvaluatorReturnTypeEnum
The type of the return value of the evaluator.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/evaluators/{id}
patch
humanloop.experiments.create
Create an experiment for your project. You can optionally specify IDs of your project's model configs to include in the experiment, along with a set of labels to consider as positive feedback and whether the experiment should be set as active.
🛠️ Usage
create_response = humanloop.experiments.create(
name="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
project_id="project_id_example",
config_ids=["string_example"],
set_active=False,
)
⚙️ Parameters
name: str
Name of experiment.
positive_labels: List[PositiveLabel
]
Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.
project_id: str
String ID of project. Starts with pr_
.
config_ids: CreateExperimentRequestConfigIds
set_active: bool
Whether to set the created experiment as the project's active experiment.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/experiments
post
humanloop.experiments.delete
Delete the experiment with the specified ID.
🛠️ Usage
humanloop.experiments.delete(
experiment_id="experiment_id_example",
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_
.
🌐 Endpoint
/experiments/{experiment_id}
delete
humanloop.experiments.list
Get an array of experiments associated to your project.
🛠️ Usage
list_response = humanloop.experiments.list(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
String ID of project. Starts with pr_
.
🔄 Return
🌐 Endpoint
/projects/{project_id}/experiments
get
humanloop.experiments.sample
Samples a model config from the experiment's active model configs.
🛠️ Usage
sample_response = humanloop.experiments.sample(
experiment_id="experiment_id_example",
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_
.
🔄 Return
🌐 Endpoint
/experiments/{experiment_id}/model-config
get
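Sampling is typically one step in a loop: draw a config, generate with it, then log the result against the returned trial so feedback can be attributed back to the experiment. A structural sketch with the API calls abstracted behind callables (the exact response shape of sample is an assumption, so it is not hard-coded here):

```python
def run_trial(sample_trial, generate, log):
    """Sketch of one experiment iteration.

    sample_trial() -> (trial_id, model_config), wrapping
    humanloop.experiments.sample; generate(model_config) -> output string;
    log(trial_id, output) wraps humanloop.log(trial_id=..., output=...).
    """
    trial_id, model_config = sample_trial()
    output = generate(model_config)
    log(trial_id, output)
    return output
```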
humanloop.experiments.update
Update your experiment, including registering and de-registering model configs.
🛠️ Usage
update_response = humanloop.experiments.update(
experiment_id="experiment_id_example",
name="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
config_ids_to_register=["string_example"],
config_ids_to_deregister=["string_example"],
)
⚙️ Parameters
experiment_id: str
String ID of experiment. Starts with exp_
.
name: str
Name of experiment.
positive_labels: List[PositiveLabel
]
Feedback labels to treat as positive user feedback. Used to monitor the performance of model configs in the experiment.
config_ids_to_register: UpdateExperimentRequestConfigIdsToRegister
config_ids_to_deregister: UpdateExperimentRequestConfigIdsToDeregister
⚙️ Request Body
🔄 Return
🌐 Endpoint
/experiments/{experiment_id}
patch
humanloop.feedback
Submit an array of feedback for existing data_ids.
🛠️ Usage
feedback_response = humanloop.feedback(
body=[
{
"type": "string_example",
}
],
type="string_example",
value="string_example",
data_id="string_example",
user="string_example",
created_at="1970-01-01T00:00:00.00Z",
unset=True,
)
⚙️ Parameters
type: Union[FeedbackType
, str
]
The type of feedback. The default feedback types available are 'rating', 'action', 'issue', 'correction', and 'comment'.
value: str
The feedback value to be set. This field should be left blank when unsetting 'rating', 'correction' or 'comment', but is required otherwise.
data_id: str
ID to associate the feedback to a previously logged datapoint.
user: str
A unique identifier for the user who provided the feedback.
created_at: datetime
User-defined timestamp for when the feedback was created.
unset: bool
If true, the value for this feedback is unset.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/feedback
post
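Feedback can be sent either as a single item via keyword arguments or as a batch via body. A sketch of the two payload shapes using the default feedback types listed above (the data_id values are placeholders):

```python
# One feedback item, addressed to a previously logged datapoint:
single = {"type": "rating", "value": "good", "data_id": "data_abc"}  # placeholder ID

# Several items in one request, mixing feedback types:
batch = [
    {"type": "rating", "value": "good", "data_id": "data_abc"},
    {"type": "comment", "value": "Concise and accurate.", "data_id": "data_def"},
]
# humanloop.feedback(**single)
# humanloop.feedback(body=batch)
```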
humanloop.finetunes.create
Trigger the fine-tuning process for a specific base model and data snapshot.
🛠️ Usage
create_response = humanloop.finetunes.create(
name="string_example",
dataset_id="string_example",
config={
"base_model": "base_model_example",
},
project_id="project_id_example",
metadata={},
provider_api_keys={},
)
⚙️ Parameters
name: str
User-defined friendly name for a fine-tuning run.
dataset_id: str
ID of the dataset used for fine-tuning.
config: FinetuneConfig
Configuration and hyper-parameters for the fine-tuning process
project_id: str
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata that you would like to log for reference.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/finetunes
post
humanloop.finetunes.list_all_for_project
Get a list of all fine-tuned models associated to a given project.
🛠️ Usage
list_all_for_project_response = humanloop.finetunes.list_all_for_project(
project_id="project_id_example",
)
⚙️ Parameters
project_id: str
🔄 Return
FinetunesListAllForProjectResponse
🌐 Endpoint
/projects/{project_id}/finetunes
get
humanloop.finetunes.summary
Checks data for errors and generates finetune data summary. Does not actually trigger the finetuning process or persist the data.
🛠️ Usage
summary_response = humanloop.finetunes.summary(
name="string_example",
dataset_id="string_example",
config={
"base_model": "base_model_example",
},
project_id="project_id_example",
metadata={},
provider_api_keys={},
)
⚙️ Parameters
name: str
User-defined friendly name for a fine-tuning run.
dataset_id: str
ID of the dataset used for fine-tuning.
config: FinetuneConfig
Configuration and hyper-parameters for the fine-tuning process
project_id: str
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata that you would like to log for reference.
provider_api_keys: ProviderApiKeys
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{project_id}/finetunes/summary
post
humanloop.finetunes.update
Update user-specified attributes of the specified finetuned models.
🛠️ Usage
update_response = humanloop.finetunes.update(
id="id_example",
project_id="project_id_example",
name="string_example",
)
⚙️ Parameters
id: str
project_id: str
name: str
⚙️ Request Body
🔄 Return
🌐 Endpoint
/finetunes/{id}
patch
humanloop.log
Log a datapoint or array of datapoints to your Humanloop project.
🛠️ Usage
log_response = humanloop.log(
body=[{}],
project="string_example",
project_id="string_example",
session_id="string_example",
session_reference_id="string_example",
parent_id="string_example",
parent_reference_id="string_example",
inputs={},
source="string_example",
metadata={},
reference_id="string_example",
trial_id="string_example",
messages=[
{
"role": "user",
}
],
output="string_example",
config={
"type": "AgentConfigRequest",
"agent_class": "agent_class_example",
"model_config": {
"model": "model_example",
"max_tokens": -1,
"temperature": 1,
"top_p": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"type": "model",
},
},
feedback={
"type": "string_example",
"value": 3.14,
},
created_at="1970-01-01T00:00:00.00Z",
error="string_example",
duration=3.14,
)
⚙️ Parameters
project: str
Unique project name. If no project exists with this name, a new project will be created.
project_id: str
Unique ID of a project to associate to the log. Either this or project
must be provided.
session_id: str
ID of the session to associate the datapoint to.
session_reference_id: str
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id
in subsequent log requests. Specify at most one of this or session_id
.
parent_id: str
ID associated to the parent datapoint in a session.
parent_reference_id: str
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a prior log request. Specify at most one of this or parent_id
. Note that this cannot refer to a datapoint being logged in the same request.
inputs: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
The inputs passed to the prompt template.
source: str
Identifies where the model was called from.
metadata: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Any additional metadata to record.
reference_id: str
A unique string to reference the datapoint. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id
in a subsequent log request.
trial_id: str
Unique ID of an experiment trial to associate to the log.
messages: List[ChatMessage
]
The messages passed to the provider chat endpoint.
output: str
Generated output from your model for the provided inputs. Can be None
if logging an error, or if logging a parent datapoint with the intention to populate it later.
config: Union[ModelConfigRequest
, ToolConfigRequest
, GenericConfigRequest
, AgentConfigRequest
]
The model config used for this generation. Required unless trial_id
is provided.
feedback: Union[Feedback
, List[Feedback
]]
Optional parameter to provide feedback with your logged datapoint.
created_at: datetime
User-defined timestamp for when the log was created.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs
post
humanloop.logs.update
Update a logged datapoint in your Humanloop project.
🛠️ Usage
update_response = humanloop.logs.update(
id="id_example",
output="string_example",
error="string_example",
duration=3.14,
)
⚙️ Parameters
id: str
String ID of logged datapoint to return. Starts with data_
.
output: str
Generated output from your model for the provided inputs.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs/{id}
patch
humanloop.logs.update_by_ref
Update a logged datapoint by its reference ID. The reference_id
query parameter must be provided, and refers to the reference_id
of a previously-logged datapoint.
🛠️ Usage
update_by_ref_response = humanloop.logs.update_by_ref(
reference_id="reference_id_example",
output="string_example",
error="string_example",
duration=3.14,
)
⚙️ Parameters
reference_id: str
A unique string to reference the datapoint. Identifies the logged datapoint created with the same reference_id
.
output: str
Generated output from your model for the provided inputs.
error: str
Error message if the log is an error.
duration: Union[int, float]
Duration of the logged event in seconds.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/logs
patch
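A reference_id lets you patch a log you created earlier without storing the Humanloop-generated ID: log first with an identifier from your own system, then update by that same identifier once the result is in. A sketch with placeholder values; the client calls are commented out and only the payloads are built:

```python
import uuid

# Log with a reference_id you generate, then patch the same datapoint later
# via update_by_ref.
reference_id = f"order-service-{uuid.uuid4()}"

log_payload = {
    "project": "my-project",           # placeholder project name
    "reference_id": reference_id,      # your stable identifier for this log
    "inputs": {"question": "What is the refund policy?"},
}
# humanloop.log(**log_payload)

update_payload = {
    "reference_id": reference_id,      # same ID selects the earlier datapoint
    "output": "Refunds are available within 30 days.",
    "duration": 1.3,
}
# humanloop.logs.update_by_ref(**update_payload)
```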
humanloop.model_configs.get
Get a specific model config by ID.
🛠️ Usage
get_response = humanloop.model_configs.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of the model config. Starts with config_
.
🔄 Return
🌐 Endpoint
/model-configs/{id}
get
humanloop.model_configs.register
Register a model config to a project and optionally add it to an experiment. If the project name provided does not exist, a new project will be created automatically. If an experiment name is provided, the specified experiment must already exist. Otherwise, an error will be raised. If the model config is the first to be associated to the project, it will be set as the active model config.
🛠️ Usage
register_response = humanloop.model_configs.register(
model="string_example",
description="string_example",
name="string_example",
provider="string_example",
max_tokens=-1,
temperature=1,
top_p=1,
stop="string_example",
presence_penalty=0,
frequency_penalty=0,
other={},
project="string_example",
project_id="string_example",
experiment="string_example",
prompt_template="string_example",
chat_template=[
{
"role": "user",
}
],
endpoint="string_example",
tools=[
{
"name": "name_example",
}
],
)
⚙️ Parameters
model: str
The model instance used. E.g. text-davinci-002.
description: str
A description of the model config.
name: str
A friendly display name for the model config. If not provided, a name will be generated.
provider: ModelProviders
The company providing the underlying model service.
max_tokens: int
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
temperature: Union[int, float]
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
top_p: Union[int, float]
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
stop: Union[str
, List[str]
]
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
presence_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
frequency_penalty: Union[int, float]
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
other: Dict[str, Union[bool, date, datetime, dict, float, int, list, str, None]]
Other parameter values to be passed to the provider call.
project: str
Unique project name. If it does not exist, a new project will be created.
project_id: str
Unique project ID
experiment: str
If specified, the model config will be added to this experiment. Experiments are used for A/B testing and optimizing hyperparameters.
prompt_template: str
Prompt template that will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
chat_template: List[ChatMessage
]
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the prompt template should be specified with syntax: {{INPUT_NAME}}.
endpoint: ModelEndpoints
Which of the provider's model endpoints to use. For example Complete or Edit.
tools: List[ModelConfigToolRequest
]
Make tools available to OpenAI's chat models as functions.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/model-configs
post
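The {{INPUT_NAME}} syntax above maps entries of inputs into the final prompt. Humanloop performs this substitution server-side; the local renderer below is only a sketch of the documented syntax, useful for previewing a template before registering it:

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Replace each {{NAME}} placeholder with the matching value from inputs."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), template)

prompt_template = "Translate the following text to {{language}}: {{text}}"
rendered = render_prompt(prompt_template, {"language": "French", "text": "Hello"})
# rendered == "Translate the following text to French: Hello"
```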
humanloop.projects.create
Create a new project.
🛠️ Usage
create_response = humanloop.projects.create(
name="string_example",
feedback_types=[
{
"type": "type_example",
}
],
)
⚙️ Parameters
name: str
Unique project name.
feedback_types: List[FeedbackTypeRequest
]
Feedback types to be created.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects
post
humanloop.projects.create_feedback_type
Create Feedback Type
🛠️ Usage
create_feedback_type_response = humanloop.projects.create_feedback_type(
type="string_example",
id="id_example",
values=[
{
"value": "value_example",
"sentiment": "positive",
}
],
_class="string_example",
)
⚙️ Parameters
type: str
The type of feedback to update.
id: str
String ID of project. Starts with pr_
.
values: List[FeedbackLabelRequest
]
The feedback values to be available. This field should only be populated when updating a 'select' or 'multi_select' feedback class.
_class: FeedbackClass
The data type associated to this feedback type; whether it is a 'text'/'select'/'multi_select'. This is optional when updating the default feedback types (i.e. when type
is 'rating', 'action' or 'issue').
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{id}/feedback-types
post
humanloop.projects.deactivate_config
Remove the project's active config, if set. This has no effect if the project does not have an active model config set.
🛠️ Usage
deactivate_config_response = humanloop.projects.deactivate_config(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
environment: str
Name for the environment. E.g. 'production'. If not provided, will delete the active config for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-config
delete
humanloop.projects.deactivate_experiment
Remove the project's active experiment, if set. This has no effect if the project does not have an active experiment set.
🛠️ Usage
deactivate_experiment_response = humanloop.projects.deactivate_experiment(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
environment: str
Name for the environment. E.g. 'production'. If not provided, will remove the active experiment for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-experiment
delete
humanloop.projects.delete_deployed_config
Remove the model config deployed to the environment. This has no effect if the project does not have an active model config set.
🛠️ Usage
delete_deployed_config_response = humanloop.projects.delete_deployed_config(
project_id="project_id_example",
environment_id="environment_id_example",
)
⚙️ Parameters
project_id: str
environment_id: str
🌐 Endpoint
/projects/{project_id}/deployed-config/{environment_id}
delete
humanloop.projects.deploy_config
Deploy a model config to an environment. If the environment already has a model config deployed, it will be replaced.
🛠️ Usage
deploy_config_response = humanloop.projects.deploy_config(
project_id="project_id_example",
config_id="string_example",
experiment_id="string_example",
environments=[
{
"id": "id_example",
}
],
)
⚙️ Parameters
project_id: str
config_id: str
Model config unique identifier generated by Humanloop.
experiment_id: str
String ID of experiment. Starts with exp_
.
environments: List[EnvironmentRequest
]
List of environments to associate with the model config.
⚙️ Request Body
EnvironmentProjectConfigRequest
🔄 Return
ProjectsDeployConfigToEnvironmentsResponse
🌐 Endpoint
/projects/{project_id}/deploy-config
patch
humanloop.projects.export
Export all logged datapoints associated to your project. Results are paginated and sorted by created_at
in descending order.
🛠️ Usage
export_response = humanloop.projects.export(
id="id_example",
page=0,
size=10,
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
page: int
Page offset for pagination.
size: int
Page size for pagination. Number of logs to export.
🌐 Endpoint
/projects/{id}/export
post
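Because export is paginated with a page offset starting at 0, fetching everything means looping until a page comes back empty. A sketch with the API call abstracted as fetch_page(page, size), which you would wire to humanloop.projects.export in a real application:

```python
def iter_exported(fetch_page, size=200):
    """Yield every exported datapoint by walking pages until one is empty."""
    page = 0
    while True:
        records = fetch_page(page, size)
        if not records:
            return
        yield from records
        page += 1

# Hypothetical in-memory stand-in for the export endpoint:
data = list(range(450))
fake_fetch = lambda page, size: data[page * size:(page + 1) * size]
all_records = list(iter_exported(fake_fetch, size=200))
```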
humanloop.projects.get
Get a specific project.
🛠️ Usage
get_response = humanloop.projects.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
🔄 Return
🌐 Endpoint
/projects/{id}
get
humanloop.projects.get_active_config
Retrieves a config to use to execute your model. A config will be selected based on the project's active config/experiment settings.
🛠️ Usage
get_active_config_response = humanloop.projects.get_active_config(
id="id_example",
environment="string_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
environment: str
Name for the environment. E.g. 'production'. If not provided, will return the active config for the default environment.
🔄 Return
🌐 Endpoint
/projects/{id}/active-config
get
humanloop.projects.list
Get a paginated list of projects.
🛠️ Usage
list_response = humanloop.projects.list(
page=0,
size=10,
organization_id="string_example",
filter="string_example",
user_filter="string_example",
sort_by="string_example",
order="string_example",
)
⚙️ Parameters
page: int
Page offset for pagination.
size: int
Page size for pagination. Number of projects to fetch.
organization_id: str
ID of the organization that the fetched projects belong to. Starts with org_
.
filter: str
Case-insensitive filter for project name.
user_filter: str
Case-insensitive filter for users in the project. This filter matches against both email address and name of users.
sort_by: ProjectSortBy
Field to sort projects by.
order: SortOrder
Direction to sort by.
🌐 Endpoint
/projects
get
humanloop.projects.list_configs
Get an array of configs associated to your project.
🛠️ Usage
list_configs_response = humanloop.projects.list_configs(
id="id_example",
evaluation_aggregates=True,
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
evaluation_aggregates: bool
🔄 Return
🌐 Endpoint
/projects/{id}/configs
get
humanloop.projects.list_deployed_configs
Get an array of environments with the deployed configs associated to your project.
🛠️ Usage
list_deployed_configs_response = humanloop.projects.list_deployed_configs(
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
🔄 Return
ProjectsGetDeployedConfigsResponse
🌐 Endpoint
/projects/{id}/deployed-configs
get
humanloop.projects.update
Update a specific project. Set the project's active model config/experiment by passing either active_experiment_id
or active_model_config_id
. These will be set to the Default environment unless a list of environments is also passed in specifically detailing which environments to assign the active config or experiment. Set the feedback labels to be treated as positive user feedback used in calculating top-level project metrics by passing a list of labels in positive_labels
.
🛠️ Usage
update_response = humanloop.projects.update(
id="id_example",
name="string_example",
active_experiment_id="string_example",
active_config_id="string_example",
positive_labels=[
{
"type": "type_example",
"value": "value_example",
}
],
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
name: str
The new unique project name. Caution, if you are using the project name as the unique identifier in your API calls, changing the name will break the calls.
active_experiment_id: str
ID for an experiment to set as the project's active deployment. Starts with 'exp_'. At most one of 'active_experiment_id' and 'active_model_config_id' can be set.
active_config_id: str
ID for a config to set as the project's active deployment. Starts with 'config_'. At most one of 'active_experiment_id' and 'active_config_id' can be set.
positive_labels: List[PositiveLabel
]
The full list of labels to treat as positive user feedback.
⚙️ Request Body
🔄 Return
🌐 Endpoint
/projects/{id}
patch
humanloop.projects.update_feedback_types
Update feedback types. Allows enabling the available feedback types and setting status of feedback types/categorical values. This behaves like an upsert; any feedback categorical values that do not already exist in the project will be created.
🛠️ Usage
update_feedback_types_response = humanloop.projects.update_feedback_types(
body=[
{
"type": "type_example",
}
],
id="id_example",
)
⚙️ Parameters
id: str
String ID of project. Starts with pr_
.
requestBody: ProjectsUpdateFeedbackTypesRequest
🔄 Return
🌐 Endpoint
/projects/{id}/feedback-types
patch
humanloop.sessions.create
Create a new session. Returns a session ID that can be used to log datapoints to the session.
🛠️ Usage
create_response = humanloop.sessions.create()
🔄 Return
🌐 Endpoint
/sessions
post
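A session groups related datapoints: create one and pass its ID on each subsequent log call, or skip creation entirely and pass your own session_reference_id. A sketch of the second pattern with placeholder values (the client calls are commented out; only the payloads are built):

```python
# Chain two logs into one session using an identifier from your own system.
session_reference_id = "checkout-flow-7421"   # placeholder internal ID

steps = [
    {"project": "my-project", "session_reference_id": session_reference_id,
     "inputs": {"step": "retrieve-docs"}},
    {"project": "my-project", "session_reference_id": session_reference_id,
     "inputs": {"step": "draft-answer"}},
]
# for payload in steps:
#     humanloop.log(**payload)
```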
humanloop.sessions.get
Get a session by ID.
🛠️ Usage
get_response = humanloop.sessions.get(
id="id_example",
)
⚙️ Parameters
id: str
String ID of session to return. Starts with sesh_
.
🔄 Return
🌐 Endpoint
/sessions/{id}
get
humanloop.sessions.list
Get a page of sessions.
🛠️ Usage
list_response = humanloop.sessions.list(
project_id="project_id_example",
page=1,
size=10,
)
⚙️ Parameters
project_id: str
String ID of project to return sessions for. Sessions that contain any datapoints associated to this project will be returned. Starts with pr_
.
page: int
Page to fetch. Starts from 1.
size: int
Number of sessions to retrieve.
🌐 Endpoint
/sessions
get
Author
This Python package is automatically generated by Konfig.
Project details
File details
Details for the file humanloop-0.5.17.tar.gz.
File metadata
- Download URL: humanloop-0.5.17.tar.gz
- Upload date:
- Size: 225.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6b0d860ba23613a93a6113d9770574b4a705b02d5c060ed6deb26bf1278ab02e
MD5 | 0ce3f860286572d002bad0c1273636ed
BLAKE2b-256 | 459a755fafd95d428293cc2f2833303ead85e3d2b408a901aad07318053de33e
File details
Details for the file humanloop-0.5.17-py3-none-any.whl.
File metadata
- Download URL: humanloop-0.5.17-py3-none-any.whl
- Upload date:
- Size: 1.0 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 18b9e70e499aaeb8ff44e9c9aa2c34c457f261d3e23cac8d4422d0a44a33a64c
MD5 | 5ff04afd6285d73d28d580e7311fe96d
BLAKE2b-256 | 6cb5ace5be5a6f7d495b8cf4fe5dbb1a35b6171996c93209028b9a424fb45c3b