💭 chatspec
Tiny types & utilities built for the OpenAI Chat Completions API specification.
Hammad Saeed ~
@hsaeed3
📦 Installation
pip install chatspec
📚 Documentation & Examples
chatspec provides a 'plethora' (as many as would actually be useful) of types, models & methods for validating, converting and augmenting objects used in the OpenAI Chat Completions API specification, as well as a MockAI client & mock_completion() method for quickly creating mock LLM responses. I use Instructor for all of my structured outputs, so Pydantic is a core part of this library. The point of this library is to provide a common interface for methods that I have found myself needing to replicate across multiple projects.
✨ Quickstart
```python
# just import the package
import chatspec

# create some tool for the llm to call
def get_capital(city: str) -> str:
    return "Paris"

# create a mock completion with a simulated tool call & stream it!
stream = chatspec.mock_completion(
    # format threads easily
    # this creates a proper list of messages from a:
    # - string
    # - list of messages
    # - single message (pydantic or dict)
    messages=chatspec.normalize_messages("What is the capital of France?"),
    # convert python functions / pydantic models / and more to tools easily
    tools=[chatspec.convert_to_tool(get_capital)],
    # streaming is supported & properly typed / handled
    stream=True,
)

# chatspec provides a plethora of 'ease of use' methods
chatspec.print_stream(stream)
# >>> Mock response to: What is the capital of France?
# >>> Tool Call:
# >>>   ID: 79d46439-3df2-49de-b978-8be0ef54dccf
# >>>   Type: function
# >>>   Function: get_capital
# >>>   Arguments: {"city": "mock_string"}
```
📝 Table of Contents
- Mock Completions
- Chat Messages
- Tools & Tool Calling
- Completion Responses & Streams
- Types & Parameters
- Pydantic Models & Structured Outputs
- Markdown Formatting
🥸 Mock Completions
chatspec provides both async & synchronous mock completion methods, with support for streaming and simulated tool calls, all with proper typing & overloading for response types.
```python
import chatspec

# create a mock streamed completion
stream = chatspec.mock_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4o-mini",
    stream=True,
)

# chatspec provides a helper method to easily print streams
# everything is typed & overloaded properly for streamed & non-streamed responses
# it's like it's a real client!
chatspec.print_stream(stream)
# >>> Mock response to: Hello, how are you?

# you can also simulate tool calls
# this works for both streamed & non-streamed responses
chatspec.mock_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
    model="gpt-4o-mini",
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_capital",
                "description": "Get the capital of France",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"}
                    }
                },
            },
        }
    ],
)
```
Output

```python
Completion(
    id='85aa7221-54db-4ee1-90a4-8b467c90bd02',
    choices=[
        Choice(
            message=CompletionMessage(
                role='assistant',
                content='Mock response to: What is the capital of France?',
                name=None,
                function_call=None,
                tool_calls=[
                    CompletionToolCall(
                        id='17825e39-a2eb-430f-9f2a-7db467d1ec16',
                        type='function',
                        function=CompletionFunction(
                            name='get_capital',
                            arguments='{"city": "mock_string"}'
                        )
                    )
                ],
                tool_call_id=None
            ),
            finish_reason='tool_calls',
            index=0,
            logprobs=None
        )
    ],
    created=1739599399,
    model='gpt-4o-mini',
    object='chat.completion',
    service_tier=None,
    system_fingerprint=None,
    usage=None
)
```
💬 Chat Messages
chatspec provides a variety of utility when working with Message objects. These methods can be used for validation, conversion, creation of
specific message types & more.
Instance Checking & Validation of Messages
```python
import chatspec

# easily check if an object is a valid message
chatspec.is_message(
    {
        "role": "assistant",
        "content": "Hello, how are you?",
        "tool_calls": [
            {
                "id": "123",
                "function": {"name": "my_function", "arguments": "{}"}
            }
        ]
    }
)
# >>> True

chatspec.is_message(
    # the 'context' key is invalid
    {"role": "user", "context": "Hello, how are you?"}
)
# >>> False
```
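For intuition, the kind of shape-check performed above can be sketched in plain Python. This is only an illustration of the message structure being validated, not chatspec's actual implementation:

```python
# Illustrative sketch of a message shape-check (NOT chatspec's real code).
VALID_ROLES = {"system", "user", "assistant", "tool"}

def looks_like_message(obj) -> bool:
    """Return True if obj roughly matches the OpenAI chat message shape."""
    if not isinstance(obj, dict) or obj.get("role") not in VALID_ROLES:
        return False
    # assistant messages may carry tool_calls in place of content
    if obj["role"] == "assistant" and "tool_calls" in obj:
        return True
    return "content" in obj

print(looks_like_message({"role": "user", "content": "Hi"}))   # True
print(looks_like_message({"role": "user", "context": "Hi"}))   # False
```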
Validation & Normalization of Messages & System Prompts
```python
import chatspec

# easily validate & normalize into chat message threads
chatspec.normalize_messages("Hello!")
# >>> [{"role": "user", "content": "Hello!"}]

chatspec.normalize_messages({
    "role": "system",
    "content": "You are a helpful assistant."
})
# >>> [{"role": "system", "content": "You are a helpful assistant."}]

# use the `normalize_system_prompt` method to 'normalize' a thread around a
# single system prompt.
# this method automatically formats the entire thread so the system prompt is
# always the first message.
chatspec.normalize_system_prompt(
    [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hello!"}
    ],
    system_prompt="You are a helpful assistant."
)
# >>> [{"role": "system", "content": "You are a helpful assistant."},
# >>>  {"role": "user", "content": "Hello!"},
# >>>  {"role": "assistant", "content": "Hello!"}]

# multiple system messages are merged into one
chatspec.normalize_system_prompt(
    [
        {"role": "user", "content": "Hello!"},
        {"role": "system", "content": "You are a helpful"},
        {"role": "system", "content": "assistant."}
    ],
)
# >>> [{"role": "system", "content": "You are a helpful\nassistant."},
# >>>  {"role": "user", "content": "Hello!"}]
```
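The core of the normalization behavior shown above can be sketched in a few lines of plain Python (an illustration only, not chatspec's real code): strings become a single user message, a lone message dict gets wrapped in a list, and lists pass through.

```python
# Illustrative sketch of message-thread normalization (NOT chatspec's real code).
def normalize(messages):
    """Coerce a string, single message dict, or list into a message list."""
    if isinstance(messages, str):
        return [{"role": "user", "content": messages}]
    if isinstance(messages, dict):
        return [messages]
    return list(messages)

print(normalize("Hello!"))
# [{'role': 'user', 'content': 'Hello!'}]
```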
Convert or Create Specific Message Types
Using one of the various create_*_message methods, you can easily convert to or create specific Message types.
```python
import chatspec

# create a tool message from a completion response and a function's output
chatspec.create_tool_message(completion, my_tool_output)

# create a message with image content
chatspec.create_image_message(...)

# create a message with input audio content
chatspec.create_input_audio_message(...)
```
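For reference, under the OpenAI spec a message with image content carries a list of typed content parts rather than a plain string. The dict below is hand-built for illustration (the URL is a made-up example); it shows the shape such helpers produce:

```python
# A multimodal user message per the OpenAI Chat Completions spec:
# "content" is a list of typed parts instead of a plain string.
image_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}
print(image_message["content"][1]["type"])  # image_url
```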
🔧 Tools & Tool Calling
Instance Checking & Validation of Tools
As with the Message types, tools can be validated using the is_tool method.
```python
import chatspec

my_tool = {
    "type": "function",
    "function": {
        "name": "my_function",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "Some properties"
                }
            }
        }
    }
}

chatspec.is_tool(my_tool)
# >>> True

chatspec.is_tool({})
# >>> False
```
Convert Python Functions, Pydantic Models, Dataclasses & more to Tools
```python
import chatspec

# you can be super minimal
def my_tool(x: str) -> str:
    return x

chatspec.convert_to_tool(my_tool)
# >>> {
# >>>   "type": "function",
# >>>   "function": {
# >>>     "name": "my_tool",
# >>>     "parameters": {"type": "object", "properties": {"x": {"type": "string"}}}
# >>>   }
# >>> }

# or fully define docstrings/annotations
def my_tool(x: str) -> str:
    """
    A tool with some glorious purpose.

    Args:
        x (str): The input to the tool.

    Returns:
        str: The output of the tool.
    """
    return x

chatspec.convert_to_tool(my_tool)
# >>> {
# >>>   'type': 'function',
# >>>   'function': {
# >>>     'name': 'my_tool',
# >>>     'parameters': {
# >>>       'type': 'object',
# >>>       'properties': {'x': {'type': 'string', 'description': 'The input to the tool.'}},
# >>>       'required': ['x'],
# >>>       'additionalProperties': False
# >>>     },
# >>>     'description': 'A tool with some glorious purpose.\n',
# >>>     'returns': 'The output of the tool.'
# >>>   }
# >>> }
```
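The essence of this conversion can be sketched with the standard library's introspection tools. The mapping below is a simplified illustration (it ignores docstring parsing and defaults), not chatspec's actual logic:

```python
import inspect
from typing import get_type_hints

# Simplified sketch of function -> OpenAI tool schema conversion
# (NOT chatspec's real implementation).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_tool(fn) -> dict:
    """Build a minimal OpenAI-style tool schema from a function signature."""
    hints = get_type_hints(fn)
    props = {
        name: {"type": PY_TO_JSON.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": props,
                "required": list(props),
            },
        },
    }

def my_tool(x: str) -> str:
    """A tool with some glorious purpose."""
    return x

print(to_tool(my_tool)["function"]["parameters"]["properties"])
# {'x': {'type': 'string'}}
```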
Interacting with Tool Calls in Completions & Executing Tools
```python
import chatspec

# easily check if a completion or stream has a tool call
chatspec.has_tool_call(completion)

# get the tool calls from a completion or a stream
chatspec.get_tool_calls(completion)

# run a tool using a completion response
# this will only run the function if the tool call was present in the completion
chatspec.run_tool(completion, my_tool)

# create a tool message from a completion response and a function's output
chatspec.create_tool_message(completion, my_tool_output)
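Done by hand, the round trip these helpers automate looks roughly like the following. The completion dict is hand-built for illustration:

```python
import json

# A completion-shaped dict with one tool call (hand-built illustration).
completion = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_123",
                "type": "function",
                "function": {"name": "get_capital", "arguments": '{"city": "France"}'},
            }],
        }
    }]
}

def get_capital(city: str) -> str:
    return "Paris"

# 1. pull the tool call out of the completion
call = completion["choices"][0]["message"]["tool_calls"][0]

# 2. decode the JSON arguments and run the function
result = get_capital(**json.loads(call["function"]["arguments"]))

# 3. feed the output back into the thread as a "tool" role message
tool_message = {"role": "tool", "tool_call_id": call["id"], "content": result}
print(tool_message)
# {'role': 'tool', 'tool_call_id': 'call_123', 'content': 'Paris'}
```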
✨ Completion Responses & Streams
Instance Checking & Validation of Completions & Streams
```python
import chatspec

# check if an object is a valid chat completion
chatspec.is_completion(completion)

# check if an object is a valid stream
chatspec.is_stream(stream)
```
The Stream Passthrough & Stream Specific Methods
chatspec provides an internal system for caching & storing streamed responses from chat completions, so a stream can be used & reused by any of this library's stream methods. This is helpful because you can send or display the initial stream response to the client while still using it internally for any other purpose.
```python
import chatspec
# `openai` is not included in the package, so you'll need to install it separately
from openai import OpenAI

client = OpenAI()

# run the stream through the passthrough
stream = chatspec.stream_passthrough(client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4o-mini",
    stream=True,
))

# the cached stream is still recognized as a valid stream
chatspec.is_stream(stream)
# >>> True

# run any number of other methods over the stream
chatspec.dump_stream_to_message(stream)
# >>> {"role": "assistant", "content": "Hello! I'm just a program, so I don't have
# >>> feelings, but I'm here and ready to help you. How can I assist you today?"}

# chatspec.dump_stream_to_completion(stream)
# chatspec.is_completion(stream)
# chatspec.print_stream(stream)
```
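Conceptually, dumping a stream to a message means folding the streamed delta chunks back into a single assistant message. The sketch below does this by hand on hand-built, chunk-shaped dicts (an illustration, not chatspec's real code):

```python
# Hand-built completion-chunk-shaped dicts (illustration only).
chunks = [
    {"choices": [{"delta": {"role": "assistant", "content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {"content": "!"}}]},
]

# Fold the deltas back into one assistant message.
message = {"role": "assistant", "content": ""}
for chunk in chunks:
    delta = chunk["choices"][0]["delta"]
    message["content"] += delta.get("content") or ""

print(message)
# {'role': 'assistant', 'content': 'Hello, world!'}
```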
🆉 Types & Parameters
Use any of the provided types & models for schema reference, and use the Params model (or one of the specific parameter types) for quick parameter collection & validation.
```python
from chatspec import Params
# or get specific parameter types
from chatspec.params import MessagesParam, StreamOptionsParam

# easily define a collection of parameters for a chat completion
params = Params(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4o-mini",
    temperature=0.5,
)

# use any of the provided types from `chatspec.types`
from chatspec.types import Message, Tool, ToolCall, Completion
# ...

# create objects directly from the types easily
message = Message(role="user", content="Hello, how are you?")
# >>> {"role": "user", "content": "Hello, how are you?"}
```
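What a Params-style collection buys you can be sketched in plain Python: gather the request parameters once, drop the unset ones, and reuse the payload anywhere. (This is a plain-dict illustration; chatspec's Params is a Pydantic model.)

```python
# Plain-dict sketch of parameter collection (NOT chatspec's Params model).
def build_params(**kwargs) -> dict:
    """Collect chat completion parameters, excluding unset (None) values."""
    return {k: v for k, v in kwargs.items() if v is not None}

payload = build_params(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4o-mini",
    temperature=0.5,
    top_p=None,  # unset -> excluded from the payload
)
print(sorted(payload))
# ['messages', 'model', 'temperature']
```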
𝌭 Pydantic Models & Structured Outputs
Documentation coming soon!
📕 Markdown Formatting
Documentation coming soon!
File details: chatspec-0.0.10.tar.gz (source distribution)
- Size: 154.9 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.10

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cf98ce3cee607bf5d5dc966b4631102e80f20c2923a18622590b3bde47990f30` |
| MD5 | `42aa716b40da0e9cdea38dcafdfbfb03` |
| BLAKE2b-256 | `ddde55674eb843de4ee127e2b5a148f734f9feb280bdbbafb20ecb3799b28656` |
File details: chatspec-0.0.10-py3-none-any.whl (built distribution, Python 3)
- Size: 40.4 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.10

| Algorithm | Hash digest |
|---|---|
| SHA256 | `47e0566fb325330aab8d8a4ab638583d55ef3843ceb096df426fd5696b028c4f` |
| MD5 | `715f1db0561f9d70173895aea0dcc431` |
| BLAKE2b-256 | `9aa92eab539ab38ed3abda4211826ecb2ad652384c2fb91a587d97f930bd9e5d` |