LLM client library OpenAI should have made

Project description

keeptalking

A simple, pythonic interface to any OpenAI-compatible LLM server. You will never type response.choices[0].message.content again.

Installation

pip install keeptalking

Usage

The entire library is 3 functions:

from keeptalking import talk, write, vibe

Conversation

talk(model='google/gemini-2.5-flash', 
     roles=['system', 'user'], 
     messages=['Solve a math problem', 'Sum up all possible bases in which 97 is divisible by 17'],
     structure=int,
     tokens=10)

will use grammar-constrained decoding and return a single integer with the answer. The return value of talk is always of type structure, which defaults to str if omitted. If roles is omitted, the first message is treated as a system message and the rest as user messages. If model is omitted, gemini-2.5-flash is used (the default model can be overridden by setting the MODEL environment variable). If tokens is omitted, generation is limited to 2048 new tokens (the default token limit can be overridden by setting the TOKENS environment variable).

The only parameter that should not be omitted is messages:

talk(['Solve a math problem. Provide your reasoning', 'Sum up all possible bases in which 97 is divisible by 17'])
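The default-resolution rules described above can be sketched roughly like this (resolve_defaults is a hypothetical helper for illustration, not part of keeptalking's API):

```python
import os

def resolve_defaults(model=None, tokens=None):
    # Hypothetical illustration of the documented defaults:
    # the MODEL / TOKENS environment variables override the built-ins,
    # and explicit arguments override everything.
    model = model or os.environ.get('MODEL', 'google/gemini-2.5-flash')
    tokens = tokens or int(os.environ.get('TOKENS', '2048'))
    return model, tokens
```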

write is an asynchronous version of talk that lets you beautifully parallelize batch requests:

await asyncio.gather(*(
    write(model='google/gemini-2.5-flash', 
          roles=['system', 'user'], 
          messages=[sys, berry],
          structure=int,
          tokens=10)
    for berry in ['Strawberry', 'Blackberry', 'Raspberry', 'Blueberry', 'Canterbury']
))

write automatically self-throttles as necessary, so it's safe to launch thousands of write() calls in parallel with no external rate limiting.
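One way such self-throttling can work is an asyncio.Semaphore that caps in-flight requests. A minimal sketch under that assumption (fake_write, throttled, and the limit of 8 are stand-ins for illustration, not keeptalking internals):

```python
import asyncio

async def fake_write(message):
    # Stand-in for write(); just echoes after a short delay.
    await asyncio.sleep(0.01)
    return message.upper()

async def main():
    sem = asyncio.Semaphore(8)  # at most 8 requests in flight at once

    async def throttled(message):
        async with sem:
            return await fake_write(message)

    berries = ['Strawberry', 'Blackberry', 'Raspberry']
    return await asyncio.gather(*(throttled(b) for b in berries))

results = asyncio.run(main())
```

Any number of throttled calls can be scheduled at once; the semaphore ensures only a bounded number actually run concurrently.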

Vibe functions

Vibe functions are functions defined in natural language.

@vibe(model='google/gemini-2.5-flash', tokens=10)
def do_job(job_details):
    """System message"""
    return f"User message with {job_details}"
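Conceptually, the decorator turns the docstring into the system message and the function's return value into the user message. A rough sketch of that idea (vibe_sketch is hypothetical; the real decorator would send both messages to the model rather than returning them):

```python
import functools

def vibe_sketch(**opts):
    # Hypothetical sketch of a @vibe-style decorator.
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            system = fn.__doc__         # docstring -> system message
            user = fn(*args, **kwargs)  # return value -> user message
            return system, user         # a real version would call the LLM here
        return wrapper
    return deco

@vibe_sketch(tokens=10)
def do_job(job_details):
    """System message"""
    return f"User message with {job_details}"
```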

ELL users will notice that this format is shamelessly inspired by ELL. However, keeptalking is much simpler than ELL, and despite that it supports additional features, like async vibe functions:

@vibe()
async def homework_assistant(topic, pages=5):
    """Help the student with their homework"""
    return f"Write a {pages}-page essay on {topic}"

fully parallelizable, like so:

await asyncio.gather(
    homework_assistant('Math'),
    homework_assistant('History'),
    homework_assistant('English')
)

Structured outputs are enabled with a single type hint:

@vibe()
def count_rs(request) -> int:
    """Count how many Rs are in the request"""
    return request

Unlike in the rest of the Python ecosystem, type hints on vibe functions actually guarantee that the return value is of the annotated type.
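The return annotation can be read with the standard inspect module; a guess at how a structured-output wrapper might pick its output type (return_type is a hypothetical helper, and str as the fallback matches the documented default for structure):

```python
import inspect

def return_type(fn):
    # Hypothetical: read the annotated return type, defaulting to str
    # when the function has no return annotation.
    ann = inspect.signature(fn).return_annotation
    return str if ann is inspect.Signature.empty else ann

def count_rs(request) -> int:
    """Count how many Rs are in the request"""
    return request

def plain(request):
    return request
```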

Backend configuration

The model server is configured via environment variables. It can be set directly with BASE_URL and API_KEY. If those are not set, keeptalking defaults to OpenRouter if OPENROUTER_API_KEY is set, then to OpenAI if OPENAI_API_KEY is set. Advanced users can monkey-patch keeptalking.client_sync and keeptalking.client_async instead.
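The lookup order described above could look roughly like this (resolve_backend is a hypothetical helper; the exact base URLs and error behavior are assumptions for illustration):

```python
import os

def resolve_backend():
    # Hypothetical sketch of the documented lookup order:
    # explicit BASE_URL/API_KEY first, then OpenRouter, then OpenAI.
    if os.environ.get('BASE_URL') and os.environ.get('API_KEY'):
        return os.environ['BASE_URL'], os.environ['API_KEY']
    if os.environ.get('OPENROUTER_API_KEY'):
        return 'https://openrouter.ai/api/v1', os.environ['OPENROUTER_API_KEY']
    if os.environ.get('OPENAI_API_KEY'):
        return 'https://api.openai.com/v1', os.environ['OPENAI_API_KEY']
    raise RuntimeError('no LLM backend configured')
```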

Example

You will find a detailed example in example.py. It takes the top 10 models from OpenRouter's model catalog, reads the text description of each model to filter out specialized models (such as coding or editing models), then runs a small test on each to check that it works.

Project details


Download files

Download the file for your platform.

Source Distribution

keeptalking-0.3.1.tar.gz (3.6 kB view details)

Uploaded Source

Built Distribution

keeptalking-0.3.1-py3-none-any.whl (4.3 kB view details)

Uploaded Python 3

File details

Details for the file keeptalking-0.3.1.tar.gz.

File metadata

  • Download URL: keeptalking-0.3.1.tar.gz
  • Upload date:
  • Size: 3.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.11 Darwin/24.5.0

File hashes

Hashes for keeptalking-0.3.1.tar.gz
Algorithm Hash digest
SHA256 b051f839d963025a66243a00994d42cdc6d21091b62f4598d519e3952ecc6e3b
MD5 5f0fe9e565ed6b79fef1b3bb96c12408
BLAKE2b-256 a15b39b790b698c3456685ac262c1aa206116b09aaff104f02e264d063c837ef

File details

Details for the file keeptalking-0.3.1-py3-none-any.whl.

File metadata

  • Download URL: keeptalking-0.3.1-py3-none-any.whl
  • Upload date:
  • Size: 4.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.12.11 Darwin/24.5.0

File hashes

Hashes for keeptalking-0.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 193f24fb75c82f1b37ce5b324d62f9c955eec14054723ade8ae904f2ddc18c1a
MD5 c0fb611e79f1d9b12530d02b935567fe
BLAKE2b-256 f6fd8183c06f2ca92eb42721edcbf4def9f5e83d9002a932e9054a37eac4c7c1
