oneping

LLM provider abstraction layer.

Give me a ping, Vasily. One ping only, please.

This is a library for querying LLM providers such as OpenAI or Anthropic, as well as local models. Currently the following providers are supported: openai, anthropic, fireworks, and local.

Requesting a local provider will target localhost and use an OpenAI-compatible API as in llama.cpp or llama-cpp-python. Also included is a simple function to start a llama-cpp-python server on the fly (oneping.server.run).
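
For example, to spin up a local server before issuing requests (a minimal sketch; the model path argument here is an assumption, so check run's signature for the exact parameters):

import oneping.server

# start an OpenAI-compatible llama-cpp-python server on localhost
# (the 'path/to/model.gguf' argument is hypothetical)
oneping.server.run('path/to/model.gguf')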

The various native provider libraries are soft dependencies: the library still functions without any or all of them, in which case requests go through the URL interface. The native packages for these providers are: openai, anthropic, and fireworks-ai.

There is also a Chat interface that automatically tracks message history. Kind of departing from the "one ping" notion, but oh well. It accepts provider and system arguments; other parameters are passed when you call it (an alias for chat) or to stream.

Installation

For standard usage, install with:

pip install oneping

To include the native provider dependencies, install with:

pip install oneping[native]

To include the chat and web interface dependencies, install with:

pip install oneping[chat]

Library Usage

Basic usage with Anthropic through the URL interface:

response = oneping.reply(prompt, provider='anthropic')

The reply function accepts a number of arguments including:

  • prompt (required): The prompt to send to the LLM
  • provider = local: The provider to use: openai, anthropic, fireworks, or local
  • system = None: The system prompt to use (not required, but recommended)
  • prefill = None: Start "assistant" response with a string (Anthropic doesn't like newlines in this)
  • model = None: Indicate the desired model for the provider
  • max_tokens = 1024: The maximum number of tokens to return
  • history = None: List of prior messages or True to request full history as return value
  • native = False: Use the native provider libraries
  • url = None: Override the default URL for the provider
  • port = 8000: The port to use for a local or custom provider
  • api_key = None: The API key to use for non-local providers

For example, to use the OpenAI API with a custom system prompt:

response = oneping.reply(prompt, provider='openai', system=system)
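
Several of these arguments can be combined in a single call. A hedged sketch (the model name below is a placeholder, not a recommendation):

response = oneping.reply(
    prompt,
    provider='anthropic',
    system='You are a terse assistant.',  # optional but recommended
    prefill='Answer:',                    # seeds the assistant reply (no newlines for Anthropic)
    model='claude-3-5-sonnet',            # placeholder model name
    max_tokens=512,
)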

To conduct a full conversation with a local LLM:

history = True
history = oneping.reply(prompt1, provider='local', history=history)
history = oneping.reply(prompt2, provider='local', history=history)

For streaming, use the stream function; for async streaming, use stream_async. Both take the same arguments as reply.
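
For example (a sketch assuming both functions yield text chunks as they arrive):

import asyncio
import oneping

# synchronous: print the response as it streams in
for chunk in oneping.stream(prompt, provider='local'):
    print(chunk, end='', flush=True)

# asynchronous variant
async def main():
    async for chunk in oneping.stream_async(prompt, provider='local'):
        print(chunk, end='', flush=True)

asyncio.run(main())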

Command Line

You can call the oneping module directly and use the following subcommands:

  • reply: get a single response from the LLM
  • stream: stream a response from the LLM
  • console: start a console (Textual) chat
  • web: start a web (FastHTML) chat

These accept the arguments listed above for reply as command line arguments. For example:

python -m oneping stream "Does Jupiter have a solid core?" --provider anthropic

Or you can pipe in your query from stdin:

echo "Does Jupiter have a solid core?" | python -m oneping stream --provider anthropic

Chat Interface

The Chat interface is a simple wrapper for a conversation history. It can be used to chat with an LLM provider or simply to maintain a conversation history for your bot. It exposes the usual reply, stream, and stream_async functions, and calling it directly maps to reply.

chat = oneping.Chat(provider='anthropic', system=system)
response1 = chat(prompt1)
response2 = chat(prompt2)
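
Streaming works through the chat object as well (a sketch assuming chat.stream yields text chunks like the top-level stream):

for chunk in chat.stream(prompt3):
    print(chunk, end='', flush=True)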

There is also a Textual-powered console interface and a FastHTML-powered web interface. You can start these with: python -m oneping console or python -m oneping web.

(Screenshots: Textual chat and FastHTML chat interfaces.)
