
Sopel AI - an LLM enhanced chat bot plug-in


% sopel_ai(1) Version 1.1.0 chatbot plugin

Name

Sopel AI - AI query/response plugin

Synopsis

Enable Sopel to respond to queries using a Perplexity AI back-end, featuring the ability to plug different LLMs on a per-user basis.

pip install -U sopel_ai

sopel configure
sopel

From a channel where Sopel AI is present enter a query:

.q Summarize the plot of The Martian by Andy Weir.

This plugin requires an API key issued by the service provider.

Installation

pip install -U sopel_ai

The installation assumes that the Sopel chatbot is already installed on the target system, in the same environment where pip runs.

Confirm the installed package and version:

echo "import sopel_ai ; print(sopel_ai.__VERSION__)" | python

Commands

Listed in order of frequency of use:

Command    Arguments        Effect
.q         Some question    The model produces a response
.qpm       Some question    Same as .q, but the response arrives by private message
.models    n/a              List all models that Sopel AI supports
.mymodel   number           Query or set the model to use for the current /nick
.req       n/a              Return the GitHub URL for Sopel AI feature requests
.bug       n/a              Same as .req

Other commands available if the standard Sopel infobot plugins are enabled:

Command    Arguments        Effect
.search    Some question    Search using Bing or DuckDuckGo
.dict      Word             Get a dictionary definition if one is available
.tr        Word or phrase   Translate to English
.w         Word or topic    Search Wikipedia for articles

Usage

The most common usage is to enter a query in-channel or in a private message, then wait for the bot to respond.

.q Quote the Three Laws of Robotics as a list and without details.

Users may check the current model used for producing their responses by issuing:

.mymodel

The bot produces a numbered list of supported models when users issue:

.models

Users may change the default model to any of those listed by issuing the .mymodel command followed by the item number of the desired model:

.mymodel 1
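The per-/nick model preference described above can be sketched as a small lookup keyed by nick. All names here (`ModelStore`, `SUPPORTED_MODELS`, `DEFAULT_MODEL`) and the model identifiers are illustrative assumptions, not sopel_ai's actual API; the plugin keeps preferences in its own persistent storage.

```python
# Sketch of per-/nick model selection; names and model identifiers
# are hypothetical, not sopel_ai's actual implementation.

SUPPORTED_MODELS = [   # numbered list, as shown by .models
    "model-small",     # 1
    "model-large",     # 2
]
DEFAULT_MODEL = SUPPORTED_MODELS[0]


class ModelStore:
    """Map each /nick to its preferred model (in-memory stand-in)."""

    def __init__(self):
        self._by_nick = {}

    def model_for(self, nick: str) -> str:
        # Nicks without an explicit choice fall back to the default.
        return self._by_nick.get(nick, DEFAULT_MODEL)

    def set_model(self, nick: str, item_number: int) -> str:
        # .mymodel takes the 1-based item number from the .models list.
        self._by_nick[nick] = SUPPORTED_MODELS[item_number - 1]
        return self._by_nick[nick]
```

With this sketch, `.mymodel 2` from nick alice would translate to `store.set_model("alice", 2)`, and every later `.q` from alice would resolve its model via `store.model_for("alice")`.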

Users may request private instead of in-channel responses:

.qpm Quote the Three Laws of Robotics and give me examples.

Responses to .q queries are expected to be brief and are truncated at 480 characters. They are intended to appear in-channel and to be as short as possible.
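The 480-character in-channel cap can be sketched as a simple truncation helper. This is an assumption about how the limit is applied; the function name is illustrative, not sopel_ai's actual code.

```python
MAX_CHANNEL_RESPONSE = 480  # in-channel cap described above


def truncate_for_channel(text: str, limit: int = MAX_CHANNEL_RESPONSE) -> str:
    """Clip a response to the in-channel limit, marking the cut with an ellipsis."""
    if len(text) <= limit:
        return text
    # Reserve one character for the ellipsis so the result stays within the cap.
    return text[:limit - 1] + "…"
```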

Responses to .qpm queries are expected to be long and detailed, with a 16 KB length limit. They may span multiple messages (due to IRCv3 message-length limits), and sopel_ai delivers them to the user by private message, regardless of whether the query was issued from a channel or a direct message.
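Splitting a long .qpm response across several messages can be sketched like this. The 400-byte chunk size is an illustrative safety margin under the classic 512-byte IRC line limit, not sopel_ai's actual value; only the 16 KB cap comes from the text above.

```python
PM_LENGTH_LIMIT = 16 * 1024  # 16 KB cap for .qpm responses
CHUNK_SIZE = 400             # conservative per-message payload (assumption)


def chunk_for_irc(text: str) -> list:
    """Clip to the private-message cap, then split into message-sized pieces."""
    text = text[:PM_LENGTH_LIMIT]
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]
```

Each element of the returned list would then be sent as its own private message, in order.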

Users can check the plugin version and the AI provider in use by issuing:

.mver

AI providers

The current version uses the Perplexity AI models and API. Future versions may support other providers.
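Perplexity exposes an OpenAI-compatible chat completions API at https://api.perplexity.ai. A request body for a single query might be assembled like this; the model name and the helper itself are illustrative sketches, not what sopel_ai actually sends.

```python
import json

# OpenAI-compatible chat completions endpoint (Perplexity's documented base URL).
PERPLEXITY_ENDPOINT = "https://api.perplexity.ai/chat/completions"


def build_query_payload(query: str, model: str = "model-small") -> str:
    """Assemble the JSON body for one chat-completion request (sketch only).

    The model identifier is a placeholder; a real request would use one of
    the models listed by .models and an Authorization: Bearer header with
    the configured llm_key.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "user", "content": query},
        ],
    }
    return json.dumps(body)
```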

API Key

All AI service providers require an API key for access. Configure the API key via:

sopel configure

Or edit this section in the Sopel configuration file:

[sopel_ai]
.
.
llm_key = pplx-3a45enteryourkeyhere
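Sopel configuration files use INI syntax, so the section above can be read with Python's standard configparser. A minimal sketch (the key value is a placeholder, and sopel_ai itself reads the setting through Sopel's own config machinery):

```python
import configparser

# Stand-in for the [sopel_ai] section of a Sopel configuration file.
SAMPLE_CONFIG = """
[sopel_ai]
llm_key = pplx-3a45enteryourkeyhere
"""


def read_llm_key(config_text: str) -> str:
    """Pull llm_key from the [sopel_ai] section of a Sopel-style config."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    return parser.get("sopel_ai", "llm_key")
```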

License

The Sopel AI Sopel plugin, package, documentation, and examples are licensed under the BSD-3 open source license at https://github.com/pr3d4t0r/sopel_ai/blob/master/LICENSE.txt.

See also

Bugs

Feature requests and bug reports:

https://github.com/pr3d4t0r/sopel_ai/issues
