A hybrid chatbot.
Project description
WAFL 0.1.0
Introduction
WAFL is a framework for personal agents. It integrates large language models, speech recognition, and text-to-speech. The framework combines large language models with rules to create predictable behavior: a set of rules defines the agent's behavior, with support for function calling and a working memory. The current version requires the user to specify the rules to follow.
Installation
In this version, WAFL is a two-part system. Both parts can be installed on the same machine.
Interface side
The first part is local to your machine and needs to have access to a microphone and speaker. To install it, run the following commands:
$ sudo apt-get install portaudio19-dev ffmpeg
$ pip install wafl
After installing the requirements, you can initialize the interface by running the following command:
$ wafl init
which creates a config.json file that you can edit to change the default settings. A standard rule file is also created as wafl.rules.
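As a sketch of what the generated file might contain, the fragment below shows plausible entries. The key names here are assumptions for illustration, not the documented schema; only the activation name "computer", the default LLM host of localhost, and the LLM port 8080 come from this document:

```json
{
  "waking_up_word": "computer",
  "llm_model": {
    "model_host": "localhost",
    "model_port": 8080
  }
}
```

Open the generated config.json to see the actual keys for your version.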
Please see the examples in the following chapters.
LLM side (needs a GPU)
The second part (LLM side) is a model server for the speech-to-text model, the LLM, the embedding system, and the text-to-speech model.
Installation
In order to quickly run the LLM side, you can use the following installation commands:
$ pip install wafl-llm
$ wafl-llm start
which will use the default models and start the server on port 8080.
The interface side has a config.json file that needs to be filled with the IP address of the LLM side. The default is localhost.
Alternatively, you can run the LLM side by cloning this repository.
Running WAFL
This document contains a few examples of how to use the wafl CLI. The system can be run in the following modes:
$ wafl run
Starts all the available interfaces of the chatbot at the same time.
$ wafl run-audio
This is the main mode of operation. It will run the system in a loop, waiting for the user to speak a command. The activation word is the name defined in config.json. The default name is "computer", but you can change it to whatever you want.
$ wafl run-server
It runs a local web server that listens for HTTP requests on port 8090. The server will act as a chatbot, executing commands and returning the result as defined in the rules.
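For a quick check, you can send a request to the server from another terminal. This is only a sketch: the port comes from the text above, but the endpoint and payload format are assumptions, so consult the server's actual API before relying on it.

```shell
# Hypothetical request to the wafl web server on port 8090 (the payload
# shape is an assumption); prints a fallback message if the server is down.
curl -s -X POST http://localhost:8090 --data 'hello' || echo "server not running"
```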
$ wafl run-cli
This command works like the run-server command, but it listens for commands on the command line. It does not run a web server and is useful for testing purposes.
$ wafl run-tests
This command will run all the tests defined in the file testcases.txt.
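The layout of testcases.txt depends on the rules you have defined. Purely as a hypothetical illustration (this format is an assumption, not the documented one), a test case could pair a user utterance with the expected reply:

```
test "the bot knows its name"
  user: what is your name
  bot: computer
```

Check the generated project files for the actual test-case syntax.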
Documentation
The documentation can be found at wafl.readthedocs.io.
Project details
File details
Details for the file wafl-0.1.2.tar.gz
File metadata
- Download URL: wafl-0.1.2.tar.gz
- Upload date:
- Size: 385.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.10.0 requests/2.31.0 setuptools/52.0.0 requests-toolbelt/1.0.0 tqdm/4.64.1 CPython/3.9.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6318f609d2d853f046a76cee41e94df7123231b6c570ce2132b21796720cfd9e
MD5 | 1d511537e7396927f141ff227379a7ab
BLAKE2b-256 | f3a736c6d9038f81eae9d3a39e20afe4ca05a24956a22355e7e6fd956af4c70c
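To check that a download is intact, you can compare it against the SHA256 digest listed above. A minimal sketch using the coreutils sha256sum tool:

```shell
# Verify the sdist against the SHA256 digest from the table above
# (skips quietly if the file has not been downloaded yet).
if [ -f wafl-0.1.2.tar.gz ]; then
  echo "6318f609d2d853f046a76cee41e94df7123231b6c570ce2132b21796720cfd9e  wafl-0.1.2.tar.gz" \
    | sha256sum --check
fi
```

On a match, sha256sum prints "wafl-0.1.2.tar.gz: OK"; on a mismatch it reports a failure and exits with a nonzero status.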
File details
Details for the file wafl-0.1.2-py3-none-any.whl
File metadata
- Download URL: wafl-0.1.2-py3-none-any.whl
- Upload date:
- Size: 404.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.10.0 requests/2.31.0 setuptools/52.0.0 requests-toolbelt/1.0.0 tqdm/4.64.1 CPython/3.9.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 932d03bee29df05816d9549ed51d3919e37ebb3a5e0b7b0a39f03cfa2b87ddf9
MD5 | 0464039e06326784e9e05bdbed9a0343
BLAKE2b-256 | cfb27c0e2e5d5396a1cdfe7b391b27c5388091eedd5a9ebc159f1ad7a669c5d4