Reason this release was yanked:
Deprecating current use and reclaiming the namespace for a different HCLI use.
Project description
HCLI hg
HCLI hg is a python package wrapper that contains an HCLI sample application (hg); hg is an HCLI for interacting with GPT-3.5-Turbo via terminal input and output streams.
HCLI hg wraps hg (an HCLI) and is intended to be used with an HCLI Client [1] as presented via an HCLI Connector [2].
You can find out more about HCLI on hcli.io [3]
[1] https://github.com/cometaj2/huckle
[2] https://github.com/cometaj2/hcli_core
[3] http://hcli.io
Installation
HCLI hg requires a supported version of Python and pip.
You’ll need an HCLI Connector to run hg. For example, you can use HCLI Core (https://github.com/cometaj2/hcli_core), a WSGI server such as Green Unicorn (https://gunicorn.org/), and an HCLI Client like Huckle (https://github.com/cometaj2/huckle).
pip install hcli-hg
pip install hcli-core
pip install huckle
pip install gunicorn
gunicorn --workers=1 --threads=1 -b 127.0.0.1:8000 --chdir `hcli_core path` "hcli_core:connector(\"`hcli_hg path`\")"
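Once gunicorn is running, you can optionally verify that the connector responds before moving on. This is only a quick sanity check assuming the default bind address used above; any HTTP client will do, and an HTTP response rather than a connection error indicates the connector is up.
curl http://127.0.0.1:8000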
Usage
Open a different shell window.
Set up the huckle env eval in your .bash_profile (or other bash configuration) to avoid having to execute the eval every time you want to invoke HCLIs by name (e.g. hg); a minimal example follows the commands below.
Note that no CLI is actually installed by Huckle. Huckle reads the HCLI semantics exposed by the API via HCLI Connector and ends up behaving like the CLI it targets.
huckle cli install http://127.0.0.1:8000
eval $(huckle env)
hg help
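For example, a minimal sketch of the .bash_profile setup mentioned above (the exact file is an assumption; add the line to whichever bash configuration you normally source):
# in ~/.bash_profile (or ~/.bashrc)
eval $(huckle env)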
Versioning
This project makes use of semantic versioning (http://semver.org) and may make use of the “devx”, “prealphax”, “alphax”, “betax”, and “rcx” extensions, where x is a number (e.g. 0.3.0-prealpha1), on GitHub.
Supports
Chatting by sending command line input streams (e.g. via pipes); see the sketch after this list.
Getting and setting a context to set up a new conversation or to save a conversation.
Behavior setting to allow for persistent chatbot behavior (e.g. the Do Anything Now (DAN) prompt).
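As an illustrative sketch only, piped chatting could look like the following; the subcommand shown is a placeholder assumption rather than something documented here, so consult hg help for the commands hg actually exposes:
echo "Explain HCLI in one sentence." | hg chat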
To Do
A memory layer for the GPT-3.5-Turbo HCLI (hg).
Automatic context switching based on NLP analysis of received input streams.
Context blending to marry different contexts.
Automatic context compression to fit more effective conversation memory within a given context window.
Additional commands to better save and restore conversations/contexts.
A shell mode for the GPT-3.5-Turbo HCLI (hg) to enable shell CLI execution in pursuit of a given goal.
Bugs
N/A
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file hcli_hg-0.1.5.tar.gz.
File metadata
- Download URL: hcli_hg-0.1.5.tar.gz
- Upload date:
- Size: 17.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4bc7c7999c09c5bc1990e22140aa5605f94990b420958a732566baab9c222ac3
MD5 | 497eab8b79f333bfe487fdcbd1628b87
BLAKE2b-256 | fabb5fd28884f0bc89f81b4d8a1285c8428ed96394f2df8754333581c8590ea7
File details
Details for the file hcli_hg-0.1.5-py2.py3-none-any.whl.
File metadata
- Download URL: hcli_hg-0.1.5-py2.py3-none-any.whl
- Upload date:
- Size: 18.5 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 16e986e8b89535f2c83292078cbecd44fe8463d8b9d53c54f0aa36287a2eaf9f
MD5 | fb35cb890b651e8c334aafc6cdd5c143
BLAKE2b-256 | 3b011f6d2ed0ef2bfdea2c10d8ea3da25e7dd0be9ef1bf1c4e733164b6f3da31