NeonAI Core LLM
Core module for Neon LLMs
Request Format
API requests should include history, a list of string pairs representing prior conversation turns, and the current query.
Example Request:
{ "history": [["user", "hello"], ["llm", "hi"]], "query": "how are you?" }
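As a sketch, a request in this format can be assembled like so. The payload shape (history as [speaker, message] pairs plus a current query) comes from the documentation above; the helper function name is our own, not part of neon-llm-core:

```python
# Illustrative helper (hypothetical, not a neon-llm-core API): build a
# request dict in the documented shape.

def build_request(history, query):
    """Assemble an LLM request with conversation history and the current query.

    history: iterable of (speaker, message) pairs, oldest first,
             where speaker is "user" or "llm".
    """
    # Normalize pairs to lists so the payload matches the JSON example.
    return {"history": [list(turn) for turn in history], "query": query}

request = build_request([("user", "hello"), ("llm", "hi")], "how are you?")
print(request)
# → {'history': [['user', 'hello'], ['llm', 'hi']], 'query': 'how are you?'}
```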
Response Format
Responses are returned as dictionaries and should contain the following:
response - String LLM response to the query
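A consumer of this service might validate the documented response shape before using it. This helper is a hypothetical sketch, not a neon-llm-core function:

```python
# Hypothetical consumer-side check: the documented response format is a
# dict with a string "response" field.

def extract_response(message: dict) -> str:
    """Return the LLM's answer from a response dict, per the documented format."""
    if not isinstance(message.get("response"), str):
        raise ValueError("malformed LLM response: missing 'response' string")
    return message["response"]

print(extract_response({"response": "I'm fine, thanks!"}))
# → I'm fine, thanks!
```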
Connection Configuration
When running this as a Docker container, the XDG_CONFIG_HOME envvar is set to /config.
A configuration file at /config/neon/diana.yaml is required and should look like:
MQ:
  port: <MQ Port>
  server: <MQ Hostname or IP>
  users:
    <LLM MQ service_name>:
      user: <MQ user>
      password: <MQ user's password>
LLM_<LLM NAME uppercase>:
  num_parallel_processes: <integer > 0>
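The config-path convention above can be sketched as follows. The XDG_CONFIG_HOME value of /config matches the Docker setup described; the fallback default for running outside the container is our assumption:

```python
import os
from pathlib import Path

def diana_config_path() -> Path:
    """Resolve the diana.yaml location from XDG_CONFIG_HOME.

    Falls back to ~/.config outside the container (an assumption;
    in the Docker image XDG_CONFIG_HOME is set to /config).
    """
    config_home = os.environ.get("XDG_CONFIG_HOME", "~/.config")
    return Path(config_home).expanduser() / "neon" / "diana.yaml"

os.environ["XDG_CONFIG_HOME"] = "/config"  # as set in the container
print(diana_config_path())
# → /config/neon/diana.yaml
```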
Source Distribution
neon-llm-core-0.0.6.tar.gz (5.8 kB)
Built Distribution
neon_llm_core-0.0.6-py3-none-any.whl

Hashes for neon_llm_core-0.0.6-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 14c9e2c4b90a8a210c2c109f9ab9b19cac836de345dab2dfc410250fd33b847c
MD5 | 45b09aa62d4cbcfed235e1a336d02eb0
BLAKE2b-256 | 7d6f7ac14df51d0e7104d33e459469eadef04c055d9b21d8d5018674b93efec3