
Core Neon LLM service

Project description

NeonAI Core LLM

Core module for Neon LLMs

Request Format

API requests should include history, a list of two-string pairs (each a sender and a message), and the current query

Example Request:

{
 "history": [["user", "hello"], ["llm", "hi"]],
 "query": "how are you?"
}
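A request in the format above can be assembled and checked in plain Python. This is a minimal sketch; the make_request helper and its validation are illustrative, not part of this package:

```python
import json


def make_request(history, query):
    """Build a request payload in the documented format.

    history: list of [sender, message] string pairs, e.g. ["user", "hello"]
    query:   the current query string
    """
    # Validate that every history entry is a pair of strings
    for pair in history:
        if len(pair) != 2 or not all(isinstance(s, str) for s in pair):
            raise ValueError(f"Invalid history entry: {pair!r}")
    return {"history": [list(p) for p in history], "query": query}


payload = make_request([["user", "hello"], ["llm", "hi"]], "how are you?")
print(json.dumps(payload))
```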

Response Format

Responses are returned as dictionaries and should contain the following:

  • response - String LLM response to the query

Connection Configuration

When running this as a docker container, the XDG_CONFIG_HOME environment variable is set to /config. A configuration file at /config/neon/diana.yaml is required and should look like:

MQ:
  port: <MQ Port>
  server: <MQ Hostname or IP>
  users:
    <LLM MQ service_name>:
      user: <MQ user>
      password: <MQ user's password>
  LLM_<LLM NAME uppercase>:
    num_parallel_processes: <integer > 0>
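The configuration path described above can be resolved from XDG_CONFIG_HOME with the standard library alone. This sketch covers only the path logic; actual loading is handled by the service's own configuration utilities:

```python
import os
from pathlib import PurePosixPath


def resolve_config_path(env=None):
    """Resolve the diana.yaml path; XDG_CONFIG_HOME defaults to /config."""
    env = os.environ if env is None else env
    config_home = env.get("XDG_CONFIG_HOME", "/config")
    return PurePosixPath(config_home) / "neon" / "diana.yaml"


print(resolve_config_path({}))  # /config/neon/diana.yaml
```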

Enabling Chatbot personas

An LLM may be configured to connect to a /chatbots vhost and participate in discussions as described in the chatbots project. One LLM may define multiple personas to participate as:

llm_bots:
  <LLM Name>:
    - name: Assistant
      description: You are a personal assistant who responds in 40 words or less
    - name: Author
      description: You are an author and expert in literary history
    - name: Student
      description: You are a graduate student working in the field of artificial intelligence
      enabled: False

LLM Name is defined by the NeonLLMMQConnector.name property
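Since each persona carries an optional enabled flag, a consumer of this configuration can filter for active personas. A hypothetical sketch; the get_active_personas helper and the llm_bots structure below are illustrative, assuming personas default to enabled when the flag is absent:

```python
def get_active_personas(llm_bots, llm_name):
    """Return personas for one LLM, keeping those whose enabled flag is not False.

    llm_bots maps an LLM name (NeonLLMMQConnector.name) to a list of
    persona dicts with 'name', 'description', and an optional 'enabled'.
    """
    personas = llm_bots.get(llm_name, [])
    # Assumption: personas are enabled unless explicitly disabled
    return [p for p in personas if p.get("enabled", True)]


llm_bots = {
    "my_llm": [
        {"name": "Assistant",
         "description": "You are a personal assistant who responds in 40 words or less"},
        {"name": "Student",
         "description": "You are a graduate student working in the field of artificial intelligence",
         "enabled": False},
    ]
}
print([p["name"] for p in get_active_personas(llm_bots, "my_llm")])  # ['Assistant']
```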

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neon-llm-core-0.1.1a2.tar.gz (11.4 kB, source)

Built Distribution

neon_llm_core-0.1.1a2-py3-none-any.whl (22.2 kB, Python 3)

File details

Details for the file neon-llm-core-0.1.1a2.tar.gz.

File metadata

  • Download URL: neon-llm-core-0.1.1a2.tar.gz
  • Size: 11.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.2

File hashes

Hashes for neon-llm-core-0.1.1a2.tar.gz
  • SHA256: 703d09dce570f380c420f551688b148d617f5f5988410b713e5256304c2c2cc5
  • MD5: 812c4cc3b20072657265a36c16af2785
  • BLAKE2b-256: b25de90d2f35ea51005530703e1c8f9e9cead913d0c635a928f8237f067a1bf1
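The published hashes can be verified locally after downloading a file. A standard-library sketch using hashlib; the file path in the usage comment is a placeholder for wherever the archive was saved:

```python
import hashlib

# SHA256 published above for the source distribution
EXPECTED_SHA256 = "703d09dce570f380c420f551688b148d617f5f5988410b713e5256304c2c2cc5"


def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Usage (path is a placeholder):
# assert sha256_of("neon-llm-core-0.1.1a2.tar.gz") == EXPECTED_SHA256
```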

See the PyPI documentation for more details on using hashes.

File details

Details for the file neon_llm_core-0.1.1a2-py3-none-any.whl.

File hashes

Hashes for neon_llm_core-0.1.1a2-py3-none-any.whl
  • SHA256: 5d315eea91452a4fe568079740686ed05b332d20bc0b98e23e43dc615bc8b2d9
  • MD5: ff732b7c786e6ea6e85c1c7c8cbd3487
  • BLAKE2b-256: 6fd29c07b1ec8f9fd1f07dcfcc6b6f9845d038640cec20ed35bb9c86cb0d03f5

