

NeonAI Core LLM

Core module for Neon LLMs

Request Format

API requests should include history, a list of two-element string pairs representing prior conversation turns, and query, the current user input.

Example Request:

{
 "history": [["user", "hello"], ["llm", "hi"]],
 "query": "how are you?"
}
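
For illustration, here is a minimal sketch in Python of composing such a payload; the MQ transport is handled by the running service, so only the dictionary structure is shown:

import json

# Build a request payload matching the format above; roles in
# "history" alternate between "user" and "llm".
request = {
    "history": [
        ["user", "hello"],
        ["llm", "hi"],
    ],
    "query": "how are you?",
}

# Serialize for transport; the service exchanges this JSON over MQ.
payload = json.dumps(request)
print(payload)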

Response Format

Responses are returned as dictionaries and should contain the following:

  • response - String LLM response to the query
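
For example, a response to the request above might look like:

{
 "response": "I'm doing well, thank you!"
}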

Connection Configuration

When running this as a Docker container, the XDG_CONFIG_HOME environment variable is set to /config. A configuration file is required at /config/neon/diana.yaml and should look like:

MQ:
  port: <MQ Port>
  server: <MQ Hostname or IP>
  users:
    <LLM MQ service_name>:
      user: <MQ user>
      password: <MQ user's password>
LLM_<LLM NAME uppercase>:
  num_parallel_processes: <integer > 0>
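
As a sketch of how this file might be read, the following assumes the Docker defaults described above and uses PyYAML; the service name my_llm and the section LLM_MY_LLM are hypothetical placeholders:

import os

import yaml  # PyYAML

# XDG_CONFIG_HOME is set to /config inside the Docker container
config_home = os.environ.get("XDG_CONFIG_HOME", "/config")
config_path = os.path.join(config_home, "neon", "diana.yaml")

with open(config_path) as f:
    config = yaml.safe_load(f)

# MQ connection settings
mq = config["MQ"]
print(mq["server"], mq["port"])

# Credentials for a hypothetical service registered as "my_llm"
creds = mq["users"]["my_llm"]
print(creds["user"])

# Per-LLM settings for a hypothetical LLM named "MY_LLM"
print(config["LLM_MY_LLM"]["num_parallel_processes"])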

Download files

Source Distribution

neon-llm-core-0.0.4.tar.gz (5.8 kB)

Built Distribution

neon_llm_core-0.0.4-py3-none-any.whl (9.6 kB)
