
Project description

LLM Fallback

Summary

Get an LLM response from the Neon Diana backend.

Description

Converse with an LLM directly, and let an LLM answer as a fallback when Neon doesn't have a better response.

To send a single query to an LLM, ask Neon to "ask Chat GPT" followed by your question. To start conversing with an LLM, ask to "talk to Chat GPT"; everything you say will then be sent to the LLM until you say goodbye or stop talking for a while.

Enable fallback behavior by asking to "enable LLM fallback skill", or disable it by asking to "disable LLM fallback skill".

To have a copy of LLM interactions sent via email, ask Neon to "email me a copy of our conversation".
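
For developers, this fallback behavior hooks into the standard fallback mechanism of the Mycroft/OVOS skill framework that Neon builds on. Below is a minimal illustrative sketch assuming the classic FallbackSkill API; the class, the priority value, and the query_llm helper are hypothetical stand-ins, not this skill's actual source.

    from mycroft.skills import FallbackSkill


    def query_llm(utterance):
        """Hypothetical stand-in for the call to the Neon Diana backend."""
        return None


    class LLMFallback(FallbackSkill):
        """Illustrative sketch -- not the published skill's source."""

        def initialize(self):
            # Register with low priority (a high number) so more
            # specific skills get a chance to answer first.
            self.register_fallback(self.handle_fallback, 85)

        def handle_fallback(self, message):
            utterance = message.data.get("utterance", "")
            answer = query_llm(utterance)
            if answer:
                self.speak(answer)
                return True   # handled: stop the fallback chain
            return False      # unhandled: let other fallbacks try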

Examples

  • "Explain quantum computing in simple terms"
  • "Ask chat GPT what an LLM is"
  • "Talk to chat GPT"
  • "Enable LLM fallback skill"
  • "Disable LLM fallback skill"
  • "Email me a copy of our conversation"

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
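
In most cases you won't need to download the file manually; assuming the distribution name matches the wheel below, pip can fetch it directly from PyPI:

    pip install neon-skill-fallback-llm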

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neon_skill_fallback_llm-0.0.2-py3-none-any.whl (28.7 kB)

Uploaded: Python 3
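
For reference, the tags in that file name follow the standard wheel naming scheme (PEP 427):

    neon_skill_fallback_llm   distribution name
    0.0.2                     version
    py3                       Python tag (any Python 3 interpreter)
    none                      ABI tag (no compiled extensions)
    any                       platform tag (pure Python, any OS)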

File details

Details for the file neon_skill_fallback_llm-0.0.2-py3-none-any.whl.

File metadata

File hashes

Hashes for neon_skill_fallback_llm-0.0.2-py3-none-any.whl:

SHA256:      2437bb7f90e6a30e60a1c8eb4590f95bff3c73ab31d074cf932528c955bdd201
MD5:         f5a63c3db0152733f4b65b22c3068b53
BLAKE2b-256: cfa214d130690ffa57c073082c49511fc2051e5f03725e8d5398301482c0d06b

See the PyPI documentation for more details on using hashes.
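
As a quick sketch of how those digests are used, the snippet below recomputes the SHA256 of a locally downloaded wheel with Python's standard hashlib and compares it to the published value (the file path assumes the wheel sits in the current directory):

    import hashlib

    # Published SHA256 digest from the table above.
    EXPECTED_SHA256 = "2437bb7f90e6a30e60a1c8eb4590f95bff3c73ab31d074cf932528c955bdd201"

    # Recompute the digest of the downloaded wheel and compare.
    with open("neon_skill_fallback_llm-0.0.2-py3-none-any.whl", "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()

    if actual == EXPECTED_SHA256:
        print("OK: wheel matches the published SHA256 hash")
    else:
        print("MISMATCH: discard this file")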
