Project description

LLM Fallback

Summary

Get an LLM response from the Neon Diana backend.

Description

Converse with an LLM, and let it answer whenever Neon doesn't have a better response.

To send a single query to an LLM, ask Neon to "ask Chat GPT" followed by your question. To start a conversation with an LLM, ask to "talk to Chat GPT"; everything you say will then be sent to the LLM until you say goodbye or stop talking for a while.

Enable fallback behavior by asking to "enable LLM fallback skill", or disable it by asking to "disable LLM fallback skill".

To have a copy of LLM interactions sent via email, ask Neon to "email me a copy of our conversation".
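For orientation, here is a minimal sketch of how a Mycroft/OVOS-style fallback skill forwards unhandled utterances to an LLM. This is illustrative, not this package's actual source: query_llm is a hypothetical stand-in for the call this skill makes to the Neon Diana backend, and the priority value is an assumption.

    # Illustrative sketch, assuming the ovos-workshop FallbackSkill API.
    from ovos_workshop.skills.fallback import FallbackSkill

    def query_llm(prompt: str) -> str:
        """Hypothetical helper; the real skill queries the Neon Diana backend."""
        raise NotImplementedError

    class LLMFallbackSkill(FallbackSkill):
        def initialize(self):
            # A high number means low priority, so ordinary skills answer first.
            self.register_fallback(self.handle_fallback, 85)

        def handle_fallback(self, message) -> bool:
            utterance = message.data.get("utterance")
            if not utterance:
                return False  # nothing to forward; let other fallbacks try
            try:
                self.speak(query_llm(utterance))
                return True   # handled; stop lower-priority fallbacks
            except Exception:
                return False  # on failure, defer to other fallback handlers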

Examples

  • "Explain quantum computing in simple terms"
  • "Ask chat GPT what an LLM is"
  • "Talk to chat GPT"
  • "Enable LLM fallback skill"
  • "Disable LLM fallback skill"
  • "Email me a copy of our conversation"

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
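Alternatively, you can install from PyPI rather than downloading the file manually; pinning this exact pre-release version works (the distribution name below is inferred from the wheel filename):

    pip install neon-skill-fallback-llm==0.0.1a4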

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neon_skill_fallback_llm-0.0.1a4-py3-none-any.whl (28.5 kB)

Uploaded: Python 3

File details

Details for the file neon_skill_fallback_llm-0.0.1a4-py3-none-any.whl.

File hashes

Hashes for neon_skill_fallback_llm-0.0.1a4-py3-none-any.whl:

SHA256:      ef8a87717ca7437bbab07aaf0abc69316ef18fcbf4def723d1ef08ffb35973d5
MD5:         897bc6357044c0fabb1639a1afca8d6e
BLAKE2b-256: c72ae4870c30807162033ca5350ce86cd49d7d329a367b3004af4c983e4212ce

See the PyPI documentation for more details on using hashes.
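As a quick integrity check before installing, you can compare the published SHA256 digest against a local copy of the wheel. A minimal sketch using only the standard library; the file path assumes the wheel was downloaded to the current directory:

    import hashlib

    EXPECTED = "ef8a87717ca7437bbab07aaf0abc69316ef18fcbf4def723d1ef08ffb35973d5"
    WHEEL = "neon_skill_fallback_llm-0.0.1a4-py3-none-any.whl"

    # Hash the downloaded wheel and compare against the digest published above.
    with open(WHEEL, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    assert digest == EXPECTED, "SHA256 mismatch: do not install this file"
    print("SHA256 OK")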
