

Next Gen UI Embedded Llama Stack Server Inference

This module is part of the Next Gen UI Agent project.


Support for LLM inference using an embedded Llama Stack server.

Provides

  • LlamaStackEmbeddedAsyncAgentInference to use an LLM hosted in an embedded Llama Stack server, started from a provided Llama Stack config file.
  • init_inference_from_env method to initialize Llama Stack inference (remote or embedded) based on environment variables (see the sketch after this list):
    • INFERENCE_MODEL - the LLM model to use; inference is not created if undefined (a default value can be provided as a method parameter)
    • LLAMA_STACK_HOST - remote Llama Stack host; if defined, it is used together with LLAMA_STACK_PORT to create remote Llama Stack inference
    • LLAMA_STACK_PORT - remote Llama Stack port; optional, defaults to 5001
    • LLAMA_STACK_URL - remote Llama Stack URL; used to create remote Llama Stack inference if LLAMA_STACK_HOST is not defined but this URL is
    • LLAMA_STACK_CONFIG_FILE - path to the embedded Llama Stack server config file; used only if no remote Llama Stack is configured (a default value can be provided as a method parameter)
  • examples/llamastack-ollama.yaml - an example Llama Stack config file for using an LLM from Ollama running on localhost (the model is also taken from the INFERENCE_MODEL env variable).
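
The precedence these variables imply can be pictured with a short sketch. This is only an illustration of the documented behavior, not the module's actual code; resolve_inference_config and the returned tuples are hypothetical:

import os

def resolve_inference_config(default_model=None, default_config_file=None):
    """Illustrative sketch of the documented env-variable precedence."""
    model = os.getenv("INFERENCE_MODEL", default_model)
    if not model:
        return None  # inference is not created if no model is configured

    host = os.getenv("LLAMA_STACK_HOST")
    url = os.getenv("LLAMA_STACK_URL")
    config_file = os.getenv("LLAMA_STACK_CONFIG_FILE", default_config_file)

    if host:
        port = os.getenv("LLAMA_STACK_PORT", "5001")  # optional, defaults to 5001
        return ("remote", f"http://{host}:{port}", model)
    if url:
        return ("remote", url, model)  # only reached when LLAMA_STACK_HOST is unset
    if config_file:
        return ("embedded", config_file, model)
    return None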

Installation

pip install -U next_gen_ui_llama_stack_embedded

Example

Instantiation of LlamaStackEmbeddedAsyncAgentInference

from next_gen_ui_llama_stack_embedded import LlamaStackEmbeddedAsyncAgentInference

config_file = "example/llamastack-ollama.yaml"
model = "llama3.2:latest"

inference = LlamaStackEmbeddedAsyncAgentInference(config_file, model)

# init UI Agent using inference
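
The trailing comment stands in for passing the inference into the UI Agent. A hypothetical sketch of that wiring is below; NextGenUIAgent and its inference parameter are assumed names for illustration, not the package's confirmed API:

# Hypothetical wiring; NextGenUIAgent and its "inference" parameter are
# assumed names, not confirmed API of the Next Gen UI Agent project.
# from next_gen_ui_agent import NextGenUIAgent
# agent = NextGenUIAgent(inference=inference)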

Inference initialization from environment variables

from next_gen_ui_llama_stack_embedded import init_inference_from_env

# default model used if env variable is not defined
INFERENCE_MODEL_DEFAULT = "granite3.3:2b"

inference = init_inference_from_env(default_model=INFERENCE_MODEL_DEFAULT)

if inference:
    ...  # init UI Agent using inference
else:
    print("Inference not initialized because not configured in env variables")

Download files


Source Distributions

No source distribution files are available for this release.

Built Distribution

next_gen_ui_llama_stack_embedded-0.4.0-py3-none-any.whl

File details

Details for the file next_gen_ui_llama_stack_embedded-0.4.0-py3-none-any.whl.


File hashes

Hashes for next_gen_ui_llama_stack_embedded-0.4.0-py3-none-any.whl

Algorithm     Hash digest
SHA256        a0db98b460dd8b220399420f00ab8a9b8997d79a6d8aa517bffe4f8510d5af7c
MD5           8f5557d7c9d2b2b063f93cbe64e91699
BLAKE2b-256   1db6ae93e4a722196defd11553642524c00df6a877001710c52096efb3a35b73
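
To verify a downloaded wheel against the SHA256 digest above, a standard-library snippet such as the following can be used (it assumes the wheel sits in the current directory):

import hashlib

EXPECTED_SHA256 = "a0db98b460dd8b220399420f00ab8a9b8997d79a6d8aa517bffe4f8510d5af7c"

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large wheels do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("next_gen_ui_llama_stack_embedded-0.4.0-py3-none-any.whl")
print("OK" if digest == EXPECTED_SHA256 else "hash mismatch")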

