Next Gen UI Embedded Llama Stack Server Inference
This module is part of the Next Gen UI Agent project. It provides support for LLM inference using an embedded Llama Stack server.
Provides
- `LlamaStackEmbeddedAsyncAgentInference` - uses an LLM hosted in an embedded Llama Stack server, started from the provided Llama Stack config file
- `init_inference_from_env` - method to initialize Llama Stack inference (remote or embedded) based on environment variables (see the sketch after this list):
  - `INFERENCE_MODEL` - LLM model to use; inference is not created if undefined (a default value can be provided as a method parameter)
  - `LLAMA_STACK_HOST` - remote Llama Stack host; if defined, it is used together with `LLAMA_STACK_PORT` to create remote Llama Stack inference
  - `LLAMA_STACK_PORT` - remote Llama Stack port; optional, defaults to `5001`
  - `LLAMA_STACK_URL` - remote Llama Stack URL; if `LLAMA_STACK_HOST` is not defined but this URL is, it is used to create remote Llama Stack inference
  - `LLAMA_STACK_CONFIG_FILE` - path to the embedded Llama Stack server config file; used only if no remote Llama Stack is configured (a default value can be provided as a method parameter)
- `examples/llamastack-ollama.yaml` - example Llama Stack config file for using an LLM from Ollama running on localhost (with the model also taken from the `INFERENCE_MODEL` env variable)
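To illustrate the precedence rules above, here is a minimal sketch of configuring remote inference entirely through the process environment. The host and model values are placeholders, not values mandated by the module:

```python
import os

from next_gen_ui_llama_stack_embedded import init_inference_from_env

# Placeholder values: LLAMA_STACK_HOST takes precedence over LLAMA_STACK_URL,
# and LLAMA_STACK_PORT is optional (defaults to 5001).
os.environ["INFERENCE_MODEL"] = "llama3.2:latest"
os.environ["LLAMA_STACK_HOST"] = "localhost"
os.environ["LLAMA_STACK_PORT"] = "5001"

# Remote Llama Stack inference is created because LLAMA_STACK_HOST is defined.
inference = init_inference_from_env()
```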
Installation
```
pip install -U next_gen_ui_llama_stack_embedded
```
Example
Instantiation of `LlamaStackEmbeddedAsyncAgentInference`
```python
from next_gen_ui_llama_stack_embedded import LlamaStackEmbeddedAsyncAgentInference

config_file = "examples/llamastack-ollama.yaml"
model = "llama3.2:latest"
inference = LlamaStackEmbeddedAsyncAgentInference(config_file, model)
# init UI Agent using inference
```
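To make the final comment concrete, here is a hedged sketch of passing the inference to the UI Agent. The `NextGenUIAgent` and `AgentConfig` names are assumptions about the wider Next Gen UI Agent project's API and may differ in your version:

```python
# Assumption: the UI Agent accepts the inference via its config object.
from next_gen_ui_agent import AgentConfig, NextGenUIAgent  # assumed API

agent = NextGenUIAgent(config=AgentConfig(inference=inference))
```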
Inference initialization from environment variables
```python
from next_gen_ui_llama_stack_embedded import init_inference_from_env

# default model used if the INFERENCE_MODEL env variable is not defined
INFERENCE_MODEL_DEFAULT = "granite3.3:2b"

inference = init_inference_from_env(default_model=INFERENCE_MODEL_DEFAULT)
if inference:
    # init UI Agent using inference
    ...
else:
    print("Inference not initialized because it is not configured in env variables")
```
Download files
No source distribution files are available for this release; only the built distribution (wheel) described below is published.
File details
Details for the file next_gen_ui_llama_stack_embedded-0.3.0-py3-none-any.whl.
File metadata
- Download URL: next_gen_ui_llama_stack_embedded-0.3.0-py3-none-any.whl
- Upload date:
- Size: 4.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `fcdb76c11726e621caa22d0b119e850343b7d11d424f3cf9eb26fe69d3b65007` |
| MD5 | `5209291efdcaa892c7b0b145ffb222d9` |
| BLAKE2b-256 | `f542f4a3465624d61a9c925dd786b86018bbf633b06e8ae5a09b29c5b25e0910` |