
Project description

ramalama-stack


An external provider for Llama Stack allowing for the use of RamaLama for inference.

Installing

You can install ramalama-stack from PyPI via pip install ramalama-stack

This will also install Llama Stack and RamaLama if they are not already installed.
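
For example, a minimal sketch that installs the provider into a fresh virtual environment (any Python 3 environment works):

python3 -m venv .venv
source .venv/bin/activate
pip install ramalama-stack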

Usage

[!WARNING] The following workaround is currently needed to run this provider - see https://github.com/containers/ramalama-stack/issues/53 for more details

curl --create-dirs --output ~/.llama/providers.d/remote/inference/ramalama.yaml https://raw.githubusercontent.com/containers/ramalama-stack/refs/tags/v0.2.4/src/ramalama_stack/providers.d/remote/inference/ramalama.yaml
curl --create-dirs --output ~/.llama/distributions/ramalama/ramalama-run.yaml https://raw.githubusercontent.com/containers/ramalama-stack/refs/tags/v0.2.4/src/ramalama_stack/ramalama-run.yaml
  1. First you will need a RamaLama server running - see the RamaLama project docs for more information.

  2. Ensure you set your INFERENCE_MODEL environment variable to the name of the model you have running via RamaLama.

  3. You can then run the RamaLama external provider via llama stack run ~/.llama/distributions/ramalama/ramalama-run.yaml (a complete example follows below).
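Putting the three steps together, a minimal sketch of the full flow (the model name llama3.2 is only an example; ramalama serve listens on port 8080 by default, matching the provider's default RAMALAMA_URL):

ramalama serve llama3.2 &
export INFERENCE_MODEL=llama3.2
llama stack run ~/.llama/distributions/ramalama/ramalama-run.yaml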

[!NOTE] You can also run the RamaLama external provider inside of a container via Podman

podman run \
 --net=host \
 --env RAMALAMA_URL=http://0.0.0.0:8080 \
 --env INFERENCE_MODEL=$INFERENCE_MODEL \
 quay.io/ramalama/llama-stack
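
On macOS, where host networking is unavailable (see the note in the Llama Stack User Interface section below), one possible sketch publishes the server port and points RAMALAMA_URL at Podman's host.containers.internal host alias instead; treat both the alias and the RamaLama port as assumptions to verify for your setup:

podman run \
 -p 8321:8321 \
 --env RAMALAMA_URL=http://host.containers.internal:8080 \
 --env INFERENCE_MODEL=$INFERENCE_MODEL \
 quay.io/ramalama/llama-stack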

This will start a Llama Stack server, which uses port 8321 by default. You can test that it works by configuring the Llama Stack Client to run against this server and sending a test request.

  • If your client is running on the same machine as the server, you can run llama-stack-client configure --endpoint http://0.0.0.0:8321 --api-key none
  • If your client is running on a different machine, you can run llama-stack-client configure --endpoint http://<hostname>:8321 --api-key none
  • The client should give you a message similar to Done! You can now use the Llama Stack Client CLI with endpoint <endpoint>
  • You can then test the server by running llama-stack-client inference chat-completion --message "tell me a joke" which should return something like
ChatCompletionResponse(
    completion_message=CompletionMessage(
        content='A man walked into a library and asked the librarian, "Do you have any books on Pavlov\'s dogs
and Schrödinger\'s cat?" The librarian replied, "It rings a bell, but I\'m not sure if it\'s here or not."',
        role='assistant',
        stop_reason='end_of_turn',
        tool_calls=[]
    ),
    logprobs=None,
    metrics=[
        Metric(metric='prompt_tokens', value=14.0, unit=None),
        Metric(metric='completion_tokens', value=63.0, unit=None),
        Metric(metric='total_tokens', value=77.0, unit=None)
    ]
)
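
If the request fails, a quick sanity check is to confirm the server actually registered your model; the llama-stack-client CLI can list the models it knows about:

llama-stack-client models list

The value of INFERENCE_MODEL should appear in the output.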

Llama Stack User Interface

Llama Stack includes an experimental user interface; check it out here.

To deploy the UI, run this:

podman run -d --rm --network=container:ramalama --name=streamlit quay.io/redhat-et/streamlit_client:0.1.0

[!NOTE] If running on macOS (not Linux), --network=host doesn't work. You'll need to publish the additional ports 8321:8321 and 8501:8501 with the ramalama serve command, then run the UI container with --network=container:ramalama.

If running on Linux, use --network=host or -p 8501:8501 instead. The Streamlit container can reach the RamaLama endpoint either way.
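
For example, on Linux the UI container can simply share the host network (a sketch; Streamlit listens on its default port 8501):

podman run -d --rm --network=host --name=streamlit quay.io/redhat-et/streamlit_client:0.1.0

The UI should then be reachable at http://localhost:8501.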

Download files

Download the file for your platform.

Source Distribution

ramalama_stack-0.3.0a0.tar.gz (159.8 kB)

Built Distribution


ramalama_stack-0.3.0a0-py3-none-any.whl (17.5 kB)

File details

Details for the file ramalama_stack-0.3.0a0.tar.gz.

File metadata

  • Download URL: ramalama_stack-0.3.0a0.tar.gz
  • Size: 159.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ramalama_stack-0.3.0a0.tar.gz

  • SHA256: 5c31df808c12f90ed8d31488065a3e9f2b3a00241f7bd70a292e424557db5eb5
  • MD5: dd1d94ede529518f6e7ad63236b27d61
  • BLAKE2b-256: 81523d6cae85e1c18cfffa616736db1395eafcc5dc9f951de9dc0eeadc576aa9
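
To check a downloaded file against the published digests, compute the SHA256 locally (sha256sum ships with GNU coreutils; on macOS, shasum -a 256 behaves the same):

sha256sum ramalama_stack-0.3.0a0.tar.gz

The output should match the SHA256 value above.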


Provenance

The following attestation bundles were made for ramalama_stack-0.3.0a0.tar.gz:

Publisher: pypi.yml on containers/ramalama-stack


File details

Details for the file ramalama_stack-0.3.0a0-py3-none-any.whl.

File hashes

Hashes for ramalama_stack-0.3.0a0-py3-none-any.whl

  • SHA256: e56d8530cd05bf16f135f436592720830bec278bec97fea24d379450f771aafb
  • MD5: 713a58689e16c027b8772aacd1c3b8ad
  • BLAKE2b-256: ffe582f4cfe8dc59860691c8ae2f792e47fd5ba62fb402acc9614cee95648dae


Provenance

The following attestation bundles were made for ramalama_stack-0.3.0a0-py3-none-any.whl:

Publisher: pypi.yml on containers/ramalama-stack

