An opinionated Llama Server engine with a focus on agentic tasks
gallama - Guided Agentic Llama
gallama is an opinionated Python library that provides an LLM inference API service backend optimized for local agentic tasks. It focuses on model serving, realtime, multimodal, and local inference integrations rather than multi-agent orchestration.
Gallama is predominantly tested with the Exllama V3 workflow at this point; other backends are still available, but they may have bugs or rough edges depending on the model and feature path. The backend currently relies mainly on the Exllama family, with llama.cpp support still experimental.
Check out TabbyAPI if you want a reliable and pure ExllamaV3 API backend.
Key Features:
- Native Tool Calling
- OpenAI chat completion API
- Anthropic message API
- Compatible with Claude Code
Native Tool Calling
Gallama supports native tool calling. Instead of forcing every model into one synthetic format, Gallama uses the model's own tool-calling format when that format is supported by a parser in src/gallama/backend/llm/prompt_engine/by_model.
Current models with custom native tool parsers:
- Qwen JSON family: covers `qwen2`, `qwen2_5_vl`, `qwen3`, `qwen3_moe`, `qwen3_next`, `qwen3_vl`, `qwen3_vl_moe`
- Qwen XML family: covers `qwen3_5`, `qwen3_5_moe`, `step3p5`, `nemotron_h`
- GPT-OSS Harmony family: covers `gpt_oss`
- GLM-4 family: covers `glm4`, `glm4_moe`, `glm4v`, `glm4v_moe`
- MiniMax family: covers `minimax`, `minimax_m2`
- Ministral / Mistral 3 family: covers `ministral3`, `mistral3` (including Devstral-style `mistral3` models)
For these models, Gallama expects the model to emit its native tool-call structure, and Gallama parses that structure back into OpenAI-compatible tool_calls or Anthropic-compatible tool_use blocks.
If you want to use a new model with a different native tool-calling format, Gallama will usually need a new parser added under src/gallama/backend/llm/prompt_engine/by_model so the backend can interpret that model correctly. Without a matching parser, tool calling may fail or be decoded incorrectly even if the model itself knows how to call tools.
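As a sketch of what this means client-side: regardless of which native format the model emitted, the parsed result arrives in the standard OpenAI `tool_calls` shape. The helper and message dict below are illustrative only, not part of Gallama's API:

```python
import json

def extract_tool_calls(message):
    """Return (name, parsed-arguments) pairs from an OpenAI-style message dict."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        # In the OpenAI format, arguments arrive as a JSON-encoded string.
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

# Illustrative assistant message as Gallama would return it after parsing
# a native tool call back into the OpenAI-compatible structure:
message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Boston"}'},
    }],
}
print(extract_tool_calls(message))  # [('get_weather', {'city': 'Boston'})]
```

The same client code works unchanged whether the underlying model speaks the Qwen JSON, GLM-4, or Harmony tool format, since the parser normalizes the output server-side.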
Reasoning Output
Gallama also returns model reasoning when the model emits it.
With the OpenAI-compatible API, reasoning is returned on the assistant message as reasoning in the raw response payload:
completion = client.chat.completions.create(
    model="qwen3",
    messages=[{"role": "user", "content": "Solve 27 * 43. Give only the answer."}],
)
message = completion.choices[0].message
print(message.content)
# Depending on the SDK version, custom fields may be available either directly
# or through a raw/model-extra view of the response object.
print(getattr(message, "reasoning", None))
print(getattr(message, "model_extra", {}).get("reasoning") if getattr(message, "model_extra", None) else None)
With the Anthropic-compatible API, reasoning is returned as thinking blocks inside response.content:
response = client.messages.create(
    model="qwen3",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Solve 27 * 43. Give only the answer."}],
)
thinking_blocks = [block for block in response.content if block.type == "thinking"]
reasoning_text = "\n".join(block.thinking for block in thinking_blocks)
print(reasoning_text)
This makes it possible to inspect the model's intermediate reasoning while still using standard OpenAI or Anthropic client libraries against Gallama.
Quick Start
Head to the installation guide further down this page.
Then check out Examples_Notebook.ipynb in the examples folder.
A simple Python Streamlit chat UI is included in the examples folder under streamlit.
Or check out GallamaUI.
You can also refer to the src/tests folder for more examples using the OpenAI and Anthropic clients.
Features
OpenAI Compatible Server
Fully compatible with the OpenAI client.
Install the openai client and override its base URL as follows:
pip install openai
import os
from openai import OpenAI
os.environ['OPENAI_API_KEY'] = 'test'
client = OpenAI(base_url='http://127.0.0.1:8000/v1')
messages = [{"role": "user", "content": "Which is faster in terms of reaction speed: a cat or a dog?"}]
completion = client.chat.completions.create(
    model="mistral",
    messages=messages,
)
print(completion)
See src/tests/test_openai.py and src/tests/test_openai_server.py for more complete examples.
Anthropic Compatible Server
Gallama also exposes an Anthropic-compatible Messages endpoint.
Install the Anthropic SDK and point it at your local server:
pip install anthropic
import anthropic
client = anthropic.Anthropic(
    base_url="http://127.0.0.1:8000",
    api_key="test",
)
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Which is faster in terms of reaction speed: a cat or a dog?"}
    ],
)
print(response.content)
See src/tests/test_anthropic.py for a more complete Anthropic client example suite.
Claude Code
You can also point Claude Code at a local Gallama server by overriding the Anthropic base URL and auth token:
ANTHROPIC_BASE_URL="http://127.0.0.1:8000/" ANTHROPIC_AUTH_TOKEN="local" claude --model minimax
This lets Claude Code talk to your local model through Gallama's Anthropic-compatible API.
MCP
Gallama can discover and execute tools from a remote streamable HTTP MCP server on the server side. The request shape depends on which client surface you use:
- OpenAI Chat Completions: add a tool with `"type": "mcp"`
- OpenAI Responses: add a tool with `"type": "mcp"`
- Anthropic Messages: define `mcp_servers` and reference them with a `"type": "mcp_toolset"` entry in `tools`
Current limitations:
- MCP currently works only for non-streaming requests
- `require_approval` is only supported as `"never"` right now
- Mixing MCP tool calls and normal function tool calls in the same model turn is not supported yet
OpenAI Chat Completions
from openai import OpenAI
client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="test",
)
completion = client.chat.completions.create(
    model="qwen3",
    max_tokens=3000,
    messages=[
        {
            "role": "user",
            "content": "Use the MCP weather tool and tell me the result.",
        }
    ],
    tools=[
        {
            "type": "mcp",
            "server_label": "weather",
            "server_url": "http://127.0.0.1:18001/mcp",
            "allowed_tools": ["get_weather"],
            "require_approval": "never",
        }
    ],
)
print(completion.choices[0].message.content)
OpenAI Responses API
from openai import OpenAI
client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="test",
)
response = client.responses.create(
    model="qwen3",
    input="Use the MCP weather tool and tell me the result.",
    max_output_tokens=300,
    tools=[
        {
            "type": "mcp",
            "server_label": "weather",
            "server_url": "http://127.0.0.1:18001/mcp",
            "allowed_tools": ["get_weather"],
            "require_approval": "never",
        }
    ],
)
print(response.output_text)
Gallama also prepends MCP trace items to the Responses output, so you will see mcp_list_tools and mcp_call entries alongside the assistant output.
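If you want to separate those trace items from the assistant output programmatically, filtering on the item type is enough. A minimal sketch (the `split_mcp_trace` helper and the dict-shaped items are illustrative; the type names `mcp_list_tools` and `mcp_call` come from the behavior described above):

```python
def split_mcp_trace(output_items):
    """Split a Responses `output` list into MCP trace items and everything else.

    Works on any objects or dicts that expose a `type` field, matching the
    mcp_list_tools / mcp_call entries prepended to the output.
    """
    def item_type(item):
        return item.get("type") if isinstance(item, dict) else getattr(item, "type", None)

    mcp_types = ("mcp_list_tools", "mcp_call")
    trace = [i for i in output_items if item_type(i) in mcp_types]
    rest = [i for i in output_items if item_type(i) not in mcp_types]
    return trace, rest

# Example with illustrative dict-shaped items:
items = [
    {"type": "mcp_list_tools", "tools": ["get_weather"]},
    {"type": "mcp_call", "name": "get_weather"},
    {"type": "message", "content": "It is sunny."},
]
trace, rest = split_mcp_trace(items)
print(len(trace), len(rest))  # 2 1
```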
Anthropic Messages API
Gallama accepts an Anthropic-compatible MCP request shape on /v1/messages, but this is not a byte-for-byte mirror of Anthropic's current hosted MCP connector beta. In Anthropic's official API, MCP is documented separately under the MCP connector docs and requires a beta header. Gallama's local compatibility layer does not require that beta header.
import json
import urllib.request
payload = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 3000,
    "messages": [
        {
            "role": "user",
            "content": "Use the MCP weather tool and tell me the result.",
        }
    ],
    "mcp_servers": [
        {
            "type": "url",
            "name": "weather",
            "url": "http://127.0.0.1:18001/mcp",
        }
    ],
    "tools": [
        {
            "type": "mcp_toolset",
            "mcp_server_name": "weather",
            "allowed_tools": ["get_weather"],
        }
    ],
}
request = urllib.request.Request(
    "http://127.0.0.1:8000/v1/messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "content-type": "application/json",
        "x-api-key": "test",
        "anthropic-version": "2023-06-01",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    data = json.loads(response.read().decode("utf-8"))

for block in data["content"]:
    print(block)
When using the Anthropic-compatible endpoint, Gallama returns MCP activity as mcp_tool_use and mcp_tool_result blocks before the normal text block.
If your MCP server requires auth, include authorization_token or headers on the MCP server/tool definition.
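As an illustrative sketch of attaching credentials to the Anthropic-style server entry from the payload above (the token and header values are placeholders; use whichever of `authorization_token` or `headers` your MCP server expects):

```python
# Hypothetical MCP server entry carrying credentials, following the
# "mcp_servers" payload shape shown in the example above:
mcp_server_with_auth = {
    "type": "url",
    "name": "weather",
    "url": "http://127.0.0.1:18001/mcp",
    # Bearer-style token forwarded to the MCP server:
    "authorization_token": "my-secret-token",
    # Or explicit headers, if your server expects custom ones:
    "headers": {"x-api-key": "my-secret-token"},
}
```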
If you are targeting Anthropic's hosted API instead of Gallama, use Anthropic's MCP connector docs and beta versioning instead of this local Gallama example.
See src/tests/test_openai.py, src/tests/test_anthropic.py, and src/tests/test_responses.py for live end-to-end MCP examples against a dummy MCP server.
Function Calling
Supports function calling for all models, mimicking OpenAI's behavior for tool_choice="auto": if tool usage is not applicable, the model generates a normal response instead.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    }
]
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
completion = client.chat.completions.create(
    model="mistral",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
print(completion.choices[0].message.tool_calls[0].function)
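To complete the round trip, execute the tool locally and send the result back as a `role: "tool"` message referencing the tool call id, then call the API again for the final natural-language answer. A minimal sketch, where `run_tool_call` and the local weather lookup are hypothetical helpers, not part of Gallama:

```python
import json

def run_tool_call(tool_call):
    """Dispatch a single OpenAI-style tool call to a local implementation."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_current_weather":
        # Hypothetical local implementation of the tool:
        result = {"location": args["location"], "temperature": 22,
                  "unit": args.get("unit", "celsius")}
    else:
        result = {"error": "unknown tool"}
    # The tool result goes back as a "tool" role message tied to the call id.
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    }

# Illustrative tool call as it would appear in message.tool_calls:
tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "get_current_weather",
                 "arguments": '{"location": "Boston, MA"}'},
}
tool_message = run_tool_call(tool_call)
print(tool_message["role"], tool_message["tool_call_id"])  # tool call_0
# Append the assistant message (with its tool_calls) plus tool_message to
# `messages`, then call client.chat.completions.create(...) again to get
# the final answer.
```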
Multiple Concurrent Models
Run multiple models (different or the same) with automatic load balancing and request routing. VRAM usage per model can be auto-allocated or split across specific GPUs. Each model runs as a dedicated FastAPI process to avoid threading issues and guarantee speed. Do note, however, that this is more demanding on the system, since multiple FastAPI processes will be running.
Basic
gallama run -id "model_id=llama-3.1-8B" -id "model_id=mistral"
Customize GPU split
gallama run -id "model_id=qwen2-72B gpus=20,15,15,0" -id "model_id=Llama3.1-8B gpus=0,0,0,20"
OpenAI Embedding Endpoint
Uses the Infinity embedding library to serve embeddings via the OpenAI client.
response = client.embeddings.create(
    input="Your text string for embedding goes here",
    model="Alibaba-NLP/gte-large-en-v1.5"
)
print(response.data[0].embedding)
Legacy OpenAI Completion Endpoint
Support for the Legacy Completion Endpoint.
client.completions.create(
    model="mistral",
    prompt="Tell me a story about a Llama in 200 words",
    max_tokens=1000,
    temperature=0
)
Format Enforcement
Ensure output conforms to specified patterns with the following options, which can be set in extra_body when using the OpenAI client.
completion = client.chat.completions.create(
    model="mistral",
    messages=[{"role": "user", "content": "Is smoking bad for health? Answer with Yes or No"}],
    temperature=0.1,
    max_tokens=200,
    extra_body={
        # "leading_prompt": leading_prompt,      # prefix the generation with some string
        # "regex_pattern": regex_pattern,        # define the regex for the whole generation
        # "regex_prefix_pattern": '(Yes|No)\.',  # define the regex to match the starting words
        # "stop_words": stop_words,              # define the words to stop generation at
    },
)
Streaming
Streaming is fully supported.
messages = [{"role": "user", "content": "Tell me a 200-word story about a Llama"}]
completion = client.chat.completions.create(
    model="mistral",
    messages=messages,
    stream=True,
    temperature=0.1,
)
for chunk in completion:
    # delta.content can be None on some chunks (e.g. the final one), so guard it
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='')
Remote Model Management
Load and unload models via API calls.
Start the gallama server if it is not currently running:
gallama run
import requests
api_url = "http://127.0.0.1:8000/add_model"
payload = [
    {
        "model_id": "qwen-2-72B",
        "gpus": [22, 22, 4, 0],
        "cache_size": 32768,
    },
    {
        "model_id": "gemma-2-9b",
        "gpus": [0, 0, 0, 12],
        "cache_size": 32768,
    },
    {
        "model_id": "multilingual-e5-large-instruct",
        "gpus": [0, 0, 0, 5],
    },
]
response = requests.post(api_url, json=payload)
Installation
gallama requires certain components to be installed and functioning.
Ensure that you have a working backend installed before using Gallama. In practice, Exllama V3 is the backend I test against most often. Other backends may still work, but they may need extra debugging depending on the model and feature set.
OS-level packages are required as follows. For speech-to-text, you will need to install the dependencies required by faster-whisper, most notably cuDNN: https://developer.nvidia.com/cudnn
For Text to Speech, install the following package:
apt-get install portaudio19-dev ffmpeg
Now install gallama from pip:
pip install gallama
Optional extras are available if you only want specific components:
pip install "gallama[all]"
pip install "gallama[exl2]"
pip install "gallama[exl3]"
pip install "gallama[llama-cpp]"
pip install "gallama[transformers-backend]"
pip install "gallama[utils]"
pip install "gallama[embedding]"
pip install "gallama[stt]"
pip install "gallama[tts]"
pip install "gallama[video]"
pip install "gallama[vllm]"
pip install "gallama[sglang]"
Extras can be combined as needed:
pip install "gallama[exl3,tts]"
pip install "gallama[llama-cpp,stt]"
For newer model support, the latest transformers release is often needed. If a model is not loading correctly, update it with:
pip install -U transformers
Or, install from source:
git clone https://github.com/remichu-ai/gallama.git
cd gallama
pip install .
If you're starting from scratch and don't have these dependencies yet, follow these steps:
- Create a virtual environment (recommended). Use Python 3.12 if you can, or minimally 3.11, for future tensor parallel compatibility.
conda create --name genv python=3.12
conda activate genv
- Install and verify your backend:
- Exllama V3 is the recommended path if you want the setup closest to what is actively tested.
- Exllama V2, llama.cpp, transformers, vLLM, sglang, and other backends are still available, but expect some backend-specific rough edges.
(Optional) Install llama cpp-python:
- Follow instructions at llama-cpp-python
- Test with examples from llama-cpp-python Examples
- (Optional) Install Flash Attention for improved performance:
- Follow instructions at Flash Attention GitHub
- Install gallama:
pip install gallama
Or install with only the components you need:
pip install "gallama[all]"
pip install "gallama[exl3]"
pip install "gallama[exl2,stt]"
Or, install from source:
git clone https://github.com/remichu-ai/gallama.git
cd gallama
pip install ".[all]"
Usage
Follow these steps to use the model.
Setup
- Initialize gallama:
gallama run
This creates a `model_config.yaml` file in `~/gallama`.
- Update `~/gallama/model_config.yaml` with your model configurations.
- Launch the model.
Simple method:
gallama run mistral
Advanced method:
gallama run -id "model_id=mistral"
model_config.yaml
Each top-level key is the model name that Gallama will expose through the API. The value under that key is the configuration used to load the backend.
Minimal Exllama example:
mistral:
  backend: exllama
  model_id: /home/your-user/gallama/models/Mistral-7B-instruct-v0.3-4.5bpw-exl2
  prompt_template: Mistral_large
  gpus: auto
Typical keys:
- `backend`: backend name such as `exllama`, `llama_cpp`, `llama_cpp_server`, `ik_llama`, `transformers`, `embedding`, or `kokoro`
- `model_id`: local path to the model or model directory
- `prompt_template`: prompt formatter to use for the model family
- `gpus`: usually `auto`, but can also be a per-GPU split
- `max_seq_len`: override context length if needed
- `cache_quant`: KV cache quantization such as `FP16`, `Q4`, `Q6`, or `Q8`
- `quant`: optional metadata for the model quantization you downloaded
- `eos_token_list`: optional extra EOS tokens for models that need them
- `backend_extra_args`: backend-specific options, commonly used for `transformers`, `sglang`, `kokoro`, and similar backends
Example with a transformers backend:
llama-3.2-Vision-11B_transformers:
  backend: transformers
  model_id: /home/your-user/gallama/models/llama-3.2-Vision-11B-4.0bpw-transformers
  prompt_template: Llama3.2-VL
  gpus: auto
  cache_quant: Q4
  quant: 4.0
  backend_extra_args:
    model_class: transformers.MllamaForConditionalGeneration
    tokenizer_class: transformers.AutoTokenizer
    processor_class: transformers.AutoProcessor
    model_class_extra_kwargs:
      attn_implementation: sdpa
Example with a llama_cpp backend:
codestral_llama_cpp:
  backend: llama_cpp
  model_id: /home/your-user/gallama/models/codestral-4.0bpw-llama_cpp/Codestral-22B-v0.1-Q4_K_M.gguf
  prompt_template: Mistral
  gpus: auto
  cache_quant: Q4
  quant: 4.0
Example with a llama_cpp_server backend:
codestral_llama_cpp_server:
  backend: llama_cpp_server
  model_id: mistralai/Codestral-22B-v0.1
  prompt_template: Mistral
  max_seq_len: 32768
  backend_extra_args:
    base_url: http://127.0.0.1:8080
    cache_prompt: true
    use_server_tokenizer: true
This backend keeps prompt templating in Gallama and uses llama-server mainly as a generation engine through /completion and /tokenize.
Start llama-server separately, for example:
llama-server -m /path/to/model.gguf --port 8080 --ctx-size 32768
Notes for llama_cpp_server:
- `backend_extra_args.base_url` is required.
- `model_id` is still used by Gallama's prompt engine. If `prompt_template` is omitted, `model_id` must be a valid Hugging Face model/tokenizer identifier or a local tokenizer directory so Gallama can load the chat template.
- If you want to avoid Hugging Face tokenizer loading, set an explicit `prompt_template` such as `Mistral`, `Llama3`, or another template from `src/gallama/data/model_token.yaml`.
- Gallama tokenizes prompts through `llama-server` with `add_special=false`, then sends token arrays to `/completion` for text-only requests.
- Image inputs are supported by switching `/completion` into prompt-object mode with `prompt_string + multimodal_data`.
- Audio inputs are not supported yet.
- Direct video input is not sent to `llama-server`, but Gallama can still fall back to converting video frames into images for backends that support images.
- `use_server_tokenizer` must stay `true` in the current implementation.
Example with an ik_llama backend:
qwen_ik_llama:
  backend: ik_llama
  model_id: /home/your-user/models/qwen.gguf
  prompt_template: Qwen2-VL
  max_seq_len: 32768
  backend_extra_args:
    base_url: http://127.0.0.1:8080
    cache_prompt: true
    use_server_tokenizer: true
Notes for ik_llama:
- This backend inherits the `llama_cpp_server` integration and uses the same `/completion` and `/tokenize` flow.
- It automatically applies `backend_extra_args.multimodal_marker: "<__media__>"` for multimodal `/completion` requests unless you override it explicitly.
- Use `ik_llama` when the base `llama_cpp_server` backend works for text but `ik_llama.cpp` vision requests require the server-side MTMD marker format.
Notes:
- Use the YAML key itself as the API model name. For example, if the key is `qwen-2.5-32B`, then that is the model string to pass in the client request.
- `prompt_template` matters. If the wrong one is chosen, the model may still load but chat quality or tool use can break.
- `backend_extra_args` is the place for backend-specific tuning such as custom tokenizer/model/processor classes or TTS model paths.
- You can keep your Gallama config in another location by setting `GALLAMA_HOME_PATH`.
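For example, to point Gallama at a config directory elsewhere (the path below is illustrative):

```shell
# Keep Gallama's config (model_config.yaml etc.) outside the default ~/gallama:
export GALLAMA_HOME_PATH=/opt/gallama-config
```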
Advanced Usage
Using gallama run -id followed by a string of key=value pairs unlocks additional options:
Customize the model launch using various parameters. Available parameters for the -id option include:
- `model_name`: API model name to expose from Gallama. Required when running without a matching `model_config.yaml` entry.
- `model_id`: Model path or Hugging Face repo ID. Required for YAML-free launch and optional when it already exists in `model_config.yaml`.
- `gpus`: VRAM usage for each GPU, comma-separated list of floats (optional)
- `cache_size`: Context length for cache text in integers (optional)
- `cache_quant`: Quantization to use for cache, options are "FP16", "Q4", "Q6", "Q8" (optional)
- `max_seq_len`: Maximum sequence length (optional)
- `backend`: Model engine backend. Options include `exllama`, `exllamav3`, `llama_cpp`, `llama_cpp_server`, `ik_llama`, `transformers`, `vllm`, `sglang`, `mlx_vlm`, `embedding`, `faster_whisper`, `mlx_whisper`, `kokoro`.
- `tp`: enable tensor parallel with exllama v2 (experimental). See further below.
Run Without model_config.yaml
If you fully specify the model on the CLI, Gallama can run it without a matching entry in ~/gallama/model_config.yaml.
Minimum required arguments for a YAML-free LLM launch:
- `model_name`
- `model_id`
- `backend`
Example:
gallama run -id "model_name=minimax model_id=/path/to/model backend=exllamav3"
To also write the same CLI logs to a file:
gallama run -id "model_name=minimax model_id=/path/to/model backend=exllamav3" --log-file ./log/gallama.log
To control log verbosity:
gallama run -id "model_name=minimax model_id=/path/to/model backend=exllamav3" -v
gallama run -id "model_name=minimax model_id=/path/to/model backend=exllamav3" -vv
Useful optional arguments:
- `max_seq_len=32768`
- `gpus=20,20`, or leave it automatic
- `cache_size=32768`
- `cache_quant=Q4`
- `prompt_template=<template-name>`
- `strict=True`
- `max_concurrent_requests=<n>`
- `--log-file ./log/gallama.log` to mirror CLI logs into a file
- `-v` to enable debug logging while still truncating large base64 image payloads in API request logs
- `-vv` to enable maximum verbosity, including full base64 image payloads in API request logs
Notes:
- If you omit `prompt_template`, Gallama will use the tokenizer's built-in Hugging Face chat template. That is usually fine for modern transformers models, but older or custom models may still need an explicit prompt template.
- Draft/speculative decoding still expects the draft model to exist in `model_config.yaml` unless you pass a full `draft_model_id` directly.
- The `-v`/`-vv` distinction is mainly relevant for multimodal requests with large message histories or `data:image/...;base64,...` inputs. At normal verbosity Gallama truncates those image payloads in logs to keep them readable.
Speculative Decoding Parameters
- `draft_model_id`: ID of the draft model (optional)
- `draft_model_name`: Name of the draft model (optional)
- `draft_gpus`: VRAM usage for each GPU for the draft model, comma-separated list of floats (optional)
- `draft_cache_size`: Context length for cache text for the draft model, in integers (optional)
- `draft_cache_quant`: Quantization for the draft model's cache, options are "FP16", "Q4", "Q6", "Q8" (optional)
Examples
- Launch two models simultaneously:
gallama run -id "model_name=mistral model_id=/path/to/mistral backend=exllamav3" -id "model_name=llama3 model_id=/path/to/llama3 backend=exllamav3"
- Launch a model with specific VRAM limits per GPU:
gallama run -id "model_name=qwen2-72B model_id=/path/to/qwen2-72B backend=exllamav3 gpus=22,22,10,0"
This limits memory usage to 22GB for GPU0 and GPU1, 10GB for GPU2, and 0GB for GPU3.
- Launch a model with custom cache size and quantization: By default, cache_size is initialized to the model's maximum sequence length. If there is VRAM to spare, increasing cache_size helps the model perform better on concurrent and batched requests. By default, cache_quant=Q4 is used; adjust it if required, e.g. Qwen2 1.5B doesn't work well with Q4 cache, so use Q6 or Q8 instead.
gallama run -id "model_name=mistral model_id=/path/to/mistral backend=exllamav3 cache_size=102400 cache_quant=Q8"
- Launch a model with reduced cache size and quantization: For models with long context, lowering the sequence length can significantly reduce VRAM usage. For example, Mistral Large 2 can handle 128K context, but it requires significant VRAM for the cache.
gallama run -id "model_name=mistral_large model_id=/path/to/mistral_large backend=exllamav3 max_seq_len=32768"
- Launch a model for embedding:
gallama run -id "model_name=gte-large-en-v1.5 model_id=Alibaba-NLP/gte-large-en-v1.5 backend=embedding"
- Launch a model with speculative decoding: Only models with the same vocabulary should be used for speculative decoding. For reference, enabling speculative decoding improves qwen2-72B generation speed from 20 tok/s to 25-35 tok/s on my 4090s. Highly recommended if you have VRAM to spare.
gallama run -id "model_name=qwen2-72B model_id=/path/to/qwen2-72B backend=exllama draft_model_id=/path/to/qwen2-1.5B"
Ensure your GPU settings can accommodate the model requirements, and adjust parameters as needed for your specific use case. Note: the backend is assumed to be the same for both the main model and the draft model in speculative decoding.
- Tensor Parallel (TP): Exllama V2 supports tensor parallel from v0.1.9.
  - Update your Python to >= 3.11
  - Install ExllamaV2 >= 0.1.9
  - Only Qwen2-72B, Llama3.1-70B and Mistral Large are supported at the moment
  - Do run a draft model to help further with speed (Qwen2-1.5B, Llama3.1-8B, Mistral v0.3 respectively)
To enable tensor parallel, simply add `tp=True`. Exllama tensor parallel supports an odd number of GPUs, and an exact GPU match is not a requirement. The speed boost of TP for dense models is huge (close to 1.5x-2x).
gallama run -id "model_name=qwen-2-72B model_id=/path/to/qwen-2-72B backend=exllama draft_model_id=/path/to/qwen-2-1.5B tp=True"
- Others: If you keep the gallama config folder in a location other than `~/gallama`, set the environment variable `GALLAMA_HOME_PATH` when running.
OpenAI realtime websocket (Experimental)
From version 0.0.9, gallama provides an OpenAI Realtime websocket by wrapping a websocket layer over an STT + LLM + TTS pipeline. While this is not a true speech-to-speech setup, it provides a mock-up of the OpenAI Realtime websocket for testing. The setup also integrates with video from LiveKit for a video voice chat app.
The Realtime websocket API is tested and working with the following:
- https://github.com/livekit-examples/realtime-playground.git
- https://github.com/openai/openai-realtime-console/tree/websockets
API Spec:
To use the video chat feature, please refer to the PAI app here:
Do note that there are some Linux-level packages you will need to install; refer to the installation section above. While it mimics the OpenAI Realtime API, there may be bugs due to it not using a native audio-to-audio model.
Legacy Model Downloader
The built-in model downloader is now considered outdated.
The preferred workflow is:
- Download or prepare your model manually using your normal tooling.
- Put it wherever you want on disk.
- Add or update the corresponding entry in
~/gallama/model_config.yaml.
The legacy downloader commands may still exist in parts of the codebase, but they are no longer the recommended way to manage models.