
Coala Client

A simple command-line interface for LLMs, with support for MCP (Model Context Protocol) servers and OpenAI-compatible APIs.

Features

  • OpenAI-compatible API support: Works with OpenAI, Google Gemini, Ollama, and any other OpenAI-compatible API
  • MCP Server integration: Connect to multiple MCP servers for extended tool capabilities
  • Interactive chat: Rich terminal UI with streaming responses
  • Tool calling: Automatic tool execution with MCP servers

Installation

pip install coala-client

Quick Start

1. Initialize Configuration

coala init

This creates a default MCP servers configuration file at ~/.config/coala/mcps/mcp_servers.json.
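
You can open the generated file to see the default contents before adding your own servers (the format is described under Configuration below):

cat ~/.config/coala/mcps/mcp_servers.json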

2. Set API Key

# For OpenAI
export OPENAI_API_KEY=your-openai-api-key

# For Gemini
export GEMINI_API_KEY=your-gemini-api-key

# Ollama doesn't require an API key (runs locally)

3. Start Chatting

# Interactive chat with default provider (OpenAI)
coala

# Use a specific provider
coala -p gemini
coala -p ollama

# Use a specific model
coala -p openai -m gpt-4-turbo

# Single prompt
coala ask "What is the capital of France?"

# Disable MCP servers
coala --no-mcp

Configuration

Environment Variables

Variable          Description             Default
PROVIDER          Default LLM provider    openai
OPENAI_API_KEY    OpenAI API key          -
OPENAI_BASE_URL   OpenAI base URL         https://api.openai.com/v1
OPENAI_MODEL      OpenAI model            gpt-4o
GEMINI_API_KEY    Gemini API key          -
GEMINI_BASE_URL   Gemini base URL         https://generativelanguage.googleapis.com/v1beta/openai
GEMINI_MODEL      Gemini model            gemini-2.5-flash-lite
OLLAMA_BASE_URL   Ollama base URL         http://localhost:11434/v1
OLLAMA_MODEL      Ollama model            qwen3
SYSTEM_PROMPT     System prompt           You are a helpful assistant.
MAX_TOKENS        Max tokens in response  4096
TEMPERATURE       Temperature             0.7
MCP_CONFIG_FILE   MCP config file path    ~/.config/coala/mcps/mcp_servers.json
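
For example, to run the default provider with a specific model and a tighter, more deterministic setup (all variable names from the table above):

export PROVIDER=openai
export OPENAI_MODEL=gpt-4-turbo
export SYSTEM_PROMPT="You are a concise assistant."
export TEMPERATURE=0.2
coala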

MCP Servers Configuration

Edit ~/.config/coala/mcps/mcp_servers.json:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
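
Each key under mcpServers names a server. command and args tell coala how to launch it (the servers communicate over stdio), and env holds server-specific environment variables, which are merged with the shared variables described in the next section.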

Environment Variables for MCP Servers

You can set environment variables that will be available to all MCP servers by editing ~/.config/coala/env:

# Environment variables for MCP servers
# Format: KEY=value

# Set default provider (openai, gemini, ollama, custom)
PROVIDER=gemini

# API keys and model settings
GEMINI_API_KEY=your-gemini-api-key
GEMINI_MODEL=gemini-2.5-flash-lite

Note: The PROVIDER variable in the env file sets the default LLM provider. These variables are merged with the server-specific env settings in mcp_servers.json, and server-specific values take precedence over the shared ones.
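
For example, if ~/.config/coala/env contains GITHUB_PERSONAL_ACCESS_TOKEN=shared-token and the github entry in mcp_servers.json sets "GITHUB_PERSONAL_ACCESS_TOKEN": "server-token", the github server sees server-token while every other server sees shared-token.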

CLI Commands

Interactive Chat

coala [OPTIONS]
coala chat [OPTIONS]

Options:

  • -p, --provider: LLM provider (openai/gemini/ollama/custom)
  • -m, --model: Model name override
  • --no-mcp: Disable MCP servers
  • --sandbox: Enable the run_command tool so the LLM can run basic Linux shell commands (30-second timeout); see the example below
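
The options can be combined; for example, to chat with Gemini with MCP servers disabled but sandboxed shell access enabled:

coala -p gemini --no-mcp --sandbox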

Single Prompt

coala ask "Your prompt here"
coala -c "Your prompt here"

Chat Commands

During interactive chat:

  • /help - Show help
  • /exit / /quit - Exit chat
  • /clear - Clear conversation history
  • /tools - List available MCP tools
  • /servers - List connected MCP servers
  • /skill - List installed skills (from ~/.config/coala/skills/)
  • /skill <name> - Load a skill into the chat (adds its instructions to context)
  • /model - Show current model info
  • /switch <provider> - Switch provider

Configuration

coala init    # Create default config files
coala config  # Show current configuration

CWL toolset as MCP server

# Import one or more CWL files into a named toolset (copied to ~/.config/coala/mcps/<toolset>/)
coala mcp-import <TOOLSET> file1.cwl [file2.cwl ...]

# Import a zip of CWL files (extracted to ~/.config/coala/mcps/<toolset>/)
coala mcp-import <TOOLSET> tools.zip

# SOURCES can also be http(s) URLs to a .cwl file or a .zip
coala mcp-import <TOOLSET> https://example.com/tools.zip
coala mcp-import <TOOLSET> https://example.com/tool.cwl

This creates run_mcp.py in ~/.config/coala/mcps/<toolset>/, adds the server to ~/.config/coala/mcps/mcp_servers.json, and prints the MCP entry. The generated script uses coala.mcp_api (stdio transport). Ensure the coala package is installed in the environment that runs the MCP server.
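
The printed entry has the same shape as the other servers in mcp_servers.json. As a rough illustration only (the exact command and arguments are generated by coala and may differ; a plain python launcher and home directory are assumed here), an imported gene-variant toolset might be registered like:

"gene-variant": {
  "command": "python",
  "args": ["/home/you/.config/coala/mcps/gene-variant/run_mcp.py"],
  "env": {}
}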

List servers and tools:

# List configured MCP server names
coala mcp-list

# Show tool schemas (name, description, inputSchema) for a server
coala mcp-list <SERVER_NAME>

Call an MCP tool directly:

coala mcp-call <SERVER>.<TOOL> --args '<JSON>'
# Example:
coala mcp-call gene-variant.ncbi_datasets_gene --args '{"data": [{"gene": "TP53", "taxon": "human"}]}'

Skills

# Import skills from a GitHub folder (e.g. vercel-labs/agent-skills/skills)
coala skill https://github.com/vercel-labs/agent-skills/tree/main/skills

# Import from a zip URL or local zip/directory
coala skill http://localhost:3000/files/bedtools/bedtools-skills.zip
coala skill ./my-skills.zip

All skills are copied to ~/.config/coala/skills/. Each source gets its own subfolder (e.g. skills/bedtools/ for a zip from .../bedtools/bedtools-skills.zip, skills/agent-skills/ for the GitHub repo).
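
Skills are loaded per conversation: start an interactive session, run /skill to list what is installed, then /skill <name> (e.g. /skill bedtools for the zip above) to add that skill's instructions to the current context.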

Examples

Using with Ollama

# Start Ollama server
ollama serve

# Pull a model
ollama pull llama3.2

# Chat with Ollama
coala -p ollama -m llama3.2
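
To make Ollama the default so that plain coala uses it (variables from the table above):

export PROVIDER=ollama
export OLLAMA_MODEL=llama3.2
coala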

Using with Gemini

export GEMINI_API_KEY=your-api-key
coala -p gemini

Using Custom OpenAI-compatible API

export CUSTOM_API_KEY=your-api-key
export CUSTOM_BASE_URL=https://your-api.com/v1
export CUSTOM_MODEL=your-model
coala -p custom
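
Any OpenAI-compatible endpoint works here. For instance, the custom provider can be pointed at a local Ollama server (base URL from the table above); CUSTOM_API_KEY may still need a placeholder value even though Ollama ignores it:

export CUSTOM_BASE_URL=http://localhost:11434/v1
export CUSTOM_MODEL=qwen3
export CUSTOM_API_KEY=unused
coala -p custom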

Development

# Install with dev dependencies
uv pip install -e ".[dev]"

# Run tests
pytest

Publishing to PyPI

The repo includes a GitHub Action (.github/workflows/release.yml) that builds with Poetry and publishes to PyPI when a release is published.

  1. Create a GitHub environment named pypi (optional but recommended).
  2. Configure PyPI using one of:
    • Trusted Publishing (recommended): In PyPI → Your projects → coala-client → Publishing, add a new trusted publisher: GitHub, this repo, workflow release.yml, environment pypi. No secrets needed.
    • API token: Generate a token at pypi.org, add it as repository (or pypi environment) secret PYPI_API_TOKEN.
  3. Publish: Create a new release (e.g. tag v0.1.0). The workflow runs on release and uploads the built package. You can also run it manually (Actions → Build and publish to PyPI → Run workflow).
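
If you use the GitHub CLI, the release in step 3 can be created from the terminal (this is standard gh usage, not a coala command):

gh release create v0.1.0 --generate-notes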

License

MIT
