
A command-line interface for Ollama API

Project description

mdllama


A CLI tool that lets you chat with Ollama and OpenAI-compatible models right from your terminal, with built-in real-time Markdown rendering.

Features

Core Functionality

  • Multi-provider support: Chat with both Ollama and OpenAI-compatible models from the terminal
  • Built-in Markdown rendering: Rich text display with proper formatting
  • Interactive chat sessions: Full conversation management with context
  • Streaming responses: Real-time response generation

Web Integration and Access

  • Web search: Search the web using DuckDuckGo directly from chat
  • Smart search queries: AI-powered search query optimization with spelling correction
  • Website content fetching: Extract and include content from any website in your conversations
  • Multiple search modes:
    • search:query - Basic web search with results added to context
    • searchask:query|question - Search and ask a specific question about results
    • websearch:question - AI-generated search with automatic query optimization
    • site:url - Fetch content from a specific website
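
A prefix dispatcher for these modes can be sketched as follows (an illustrative parser only, not mdllama's actual implementation):

```python
def parse_command(line):
    """Split an input line into (mode, payload); plain text gets mode None."""
    for prefix in ("search", "searchask", "websearch", "site"):
        if line.startswith(prefix + ":"):
            payload = line[len(prefix) + 1:]
            if prefix == "searchask":
                # "query|question" form: split on the first "|"
                query, _, question = payload.partition("|")
                return prefix, (query, question)
            return prefix, payload
    return None, line
```

Matching on `prefix + ":"` keeps the modes unambiguous: `searchask:...` never matches the shorter `search:` test, because the character after `search` is `a`, not `:`.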

Session Management

  • Conversation history: Save and restore chat sessions with timestamps
  • Session continuity: Pick up a previous conversation where you left off
  • Context management: Clear context, load earlier sessions, and control conversation flow
  • Multiple sessions: Switch between different conversations
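
To illustrate how timestamped save/restore of sessions can work in general (a generic sketch; mdllama's actual storage paths and format may differ):

```python
import json
import time
from pathlib import Path

def save_session(messages, directory="sessions"):
    """Save a conversation as timestamped JSON; return the session id."""
    session_id = time.strftime("%Y%m%d-%H%M%S")
    path = Path(directory)
    path.mkdir(exist_ok=True)
    (path / f"{session_id}.json").write_text(
        json.dumps({"id": session_id, "messages": messages}, indent=2)
    )
    return session_id

def load_session(session_id, directory="sessions"):
    """Restore a previously saved conversation by id."""
    data = json.loads((Path(directory) / f"{session_id}.json").read_text())
    return data["messages"]
```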

File Integration

  • Local file support: Include files from your filesystem in conversations (up to 2MB per file)
  • Multiple file formats: Support for text files, code, documentation, and more
  • Context-aware processing: Files are integrated into conversation context
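
A 2 MB per-file cap like the one described can be enforced with a size check before the file is read into context (illustrative sketch, not mdllama's code):

```python
from pathlib import Path

MAX_FILE_SIZE = 2 * 1024 * 1024  # 2 MB per-file limit

def include_file(path):
    """Return file content for the conversation context, enforcing the cap."""
    p = Path(path)
    if p.stat().st_size > MAX_FILE_SIZE:
        raise ValueError(f"{path} exceeds the 2MB per-file limit")
    # errors="replace" keeps binary-ish files from aborting the chat
    return p.read_text(errors="replace")
```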


Screenshots

Chat Interface

(chat interface screenshot)

Help

(help output screenshot)

Interactive Commands

When using mdllama run for interactive chat, you have access to special commands:

Basic Commands

  • exit or quit - End the interactive session
  • clear - Clear the current conversation context
  • models - Show numbered list of available models for selection
  • """ - Start/end multiline input mode for longer messages

File and Content Integration

  • file:path/to/file - Include local file content in your next message
  • site:url - Fetch and include website content in conversation context
  • system:prompt - Set or change the system prompt (use without prompt to clear)

Web Search Commands

  • search:query - Search the web and add results to conversation context
  • searchask:query|question - Search for specific query and ask a question about results
  • websearch:question - Let AI generate optimized search query and get results

Runtime Controls

  • temp:value - Change temperature setting (0.0 to 1.0)
  • model:name - Switch to different model (or show list if name omitted)

Command Line Options

  • mdllama search "query" - Standalone web search command
  • mdllama sessions - List all saved conversation sessions
  • mdllama load-session session_id - Load a previous conversation
  • mdllama clear-context - Clear current conversation context
  • mdllama models - List available models
  • mdllama pull model_name - Download a model from Ollama registry

OpenAI and Provider Support

Supported Providers

  • Ollama: Local models running on your machine
  • OpenAI: Official OpenAI API (GPT-3.5, GPT-4, etc.)
  • OpenAI-compatible: Any API that follows OpenAI's format (Hackclub AI, LocalAI, etc.)

Setup Instructions

For Ollama (Default)

mdllama setup
# Or specify explicitly
mdllama setup --provider ollama

For OpenAI

mdllama setup --provider openai
# Will prompt for your OpenAI API key

For OpenAI-Compatible APIs

mdllama setup --provider openai --openai-api-base https://ai.hackclub.com
# Then provide your API key when prompted
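
Here, "OpenAI-compatible" means the service accepts requests in the OpenAI chat-completions format. As a rough sketch of that wire format (the model name is just an example, and this is not mdllama's internal code):

```python
import json

def build_chat_request(model, user_message, temperature=0.7, stream=False):
    """Build a request body in the OpenAI chat-completions format."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "stream": stream,
    }

# The body is POSTed as JSON to the provider's chat-completions endpoint
# under the configured API base (e.g. the --openai-api-base value).
body = json.dumps(build_chat_request("gpt-4o", "Explain quantum computing"))
```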

Usage Examples

# Use with OpenAI
mdllama chat --provider openai "Explain quantum computing"

# Use with specific model and provider
mdllama run --provider openai --model gpt-4

# Interactive session with streaming
mdllama run --provider openai --stream --render-markdown

Live Demo

Go to this mdllama demo to try it out live in your browser. The API key is 9c334d5a0863984b641b1375a850fb5d.

[!NOTE] Try asking the model to give you some markdown-formatted text, or test the web search features:

  • Give me a markdown-formatted text about the history of AI.
  • search:Python 3.13 (web search)
  • site:python.org (fetch website content)
  • websearch:What are the latest Python features? (AI-powered search)

So try it out and see how it works!

Installation

Install using package manager (recommended)

Debian/Ubuntu Installation

  1. Add the PPA to your sources list:

    echo 'deb [trusted=yes] https://packages.qincai.xyz/debian stable main' | sudo tee /etc/apt/sources.list.d/qincai-ppa.list
    sudo apt update
    
  2. Install mdllama:

    sudo apt install python3-mdllama
    

Fedora Installation

  1. Download the latest RPM from: https://packages.qincai.xyz/fedora/

    Or, to install directly:

    sudo dnf install https://packages.qincai.xyz/fedora/mdllama-<version>.noarch.rpm
    

    Replace <version> with the latest version number.

  2. (Optional, highly recommended) To enable as a repository for updates, create /etc/yum.repos.d/qincai-ppa.repo:

    [qincai-ppa]
    name=Raymont's Personal RPMs
    baseurl=https://packages.qincai.xyz/fedora/
    enabled=1
    metadata_expire=0
    gpgcheck=0
    

    Then install with:

    sudo dnf install mdllama
    

  3. Install the ollama library from pip:

    pip install ollama
    
    You can also install it globally with:

    sudo pip install ollama

[!NOTE] The ollama library is not bundled in the RPM package itself, since no system package (python3-ollama) is available. Recent RPM packages include a post-installation script that installs the ollama library via pip automatically; if you are using an older package or the script fails, install it manually with pip as shown above in order to use mdllama with Ollama models.


PyPI Installation (Cross-Platform)

Install via pip (recommended for Windows/macOS and Python virtual environments):

pip install mdllama

Traditional Bash Script Installation (Linux)

To install mdllama using the traditional bash script, run:

bash <(curl -fsSL https://raw.githubusercontent.com/QinCai-rui/mdllama/refs/heads/main/install.sh)

To uninstall mdllama, run:

bash <(curl -fsSL https://raw.githubusercontent.com/QinCai-rui/mdllama/refs/heads/main/uninstall.sh)

License

This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.


Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mdllama-4.2.3.tar.gz (48.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mdllama-4.2.3-py3-none-any.whl (51.9 kB)

Uploaded Python 3

File details

Details for the file mdllama-4.2.3.tar.gz.

File metadata

  • Download URL: mdllama-4.2.3.tar.gz
  • Upload date:
  • Size: 48.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for mdllama-4.2.3.tar.gz
  • SHA256: 37ca0ffe77d7660718bc82f0583f9c5ca8ed1cde32c90eb1fbe7375fd2a7ff52
  • MD5: 8a47ae267d1ca332409e82ab1879d2c2
  • BLAKE2b-256: 2de2728faed556076a26f3cb291b7985978465456c96896e45ac4401f8adcea7

See more details on using hashes here.

Provenance

The following attestation bundles were made for mdllama-4.2.3.tar.gz:

Publisher: publish-to-pypi.yml on QinCai-rui/mdllama

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mdllama-4.2.3-py3-none-any.whl.

File metadata

  • Download URL: mdllama-4.2.3-py3-none-any.whl
  • Upload date:
  • Size: 51.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for mdllama-4.2.3-py3-none-any.whl
  • SHA256: eec8a1eac1415e96aa5f9a6f10162079238402f224602432891843de436fb106
  • MD5: 077a61c18fdcf7823f5c41a98c686558
  • BLAKE2b-256: 9e72131c68b2b919f40424ade7c5f4de2ebcea746ec09cb47825f56c1748753f

See more details on using hashes here.

Provenance

The following attestation bundles were made for mdllama-4.2.3-py3-none-any.whl:

Publisher: publish-to-pypi.yml on QinCai-rui/mdllama

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
