
Chatshell - An interactive in-chat shell for LLM-based tasks.

Project description


Chatshell: Local open-source interaction layer for AI workflows

Chatshell is a free and open-source application that runs local Large Language Models (LLMs) combined with advanced Retrieval-Augmented Generation (RAG) and operating-system integrations - all controlled through a chat-based, Discord-like interface.

It runs quietly in the background and acts as a middleware layer between the user, the operating system, and AI models, turning chat conversations into powerful, reproducible AI workflows. Custom tasks, e.g. for summarizing a document or updating the RAG context, can be created and then run with a single command.

No cloud lock-in.
No hidden APIs.
No dependency on big tech platforms.


✨ Why Chatshell?

  • Workflow-Oriented - Chat is not just conversation, it’s orchestration
  • Discord-like Interaction - Commands and conversation live side-by-side
  • User Interface Independent - Any OpenAI-compatible chat UI can be used (e.g. Jan or OpenWebUI)
  • Composable AI - Combine tools, context, and models in a single chat flow and define Tasks
  • Advanced RAG - Chat with documents, websites, clipboard content, and more
  • Open Source & Independent - No vendor lock-in, no proprietary backends
  • Local & Private - Run everything entirely on your machine

Chatshell is designed to be an AI assistant framework, not just another chat UI. AI should be a tool, not a service you depend on.


💡 What Is Chatshell?

Chatshell is:

  • A conversational shell for AI-powered workflows
  • A local LLM runtime manager
  • A RAG engine for documents, websites, and live content
  • An OpenAI-compatible API server
  • A text-based control interface for automation and analysis

Chatshell is not:

  • A hosted SaaS
  • A closed ecosystem
  • A single-purpose chatbot

🗨️ Discord-Like Chat Experience

Chatshell combines natural conversation with command-based automation. You can chat naturally and insert commands as you go:

/summarize cat_manual.pdf
/webchat https://allaboutcats.com
What are signs of a happy cat?
/forgetctx

Commands are embedded directly into the chat flow, enabling:

  • Context injection and removal
  • Tool invocation
  • Model and endpoint control
  • Document and website analysis
  • Automation without leaving the conversation

🚀 Features

Local AI & Model Control

  • Run local LLMs via llama.cpp
  • Manage multiple inference endpoints
  • Start, stop, restart models on demand
  • Auto-start preferred model

Advanced RAG

  • Chat with PDFs and text files
  • Chat with websites (shallow or deep crawl)
  • Summarize documents or URLs
  • Inject clipboard content into conversations
  • Define tasks for summarization

Middleware Capabilities

  • Acts as a bridge between LLM and OS
  • Shell-like interaction
  • OpenAI-compatible API for external tools

📦 Installation

pip install chatshell-python

⚙️ Configuration

On first run, Chatshell automatically creates configuration files in your user config directory (see appdirs):

  • chatshell_server_config.json - Server, RAG, and runtime settings
  • llm_config.json - LLM endpoints and model configurations

Edit these files to configure:

  • Model paths
  • Document directories
  • Ports and server behavior
  • Default endpoints
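For orientation, a server config might look roughly like the sketch below. The key names here are hypothetical placeholders, not the actual schema - check the generated chatshell_server_config.json for the real structure (only the default port 4001 is taken from this document):

```json
{
  "port": 4001,
  "document_directory": "~/chatshell/documents",
  "rag": {
    "enabled": true
  }
}
```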

🦙 llama.cpp Binaries

Chatshell uses llama.cpp for local inference. You can download the binaries from https://github.com/ggml-org/llama.cpp/releases and extract them to ~/chatshell/Llamacpp, or use the llama.cpp Python server bindings.

Note: The Python bindings are currently outdated and do not support Hugging Face parameters. If you want to download models automatically from Hugging Face, you have to use the current binaries.

macOS

  1. Download prebuilt binaries from the llama.cpp releases

  2. Allow execution of unsigned binaries:

    cd /Users/<current user>/chatshell/Llamacpp
    xattr -d com.apple.quarantine *
    

Linux

Compile from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release

Note: This should also work in Termux on Android if you have installed all requirements. You can use the provided script termux_compile_llamacpp.sh.


🖥️ Usage

CLI Mode

Start the interactive shell:

chatshell-server

You’ll see:

chatshell >

Chatshell runs an OpenAI-compatible FastAPI server. Configure your preferred chat frontend with the default Chatshell API endpoint:

http://localhost:4001/v1/chat/completions

You're ready now!
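As a quick sanity check, any OpenAI-compatible client can talk to this endpoint. The sketch below uses only the Python standard library and assumes Chatshell is running locally on the default port; the model name is a placeholder, since Chatshell routes requests to the active endpoint:

```python
import json
import urllib.request

# Default Chatshell endpoint (OpenAI-compatible chat completions).
API_URL = "http://localhost:4001/v1/chat/completions"

def build_payload(message, model="local"):
    """Build an OpenAI-style chat completion request body.
    "local" is a placeholder model name, not a Chatshell-defined value."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }

def ask(message):
    """Send one chat message and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    return reply["choices"][0]["message"]["content"]

# With a running Chatshell server:
#   print(ask("What are signs of a happy cat?"))
```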


📚 Available Commands

  • /help - Show this help message
  • /filechat <filename.pdf> - Load a PDF or text file and chat with it
  • /webchat <URL> - Load a website and chat with it
  • /webchat /deep <URL> - Load a website and all sublinks, then chat
  • /clipchat - Fetch clipboard content and chat with it
  • /summarize <filename.pdf or URL> - Summarize a document or website
  • /summarize /clipboard - Summarize clipboard contents
  • /summarize /setprompt "Additional prompt for summary" - Add an additional prompt for customizing your summary
  • /addclipboard - Inject clipboard content into every message
  • /savetask /<Task type> <Task name> - Save the current task (file, web, summarize)
  • /runtask <Task name> - Load and run a saved task
  • /listtasks - List all saved tasks
  • /taskinfo <Task name> - Show detailed info for a specific task
  • /forgetall - Disable RAG and all inserted contexts
  • /forgetctx - Disable inserted context only
  • /forgetdoc - Disable document/website RAG only
  • /updatemodels - Update the model catalog from GitHub
  • /startendpoint <name> - Start a specific LLM endpoint
  • /restartendpoint <name> - Restart an LLM endpoint
  • /stopendpoint <name> - Stop an LLM endpoint
  • /stopallendpnts - Stop all LLM endpoints
  • /llmstatus - Show endpoint status
  • /setautostartendpoint <name> - Set the endpoint to start automatically
  • /listendpoints - List all endpoint configs
  • /shellmode - Enter shell-only mode (no LLM)
  • /exit - Quit Chatshell
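Putting the commands together, a document-analysis session might be captured and replayed like this (the file and task names are illustrative):

```
chatshell > /filechat cat_manual.pdf
chatshell > What does the manual say about feeding?
chatshell > /savetask /file cat_manual
chatshell > /runtask cat_manual
```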

📜 License

This project is released under an open-source license. See LICENSE for details.


💡 Contributing

Contributions, ideas, and feedback are welcome. Chatshell is meant to evolve as a community-driven AI workflow platform.

