A CLI tool to chat with LLM models including GPT and Claude.

fire-chat

Overview

This project provides a command-line interface (CLI) for interacting with various large language models (LLMs) using the LiteLLM wrapper. It supports multiple providers, including OpenAI, Anthropic, Azure, and Gemini. The CLI allows users to chat with these models, manage budgets, and handle API keys efficiently.
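Under the hood, LiteLLM exposes a uniform `completion()` API across providers, with the provider inferred from the model name. The sketch below illustrates the kind of chat turn such a CLI wraps; the helper names are illustrative assumptions, not fire-chat's actual code.

```python
# Illustrative sketch of a single chat turn via LiteLLM.
# NOT fire-chat's actual implementation -- helper names are assumptions.

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Append the new user turn to the running conversation history."""
    return history + [{"role": "user", "content": user_input}]

def chat_once(model: str, history: list[dict], user_input: str) -> str:
    """Send one turn to the model and return the assistant's reply.

    Requires `pip install litellm` and a provider API key in the
    environment (e.g. OPENAI_API_KEY for OpenAI models).
    """
    import litellm  # imported here so the pure helper above stays dependency-free

    messages = build_messages(history, user_input)
    response = litellm.completion(model=model, messages=messages)
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat_once("gpt-4o", [], "Hello!"))
```

Because LiteLLM normalizes provider responses to the OpenAI shape, the same loop works whether the configured model is from OpenAI, Anthropic, Azure, or Gemini.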

Configuration

Configuration is managed through a $HOME/.config/fire-chat/config.yaml file. Before running the CLI for the first time, copy the starter config.yaml file to that location, add your API key, and then start the application with fire-chat.
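A starter config might look like the following; the field names here are illustrative assumptions, so check the packaged config.yaml for the actual schema.

```yaml
# Hypothetical example -- see the project's config.yaml for the real keys.
provider: openai        # one of: openai, anthropic, azure, gemini
model: gpt-4o           # overridable on the command line with --model
api_key: YOUR_API_KEY   # or read from the provider's environment variable
budget: 10.0            # assumed: per-period spend limit in USD
```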

Installation and Usage

  1. Install the CLI:

    pip install --user fire-chat # requires python 3.10+
    
  2. Configure the CLI:

    Edit the $HOME/.config/fire-chat/config.yaml file to set your preferred provider, model, and other settings.

  3. Run the CLI:

    fire-chat
    

    or run with arguments (overriding the config YAML file):

    fire-chat --model=gpt-4o
    

    For the full list of configuration options, see main.py.

  4. Exit: To exit the CLI, press Ctrl+C.
