BLACKCORTEX GPT CLI

A Conversational Assistant for the Terminal

A terminal-based GPT assistant powered by the OpenAI API, developed by Konijima and now maintained under the BlackCortex organization.

Features

  • Persistent memory across sessions with summarization
  • Streaming output support
  • Command history and logging
  • Configurable prompt, model, and temperature
  • .env-based secure configuration

Installation

Requires Python 3.8+.

Using PyPI

pip install blackcortex-gpt-cli

Using pipx (recommended)

pipx install blackcortex-gpt-cli

From GitHub

pip install git+https://github.com/BlackCortexAgent/blackcortex-gpt-cli.git
# or with pipx
pipx install git+https://github.com/BlackCortexAgent/blackcortex-gpt-cli.git

Development Installation

git clone https://github.com/BlackCortexAgent/blackcortex-gpt-cli.git
cd blackcortex-gpt-cli
pip install -e .  # editable install for development

Updating

pip install --upgrade blackcortex-gpt-cli
# or
pipx upgrade blackcortex-gpt-cli

Environment Setup

Create a .env file to configure your API key and options:

mkdir -p ~/.gpt-cli
touch ~/.gpt-cli/.env

Sample .env

At a minimum, set your API key:

OPENAI_API_KEY=your-api-key-here

See Environment Configuration below for the full list of options.

Usage

After installation, the gpt command is available globally.

gpt [-h] [--no-markdown] [--stream] [--reset] [--summary] [--env]
    [--set-key [API_KEY]] [--ping] [--log] [--clear-log]
    [--update] [--uninstall] [input_data ...]

Positional Arguments

  • input_data – One-shot prompt input. Example:
    gpt "Summarize the history of aviation"
    

Options

  • -h, --help — Show help message and exit
  • --no-markdown — Disable Markdown formatting in output
  • --stream — Enable live token streaming during response
  • --reset — Reset memory and exit
  • --summary — Display current conversation summary
  • --env — Edit the .env file
  • --set-key [API_KEY] — Update your OpenAI API key
  • --ping — Test connection with OpenAI API
  • --log — Show the full conversation log
  • --clear-log — Clear the conversation log file
  • --update — Update GPT CLI to the latest version
  • --uninstall — Uninstall GPT CLI completely
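
The --stream flag prints tokens as they arrive instead of waiting for the complete response. A minimal sketch of that rendering loop, using a stubbed token generator in place of the real OpenAI stream (the actual client and chunk format are not shown here):

```python
import sys


def fake_stream():
    # Stub standing in for the OpenAI streaming response.
    for token in ["Why ", "did ", "the ", "database ", "break ", "up?"]:
        yield token


def render_stream(chunks):
    """Print each token as it arrives, then return the full text."""
    parts = []
    for token in chunks:
        sys.stdout.write(token)
        sys.stdout.flush()  # show partial output immediately
        parts.append(token)
    print()
    return "".join(parts)


full = render_stream(fake_stream())
```

The same loop shape applies when iterating a real streaming response; only the source of the tokens changes.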

Environment Configuration

The GPT CLI loads settings from two locations:

  1. .env file in the current working directory (if present)
  2. ~/.gpt-cli/.env (default persistent configuration)

You can configure model behavior, memory, logging, and streaming options.
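
A sketch of that two-location lookup in Python, assuming the local .env takes precedence over the persistent one and that files are plain KEY=VALUE lines (the CLI's actual parser may differ):

```python
import os


def load_env(paths):
    """Merge KEY=VALUE files; keys from earlier paths win."""
    settings = {}
    for path in paths:
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, malformed lines
                key, _, value = line.partition("=")
                settings.setdefault(key.strip(), value.strip())
    return settings


# Local .env first, then the persistent one in ~/.gpt-cli
config = load_env([".env", os.path.expanduser("~/.gpt-cli/.env")])
```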

Sample .env File

OPENAI_API_KEY=your-api-key-here             # Required
OPENAI_MODEL=gpt-4o                          # Model ID (default: gpt-4o)
OPENAI_DEFAULT_PROMPT=You are a helpful assistant.
OPENAI_LOGFILE=~/.gpt.log                    # Log file location
OPENAI_TEMPERATURE=0.5                       # Response randomness (default: 0.5)
OPENAI_MAX_TOKENS=4096                       # Max response tokens
OPENAI_MAX_SUMMARY_TOKENS=2048               # Max tokens for memory summarization
OPENAI_MEMORY_PATH=~/.gpt_memory.json        # Path to memory file
OPENAI_MEMORY_LIMIT=10                       # Number of recent messages stored (default: 10)
OPENAI_STREAM_ENABLED=false                  # Enable token-by-token streaming (true/false)

Use gpt --env to open and edit the .env file in your terminal editor.

Memory System

Memory includes:

  • A rolling conversation summary
  • The most recent messages (default: 10, configurable via OPENAI_MEMORY_LIMIT)

Once the limit is reached, older messages are folded into the summary. Use --reset to clear memory.
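
A hedged sketch of how such a rolling memory can work. The class name and the summarization step are illustrative, not the CLI's actual implementation; in the real tool, condensing old messages is delegated to the model:

```python
class RollingMemory:
    """Keep a running summary plus the N most recent messages."""

    def __init__(self, limit=10):
        self.limit = limit
        self.summary = ""
        self.recent = []

    def add(self, message):
        self.recent.append(message)
        if len(self.recent) > self.limit:
            # Fold the oldest message into the summary once the limit is hit.
            oldest = self.recent.pop(0)
            self.summary = self._summarize(self.summary, oldest)

    def _summarize(self, summary, message):
        # Placeholder: the real CLI would ask the model to condense this text.
        return (summary + " " + message).strip()

    def reset(self):
        self.summary = ""
        self.recent = []


mem = RollingMemory(limit=10)
for i in range(15):
    mem.add(f"msg {i}")
# Only the 10 most recent messages remain; older ones live in the summary.
```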

Troubleshooting

  • Missing API key: Check .env for OPENAI_API_KEY
  • Client init failed: Verify internet and credentials
  • Token limit exceeded: Reduce input size or use summarization

Interactive Example Output

You: Tell me a joke about databases

GPT: Why did the database break up with the spreadsheet?

Because it couldn't handle the rows of emotions.
────────────────────────────────────────────────────────

License

This project is licensed under the MIT License, an OSI-approved open source license. In short:

  • ✅ Free use for personal, academic, or commercial purposes
  • ✅ Permission to modify, merge, publish, and distribute the software
  • ✅ Attribution encouraged but not required
  • ⚠️ No warranty is provided; use at your own risk

Credits

Originally created by Konijima, now maintained by the BlackCortex team.
