A fake shell powered by raw LLM completion. The model hallucinates command output until it emits the next prompt.

Project description

kenoma

CI PyPI Downloads License: AGPL v3 Python 3.9+

A shell powered by LLM completion.

Install

pip install kenoma

Or from source:

git clone https://github.com/a9lim/kenoma
cd kenoma
pip install -e .

For bitsandbytes quantization (CUDA only):

pip install kenoma[quantize]

Usage

kenoma                                # defaults to Qwen/Qwen2.5-0.5B
kenoma google/gemma-3n-E4B
kenoma /path/to/local/model

The model argument is any HuggingFace model id or a local path. This is meant for base/completion models; instruction-tuned models may not work properly.
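The core mechanic (the model hallucinates output until it emits the next prompt) can be sketched as a stop-sequence loop. This is a minimal illustration, not kenoma's actual implementation: run_turn and the generate callable are hypothetical names, and generation is abstracted to a token stream.

```python
def run_turn(generate, transcript, prompt):
    # Stream model tokens appended to the running transcript until the
    # next shell prompt appears; everything before it is the "output"
    # of the faked command.
    out = []
    for token in generate(transcript):
        out.append(token)
        text = "".join(out)
        if prompt in text:
            return text[: text.index(prompt)]
    return "".join(out)
```

In the real tool the prompt string is the captured PS1 (or the `--prompt` override), and hitting `--max-new-tokens` ends the turn even if no prompt is emitted.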

Configuration

By precedence: CLI flags, then KENOMA_* environment variables, then a TOML config file at $XDG_CONFIG_HOME/kenoma/config.toml (falls back to ~/.config/kenoma/config.toml).

Example config:

model = "google/gemma-3n-E4B"
device = "auto"
temperature = 1.0
top_p = 0.95
repetition_penalty = 1.05
max_new_tokens = 2048
context_chars = 6000
history = 20
tmux_lines = 300
quantize = "none"
kv_cache = true
compile = false

The environment variable for any key is KENOMA_<KEY>, uppercased, so KENOMA_MODEL=gpt2 kenoma works.

Flags:

  • --version: print version and exit.
  • --prompt: override the captured PS1. Multi-line prompts are not supported and fall back to a constructed "user@host:cwd $" prompt.
  • --device {auto,cuda,mps,cpu}: auto resolves to cuda, then mps, then cpu.
  • --quantize {none,4bit,8bit}: bitsandbytes quantization. Requires CUDA and the quantize extra.
  • --no-kv-cache: disable KV cache reuse across turns.
  • --compile: torch.compile the model with a static KV cache for faster decode (best on CUDA). The first turn pays a compile cost; cross-turn KV cache reuse is forfeited because the static cache doesn't expose crop().
  • --history N: seed with the last N commands from shell history (0 disables).
  • --tmux-lines N: if inside tmux, seed with the last N lines of pane scrollback (0 disables).
  • --context-chars N: cap the rolling buffer at N chars.
  • --max-new-tokens N: per-turn cap on generated tokens.

Cancelling a turn. Ctrl-C during generation cancels the current turn, invalidates the KV cache, and redraws the prompt. Ctrl-C at the input prompt exits.

License

AGPL-3.0-or-later. See LICENSE.

Download files

Download the file for your platform.

Source Distribution

kenoma-1.1.0.tar.gz (46.5 kB)

Uploaded Source

Built Distribution


kenoma-1.1.0-py3-none-any.whl (34.1 kB)

Uploaded Python 3

File details

Details for the file kenoma-1.1.0.tar.gz.

File metadata

  • Download URL: kenoma-1.1.0.tar.gz
  • Upload date:
  • Size: 46.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for kenoma-1.1.0.tar.gz:

  • SHA256: f626940112b256415600777ac6ec8fc66818e08dbb2d84e00ece482eabc3db51
  • MD5: 25defadd62c17fbe85d68941bdfb61dd
  • BLAKE2b-256: 639b5c515ef1a578d6335a506b31b76fa2fca23c262928a040cce8a118450c4f

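The SHA256 digests above can be checked against a downloaded file with sha256sum, or with a few lines of Python. verify_sha256 is a hypothetical helper name for this sketch; it streams the file so large archives never load fully into memory.

```python
import hashlib

def verify_sha256(path, expected_hex, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks and compare to the published digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

For example, verify_sha256("kenoma-1.1.0.tar.gz", "f6269401...") should return True for a genuine sdist.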

Provenance

The following attestation bundles were made for kenoma-1.1.0.tar.gz:

Publisher: release.yml on a9lim/kenoma

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file kenoma-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: kenoma-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 34.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for kenoma-1.1.0-py3-none-any.whl:

  • SHA256: 6eab392691eb480256bac1a2e96883cdc2b9c020fe0d71154d66e1c57d65c09b
  • MD5: 124ce5470455149f59e39440f6c8988f
  • BLAKE2b-256: d6067e825a2c470ee01c63f851980cf62c6415ca9b47139b2a55894bb1557b8c


Provenance

The following attestation bundles were made for kenoma-1.1.0-py3-none-any.whl:

Publisher: release.yml on a9lim/kenoma

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
