VimLM - LLM-powered Vim assistant

Project description

VimLM - Vim Language Model Assistant for privacy-conscious developers

VimLM brings AI-powered assistance directly into Vim, keeping your workflow smooth and uninterrupted. Instead of constantly switching between your editor and external tools, VimLM provides real-time, context-aware suggestions, explanations, and code insights—all within Vim.

Unlike proprietary cloud-based AI models, VimLM runs entirely offline, ensuring complete privacy, data security, and control. You’re not just using a tool—you own it. Fine-tune, modify, or extend it as needed, without reliance on big tech platforms.

Features

  • Real-Time Interaction with local LLMs: Runs fully offline with local models (default: uncensored Llama-3.2-3B).
  • Integration with Vim's Native Workflow: Simple Vim keybindings for quick access and split-window interface for non-intrusive responses.
  • Context-Awareness Beyond Single Files: Inline support for external documents and project files for richer, more informed responses.
  • Conversational AI Assistance: Goes beyond simple code suggestions: explains reasoning, provides alternative solutions, and allows interactive follow-ups.
  • Versatile Use Cases: Not just for coding: use it for documentation lookup, general Q&A, or even casual (uncensored) conversations.

Installation

pip install vimlm

Usage

  1. Start Vim with VimLM:

vimlm

or

vimlm your_file.js

  2. From Normal Mode:

    • Ctrl-l: Send current line + file context
    • Example prompt: "Regex for removing html tags in item.content"
  3. From Visual Mode:

    • Select text → Ctrl-l: Send selection + file context
    • Example prompt: "Convert this to async/await syntax"
  4. Add Context: Use !@#$ to include additional files/folders:

    • !@#$ (no path): Current folder
    • !@#$ ~/scrap/jph00/hypermedia-applications.summ.md: Specific file
    • !@#$ ~/wtm/utils.py: Specific file
    • Example prompt: "AJAX-ify this app !@#$ ~/scrap/jph00/hypermedia-applications.summ.md"
  5. Follow-Up: After the initial response:

    • Ctrl-R: Continue the thread
    • Example follow-up: "In Manifest V3"

Key Bindings

Binding  Mode           Action
Ctrl-l   Normal/Visual  Send current line or selection + file context to LLM
Ctrl-R   Normal         Continue conversation
Esc      Prompt         Cancel input

Download files

Download the file for your platform.

Source Distribution

vimlm-0.0.3.tar.gz (10.7 kB)

Uploaded Source

Built Distribution


vimlm-0.0.3-py3-none-any.whl (11.0 kB)

Uploaded Python 3

File details

Details for the file vimlm-0.0.3.tar.gz.

File metadata

  • Download URL: vimlm-0.0.3.tar.gz
  • Upload date:
  • Size: 10.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vimlm-0.0.3.tar.gz
Algorithm Hash digest
SHA256 194125f06f49040f6596287a6344f69641ae919e0b11eb024041e046f7b41549
MD5 c300f7a27eb599a0fbf919b124c5a03a
BLAKE2b-256 421ca77238f5bc7343572b896abe790d061c2fc0e5654f6921231a149168d309
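
As a sanity check after downloading, the published SHA256 digest can be verified locally. A minimal Python sketch using only the standard library (the helper name sha256_of is illustrative, not part of VimLM):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest listed above for vimlm-0.0.3.tar.gz:
expected = "194125f06f49040f6596287a6344f69641ae919e0b11eb024041e046f7b41549"
# sha256_of("vimlm-0.0.3.tar.gz") == expected
```

Alternatively, pip can enforce this automatically via hash-checking mode (`pip install` with `--require-hashes` in a requirements file).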


File details

Details for the file vimlm-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: vimlm-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 11.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vimlm-0.0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 32d30a132ae97286ce8b3f0708f223868b78f3fd7022a7e511b64dfece7a48f4
MD5 6a263c9aa6532f830beed881352ccc00
BLAKE2b-256 063987bd24ef7b3f4d2198dfff30aa0efd7c2759c2d3c265c04eefccf34be829

