
VimLM - LLM-powered Vim assistant

Project description

VimLM - Vim Language Model Assistant for privacy-conscious developers


VimLM brings AI-powered assistance directly into Vim, keeping your workflow smooth and uninterrupted. Instead of constantly switching between your editor and external tools, VimLM provides real-time, context-aware suggestions, explanations, and code insights—all within Vim.

Unlike proprietary cloud-based AI models, VimLM runs entirely offline, ensuring complete privacy, data security, and control. You’re not just using a tool—you own it. Fine-tune, modify, or extend it as needed, without reliance on big tech platforms.

Features

  • Real-Time Interaction with local LLMs: Runs fully offline with local models (default: uncensored Llama-3.2-3B).
  • Integration with Vim's Native Workflow: Simple Vim keybindings for quick access and split-window interface for non-intrusive responses.
  • Context-Awareness Beyond Single Files: Inline support for external documents and project files for richer, more informed responses.
  • Conversational AI Assistance: Goes beyond simple code suggestions; it explains its reasoning, offers alternative solutions, and supports interactive follow-ups.
  • Versatile Use Cases: Not just for coding; use it for documentation lookup, general Q&A, or even casual (uncensored) conversations.

Installation

pip install vimlm

Usage

  1. Start Vim with VimLM:
vimlm

or

vimlm your_file.js
  2. From Normal Mode:

    • Ctrl-l: Send current line + file context
    • Example prompt: "Regex for removing html tags in item.content"
  3. From Visual Mode:

    • Select text → Ctrl-l: Send selection + file context
    • Example prompt: "Convert this to async/await syntax"
  4. Add Context: Use !@#$ to include additional files/folders:

    • !@#$ (no path): Current folder
    • !@#$ ~/scrap/jph00/hypermedia-applications.summ.md: Specific file
    • !@#$ ~/wtm/utils.py: Specific file
    • Example prompt: "AJAX-ify this app !@#$ ~/scrap/jph00/hypermedia-applications.summ.md"
  5. Follow-Up: After the initial response:

    • Ctrl-R: Continue thread
    • Example follow-up: "In Manifest V3"
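The !@#$ token effectively splits a prompt into an instruction part and a list of context paths. A minimal sketch of how such a prompt could be parsed (a hypothetical illustration with invented names, not VimLM's actual implementation):

```python
# Hypothetical parser for the "!@#$" context token (illustration only;
# VimLM's real implementation may differ).
def parse_prompt(prompt, token="!@#$"):
    """Split a prompt into (instruction, context_paths)."""
    parts = prompt.split(token)
    instruction = parts[0].strip()
    paths = []
    for chunk in parts[1:]:
        chunk = chunk.strip()
        # A bare token with no path means "current folder".
        paths.append(chunk.split()[0] if chunk else ".")
    return instruction, paths
```

For example, `parse_prompt("AJAX-ify this app !@#$ ~/wtm/utils.py")` yields `("AJAX-ify this app", ["~/wtm/utils.py"])`.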

Key Bindings

Binding | Mode          | Action
--------|---------------|-------
Ctrl-L  | Normal/Visual | Send current file + selection to LLM
Ctrl-R  | Normal        | Continue conversation
Esc     | Prompt        | Cancel input


Download files

Download the file for your platform.

Source Distribution

vimlm-0.0.2.tar.gz (6.6 kB)

Uploaded Source

Built Distribution

vimlm-0.0.2-py3-none-any.whl (6.8 kB)

Uploaded Python 3

File details

Details for the file vimlm-0.0.2.tar.gz.

File metadata

  • Download URL: vimlm-0.0.2.tar.gz
  • Upload date:
  • Size: 6.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vimlm-0.0.2.tar.gz
Algorithm Hash digest
SHA256 c227e2b27f8afc3166c79f7cdaea54036d4bf32407e97adc9e5a1ac3b51ce8aa
MD5 a78036eca8190860709a80d24155653f
BLAKE2b-256 2372daf909536de89d4bbadbc3e0dacd16b991407f6ca9e79db28ce6eaac45b8

See more details on using hashes here.
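The published digests can be used to verify a download before installing it. A minimal sketch using Python's standard hashlib (the file name and expected digest are taken from the table above):

```python
import hashlib

def file_sha256(path):
    """Return the SHA256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published digest, e.g. for vimlm-0.0.2.tar.gz:
# expected = "c227e2b27f8afc3166c79f7cdaea54036d4bf32407e97adc9e5a1ac3b51ce8aa"
# assert file_sha256("vimlm-0.0.2.tar.gz") == expected
```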

File details

Details for the file vimlm-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: vimlm-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for vimlm-0.0.2-py3-none-any.whl
Algorithm Hash digest
SHA256 dedf25590bae11c82fd513dc3bc68c8de5488f60de701391f2d4b169d46b0657
MD5 1bbc68c580157c983782ad31fb165350
BLAKE2b-256 f76d95ea70a5ee4dd69f47cf9910ffaf39cd7148eb52073762e7dc165a98d1c0

See more details on using hashes here.
