
VimLM - LLM-powered Vim assistant

Project description


[VimLM demo]

VimLM brings the power of AI directly into your Vim workflow. Maintain focus with keyboard-driven interactions while leveraging AI for code generation, refactoring, and documentation.

Get started quickly with the tutorial.

Features

  • Native Vim Integration - Split-window responses & intuitive keybindings
  • Offline First - 100% local execution with MLX-compatible models
  • Contextual Awareness - Integrates seamlessly with your codebase and external resources
  • Conversational Workflow - Iterate on responses with follow-up queries
  • Project Scaffolding - Generate and deploy code blocks to directories
  • Extensible - Create custom LLM workflows with command chains

Requirements

  • Apple Silicon (M-series)
  • Python 3.12.8
  • Vim 9.1

Quick Start

pip install vimlm
vimlm

Smart Autocomplete

Basic Usage

Key Binding   Mode     Action
Ctrl-l        Insert   Generate code suggestion
Ctrl-p        Insert   Insert generated code
Ctrl-j        Insert   Generate and insert code

Example Workflow:

  1. Place the cursor where you need code:
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    # <Cursor here>
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
  2. Use Ctrl-j to generate and insert the code (or Ctrl-l to preview the suggestion, then Ctrl-p to insert it)
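For this example, a plausible suggestion is the missing left partition; the completed function would then read (model output is not guaranteed to match exactly):

```python
def quicksort(arr):
    # Recursive three-way partition quicksort.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]    # the line generated at the cursor
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
```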

Repository-Level Code Completion

Option Description
--repo Paths to include as repository context

The --repo option enhances autocomplete by providing repository-level context to the LLM.

Example Workflow:

  1. Launch VimLM with repo context: vimlm main.py --repo utils/*
  2. In Insert mode, place cursor where completion is needed
  3. Ctrl-l to generate suggestions informed by repository context
  4. Ctrl-p to accept and insert the code

Conversational Assistance

Key Binding   Mode            Action
Ctrl-l        Normal/Visual   Prompt LLM
Ctrl-j        Normal          Continue conversation
Ctrl-p        Normal/Visual   Import generated code
Esc           Prompt          Cancel input

1. Contextual Prompting

Ctrl-l to prompt LLM with context:

  • Normal mode: Current file + line
  • Visual mode: Current file + selected block

Example Prompt: Create a Chrome extension

2. Conversational Refinement

Ctrl-j to continue current thread.

Example Prompt: Use manifest V3 instead

3. Code Substitution

Ctrl-p to insert the generated code block:

  • In Normal mode: Into last visual selection
  • In Visual mode: Into current visual selection

Example Workflow:

  1. Select a block of code in Visual mode
  2. Prompt with Ctrl-l: Use regex to remove html tags from item.content
  3. Press Ctrl-p to replace selection with generated code
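The replacement the model generates for a prompt like this might resemble the sketch below (the `strip_html` helper and the `item.content` field are illustrative, not part of VimLM):

```python
import re

def strip_html(content):
    # Remove anything that looks like an HTML tag; a non-greedy
    # character class avoids eating text between tags. HTML
    # entities such as &amp; are left untouched.
    return re.sub(r"<[^>]+>", "", content)
```

Pressing Ctrl-p would then swap the selected block for code along these lines.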

Inline Directives

:VimLM [PROMPT] [!command1] [!command2]...

Use the ! prefix to embed inline directives in prompts:

Directive       Description
!include PATH   Add file/directory/shell output to context
!deploy DEST    Save code blocks to directory
!continue N     Continue stopped response
!followup       Continue conversation

1. Context Layering

!include [PATH]  # Add files/folders to context
  • !include (no path): Current folder
  • !include ~/projects/utils.py: Specific file
  • !include ~/docs/api-specs/: Entire folder
  • !include $(...): Shell command output

Example: Summarize recent changes !include $(git log --oneline -n 50)

2. Code Deployment

!deploy [DEST_DIR]  # Extract code blocks to directory
  • !deploy (no path): Current directory
  • !deploy ./src: Specific directory

Example: Create REST API endpoint !deploy ./api

3. Extending Response

!continue [MAX_TOKENS]  # Continue stopped response
  • !continue: Default 2000 tokens
  • !continue 3000: Custom token limit

Example: tl;dr !include large-file.txt !continue 5000

Command-Line Mode

:VimLM prompt [!command1] [!command2]...

Simplify complex tasks by chaining multiple commands together into a single, reusable Vim command.

Examples:

" Debug CI failures using error logs
:VimLM Fix Dockerfile !include .gitlab-ci.yml !include $(tail -n 20 ci.log)

" Generate unit tests for selected functions and save to test/
:VimLM Write pytest tests for this !include ./src !deploy ./test

" Add docstrings to all Python functions in file
:VimLM Add Google-style docstrings !include % !continue 4000

Configuration

1. Model Settings

Edit ~/vimlm/cfg.json:

{
  "LLM_MODEL": "mlx-community/DeepSeek-R1-Distill-Qwen-7B-4bit",
  "NUM_TOKEN": 32768
}

2. Key Customization

{
  "USE_LEADER": true,
  "KEY_MAP": {
    "l": "]",
    "j": "[",
    "p": "p" 
  }
}

License

Apache 2.0 - See LICENSE for details.

Download files

Download the file for your platform.

Source Distribution

vimlm-0.1.1.tar.gz (16.8 kB)

Uploaded Source

Built Distribution


vimlm-0.1.1-py3-none-any.whl (17.2 kB)

Uploaded Python 3

File details

Details for the file vimlm-0.1.1.tar.gz.

File metadata

  • Download URL: vimlm-0.1.1.tar.gz
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Hashes for vimlm-0.1.1.tar.gz
Algorithm     Hash digest
SHA256        b28ca0ce34da8836aea76158b1b5ba01f68dd1a48b69c23d51cd5aa64553f436
MD5           22cdfe221543787486fe1d85d1c26770
BLAKE2b-256   eccf049e96a56e6cb170085bc7913f7e97a7c736410435f430e3600ea433be63


File details

Details for the file vimlm-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: vimlm-0.1.1-py3-none-any.whl
  • Size: 17.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.8

File hashes

Hashes for vimlm-0.1.1-py3-none-any.whl
Algorithm     Hash digest
SHA256        d31007f2cff1d0f068792836716686c527b8ae642517c5c6ccfa41893ccddf94
MD5           daf7b74c5c6da08c018f962948a2a43f
BLAKE2b-256   e477bcfdf771b1653185c16f386f4e27fd882502b49d71ac8b6c401359e95fe1

