OpenAI-compatible AI chat pane + FIM completion for Spyder 6
Project description
Spyder IDE AI Chat Plugin
An OpenAI-compatible AI chat panel for Spyder 6.x. Connect to 12 providers — OpenAI, Groq, Mistral, DeepSeek, Together AI, Fireworks AI, OpenRouter, Azure OpenAI, Ollama, LM Studio, vLLM, or any custom OpenAI-compatible endpoint — all from inside your IDE, without switching windows.
© 2026 Maciej Piecko — MIT License
What's new in 0.8.1
- `run:git` agentic fence — the LLM can run git commands via `` ```run:git `` fences; orange action block; non-blocking QThread execution with ⏳ spinner; inline output panel with 📎 Add to chat / ✕ Dismiss; git availability checked, with a friendly error if git is not on PATH
- Diff syntax highlighting — `patch:` blocks render with coloured unified-diff highlighting in the chat and confirm dialog; internal patch text is always preserved unchanged
- Robust patch application — fuzzy line-search fallback when the LLM omits `@@` line numbers, preventing silent no-op patches
- Agentic system prompt — hash-based auto-upgrade removed; replaced with an amber upgrade notice in Settings; Reset to default button always visible above the collapsible prompt textarea; built-in prompt now enforces `patch:` for edits and `file:` only for new files
- Full absolute paths in file attachments — manually attached files and project context blocks store and display full absolute paths so the LLM can use them directly in fences
- Wrapping context bar — tag bar wraps to the next row instead of expanding the plugin window; project context toggle stays in a fixed right column
- New-chat project context — starting a new chat always disables project context (configurable in Settings → 📁 Context); enabling project context clears any manually attached files
- Live project root for agentic actions — base directory resolved lazily at click time using the live Spyder project root, even if project context is off; home directory is last-resort only
- `.git` metadata files in project context — `HEAD`, `config`, `COMMIT_EDITMSG`, etc. are now correctly collected when project context is active
- Version shown in Settings — plugin version displayed in Settings → Connection tab
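The fuzzy line-search fallback can be sketched as follows. This is an illustration, not the plugin's actual code: `find_hunk_start` is a hypothetical helper that locates a hunk's context lines in the target file instead of trusting a missing or wrong `@@` header.

```python
def find_hunk_start(file_lines, hunk_context, hint=0):
    """Find where a hunk's context lines occur in the file, preferring
    the match closest to the (possibly absent or wrong) @@ line hint."""
    n = len(hunk_context)
    candidates = [i for i in range(len(file_lines) - n + 1)
                  if file_lines[i:i + n] == hunk_context]
    if not candidates:
        return None  # context not found: refuse to apply silently
    return min(candidates, key=lambda i: abs(i - hint))

src = ["a", "b", "c", "b", "c", "d"]
print(find_hunk_start(src, ["b", "c"], hint=3))  # matches at 1 and 3 -> 3
```

Returning `None` rather than a best guess is what prevents the silent no-op patches mentioned above: an unmatched hunk is surfaced as a failure instead of being applied at the wrong place.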
What's new in 0.8.0
- Named chat collections — organise saved chats into user-defined collections (folders inside `~/.spyder_ai_chat/chats/`); fully backward-compatible with existing chat files in the Default collection
- Collection selector in history popup — "Collection:" dropdown lets you browse one collection at a time or switch to ⊕ All Collections to search across all collections simultaneously
- Collection Manager dialog — ⚙ gear button opens a side-by-side manager: create, rename, delete collections; move chats between them individually or in bulk; delete dialog lets you choose to delete chats or move them to another collection first
- Right-click "Move to →" — context menu on any chat row lets you move it to another collection; moving the currently open chat updates the panel's active reference live
- Collection badge in All-Collections view — chat rows show a `[CollectionName]` badge so the source is always visible when browsing all collections
- Auto-complete settings fix — "Context before cursor" field no longer resets to 100 when clicking OK (a Qt intermediate-validation bug with minimum=100 was the root cause; the minimum is now 1)
- Button icon fixes — gear icon in the toolbar now renders in monochrome on Windows instead of coloured emoji
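Since collections are plain folders inside `~/.spyder_ai_chat/chats/`, they are easy to inspect from outside the plugin. A minimal sketch, assuming chats are stored as JSON files (the file extension and exact layout here are an assumption, not the plugin's documented format):

```python
from pathlib import Path

def list_collections(root):
    """Each sub-folder of the chats directory is one collection; loose
    files in the root belong to the Default collection (this is what
    keeps pre-0.8.0 chat files working unchanged)."""
    root = Path(root)
    collections = {"Default": sorted(p.name for p in root.glob("*.json"))}
    for sub in sorted(p for p in root.iterdir() if p.is_dir()):
        collections[sub.name] = sorted(p.name for p in sub.glob("*.json"))
    return collections
```

Because a collection is just a folder, moving a chat between collections (as the Collection Manager and the right-click menu do) amounts to moving one file.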
What's new in 0.7.1
- Project Explorer context menu — right-click file(s) in the Project Explorer to add them to the AI Chat context; multi-select supported; disabled automatically when project context is ON
- Editor context menu fix (Spyder 6.1.4) — AI Chat submenu restored after Spyder 6.1.4 changed the editor menu API; backwards-compatible with Spyder ≤ 6.1.3
- Agentic fixes — overwrite header updates live after file creation; patch reloads the open editor buffer; startup chat action blocks (run/overwrite detection) work correctly on first click without requiring a chat switch
- Agentic: Apply patch colour — changed from purple to lime green for better contrast
- Settings: Agentic tab — compact 2-column layout; prompt template collapsible to save vertical space
What's new in 0.7.0
- Agentic mode — enable in Settings → 🤖 Agentic; the LLM can take direct actions via special code fences, each confirmed with a single click:
  - `` ```file:path `` — create or overwrite a file; new files open automatically in the Spyder editor
  - `` ```run:python `` — send Python code to the active IPython console
  - `` ```install:pip `` — install packages via the console
  - `` ```patch:path `` — apply a unified diff patch to an existing file
- Agentic settings tab: master switch, per-action allow flags, default base path, customisable prompt template with Reset button
- Overwrite detection — action button shows blue "✓ Create file" or amber "⚠ Overwrite file" depending on whether the target exists
- Done badge — executed actions show ✓ Done; hover reveals ↺ Re-run; execution state persisted across chat reloads
- Regenerate fix — 🔄 Regenerate now correctly injects the agentic system prompt
- Streaming flash fix — eliminated the brief floating widget appearing during action block rendering
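All the agentic fences share one `action:argument` header shape. A hypothetical sketch of how such headers can be recognised (this regex is illustrative, not the plugin's actual parser):

```python
import re

# Matches fence openers like file:path, run:python, install:pip, patch:path
FENCE_HEADER = re.compile(r"^```(file|run|install|patch):(\S+)\s*$")

def parse_fence_header(line):
    """Return (action, argument) for an agentic fence opener, else None."""
    m = FENCE_HEADER.match(line.strip())
    return m.groups() if m else None

print(parse_fence_header("```run:python"))  # ('run', 'python')
print(parse_fence_header("```python"))      # None: ordinary code block
```

An ordinary language-tagged fence has no colon, so it falls through to normal code-block rendering; only the action-prefixed headers become one-click confirmation blocks.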
What's new in 0.6.0
- Project-wide context — enable the 📁 Proj. Context toggle in the chat bar to attach your entire Spyder project to the conversation:
- Folder-selection dialog with live token estimate lets you choose which top-level folders to include
- First message sends all selected files in full; subsequent messages send only changed files as a delta (token-efficient)
- Open files with unsaved edits use the live editor buffer — the LLM always sees what you are currently working on
- New unsaved files (not yet on disk) are auto-included from the editor buffer
- File watcher monitors the project directory; the badge shows a changed-file count before each send
- Re-opening a saved chat restores project context; files are silently re-expanded if hashes match, otherwise a stale badge is shown
- Whole-file attachments are blocked while project context is ON (editor selections and console attachments remain available)
- New 📁 Context tab in Settings: max file size, max file count, extra exclusion glob patterns (on top of built-in exclusions and `.gitignore`)
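The changed-files delta can be sketched with content hashes (the plugin does compare hashes when restoring a saved chat; the helper below is an illustration, not its actual code):

```python
import hashlib

def delta_files(files, sent_hashes):
    """Return only files whose content changed since the last send,
    plus the updated path -> hash map. `files` maps path -> text."""
    changed, new_hashes = {}, {}
    for path, text in files.items():
        h = hashlib.sha256(text.encode("utf-8")).hexdigest()
        new_hashes[path] = h
        if sent_hashes.get(path) != h:
            changed[path] = text  # new or modified since the last message
    return changed, new_hashes

first, hashes = delta_files({"a.py": "x = 1", "b.py": "y = 2"}, {})
second, _ = delta_files({"a.py": "x = 1", "b.py": "y = 3"}, hashes)
print(sorted(second))  # only b.py changed
```

The first message (empty hash map) sends everything; every later message sends only the files whose hash moved, which is what keeps repeated sends token-efficient.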
What's new in 0.5.1
- Chat history search — live search field in the history popup filters by title preview and full message content simultaneously as you type
- Table `<br>` tag fix — line breaks inside table cells are now rendered correctly instead of appearing as literal `<br>` text
- Table scroll fix — mouse wheel over a table now scrolls the chat window instead of scrolling the table widget independently
What's new in 0.5.0
- IPython console context menu — right-click anywhere in the IPython console to access the AI Chat submenu: Add console content to context attaches the full console output (ANSI codes stripped); Add selection to context attaches the highlighted text
- Console attachment colour distinction — console context tags use a teal-green badge colour to distinguish them from the blue editor/file tags at a glance
- Think block show/hide scroll fix — toggling the thinking block no longer causes the chat pane to jump to the bottom; scroll position is preserved
- Live code block rendering — the code block widget appears at the first `` ``` `` line and grows in real time; finalised when the closing fence arrives
- Code block height fixes — accurate height from `fontMetrics().lineSpacing()`; horizontal scrollbar space reserved; single-line blocks no longer clipped
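Stripping ANSI codes from captured console output (as "Add console content to context" does) is typically a one-regex job. A standard sketch, not necessarily the plugin's exact pattern:

```python
import re

# CSI sequences: ESC [ ... final-byte (covers colours and cursor moves)
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text):
    """Remove ANSI escape sequences from captured console output."""
    return ANSI_RE.sub("", text)

print(strip_ansi("\x1b[31mError:\x1b[0m bad value"))  # Error: bad value
```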
What's new in 0.4.1
- Default system prompt for new chat — pick a saved prompt as the default in Settings → System Prompts; it is applied automatically every time a new chat is started
- Think block streaming fix — `<think>` blocks render as the collapsible Thinking widget immediately after `</think>` arrives, not only at stream end
- Nested list streaming fix — nested list items now have correct line breaks during progressive rendering
- Code-only message fix — a response that is a single code block no longer produces an empty code widget; it renders correctly when the closing fence arrives
- `build_code_block` crash fix — `UnboundLocalError` when loading saved chats with code blocks is resolved
What's new in 0.4.0
- Processing spinner — braille spinner shown while waiting for the first LLM token; disappears the moment streaming starts
- Incremental markdown rendering — response formatted in real time as it streams; completed blocks become rendered widgets instantly; only the trailing incomplete block is shown as plain text
- HTTP error display — API errors shown in a dark-red styled box with an "⚠ Response error" header; no empty assistant block created on error
- Delete on error blocks — delete button on error response blocks now works correctly
- Regenerate on error — Regenerate button now appears on error response blocks for an immediate retry
What's new in 0.3.2
- Plugin entry points renamed to `ai_chat_plugin` (spyder.plugins) and `ai_fim_provider` (spyder.completions) for clarity
- `NAME`/`CONF_SECTION` in `AIChatPlugin` updated to `"ai_chat_plugin"`; `COMPLETION_PROVIDER_NAME`/`CONF_SECTION` in `AiFimProvider` updated to `"ai_fim_provider"`/`"ai_chat_plugin"`
What's new in 0.3.1
- Settings → ⚡ Auto-complete tab redesigned as a step-by-step wizard: enable → set provider/URL → Load Models → pick model + backend; backend probe validates response body to avoid false-positive matches
- Model list and backend list persist after save and reopen — no need to re-run Load Models every time
- Test Connection button in the Connection tab — probes `GET /models` with an OpenAI-SDK-style `User-Agent` (fixes Cloudflare 403 / error 1010 on Groq and similar providers)
- System Prompts tab: Edit button removed; selecting a prompt immediately opens it for editing; Save activates only when content changes
- Commands tab: Edit button removed; selecting a command immediately opens it for editing; Save activates only when content changes
- Settings window wider (+10%); tabs stretch edge-to-edge; "🖊 Editor" tab renamed to "🖊 Dialogs"
- FIM cursor-offset bug fixed: completions are now correct for files with `\r\n` line endings
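Cursor-offset bugs with `\r\n` endings usually come from converting an editor's (line, column) position into a character offset while counting each newline as one character. A hypothetical sketch of the correct conversion (not the plugin's actual code):

```python
def cursor_offset(text, line, col):
    """Convert a 0-based (line, col) position into a character offset,
    counting '\r\n' as two characters because split('\n') keeps the
    trailing '\r' on each line."""
    lines = text.split("\n")
    return sum(len(l) + 1 for l in lines[:line]) + col

crlf = "a = 1\r\nb = 2\r\n"
lf = "a = 1\nb = 2\n"
print(cursor_offset(crlf, 1, 0))  # 7: the '\r' adds one char per line
print(cursor_offset(lf, 1, 0))    # 6
```

Splitting on `'\n'` alone keeps the `'\r'` inside each line, so the per-line lengths stay correct for both ending styles without a special case.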
What's new in 0.3.0
- AI auto-complete (FIM) — fill-in-middle ghost-text completions in the code editor
- Tab to accept, Escape to dismiss, Alt+\ for manual trigger
- Supports Ollama, LM Studio, vLLM, DeepSeek, Codestral/Mistral, OpenRouter, custom endpoints
- Trigger modes: auto (debounce), after new line, manual
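A FIM request against an OpenAI-compatible completions endpoint typically sends the text before and after the cursor. A sketch of building such a request with stdlib `urllib`; the URL is a placeholder (Ollama's default port), and note that `suffix` support varies by backend: some expect special FIM tokens embedded in the prompt instead.

```python
import json
import urllib.request

def fim_request(prefix, suffix, model,
                url="http://localhost:11434/v1/completions"):
    """Build (not send) an OpenAI-style fill-in-middle completion request."""
    body = json.dumps({
        "model": model,
        "prompt": prefix,       # text before the cursor
        "suffix": suffix,       # text after the cursor
        "max_tokens": 64,
        "temperature": 0.2,
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

req = fim_request("def add(a, b):\n    return ", "\n\nprint(add(1, 2))",
                  "codellama")
print(req.get_method())  # POST (urllib infers POST when data is set)
```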
Features
| | Feature | Details |
|---|---|---|
| 🗨️ | Chat panel | Scrollable conversation with colour-coded user / assistant messages |
| ⚡ | Streaming | Token-by-token streaming with live incremental markdown rendering — blocks are formatted as they arrive |
| 🔁 | Model selector | Dropdown populated live from the API — switch models instantly |
| 🔧 | 12 providers | OpenAI, Groq, Mistral, DeepSeek, Together, Fireworks, OpenRouter, Azure, Ollama, LM Studio, vLLM, Custom |
| ⚙ | Inference params | Per-chat hyperparameters popup — provider-aware, resets on New Chat |
| 🔑 | Optional API key | Leave blank for local models that need no authentication |
| 🧠 | System prompt | Custom prompt field, or select from a saved prompts library |
| 💬 | Saved system prompts | Define reusable prompts; manage via Settings → System Prompts tab |
| ⏹ | Stop | Cancel a streaming reply at any time |
| 🗑 | New Chat | Start a fresh conversation; current one saved automatically |
| 📋 | Chat history | Browse, load, and delete saved chats; live search by title or content; active chat highlighted in green |
| 🗂 | Chat collections | Organise chats into named collections; ⚙ manager to create / rename / delete; right-click to move chats; search within one or across all |
| 📎 | File context | Attach whole files or selected text from the editor, or IPython console output — colour-coded tags (blue = editor, teal = console) |
| 🖊️ | Markdown rendering | Headings, bold, italic, tables, code blocks, blockquotes, links, strikethrough |
| 🗂 | Nested lists | Arbitrarily deep bullet & numbered lists, mixed types at any level |
| 🧠 | Thinking blocks | `<think>` tags rendered as a collapsible scrollable box (DeepSeek-R1, QwQ, …) |
| 📋 | Copy to editor | Insert any code block or full response at the cursor in the active file |
| 🗑 | Delete exchange | Remove any exchange with a 3-second undo window |
| 🔄 | Regenerate | Re-run the last assistant response with one click |
| ↔ | Horizontal scroll | Wide code blocks scroll horizontally instead of clipping |
| ⚙ | Settings | Tabbed dialog: provider + Test Connection, dialog font sizes, history, system prompts, commands, auto-complete |
| / | Commands | Slash-command aliases with picker dropdown; expand to full prompts before sending |
| ✍️ | AI auto-complete | FIM ghost-text completions in the editor — Tab to accept, Escape to dismiss |
| 🤖 | Agentic mode | LLM creates files, runs console code, installs packages, applies patches — one-click confirmation per action |
Requirements
- Python ≥ 3.9
- Spyder ≥ 6.0
- No additional Python packages — HTTP via `urllib` (stdlib), UI via Qt (bundled with Spyder)
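Streaming an OpenAI-compatible chat completion with only the stdlib comes down to POSTing to `/v1/chat/completions` with `"stream": true` and parsing the server-sent `data:` lines. The delta format below is the standard OpenAI streaming shape; the parser is a self-contained sketch working on captured lines, not the plugin's networking code:

```python
import json

def iter_stream_tokens(sse_lines):
    """Yield content deltas from OpenAI-style server-sent-event lines,
    as produced by a streaming POST to /v1/chat/completions."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separators
        payload = line[5:].strip()
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_tokens(sample)))  # Hello
```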
Installation
From PyPI
pip install spyder-ai-chat
From source / development build
Clone the source code from the repository:
https://sourceforge.net/p/spyder-ai-chat-plugin/code/ci/master/tree/
Then install in editable mode:
cd spyder_ai_chat
pip install -e .
Important: install into the same Python environment that Spyder uses.
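To check which environment you are in before installing, run a snippet like this with the interpreter you plan to use for `pip install` (if `import spyder` fails, you are in a different environment from the one Spyder runs in):

```python
import sys

# The interpreter that "pip install" in this environment will target:
print(sys.executable)

try:
    import spyder
    print("Spyder", spyder.__version__)
except ModuleNotFoundError:
    print("Spyder is not installed in this environment")
```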
After installation, restart Spyder. The panel appears automatically. If not visible: Window → Panes → AI Chat.
Quick start
- Open Settings (⚙ button in the panel toolbar).
- On the Connection tab, select your Provider from the dropdown.
- Fill in the API URL and key as needed (pre-filled for known providers).
- Click Test Connection to verify credentials, then click OK.
- Click ⟳ to load the model list and pick a model.
- Type a message and press Ctrl+Enter or click Send.
To enable AI auto-complete in the editor:
- Open Settings → ⚡ Auto-complete.
- Check Enable AI auto-completion in the editor.
- Select a provider and API URL, then click Load Models.
- Choose a model and backend type, adjust parameters if needed, and click OK.
License
MIT — see the LICENSE file included in the package.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file spyder_ai_chat-0.8.1.tar.gz.
File metadata
- Download URL: spyder_ai_chat-0.8.1.tar.gz
- Upload date:
- Size: 140.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `28ad8253d53f28365b5075252536efa652ad85b44aada0c9a4bd85385cf543ee` |
| MD5 | `3496aa2ad5b875ca54904933f4b6708f` |
| BLAKE2b-256 | `68a9ddc788768bd8b4f99508f818d3335f8a13077bd50672778ae851702664d1` |
File details
Details for the file spyder_ai_chat-0.8.1-py3-none-any.whl.
File metadata
- Download URL: spyder_ai_chat-0.8.1-py3-none-any.whl
- Upload date:
- Size: 125.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `eecbf8fdbc72938412a038ec4d24c84fe5aac26a2483fd9a370b3a9fd8a44729` |
| MD5 | `d6146245ba15a40bb3c792a4de9834f2` |
| BLAKE2b-256 | `79fd9c9e88b073e3e0705d0d15835254e0bf4d8e7ee66c323eece17c0352da0d` |