OpenAI-compatible AI chat pane + FIM completion for Spyder 6
Spyder IDE AI Chat Plugin
An OpenAI-compatible AI chat panel for Spyder 6.x. Connect to 12 providers — OpenAI, Groq, Mistral, DeepSeek, Together AI, Fireworks AI, OpenRouter, Azure OpenAI, Ollama, LM Studio, vLLM, or any custom OpenAI-compatible endpoint — all from inside your IDE, without switching windows.
© 2026 Maciej Piecko — MIT License
What's new in 0.5.1
- Chat history search — live search field in the history popup filters by title preview and full message content simultaneously as you type
- Table `<br>` tag fix — line breaks inside table cells are now rendered correctly instead of appearing as literal `<br>` text
- Table scroll fix — mouse wheel over a table now scrolls the chat window instead of scrolling the table widget independently
What's new in 0.5.0
- IPython console context menu — right-click anywhere in the IPython console to access the AI Chat submenu: Add console content to context attaches the full console output (ANSI codes stripped); Add selection to context attaches the highlighted text
- Console attachment colour distinction — console context tags use a teal-green badge colour to distinguish them from the blue editor/file tags at a glance
- Think block show/hide scroll fix — toggling the thinking block no longer causes the chat pane to jump to the bottom; scroll position is preserved
- Live code block rendering — the code block widget appears at the first `` ``` `` fence line and grows in real time; it is finalised when the closing fence arrives
- Code block height fixes — accurate height from `fontMetrics().lineSpacing()`; horizontal scrollbar space reserved; single-line blocks no longer clipped
What's new in 0.4.1
- Default system prompt for new chat — pick a saved prompt as the default in Settings → System Prompts; it is applied automatically every time a new chat is started
- Think block streaming fix — `<think>` blocks render as the collapsible Thinking widget immediately after `</think>` arrives, not only at stream end
- Nested list streaming fix — nested list items now have correct line breaks during progressive rendering
- Code-only message fix — a response that is a single code block no longer produces an empty code widget; it renders correctly when the closing fence arrives
- `build_code_block` crash fix — `UnboundLocalError` when loading saved chats with code blocks is resolved
What's new in 0.4.0
- Processing spinner — braille spinner shown while waiting for the first LLM token; disappears the moment streaming starts
- Incremental markdown rendering — response formatted in real time as it streams; completed blocks become rendered widgets instantly; only the trailing incomplete block is shown as plain text
- HTTP error display — API errors shown in a dark-red styled box with an "⚠ Response error" header; no empty assistant block created on error
- Delete on error blocks — delete button on error response blocks now works correctly
- Regenerate on error — Regenerate button now appears on error response blocks for an immediate retry
What's new in 0.3.2
- Plugin entry points renamed to `ai_chat_plugin` (spyder.plugins) and `ai_fim_provider` (spyder.completions) for clarity
- `NAME`/`CONF_SECTION` in `AIChatPlugin` updated to `"ai_chat_plugin"`; `COMPLETION_PROVIDER_NAME`/`CONF_SECTION` in `AiFimProvider` updated to `"ai_fim_provider"`/`"ai_chat_plugin"`
What's new in 0.3.1
- Settings → ⚡ Auto-complete tab redesigned as a step-by-step wizard: enable → set provider/URL → Load Models → pick model + backend; backend probe validates response body to avoid false-positive matches
- Model list and backend list persist after save and reopen — no need to re-run Load Models every time
- Test Connection button in the Connection tab — probes `GET /models` with an OpenAI-SDK-style `User-Agent` (fixes Cloudflare 403 / error 1010 on Groq and similar providers)
- System Prompts tab: Edit button removed; selecting a prompt immediately opens it for editing; Save activates only when content changes
- Commands tab: Edit button removed; selecting a command immediately opens it for editing; Save activates only when content changes
- Settings window wider (+10%); tabs stretch edge-to-edge; "🖊 Editor" tab renamed to "🖊 Dialogs"
- FIM cursor-offset bug fixed: completions are now correct for files with `\r\n` line endings
What's new in 0.3.0
- AI auto-complete (FIM) — fill-in-middle ghost-text completions in the code editor
- Tab to accept, Escape to dismiss, Alt+\ for manual trigger
- Supports Ollama, LM Studio, vLLM, DeepSeek, Codestral/Mistral, OpenRouter, custom endpoints
- Trigger modes: auto (debounce), after new line, manual
Features
| | Feature | Details |
|---|---|---|
| 🗨️ | Chat panel | Scrollable conversation with colour-coded user / assistant messages |
| ⚡ | Streaming | Token-by-token streaming with live incremental markdown rendering — blocks are formatted as they arrive |
| 🔁 | Model selector | Dropdown populated live from the API — switch models instantly |
| 🔧 | 12 providers | OpenAI, Groq, Mistral, DeepSeek, Together, Fireworks, OpenRouter, Azure, Ollama, LM Studio, vLLM, Custom |
| ⚙ | Inference params | Per-chat hyperparameters popup — provider-aware, resets on New Chat |
| 🔑 | Optional API key | Leave blank for local models that need no authentication |
| 🧠 | System prompt | Custom prompt field, or select from a saved prompts library |
| 💬 | Saved system prompts | Define reusable prompts; manage via Settings → System Prompts tab |
| ⏹ | Stop | Cancel a streaming reply at any time |
| 🗑 | New Chat | Start a fresh conversation; current one saved automatically |
| 📋 | Chat history | Browse, load, and delete saved chats; live search by title or content; active chat highlighted in green |
| 📎 | File context | Attach whole files or selected text from the editor, or IPython console output — colour-coded tags (blue = editor, teal = console) |
| 🖊️ | Markdown rendering | Headings, bold, italic, tables, code blocks, blockquotes, links, strikethrough |
| 🗂 | Nested lists | Arbitrarily deep bullet & numbered lists, mixed types at any level |
| 🧠 | Thinking blocks | `<think>` tags rendered as a collapsible scrollable box (DeepSeek-R1, QwQ, …) |
| 📋 | Copy to editor | Insert any code block or full response at the cursor in the active file |
| 🗑 | Delete exchange | Remove any exchange with a 3-second undo window |
| 🔄 | Regenerate | Re-run the last assistant response with one click |
| ↔ | Horizontal scroll | Wide code blocks scroll horizontally instead of clipping |
| ⚙ | Settings | Tabbed dialog: provider + Test Connection, dialog font sizes, history, system prompts, commands, auto-complete |
| / | Commands | Slash-command aliases with picker dropdown; expand to full prompts before sending |
| ✍️ | AI auto-complete | FIM ghost-text completions in the editor — Tab to accept, Escape to dismiss |
Requirements
- Python ≥ 3.9
- Spyder ≥ 6.0
- No additional Python packages — HTTP via `urllib` (stdlib), UI via Qt (bundled with Spyder)
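Because the plugin talks to OpenAI-compatible endpoints over plain HTTP, a request can be assembled with nothing but the standard library. The sketch below is illustrative only — the base URL and model name are hypothetical examples, not values the plugin hard-codes:

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages, api_key=None):
    """Build a urllib Request for an OpenAI-compatible /chat/completions call."""
    payload = json.dumps({
        "model": model,
        "messages": messages,
        "stream": True,  # token-by-token streaming, as the chat panel uses
    }).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if api_key:  # optional — local servers (Ollama, LM Studio) often need no key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers=headers,
        method="POST",
    )

# Hypothetical local endpoint and model name, for illustration:
req = build_chat_request(
    "http://localhost:11434/v1",
    "llama3",
    [{"role": "user", "content": "Hi"}],
)
# urllib.request.urlopen(req) would then stream SSE "data: {...}" lines
```

The same shape works for any of the listed providers; only the base URL, model, and key change.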
Installation
From PyPI
```
pip install spyder-ai-chat
```
From source / development build
Clone the source code from the repository:
https://sourceforge.net/p/spyder-ai-chat-plugin/code/ci/master/tree/
Then install in editable mode:
```
cd spyder_ai_chat
pip install -e .
```
Important: install into the same Python environment that Spyder uses.
After installation, restart Spyder. The panel appears automatically. If not visible: Window → Panes → AI Chat.
Quick start
- Open Settings (⚙ button in the panel toolbar).
- On the Connection tab, select your Provider from the dropdown.
- Fill in the API URL and key as needed (pre-filled for known providers).
- Click Test Connection to verify credentials, then click OK.
- Click ⟳ to load the model list and pick a model.
- Type a message and press Ctrl+Enter or click Send.
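Under the hood, Test Connection probes `GET /models` with an OpenAI-SDK-style `User-Agent` (see the 0.3.1 notes). A minimal sketch of such a probe — the exact header value and the Groq URL below are illustrative assumptions, not the plugin's literal strings:

```python
import urllib.request

def build_models_probe(base_url, api_key=None):
    """Build a GET /models request in the style of the Test Connection button.

    An OpenAI-SDK-style User-Agent avoids Cloudflare 403 / error 1010 on
    providers such as Groq; the exact value here is only an example.
    """
    headers = {"User-Agent": "OpenAI/Python 1.0.0"}  # illustrative value
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(base_url.rstrip("/") + "/models", headers=headers)

probe = build_models_probe("https://api.groq.com/openai/v1")
# urllib.request.urlopen(probe) would return a {"data": [{"id": ...}, ...]} listing
```

A successful probe returning a model list is what lets the ⟳ button populate the model dropdown.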
To enable AI auto-complete in the editor:
- Open Settings → ⚡ Auto-complete.
- Check Enable AI auto-completion in the editor.
- Select a provider and API URL, then click Load Models.
- Choose a model and backend type, adjust parameters if needed, and click OK.
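Fill-in-middle completion sends the code before and after the cursor as separate fields. A sketch of a DeepSeek-/Codestral-style `/completions` body is below; field names vary by backend (hence the backend picker in Settings), so treat this exact shape as an assumption rather than the plugin's actual wire format:

```python
import json

def build_fim_payload(model, prefix, suffix, max_tokens=64):
    """Sketch of a fill-in-middle request body using `prompt` + `suffix` fields."""
    return json.dumps({
        "model": model,
        "prompt": prefix,        # code before the cursor
        "suffix": suffix,        # code after the cursor
        "max_tokens": max_tokens,
        "stream": False,
    })

# Hypothetical model name, for illustration:
body = build_fim_payload("codestral-latest", "def add(a, b):\n    return ", "\n")
```

The backend probe mentioned in the 0.3.1 notes exists precisely because backends disagree on this shape, so it validates the response body rather than trusting a 200 status.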
License
MIT — see the LICENSE file included in the package.
Project details
Download files
File details
Details for the file spyder_ai_chat-0.5.1.tar.gz.
File metadata
- Download URL: spyder_ai_chat-0.5.1.tar.gz
- Upload date:
- Size: 96.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `690709e2e86539fc5ff4f7ca7d6ca53df72a530a8d995ebc3511c52b7a3fcedc` |
| MD5 | `dffc46ff36b09b6c57e9c974111d4874` |
| BLAKE2b-256 | `d09fe21f870822b4f9c351529995bdb544c08cc29b2a7aad3bd8728c096c68b1` |
File details
Details for the file spyder_ai_chat-0.5.1-py3-none-any.whl.
File metadata
- Download URL: spyder_ai_chat-0.5.1-py3-none-any.whl
- Upload date:
- Size: 90.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.14
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `995d728bfe8655f41f548bc8e59f59ec3cf07d74306573eb9e916cf150fb8c1e` |
| MD5 | `08d7809360a55da31702a0b8f190f6b5` |
| BLAKE2b-256 | `ef92e1f3f34f8e167adec6e34484bca11408cd02f70b03bf6ae3121816ce4add` |