
OpenAI-compatible AI chat pane + FIM completion for Spyder 6

Project description

Spyder IDE AI Chat Plugin

An OpenAI-compatible AI chat panel for Spyder 6.x. Connect to 12 providers — OpenAI, Groq, Mistral, DeepSeek, Together AI, Fireworks AI, OpenRouter, Azure OpenAI, Ollama, LM Studio, vLLM, or any custom OpenAI-compatible endpoint — all from inside your IDE, without switching windows.

© 2026 Maciej Piecko — MIT License


What's new in 0.5.1

  • Chat history search — live search field in the history popup filters by title preview and full message content simultaneously as you type
  • Table <br> tag fix — line breaks inside table cells are now rendered correctly instead of appearing as literal <br> text
  • Table scroll fix — mouse wheel over a table now scrolls the chat window instead of scrolling the table widget independently

What's new in 0.5.0

  • IPython console context menu — right-click anywhere in the IPython console to access the AI Chat submenu: Add console content to context attaches the full console output (ANSI codes stripped); Add selection to context attaches the highlighted text
  • Console attachment colour distinction — console context tags use a teal-green badge colour to distinguish them from the blue editor/file tags at a glance
  • Think block show/hide scroll fix — toggling the thinking block no longer causes the chat pane to jump to the bottom; scroll position is preserved
  • Live code block rendering — the code block widget appears at the first ``` line and grows in real time; finalised when the closing fence arrives
  • Code block height fixes — accurate height from fontMetrics().lineSpacing(); horizontal scrollbar space reserved; single-line blocks no longer clipped

What's new in 0.4.1

  • Default system prompt for new chat — pick a saved prompt as the default in Settings → System Prompts; it is applied automatically every time a new chat is started
  • Think block streaming fix — <think> blocks render as the collapsible Thinking widget immediately after </think> arrives, not only at stream end
  • Nested list streaming fix — nested list items now have correct line breaks during progressive rendering
  • Code-only message fix — a response that is a single code block no longer produces an empty code widget; it renders correctly when the closing fence arrives
  • build_code_block crash fix — UnboundLocalError when loading saved chats with code blocks is resolved

What's new in 0.4.0

  • Processing spinner — braille spinner shown while waiting for the first LLM token; disappears the moment streaming starts
  • Incremental markdown rendering — response formatted in real time as it streams; completed blocks become rendered widgets instantly; only the trailing incomplete block is shown as plain text
  • HTTP error display — API errors shown in a dark-red styled box with an "⚠ Response error" header; no empty assistant block created on error
  • Delete on error blocks — delete button on error response blocks now works correctly
  • Regenerate on error — Regenerate button now appears on error response blocks for an immediate retry

What's new in 0.3.2

  • Plugin entry-point renamed to ai_chat_plugin (spyder.plugins) and ai_fim_provider (spyder.completions) for clarity
  • NAME / CONF_SECTION in AIChatPlugin updated to "ai_chat_plugin"; COMPLETION_PROVIDER_NAME / CONF_SECTION in AiFimProvider updated to "ai_fim_provider" / "ai_chat_plugin"

What's new in 0.3.1

  • Settings → ⚡ Auto-complete tab redesigned as a step-by-step wizard: enable → set provider/URL → Load Models → pick model + backend; backend probe validates response body to avoid false-positive matches
  • Model list and backend list persist after save and reopen — no need to re-run Load Models every time
  • Test Connection button in the Connection tab — probes GET /models with an OpenAI-SDK-style User-Agent (fixes Cloudflare 403 / error 1010 on Groq and similar providers)
  • System Prompts tab: Edit button removed; selecting a prompt immediately opens it for editing; Save activates only when content changes
  • Commands tab: Edit button removed; selecting a command immediately opens it for editing; Save activates only when content changes
  • Settings window wider (+10%); tabs stretch edge-to-edge; "🖊 Editor" tab renamed to "🖊 Dialogs"
  • FIM cursor-offset bug fixed: completions are now correct for files with \r\n line endings
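The Test Connection probe described above can be sketched with the stdlib alone. The helper name and the exact User-Agent string here are illustrative, not the plugin's actual code:

```python
import urllib.request

def build_models_probe(base_url, api_key=None):
    """Build a GET /models request with an OpenAI-SDK-style User-Agent.

    Some gateways (e.g. Cloudflare in front of Groq) reject urllib's
    default User-Agent with a 403; mimicking the OpenAI SDK avoids that.
    """
    headers = {"User-Agent": "OpenAI/Python 1.0.0"}  # illustrative value
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(base_url.rstrip("/") + "/models",
                                  headers=headers)
```

A 200 response whose body actually lists models confirms both the URL and the key; checking the body (not just the status code) is what rules out the false positives mentioned above.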

What's new in 0.3.0

  • AI auto-complete (FIM) — fill-in-middle ghost-text completions in the code editor
  • Tab to accept, Escape to dismiss, Alt+\ for manual trigger
  • Supports Ollama, LM Studio, vLLM, DeepSeek, Codestral/Mistral, OpenRouter, custom endpoints
  • Trigger modes: auto (debounce), after new line, manual
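A FIM request to an OpenAI-compatible completions endpoint sends the text before and after the cursor separately, and the server generates only what belongs in between. A minimal sketch, assuming a local Ollama-style server and a placeholder model name (adjust both for your provider; this is not the plugin's internal code):

```python
import json
import urllib.request

# Hypothetical local endpoint; change URL and model for your setup.
FIM_URL = "http://localhost:11434/v1/completions"

def build_fim_request(prefix, suffix, model="codestral-latest"):
    """Build a fill-in-middle completion request."""
    payload = {
        "model": model,
        "prompt": prefix,    # code before the cursor
        "suffix": suffix,    # code after the cursor
        "max_tokens": 64,
        "temperature": 0.2,
        "stream": False,
    }
    return urllib.request.Request(
        FIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
```

The completion returned for such a request is what the plugin shows as ghost text at the cursor.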

Features

  • 🗨️ Chat panel — Scrollable conversation with colour-coded user / assistant messages
  • Streaming — Token-by-token streaming with live incremental markdown rendering; blocks are formatted as they arrive
  • 🔁 Model selector — Dropdown populated live from the API; switch models instantly
  • 🔧 12 providers — OpenAI, Groq, Mistral, DeepSeek, Together, Fireworks, OpenRouter, Azure, Ollama, LM Studio, vLLM, Custom
  • Inference params — Per-chat hyperparameters popup; provider-aware, resets on New Chat
  • 🔑 Optional API key — Leave blank for local models that need no authentication
  • 🧠 System prompt — Custom prompt field, or select from a saved prompts library
  • 💬 Saved system prompts — Define reusable prompts; manage via Settings → System Prompts tab
  • Stop — Cancel a streaming reply at any time
  • 🗑 New Chat — Start a fresh conversation; the current one is saved automatically
  • 📋 Chat history — Browse, load, and delete saved chats; live search by title or content; active chat highlighted in green
  • 📎 File context — Attach whole files or selected text from the editor, or IPython console output; colour-coded tags (blue = editor, teal = console)
  • 🖊️ Markdown rendering — Headings, bold, italic, tables, code blocks, blockquotes, links, strikethrough
  • 🗂 Nested lists — Arbitrarily deep bullet & numbered lists, mixed types at any level
  • 🧠 Thinking blocks — <think> tags rendered as a collapsible scrollable box (DeepSeek-R1, QwQ, …)
  • 📋 Copy to editor — Insert any code block or full response at the cursor in the active file
  • 🗑 Delete exchange — Remove any exchange with a 3-second undo window
  • 🔄 Regenerate — Re-run the last assistant response with one click
  • Horizontal scroll — Wide code blocks scroll horizontally instead of clipping
  • Settings — Tabbed dialog: provider + Test Connection, dialog font sizes, history, system prompts, commands, auto-complete
  • / Commands — Slash-command aliases with picker dropdown; expand to full prompts before sending
  • ✍️ AI auto-complete — FIM ghost-text completions in the editor; Tab to accept, Escape to dismiss

Requirements

  • Python ≥ 3.9
  • Spyder ≥ 6.0
  • No additional Python packages — HTTP via urllib (stdlib), UI via Qt (bundled with Spyder)
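Because the plugin talks to OpenAI-compatible APIs with urllib alone, the streaming path boils down to reading Server-Sent-Event lines from the response and pulling the text delta out of each `data:` chunk. A minimal sketch of that parsing step (the function name is ours, not the plugin's):

```python
import json

def parse_sse_line(line):
    """Return the text delta carried by one SSE 'data:' line from an
    OpenAI-compatible streaming response, or None for anything else
    (empty keep-alive lines, the final 'data: [DONE]' sentinel)."""
    if not line.startswith("data: ") or line == "data: [DONE]":
        return None
    chunk = json.loads(line[len("data: "):])
    return chunk["choices"][0]["delta"].get("content", "")
```

Feeding each decoded response line through a function like this yields the token stream that the chat pane renders incrementally.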

Installation

From PyPI

pip install spyder-ai-chat

From source / development build

Clone the source code from the repository:

https://sourceforge.net/p/spyder-ai-chat-plugin/code/ci/master/tree/

Then install in editable mode:

cd spyder_ai_chat
pip install -e .

Important: install into the same Python environment that Spyder uses.

After installation, restart Spyder. The panel appears automatically. If not visible: Window → Panes → AI Chat.


Quick start

  1. Open Settings (⚙ button in the panel toolbar).
  2. On the Connection tab, select your Provider from the dropdown.
  3. Fill in the API URL and key as needed (pre-filled for known providers).
  4. Click Test Connection to verify credentials, then click OK.
  5. Load the model list and pick a model from the dropdown.
  6. Type a message and press Ctrl+Enter or click Send.

To enable AI auto-complete in the editor:

  1. Open Settings → ⚡ Auto-complete.
  2. Check Enable AI auto-completion in the editor.
  3. Select a provider and API URL, then click Load Models.
  4. Choose a model and backend type, adjust parameters if needed, and click OK.

License

MIT — see the LICENSE file included in the package.

Download files

Download the file for your platform.

Source Distribution

spyder_ai_chat-0.5.1.tar.gz (96.9 kB)

Uploaded Source

Built Distribution


spyder_ai_chat-0.5.1-py3-none-any.whl (90.2 kB)

Uploaded Python 3

File details

Details for the file spyder_ai_chat-0.5.1.tar.gz.

File metadata

  • Download URL: spyder_ai_chat-0.5.1.tar.gz
  • Upload date:
  • Size: 96.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for spyder_ai_chat-0.5.1.tar.gz
Algorithm Hash digest
SHA256 690709e2e86539fc5ff4f7ca7d6ca53df72a530a8d995ebc3511c52b7a3fcedc
MD5 dffc46ff36b09b6c57e9c974111d4874
BLAKE2b-256 d09fe21f870822b4f9c351529995bdb544c08cc29b2a7aad3bd8728c096c68b1
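To check a downloaded sdist against the SHA256 digest above before installing, a chunked stdlib hash is enough (the helper name is ours, for illustration):

```python
import hashlib

EXPECTED = "690709e2e86539fc5ff4f7ca7d6ca53df72a530a8d995ebc3511c52b7a3fcedc"

def sha256_of(path, chunk_size=1 << 16):
    """Stream the file through sha256 in chunks so large archives
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage: sha256_of("spyder_ai_chat-0.5.1.tar.gz") == EXPECTED
```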


File details

Details for the file spyder_ai_chat-0.5.1-py3-none-any.whl.

File metadata

  • Download URL: spyder_ai_chat-0.5.1-py3-none-any.whl
  • Upload date:
  • Size: 90.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for spyder_ai_chat-0.5.1-py3-none-any.whl
Algorithm Hash digest
SHA256 995d728bfe8655f41f548bc8e59f59ec3cf07d74306573eb9e916cf150fb8c1e
MD5 08d7809360a55da31702a0b8f190f6b5
BLAKE2b-256 ef92e1f3f34f8e167adec6e34484bca11408cd02f70b03bf6ae3121816ce4add

