
Songbird

███████╗ ██████╗ ███╗   ██╗ ██████╗ ██████╗ ██╗██████╗ ██████╗ 
██╔════╝██╔═══██╗████╗  ██║██╔════╝ ██╔══██╗██║██╔══██╗██╔══██╗
███████╗██║   ██║██╔██╗ ██║██║  ███╗██████╔╝██║██████╔╝██║  ██║
╚════██║██║   ██║██║╚██╗██║██║   ██║██╔══██╗██║██╔══██╗██║  ██║
███████║╚██████╔╝██║ ╚████║╚██████╔╝██████╔╝██║██║  ██║██████╔╝
╚══════╝ ╚═════╝ ╚═╝  ╚═══╝ ╚═════╝ ╚═════╝ ╚═╝╚═╝  ╚═╝╚═════╝

A terminal-first AI coding companion with 11 professional tools, smart task management, and persistent memory

CI | PyPI version | Python 3.10+ | License: MIT

Quick Start

Option 1: With Gemini (Recommended)

# Install Songbird
pipx install songbird-ai

# Get your free Gemini API key
# Visit: https://aistudio.google.com/app/apikey

# Set your API key
export GOOGLE_API_KEY="your-api-key-here"

# Start coding with AI
songbird

# Continue your previous session
songbird --continue

# Resume from any previous session
songbird --resume

Option 2: With Local Ollama

# Install Songbird
pipx install songbird-ai

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama and pull a model
ollama serve
ollama pull devstral:latest

# Start coding with AI
songbird --provider ollama

# Continue previous session with Ollama
songbird --provider ollama --continue

Features

11 Professional Tools for complete development workflows:

  • Enhanced File Operations: file_search, file_read, file_create, file_edit with syntax highlighting and diff previews
  • Smart Task Management: todo_read, todo_write with automatic prioritization and session persistence
  • Advanced File Discovery: glob pattern matching, grep regex search, enhanced ls directory listing
  • Atomic Operations: multi_edit for safe bulk file changes with rollback capabilities
  • Shell Integration: shell_exec with live output streaming and cross-platform support
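
The live-output streaming behavior described for shell_exec can be sketched with Python's subprocess module. This is an illustrative sketch, not Songbird's actual implementation; stream_command is a hypothetical name:

```python
import subprocess

def stream_command(command: str) -> int:
    """Run a shell command, echoing each output line as it arrives."""
    # Merge stderr into stdout so a single pipe sees everything, in order.
    proc = subprocess.Popen(
        command, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    )
    assert proc.stdout is not None
    for line in proc.stdout:   # iterating yields lines as the process emits them
        print(line, end="")    # echo live instead of waiting for completion
    return proc.wait()         # exit code once the stream is drained

stream_command("echo hello")
```

Reading the pipe line by line is what makes the output appear live; collecting it with `communicate()` would block until the command finished.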

Intelligent Task Management

  • LLM-Powered Auto-Completion: Automatically detects and completes tasks from natural language. Just say "I implemented the JWT tokens" and the system marks the related todos as complete
  • Session-Aware Todos: Create, track, and complete development tasks with automatic priority assignment
  • Smart Prioritization: AI analyzes task content to assign appropriate priority levels
  • Clean Visual Display: Simple bullet points with strikethrough for completed tasks
  • Semantic Understanding: The LLM understands context, so "JWT token system" matches "JWT tokens for authentication"

Smart Todo Management Example:

# Create todos naturally
"I need to implement JWT authentication and user registration"
✓ Creates: "Implement JWT authentication" and "Add user registration"

# Complete todos intelligently  
"I finished the JWT token system and it's working"
✓ Auto-completes: "Implement JWT authentication" (semantic match)
✓ Shows updated list with strikethrough for completed items

# No manual marking needed - just describe what you did!
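
Songbird delegates this matching to the LLM, so the real system understands paraphrases. As a very rough illustration of the idea only (keyword_overlap and the 0.3 threshold are invented here, not Songbird's logic), a naive matcher might look like:

```python
def keyword_overlap(done: str, todo: str) -> float:
    """Crude similarity: fraction of the todo's words found in the 'done' statement."""
    done_words = set(done.lower().split())
    todo_words = set(todo.lower().split())
    return len(done_words & todo_words) / len(todo_words)

todos = ["Implement JWT authentication", "Add user registration"]
statement = "I finished the JWT token system and it's working"

# Mark any todo whose wording overlaps enough with the statement.
completed = [t for t in todos if keyword_overlap(statement, t) >= 0.3]
# → ["Implement JWT authentication"]
```

An LLM goes well beyond word overlap ("JWT token system" vs. "JWT authentication"), which is why the feature works on paraphrases this toy version would miss.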

Advanced File Discovery & Search

  • Glob Patterns: Find files with patterns like **/*.py, src/**/*.js, *test*.{py,js}
  • Regex Content Search: Powerful regex search with context lines and highlighting
  • Enhanced Directory Listing: Rich formatted output with sorting and metadata
  • Smart File Detection: Automatically detects filename vs content searches
  • Type-Specific Search: Filter by file extensions (py, js, md, txt, json, yaml, etc.)
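
The first two pattern styles map directly onto Python's pathlib globbing, shown here against a throwaway directory tree (the layout is invented for the demo). Note that brace sets like *test*.{py,js} are not part of Python's stdlib glob syntax, so a tool supporting them has to expand the braces itself:

```python
import tempfile
from pathlib import Path

# Build a small throwaway tree to run the patterns against.
root = Path(tempfile.mkdtemp())
(root / "src").mkdir()
(root / "main.py").write_text("")
(root / "src" / "app.js").write_text("")
(root / "src" / "test_app.py").write_text("")

# "**/*.py" — every Python file at any depth
py_files = sorted(p.relative_to(root).as_posix() for p in root.glob("**/*.py"))

# "src/**/*.js" — JavaScript files anywhere under src/ (including src/ itself)
js_files = sorted(p.relative_to(root).as_posix() for p in root.glob("src/**/*.js"))
```

Here `py_files` comes out as `["main.py", "src/test_app.py"]` and `js_files` as `["src/app.js"]`.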

Atomic Multi-File Operations

  • Bulk Editing: Edit multiple files simultaneously with safety guarantees
  • Beautiful Previews: Unified diff display for all changes before applying
  • Rollback Protection: Automatic rollback if any operation fails
  • Atomic Transactions: All-or-nothing approach ensures consistency
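
The all-or-nothing idea can be sketched as "back up, apply, restore on failure". This is a simplified illustration of the concept, not Songbird's multi_edit implementation (which also shows diff previews before applying anything):

```python
from pathlib import Path

def apply_all_or_nothing(edits: dict[str, str]) -> None:
    """Apply every edit or none: back up originals and restore them on any failure."""
    backups: dict[str, str | None] = {}
    try:
        for path, new_text in edits.items():
            p = Path(path)
            backups[path] = p.read_text() if p.exists() else None  # None = file was absent
            p.write_text(new_text)
    except Exception:
        for path, old in backups.items():
            if old is None:
                Path(path).unlink(missing_ok=True)   # undo a file we created
            else:
                Path(path).write_text(old)           # restore original contents
        raise
```

If writing the third file fails, the first two are rolled back to their saved contents before the error propagates, so the working tree never ends up half-edited.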

Persistent Memory System

  • Session Persistence: Automatic conversation saving with project-aware storage
  • Seamless Continuation: Resume exactly where you left off with --continue
  • Session Browser: Interactive menu to select from previous sessions with --resume
  • Project Isolation: Each git repository gets separate session storage
  • Visual Replay: Perfect restoration of conversation history with tool outputs
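
Project isolation boils down to deriving a distinct storage location per repository. A minimal sketch of one way to do that, keyed by a hash of the project's absolute path (the base directory and hashing scheme here are assumptions, not Songbird's actual on-disk layout):

```python
import hashlib
from pathlib import Path

def session_dir(project_root: str, base: str = "~/.songbird/sessions") -> Path:
    """Map each project to its own storage directory, keyed by its absolute path."""
    resolved = str(Path(project_root).resolve())
    key = hashlib.sha256(resolved.encode()).hexdigest()[:16]  # stable per-project key
    return Path(base).expanduser() / key
```

Because the key is derived from the resolved path, the same repository always maps to the same directory, and two different repositories can never collide in practice.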

Dynamic Command System

  • In-Chat Commands: Type / for instant command access without leaving conversation
  • Real-Time Model Switching: Change models with /model command - no session restart needed
  • Model Persistence: Model changes automatically save and restore across sessions
  • Help System: Comprehensive /help command with examples and documentation
  • Session Management: /clear command for conversation management

Multi-Provider AI Support

  • 5 AI Providers: OpenAI, Anthropic Claude, Google Gemini, Ollama, and OpenRouter
  • Automatic Provider Selection: Intelligent fallback based on available API keys
  • Cloud & Local: Use powerful cloud models or private local models
  • Dynamic Switching: Switch models and providers instantly during conversations
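
"Intelligent fallback based on available API keys" can be sketched as a preference-ordered scan of environment variables. The ordering and the non-Google variable names below are assumptions based on common conventions (only GOOGLE_API_KEY appears in this README), and the real selection logic may differ:

```python
PROVIDER_ENV = [             # (provider, API-key variable) in preference order
    ("gemini", "GOOGLE_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("claude", "ANTHROPIC_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
]

def pick_provider(env: dict[str, str]) -> str:
    """Return the first provider whose API key is set; fall back to local Ollama."""
    for provider, var in PROVIDER_ENV:
        if env.get(var):
            return provider
    return "ollama"   # no keys present: use the local provider
```

Passing the environment in as a dict (rather than reading os.environ directly) keeps the selection logic trivially testable.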

Safety & Security

  • Repository Sandboxing: Cannot access files outside your project
  • Diff Previews: Review all changes before applying with beautiful unified diffs
  • Atomic Operations: Safe multi-file editing with automatic rollback
  • Input Validation: Comprehensive validation for all tool operations
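
The essence of repository sandboxing is a containment check: resolve the requested path (collapsing symlinks and ".." segments) and reject it unless it lands under the repository root. A minimal sketch, not Songbird's actual validator:

```python
from pathlib import Path

def is_inside_repo(candidate: str, repo_root: str) -> bool:
    """Allow a path only if it resolves to somewhere under the repository root."""
    root = Path(repo_root).resolve()
    target = (root / candidate).resolve()   # collapses symlinks and ".." segments
    return target.is_relative_to(root)
```

Resolving before comparing is the important part: a naive string-prefix check would accept "../secrets" or a symlink pointing outside the repository.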

Installation

Recommended: pipx (for CLI tools)

# Install with pipx (isolated, globally available)
pipx install songbird-ai

# Verify installation
songbird --help

Alternative: uv (fast package manager)

# Install with uv
uv tool install songbird-ai

# Verify installation
songbird --help

Traditional: pip

# Install with pip (may conflict with other packages)
pip install songbird-ai

Getting Started

1. Install Ollama

Linux/WSL
curl -fsSL https://ollama.ai/install.sh | sh
macOS
# Using Homebrew
brew install ollama

# Or download from https://ollama.ai/download
Windows

Download and install from https://ollama.ai/download

2. Start Ollama Server

ollama serve

3. Pull a Coding Model

# Recommended: Devstral (enhanced coding capabilities)
ollama pull devstral:latest

# Alternatives
ollama pull codellama:7b        # Meta's CodeLlama
ollama pull llama3.2:3b         # General purpose, faster
ollama pull deepseek-coder:6.7b # DeepSeek Coder

4. Start Songbird

# Launch interactive chat (uses Gemini if API key is set, otherwise Ollama)
songbird

# Use specific provider
songbird --provider gemini
songbird --provider ollama

# Change models during a conversation with the in-chat /model command:
#   /model gemini-2.0-flash-001   (switch models mid-session)
#   /model qwen2.5-coder:7b       (no restart needed)

# Check available providers
songbird --list-providers

# Check version and commands
songbird --help
songbird version

Usage Examples

# Basic chat session (auto-selects best provider)
songbird

# Use Gemini (powerful, cloud-based)
songbird --provider gemini

# Use Ollama (private, local)
songbird --provider ollama

# List available providers
songbird --list-providers

# Session management
songbird --continue    # Continue latest session
songbird --resume      # Pick from previous sessions

# Show available commands
songbird --help

# Display version
songbird version

In-Chat Commands

Once in a conversation, use these powerful commands:

# Model switching (no session restart needed!)
/model                    # See available models and switch interactively
/model devstral:latest    # Switch to specific model directly
/model gemini-2.0-flash-001  # Switch to Gemini model

# Help and information
/help                     # Show all available commands
/help model               # Get help for specific command
/                         # Quick command menu

# Session management
/clear                    # Clear conversation history
/clear --force            # Clear without confirmation

Development

Prerequisites

  • Python 3.10 or higher
  • uv (recommended) or pip

Setup

# Clone the repository
git clone https://github.com/Spandan7724/songbird
cd songbird

# Install with uv (recommended)
uv sync
uv pip install -e .

# Or with traditional tools
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e .

Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=songbird

# Run specific test file
pytest tests/test_cli.py -v

# Run LLM integration tests (requires Ollama)
pytest tests/llm/ -v

Building

# Build wheel and source distribution
python -m build

# Test local installation
uv tool install ./dist/songbird_ai-*.whl

Roadmap

Songbird follows a test-driven, phase-based development approach:

  • Phase 1: LLM Provider Layer (OpenAI, Claude, Gemini, Ollama, OpenRouter)
  • Phase 2: File Search (enhanced with type filtering and smart detection)
  • Phase 3: Patch Generation & Apply (with beautiful diff previews)
  • Phase 4: Shell Execution (live streaming and cross-platform)
  • Phase 5: Conversation Orchestrator (multi-turn with tool calling)
  • Phase 6: Advanced UI/UX (interactive menus and rich displays)
  • Phase 7: Session Memory (complete with project-aware storage)
  • Phase 8: Dynamic Command System (in-chat model switching)
  • Phase 9: Feature Parity (11 professional tools, task management)
  • Phase 10: MCP Server Protocol
  • Phase 11: Advanced Safety & Permissions
  • Phase 12: Plugin System

Troubleshooting

Ollama Connection Issues

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Restart Ollama service
ollama serve

# Check available models
ollama list

Model Not Found Errors

# Pull the required model
ollama pull qwen2.5-coder:7b

# List available models
ollama list

Gemini API Issues

# Check if API key is set
echo $GOOGLE_API_KEY

# Get a free API key
# Visit: https://aistudio.google.com/app/apikey

# Set API key permanently
echo 'export GOOGLE_API_KEY="your-key-here"' >> ~/.bashrc
source ~/.bashrc

# Test Gemini provider
songbird --provider gemini

Installation Issues

# Upgrade Songbird with pipx
pipx upgrade songbird-ai

# Or reinstall
pipx uninstall songbird-ai
pipx install songbird-ai

License

This project is licensed under the MIT License - see the LICENSE file for details.
