Neural Shell (nlsh) - AI-driven command-line assistant
nlsh (Neural Shell) is an AI-driven command-line assistant that generates shell commands and one-liners tailored to your system context.
Features
- 🔄 Multi-Backend LLM Support: Configure multiple OpenAI-compatible endpoints (e.g., local Ollama, DeepSeek API, Mistral API) and switch between them using -0, -1, etc.
- 🐚 Shell-Aware Generation: Set your shell (bash/zsh/fish/powershell) via config/env to ensure syntax compatibility.
- 🛡️ Safety First: Never executes commands automatically; works in interactive confirmation mode.
- ⚙️ Configurable: YAML configuration for backends and shell preferences.
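For example, you can pick a configured backend by its index and pin the target shell through an environment variable (both are documented in detail below; the prompt text here is only illustrative):

# Use the second configured backend (index 1)
nlsh -1 list the 10 largest files in the current directory
# Make sure generated commands use fish syntax
export NLSH_SHELL=fish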
Installation
1. Clone the repository

   git clone https://github.com/eqld/nlsh.git
   cd nlsh

2. Install the package

   # Option 1: Install in development mode with all dependencies
   pip install -r requirements.txt
   pip install -e .

   # Option 2: Simple installation
   pip install .

3. Create a configuration file

   mkdir -p ~/.nlsh
   cp examples/config.yml ~/.nlsh/config.yml
   # Edit this file with your API keys

4. Set up your API keys (see the example below)

The package is also available on PyPI:

pip install neural-shell
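For step 4, a minimal example of providing API keys via environment variables, assuming backends configured as in the sample configuration shown below (the exact variable names depend on your config):

# Key referenced as $GROQ_KEY in config.yml
export GROQ_KEY=...
# Or set a key for a backend by its index
export NLSH_BACKEND_0_API_KEY=sk-...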
Usage
Basic usage:
nlsh -1 find all pdfs modified in the last 2 days and compress them
# Example output:
# Suggested: find . -name "*.pdf" -mtime -2 -exec tar czvf archive.tar.gz {} +
# [Confirm] Run this command? (y/N/r) y
# Executing:
# (command output appears here)
With verbose mode for reasoning models:
nlsh -v -2 count lines of code in all javascript files
# Example output:
# Reasoning: To count lines of code in JavaScript files, I can use the 'find' command to locate all .js files,
# then pipe the results to 'xargs wc -l' to count the lines in each file.
# Suggested: find . -name "*.js" -type f | xargs wc -l
# [Confirm] Run this command? (y/N/r) y
# Executing:
# (command output appears here)
Note on Command Execution: nlsh executes commands by reading stdout/stderr line by line. This works well for most commands but might not render the output of highly interactive commands (like those with progress bars) perfectly.
Using nlgc for Commit Messages
The package also includes nlgc (Neural Git Commit) to generate commit messages based on your staged changes:
# Stage your changes first
git add .
# Generate a commit message (using default backend)
nlgc
# Example output:
# Suggested commit message:
# --------------------
# feat: Add nlgc command for AI-generated commit messages
#
# Implements the nlgc command which analyzes staged git diffs
# and uses an LLM to generate conventional commit messages.
# Includes configuration options and CLI flags to control
# whether full file content is included in the prompt.
# --------------------
# [Confirm] Use this message? (y/N/e/r) y
# Executing: git commit -m "feat: Add nlgc command..."
# Commit successful.
# Generate using a specific backend and exclude full file content
nlgc -1 --no-full-files
# Edit the suggested message before committing
nlgc
# [Confirm] Use this message? (y/N/e/r) e
# (Opens your $EDITOR with the message)
# (Save and close editor)
# Using edited message:
# ...
# Commit with this message? (y/N) y
nlgc analyzes the diff of staged files and, optionally, their full content to generate a conventional commit message. You can confirm, edit (e), or regenerate (r) the message.
Configuration
Create ~/.nlsh/config.yml:
shell: "zsh" # Override with env $NLSH_SHELL
backends:
- name: "local-ollama"
url: "http://localhost:11434/v1"
api_key: "ollama"
model: "llama3"
- name: "groq-cloud"
url: "https://api.groq.com/v1"
api_key: $GROQ_KEY
model: "llama3-70b-8192"
- name: "deepseek-reasoner"
url: "https://api.deepseek.com/v1"
api_key: $DEEPSEEK_API_KEY
model: "deepseek-reasoner"
is_reasoning_model: true # Mark as a reasoning model for verbose mode
default_backend: 0
# Configuration for the 'nlgc' (Neural Git Commit) command
# Override with environment variable: NLSH_NLGC_INCLUDE_FULL_FILES (true/false)
nlgc:
# Whether to include the full content of changed files in the prompt
# sent to the LLM for commit message generation. Provides more context
# but increases token usage significantly. Can be overridden with
# --full-files or --no-full-files flags.
include_full_files: true
- The `is_reasoning_model` flag is used by `nlsh` to identify models that provide reasoning tokens in their responses. When this flag is set to `true` and verbose mode (-v) is enabled, the tool will display the model's reasoning process.
- The `nlgc.include_full_files` setting controls whether `nlgc` sends the full content of changed files to the LLM by default. This provides more context but uses more tokens. Use the `--full-files` or `--no-full-files` flags with `nlgc` to override this setting for a single run. If the context becomes too large for the model, `nlgc` will suggest using `--no-full-files`. Note that `nlgc` currently truncates individual files larger than ~100 KB before adding them to the prompt to help prevent context overflows.
Environment Variable Overrides
You can override configuration settings using environment variables:
- `NLSH_SHELL`: Overrides the `shell` setting (e.g., `export NLSH_SHELL=fish`).
- `NLSH_DEFAULT_BACKEND`: Overrides the `default_backend` index (e.g., `export NLSH_DEFAULT_BACKEND=1`).
- `NLSH_NLGC_INCLUDE_FULL_FILES`: Overrides `nlgc.include_full_files` (`true` or `false`).
- `[BACKEND_NAME]_API_KEY`: Sets the API key for a named backend (e.g., `export OPENAI_API_KEY=sk-...`). This takes precedence over `$VAR` references in the config file.
- `NLSH_BACKEND_[INDEX]_API_KEY`: Sets the API key for a backend by its index (e.g., `export NLSH_BACKEND_0_API_KEY=sk-...`).
Advanced Features
Command Regeneration
You can ask for a different command by responding with 'r':
nlsh -i find large files
# Example output:
# Suggested: find . -type f -size +100M
# [Confirm] Run this command? (y/N/r) r
# Regenerating command...
# Suggested: du -h -d 1 | sort -hr
# [Confirm] Run this command? (y/N/r) y
# (command output appears here)
This tells the model not to suggest the same command again and to try a different approach.
Request Logging
You can log all requests sent to the LLM, along with its responses, to a file:
nlsh --log-file ~/.nlsh/logs/requests.log find all python files modified in the last week
The log file will contain JSON entries with timestamps, backend information, prompts, system context, and responses.
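Assuming the log is written as one JSON object per line (an assumption, not something documented here), you can inspect the entries with standard tools:

# Pretty-print logged request/response entries
jq . ~/.nlsh/logs/requests.log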
Verbose Mode
Use -v for reasoning tokens and -vv for additional debug information:
# Show reasoning (single verbose)
nlsh -v find all python files modified in the last week
# Example output:
# Reasoning: I need to find Python files that were modified in the last 7 days.
# The command to find files by extension is 'find' with the '-name' option.
# To filter by modification time, I'll use '-mtime -7' which means "modified less than 7 days ago".
# Suggested: find . -name "*.py" -mtime -7
# [Confirm] Run this command? (y/N/r) y
# (command output appears here)
# Show reasoning and debug info (double verbose)
nlsh -vv count lines in python files
# Example output:
# Reasoning: Let's break this down...
# (Plus stack traces and debug info in case of errors)
Single verbose mode (-v) shows the model's reasoning process, while double verbose mode (-vv) additionally displays stack traces and debug information when errors occur. The reasoning tokens are displayed in real-time as they're generated, giving you insight into how the model arrived at its answer.
Custom Prompts (nlsh only)
Use --prompt-file with nlsh for complex tasks:
nlsh --prompt-file migration_task.txt
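For example, a hypothetical migration_task.txt could hold a multi-line task description (the file name and contents here are purely illustrative):

cat > migration_task.txt << 'EOF'
Rename all .jpeg files to .jpg recursively and update references to them in *.html files.
EOF
nlsh --prompt-file migration_task.txt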
nlgc Specific Options
- `--full-files`: Forces `nlgc` to include the full content of changed files in the prompt, overriding the `nlgc.include_full_files` config setting.
- `--no-full-files`: Forces `nlgc` to exclude the full content of changed files from the prompt, overriding the config setting. Useful if you encounter context length errors.
- `-a`, `--all`: Makes `nlgc` consider all tracked, modified files, not just the ones staged for commit.
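These flags can be combined; for example, to generate a message from all modified tracked files while keeping the prompt small:

# Consider all tracked, modified files and omit full file contents from the prompt
nlgc -a --no-full-files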
Security
- Command execution requires explicit user confirmation (`y/N/r`, or `y/N/e/r` for `nlgc`).
- Commands are only displayed and never executed automatically.
- All generated commands are shown to the user before any execution.
- `nlsh` uses `subprocess.Popen` with `shell=True` to execute the generated commands. While this is necessary for interpreting complex shell syntax, it carries inherent risks if a user confirms a malicious command. The mandatory confirmation step is the primary safeguard against accidental execution of harmful commands. Always review suggested commands carefully.
Development
If you want to develop or debug nlsh locally without installing it system-wide, follow these steps to set up a virtual environment:
Setting Up a Virtual Environment
# Clone the repository if you haven't already
git clone https://github.com/eqld/nlsh.git
cd nlsh
# Create a virtual environment
python -m venv .venv
# Activate the virtual environment
# On Linux/macOS:
source .venv/bin/activate
# On Windows:
# .venv\Scripts\activate
# Install development dependencies
pip install -r requirements.txt
# Install the package in development mode
pip install -e .
Running the Development Version
Once you have set up your virtual environment and installed the package in development mode, you can run the development version of nlsh:
# Make sure your virtual environment is activated
python -m nlsh.main your prompt here
# Or use the entry points directly
nlsh your prompt here
nlgc
Debugging
For debugging, you can use your preferred IDE's debugging tools. For example, with VS Code:
- Set breakpoints in the code
- Create a launch configuration in `.vscode/launch.json`:

  {
    "version": "0.2.0",
    "configurations": [
      {
        "name": "Debug nlsh",
        "type": "debugpy",
        "request": "launch",
        "module": "nlsh.main",
        "args": ["Your test prompt"],
        "console": "integratedTerminal"
      }
    ]
  }

  Use "module": "nlsh.git_commit" to debug nlgc; in that case leave "args" empty or provide flags instead of a prompt.
- Start debugging from the VS Code debug panel
Contributing
PRs welcome! Please make sure to set up a development environment as described above, and ensure all tests pass before submitting a pull request.
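A typical pre-PR check, assuming the test suite can be run with pytest (the project's actual test runner is not specified here):

# From the repository root, with the virtual environment activated
python -m pytest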
License
MIT © 2025 eqld
Download files
File details
Details for the file neural_shell-1.0.1.tar.gz.
File metadata
- Download URL: neural_shell-1.0.1.tar.gz
- Upload date:
- Size: 24.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4faf5e66c0af9545941adf51b6043a1fc95dd3c5c698cfaa1c2e6898e04fe6e0 |
| MD5 | a51a16e7ba9ef5428b14220efd866a50 |
| BLAKE2b-256 | c14fc74f9e96b43eeadc0195d4595f6d030286dde0c085c3d0c8cb17aa0245ec |
Provenance
The following attestation bundles were made for neural_shell-1.0.1.tar.gz:
Publisher: python-publish.yml on eqld/nlsh

- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: neural_shell-1.0.1.tar.gz
  - Subject digest: 4faf5e66c0af9545941adf51b6043a1fc95dd3c5c698cfaa1c2e6898e04fe6e0
- Sigstore transparency entry: 192626280
- Sigstore integration time:
- Permalink: eqld/nlsh@0ea138ee50116a7032c70901c36572337996bca3
- Branch / Tag: refs/tags/v1.0.1
- Owner: https://github.com/eqld
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@0ea138ee50116a7032c70901c36572337996bca3
- Trigger Event: release
File details
Details for the file neural_shell-1.0.1-py3-none-any.whl.
File metadata
- Download URL: neural_shell-1.0.1-py3-none-any.whl
- Upload date:
- Size: 29.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d3d29f53691fb936376e71f5f2e7c35e7c7deca10541ebdbf07b7c5bc207e26e |
| MD5 | 95b0deebef9ea7363d654098aafa633d |
| BLAKE2b-256 | 50777a2c1507241ba315b45b09428634c1927c307d502d8fbfa275e36c12b181 |
Provenance
The following attestation bundles were made for neural_shell-1.0.1-py3-none-any.whl:
Publisher: python-publish.yml on eqld/nlsh

- Statement:
  - Statement type: https://in-toto.io/Statement/v1
  - Predicate type: https://docs.pypi.org/attestations/publish/v1
  - Subject name: neural_shell-1.0.1-py3-none-any.whl
  - Subject digest: d3d29f53691fb936376e71f5f2e7c35e7c7deca10541ebdbf07b7c5bc207e26e
- Sigstore transparency entry: 192626285
- Sigstore integration time:
- Permalink: eqld/nlsh@0ea138ee50116a7032c70901c36572337996bca3
- Branch / Tag: refs/tags/v1.0.1
- Owner: https://github.com/eqld
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@0ea138ee50116a7032c70901c36572337996bca3
- Trigger Event: release