
Neuron AI Assistant

A powerful local AI assistant with advanced identity protection, hardware optimization, and comprehensive conversation management.

Created by: Dev Patel
Version: 0.4.91

Features

  • Identity Protection: Built-in safeguards against prompt injection and identity tampering
  • Hardware Optimization: Auto-detects CPU, GPU (CUDA), Apple Silicon (MPS), RAM, and VRAM
  • Multiple Models: Support for OpenLLaMA 7B v2 (CPU/GPU) and Mistral 7B (GPU-optimized)
  • Conversation Management: Save, export, and manage chat history
  • Config Security: Cryptographic signing and automatic backups
  • Error Recovery: Automatic backup restoration and config migration
  • Resource Management: Dynamic token limits and OOM handling
  • Diagnostic Tools: Built-in system health checks
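
The hardware-optimization feature described above can be approximated with the standard library alone. This is a hypothetical sketch of the idea, not the project's actual probe (which also measures RAM and VRAM); the function name and returned keys are assumptions:

```python
import platform
import shutil

def detect_hardware() -> dict:
    """Rough hardware probe: identifies Apple Silicon and hints at CUDA.

    A real implementation would also query torch.cuda / torch.backends.mps
    and measure RAM/VRAM; this stdlib-only version shows the shape of it.
    """
    return {
        "machine": platform.machine(),
        # Apple Silicon Macs report Darwin + arm64
        "apple_silicon": platform.system() == "Darwin"
                         and platform.machine() == "arm64",
        # presence of nvidia-smi on PATH hints at a CUDA-capable setup
        "nvidia_smi": shutil.which("nvidia-smi") is not None,
    }
```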

Requirements

  • Python: 3.8 or higher
  • RAM: Minimum 4GB (8GB+ recommended)
  • Disk Space: 20GB free (for model downloads)
  • GPU (Optional): NVIDIA with CUDA support for better performance
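
Checks like these are easy to automate before the first model download. A small stdlib-only sketch; `check_requirements` and the decimal-GB convention are assumptions for illustration, not part of the package:

```python
import shutil
import sys

def check_requirements(path=".", min_free_gb=20):
    """Return a list of problems; an empty list means the machine looks OK."""
    problems = []
    if sys.version_info < (3, 8):
        problems.append("Python 3.8 or higher is required")
    # decimal GB, matching how disk vendors (and this sketch) count
    free_gb = shutil.disk_usage(path).free / 1e9
    if free_gb < min_free_gb:
        problems.append(f"need {min_free_gb} GB free, found {free_gb:.1f} GB")
    return problems
```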

Installation

Option 1: From PyPI

pip install neuron-ai-assistant

Option 2: From Source

# Clone the repository
git clone https://github.com/devpatel/neuron-ai-assistant.git
cd neuron-ai-assistant

# Install dependencies
pip install -r requirements.txt

# Or install with all features
pip install -e .[all]

Option 3: GPU Support

# For NVIDIA GPU (CUDA 11.8)
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# For NVIDIA GPU (CUDA 12.1; cu121 wheels begin at torch 2.1)
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

Option 4: CPU Only (Smaller)

pip install torch==2.0.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt

Quick Start

First Run

python neuron_assistant.py

On first run, you'll be asked to:

  1. Enter your name
  2. Select a model (OpenLLaMA 7B v2 or Mistral 7B)
  3. Wait for model download (if needed)

Using the Assistant

# After installation
neuron
# or
neuron-assistant

💻 Commands

Command        Description
-------------  ------------------------------
/help          Show available commands
/clear         Clear conversation history
/save          Save conversation to text file
/export        Export conversation to JSON
/stats         Show system statistics
/tokens <n>    Set max tokens (16-1024)
/model         Change AI model
/migrate       Fix/update old configs
/diagnose      Run system diagnostics
/reset         Reset assistant completely
/exit          Exit gracefully
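
A command router of this shape fits in a few lines. This sketch is illustrative only: the function names, state keys, and reply strings are assumptions, not the assistant's real interface, though the 16-1024 clamp mirrors the /tokens range above:

```python
def clamp_tokens(n, lo=16, hi=1024):
    """Clamp a requested token limit into the supported 16-1024 range."""
    return max(lo, min(hi, n))

def dispatch(line, state):
    """Minimal router for slash commands like those listed above."""
    parts = line.strip().split()
    if not parts:
        return "empty command"
    cmd, args = parts[0], parts[1:]
    if cmd == "/clear":
        state["history"] = []
        return "history cleared"
    if cmd == "/tokens" and args:
        state["max_tokens"] = clamp_tokens(int(args[0]))
        return f"max tokens set to {state['max_tokens']}"
    return "unknown command (try /help)"
```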

🔧 Configuration

The assistant creates these files automatically:

  • config.json - User and model settings
  • config.sig - Cryptographic signature
  • models/ - Downloaded AI models
  • backups/ - Config backups (last 5)
  • .neuron.lock - Instance lock file
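
The backup rotation described above (keep the last 5 copies) can be sketched as follows; `backup_config` and the timestamped file naming are assumptions for illustration, not the project's actual code:

```python
import shutil
import time
from pathlib import Path

def backup_config(config: Path, backup_dir: Path, keep: int = 5) -> Path:
    """Copy the config into backup_dir, pruning so at most `keep` remain."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"config.{stamp}.json"
    shutil.copy2(config, dest)
    # lexicographic sort == chronological sort for this timestamp format,
    # so everything except the newest `keep` files gets deleted
    for old in sorted(backup_dir.glob("config.*.json"))[:-keep]:
        old.unlink()
    return dest
```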

Model Comparison

Model            Size    RAM   VRAM  Speed   Quality
---------------  ------  ----  ----  ------  ---------
OpenLLaMA 7B v2  13.5GB  8GB   0GB   Medium  Excellent
Mistral 7B       14GB    16GB  12GB  Medium  Excellent
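
The "smart recommendations" mentioned in the changelog presumably compare available memory against figures like those in the table above. A minimal hypothetical version (the thresholds come from the table; the function itself is an assumption):

```python
def recommend_model(ram_gb: float, vram_gb: float) -> str:
    """Suggest a model based on the RAM/VRAM figures in the comparison table."""
    # Mistral 7B wants 16GB RAM and 12GB VRAM; OpenLLaMA runs CPU-only
    if ram_gb >= 16 and vram_gb >= 12:
        return "Mistral 7B"
    return "OpenLLaMA 7B v2"
```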

Advanced Usage

Set HuggingFace Token

export HF_TOKEN="your_token_here"
python neuron_assistant.py

Custom Token Limit

from neuron_assistant import NeuronAssistant

assistant = NeuronAssistant()
assistant.set_max_tokens(256)

Programmatic Use

from neuron_assistant import NeuronAssistant

# Initialize
assistant = NeuronAssistant(hf_token="optional_token")

# Chat
response = assistant.chat("Hello! How are you?")
print(response)

# Save conversation
assistant.save_history("my_chat.txt")
assistant.export_history_json("my_chat.json")

🐛 Troubleshooting

Model Download Fails

# Check disk space
df -h

# Verify internet connection
ping huggingface.co

# Manual download location
ls models/

Config Corrupted

# Run diagnostics
# In chat: /diagnose

# Migrate config
# In chat: /migrate

# Last resort - reset
# In chat: /reset

Out of Memory

# Use OpenLLaMA model instead of Mistral
# Reduce token limit: /tokens 64
# Clear history: /clear

GPU Not Detected

# Check CUDA installation
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

🔒 Security Features

  • Creator Lock: Hardcoded creator name prevents identity theft
  • Config Signing: RSA signatures verify config integrity
  • Prompt Injection Detection: Blocks manipulation attempts
  • Output Sanitization: Removes references to other AI companies
  • Backup System: Auto-backups before changes
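
The config-signing idea reduces to "sign bytes on save, verify on load." The project says it uses RSA signatures; this sketch substitutes stdlib HMAC-SHA256 as a stand-in so the example stays dependency-free, and both function names are assumptions:

```python
import hashlib
import hmac

def sign_config(config_bytes: bytes, key: bytes) -> str:
    """Hex signature over the raw config bytes (HMAC stand-in for RSA)."""
    return hmac.new(key, config_bytes, hashlib.sha256).hexdigest()

def verify_config(config_bytes: bytes, key: bytes, signature: str) -> bool:
    """Constant-time check that the config was not tampered with."""
    return hmac.compare_digest(sign_config(config_bytes, key), signature)
```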

License

MIT License - See LICENSE file for details

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

Changelog

v0.4.91 (Current)

  • OpenLLaMA 7B v2 and Mistral 7B model support
  • Removed GPT4All dependency
  • Enhanced model selection with smart recommendations
  • Improved first-run experience
  • Advanced identity protection
  • Config migration system
  • Comprehensive diagnostics
  • Improved error handling
  • Backup/restore functionality

Made with ❤️ by Dev Patel
