Skip to main content

A local AI assistant with advanced identity protection and hardware optimization

Project description

Neuron AI Assistant

A powerful local AI assistant with advanced identity protection, hardware optimization, and comprehensive conversation management.

Created by: Dev Patel
Version: 0.4.9

Features

  • Identity Protection: Built-in safeguards against prompt injection and identity tampering
  • Hardware Optimization: Auto-detects CPU, GPU (CUDA), Apple Silicon (MPS), RAM, and VRAM
  • Multiple Models: Support for GPT4All (CPU-friendly) and Mistral 7B (GPU-optimized)
  • Conversation Management: Save, export, and manage chat history
  • Config Security: Cryptographic signing and automatic backups
  • Error Recovery: Automatic backup restoration and config migration
  • Resource Management: Dynamic token limits and OOM handling
  • Diagnostic Tools: Built-in system health checks

Requirements

  • Python: 3.8 or higher
  • RAM: Minimum 4GB (8GB+ recommended)
  • Disk Space: 20GB free (for model downloads)
  • GPU (Optional): NVIDIA with CUDA support for better performance

Installation

Option 1: From PyPI (when published)

pip install neuron-ai-assistant

Option 2: From Source

# Clone the repository
git clone https://github.com/devpatel/neuron-ai-assistant.git
cd neuron-ai-assistant

# Install dependencies
pip install -r requirements.txt

# Or install with all features
pip install -e .[all]

Option 3: GPU Support

# For NVIDIA GPU (CUDA 11.8)
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# For NVIDIA GPU (CUDA 12.1)
pip install torch==2.0.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

Option 4: CPU Only (Smaller)

pip install torch==2.0.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt

Quick Start

First Run

python neuron_assistant.py

On first run, you'll be asked to:

  1. Enter your name
  2. Select a model (GPT4All or Mistral)
  3. Wait for model download (if needed)

Using the Assistant

# After installation
neuron
# or
neuron-assistant

💻 Commands

Command Description
/help Show available commands
/clear Clear conversation history
/save Save conversation to text file
/export Export conversation to JSON
/stats Show system statistics
/tokens <n> Set max tokens (16-1024)
/model Change AI model
/migrate Fix/update old configs
/diagnose Run system diagnostics
/reset Reset assistant completely
/exit Exit gracefully

🔧 Configuration

The assistant creates these files automatically:

  • config.json - User and model settings
  • config.sig - Cryptographic signature
  • models/ - Downloaded AI models
  • backups/ - Config backups (last 5)
  • .neuron.lock - Instance lock file

Model Comparison

Model Size RAM VRAM Speed Quality
GPT4All-J 3.5GB 4GB 0GB Fast Good
Mistral 7B 14GB 16GB 12GB Medium Excellent

Advanced Usage

Set HuggingFace Token

export HF_TOKEN="your_token_here"
python neuron_assistant.py

Custom Token Limit

from neuron_assistant import NeuronAssistant

assistant = NeuronAssistant()
assistant.set_max_tokens(256)

Programmatic Use

from neuron_assistant import NeuronAssistant

# Initialize
assistant = NeuronAssistant(hf_token="optional_token")

# Chat
response = assistant.chat("Hello! How are you?")
print(response)

# Save conversation
assistant.save_history("my_chat.txt")
assistant.export_history_json("my_chat.json")

🐛 Troubleshooting

Model Download Fails

# Check disk space
df -h

# Verify internet connection
ping huggingface.co

# Manual download location
ls models/

Config Corrupted

# Run diagnostics
# In chat: /diagnose

# Migrate config
# In chat: /migrate

# Last resort - reset
# In chat: /reset

Out of Memory

# Use smaller model (GPT4All)
# Reduce token limit: /tokens 64
# Clear history: /clear

GPU Not Detected

# Check CUDA installation
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

🔒 Security Features

  • Creator Lock: Hardcoded creator name prevents identity theft
  • Config Signing: RSA signatures verify config integrity
  • Prompt Injection Detection: Blocks manipulation attempts
  • Output Sanitization: Removes references to other AI companies
  • Backup System: Auto-backups before changes

License

MIT License - See LICENSE file for details

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

Support

Acknowledgments

Changelog

v0.4.9 (Current)

  • Advanced identity protection
  • Config migration system
  • Comprehensive diagnostics
  • Improved error handling
  • Backup/restore functionality

Made with heart by Dev Patel

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neuron_v0_4-0.4.9.tar.gz (32.5 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neuron_v0_4-0.4.9-py3-none-any.whl (29.2 kB view details)

Uploaded Python 3

File details

Details for the file neuron_v0_4-0.4.9.tar.gz.

File metadata

  • Download URL: neuron_v0_4-0.4.9.tar.gz
  • Upload date:
  • Size: 32.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for neuron_v0_4-0.4.9.tar.gz
Algorithm Hash digest
SHA256 b117a1d2d5a3ca2824a8c5d733e60a109d4a515508709ae2a5d9ea9c42b33423
MD5 489003722e43273fddad4c46292a5e36
BLAKE2b-256 45f480c16419e7d95a84324a336684b7ad3c15db495a526c3ef61713036035c1

See more details on using hashes here.

File details

Details for the file neuron_v0_4-0.4.9-py3-none-any.whl.

File metadata

  • Download URL: neuron_v0_4-0.4.9-py3-none-any.whl
  • Upload date:
  • Size: 29.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for neuron_v0_4-0.4.9-py3-none-any.whl
Algorithm Hash digest
SHA256 109d8f5f3658d99419330fa9d294756c0c838351b8a37d378e9dedb4cbff63d7
MD5 3c2823e3d3202b6e7ebf966ee70523f5
BLAKE2b-256 5eaaa1b090bd33e349c2a465309d4fbc90246b5e2df868a825af58181f25999b

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page