
A local AI assistant with advanced identity protection and hardware optimization

Project description

Neuron AI Assistant

A powerful local AI assistant with advanced identity protection, hardware optimization, and comprehensive conversation management.

Created by: Dev Patel
Version: 0.4.85

Features

  • Identity Protection: Built-in safeguards against prompt injection and identity tampering
  • Hardware Optimization: Auto-detects CPU, GPU (CUDA), Apple Silicon (MPS), RAM, and VRAM
  • Multiple Models: Support for GPT4All (CPU-friendly) and Mistral 7B (GPU-optimized)
  • Conversation Management: Save, export, and manage chat history
  • Config Security: Cryptographic signing and automatic backups
  • Error Recovery: Automatic backup restoration and config migration
  • Resource Management: Dynamic token limits and OOM handling
  • Diagnostic Tools: Built-in system health checks

Requirements

  • Python: 3.8 or higher
  • RAM: Minimum 4GB (8GB+ recommended)
  • Disk Space: 20GB free (for model downloads)
  • GPU (Optional): NVIDIA with CUDA support for better performance
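
The requirements above can be checked before installing. The following is a minimal sketch, not part of the package: it verifies the Python version and free disk space with the standard library, and probes physical RAM via POSIX `sysconf` (unavailable on Windows, where it reports `None`).

```python
# Quick pre-flight check against the stated requirements.
# RAM probe uses POSIX sysconf and is skipped on platforms without it.
import os
import shutil
import sys

def check_requirements(min_python=(3, 8), min_ram_gb=4, min_disk_gb=20):
    """Return a dict mapping each requirement to True/False (or None if unknown)."""
    results = {"python": sys.version_info[:2] >= min_python}
    # Free disk space in the current directory, where models/ is created.
    free_gb = shutil.disk_usage(".").free / 1e9
    results["disk"] = free_gb >= min_disk_gb
    try:
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
        results["ram"] = ram_gb >= min_ram_gb
    except (ValueError, OSError, AttributeError):
        results["ram"] = None  # cannot determine RAM on this platform
    return results

print(check_requirements())
```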

Installation

Option 1: From PyPI

pip install neuron-ai-assistant

Option 2: From Source

# Clone the repository
git clone https://github.com/devpatel/neuron-ai-assistant.git
cd neuron-ai-assistant

# Install dependencies
pip install -r requirements.txt

# Or install in editable mode with all optional features
# (quotes stop zsh from expanding the brackets)
pip install -e ".[all]"

Option 3: GPU Support

# For NVIDIA GPU (CUDA 11.8)
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# For NVIDIA GPU (CUDA 12.1; cu121 wheels start at torch 2.1)
pip install torch==2.1.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt

Option 4: CPU Only (Smaller)

pip install torch==2.0.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
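
Whichever option you pick, you can confirm the result without risking an `ImportError`. This hedged sketch (not shipped with the package) uses `importlib.util.find_spec` to detect torch before importing it, then reports CUDA availability:

```python
# Probe the PyTorch install chosen above without crashing if torch is absent.
import importlib.util

def torch_status():
    """Return (installed, cuda_available); cuda_available is None when torch is missing."""
    if importlib.util.find_spec("torch") is None:
        return (False, None)
    import torch  # safe: the spec was found
    return (True, torch.cuda.is_available())

installed, cuda = torch_status()
print(f"torch installed: {installed}, CUDA available: {cuda}")
```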

Quick Start

First Run

python neuron_assistant.py

On first run, you'll be asked to:

  1. Enter your name
  2. Select a model (GPT4All or Mistral)
  3. Wait for model download (if needed)

Using the Assistant

# After installation
neuron
# or
neuron-assistant

💻 Commands

Command       Description
/help         Show available commands
/clear        Clear conversation history
/save         Save conversation to text file
/export       Export conversation to JSON
/stats        Show system statistics
/tokens <n>   Set max tokens (16-1024)
/model        Change AI model
/migrate      Fix/update old configs
/diagnose     Run system diagnostics
/reset        Reset assistant completely
/exit         Exit gracefully

🔧 Configuration

The assistant creates these files automatically:

  • config.json - User and model settings
  • config.sig - Cryptographic signature
  • models/ - Downloaded AI models
  • backups/ - Config backups (last 5)
  • .neuron.lock - Instance lock file
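
To illustrate how `config.json` and `config.sig` work together, here is a standard-library sketch of the integrity check. The real assistant uses RSA signatures (see Security Features below); this stand-in substitutes a SHA-256 digest, and the file names simply mirror the list above.

```python
# Sketch: persist a config alongside a digest file and refuse to load on mismatch.
# The actual tool signs with RSA; SHA-256 here is an illustrative stand-in.
import hashlib
import json
from pathlib import Path

def save_config(cfg: dict, path="config.json", sig_path="config.sig"):
    raw = json.dumps(cfg, sort_keys=True).encode()
    Path(path).write_bytes(raw)
    Path(sig_path).write_text(hashlib.sha256(raw).hexdigest())

def load_config(path="config.json", sig_path="config.sig") -> dict:
    raw = Path(path).read_bytes()
    if hashlib.sha256(raw).hexdigest() != Path(sig_path).read_text():
        raise ValueError("config signature mismatch - restore from backups/")
    return json.loads(raw)
```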

Model Comparison

Model        Size    RAM    VRAM   Speed    Quality
GPT4All-J    3.5GB   4GB    0GB    Fast     Good
Mistral 7B   14GB    16GB   12GB   Medium   Excellent
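
A hypothetical helper (not part of the package) shows how the comparison above translates into a choice: fall back to the CPU-friendly model unless the machine clears Mistral's RAM and VRAM columns.

```python
# Pick a model using the thresholds from the comparison table.
def pick_model(ram_gb: float, vram_gb: float = 0.0) -> str:
    if ram_gb >= 16 and vram_gb >= 12:
        return "Mistral 7B"   # GPU-optimized, best quality
    return "GPT4All-J"        # CPU-friendly default

print(pick_model(8))       # low-spec laptop
print(pick_model(32, 24))  # workstation with a large GPU
```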

Advanced Usage

Set HuggingFace Token

export HF_TOKEN="your_token_here"
python neuron_assistant.py

Custom Token Limit

from neuron_assistant import NeuronAssistant

assistant = NeuronAssistant()
assistant.set_max_tokens(256)

Programmatic Use

from neuron_assistant import NeuronAssistant

# Initialize
assistant = NeuronAssistant(hf_token="optional_token")

# Chat
response = assistant.chat("Hello! How are you?")
print(response)

# Save conversation
assistant.save_history("my_chat.txt")
assistant.export_history_json("my_chat.json")

🐛 Troubleshooting

Model Download Fails

# Check disk space
df -h

# Verify internet connection
ping huggingface.co

# Manual download location
ls models/

Config Corrupted

# Run diagnostics
# In chat: /diagnose

# Migrate config
# In chat: /migrate

# Last resort - reset
# In chat: /reset

Out of Memory

# Use smaller model (GPT4All)
# Reduce token limit: /tokens 64
# Clear history: /clear
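
The OOM tips above amount to a retry loop: shrink the token budget and try again. A minimal sketch of that idea, where `generate` is a hypothetical callable standing in for the model (the assistant's real handler may differ):

```python
# On out-of-memory, halve the token budget (never below the floor) and retry.
def generate_with_backoff(generate, prompt, max_tokens=512, floor=16):
    tokens = max_tokens
    while True:
        try:
            return generate(prompt, tokens)
        except MemoryError:
            if tokens <= floor:
                raise  # cannot shrink further; surface the error
            tokens = max(floor, tokens // 2)
```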

GPU Not Detected

# Check CUDA installation
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

🔒 Security Features

  • Creator Lock: Hardcoded creator name prevents identity theft
  • Config Signing: RSA signatures verify config integrity
  • Prompt Injection Detection: Blocks manipulation attempts
  • Output Sanitization: Removes references to other AI companies
  • Backup System: Auto-backups before changes
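
Prompt-injection detection typically boils down to pattern matching on the incoming message. This is an illustrative filter only; the patterns are common examples, not the assistant's actual rule set.

```python
# Flag messages that resemble common prompt-injection attempts.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) (instructions|prompts)",
    r"you are now\b",
    r"pretend (to be|you are)",
    r"(reveal|print) your (system )?prompt",
]

def looks_like_injection(message: str) -> bool:
    text = message.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```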

License

MIT License - See LICENSE file for details

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Submit a pull request

Support

Acknowledgments

Changelog

v0.4.85 (Current)

  • Advanced identity protection
  • Config migration system
  • Comprehensive diagnostics
  • Improved error handling
  • Backup/restore functionality

Made with ❤️ by Dev Patel

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neuron_v0_4-0.4.85.tar.gz (32.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neuron_v0_4-0.4.85-py3-none-any.whl (29.1 kB)

Uploaded Python 3

File details

Details for the file neuron_v0_4-0.4.85.tar.gz.

File metadata

  • Download URL: neuron_v0_4-0.4.85.tar.gz
  • Upload date:
  • Size: 32.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for neuron_v0_4-0.4.85.tar.gz
Algorithm     Hash digest
SHA256        63e41f41c57285df0e9c91596a5b3cf4f7d2745397699dee5acc753beb0176f4
MD5           852cad050f76dbd6418cf57a16e20978
BLAKE2b-256   0b1a21d563906aa381ef92f89e477e71f5b72cd9fa652113f7a1bb3eb7eae288

See the PyPI documentation for more details on using file hashes.
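
To check a download against the SHA256 digest listed above, hash the file locally and compare. A small standard-library helper (not part of the package):

```python
# Compute the SHA-256 digest of a file, reading in chunks to bound memory use.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare sha256_of("neuron_v0_4-0.4.85.tar.gz") with the
# SHA256 value in the table above before installing.
```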

File details

Details for the file neuron_v0_4-0.4.85-py3-none-any.whl.

File metadata

  • Download URL: neuron_v0_4-0.4.85-py3-none-any.whl
  • Upload date:
  • Size: 29.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for neuron_v0_4-0.4.85-py3-none-any.whl
Algorithm     Hash digest
SHA256        90e947d54eeeb8e3044992f82ad7f42def73aab5f1af94ce83e89fb180bbb7f8
MD5           aa5889823b0404cac9b123ccb922a3e6
BLAKE2b-256   634e69e39383fed868b488a04de44fcbcd2faa4756d75659cdd8a2c1923a59ba

See the PyPI documentation for more details on using file hashes.
