
Secure Notification System with audio capabilities


Airgap SNS (Secure Notification System)

An extensible, modular Python notification framework for handling LLM outputs, trigger-based notifications, and secure communication between air-gapped devices.

Features

  • Burst Sequence Parsing: Detect and parse special notification triggers in text
  • WebSocket Pub/Sub: Real-time notification delivery via WebSockets
  • Audio Transmission: Send data between air-gapped devices using sound (via ggwave)
  • Encryption: Optional AES encryption for secure communication
  • Webhooks: Integration with external systems via HTTP webhooks
  • Water-cooler Channels: Broadcast notifications to groups of subscribers
  • Interactive Client: Command-line interface for sending and receiving notifications
  • Modular Architecture: Easily extensible for custom notification types and delivery methods

Dependencies

pip install fastapi uvicorn websockets aiohttp python-dotenv cryptography ggwave sounddevice numpy

For LLM integration:

# For OpenAI API
pip install openai

# For Ollama (local LLMs)
pip install httpx

For secure tunnel support (optional):

pip install zrok

Note:

  • ggwave and sounddevice are optional dependencies for audio transmission features.
  • zrok is an optional dependency for creating secure tunnels for remote connections.
  • httpx is required for Ollama integration (local LLMs).

Project Structure

.
├── README.md
├── audio.py         # Audio transmission using ggwave
├── burst.py         # Burst sequence parsing
├── client.py        # Notification client
├── crypto.py        # Encryption utilities
├── host.py          # Notification host/server
├── scheduler.py     # Job scheduling
└── webhook.py       # Webhook integration

Burst Sequence Format

Burst sequences are special markers in text that trigger notifications:

!!BURST(dest=user123;wc=42;encrypt=yes;webhook=https://example.com/hook;audio=tx;pwd=secret)!!

Parameters:

  • dest: Destination client ID
  • wc: Water-cooler channel ID
  • encrypt: Whether to encrypt the message (yes/no)
  • webhook: URL to send a webhook notification
  • audio: Audio transmission (tx/none)
  • pwd: Optional password for encryption
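A burst sequence can be picked out of a text stream with a short regular expression. The sketch below is illustrative only; the authoritative grammar lives in burst.py, and the regex and helper name here are assumptions:

```python
import re

# Matches the !!BURST(...)!! marker and captures its parameter string.
# Illustrative guess at the grammar; see burst.py for the real parser.
BURST_RE = re.compile(r"!!BURST\((?P<params>[^)]*)\)!!")

def parse_burst(text):
    """Return the first burst sequence in `text` as a dict, or None."""
    match = BURST_RE.search(text)
    if match is None:
        return None
    params = {}
    for pair in match.group("params").split(";"):
        if "=" in pair:
            key, value = pair.split("=", 1)
            params[key.strip()] = value.strip()
    return params

burst = parse_burst("Done! !!BURST(dest=user123;wc=42;encrypt=yes;audio=tx)!!")
print(burst)  # {'dest': 'user123', 'wc': '42', 'encrypt': 'yes', 'audio': 'tx'}
```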

Environment Variables

The system supports configuration via environment variables or a .env file. Create a .env file in the project root with the following variables:

# LLM Provider settings
# Choose between "openai" or "ollama"
LLM_PROVIDER=openai

# OpenAI settings (when LLM_PROVIDER=openai)
OPENAI_API_KEY=your_api_key_here
DEFAULT_MODEL=gpt-3.5-turbo

# Ollama settings (when LLM_PROVIDER=ollama)
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama2
OLLAMA_STREAM=true

# Authentication key for chat clients
AUTH_KEY=demo-key

# Chat channel name
CHANNEL=demo-chat

# Server configuration
HOST=0.0.0.0
PORT=9000

# Enable/disable features
# Set to "true" to enable, anything else to disable
TUNNEL_ENABLED=false
RELOAD_ENABLED=false

A sample .env.sample file is provided as a template.
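The variables above might be consumed along these lines. The variable names and the "true"-only flag semantics come from this README; the helper itself is hypothetical, and the actual code may read configuration differently:

```python
import os

# With python-dotenv installed, call load_dotenv() first so .env values
# are visible via os.getenv:  from dotenv import load_dotenv; load_dotenv()

def load_config():
    return {
        "llm_provider": os.getenv("LLM_PROVIDER", "openai"),
        "ollama_url": os.getenv("OLLAMA_URL", "http://localhost:11434"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "9000")),
        # Feature flags: only the literal string "true" enables a feature.
        "tunnel_enabled": os.getenv("TUNNEL_ENABLED", "false") == "true",
    }

cfg = load_config()
```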

Usage

Starting the Server

Using the provided script:

# Make the script executable
chmod +x run_server.sh

# Start the server
./run_server.sh

# Start with secure tunnel for remote connections
./run_server.sh --tunnel-on

# Start with auto-reload for development
./run_server.sh --reload

Or manually:

uvicorn host:app --host 0.0.0.0 --port 9000

Running the Client

Basic usage:

python client.py --id user123

With interactive mode:

python client.py --id user123 --interactive

With password for decryption:

python client.py --id user123 --password mysecretpassword

Disable audio features:

python client.py --id user123 --no-audio

Interactive Client Commands

  • /quit - Exit the client
  • /audio <message> - Send message via audio
  • /burst dest=<id>;wc=<channel>;... - Send custom burst
  • /help - Show help
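A slash-command interface like the one above is typically a small dispatch table. This is a hypothetical sketch of the pattern, not the actual client.py implementation:

```python
# Hypothetical command dispatcher; the real client.py may differ.
def handle_line(line, handlers):
    if not line.startswith("/"):
        return ("message", line)  # plain text goes to the channel
    cmd, _, arg = line[1:].partition(" ")
    handler = handlers.get(cmd)
    if handler is None:
        return ("error", f"unknown command: /{cmd}")
    return handler(arg)

handlers = {
    "quit":  lambda arg: ("quit", ""),
    "audio": lambda arg: ("audio", arg),
    "burst": lambda arg: ("burst", arg),
    "help":  lambda arg: ("help", ""),
}

print(handle_line("/audio Hello!", handlers))  # ('audio', 'Hello!')
```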

Testing the System

The project includes several test scripts to verify functionality:

Quick Demo

For a quick demonstration of the system, use the provided shell scripts:

Basic Demo

# Make the script executable (if not already)
chmod +x run_demo.sh

# Run the demo
./run_demo.sh

This script uses tmux to start multiple components in separate windows:

  • Notification server
  • Webhook test server
  • Receiver client (interactive mode)
  • Sender client (interactive mode)

Chat Demo

# Make the script executable (if not already)
chmod +x run_chat_demo.sh

# Run the chat demo
./run_chat_demo.sh

# Run with secure tunnel for remote connections
./run_chat_demo.sh --tunnel-on

This script starts a multi-user chat environment with:

  • Notification server
  • LLM provider client (if OpenAI API key is set)
  • Multiple chat clients
  • Help window with instructions

You can then interact with the system by sending messages between clients.

Automated Tests

Run the automated test suite to verify core functionality:

# Start the server in one terminal
uvicorn host:app --host 0.0.0.0 --port 9000

# Run the tests in another terminal
python test_sns.py

# Include audio tests (requires ggwave and sounddevice)
python test_sns.py --test-audio

# Include webhook tests (requires webhook_test_server.py running)
python test_sns.py --test-webhook

Webhook Testing

To test webhook functionality, run the webhook test server:

# Start the webhook test server
python webhook_test_server.py --port 8000

# View received webhooks
curl http://localhost:8000/webhooks

LLM Integration Demo

Test integration with LLMs using the demo script:

# For OpenAI:
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your_api_key_here
export DEFAULT_MODEL=gpt-3.5-turbo

# OR for Ollama (local LLMs):
export LLM_PROVIDER=ollama
export OLLAMA_MODEL=llama2
export OLLAMA_URL=http://localhost:11434

# Run the LLM integration demo
python llm_integration_demo.py

For the chat demo with Ollama:

# Make sure Ollama is running
ollama serve

# Run the chat demo with Ollama
export LLM_PROVIDER=ollama
./run_chat_demo.sh

Integration with LLMs

LLMs can be instructed to include burst sequences in their output to trigger notifications:

Here's your answer: The capital of France is Paris.

!!BURST(dest=user123;wc=geography;encrypt=no)!!
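A consumer of LLM output would normally strip the burst marker before showing the text to the user. A minimal sketch, assuming the same `!!BURST(...)!!` marker format; the helper name is illustrative:

```python
import re

BURST_RE = re.compile(r"!!BURST\([^)]*\)!!")

def split_llm_output(text):
    """Separate user-visible text from burst triggers (illustrative sketch)."""
    bursts = BURST_RE.findall(text)
    visible = BURST_RE.sub("", text).strip()
    return visible, bursts

visible, bursts = split_llm_output(
    "The capital of France is Paris.\n\n!!BURST(dest=user123;wc=geography;encrypt=no)!!"
)
print(visible)  # The capital of France is Paris.
print(bursts)   # ['!!BURST(dest=user123;wc=geography;encrypt=no)!!']
```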

Audio Transmission

The system can transmit data between air-gapped devices using sound:

# Send a message via audio
python client.py --id sender --interactive
> /audio Hello from an air-gapped device!
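Under the hood, ggwave encodes text into an audio waveform that a speaker can play and a nearby microphone can decode. The sketch below follows the libraries' published examples; audio.py may wrap this differently, and the protocol/volume values are assumptions:

```python
# Sketch of audio transmission via ggwave + sounddevice (both optional deps).
import ggwave
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000  # ggwave's default output sample rate

def send_via_audio(message):
    # encode() returns raw float32 PCM samples as bytes
    waveform = ggwave.encode(message, protocolId=1, volume=20)
    samples = np.frombuffer(waveform, dtype=np.float32)
    sd.play(samples, samplerate=SAMPLE_RATE)
    sd.wait()  # block until playback finishes

send_via_audio("Hello from an air-gapped device!")
```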

Secure Tunnel for Remote Connections

The system supports creating secure tunnels for remote connections using zrok:

  1. Install zrok:

    pip install zrok
    
  2. Configure zrok (first time only):

    zrok login
    
  3. Start the server or chat demo with tunnel enabled:

    ./run_server.sh --tunnel-on
    # or
    ./run_chat_demo.sh --tunnel-on
    
  4. The tunnel URL will be displayed and saved to tunnel_connection.txt

  5. On the remote machine, connect using the tunnel URL:

    python client.py --id remote-user --uri <TUNNEL_URL>
    # or for chat demo
    python chat_app.py --id remote-user --channel demo-chat --host <TUNNEL_URL> --auth-key demo-key
    

Security Considerations

  • All WebSocket connections should be secured with TLS in production
  • Passwords for encryption should be strong and securely managed
  • Audio transmission is susceptible to eavesdropping in shared spaces
  • When using secure tunnels, ensure you trust the tunnel provider
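On password management: a raw password should never be used directly as an encryption key. A common approach, sketched here with the standard library, is to derive a fixed-length key with PBKDF2; crypto.py may use a different scheme, so treat this as an assumption:

```python
import hashlib
import os

# Illustrative: derive a 32-byte key from a password via PBKDF2-HMAC-SHA256.
def derive_key(password, salt, iterations=600_000):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # store the salt alongside the ciphertext
key = derive_key("mysecretpassword", salt)
print(len(key))  # 32 bytes, suitable as an AES-256 key
```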

Example Use Cases

  1. LLM Notifications: Get notified when an LLM completes a task or needs input
  2. Air-gapped Communication: Transfer data between isolated systems
  3. Secure Messaging: Send encrypted messages to specific recipients
  4. Broadcast Alerts: Notify groups of users about important events
  5. Webhook Integration: Trigger external systems based on notifications

License

MIT

Download files

Download the file for your platform.

Source Distribution

airgap_sns-0.0.1.dev1.tar.gz (20.7 kB)

Built Distribution


airgap_sns-0.0.1.dev1-py3-none-any.whl (22.7 kB)

File details

Details for the file airgap_sns-0.0.1.dev1.tar.gz.

File metadata

  • Download URL: airgap_sns-0.0.1.dev1.tar.gz
  • Upload date:
  • Size: 20.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for airgap_sns-0.0.1.dev1.tar.gz:

  • SHA256: e7b40cae1a28e613f44c7e2f236b3809223bdf8efa756abe7d3f829f761d9c39
  • MD5: 79897c1fef23ab45e43da62cbdd4bb07
  • BLAKE2b-256: 9447f808d40fef7d5eeae106238365d1bae0a080598b62a9fb00a952d7291410

File details

Details for the file airgap_sns-0.0.1.dev1-py3-none-any.whl.

File metadata

File hashes

Hashes for airgap_sns-0.0.1.dev1-py3-none-any.whl:

  • SHA256: 20b3d0b887ad4400c03f54c4a68d21c4887d1e89d957d0073f0d320351c7d187
  • MD5: c93451dfde35e6a57bbca9e25964d44e
  • BLAKE2b-256: c7c73696c615ee50a9cb0dbef2d808b47839ed606491c35c148fd292c11b5434
