Secure Notification System with audio capabilities

Project description

Airgap SNS (Secure Notification System)

A Python implementation of an extensible, modular notification framework for handling LLM outputs, firing notifications on configured triggers, and securing communication between air-gapped devices.

Features

  • Burst Sequence Parsing: Detect and parse special notification triggers in text
  • WebSocket Pub/Sub: Real-time notification delivery via WebSockets
  • Audio Transmission: Send data between air-gapped devices using sound (via ggwave)
  • Encryption: Optional AES encryption for secure communication
  • Webhooks: Integration with external systems via HTTP webhooks
  • Water-cooler Channels: Broadcast notifications to groups of subscribers
  • Interactive Client: Command-line interface for sending and receiving notifications
  • Modular Architecture: Easily extensible for custom notification types and delivery methods

Dependencies

pip install fastapi uvicorn websockets aiohttp python-dotenv cryptography ggwave sounddevice numpy

For LLM integration:

# For OpenAI API
pip install openai

# For Ollama (local LLMs)
pip install httpx

For secure tunnel support (optional):

pip install zrok

Note:

  • ggwave and sounddevice are optional dependencies for audio transmission features.
  • zrok is an optional dependency for creating secure tunnels for remote connections.
  • httpx is required for Ollama integration (local LLMs).

Project Structure

.
├── README.md
├── audio.py         # Audio transmission using ggwave
├── burst.py         # Burst sequence parsing
├── client.py        # Notification client
├── crypto.py        # Encryption utilities
├── host.py          # Notification host/server
├── scheduler.py     # Job scheduling
└── webhook.py       # Webhook integration

Burst Sequence Format

Burst sequences are special markers in text that trigger notifications:

!!BURST(dest=user123;wc=42;encrypt=yes;webhook=https://example.com/hook;audio=tx;pwd=secret)!!

Parameters:

  • dest: Destination client ID
  • wc: Water-cooler channel ID
  • encrypt: Whether to encrypt the message (yes/no)
  • webhook: URL to send a webhook notification
  • audio: Audio transmission (tx/none)
  • pwd: Optional password for encryption
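Given the `!!BURST(key=value;...)!!` format above, the parsing step can be sketched with a regular expression. This is an illustrative sketch, not the actual `burst.py` implementation; the function name `parse_bursts` is hypothetical:

```python
import re

# Matches !!BURST(key=value;key=value;...)!! anywhere in a text blob.
BURST_RE = re.compile(r"!!BURST\((?P<params>[^)]*)\)!!")

def parse_bursts(text: str) -> list[dict]:
    """Return a list of parameter dicts, one per burst sequence found."""
    bursts = []
    for match in BURST_RE.finditer(text):
        params = {}
        for pair in match.group("params").split(";"):
            if "=" in pair:
                key, _, value = pair.partition("=")
                params[key.strip()] = value.strip()
        bursts.append(params)
    return bursts

example = "Done! !!BURST(dest=user123;wc=42;encrypt=yes)!!"
print(parse_bursts(example))
# → [{'dest': 'user123', 'wc': '42', 'encrypt': 'yes'}]
```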

Environment Variables

The system supports configuration via environment variables or a .env file. Create a .env file in the project root with the following variables:

# LLM Provider settings
# Choose between "openai" or "ollama"
LLM_PROVIDER=openai

# OpenAI settings (when LLM_PROVIDER=openai)
OPENAI_API_KEY=your_api_key_here
DEFAULT_MODEL=gpt-3.5-turbo

# Ollama settings (when LLM_PROVIDER=ollama)
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama2
OLLAMA_STREAM=true

# Authentication key for chat clients
AUTH_KEY=demo-key

# Chat channel name
CHANNEL=demo-chat

# Server configuration
HOST=0.0.0.0
PORT=9000

# Enable/disable features
# Set to "true" to enable, anything else to disable
TUNNEL_ENABLED=false
RELOAD_ENABLED=false

A .env.sample file is provided as a template.
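Reading these variables reduces to `os.getenv` lookups with the defaults listed above (python-dotenv's `load_dotenv()` would populate the environment from `.env` first). A sketch, with a hypothetical `load_config` helper rather than the project's actual code:

```python
import os

# With python-dotenv installed, .env would be loaded first:
#   from dotenv import load_dotenv; load_dotenv()

def load_config() -> dict:
    """Read the variables documented above, using the same defaults."""
    return {
        "llm_provider": os.getenv("LLM_PROVIDER", "openai"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "9000")),
        "auth_key": os.getenv("AUTH_KEY", "demo-key"),
        "channel": os.getenv("CHANNEL", "demo-chat"),
        # "true" enables a feature; anything else disables it.
        "tunnel_enabled": os.getenv("TUNNEL_ENABLED", "false") == "true",
        "reload_enabled": os.getenv("RELOAD_ENABLED", "false") == "true",
    }

print(load_config()["port"])  # 9000 unless PORT is set in the environment
```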

Usage

Starting the Server

Using the provided script:

# Make the script executable
chmod +x run_server.sh

# Start the server
./run_server.sh

# Start with secure tunnel for remote connections
./run_server.sh --tunnel-on

# Start with auto-reload for development
./run_server.sh --reload

Or manually:

uvicorn host:app --host 0.0.0.0 --port 9000
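The host's pub/sub core (clients subscribing to water-cooler channels, messages fanned out to every subscriber) can be sketched with plain asyncio. The real `host.py` serves this over FastAPI WebSockets; the `PubSub` class here is a hypothetical illustration of the pattern:

```python
import asyncio
from collections import defaultdict

class PubSub:
    """Minimal in-memory pub/sub: channels map to subscriber queues."""

    def __init__(self):
        self._channels = defaultdict(set)

    def subscribe(self, channel: str) -> asyncio.Queue:
        queue = asyncio.Queue()
        self._channels[channel].add(queue)
        return queue

    def unsubscribe(self, channel: str, queue: asyncio.Queue) -> None:
        self._channels[channel].discard(queue)

    async def publish(self, channel: str, message: str) -> int:
        """Fan a message out to every subscriber; return delivery count."""
        queues = self._channels.get(channel, set())
        for queue in queues:
            await queue.put(message)
        return len(queues)

async def demo():
    bus = PubSub()
    q1, q2 = bus.subscribe("wc-42"), bus.subscribe("wc-42")
    delivered = await bus.publish("wc-42", "hello")
    print(delivered, await q1.get(), await q2.get())  # 2 hello hello

asyncio.run(demo())
```

In the actual server, each WebSocket connection would own one such queue and forward whatever arrives on it to the client.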

Running the Client

Basic usage:

python client.py --id user123

With interactive mode:

python client.py --id user123 --interactive

With password for decryption:

python client.py --id user123 --password mysecretpassword

Disable audio features:

python client.py --id user123 --no-audio

Interactive Client Commands

  • /quit - Exit the client
  • /audio <message> - Send message via audio
  • /burst dest=<id>;wc=<channel>;... - Send custom burst
  • /help - Show help
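A slash-command loop like this is commonly implemented as dictionary dispatch. A sketch covering the commands above (handler bodies and the `handle_command` name are illustrative, not the client's actual code):

```python
def handle_command(line: str) -> str:
    """Route a /command line; returns a description of the action (sketch)."""
    command, _, arg = line.strip().partition(" ")
    handlers = {
        "/quit":  lambda a: "exiting",
        "/audio": lambda a: f"transmitting via audio: {a}",
        "/burst": lambda a: f"sending burst with params: {a}",
        "/help":  lambda a: "commands: /quit /audio /burst /help",
    }
    handler = handlers.get(command)
    if handler is None:
        return f"unknown command: {command}"
    return handler(arg)

print(handle_command("/audio Hello!"))  # transmitting via audio: Hello!
```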

Testing the System

The project includes several test scripts to verify functionality:

Quick Demo

For a quick demonstration of the system, use the provided shell scripts:

Basic Demo

# Make the script executable (if not already)
chmod +x run_demo.sh

# Run the demo
./run_demo.sh

This script uses tmux to start multiple components in separate windows:

  • Notification server
  • Webhook test server
  • Receiver client (interactive mode)
  • Sender client (interactive mode)

Chat Demo

# Make the script executable (if not already)
chmod +x run_chat_demo.sh

# Run the chat demo
./run_chat_demo.sh

# Run with secure tunnel for remote connections
./run_chat_demo.sh --tunnel-on

This script starts a multi-user chat environment with:

  • Notification server
  • LLM provider client (if OpenAI API key is set)
  • Multiple chat clients
  • Help window with instructions

You can then interact with the system by sending messages between clients.

Automated Tests

Run the automated test suite to verify core functionality:

# Start the server in one terminal
uvicorn host:app --host 0.0.0.0 --port 9000

# Run the tests in another terminal
python test_sns.py

# Include audio tests (requires ggwave and sounddevice)
python test_sns.py --test-audio

# Include webhook tests (requires webhook_test_server.py running)
python test_sns.py --test-webhook

Webhook Testing

To test webhook functionality, run the webhook test server:

# Start the webhook test server
python webhook_test_server.py --port 8000

# View received webhooks
curl http://localhost:8000/webhooks
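Webhook delivery is an HTTP POST with a JSON body. The project lists aiohttp as a dependency; for a self-contained sketch, here is the equivalent request built with the standard library (the URL and payload fields are hypothetical):

```python
import json
import urllib.request

def build_webhook_request(url: str, payload: dict) -> urllib.request.Request:
    """Build (but don't send) a JSON POST for a webhook endpoint."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_webhook_request(
    "http://localhost:8000/hook",  # hypothetical endpoint
    {"dest": "user123", "message": "task complete"},
)
print(req.get_method(), req.get_header("Content-type"))
# Sending it would be: urllib.request.urlopen(req)
```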

LLM Integration Demo

Test integration with LLMs using the demo script:

# For OpenAI:
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your_api_key_here
export DEFAULT_MODEL=gpt-3.5-turbo

# OR for Ollama (local LLMs):
export LLM_PROVIDER=ollama
export OLLAMA_MODEL=llama2
export OLLAMA_URL=http://localhost:11434

# Run the LLM integration demo
python llm_integration_demo.py

For the chat demo with Ollama:

# Make sure Ollama is running
ollama serve

# Run the chat demo with Ollama
export LLM_PROVIDER=ollama
./run_chat_demo.sh

Integration with LLMs

LLMs can be instructed to include burst sequences in their output to trigger notifications:

Here's your answer: The capital of France is Paris.

!!BURST(dest=user123;wc=geography;encrypt=no)!!
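A client rendering LLM output would typically strip the burst markers after acting on them, so the user sees only the prose. A sketch (the `strip_bursts` helper is hypothetical):

```python
import re

BURST_RE = re.compile(r"\s*!!BURST\([^)]*\)!!\s*")

def strip_bursts(text: str) -> str:
    """Remove burst sequences so only the prose is displayed."""
    return BURST_RE.sub(" ", text).strip()

reply = ("Here's your answer: The capital of France is Paris.\n\n"
         "!!BURST(dest=user123;wc=geography;encrypt=no)!!")
print(strip_bursts(reply))
# → Here's your answer: The capital of France is Paris.
```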

Audio Transmission

The system can transmit data between air-gapped devices using sound:

# Send a message via audio
python client.py --id sender --interactive
> /audio Hello from an air-gapped device!

Secure Tunnel for Remote Connections

The system supports creating secure tunnels for remote connections using zrok:

  1. Install zrok:

    pip install zrok
    
  2. Configure zrok (first time only):

    zrok login
    
  3. Start the server or chat demo with tunnel enabled:

    ./run_server.sh --tunnel-on
    # or
    ./run_chat_demo.sh --tunnel-on
    
  4. The tunnel URL will be displayed and saved to tunnel_connection.txt

  5. On the remote machine, connect using the tunnel URL:

    python client.py --id remote-user --uri <TUNNEL_URL>
    # or for chat demo
    python chat_app.py --id remote-user --channel demo-chat --host <TUNNEL_URL> --auth-key demo-key
    

Security Considerations

  • All WebSocket connections should be secured with TLS in production
  • Passwords for encryption should be strong and securely managed
  • Audio transmission is susceptible to eavesdropping in shared spaces
  • When using secure tunnels, ensure you trust the tunnel provider

Example Use Cases

  1. LLM Notifications: Get notified when an LLM completes a task or needs input
  2. Air-gapped Communication: Transfer data between isolated systems
  3. Secure Messaging: Send encrypted messages to specific recipients
  4. Broadcast Alerts: Notify groups of users about important events
  5. Webhook Integration: Trigger external systems based on notifications

License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

airgap_sns-0.0.1.tar.gz (20.7 kB)

Uploaded Source

Built Distribution


airgap_sns-0.0.1-py3-none-any.whl (22.6 kB)

Uploaded Python 3

File details

Details for the file airgap_sns-0.0.1.tar.gz.

File metadata

  • Download URL: airgap_sns-0.0.1.tar.gz
  • Upload date:
  • Size: 20.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for airgap_sns-0.0.1.tar.gz:

  • SHA256: d507a9c9aec8aeb1bf51859234e8065299cdbb1fa860af28075cf981dea55dce
  • MD5: 60d72666ffc616871d55a8cf74860e3d
  • BLAKE2b-256: 58fc6053098cca0f728969f71697ddedc09259046652639a7edb326a176faffe

File details

Details for the file airgap_sns-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: airgap_sns-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 22.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for airgap_sns-0.0.1-py3-none-any.whl:

  • SHA256: 0bc1fa7313ce295aac57753bfc9d961af494ed0d7cb8f4fc13d215914dbc1b3a
  • MD5: 1867172397619b6558e3c418faabc8ca
  • BLAKE2b-256: 0b3edd718ee93e4818f7e229d140b420b6b21a64de0c11fb55409222b09b7813
