
Cybersecurity news aggregator.


🔍 Lightman AI

AI-Powered Cybersecurity News Intelligence Platform


Lightman AI is an intelligent cybersecurity news aggregation and risk assessment platform that helps organizations stay ahead of potential security threats. By leveraging advanced AI agents, it automatically monitors cybersecurity news sources, analyzes content for relevance, and integrates with service desk systems for streamlined threat intelligence workflows.

✨ Key Features

  • 🤖 AI-Powered Classification: Uses OpenAI GPT and Google Gemini models to intelligently classify cybersecurity news
  • 📰 Automated News Aggregation: Monitors multiple cybersecurity news sources (TheHackerNews for now)
  • 🎯 Risk Scoring: Configurable relevance scoring to filter noise and focus on critical threats
  • 🔗 Service Desk Integration: Automatically creates tickets for identified security risks
  • 📊 Evaluation Framework: Built-in tools to test and optimize AI agent performance
  • ⚙️ Flexible Configuration: TOML-based configuration with multiple prompt templates
  • 🚀 CLI Interface: Simple command-line interface for automation and scripting


🚀 Quick Start

pip

  1. Install Lightman AI:

    pip install lightman_ai
    
  2. Configure your AI agent (OpenAI or Gemini):

    export OPENAI_API_KEY="your-api-key"
    # or
    export GOOGLE_API_KEY="your-api-key"
    

    or store your API keys in a .env file:

    OPENAI_API_KEY="your-api-key"
    # or
    GOOGLE_API_KEY="your-api-key"
    
  3. Run the scanner:

    lightman run --agent openai --score 7
    

    or let it pick up the default values from your lightman.toml file

    lightman run
    

Docker

  1. Create configuration file:

    echo '[default]
    agent = "openai"
    score_threshold = 8
    prompt = "development"
    
    [prompts]
    development = "Analyze cybersecurity news for relevance to our organization."' > lightman.toml
    
  2. Run with Docker:

    docker run --rm \
      -v $(pwd)/lightman.toml:/app/lightman.toml \
      -e OPENAI_API_KEY="your-api-key" \
      elementsinteractive/lightman-ai:latest \
      lightman run --config-file /app/lightman.toml --score 7
    
  3. View results: Lightman will analyze cybersecurity news and output relevant articles that meet your score threshold.

📥 Installation

Docker

A Lightman AI Docker image is available on Docker Hub:

# Pull the latest image
docker pull elementsinteractive/lightman-ai:latest

# Create your configuration file

echo '[default]
agent = "openai"
score_threshold = 8
prompt = "development"

[prompts]
development = "Analyze cybersecurity news for relevance to our organization."' > lightman.toml

# Run with mounted configuration

docker run -d \
  --name lightman-ai \
  -v $(pwd)/lightman.toml:/app/lightman.toml \
  -e OPENAI_API_KEY="your-api-key" \
  elementsinteractive/lightman-ai:latest \
  lightman run --config-file /app/lightman.toml

Docker Environment Variables:

  • OPENAI_API_KEY - Your OpenAI API key
  • GOOGLE_API_KEY - Your Google Gemini API key
  • SERVICE_DESK_URL - Service desk instance URL (optional)
  • SERVICE_DESK_USER - Service desk username (optional)
  • SERVICE_DESK_TOKEN - Service desk API token (optional)
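If you prefer not to repeat -e flags, the same variables can be supplied at once with Docker's --env-file option. A sketch, assuming your keys live in a local .env file as shown in the Quick Start:

```shell
# Pass all credentials from a local .env file instead of individual -e flags
docker run --rm \
  --env-file .env \
  -v $(pwd)/lightman.toml:/app/lightman.toml \
  elementsinteractive/lightman-ai:latest \
  lightman run --config-file /app/lightman.toml
```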

Development Installation

git clone git@github.com:elementsinteractive/lightman-ai.git
cd lightman-ai
just venv  # Creates virtual environment and installs dependencies

⚙️ Configuration

Lightman AI uses TOML configuration files for flexible setup. Create a lightman.toml file:

[default]
agent = 'openai'              # AI agent to use (openai, gemini)
score_threshold = 8           # Minimum relevance score (1-10)
prompt = 'development'        # Prompt template to use

# Optional: Service desk integration
service_desk_project_key = "SEC"
service_desk_request_id_type = "incident"

[prompts]
development = """
Analyze the following cybersecurity news articles and determine their relevance to our organization.
Rate each article from 1-10 based on potential impact and urgency.
Focus on: data breaches, malware, vulnerabilities, and threat intelligence.
"""

custom_prompt = """
Your custom analysis prompt here...
"""

It also supports keeping your prompts and your configuration settings in separate files. Specify the prompt file path with --prompt-file.

lightman.toml

[default]
agent = 'openai'              # AI agent to use (openai, gemini)
score_threshold = 8           # Minimum relevance score (1-10)
prompt = 'development'        # Prompt template to use

# Optional: Service desk integration
service_desk_project_key = "SEC"
service_desk_request_id_type = "incident"

prompts.toml

[prompts]
development = """
Analyze the following cybersecurity news articles and determine their relevance to our organization.
Rate each article from 1-10 based on potential impact and urgency.
Focus on: data breaches, malware, vulnerabilities, and threat intelligence.
"""

custom_prompt = """
Your custom analysis prompt here...
"""
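With settings and prompts split as above, a run might look like the following; the flag names come from the Command Line Options table, and the paths are examples:

```shell
# Load settings and prompt templates from separate files
lightman run --config-file lightman.toml --prompt-file prompts.toml --prompt development
```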

Environment Variables

Set up your AI provider credentials:

# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"

# For Google Gemini
export GOOGLE_API_KEY="your-google-api-key"

# Optional: Service desk integration
export SERVICE_DESK_URL="https://your-company.atlassian.net"
export SERVICE_DESK_USER="your-username"
export SERVICE_DESK_TOKEN="your-api-token"

You can also specify a different path for your .env file with the --env-file option.

🔧 Usage

Basic Usage

# Run with default settings
lightman run

# Use specific AI agent and score threshold
lightman run --agent gemini --score 7

# Use custom prompt template
lightman run --prompt custom_prompt --config-file ./my-config.toml

# Use custom environment file
lightman run --env-file production.env --agent openai --score 8

# Dry run (preview results without creating service desk tickets)
lightman run --dry-run --agent openai --score 9

Command Line Options

Option         Description                             Default
--agent        AI agent to use (openai, gemini)        From config file
--score        Minimum relevance score (1-10)          From config file
--prompt       Prompt template name                    From config file
--config-file  Path to configuration file              lightman.toml
--config       Configuration section to use            default
--env-file     Path to environment variables file      .env
--dry-run      Preview results without taking action   false
--prompt-file  File containing prompt templates        lightman.toml

Example Workflows

Daily Security Monitoring:

# Local installation
lightman run --agent openai --score 8 --prompt security_critical

# With custom environment file
lightman run --env-file production.env --agent openai --score 8

# Docker 
docker run --rm \
  -v $(pwd)/lightman.toml:/app/lightman.toml \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  elementsinteractive/lightman-ai:latest \
  lightman run --config-file /app/lightman.toml --score 8
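For unattended daily monitoring, the local command above can be scheduled with cron. This crontab entry is a sketch; the install path, schedule, and log file are assumptions to adapt to your setup:

```shell
# Run the scan every day at 07:00, appending output to a log
0 7 * * * cd /opt/lightman && lightman run --agent openai --score 8 >> lightman.log 2>&1
```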

Weekly Risk Assessment:

# Local installation
lightman run --agent gemini --score 6 --prompt weekly_assessment

# With environment-specific settings
lightman run --env-file weekly.env --agent gemini --score 6

# Docker 
docker run --rm \
  -v $(pwd)/lightman.toml:/app/lightman.toml \
  -e GOOGLE_API_KEY="$GOOGLE_API_KEY" \
  elementsinteractive/lightman-ai:latest \
  lightman run --config-file /app/lightman.toml --agent gemini --score 6

Integration Testing:

# Test configuration without creating tickets
lightman run --dry-run --config testing

# Test with staging environment
lightman run --env-file staging.env --dry-run --config testing

📊 Evaluation & Testing

Lightman AI includes a comprehensive evaluation framework to test and optimize AI agent performance:

Running Evaluations

# Evaluate agent performance
just eval --agent openai --samples 3 --score 7

# Compare different agents
just eval --agent gemini --samples 5 

# Add tags to differentiate runs from one another
just eval --agent gemini --samples 5 --tag "first-run"
just eval --agent gemini --samples 5 --tag "second-run"

# Test custom prompts
just eval --prompt custom_security --samples 10

# Use custom environment file for evaluation
python -m eval.cli --env-file production.env --agent openai --samples 3

You can also provide defaults for eval in a TOML file:

[eval]
agent = 'openai'
score_threshold = 8
prompt = 'classify'
samples = 3

Evaluation Metrics

The evaluation system measures:

  • Precision: Accuracy of threat identification
  • Recall: Coverage of actual security threats
  • F1 Score: Balanced performance metric
  • Score Distribution: Analysis of relevance scoring patterns
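As a reminder of how the first three metrics relate, here is a small sketch that derives them from toy confusion counts. The numbers are made up for illustration, not output from the eval framework:

```shell
# Toy counts standing in for one eval run (assumptions):
tp=12   # relevant articles correctly flagged
fp=3    # irrelevant articles flagged anyway
fn=5    # relevant articles missed
precision=$(awk -v tp="$tp" -v fp="$fp" 'BEGIN { print tp / (tp + fp) }')
recall=$(awk -v tp="$tp" -v fn="$fn" 'BEGIN { print tp / (tp + fn) }')
f1=$(awk -v p="$precision" -v r="$recall" 'BEGIN { print 2 * p * r / (p + r) }')
echo "precision=$precision recall=$recall f1=$f1"
# → precision=0.8 recall=0.705882 f1=0.75
```

F1 is the harmonic mean of precision and recall, so it only rises when both do; a high score threshold tends to trade recall for precision.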

Evaluation Dataset

For precision evaluation, Lightman AI uses a curated set of unclassified cybersecurity articles that serve as ground truth data. These articles include:

  • Real-world news articles from various cybersecurity sources
  • Mixed relevance levels - both highly relevant and irrelevant security news
  • Diverse threat categories - malware, data breaches, vulnerabilities, policy changes
  • Pre-validated classifications by security experts for accuracy benchmarking

The evaluation framework compares the AI agent's classifications against these known classifications to measure:

  • How accurately the agent identifies truly relevant threats (precision)
  • How well it avoids false positives from irrelevant news
  • Consistency across different types of security content

This approach ensures that performance metrics reflect real-world usage scenarios where the AI must distinguish between various types of cybersecurity news content.

Make sure to fill in RELEVANT_ARTICLES with the articles you classify as relevant, so that you can compare accuracy after running the eval script.

Sentry

  • The application will automatically pick up and use environment variables if they are present in your environment or .env file.
  • To enable Sentry error monitoring, set the SENTRY_DSN environment variable. This is mandatory for Sentry to be enabled. If SENTRY_DSN is not set, Sentry will be skipped and the application will run normally.
  • If Sentry fails to initialize for any reason (e.g., network issues, invalid DSN), the application will log a warning and continue execution without error monitoring.
  • Sentry is optional: the application does not require it to function, and all features will work even if Sentry is not configured or fails to start.
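For example, enabling Sentry is just a matter of exporting the DSN before a run; the value below is a placeholder, so use the DSN from your own Sentry project settings:

```shell
# Sentry initializes only when SENTRY_DSN is set; this value is a placeholder
export SENTRY_DSN="https://examplePublicKey@o0.ingest.sentry.io/0"
# ...then run `lightman run` as usual
```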

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • TheHackerNews for providing cybersecurity news data
