
Drop an AI agent onto any resource. Pay per GB.

TernaryPhysics Ops

Drop agents. The more you drop, the smarter they get.

Each agent lives on a resource. Drop more agents, they discover each other and form a mesh. Ask one question — the investigation traces causality across your entire infrastructure automatically.

  • Local AI — Runs on your hardware. Your data never leaves.
  • Agent mesh — Agents talk to each other. Cross-resource correlation automatic.
  • Human-in-the-loop — Agents investigate autonomously. Actions require your approval.
  • Pay per GB — $0.50/GB processed. No subscription. No surprises.

Installation

pip install tp-ops

Getting Started

# 1. Install on your resource
ssh my-server
pip install tp-ops

# 2. Login with your API token (get one at https://ternaryphysics.com/dashboard)
tp-ops login --token tp_live_xxxxxxxxxxxxxxxxxxxxx

# 3. Drop an agent (auto-detects resource type)
tp-ops drop

# 4. From anywhere, talk to it
tp-ops ask my-server

What It Looks Like

# On each resource, run:
$ ssh prod-cluster 'pip install tp-ops && tp-ops drop'
$ ssh payments-db 'pip install tp-ops && tp-ops drop'
$ ssh api-server 'pip install tp-ops && tp-ops drop'

# From anywhere, ask questions:
$ tp-ops ask prod-cluster

  prod-cluster > why is the API slow?

  Investigating across mesh...

  → k8s-agent: payment-api response times 3x baseline since 02:03 UTC
  → postgres-agent: Connection pool exhausted (147/150 connections)
  → cicd-agent: Deploy at 02:00 UTC changed POOL_SIZE config

  Root cause: Deploy removed pool size config, defaulting to 150.
  App maxed connections, starving other services.

  Fix: Restore POOL_SIZE=50 in payment-api config.
  Apply? [yes/no]

  prod-cluster > yes

  ✓ Config updated. Rolling restart...
  ✓ Connections dropped to 48. Latency normal.

  Resolved in 47 seconds. 3 agents contributed.
  Processed: 0.8 GB | Cost: $0.40

How It Works

1. Drop agents onto resources

SSH to each resource, install tp-ops, run tp-ops drop. The agent auto-detects what it's running on.

# On your K8s node:
$ ssh prod-cluster
$ pip install tp-ops
$ tp-ops drop                    # Auto-detects: k8s-agent

# On your database server:
$ ssh payments-db
$ pip install tp-ops
$ tp-ops drop                    # Auto-detects: postgres-agent

# On your VM:
$ ssh api-server
$ pip install tp-ops
$ tp-ops drop                    # Auto-detects: vm-agent

# For special agents (can't auto-detect):
$ tp-ops drop --type relay-agent
$ tp-ops drop --type apigw-agent --apim my-instance

2. They form a mesh

Agents discover each other automatically via mDNS, Kubernetes DNS, or static peers. When one agent finds a problem, it reaches across to others. The investigation follows the thread through every resource.
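Whatever the discovery source, the result is one deduplicated peer list. A hedged sketch of the merge step (the `Peer` shape and the precedence rule are assumptions, not tp-ops internals):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Peer:
    name: str
    address: str  # host:port of the agent's gRPC endpoint

def merge_peers(static_peers: list[Peer], discovered: list[Peer]) -> list[Peer]:
    """Combine statically configured peers with ones found via mDNS or
    Kubernetes DNS, deduplicating by address. Static entries are applied
    last so operator config wins on conflicts."""
    by_addr: dict[str, Peer] = {}
    for peer in discovered + static_peers:  # later entries overwrite earlier
        by_addr[peer.address] = peer
    return sorted(by_addr.values(), key=lambda p: p.name)
```

Static entries override discovery, so an operator-supplied name for a resource always sticks.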

3. Ask a question — the mesh investigates

Open a conversation. Ask in plain English. The mesh determines which agents need to contribute and traces causality across resources automatically.

$ tp-ops ask payments-db

  payments-db > show me the slowest queries today

  Top 3 by total time:
  1. SELECT * FROM transactions WHERE customer_id = $1
     Avg: 230ms | 14,847 calls | Index scan
  2. SELECT count(*) FROM transactions WHERE status = 'pending'
     Avg: 890ms | 720 calls | Seq scan (missing index)
  3. UPDATE accounts SET balance = balance + $1 WHERE id = $2
     Avg: 45ms | 8,234 calls | High lock wait

  Query #2 is doing a sequential scan on 42M rows.
  Want me to suggest an index?

  payments-db > yes

  CREATE INDEX CONCURRENTLY idx_txn_status_created
  ON transactions (status, created_at)
  WHERE status = 'pending';

  Estimated improvement: 890ms → ~5ms.
  No table lock during creation. Build time: ~2 min.

  Create this index? [yes/no]

  payments-db > yes

  ✓ Index created. Query #2: 890ms → 4ms. 222x faster.

4. Human approves every action

The agent reads everything autonomously. It never writes, modifies, deletes, restarts, or scales without your explicit approval. Every time.

  prod-cluster-01 > fix the scaling issue

  I'd recommend scaling db-nodes from 3 → 4:
  
  az aks nodepool scale --name dbnodes --node-count 4

  Risk: LOW — adds capacity, no disruption.
  Approve? [yes/no]
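The gate above can be sketched as a deny-by-default check: anything not on a known read-only list is treated as a write and needs a human "yes". The prefix list is illustrative, not tp-ops's actual classifier:

```python
# Commands known to be read-only; everything else is assumed to mutate state.
READ_ONLY_PREFIXES = ("kubectl get", "kubectl describe", "SELECT", "SHOW", "EXPLAIN")

def requires_approval(command: str) -> bool:
    """Deny by default: only recognized read-only commands skip approval."""
    return not command.strip().startswith(READ_ONLY_PREFIXES)

def execute(command: str, approved: bool) -> str:
    """Run a command only if it is read-only or a human approved it."""
    if requires_approval(command) and not approved:
        return f"BLOCKED (needs approval): {command}"
    return f"RUNNING: {command}"
```

The key property is the default: an unrecognized command is blocked, never executed.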

Two-Tier AI Architecture

Each agent runs two AI models on the resource:

┌─────────────────────────────────────────────────────────┐
│  YOUR RESOURCE                                          │
│                                                         │
│  ┌───────────────────────────────────────────────────┐  │
│  │ TIER 1: Ternary Neural Network (TNN)              │  │
│  │                                                   │  │
│  │ 2,888 parameters · <1KB · <1ms inference          │  │
│  │ Weights: {-1, 0, +1} · integer math only          │  │
│  │                                                   │  │
│  │ Always on. Watches metrics, logs, connections.    │  │
│  │ Learns THIS resource's normal patterns.           │  │
│  │ Flags anomalies. Triggers alerts.                 │  │
│  │ Hot-swap weight updates. Zero downtime.           │  │
│  └───────────────────────────────────────────────────┘  │
│                                                         │
│  ┌───────────────────────────────────────────────────┐  │
│  │ TIER 2: TernaryPhysics-7B (Quantized LLM)         │  │
│  │                                                   │  │
│  │ 7 billion parameters · 4-bit quantized            │  │
│  │ ~15 tok/s on commodity CPU · no GPU required      │  │
│  │                                                   │  │
│  │ Powers conversation. Reasons about problems.      │  │
│  │ Reads logs, metrics, configs, events.             │  │
│  │ Builds root cause chains. Generates reports.      │  │
│  │ Suggests specific commands with explanations.     │  │
│  └───────────────────────────────────────────────────┘  │
│                                                         │
│  TNN detects anomaly → triggers LLM investigation       │
│  Human asks question → LLM investigates and answers     │
│  LLM recommends action → Human approves → executes      │
└─────────────────────────────────────────────────────────┘

Both models run locally on the resource. No cloud. No GPU. No internet required. Patent pending.
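Because TNN weights are restricted to {-1, 0, +1}, a layer needs no multiplications at all, only integer adds and subtracts. A minimal sketch of that property (illustrative, not the shipped kernel):

```python
def ternary_layer(x: list[int], weights: list[list[int]]) -> list[int]:
    """One dense layer with weights in {-1, 0, +1}: each output neuron is
    a sum/difference of inputs, so inference is pure integer arithmetic."""
    out = []
    for row in weights:            # one row of ternary weights per output
        acc = 0
        for xi, wi in zip(x, row):
            if wi == 1:
                acc += xi          # +1 weight: add the input
            elif wi == -1:
                acc -= xi          # -1 weight: subtract the input
            # 0 weight: pruned connection, costs nothing
        out.append(acc)
    return out
```

For example, `ternary_layer([3, 5, 2], [[1, -1, 0], [0, 1, 1]])` returns `[-2, 7]`. With no multiplies and weights under 2 bits each, the sub-1KB / sub-1ms figures above become plausible on any CPU.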


Agent Catalog

☸ Kubernetes Agent — $0.50/GB

Drop onto a cluster. Ask it about pods, nodes, deployments, services, networking. It finds zombie pods, orphaned resources, overprovisioning.

$ tp-ops ask prod-cluster-01
  prod-cluster-01 > anything unusual today?
  prod-cluster-01 > why is the api namespace using so much memory?
  prod-cluster-01 > show me pods that haven't received traffic in 7 days
  prod-cluster-01 > what changed in the last hour?

🖥 VM Agent — $0.50/GB

Drop onto a machine. Ask it about processes, disk, memory, network, services. It knows every process on that box.

$ tp-ops ask api-server-03
  api-server-03 > what's eating CPU?
  api-server-03 > is the disk going to fill up?
  api-server-03 > show me what changed since yesterday
  api-server-03 > what's listening on this box?

🐘 PostgreSQL Agent — $0.50/GB

Drop onto a database. Ask it about queries, connections, indexes, replication, locks. It knows that database's specific patterns.

$ tp-ops ask payments-db
  payments-db > show me the slowest queries
  payments-db > are there any connection leaks?
  payments-db > how's replication lag looking?
  payments-db > suggest indexes I'm missing

🌐 API Gateway Agent — $0.50/GB

Drop onto your gateway. Ask it about endpoints, latency, errors, consumers. It finds non-prod endpoints in production.

$ tp-ops ask edge-gateway
  edge-gateway > which APIs have the highest error rate?
  edge-gateway > are there any non-prod endpoints in production?
  edge-gateway > show me cost per API per team
  edge-gateway > which deprecated APIs still get traffic?

🔒 Security Agent — $0.50/GB

Drop onto any resource. Ask it about RBAC, credentials, secrets, CVEs, TLS.

$ tp-ops ask prod-cluster-01 --agent security
  prod-cluster-01 > find overprivileged service accounts
  prod-cluster-01 > any secrets exposed in environment variables?
  prod-cluster-01 > show me credentials not used in 90 days
  prod-cluster-01 > are any containers running as root?

📊 Azure Monitor Agent — $0.50/GB

Drop onto a workspace. Ask it about exceptions, anomalies, health. Pre-built KQL under the hood.

$ tp-ops ask hub-workspace
  hub-workspace > what's causing the spike in 500 errors?
  hub-workspace > show me App Insights exceptions from today
  hub-workspace > are there any managed identities we're not using?
  hub-workspace > what's our monitoring bill look like?

The Mesh

The more agents you drop, the smarter the mesh gets.

Agents   Capability
1        Useful. Expert on one resource.
5        Correlated insights across resources.
15+      Full observability mesh. Any question traces through everything.

Agents communicate inside your network via gRPC. No data leaves your environment. The mesh is entirely local.

┌────────────────────────────────────────────────────────────┐
│                    Your Infrastructure                     │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐ │
│  │   K8s    │   │ Postgres │   │  CI/CD   │   │    VM    │ │
│  │  Agent   │◄─►│  Agent   │◄─►│  Agent   │◄─►│  Agent   │ │
│  └──────────┘   └──────────┘   └──────────┘   └──────────┘ │
│       ▲              ▲              ▲              ▲       │
│       └──────────────┴──────────────┴──────────────┘       │
│                     Agent Mesh (gRPC)                      │
│                                                            │
│  ┌──────────────────────────────────────────────────────┐  │
│  │           Local AI (runs on your hardware)           │  │
│  └──────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────┘

Remove the agents and you lose that accumulated cross-resource intelligence; the mesh's value compounds the longer it runs.


Pricing

$0.50 per GB processed. First 1GB free.

Every question the agent answers processes some data — logs, metrics, configs, query stats. The agent meters exactly how much it touches. You see the running cost during your conversation.

  payments-db > how much has this session cost?

  8 questions | 1.8 GB processed | $0.90
  
  vs manual investigation: ~45 min, ~$60
  You saved: $59 and 31 minutes

No subscription. No monthly fee. No annual contract. Talk to your agent when you need it. Pay for what it processes.
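The per-GB math is simple enough to sketch. How the first-free-GB credit is applied (per account or per session) isn't specified, so it's modeled here as a remaining-credit argument:

```python
PRICE_PER_GB = 0.50  # $0.50 per GB processed

def session_cost(bytes_processed: int, free_gb_remaining: float = 0.0) -> float:
    """Metered cost for a session, assuming 1 GB = 10**9 bytes."""
    gb = bytes_processed / 1e9
    billable = max(0.0, gb - free_gb_remaining)  # burn free-tier credit first
    return round(billable * PRICE_PER_GB, 2)
```

With 1.8 GB processed and no remaining credit this returns 0.90, matching the session transcript above.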


Security

Human-in-the-loop. Reads everything autonomously. Writes nothing without your explicit "yes." Every time. No exceptions.

Runs locally. Both AI models run on your resource. Data stays on your infrastructure. Only the billing metadata leaves.

No credentials stored. Uses your existing kubeconfig, DB credentials, cloud tokens. Never copies or caches them.

Audit trail. Every conversation logged. Every action tracked. Export to your SIEM.
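An audit entry exportable to a SIEM can be as simple as one JSON line per event. Field names here are illustrative; the actual tp-ops schema isn't published:

```python
import json
import time

def audit_record(resource: str, action: str, approved: bool) -> str:
    """Serialize one audit event as a JSON line; JSONL streams cleanly
    into most SIEM ingestion pipelines. Hypothetical field names."""
    return json.dumps({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "resource": resource,
        "action": action,
        "approved": approved,
    }, sort_keys=True)
```

One record per question asked and per action approved or rejected would reconstruct any conversation end to end.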


CLI Quick Reference

# Setup & Authentication
tp-ops init                    # First-time setup wizard
tp-ops login --token <key>     # Authenticate with API token
tp-ops logout                  # Clear stored credentials
tp-ops account                 # Show account info and plan

# Drop agents (run ON the resource itself)
tp-ops drop                    # Auto-detect resource type
tp-ops drop --type relay-agent # Override detection
tp-ops drop --type apigw-agent --apim <instance>
tp-ops drop --type azure-agent --subscription <id>
tp-ops drop --name <custom>    # Override resource name

# Talk to an agent (interactive conversation)
tp-ops ask <resource-name>

# Run a one-shot investigation (non-interactive)
tp-ops run <resource-name> "<problem>"

# Proactive scan
tp-ops scan <resource-name>
tp-ops scan <resource-name> --deep

# Security audit
tp-ops audit <resource-name>
tp-ops audit <resource-name> --compliance cis

# Approve/reject actions
tp-ops approve <resource-name>:action-<id>
tp-ops reject <resource-name>:action-<id>

# Management
tp-ops list                    # List all dropped agents
tp-ops history <resource-name> # Investigation history
tp-ops usage                   # Billing and GB usage
tp-ops usage --days 30         # Usage for last 30 days
tp-ops remove <resource-name>  # Remove agent

Project Structure

ternaryphysics-ops/
├── README.md
├── WHITEPAPER.md
├── AGENT_CATALOG.md
├── CLI_REFERENCE.md
├── CLI_INTERACTIVE.md
├── LICENSE
│
├── cli/
│   ├── main.py                    # Entry point (tp-ops)
│   ├── commands/
│   │   ├── drop.py                # Drop agent onto resource
│   │   ├── ask.py                 # Interactive conversation mode
│   │   ├── run.py                 # One-shot investigation
│   │   ├── scan.py                # Proactive scan
│   │   ├── approve.py             # Action approval
│   │   ├── list.py                # List agents
│   │   ├── history.py             # History
│   │   ├── usage.py               # Billing and usage display
│   │   ├── remove.py              # Remove agent
│   │   ├── init_cmd.py            # First-time setup wizard
│   │   ├── login.py               # tp-ops login --token
│   │   ├── logout.py              # tp-ops logout
│   │   └── account.py             # tp-ops account
│   ├── auth/
│   │   └── token.py               # Secure token storage (keyring/file)
│   ├── api/
│   │   ├── client.py              # HTTP client for backend API
│   │   ├── models.py              # API response models
│   │   └── offline_queue.py       # Queue usage when offline
│   ├── conversation/
│   │   ├── session.py             # Conversation session manager
│   │   ├── parser.py              # Natural language input parsing
│   │   └── renderer.py            # Response formatting
│   └── output/
│       ├── terminal.py
│       ├── report.py
│       └── audit.py
│
├── backend/
│   ├── app/
│   │   ├── main.py                # FastAPI application
│   │   ├── config.py              # Settings from environment
│   │   ├── api/v1/
│   │   │   ├── auth.py            # POST /auth/validate
│   │   │   ├── accounts.py        # GET /accounts/me
│   │   │   ├── usage.py           # Usage reporting endpoints
│   │   │   └── billing.py         # Stripe webhooks
│   │   ├── models/
│   │   │   ├── account.py         # Account model
│   │   │   ├── api_key.py         # API key model
│   │   │   └── usage_event.py     # Usage tracking
│   │   └── services/
│   │       └── stripe_service.py  # Stripe integration
│   ├── Dockerfile
│   ├── docker-compose.yml
│   └── requirements.txt
│
├── agents/
│   ├── base/
│   │   ├── agent.py               # Base agent class
│   │   ├── conversation.py        # Conversational interface
│   │   ├── investigation.py       # Investigation engine
│   │   ├── reasoning.py           # "How to think" reasoning
│   │   ├── delta_scanner.py       # What changed? (1h vs 23h)
│   │   ├── self_check.py          # Evidence verification
│   │   ├── executor.py            # Human-approved execution
│   │   ├── meter.py               # GB processed tracking
│   │   └── cross_agent.py         # Cross-agent communication
│   │
│   ├── kubernetes/
│   │   ├── agent.py
│   │   ├── conversation.py        # K8s-specific question handling
│   │   ├── tools/
│   │   │   ├── pod_inspector.py
│   │   │   ├── node_inspector.py
│   │   │   ├── deployment_inspector.py
│   │   │   ├── service_inspector.py
│   │   │   ├── event_analyzer.py
│   │   │   └── resource_analyzer.py
│   │   ├── scanners/
│   │   │   ├── zombie_detector.py
│   │   │   ├── orphan_detector.py
│   │   │   └── overprovisioning.py
│   │   └── knowledge/
│   │       └── patterns.py
│   │
│   ├── vm/
│   │   ├── agent.py
│   │   ├── conversation.py
│   │   ├── tools/
│   │   │   ├── process_inspector.py
│   │   │   ├── disk_inspector.py
│   │   │   ├── memory_inspector.py
│   │   │   ├── network_inspector.py
│   │   │   └── service_inspector.py
│   │   ├── scanners/
│   │   │   ├── runaway_detector.py
│   │   │   └── disk_predictor.py
│   │   └── knowledge/
│   │       └── patterns.py
│   │
│   ├── postgres/
│   │   ├── agent.py
│   │   ├── conversation.py
│   │   ├── tools/
│   │   │   ├── query_analyzer.py
│   │   │   ├── connection_monitor.py
│   │   │   ├── replication_monitor.py
│   │   │   ├── lock_inspector.py
│   │   │   ├── vacuum_analyzer.py
│   │   │   └── index_advisor.py
│   │   ├── scanners/
│   │   │   ├── bloat_detector.py
│   │   │   └── leak_detector.py
│   │   └── knowledge/
│   │       └── patterns.py
│   │
│   ├── api_gateway/
│   │   ├── agent.py
│   │   ├── conversation.py
│   │   ├── tools/
│   │   │   ├── endpoint_inspector.py
│   │   │   ├── traffic_analyzer.py
│   │   │   ├── cost_attributor.py
│   │   │   └── deprecation_tracker.py
│   │   ├── scanners/
│   │   │   ├── nonprod_detector.py
│   │   │   └── slo_monitor.py
│   │   └── knowledge/
│   │       └── patterns.py
│   │
│   ├── security/
│   │   ├── agent.py
│   │   ├── conversation.py
│   │   ├── tools/
│   │   │   ├── rbac_auditor.py
│   │   │   ├── identity_auditor.py
│   │   │   ├── secret_scanner.py
│   │   │   ├── cve_scanner.py
│   │   │   └── tls_auditor.py
│   │   └── knowledge/
│   │       └── patterns.py
│   │
│   └── azure_monitor/
│       ├── agent.py
│       ├── conversation.py
│       ├── tools/
│       │   ├── kql_executor.py
│       │   ├── app_insights_analyzer.py
│       │   └── log_analytics_analyzer.py
│       ├── scanners/
│       │   ├── cost_optimizer.py
│       │   └── alert_auditor.py
│       └── knowledge/
│           └── kql_library.py
│
├── engine/
│   ├── reasoning/
│   │   ├── principles.py          # Investigation reasoning
│   │   ├── delta_scan.py          # "What changed?" engine
│   │   ├── causal_chain.py        # Root cause builder
│   │   └── evidence_gate.py       # Self-check evidence gate
│   ├── execution/
│   │   ├── sandbox.py             # Command sandboxing
│   │   ├── approval_flow.py       # Human approval workflow
│   │   └── timeout.py             # Execution timeouts
│   ├── inference/
│   │   ├── tnn_detector.py        # Tier 1: TNN anomaly detection
│   │   ├── inference.py           # Tier 2: LLM inference (llama.cpp)
│   │   ├── models.py              # Model download and management
│   │   └── hot_swap.py            # Atomic model weight update
│   └── metering/
│       └── gb_tracker.py          # Per-GB billing
│
├── models/
│   ├── tnn/                       # Ternary Neural Network weights
│   │   ├── k8s_anomaly.bin
│   │   ├── vm_anomaly.bin
│   │   ├── postgres_anomaly.bin
│   │   └── network_anomaly.bin
│   └── llm/                       # LLM weights (auto-downloaded)
│       └── TernaryPhysics-7B-Q4_K_M.gguf
│
├── website/
│   └── index.html
│
└── tests/
    ├── test_conversation.py
    ├── test_k8s_agent.py
    ├── test_vm_agent.py
    ├── test_postgres_agent.py
    ├── test_reasoning.py
    ├── test_cross_agent.py
    └── fixtures/

About

Built by TernaryPhysics LLC, Mount Pleasant, SC.

Created by an SRE who was tired of investigating the same problems at 3am and wanted to talk to his infrastructure instead of staring at dashboards.

Patent Pending — USPTO provisional filed March 2, 2026. Covers: kernel-space ternary neural network inference with a continuous learning feedback loop.

"A brain you drop onto your infrastructure. You talk to it. It talks back."
