
metricsGPT

Talk to your metrics: a tool that generates PromQL queries from natural-language questions about your metrics.

Demo

[!NOTE]

This is a work in progress with no API guarantees. The current implementation still needs work on scalability: right now it can place significant load on your Prometheus API, and queries may take a while to complete.

Installation

Ensure you have Python 3.12+ and Node v20+ installed locally.

By default, this tool uses the llama3 LLM and the nomic-embed-text embedding model via Ollama:

ollama pull llama3
ollama pull nomic-embed-text

Have a Prometheus instance up and running. You can use make run-prom to start one in Docker that scrapes itself.
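For reference, make run-prom presumably launches Prometheus with a self-scrape configuration along these lines (an illustrative sketch, not necessarily the repo's actual file):

```yaml
# prometheus.yml — minimal config where Prometheus scrapes its own /metrics
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
```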

You can install the CLI from PyPI (https://pypi.org/project/metricsgpt/):

pip3 install metricsgpt
metricsGPT --server --config=config.yaml

If building locally, you can use Poetry:

poetry install
poetry run metricsGPT --server --config=config.yaml

then visit http://localhost:8081!

Configuration

Edit config.yaml to suit your own models/Prometheus/Thanos setups.

# Prometheus Configuration
prometheus_url: "http://localhost:9090"
# prometheus_auth:
#   # Basic authentication
#   basic_auth:
#     username: "your_username"
#     password: "your_password"
  
#   # Or Bearer token
#   bearer_token: "your_token"
  
#   # Or custom headers
#   custom_headers:
#     Authorization: "Custom your_auth_header"
#     X-Custom-Header: "custom_value"
  
#   # TLS/SSL configuration
#   tls:
#     cert_file: "/path/to/cert.pem"
#     key_file: "/path/to/key.pem"
#     skip_verify: false  # Set to true to skip TLS certificate verification

prom_external_url: null  # Optional external URL for links in the UI
query_lookback_hours: 1.0

# Storage Configuration
vectordb_path: "./data.db"
series_cache_file: "./series_cache.json"

# Server Configuration
refresh_interval: 900  # VectorDB Refresh interval in seconds 
server_host: "0.0.0.0"
server_port: 8081

# LLM Configuration
llm:
  provider: "ollama"
  model: "llama3.1"

embedding:
  provider: "ollama"  # or "openai"
  model: "nomic-embed-text"
  dimension: 768 # optional, defaults to this dimension

# For Azure OpenAI embeddings:
#embedding:
#  provider: "azure"
#  model: "text-embedding-ada-002"
#  deployment_name: "your-embedding-deployment"
#  api_key: "your-api-key"
#  endpoint: "your-azure-endpoint"
#  api_version: "2023-05-15"  
#  dimension: 1536  # text-embedding-ada-002 produces 1536-dimensional embeddings

# For Watson embeddings:
#embedding:
#  provider: "watsonx"
#  api_key: "your-api-key"
#  project_id: "your-project-id"
#  model_id: "google/flan-ul2"  # optional, defaults to this model
#  dimension: "dimensions of model"

# For OpenAI embeddings:
#embedding:
#  provider: "openai"
#  model: "text-embedding-ada-002"
#  api_key: "your-api-key"
#  dimension: 1536  # text-embedding-ada-002 produces 1536-dimensional embeddings

# Example configurations for different providers:

# For OpenAI:
#llm:
#  provider: "openai"
#  model: "gpt-4"
#  api_key: "your-api-key"

# For Ollama:
#llm:
#  provider: "ollama"
#  model: "metricsGPT"
#  timeout: 120.0

# For Azure:
#llm:
#  provider: "azure"
#  model: "gpt-4"
#  deployment_name: "your-deployment"
#  api_key: "your-api-key"
#  endpoint: "your-azure-endpoint"

# For Gemini:
#llm:
#  provider: "gemini"
#  model: "gemini-pro"
#  api_key: "your-api-key"

# For WatsonX:
#llm:
#  provider: "watsonx"
#  api_key: "your-api-key"
#  project_id: "your-project-id"
#  model_id: "your-model-id"

TODOs:

  • Much more efficient vectorDB ops
  • Use other Prom HTTP APIs for more context
  • Range queries
  • Visualize
  • Embed query results for better analysis
  • Process alerts



Download files

Download the file for your platform.

Source Distribution

metricsgpt-0.3.0.tar.gz (1.4 MB)

Uploaded Source

Built Distribution


metricsgpt-0.3.0-py3-none-any.whl (1.4 MB)

Uploaded Python 3

File details

Details for the file metricsgpt-0.3.0.tar.gz.

File metadata

  • Download URL: metricsgpt-0.3.0.tar.gz
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.0 CPython/3.13.1 Darwin/23.6.0

File hashes

Hashes for metricsgpt-0.3.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8a57ec3067911d9c45b58fd9d4a97c1f15f4d383b78c50d38df924e719e4202a |
| MD5 | b5f7d5e1a3ef62b7d5456cae42bf90ea |
| BLAKE2b-256 | a5a701ed99325d5f00250c21dff00d8330e09a1351552933709a07b6aaa7d77c |

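To check a downloaded artifact against a published SHA256 digest, a small Python snippet suffices (the expected value below is the tar.gz digest from the table above):

```python
import hashlib


def sha256sum(path: str) -> str:
    """Stream the file in chunks and return its hex SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Published digest for metricsgpt-0.3.0.tar.gz:
EXPECTED = "8a57ec3067911d9c45b58fd9d4a97c1f15f4d383b78c50d38df924e719e4202a"

if __name__ == "__main__":
    print(sha256sum("metricsgpt-0.3.0.tar.gz") == EXPECTED)
```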

File details

Details for the file metricsgpt-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: metricsgpt-0.3.0-py3-none-any.whl
  • Size: 1.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.0.0 CPython/3.13.1 Darwin/23.6.0

File hashes

Hashes for metricsgpt-0.3.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 28e2f48839d99a429ae8c1ef5b87c29dabc7ebef917c5a11ade724edb3666bf6 |
| MD5 | ce1f9eae22bd792a2df2fb2eaace078f |
| BLAKE2b-256 | f7a28addb49f365ea4e1188ecbae142701e039a890502abcb7468621111bcfcf |

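pip can also enforce these digests at install time via a hash-pinned requirements file; a sketch using the wheel digest above:

```
# requirements.txt — pip refuses to install if the digest does not match
metricsgpt==0.3.0 \
    --hash=sha256:28e2f48839d99a429ae8c1ef5b87c29dabc7ebef917c5a11ade724edb3666bf6
```

Install with pip install --require-hashes -r requirements.txt. Note that pip's hash-checking mode requires every requirement in the file, including transitive dependencies, to be pinned and hashed.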
