
aidockermon

Monitor system load of the server running the nvidia/cuda docker containers.

Features

  • sysinfo: static system info
  • sysload: system CPU/memory load
  • gpu: NVIDIA GPU load
  • disk: disk load
  • containers: load of containers based on the nvidia/cuda image

Prerequisite

Python >= 3

Installation

pip install aidockermon

Or use setuptools

python setup.py install

Usage

$ aidockermon -h
usage: aidockermon [-h] [-v] {query,create-esindex,delete-esindex} ...

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit

command:
  {query,create-esindex,delete-esindex}
    query               Query system info, log them via syslog protocol
    create-esindex      Create elasticsearch index
    delete-esindex      Delete elasticsearch index
$ aidockermon query -h
usage: aidockermon query [-h] [-l] [-r REPEAT] [-f FILTERS [FILTERS ...]] type

positional arguments:
  type                  info type: sysinfo, sysload, gpu, disk, containers

optional arguments:
  -h, --help            show this help message and exit
  -l, --stdout          Print pretty json to console instead of send a log
  -r REPEAT, --repeat REPEAT
                        n/i repeat n times every i seconds
  -f FILTERS [FILTERS ...], --filters FILTERS [FILTERS ...]
                        Filter the disk paths for disk type; filter the
                        container names for containers type
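The n/i format accepted by -r/--repeat can be illustrated with a small parser. parse_repeat is a hypothetical helper, not part of aidockermon's API; it only demonstrates the spec format:

```python
def parse_repeat(spec):
    """Parse an 'n/i' repeat spec into (times, interval_seconds).

    'n/i' means: repeat the query n times, once every i seconds,
    e.g. '10/30' queries 10 times at 30-second intervals.
    """
    n, _, i = spec.partition("/")
    return int(n), int(i)
```

For example, `aidockermon query -r 10/30 sysload` would log the system load ten times, thirty seconds apart.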

For example:

Show sysinfo

$ aidockermon query -l sysinfo
{
    "gpu": {
        "gpu_num": 2,
        "driver_version": "410.104",
        "cuda_version": "10.0"
    },
    "mem_tot": 67405533184,
    "kernel": "4.4.0-142-generic",
    "cpu_num": 12,
    "docker": {
        "version": "18.09.3"
    },
    "system": "Linux"
}
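Most of these static fields can be gathered with the standard library alone, as the sketch below shows (a POSIX-only illustration, not aidockermon's actual implementation; the GPU and docker version fields come from the NVIDIA driver and the docker API and are omitted here):

```python
import os
import platform

def collect_sysinfo():
    """Gather the static host fields shown above using only the stdlib."""
    return {
        "system": platform.system(),    # e.g. "Linux"
        "kernel": platform.release(),   # e.g. "4.4.0-142-generic"
        "cpu_num": os.cpu_count(),
        # total physical memory in bytes (POSIX-specific sysconf names)
        "mem_tot": os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES"),
    }
```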

Show system load

$ aidockermon query -l sysload
{
    "mem_free": 11866185728,
    "mem_used": 8023793664,
    "cpu_perc": 57.1,
    "mem_perc": 12.8,
    "mem_avail": 58803163136,
    "mem_tot": 67405533184
}
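On Linux these memory fields can be derived from /proc/meminfo. The sketch below is an illustration only; in particular, the definition of mem_used as total minus free, buffers, and cached is an assumption about how the tool computes it:

```python
def parse_meminfo(text):
    """Derive the memory fields above from /proc/meminfo contents (Linux).

    /proc/meminfo reports kB; the tool reports bytes.
    """
    kb = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        kb[key] = int(rest.split()[0])
    tot = kb["MemTotal"] * 1024
    free = kb["MemFree"] * 1024
    avail = kb["MemAvailable"] * 1024
    # assumed definition: used excludes buffers and page cache
    used = tot - free - kb.get("Buffers", 0) * 1024 - kb.get("Cached", 0) * 1024
    return {
        "mem_tot": tot,
        "mem_free": free,
        "mem_avail": avail,
        "mem_used": used,
        "mem_perc": round(used / tot * 100, 1),
    }
```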

Show gpu load

$ aidockermon query -l gpu
{
    "mem_tot": 11177,
    "gpu_temperature": 76.0,
    "mem_free": 1047,
    "mem_used": 10130,
    "gpu_perc": 98.0,
    "gpu_id": 0,
    "mem_perc": 46.0
}
{
    "mem_tot": 11178,
    "gpu_temperature": 66.0,
    "mem_free": 3737,
    "mem_used": 7441,
    "gpu_perc": 95.0,
    "gpu_id": 1,
    "mem_perc": 44.0
}
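These per-GPU records can be reproduced by querying the driver; whether aidockermon shells out to nvidia-smi or uses NVML bindings is not shown here. The sketch below parses nvidia-smi CSV output into the record shape above (query_gpus and parse_gpu_line are illustrative names). Note that in the sample output mem_perc appears to track memory-controller utilization (utilization.memory), not used/total, which is why it can be far below the used-memory ratio:

```python
import subprocess

FIELDS = ("index,temperature.gpu,utilization.gpu,utilization.memory,"
          "memory.total,memory.used,memory.free")

def parse_gpu_line(line):
    """Turn one CSV row from nvidia-smi into the record shape shown above."""
    idx, temp, util, mem_util, tot, used, free = (f.strip() for f in line.split(","))
    return {
        "gpu_id": int(idx),
        "gpu_temperature": float(temp),
        "gpu_perc": float(util),
        "mem_perc": float(mem_util),  # memory-controller utilization, not used/total
        "mem_tot": int(tot),          # MiB
        "mem_used": int(used),
        "mem_free": int(free),
    }

def query_gpus():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader,nounits"],
        text=True,
    )
    return [parse_gpu_line(l) for l in out.splitlines() if l.strip()]
```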

Show disk usage

$ aidockermon query disk -l -f /
{
    "path": "/",
    "device": "/dev/nvme0n1p3",
    "total": 250702176256,
    "used": 21078355968,
    "free": 216865271808,
    "percent": 8.9
}

$ aidockermon query disk -l -f / /disk
{
    "path": "/",
    "device": "/dev/nvme0n1p3",
    "total": 250702176256,
    "used": 21078355968,
    "free": 216865271808,
    "percent": 8.9
}
{
    "path": "/disk",
    "device": "/dev/sda1",
    "total": 1968874311680,
    "used": 1551374692352,
    "free": 317462949888,
    "percent": 83.0
}
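The arithmetic behind these records is worth noting: in both samples, percent equals used / (used + free) rather than used / total (21078355968 / (21078355968 + 216865271808) ≈ 8.9%), which accounts for space reserved for root. A stdlib-only sketch (the device field would come from the mount table, e.g. /proc/mounts, and is omitted here):

```python
import shutil

def disk_load(path):
    """Report the disk fields shown above for one mount path."""
    u = shutil.disk_usage(path)
    return {
        "path": path,
        "total": u.total,
        "used": u.used,
        "free": u.free,
        # used/(used+free), matching the sample output; reserved blocks
        # make this differ from used/total
        "percent": round(u.used / (u.used + u.free) * 100, 1),
    }
```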

Show containers' load

$ aidockermon query containers -l -f DianAI
{
    "proc_name": "python3 test_run.py",
    "container": "DianAI",
    "started_time": 1554698236.51,
    "pid": 29794,
    "running_time": "0 6:58:58",
    "mem_used": 8623
}
{
    "proc_name": "python train.py",
    "container": "DianAI",
    "started_time": 1554707562.59,
    "pid": 15721,
    "running_time": "0 4:23:32",
    "mem_used": 1497
}
{
    "proc_name": "python train_end2end.py",
    "container": "UniqueAI",
    "started_time": 1554712634.14,
    "pid": 11796,
    "running_time": "0 2:59:0",
    "mem_used": 6999
}
{
    "mem_limit": 67481047040,
    "net_output": 47863240948,
    "block_read": 1327175626752,
    "net_input": 18802869033,
    "mem_perc": 14.637655604461704,
    "block_write": 132278439936,
    "name": "DianAI",
    "cpu_perc": 0.0,
    "mem_used": 9877643264
}
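In the aggregate per-container record, mem_perc is consistent with memory used over the memory limit, as in docker stats (9877643264 / 67481047040 × 100 ≈ 14.64). A one-line sketch of that relationship:

```python
def container_mem_perc(mem_used, mem_limit):
    """Memory percentage as used bytes over the container's limit."""
    return mem_used / mem_limit * 100
```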

Config

logging

debug: false
log:
  version: 1

  # This is the default level, which could be ignored.
  # CRITICAL = 50
  # FATAL = CRITICAL
  # ERROR = 40
  # WARNING = 30
  # WARN = WARNING
  # INFO = 20
  # DEBUG = 10
  # NOTSET = 0
  #level: 20
  disable_existing_loggers: false
  formatters:
    simple:
      format: '%(levelname)s %(message)s'
    monitor:
      format: '%(message)s'
  filters:
    require_debug_true:
      (): 'aidockermon.handlers.RequireDebugTrue'
  handlers:
    console:
      level: DEBUG
      class: logging.StreamHandler
      formatter: simple
      filters: [require_debug_true]
    monitor:
      level: INFO
      class: rfc5424logging.handler.Rfc5424SysLogHandler
      address: [127.0.0.1, 1514]
      enterprise_id: 1
  loggers:
    runtime:
      handlers: [console]
      level: DEBUG
      propagate: false
    monitor:
      handlers: [monitor, console]
      level: INFO
      propagate: false

This is the default config, which should be located at /etc/aidockermon/config.yml.

You can modify the address value to specify the logging target.

  • address: [127.0.0.1, 1514]: UDP to 127.0.0.1:1514
  • address: /var/log/aidockermon: Unix domain datagram socket

By adding a socktype argument, you can specify whether to use TCP or UDP as the transport protocol.

  • socktype: 1: TCP
  • socktype: 2: UDP
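These numeric values line up with Python's socket module constants, which is presumably what the syslog handler receives (an assumption worth checking against the rfc5424logging docs):

```python
import socket

# socktype: 1 selects TCP (stream), socktype: 2 selects UDP (datagram)
assert socket.SOCK_STREAM == 1
assert socket.SOCK_DGRAM == 2
```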

Enable TLS/SSL:

tls_enable: true
tls_verify: true
tls_ca_bundle: /path/to/ca-bundle.pem

Set debug to true to see message output in the console.
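The require_debug_true filter wired into the console handler plausibly works like the sketch below: it gates records on the debug flag, so console output only appears when debug is true. This is an assumption about what aidockermon.handlers.RequireDebugTrue does, not its actual source:

```python
import logging

DEBUG = False  # mirrors the top-level `debug` flag in config.yml

class RequireDebugTrue(logging.Filter):
    """Pass records through only when the debug flag is set."""
    def filter(self, record):
        return DEBUG

# attach it to a console handler, as the config's `filters:` key does
console = logging.StreamHandler()
console.addFilter(RequireDebugTrue())
```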

Cronjob

sudo cp etc/cron.d/aidockermon /etc/cron.d
sudo systemctl restart cron

syslog-ng

Use syslog-ng to collect the logs and forward them to Elasticsearch for later use, such as visualization with Kibana.

cp etc/syslog-ng/syslog-ng.conf /etc/syslog-ng/
sudo systemctl restart syslog-ng

Sample config:

@version: 3.20

destination d_elastic {
	elasticsearch2(
		index("syslog-ng")
		type("${.SDATA.meta.type}")
		flush-limit("0")
		cluster("es-syslog-ng")
		cluster-url("http://localhost:9200")
		client-mode("http")
		client-lib-dir(/usr/share/elasticsearch/lib)
		template("${MESSAGE}\n")
	);
};

source s_python {
	#unix-dgram("/var/log/aidockermon");
	syslog(ip(127.0.0.1) port(1514) transport("udp") flags(no-parse));
};

log {
	source(s_python);
	parser { syslog-parser(flags(syslog-protocol)); };
	destination(d_elastic);
};

Modify it to point at your Elasticsearch server and to match the log source's port and transport protocol.

Contribute

Use the following command to generate requirements.txt; otherwise it may contain the line pkg-resources==0.0.0, which causes dependency installation to fail.

pip freeze | grep -v "pkg-resources" > requirements.txt



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aidockermon-0.8.0.tar.gz (14.2 kB view details)


Built Distribution

aidockermon-0.8.0-py2.py3-none-any.whl (27.0 kB view details)


File details

Details for the file aidockermon-0.8.0.tar.gz.

File metadata

  • Download URL: aidockermon-0.8.0.tar.gz
  • Upload date:
  • Size: 14.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.4.2 requests/2.18.4 setuptools/40.4.3 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.5

File hashes

Hashes for aidockermon-0.8.0.tar.gz
  • SHA256: 8218f6c16ecce189a630207920eb21c5fa1bcacd135d578b8c398006c26c690d
  • MD5: 4c0f97d42e0087d00314616e63a7102c
  • BLAKE2b-256: bbeb9d2f4c123b70a8d9bc368e85019485166794316530397a9a156d5f859c4f


File details

Details for the file aidockermon-0.8.0-py2.py3-none-any.whl.

File metadata

  • Download URL: aidockermon-0.8.0-py2.py3-none-any.whl
  • Upload date:
  • Size: 27.0 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.4.2 requests/2.18.4 setuptools/40.4.3 requests-toolbelt/0.8.0 tqdm/4.26.0 CPython/3.6.5

File hashes

Hashes for aidockermon-0.8.0-py2.py3-none-any.whl
  • SHA256: c1c4d32dbde5325b149534f0005971fde4674a16625d8037fbe5ff2530f4dc39
  • MD5: 68fa013193d81a399eba176aa1d55460
  • BLAKE2b-256: 9bce6024bb05daa8677cc4ddc592423fa725421c5ac719214231f2bdc11727a5
