
Utilities to profile Redis RAM usage



Redis Memory Analyzer

RMA is a console tool that scans a Redis key space in real time and aggregates memory usage statistics by key patterns. You can use this tool on production servers without a maintenance window. You can scan all Redis types or only selected ones ("string", "hash", "list", "set", "zset"), and use any matching pattern you like. RMA tries to discern key names by patterns: for example, if you have keys like 'user:100' and 'user:101', the application will pick out the common pattern 'user:*' in the output, so you can analyze the most memory-hungry data in your instance.
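The pattern discernment described above can be sketched as follows (a simplified illustration only, not RMA's actual algorithm; it just collapses numeric key segments into '*'):

```python
def discern_pattern(key: str, separator: str = ":") -> str:
    """Replace numeric segments of a key with '*' to form a pattern."""
    return separator.join(
        "*" if part.isdigit() else part for part in key.split(separator)
    )

keys = ["user:100", "user:101", "job:42"]
print({discern_pattern(k) for k in keys})  # {'user:*', 'job:*'} (set order varies)
```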

Installing rma

Prerequisites:

  1. python >= 3.4 and pip.

  2. redis-py.

To install from PyPI (recommended):

pip install rma

To install from source:

pip install git+https://github.com/gamenet/redis-memory-analyzer@v0.2.1

Running

After installation, run it from the console:

>rma --help
usage: rma [-h] [-s HOST] [-p PORT] [-a PASSWORD] [-d DB] [-m MATCH] [-l LIMIT]
           [-b BEHAVIOUR] [-t TYPES]

RMA is used to scan a Redis key space and aggregate memory usage statistics by
key patterns.

optional arguments:
  -h, --help                 show this help message and exit
  -s, --server HOST          Redis Server hostname. Defaults to 127.0.0.1
  -p, --port PORT            Redis Server port. Defaults to 6379
  -a, --password PASSWORD    Password to use when connecting to the server
  -d, --db DB                Database number, defaults to 0
  -m, --match MATCH          Keys pattern to match
  -l, --limit LIMIT          Get max key matched by pattern
  -b, --behaviour BEHAVIOUR  Specify application working mode. Allowed values
                             are all, scanner, ram, global
  -t, --type TYPES           Data types to include. Possible values are string,
                             hash, list, set. Multiple types can be provided. If
                             not specified, all data types will be returned.
                             Allowed values are string, hash, list, set, zset
  -f, --format TYPE          Output type format: json or text (by default)
  -x, --separator SEPARATOR  Specify namespace separator. Default is ':'

If you have a large database, try running first with the --limit option to scan only a limited number of keys, and with --type to restrict the scan to specified Redis types. Note that the 'ram' mode has a performance issue: it retrieves the encoding for individual keys one by one instead of batching the queries with Lua (as the scanner does), so these options can be very useful. You can choose what kind of data is aggregated from the Redis node with the -b (--behaviour) console argument. Supported behaviours are 'global', 'scanner', 'ram' and 'all'.
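For example (hypothetical host, limits and patterns; adjust them to your own instance):

```shell
# Scan at most 10000 matched keys, aggregating only RAM statistics for hashes
rma -s 127.0.0.1 -p 6379 --limit 10000 --behaviour ram --type hash

# Restrict the scan to one namespace and emit JSON instead of text
rma --match "user:*" --format json
```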

Internals

RMA shows statistics separated by type. The application works in a few steps:

  1. Load the type and encoding for each key matched by the given pattern, using Lua scripting in batch mode. SCAN is used to iterate over keys in the Redis key db.

  2. Separate keys by type and match them against patterns.

  3. Run behaviours and rules on the given data set.

  4. Output the result with the given reporter (currently only a text reporter is implemented).
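The steps above can be sketched in pure Python over a hypothetical in-memory keyspace (the real tool talks to Redis via SCAN and Lua scripting; all names and data below are illustrative):

```python
from collections import Counter, defaultdict

# Step 1 stand-in: a hypothetical keyspace of key -> Redis type
keyspace = {
    "user:100": "hash", "user:101": "hash",
    "group:7": "set", "job:1": "hash",
}

def to_pattern(key: str) -> str:
    """Collapse numeric key segments into '*' (simplified pattern matching)."""
    return ":".join("*" if p.isdigit() else p for p in key.split(":"))

# Step 2: separate keys by type and aggregate counts by matched pattern
by_type = defaultdict(Counter)
for key, rtype in keyspace.items():
    by_type[rtype][to_pattern(key)] += 1

# Steps 3-4: run over the data set and report as text
total = len(keyspace)
for rtype in sorted(by_type):
    for pattern, count in by_type[rtype].most_common():
        print(f"{pattern:<10} {count:>3} {rtype:<5} {count / total:.2%}")
```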

Global output (‘global’ behaviour)

The global data is a set of Redis server statistics that help you interpret the other data from this tool:

| Stat                             | Value          |
|:---------------------------------|:---------------|
| Total keys in db                 | 28979          |
| RedisDB key space overhead       | 790528         |
| Used `set-max-intset-entries`    | 512            |
| ....                             | ...            |
| Info `total_system_memory`       | 3190095872     |
| ....                             | ...            |

One of the interesting things here is "RedisDB key space overhead": the amount of memory Redis uses to store the key space data itself. If you have lots of keys in your Redis instance, this shows the overhead they incur. Keep in mind that some figures, such as total keys in db or key space overhead, refer only to the selected db, while statistics starting with the Info or Config keywords are server-wide.
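Using the sample figures from the table above, the per-key overhead is easy to estimate (rough arithmetic only; the actual overhead depends on key lengths and Redis version):

```python
total_keys = 28979           # "Total keys in db" from the sample table
keyspace_overhead = 790528   # "RedisDB key space overhead", in bytes

per_key = keyspace_overhead / total_keys
print(f"~{per_key:.1f} bytes of key space overhead per key")  # ~27.3 bytes
```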

Key types (‘scanner’ behaviour)

This table helps when you do not know what kinds of keys are actually stored in your Redis database, for example when a DevOps engineer or system administrator wants to understand which data structures are used most in the system. It also helps if you are new to a big project: it is a kind of SHOW ALL TABLES request :)

| Match                 |   Count | Type   | %      |
|:----------------------|--------:|:-------|:-------|
| job:*                 |    5254 | hash   | 18.13% |
| game:privacy:*        |    2675 | hash   | 9.23%  |
| user:*                |    1890 | hash   | 6.52%  |
| group:*               |    1885 | set    | 6.50%  |

Why doesn’t reported memory match actual memory used?

The memory reported by this tool is approximate. In general, the reported memory should be within 10% of what is reported by the INFO command.

Also note that the tool does not (and cannot) account for the following:

  • Memory used by allocator metadata (not obtainable without C-level access)
  • Memory used for pub/sub (Redis exposes no commands for that)
  • Redis process internals (such as shared objects)

Known issues

  1. Skiplist (i.e. zset) encoding is not implemented yet.

  2. Quicklist is currently calculated as a ziplist.

  3. SDS strings from Redis 3.2 (optimized headers) are not implemented; a fixed 9-byte header is assumed.
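Known issue 3 means string footprints are estimated with a fixed header, roughly as in the sketch below (a simplification of the assumption only; Redis 3.2+ actually uses several smaller, size-dependent SDS headers):

```python
SDS_HEADER_BYTES = 9  # the fixed header size the tool currently assumes

def estimated_string_bytes(value: bytes) -> int:
    """Rough per-string footprint under the fixed-header assumption."""
    return SDS_HEADER_BYTES + len(value)

print(estimated_string_bytes(b"hello"))  # 14
```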

What's next?

We currently use this tool as a very handy helper; the data structures we use most in our Redis instances are hash and list. After upgrading our servers to Redis 3.2.x, we plan to fix the known issues. We would be glad to hear what you think about this tool. Ideally, it would work as a redis-lint tool that can tell you: "Hey, change this from X to Y and save 30% of RAM", or "Hey, you are using the PHP serializer for strings, switch to msgpack and save 15% of RAM", and so on.

License

This application was developed for use in the GameNet project as part of Redis memory optimization and analysis. RMA is licensed under the MIT License. See LICENSE.

ChangeLog for RMA

  • 0.2.1
    • Fixed #50 Use time.perf_counter instead of deprecated time.clock for Python 3.8 compatibility.

  • 0.2.0
    • Calculate TTL feature for all key types and display min/max/mean TTL in Key summary tables.

    • Replace invalid UTF-8 characters in redis key names when scanning keyspace.

  • 0.1.16

    • Added possibility to report output in JSON format. Issue #28

  • 0.1.15

    • Fix #29: fails with ValueError if a key containing a ValueString was removed.

  • 0.1.14

    • Fix failure with ValueError if a key containing a hash was removed. Closes issue #23.

  • 0.1.13

    • Fix failure with TypeError if a key containing an integer was removed. Closes issue #22.

  • 0.1.12

    • Fix failure with ResponseError if a key containing an integer was removed. Closes issue #22.

    • Add more info to global rule description.

  • 0.1.11

    • Fix unknown command ‘DEBUG’ issue with AWS’s ElastiCache. Closes issue #21.

  • 0.1.10

    • Each rule reports its progress with tqdm. Closes issue #5.

    • Fix CROSSSLOT error in Scanner Lua script by switching to pipelined mode to retrieve type and encoding data from Redis cluster server. Part of fixing issue #17.

  • 0.1.9

    • Fix issue with types and behavior filters. Closes issue #14.

    • Add columns min and max to the list statistic. Closes issue #15.

    • ValueError: min() arg is an empty sequence. Closes issue #13.

    • Make setup.py use requires from requirements.txt and info from readme.rst. Closes issue #8.

  • 0.1.8

    • One more attempt to fix the dependencies in setup.py. Closes issue #16.

  • 0.1.7

    • One more attempt to fix the dependencies in setup.py. Closes issue #16.

  • 0.1.6

    • Fix display of key percentages. Closes issue #13.

    • Fix invalid syntax on python 3.4 in setup.py. Closes issue #16.

  • 0.1.5

    • Fix logging issue in ValueString

  • 0.1.4

    • Fix pip deps

  • 0.1.3

    • Move pattern aggregation to separate pass

    • Retrieve key encoding in the Scanner Lua script, boosting performance ~1.75x. Closes issue #4.

    • Fix many pylint warnings and code style issues reported by Landscape.

  • 0.1.2

    • Fix 'no such key' ResponseError during debug sdslen in ValueString. Closes issue #1.

    • Fix wrong behaviour when using a non-zero db. Closes issue #2.

    • Fix crash with ElastiCache because the CONFIG command is disallowed. Closes issue #3.

  • 0.1.1

    • Prepare for pip distribution

  • 0.1.0

    • Initial version

    • Strings, hashes, sets and lists are supported
