guillotina cache implementation using redis + lru in-memory cache

guillotina_rediscache integrates Redis caching into guillotina, with an additional in-memory cache layer.

To coordinate invalidation of the in-memory cache across processes, guillotina_rediscache uses Redis's pub/sub feature.
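
The pattern is roughly: reads consult a small in-process LRU first and fall back to Redis, while invalidations are published on a Redis channel so every process drops the affected keys from its local LRU. Below is a minimal, illustrative sketch of that pattern using the redis-py asyncio client; it is not the package's actual implementation, and the class name, method names, and the "invalidations" channel name are assumptions made for the example.

# Illustrative two-layer cache with pub/sub invalidation (sketch, not
# guillotina_rediscache's real code).
from collections import OrderedDict

import redis.asyncio as aioredis  # redis-py >= 4.2


class TwoLayerCache:
    def __init__(self, redis_client, memory_cache_size=1000, ttl=3600):
        self._redis = redis_client
        self._memory = OrderedDict()   # simple LRU: oldest entry evicted first
        self._memory_cache_size = memory_cache_size
        self._ttl = ttl

    async def get(self, key):
        # 1. In-memory layer: fastest, local to this process.
        if key in self._memory:
            self._memory.move_to_end(key)
            return self._memory[key]
        # 2. Redis layer: shared across processes.
        value = await self._redis.get(key)
        if value is not None:
            self._store_memory(key, value)
        return value

    async def set(self, key, value):
        self._store_memory(key, value)
        await self._redis.set(key, value, ex=self._ttl)

    async def invalidate(self, key):
        # Drop the key locally, drop it from Redis, then tell every other
        # process to drop its in-memory copy as well.
        self._memory.pop(key, None)
        await self._redis.delete(key)
        await self._redis.publish("invalidations", key)

    async def listen_for_invalidations(self):
        # Each process runs this as a background task so that invalidations
        # published by other processes also clear its local LRU.
        pubsub = self._redis.pubsub()
        await pubsub.subscribe("invalidations")
        async for message in pubsub.listen():
            if message["type"] == "message":
                self._memory.pop(message["data"].decode(), None)

    def _store_memory(self, key, value):
        self._memory[key] = value
        self._memory.move_to_end(key)
        while len(self._memory) > self._memory_cache_size:
            self._memory.popitem(last=False)

In a real setup each worker would build one such cache around a client such as aioredis.Redis(host="localhost", port=6379) and run listen_for_invalidations() as a background task; the ttl and memory_cache_size settings below map onto the same ideas.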

Configuration

Example app_settings for this package:

{
  "databases": {
    "db": {
      ...
      "cache_strategy": "redis"
      ...
    }
  },
  "redis": {
    "host": "localhost",
    "port": 6379,
    "ttl": 3600,
    "memory_cache_size": 1000,
    "pool": {
      "minsize": 5,
      "maxsize": 100
    }
  }
}
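
Guillotina is typically configured through a YAML file (the measures commands below use measures/config.yaml); a rough YAML equivalent of the settings above might look like the following sketch. The applications entry is an assumption about how the addon is loaded, and the database block is abbreviated; adapt both to your deployment.

applications:
  - guillotina_rediscache
databases:
  db:
    # ... your existing storage settings ...
    cache_strategy: redis
redis:
  host: localhost
  port: 6379
  ttl: 3600
  memory_cache_size: 1000
  pool:
    minsize: 5
    maxsize: 100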

Run measures

Using the guillotina run command:

./bin/g run --script=measures/serialize.py -c measures/config.yaml

With profiling:

./bin/g run --script=measures/serialize.py -c measures/config.yaml --line-profiler --line-profiler-matcher="*serialize*"
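
As a rough idea of what such a measurement script does, the sketch below simply times an arbitrary coroutine over many iterations; it is a hypothetical stand-in, not the bundled measures/serialize.py.

# measure_sketch.py: hypothetical timing helper, not the bundled measures script.
import asyncio
import time

async def measure(coro_factory, iterations=1000):
    # Await the coroutine `iterations` times and report the average wall-clock time.
    start = time.perf_counter()
    for _ in range(iterations):
        await coro_factory()
    elapsed = time.perf_counter() - start
    print(f"{iterations} runs in {elapsed:.3f}s "
          f"({elapsed / iterations * 1000:.3f} ms each)")

async def noop():
    # Stand-in for the serialization or cache call being measured.
    await asyncio.sleep(0)

if __name__ == "__main__":
    asyncio.run(measure(noop))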

Changelog

1.3.1 (2018-03-19)

  • Fix redis file manager finish method [vangheem]

1.3.0 (2018-03-19)

  • Provide redis file manager [vangheem]

1.2.0 (2018-03-14)

  • Upgrade to work with guillotina 2.4.x [vangheem]

1.1.6 (2018-03-01)

  • Handle errors while canceling init task [vangheem]

1.1.5 (2018-03-01)

  • Handle errors while canceling task [vangheem]

1.1.4 (2018-02-15)

  • Fix cache stats endpoint [vangheem]

1.1.3 (2018-01-22)

  • Be able to disable deleting groups of cache keys together, via the cluster_mode option. [vangheem]

1.1.2 (2018-01-17)

  • Fix use of redis.delete when the list of keys is empty [vangheem]

1.1.1 (2018-01-17)

  • Batch all cache deletes into one request [vangheem]

1.1.0 (2018-01-12)

  • Push cache updates to redis subscriber. This should improve cache hits dramatically [vangheem]

1.0.14 (2018-01-10)

  • Only run invalidation task if we have keys to invalidate [vangheem]

1.0.13 (2017-12-15)

  • Improve request performance [vangheem]

  • Change the way we’re using the redis pool so it reuses connections [vangheem]

1.0.12 (2017-11-30)

  • Fix missing await statement for self.get_redis() [vangheem]

1.0.11 (2017-11-08)

  • Handle CancelledError [vangheem]

1.0.10 (2017-11-06)

  • Upgrade for guillotina 2.0.0 [vangheem]

1.0.9 (2017-10-23)

  • Fix handling connection objects and releasing back to pool [vangheem]

1.0.8 (2017-10-23)

  • Fix use of pool [vangheem]

1.0.7 (2017-10-23)

  • Use pickle instead of json for loads/dumps because it is much faster [vangheem]

1.0.6 (2017-10-19)

  • Use ujson [vangheem]

1.0.5 (2017-10-02)

  • Track all keys needing invalidation and do invalidation in an async task so the request can finish faster. [vangheem]

1.0.4 (2017-05-29)

  • Test fixes [vangheem]

1.0.3 (2017-05-26)

  • Fix delete not properly invalidating cache [vangheem]

1.0.2 (2017-05-15)

  • Fix channel publishing invalidations [vangheem]

1.0.1 (2017-05-15)

  • Fix release

1.0.0 (2017-05-15)

  • Initial release
