
The fastest memoizing and caching Python library written in Rust

Project description

Cachebox


Changelog | Releases

The fastest caching library with different implementations, written in Rust.

  • 🚀 5-23x faster than other libraries (like cachetools and cacheout)
  • 📊 Very low memory usage (about one-third of a dictionary's)
  • 🦀 Written in Rust
  • 🤝 Supports Python 3.8 and above
  • 📦 7+ cache algorithms are supported
  • 🧶 Completely thread-safe

Decorator example:

from cachebox import cached, cachedmethod, TTLCache, LRUCache

# Keep coin price for no longer than a minute
@cached(TTLCache(maxsize=126, ttl=60))
def get_coin_price(coin_name):
    return web3_client.get_price(coin_name)

# Async functions are supported
@cached(LRUCache(maxsize=126))
async def get_coin_price(coin_name):
    return await async_web3_client.get_price(coin_name)

# You can pass the `capacity` parameter.
# If `capacity` is specified, the cache will be able to hold at
# least `capacity` elements without reallocating.
@cached(LRUCache(maxsize=126, capacity=100))
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# methods are supported
class APIResource:
    @cachedmethod(
        TTLCache(126, ttl=10),
        # You can determine how caching is done using the `key_maker` parameter.
        key_maker=lambda args, kwds: args[0].client_ip
    )
    def get_information(self, request):
        ...


When do I need caching?

  1. Sometimes you have functions that take a long time to execute, and you need to call them repeatedly.

@cached(LRUCache(260))
def function(np_array):
    # big operations
    ...

  2. Sometimes you need to temporarily store data in memory for a short period.

  3. When dealing with remote APIs, instead of making frequent API calls, store the responses in a cache.

@cached(TTLCache(0, ttl=10))
def api_call(key):
    return api.call(key)

  4. Caching query results from databases can enhance performance.

@cached(TTLCache(0, ttl=1))
def select_user(id):
    return db.execute("SELECT * FROM users WHERE id=?", (id,))

and ...

Why cachebox?

The cachebox library uses Rust to achieve high performance.

Low memory usage - It has very low memory usage; let's do a simple comparison with a dictionary:

>>> import sys, cachebox
>>> sys.getsizeof(cachebox.Cache(0, {i:i for i in range(100000)}))
1835032
>>> sys.getsizeof({i:i for i in range(100000)})
5242960

High speed - Is speed important to you? cachebox has you covered; see the project's benchmarks.

Zero dependencies - As mentioned, cachebox is written in Rust, so you don't have to install any other dependencies.

Thread-safe - It's completely thread-safe and uses reader-writer locks to prevent concurrency problems.
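
Because every cache class is thread-safe, a single instance can be shared across threads without any extra locking on your side. Here is a minimal sketch (not taken from the cachebox docs); it relies only on the mapping methods shown elsewhere on this page:

import threading
from cachebox import LRUCache

shared = LRUCache(maxsize=1000)

def worker(start):
    # Concurrent writes and reads against the shared cache.
    for i in range(start, start + 100):
        shared[i] = i * i
        shared.get(i - 1)

threads = [threading.Thread(target=worker, args=(n * 100,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 400 -- every insert landed without an external lock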

Installation

You can install cachebox from PyPI:

pip3 install -U cachebox

To verify that the library is installed correctly, run the following command:

python -c "import cachebox; print(cachebox.__version__)"

API

All of the implementations support mutable-mapping methods (e.g. __setitem__, get, popitem), and some implementations add new methods of their own.

These methods are available for all classes:

  • insert(key, value): an alias for __setitem__
>>> cache.insert(1, 1) # it equals to cache[1] = 1
  • capacity(): Returns the number of elements the cache can hold without reallocating.
>>> cache.update((i, i) for i in range(1000))
>>> cache.capacity()
1432
  • drain(n): Deletes up to n items according to the cache algorithm and returns the number of items actually removed.
>>> cache = LFUCache(10, {i:i for i in range(10)})
>>> cache.drain(8)
8
>>> len(cache)
2
>>> cache.drain(10)
2
>>> len(cache)
0
  • shrink_to_fit(): Shrinks the capacity of the cache as much as possible.
>>> cache = LRUCache(0, {i:i for i in range(10)})
>>> cache.capacity()
27
>>> cache.shrink_to_fit()
>>> cache.capacity()
11

Cache

A cache implementation without any eviction policy: it can be fixed-size or unlimited-size, and a full fixed-size Cache raises OverflowError instead of evicting an item (as shown below).

>>> from cachebox import Cache
>>> cache = Cache(100) # fixed-size cache
>>> cache = Cache(0) # unlimited-size cache
>>> cache = Cache(100, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object
>>> cache = Cache(2, {i:i for i in range(10)})
...
OverflowError: maximum size limit reached

There are no new methods for this class.

FIFOCache

FIFO Cache implementation (First-In First-Out policy, very useful).

In simple terms, the FIFO cache will remove the element that has been in the cache the longest; it behaves like a Python dictionary.

>>> from cachebox import FIFOCache
>>> cache = FIFOCache(100) # fixed-size cache
>>> cache = FIFOCache(0) # unlimited-size cache
>>> cache = FIFOCache(100, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object

There are new methods:

  • first: returns the first inserted key (the oldest)
  • last: returns the last inserted key (the newest)
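
For example, a short REPL sketch (assuming, as with the other helpers on this page, that first and last are called as methods):

>>> from cachebox import FIFOCache
>>> cache = FIFOCache(5)
>>> cache.update((i, str(i)) for i in range(3)) # insert keys 0, 1, 2 in order
>>> cache.first() # the oldest key
0
>>> cache.last()  # the newest key
2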

LFUCache

LFU Cache implementation (Least Frequently Used policy).

In simple terms, the LFU cache will remove the element in the cache that has been accessed the least, regardless of time.

>>> from cachebox import LFUCache
>>> cache = LFUCache(100) # fixed-size cache
>>> cache = LFUCache(0) # unlimited-size cache
>>> cache = LFUCache(100, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object

There's a new method:

  • least_frequently_used: returns the key that has been accessed the least.
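
A short REPL sketch of how this could look (assuming least_frequently_used is called as a method, like the other helpers on this page):

>>> from cachebox import LFUCache
>>> cache = LFUCache(5)
>>> cache.update({"a": 1, "b": 2})
>>> cache["a"] # access "a", so "b" is now the least frequently used key
1
>>> cache.least_frequently_used()
'b'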

RRCache

RRCache implementation (Random Replacement policy).

In simple terms, the RR cache will randomly choose an element to remove when it needs to make space.

>>> from cachebox import RRCache
>>> cache = RRCache(100) # fixed-size cache
>>> cache = RRCache(0) # unlimited-size cache
>>> cache = RRCache(100, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object

There are no new methods for this class.

LRUCache

LRU Cache implementation (Least recently used policy).

In simple terms, the LRU cache will remove the element in the cache that has not been accessed in the longest time.

>>> from cachebox import LRUCache
>>> cache = LRUCache(100) # fixed-size cache
>>> cache = LRUCache(0) # unlimited-size cache
>>> cache = LRUCache(100, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object

There are new methods:

  • least_recently_used: returns the key that has gone the longest without being accessed.
  • most_recently_used: returns the key that was accessed most recently.
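
A short REPL sketch (assuming both helpers are called as methods, like the other examples on this page):

>>> from cachebox import LRUCache
>>> cache = LRUCache(5)
>>> cache.update({"a": 1, "b": 2, "c": 3})
>>> cache["a"] # touch "a" so it becomes the most recently used key
1
>>> cache.least_recently_used()
'b'
>>> cache.most_recently_used()
'a'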

TTLCache

TTL Cache implementation (Time-to-live policy).

In simple terms, the TTL cache evicts items that are older than their time-to-live.

>>> from cachebox import TTLCache
>>> cache = TTLCache(100, 2) # fixed-size cache, ttl of 2 seconds
>>> cache = TTLCache(0, 10) # unlimited-size cache, ttl of 10 seconds
>>> cache = TTLCache(100, 5, {"key1": "value1", "key2": "value2"}) # initialize from dict or any iterable object

There are new methods:

  • get_with_expire: Works like .get(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2})
>>> cache.get_with_expire(1)
(1, 1.23445675)
>>> cache.get_with_expire("no-exists")
(None, 0.0)
  • pop_with_expire: Works like .pop(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2})
>>> cache.pop_with_expire(1)
(1, 1.23445675)
>>> cache.pop_with_expire(1)
(None, 0.0)
  • popitem_with_expire: Works like .popitem(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2})
>>> cache.popitem_with_expire()
(1, 1, 1.23445675)
>>> cache.popitem_with_expire()
(2, 2, 1.94389545)
>>> cache.popitem_with_expire()
...
KeyError

VTTLCache

VTTL Cache implementation (per-key time-to-live policy).

Works like TTLCache, with the difference that each key has its own time-to-live value.

>>> from cachebox import VTTLCache
>>> cache = VTTLCache(100) # fixed-size cache
>>> cache = VTTLCache(0) # unlimited-size cache

# initialize from dict or any iterable object;
# also these items will expire after 5 seconds
>>> cache = VTTLCache(100, {"key1": "value1", "key2": "value2"}, 5)

# initialize from dict or any iterable object;
# but these items never expire, because we pass None as their ttl value
>>> cache = VTTLCache(100, {"key1": "value1", "key2": "value2"}, None)

There are new methods:

  • insert(key, value, ttl): works differently here. If you use the cache[key] = value form, you cannot set a ttl for that item, but with insert you can.
>>> cache.insert("key", "value", 10) # this item will expire after 10 seconds
>>> cache.insert("key", "value", None) # but this item never expires
  • setdefault(key, default, ttl): Returns the value of the specified key. If the key does not exist, inserts the key with the specified default value and ttl (a short sketch follows).
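
A minimal REPL sketch based only on the signature above; the return values are assumed to follow the usual dict.setdefault semantics:

>>> cache = VTTLCache(10)
>>> cache.setdefault("a", 1, 5) # "a" is missing: inserted with a 5-second ttl
1
>>> cache.setdefault("a", 2, 5) # "a" exists: the stored value is returned
1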

  • update(iterable, ttl): inserts the specified items into the cache. The iterable can be a dictionary or any iterable of key-value pairs.

>>> import time
>>> cache = VTTLCache(20)
>>> cache.insert("key", "value", 10)
>>> cache.update({i:i for i in range(12)}, 2)
>>> len(cache)
13
>>> time.sleep(2)
>>> len(cache)
1
  • get_with_expire: Works like .get(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2}, 2)
>>> cache.get_with_expire(1)
(1, 1.9934)
>>> cache.get_with_expire("no-exists")
(None, 0.0)
  • pop_with_expire: Works like .pop(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2}, 2)
>>> cache.pop_with_expire(1)
(1, 1.99954)
>>> cache.pop_with_expire(1)
(None, 0.0)
  • popitem_with_expire: Works like .popitem(), but also returns the remaining expiration.
>>> cache.update({1: 1, 2: 2}, 2)
>>> cache.popitem_with_expire()
(1, 1, 1.9786564)
>>> cache.popitem_with_expire()
(2, 2, 1.97389545)
>>> cache.popitem_with_expire()
...
KeyError

Performance table

[!NOTE]
Operations which have an amortized cost are suffixed with a *. Operations with an expected cost are suffixed with a ~.

            get(i)   insert(i)          delete(i)         update(m)           popitem
Cache       O(1)~    O(1)~*             O(1)~             O(m)~               N/A
FIFOCache   O(1)~    O(min(i, n-i))*    O(min(i, n-i))    O(m*min(i, n-i))    O(1)
LFUCache    O(1)~    O(n)~*             O(1)~             O(m*n)~             O(n)~*
RRCache     O(1)~    O(1)~*             O(1)~             O(m)~               O(1)~
LRUCache    O(1)~    ?                  O(1)~             ?                   O(1)
TTLCache    O(1)~    O(min(i, n-i))*    O(min(i, n-i))    O(m*min(i, n-i))    O(1)
VTTLCache   O(1)~    ?                  O(1)~             ?                   O(1)~

Frequently asked questions

What is the difference between TTLCache and VTTLCache?

In TTLCache, you set an expiration time for all items, but in VTTLCache, you can set a unique expiration time for each item.

            TTL                                     Speed
TTLCache    One ttl for all items                   TTLCache is much faster than VTTLCache
VTTLCache   Each item has its own expiration time   VTTLCache is slower at inserting

Can we set maxsize to zero?

Yes; passing zero as maxsize means there is no limit on the number of items.
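
For instance, a quick REPL sketch using one of the classes above:

>>> from cachebox import FIFOCache
>>> cache = FIFOCache(0) # maxsize=0 means no size limit
>>> cache.update((i, i) for i in range(10000))
>>> len(cache)
10000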

How to migrate from cachetools to cachebox?

cachebox syntax is very similar to cachetools. Just change these:

# If you pass infinity to a cache implementation, change it to zero.
cachetools.Cache(math.inf) -> cachebox.Cache(0)
# If you use `isinstance` for cachetools classes, change those.
isinstance(cache, cachetools.Cache) -> isinstance(cache, cachebox.BaseCacheImpl)

License

Copyright (c) 2024 aWolverP - MIT License
