Caching for Analytic Computations
---------------------------------
Humans repeat stuff. Caching helps.
Normal caching policies like LRU aren't well suited to analytic computations,
where both the cost of recomputation and the cost of storage routinely vary by
a factor of a million or more. Consider the following computations:
```python
# Want this
np.std(x) # tiny result, costly to recompute
# Don't want this
np.transpose(x) # huge result, cheap to recompute
```
Cachey tries to hold on to values that have the following characteristics:

1. Expensive to recompute (in seconds)
2. Cheap to store (in bytes)
3. Frequently used
4. Recently used
It accomplishes this by adding the following to each item's score on each access:

    score += compute_time / num_bytes * (1 + eps) ** tick_time

for some small value of epsilon, which determines the memory half-life. This
score has units of inverse bandwidth; it decays exponentially for old results
and amplifies repeated results roughly linearly.
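The scoring rule above can be sketched in plain Python. This is only a toy illustration of the formula, not cachey's actual implementation; the names `updated_score`, `small_costly`, and `huge_cheap` are invented for this example.

```python
eps = 1e-3  # small epsilon controlling the memory half-life

def updated_score(score, compute_time, num_bytes, tick_time, eps=eps):
    # Each access adds compute_time / num_bytes, scaled by a factor
    # that grows with tick_time so recent accesses dominate old ones.
    return score + compute_time / num_bytes * (1 + eps) ** tick_time

# A tiny, expensive-to-recompute result (like np.std(x)) scores far
# higher than a huge, cheap-to-recompute one (like np.transpose(x))
# accessed at the same tick.
small_costly = updated_score(0.0, compute_time=2.0, num_bytes=8, tick_time=100)
huge_cheap = updated_score(0.0, compute_time=0.01, num_bytes=8e8, tick_time=100)
```

Because the tick factor multiplies every new contribution, an item accessed later starts from a larger baseline than one accessed earlier, which is what gives old scores their effective exponential decay.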
Example
-------
```python
>>> from cachey import Cache
>>> c = Cache(1e9, 1) # 1 GB, cut off anything with cost 1 or less
>>> c.put('x', 'some value', cost=3)
>>> c.put('y', 'other value', cost=2)
>>> c.get('x')
'some value'
```
The cache also has a `memoize` method:
```python
>>> memo_f = c.memoize(f)
```
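Cachey's real `memoize` wraps the cost-aware cache described above; the dict-based version below is only a toy sketch of the call pattern, with `f`, `memoize`, and `calls` invented for illustration.

```python
calls = []  # records how many times the underlying function actually runs

def f(x):
    calls.append(x)
    return x * x

cache = {}

def memoize(func):
    # On a miss, compute and store; on a hit, return the cached value.
    def wrapped(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapped

memo_f = memoize(f)
memo_f(3)  # computes f(3) and stores the result
memo_f(3)  # served from the cache; f is not called again
```

In cachey, the stored value would additionally carry a cost estimate so that the scoring rule decides whether the memoized result is worth keeping.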
Status
------
Cachey is new and not robust.