Python Cache Hierarchy Simulator
Project description
A single-core cache hierarchy simulator written in Python.
The goal is to accurately simulate the caching behavior (allocation, hit, miss, replacement, eviction) of all cache levels found in modern processors. It is developed as a backend to kerncraft, but a command line interface to replay LOAD/STORE instructions is also planned.
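Until that interface exists, a LOAD/STORE trace can already be replayed through the Python API (see Usage below). The following is only a minimal sketch, not part of pycachesim itself: the trace format (one "L <addr> <length>" or "S <addr> <length>" entry per line) and the helper names are hypothetical, and the hierarchy simply mirrors the Usage example.

```python
# Hypothetical sketch: replay a LOAD/STORE trace through the Python API.
# The trace format and helper names are made up for illustration.
from cachesim import CacheSimulator, Cache, MainMemory

def build_simulator():
    # Same three-level hierarchy as in the Usage example below.
    cacheline_size = 64
    l3 = Cache(20480, 16, cacheline_size, "LRU")           # 20MB 16-ways
    l2 = Cache(512, 8, cacheline_size, "LRU", parent=l3)   # 256kB 8-ways
    l1 = Cache(64, 8, cacheline_size, "LRU", parent=l2)    # 32kB 8-ways
    mem = MainMemory(l3)
    return CacheSimulator(l1, mem, write_allocate=True)

def replay_trace(lines, cs):
    for line in lines:
        op, addr, length = line.split()
        addr, length = int(addr), int(length)
        if op == "L":
            cs.load(addr, addr + length)    # load [addr, addr + length)
        elif op == "S":
            cs.store(addr, length=length)   # store `length` bytes at addr

if __name__ == "__main__":
    cs = build_simulator()
    replay_trace(["L 2342 1", "S 512 8", "L 512 8"], cs)
    print(list(cs.stats()))
```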
- Current features:
  - Inclusive cache hierarchies
  - LRU, MRU, RR and FIFO replacement policies
  - Support for cache associativity (geometry and policy are chosen per level; see the sketch after this list)
  - Write-allocate with write-back (the only write policy currently supported)
  - Speed (the core is implemented in C)
  - Python 2.7+ and 3.4+ support, with no other dependencies
- Planned features:
  - Rules to define the interaction between cache levels (e.g., exclusive caches, copy-back, …)
  - Support for write-through architectures
  - Report a timeline of cache events
  - Visualize events (HTML file?)
  - More detailed store/evict handling (e.g., using dirty bits)
  - (uncertain) Instruction cache
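As an illustration of the per-level policy and associativity settings, here is a minimal, hypothetical sketch that builds a two-level hierarchy with a different replacement policy on each level. It assumes that the policy names from the list are passed as plain strings ("RR", "FIFO"), just as "LRU" is in the Usage example below, and that a level's capacity equals sets × ways × cache line size, consistent with the size comments there.

```python
# Hypothetical sketch: per-level replacement policy and geometry.
# Assumption: "RR" and "FIFO" are accepted the same way "LRU" is in the
# Usage example; capacity = sets * ways * cacheline_size.
from cachesim import CacheSimulator, Cache, MainMemory

cacheline_size = 64                                 # bytes per cache line
l2 = Cache(512, 8, cacheline_size, "FIFO")          # 512 * 8 * 64 B = 256 kB, 8-way, FIFO
l1 = Cache(64, 8, cacheline_size, "RR", parent=l2)  # 64 * 8 * 64 B = 32 kB, 8-way, RR
mem = MainMemory(l2)                                # main memory behind the last-level cache
cs = CacheSimulator(l1, mem, write_allocate=True)

cs.load(0, 128)                                     # touch two cache lines
print(list(cs.stats()))
```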
License
pycachesim is licensed under AGPLv3.
Usage
```python
from cachesim import CacheSimulator, Cache, MainMemory

cacheline_size = 64
l3 = Cache(20480, 16, cacheline_size, "LRU")           # 20MB 16-ways
l2 = Cache(512, 8, cacheline_size, "LRU", parent=l3)   # 256kB 8-ways
l1 = Cache(64, 8, cacheline_size, "LRU", parent=l2)    # 32kB 8-ways
mem = MainMemory(l3)
cs = CacheSimulator(l1, mem, write_allocate=True)

cs.load(2342)            # Loads one byte from address 2342; should be a miss in all cache levels
cs.store(512, length=8)  # Stores 8 bytes to addresses 512-519;
                         # will also be a load miss (due to write-allocate)
cs.load(512, 520)        # Loads from address 512 until (exclusive) 520 (eight bytes)

print(list(cs.stats()))
```
This should return:
```
[{u'LOAD': 17L, u'MISS': 2L, u'HIT': 15L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 2L, u'HIT': 0L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 2L, u'HIT': 0L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 0L, u'HIT': 2L, u'STORE': 8L}]
```
Each dictionary refers to one memory level, starting with L1 and ending with main memory. The 17 loads are the sum of all byte-wise accesses to the cache hierarchy: 1 (from the first load) + 8 (from the store, due to write-allocate) + 8 (from the second load) = 17.
The 15 hits are for bytes that were already cached. The number is high because the interface operates byte-wise, so 15 individual bytes were found in cache. Internally, pycachesim operates on cache lines, to which all addresses are mapped. The two misses reported throughout all cache levels therefore correspond to two complete cache lines; once a line has been loaded, subsequent accesses to the same line are counted as hits.
In short: hits and loads in L1 are counted byte-wise, as are stores throughout all cache levels. All other statistics are based on cache lines.
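The cache-line granularity can be made visible with a small follow-on experiment. This is a sketch under assumptions: it attaches main memory directly behind a single L1 cache, analogous to MainMemory(l3) above, and the expected counts follow from the byte-wise L1 accounting just described.

```python
# Sketch: two loads that fall into the same 64-byte cache line cause one
# line miss; the remaining bytes of that line then hit in L1.
from cachesim import CacheSimulator, Cache, MainMemory

cacheline_size = 64
l1 = Cache(64, 8, cacheline_size, "LRU")   # 32 kB, 8-way, as above
mem = MainMemory(l1)                       # assumption: memory directly behind L1
cs = CacheSimulator(l1, mem, write_allocate=True)

cs.load(1024)         # first byte of a cache line -> line miss, line gets allocated
cs.load(1025, 1088)   # remaining 63 bytes of the same line -> byte-wise hits
print(list(cs.stats()))
# Expected for L1, by the accounting described above: 64 loads, 1 miss, 63 hits.
```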
Download files
Source Distribution
File details
Details for the file pycachesim-0.1.3.2.tar.gz.
File metadata
- Download URL: pycachesim-0.1.3.2.tar.gz
- Upload date:
- Size: 10.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
Algorithm | Hash digest
---|---
SHA256 | ebc00b15f7ad85a8c0f9c5e702a3024621069fe2ff8510c478b5de477d684ccd
MD5 | 77dfe5b8a3d3546d8cd48c99a38a8855
BLAKE2b-256 | bac87b2cdc7b6410359f02a9820a6f63a4420777244e98f8c7f4cbfe2ac3a010