
Python Cache Hierarchy Simulator

Project description

A single-core cache hierarchy simulator written in Python.

Build status (Travis CI badge): https://travis-ci.org/RRZE-HPC/pycachesim.svg?branch=master

The goal is to accurately simulate the caching behavior (allocation/hit/miss/replace/evict) of all cache levels found in modern processors. It is developed as a backend to kerncraft, but a command line interface to replay LOAD/STORE instructions is also planned.

Current features:
  • Inclusive cache hierarchies

  • LRU, MRU, RR and FIFO policies supported

  • Support for cache associativity (see the sizing sketch after this list)

  • Only write-allocate with write-back support

  • Speed (core is implemented in C)

  • Python 2.7+ and 3.4+ support, with no other dependencies
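
As a quick illustration of how associativity, set count, and cache-line size determine a level's capacity (a minimal sketch; the sets/ways/cacheline_size values are taken from the Usage example below, and capacity is simply sets * ways * cacheline_size):

cacheline_size = 64  # bytes per cache-line

# (sets, ways) per level, matching the Usage example below
levels = {"L1": (64, 8), "L2": (512, 8), "L3": (20480, 16)}

for name, (sets, ways) in sorted(levels.items()):
    size = sets * ways * cacheline_size
    print("{}: {} kB ({} sets x {}-way x {} B lines)".format(
        name, size // 1024, sets, ways, cacheline_size))

This prints 32 kB for L1, 256 kB for L2 and 20480 kB (20 MB) for L3, matching the comments in the Usage example.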

Planned features:
  • Rules to define the interaction between cache levels (e.g., exclusive caches, copy-back,…)

  • Support write-through architectures

  • Report timeline of cache events

  • Visualize events (html file?)

  • More detailed store/evict handling (e.g., using dirty bits)

  • (uncertain) instruction cache

License

pycachesim is licensed under AGPLv3.

Usage

from cachesim import CacheSimulator, Cache, MainMemory

cacheline_size = 64
l3 = Cache(20480, 16, cacheline_size, "LRU")  # 20MB, 16-way
l2 = Cache(512, 8, cacheline_size, "LRU", parent=l3)  # 256kB, 8-way
l1 = Cache(64, 8, cacheline_size, "LRU", parent=l2)  # 32kB, 8-way
mem = MainMemory(l3)
cs = CacheSimulator(l1, mem, write_allocate=True)

cs.load(2342)  # Loads one byte from address 2342, should be a miss in all cache-levels
cs.store(512, length=8)  # Stores 8 bytes to addresses 512-519;
                         # will also be a load miss (due to write-allocate)
cs.load(512, 520)  # Loads from address 512 until (exclusive) 520 (eight bytes)

print(list(cs.stats()))

This should return:

[{u'LOAD': 17L, u'MISS': 2L, u'HIT': 15L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 2L, u'HIT': 0L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 2L, u'HIT': 0L, u'STORE': 8L},
 {u'LOAD': 2L, u'MISS': 0L, u'HIT': 2L, u'STORE': 8L}]

Each dictionary refers to one memory level, starting with L1 and ending with main memory. The 17 loads are the sum of all byte-wise accesses to the cache hierarchy: 1 (from the first load) + 8 (from the store, due to write-allocate) + 8 (from the second load) = 17.

The 15 hits are for bytes that were already cached. The number is high because the interface operates byte-wise, so 15 of the accessed bytes were already present in the cache. Internally, pycachesim operates on cache-lines, to which all addresses are mapped. Thus, the two misses seen throughout all cache levels are actually two complete cache-lines, and once a cache-line has been loaded, consecutive accesses to the same line are counted as hits.

In short: hits and loads in L1 are counted byte-wise, as are stores throughout all cache levels. All other statistics are based on cache-lines.
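
To make the accounting concrete, the following sketch (hypothetical, for illustration only; it mirrors the description above, not pycachesim's internal code) maps byte addresses to cache-lines via integer division and reproduces the L1 numbers from the example:

cacheline_size = 64

def to_line(addr):
    # All addresses are mapped to cache-lines
    return addr // cacheline_size

# Byte-wise accesses from the example: load(2342), the write-allocate load
# triggered by store(512, length=8), and load(512, 520)
accesses = [range(2342, 2343), range(512, 520), range(512, 520)]

cached_lines = set()
loads = hits = misses = 0
for access in accesses:
    for addr in access:
        loads += 1
        if to_line(addr) in cached_lines:
            hits += 1
        else:
            misses += 1  # one miss per newly fetched cache-line
            cached_lines.add(to_line(addr))

print("LOAD={} MISS={} HIT={}".format(loads, misses, hits))  # LOAD=17 MISS=2 HIT=15

The first load touches cache-line 36 (2342 // 64), while the store and the second load both fall into the single cache-line starting at address 512, which is why only two cache-lines miss in total.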

Project details


Download files


Source Distribution

pycachesim-0.1.3.1.tar.gz (10.5 kB)


File hashes

Hashes for pycachesim-0.1.3.1.tar.gz:

Algorithm    Hash digest
SHA256       b3afc652a3be9d4498bb2d61eff3a99fd72455967d50e3fd41519499e85beab6
MD5          656a56dd01b2132c16437f75b5f22797
BLAKE2b-256  8dacb6bfdf8d52f9e9d82680426525c58cc19345649260d0c0c908c63657a618

