A scheduler backed by Redis with a very simple interface

RACHE doesn’t handle job execution. It only maintains a list of jobs and their scheduled execution times; it’s up to you to poll for pending jobs and send them to an actual task queue.
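As a toy illustration of this idea (not rache's actual implementation, which stores the schedule in Redis), the scheduler can be modeled as a mapping from job IDs to timestamps, where fetching pending jobs means selecting the entries whose time has passed:

```python
import time

class ToyScheduler:
    """A minimal in-memory model of a schedule: job ID -> execution time."""

    def __init__(self):
        self._jobs = {}

    def schedule_job(self, job_id, schedule_in):
        # Last call wins: re-scheduling a job simply overwrites its time.
        self._jobs[job_id] = time.time() + schedule_in

    def pending_jobs(self):
        # A job is pending once its scheduled time is in the past.
        now = time.time()
        return [job_id for job_id, at in self._jobs.items() if at <= now]

scheduler = ToyScheduler()
scheduler.schedule_job('job:1', schedule_in=0)     # due now
scheduler.schedule_job('job:2', schedule_in=3600)  # due in an hour
print(scheduler.pending_jobs())  # only 'job:1' is due
```

rache keeps the same job-ID-to-timestamp mapping in a Redis sorted set instead of a Python dict, which is what makes the schedule persistent and shareable between processes.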


RACHE works with any Python version from 2.6 to 3.3. You only need a working Redis server.

pip install rache


By default RACHE connects to Redis on localhost, port 6379, database 0. To override this, set a REDIS_URL environment variable:
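For example (the URL below is only a placeholder; point it at your own Redis instance):

```shell
export REDIS_URL=redis://localhost:6379/1
```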


RACHE prefixes all its Redis keys with rache:. You can override this by setting the RACHE_REDIS_PREFIX environment variable.
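For instance, to namespace the keys per application (`myapp:rache:` is just an example prefix):

```shell
export RACHE_REDIS_PREFIX=myapp:rache:
```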


import rq

from rache import schedule_job, pending_jobs

# Schedule a job now
schedule_job('<job id>', schedule_in=0, timeout=10)

# Get pending jobs
jobs = pending_jobs()

# Send them to the task queue for immediate execution
# (run_job is your own task function)
queue = rq.Queue()
for job in jobs:
    queue.enqueue(run_job, kwargs=job)

schedule_job('job id', schedule_in=<seconds>, connection=None, **kwargs)

A given job ID is unique from the scheduler’s perspective: scheduling the same ID twice simply reschedules the job to the time given in the most recent call.

**kwargs can be used to attach data to your jobs. For instance, if you have jobs to fetch URLs and want to attach a timeout to these jobs:

schedule_job('<job id>', schedule_in=3600, timeout=10)

The job data is persistent. To remove a key from the data, call schedule_job() with that key set to None:

schedule_job('<job id>', schedule_in=3600, timeout=None)

schedule_in is mandatory. This means you can’t update an existing job without rescheduling it.

connection allows you to pass a custom Redis connection object. This is useful if you have your own connection pooling and want to manage connections yourself.


jobs = pending_jobs(reschedule_in=None, limit=None, connection=None)

(the returned value is a generator)

Fetches the pending jobs and yields them one by one. Each job is a dictionary with an id key plus any additional data attached to it.

reschedule_in automatically reschedules each fetched job to run again after the given number of seconds. This is useful if you have periodic jobs but also want to special-case some of them according to their results (enqueue is rq-style syntax):

jobs = pending_jobs(reschedule_in=3600)

for job in jobs:
    enqueue(do_something, kwargs=job)

def do_something(**kwargs):
    # … do some work

    if some_condition:
        # re-schedule in 30 days
        schedule_job(kwargs['id'], schedule_in=3600 * 24 * 30)

limit allows you to limit the number of jobs returned. Remaining jobs stay in the scheduler, even if they are already due.

connection allows you to pass a custom Redis connection object.


delete_job('<job id>', connection=None)

Removes a job completely from the scheduler.

connection allows you to pass a custom Redis connection object.


job_details('<job id>', connection=None)

Returns a dictionary with the job data. The job ID and scheduled time are set in the id and schedule_at keys of the returned value.

connection allows you to pass a custom Redis connection object.


scheduled_jobs(with_times=False, connection=None)

(the returned value is a generator)

Fetches all the job IDs stored in the scheduler, yielding either plain IDs or (job_id, timestamp) tuples if with_times is set to True.

This is useful for syncing jobs between the scheduler and a database, for instance.

connection allows you to pass a custom Redis connection object.
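As a sketch of such a sync (the inputs below are placeholder sets; in real code, scheduler_ids would come from scheduled_jobs() and database_ids from your own database query), a simple set difference in each direction tells you what to pass to delete_job() and schedule_job():

```python
# Placeholder data for this sketch:
scheduler_ids = {'job:1', 'job:2', 'job:3'}  # what scheduled_jobs() would yield
database_ids = {'job:2', 'job:3', 'job:4'}   # what your database says should exist

# In the scheduler but gone from the database: candidates for delete_job().
stale = scheduler_ids - database_ids

# In the database but missing from the scheduler: candidates for schedule_job().
missing = database_ids - scheduler_ids

print(stale, missing)
```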


Create a local environment:

virtualenv env
source env/bin/activate
pip install -e .

Run the tests:

python setup.py test

Or for all supported Python versions:

tox

Hack, fix bugs and submit pull requests!


  • 0.3.1 (2013-08-31):
    • Made pending_jobs work correctly with both Redis and StrictRedis clients.
  • 0.3 (2013-08-31):
    • Allow passing custom Redis connection objects for fine control on open connections.
  • 0.2.2 (2013-07-10):
    • Fixed a typo that led to an AttributeError when retrieving some jobs.
  • 0.2.1 (2013-07-03):
    • Allowed pending_jobs() to return non-unicode data if undecodable bytes are passed to schedule_job().
  • 0.2 (2013-06-02):
    • Added limit kwarg to pending_jobs().
    • Allowed schedule_in to be a timedelta as an alternative to a number of seconds.
    • Added job_details().
    • Numerical data attached to jobs is cast to int() when returned.
  • 0.1 (2013-06-01):
    • Initial release
