
A WSGI middleware which processes ESI directives

Project description

`wesgi` implements an ESI Processor as a WSGI middleware. It is primarily aimed
at development environments, to simulate the production ESI Processor. Under
certain conditions it may also be used in production.


This implementation currently supports only ``<esi:include>`` tags and
``<!--esi -->`` comments. The relevant specification is the W3C ESI Language
Specification 1.0.
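As an illustration only (this is not wesgi's implementation), the core transformation an ESI processor performs can be sketched in a few lines of Python, with a lookup table standing in for real HTTP fetches:

```python
import re

# Illustrative sketch only -- not wesgi's actual code. It resolves
# <esi:include src="..."/> tags from a lookup table standing in for
# real HTTP fetches, and unwraps <!--esi ... --> comments, keeping
# their content, as an ESI processor would.

INCLUDE_RE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')
COMMENT_RE = re.compile(r'<!--esi(.*?)-->', re.DOTALL)

def process_esi(body, fetch):
    # Replace each include with the fetched fragment.
    body = INCLUDE_RE.sub(lambda m: fetch(m.group(1)), body)
    # Strip the <!--esi ... --> wrapper, keeping the content.
    return COMMENT_RE.sub(lambda m: m.group(1), body)

fragments = {"/header": "<h1>Site header</h1>"}
page = '<esi:include src="/header"/><!--esi <p>fallback</p> -->'
print(process_esi(page, fragments.__getitem__))
# -> <h1>Site header</h1> <p>fallback</p>
```

A real processor also has to handle error behaviour, redirects and caching for each ``src`` URL, which is where most of wesgi's work lies.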



An ESI processor generally makes a lot of network calls to other services in
the process of putting together a page. So, in general, to reach very high
levels of performance it should be asynchronous. Standard Python and WSGI are
synchronous, placing an upper limit on performance which depends on:

- How many threads are used
- How many ESI includes are used per page
- The speed of the servers serving the ESI includes
- Whether `wesgi` uses a cache, and whether the ESI includes come with
  Cache-Control headers

Depending on the situation, `wesgi` may be performant enough for you.

There are also a number of ways to run WSGI applications asynchronously, with
varying definitions of "asynchronous".
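As a toy model of that performance ceiling (the URLs and latency below are invented for illustration), fetching includes serially costs roughly one network round-trip per include, while a thread pool overlaps them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy model of the constraint described above: each ESI include is a
# blocking network call (simulated here with sleep), so total page
# assembly time depends on how many fetches can run in parallel.

def fetch(url, latency=0.05):
    time.sleep(latency)          # stand-in for a blocking HTTP request
    return "<div>%s</div>" % url

urls = ["/a", "/b", "/c", "/d"]

start = time.time()
serial = [fetch(u) for u in urls]           # one thread: ~4 * latency
serial_time = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(fetch, urls))  # four threads: ~1 * latency
threaded_time = time.time() - start

print(serial_time, threaded_time)
```

A cache removes repeated fetches entirely, which is why enabling one (see below) matters more than thread count for pages whose includes rarely change.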


Configuration via Python

>>> from wesgi import MiddleWare
>>> from wsgiref.simple_server import demo_app

To use it in its default configuration for a development server:

>>> app = MiddleWare(demo_app)

To simulate an Akamai Production environment:

>>> from wesgi import AkamaiPolicy
>>> policy = AkamaiPolicy()
>>> app = MiddleWare(demo_app, policy=policy)

To simulate an Akamai Production environment with "chase redirect" turned on:

>>> policy.chase_redirect = True
>>> app = MiddleWare(demo_app, policy=policy)

If you wish to use it for a production server, it's advisable to turn debug
mode off and enable some kind of cache:

>>> from wesgi import LRUCache
>>> from wesgi import Policy
>>> policy = Policy()
>>> policy.cache = LRUCache()
>>> app = MiddleWare(demo_app, debug=False, policy=policy)

The ``LRUCache`` is a memory based cache using an approximation of the LRU
algorithm. The good parts of it were inspired by Raymond Hettinger's
``lru_cache`` recipe.
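The underlying idea can be sketched with the standard library. This is only an illustration of the LRU eviction policy, not wesgi's code; the class and sizes are invented:

```python
from collections import OrderedDict

# Sketch of the LRU idea: keep entries in access order and evict the
# least recently used one when the cache is full. wesgi's LRUCache is
# an approximation of this policy.

class TinyLRU:
    def __init__(self, maxsize=2):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, key):
        value = self._data.pop(key)   # raises KeyError on a miss
        self._data[key] = value       # re-insert: now the most recent
        return value

    def put(self, key, value):
        self._data.pop(key, None)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recent

cache = TinyLRU(maxsize=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # "a" is now the most recently used
cache.put("c", 3)    # evicts "b", the least recently used
print(sorted(cache._data))  # -> ['a', 'c']
```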

Other available caches that can be easily integrated are ``httplib2``'s
``FileCache`` or ``memcache``. See the ``httplib2`` documentation for details.

Configuration via paste.ini

The ``wesgi.filter_app_factory`` function lets you configure ``wesgi`` in your
paste.ini file. For example::

    paste.filter_app_factory = wesgi:filter_app_factory
    next = myapp
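In context, a full paste.ini might look something like the following sketch. The section and application names here are placeholders, not taken from wesgi's documentation; in PasteDeploy, a ``filter-app`` section uses ``next`` to name the application it wraps::

    [filter-app:main]
    paste.filter_app_factory = wesgi:filter_app_factory
    next = myapp

    [app:myapp]
    # "egg:MyApp" is a hypothetical application entry point.
    use = egg:MyApp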


Development on `wesgi` is centered around its GitHub repository.


0.12 (2016-10-06)


- fix dictionary changed size during iteration errors on Python 3

0.11 (2016-05-25)


- Configuration via paste, rescued from missing 0.9 release.

0.10 (2016-05-25)


- Python 3 support, drop Python 2.5 support.
- Request header forwarding by default.
- Turn relative links in ``<esi:include>`` tags into absolute links before
  they are fetched.

0.8 (2011-07-07)


- A ``max_object_size`` option for ``wesgi.LRUCache`` to limit the maximum size
of objects stored.

0.7 (2011-07-06)


- Major refactoring to use ``httplib2`` as the backend to get ESI includes. This
brings along HTTP Caching.
- A memory based implementation of the LRU caching algorithm at ``wesgi.LRUCache``.
- Handle ESI comments.


- Fix bug where regular expression to find ``src:includes`` could take a long time.

0.5 (2011-07-04)

- Initial release.

