
Library to aid in organizing, running, and debugging regular expressions against large bodies of text.

Project description






About the Project

The goal of this library is to simplify the deployment of regular expressions on large bodies of text, in a variety of input formats.

Getting Started

To get a local copy up and running, follow these simple steps.



  1. Clone the repo
    git clone
  2. Install requirements (requirements-dev is for test packages)
    pip install -r requirements.txt -r requirements-dev.txt
  3. If you wish to read text from SAS or SQL, you will need to install additional requirements. These additional requirements files may be of use:
    • ODBC-connection: requirements-db.txt
    • Postgres: requirements-psql.txt
    • SAS: requirements-sas.txt
  4. Run tests.
    set/export PYTHONPATH=src
    pytest tests


Example Implementations

Build Customized Algorithm

  • Create 4 files:
    • one that defines the regular expressions of interest
      • See examples/ for some examples
    • tests for those regular expressions
      • Why? To make sure the patterns do what you think they do
    • one that defines the algorithm (how to use the regular expressions); returns a Result
      • See examples/ for guidance
    • config.(py|json|yaml): defines various configurations
      • See example in examples/ for a basic config
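As a minimal sketch of the first two files above, a patterns module can define compiled regular expressions, and a companion test can assert their behavior. The names below (`FEVER`, `has_fever_mention`) are illustrative only, not part of the runrex API:

```python
import re

# Hypothetical patterns module: a regular expression of interest.
FEVER = re.compile(r'\bfever(?:ish)?\b', re.IGNORECASE)

def has_fever_mention(sentence):
    """Return True if the sentence mentions fever."""
    return bool(FEVER.search(sentence))

# Pattern tests: make sure the regex does what you think it does.
assert has_fever_mention('Patient reports feeling feverish.')
assert not has_fever_mention('No complaints today.')
```

Keeping these tests next to the patterns makes it cheap to verify each regex before wiring it into an algorithm.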

Input Data

Accepts a variety of input formats, but you will need to at least specify a document_id and document_text. The column names are configurable.
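For example, a tabular input (here a CSV, parsed with the standard library for illustration; the column names document_id and document_text match the defaults described above) might look like:

```python
import csv
import io

# Minimal sketch of tabular input: each row supplies a document_id
# and its document_text (both column names are configurable).
raw = io.StringIO(
    'document_id,document_text\n'
    '1,"Patient denies fever. Reports cough."\n'
)
rows = list(csv.DictReader(raw))
# Each row is a dict keyed by column name.
assert rows[0]['document_id'] == '1'
```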

Sentence Splitting

By default, the input document text is expected to have each sentence on a separate line. If sentence splitting is needed, a splitting scheme will need to be supplied to the application.
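If your text is not already split, a naive preprocessor can put one sentence per line before handing the text over. This helper is an assumption for illustration (runrex does not ship it), and a real pipeline would likely use a proper sentence splitter:

```python
import re

def to_one_sentence_per_line(text):
    """Naively split on sentence-ending punctuation followed by whitespace."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    return '\n'.join(s for s in sentences if s)

assert to_one_sentence_per_line('A. B? C!') == 'A.\nB?\nC!'
```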


For more details, see the example config or consult the schema.

Output Format

  • Recommended output format is jsonl
    • The data can be extracted using Python:
import json
with open('output.jsonl') as fh:
    for line in fh:
        data = json.loads(line)  # data is a dict
  • Output variables are configurable and can include:

    • id: unique id for line
    • name: document name
    • algorithm: name of algorithm with finding
    • value
    • category: name of category (usually the pattern; multiple categories contribute to an algorithm)
    • date
    • extras
    • matches: pattern matches
      • text: captured text
      • start: start index/offset of match
      • end: end index/offset of match
  • Scripts to accomplish useful tasks with the output are included in the scripts directory.
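Putting the variables above together, a small postprocessing sketch can group captured text by algorithm. The jsonl line below is fabricated for illustration, using only field names from the list above:

```python
import json

# One fabricated jsonl record, shaped like the output fields listed above.
lines = [
    '{"algorithm": "fever", '
    '"matches": [{"text": "feverish", "start": 25, "end": 33}]}',
]

# Group captured text by the algorithm that produced the finding.
by_algorithm = {}
for line in lines:
    data = json.loads(line)
    for m in data.get('matches', []):
        by_algorithm.setdefault(data['algorithm'], []).append(m['text'])
```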





See the open issues for a list of proposed features (and known issues).


Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request


Distributed under the MIT License.

See LICENSE for more information.


Please use the issue tracker.

