
A real-time data processing pipeline



logicsponge-processmining is a library for process-mining tasks, built on logicsponge. Process mining provides tools for modeling, analyzing, and improving business processes.

In a nutshell

The current implementation includes the following features:

  • Event-log prediction in both batch and streaming modes, using frequency prefix trees, n-grams, LSTMs, and ensemble methods.
  • Visualization of event streams based on their prefix trees.
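To illustrate the prefix-tree idea behind both features (a conceptual sketch, not the library's own implementation), a frequency prefix tree can be built by counting how often each prefix of activities occurs across cases:

```python
from collections import defaultdict

def build_prefix_tree(cases):
    """Count how often each activity prefix occurs across cases.

    `cases` maps a case ID to its ordered list of activities.
    Returns a dict from prefix (tuple of activities) to visit count.
    """
    counts = defaultdict(int)
    for activities in cases.values():
        prefix = ()
        for activity in activities:
            prefix += (activity,)
            counts[prefix] += 1
    return dict(counts)

# Hypothetical hospital-style event log with two cases
cases = {
    "c1": ["register", "triage", "release"],
    "c2": ["register", "triage", "admit"],
}
tree = build_prefix_tree(cases)
# ("register",) and ("register", "triage") occur in both cases;
# ("register", "triage", "admit") occurs in one.
```

Visit counts along each branch are exactly what a frequency-based predictor needs: given a case's current prefix, the children of that prefix node give the candidate next activities and their frequencies.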

Getting started

We recommend starting with our logicsponge tutorial to get acquainted with the basics of how logicsponge processes data streams.
Afterwards, to get started with logicsponge-processmining, install it using pip:

pip install logicsponge-processmining

Event-log prediction

Event-log prediction involves anticipating events given historical data about a process. In the streaming case, we receive a sequence of events, where each event is a pair (case_id, activity) consisting of a case ID and an activity. As events arrive, we train a model incrementally, allowing it to predict the next activity for a given case based on the sequence of activities observed so far.
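To make the streaming setting concrete, here is a minimal, library-independent sketch of incremental next-activity prediction. It keeps the last observed activity per case and, before updating its counts, predicts the most frequent successor of that activity (a bigram view of what the predefined models do more carefully):

```python
from collections import defaultdict

class IncrementalPredictor:
    """Toy streaming predictor: predicts the most frequent successor
    of a case's last observed activity (a simple bigram model)."""

    def __init__(self):
        self.last = {}  # case_id -> last observed activity
        # activity -> successor activity -> count
        self.succ = defaultdict(lambda: defaultdict(int))

    def observe(self, case_id, activity):
        """Predict the next activity for this case, then train on the event."""
        prediction = self.predict(case_id)
        prev = self.last.get(case_id)
        if prev is not None:
            self.succ[prev][activity] += 1
        self.last[case_id] = activity
        return prediction

    def predict(self, case_id):
        prev = self.last.get(case_id)
        followers = self.succ.get(prev)
        if not followers:
            return None  # no history for this case or activity yet
        return max(followers, key=followers.get)

# A hypothetical stream of (case_id, activity) pairs
stream = [
    ("c1", "register"), ("c1", "triage"),
    ("c2", "register"), ("c2", "triage"), ("c2", "release"),
]
model = IncrementalPredictor()
for case_id, activity in stream:
    model.observe(case_id, activity)
# "register" has now been followed by "triage" twice
```

Note the train-on-arrival discipline: each event is first predicted, then used for training, so the model is never evaluated on an event it has already seen.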

logicsponge-processmining offers several predefined models: frequency prefix trees, n-grams, LSTMs, and ensemble methods (soft, hard, and adaptive voting).

Let’s walk through the required imports to understand the structure of the library:

# example.py

import logicsponge.core as ls
from logicsponge.processmining.algorithms_and_structures import Bag, FrequencyPrefixTree, NGram
from logicsponge.processmining.models import BasicMiner, SoftVoting
from logicsponge.processmining.streaming import IteratorStreamer, StreamingActionPredictor
from logicsponge.processmining.test_data import dataset

This imports algorithm classes such as frequency prefix trees and n-grams. These classes can also serve as a starting point for defining your own data structures.

The model classes work as follows:

  • BasicMiner wraps a single algorithm (e.g., an n-gram) to produce a predictor model.
  • SoftVoting (and other ensemble methods) takes a list of models and produces a new model that applies soft voting.
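As a rough illustration of soft voting (assumed mechanics, not the library's exact code): each constituent model contributes a probability distribution over the next activity, the distributions are averaged, and the activity with the highest average probability wins:

```python
def soft_vote(distributions):
    """Average per-model probability distributions and return
    (winning activity, averaged distribution)."""
    activities = set().union(*distributions)
    avg = {
        a: sum(d.get(a, 0.0) for d in distributions) / len(distributions)
        for a in activities
    }
    return max(avg, key=avg.get), avg

# Hypothetical predictions from three models for the same case
dists = [
    {"triage": 0.7, "release": 0.3},
    {"triage": 0.4, "release": 0.6},
    {"triage": 0.9, "release": 0.1},
]
winner, avg = soft_vote(dists)
# winner is "triage": average 2.0/3 vs 1.0/3 for "release"
```

Hard voting would instead let each model cast a single vote for its top activity; soft voting retains each model's confidence, so a model that is very sure can outweigh two that are lukewarm.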

Instances of these classes are ready for batch learning. To use them in streaming mode, wrap them with StreamingActionPredictor. Below, we define two models:

  • The first is a 6-gram (look-back window size of 5).
  • The second combines several algorithms using soft voting.

Setting "include_stop": False ignores stop (end-of-case) predictions and renormalizes the remaining probabilities. This is often suitable in streaming settings unless explicit stop activities are present.
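The effect of this option can be sketched as dropping a designated stop symbol from a distribution and renormalizing the rest (illustrative only; the "STOP" symbol name is hypothetical):

```python
def drop_stop(probs, stop_symbol="STOP"):
    """Remove the stop symbol and renormalize the remaining probabilities."""
    rest = {a: p for a, p in probs.items() if a != stop_symbol}
    total = sum(rest.values())
    return {a: p / total for a, p in rest.items()}

probs = {"triage": 0.5, "release": 0.25, "STOP": 0.25}
normalized = drop_stop(probs)
# "triage" becomes 0.5 / 0.75 = 2/3, "release" becomes 1/3
```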

config = {
    "include_stop": False,
}

model1 = StreamingActionPredictor(
    strategy=BasicMiner(algorithm=NGram(window_length=5), config=config),
)

model2 = StreamingActionPredictor(
    strategy=SoftVoting(
        models=[
            BasicMiner(algorithm=Bag()),
            BasicMiner(algorithm=FrequencyPrefixTree(min_total_visits=10)),
            BasicMiner(algorithm=NGram(window_length=2)),
            BasicMiner(algorithm=NGram(window_length=3)),
            BasicMiner(algorithm=NGram(window_length=4)),
        ],
        config=config,
    )
)

Next, we set up the sponge to stream data from a dataset and apply a model. For clarity, a key filter is applied first.

The dataset can be any iterator. For illustration, we use the Sepsis dataset available at 4TU.ResearchData. When you run the Python script, you will be prompted to download it.

streamer = IteratorStreamer(data_iterator=dataset)

sponge = (
    streamer
    * ls.KeyFilter(keys=["case_id", "action"])
    * model2
    * ls.AddIndex(key="index", index=1)
    * ls.Print()
)


sponge.start()

A single prediction might look like this. In addition to the actual case_id and action, it provides:

  • The most likely predicted activity.
  • The top-3 activities.
  • The probability distribution over all possible activities.
{
    'case_id': 'FAA',
    'action': 'Return ER',
    'prediction': {
        'action': 'Return ER',
        'top_k_actions': ['Return ER', 'Leucocytes', 'Release E'],
        'probability': 0.9986388006307096,
        'probs': {
            # [...]
            'Leucocytes': 0.0013611993692904283,
            'Return ER': 0.9986388006307096,
            # [...]
        }
    },
    'latency': 0.06985664367675781,
    'index': 15214
}
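Given such a probs dictionary, the top-k activities and the winning probability can be recovered with plain Python (a generic sketch, not library code):

```python
def top_k(probs, k=3):
    """Return the k activities with highest probability, best first."""
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Abridged distribution from the prediction above
probs = {
    "Return ER": 0.9986388006307096,
    "Leucocytes": 0.0013611993692904283,
}
best = top_k(probs, k=2)
winning_probability = probs[best[0]]
```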

