A real-time data processing pipeline
`logicsponge-processmining` is a library for process-mining tasks, built on logicsponge. Process mining comprises tools for modeling, analyzing, and improving business processes.
## In a nutshell
The current implementation includes the following features:
- Event-log prediction in both batch and streaming modes, using frequency prefix trees, n-grams, LSTMs, and ensemble methods.
- Visualization of event streams based on their prefix trees.
## Getting started
We recommend starting with our logicsponge tutorial to get acquainted with the basics of how logicsponge processes data streams.
Afterwards, to get started with `logicsponge-processmining`, install it using pip:

```shell
pip install logicsponge-processmining
```
## Event-log prediction
Event-log prediction involves anticipating events given historical data about a process. In the streaming case, we receive a sequence of events, where each event is a pair (case_id, activity) consisting of a case ID and an activity. As events arrive, we train a model incrementally, allowing it to predict the next activity for a given case based on the sequence of activities observed so far.
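For intuition, the streaming input can be pictured as a flat sequence of `(case_id, activity)` pairs in which several cases are interleaved, and the prefix observed so far for each case is what a model conditions on. This is an illustrative sketch with made-up case IDs and activities, not the library's internal data format:

```python
# A toy event stream: two cases ("A", "B") interleaved in arrival order.
events = [
    ("A", "register"),
    ("B", "register"),
    ("A", "triage"),
    ("B", "triage"),
    ("A", "release"),
]

# Group activities per case to obtain the prefix seen so far for each case;
# a model would predict the next activity from such a prefix.
prefixes = {}
for case_id, activity in events:
    prefixes.setdefault(case_id, []).append(activity)

print(prefixes["A"])  # ['register', 'triage', 'release']
print(prefixes["B"])  # ['register', 'triage']
```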
logicsponge-processmining offers several predefined models: frequency prefix trees, n-grams, LSTMs, and ensemble methods (soft, hard, and adaptive voting).
Let’s walk through the required imports to understand the structure of the library:
```python
# example.py
import logicsponge.core as ls
from logicsponge.processmining.algorithms_and_structures import Bag, FrequencyPrefixTree, NGram
from logicsponge.processmining.models import BasicMiner, SoftVoting
from logicsponge.processmining.streaming import IteratorStreamer, StreamingActionPredictor
from logicsponge.processmining.test_data import dataset
```
This imports algorithms such as frequency prefix trees and n-grams. These classes can also serve as a starting point for defining your own data structures.
You will then import models:
- `BasicMiner` wraps a single algorithm (e.g., an n-gram) to produce a predictor model.
- `SoftVoting` (and other ensemble methods) takes a list of models and produces a new model that applies soft voting.
Instances of these classes are ready for batch learning. To use them in streaming mode, wrap them with StreamingActionPredictor. Below, we define two models:
- The first is a 6-gram (look-back window size of 5).
- The second combines several algorithms using soft voting.
By setting `"include_stop": False`, stop predictions are ignored and the remaining probabilities are renormalized. This is often suitable in streaming settings unless explicit stop activities are present.
```python
config = {
    "include_stop": False,
}

model1 = StreamingActionPredictor(
    strategy=BasicMiner(algorithm=NGram(window_length=5), config=config),
)

model2 = StreamingActionPredictor(
    strategy=SoftVoting(
        models=[
            BasicMiner(algorithm=Bag()),
            BasicMiner(algorithm=FrequencyPrefixTree(min_total_visits=10)),
            BasicMiner(algorithm=NGram(window_length=2)),
            BasicMiner(algorithm=NGram(window_length=3)),
            BasicMiner(algorithm=NGram(window_length=4)),
        ],
        config=config,
    )
)
```
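To illustrate the idea behind soft voting, independently of the library's internals: each constituent model contributes a probability distribution over the next activity, the ensemble averages these distributions, and the activity with the highest average probability wins. A minimal sketch with hypothetical distributions:

```python
# Hypothetical per-model distributions over the next activity.
model_probs = [
    {"Leucocytes": 0.7, "Release E": 0.3},
    {"Leucocytes": 0.4, "Release E": 0.6},
    {"Leucocytes": 0.9, "Release E": 0.1},
]

# Soft voting: average the distributions, then pick the argmax.
activities = set().union(*model_probs)
avg = {
    a: sum(p.get(a, 0.0) for p in model_probs) / len(model_probs)
    for a in activities
}
prediction = max(avg, key=avg.get)

print(prediction, round(avg[prediction], 3))  # Leucocytes 0.667
```

Hard voting would instead let each model cast a single vote for its top activity; adaptive voting typically weights models by their recent accuracy.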
Next, we set up the sponge to stream data from a dataset and apply a model. For clarity, a key filter is applied first.
The dataset can be any iterator. For illustration, we use the Sepsis dataset available at 4TU.ResearchData. When you run the Python script, you will be prompted to download it.
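If you want to experiment without downloading the Sepsis dataset, any iterator over events should work in its place. The sketch below is a hypothetical stand-in generator; the exact event format expected by `IteratorStreamer` may differ:

```python
def toy_event_stream():
    """Yield events as dicts with a case ID and an activity name."""
    events = [
        ("case-1", "register"),
        ("case-1", "triage"),
        ("case-2", "register"),
        ("case-1", "release"),
    ]
    for case_id, action in events:
        yield {"case_id": case_id, "action": action}

stream = list(toy_event_stream())
print(len(stream), stream[0])
```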
```python
streamer = IteratorStreamer(data_iterator=dataset)

sponge = (
    streamer
    * ls.KeyFilter(keys=["case_id", "action"])
    * model2
    * ls.AddIndex(key="index", index=1)
    * ls.Print()
)
sponge.start()
```
A single prediction record might look like this. In addition to the actual `case_id` and `action`, it provides:
- The most likely predicted activity.
- The top-3 activities.
- The probability distribution over all possible activities.
```python
{
    'case_id': 'FAA',
    'action': 'Return ER',
    'prediction': {
        'action': 'Return ER',
        'top_k_actions': ['Return ER', 'Leucocytes', 'Release E'],
        'probability': 0.9986388006307096,
        'probs': {
            # [...]
            'Leucocytes': 0.0013611993692904283,
            'Return ER': 0.9986388006307096,
            # [...]
        }
    },
    'latency': 0.06985664367675781,
    'index': 15214
}
```
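Given such a record, derived fields like the top-k activities follow directly from the probability distribution: rank activities by probability and take the first k. A small sketch using the two probabilities shown above (the remaining entries are elided in the output, so only two activities appear here):

```python
prediction = {
    "probs": {
        "Leucocytes": 0.0013611993692904283,
        "Return ER": 0.9986388006307096,
    }
}

# Rank activities by probability, highest first, and keep the top 3.
ranked = sorted(prediction["probs"], key=prediction["probs"].get, reverse=True)
top_k = ranked[:3]

print(top_k)  # ['Return ER', 'Leucocytes']
```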