
User behavior prediction from event data.


Neural Lifetimes

#TODO Insert Logo


Introduction

The Neural Lifetimes package is an open-source, lightweight framework built on PyTorch and PyTorch Lightning for modern lifetimes analysis using neural network models. The package offers both flexibility and simplicity:

  • A simple interface lets users load their own data and train good models out of the box with very few lines of code.
  • The modular design of the package lets users selectively pick individual tools.

Possible uses of Neural Lifetimes include:

  • Predicting customer transactions
  • Calculating expected customer lifetime values
  • Obtaining customer embeddings
  • TODO add more

Features

Simple Interface

You can train on your own dataset with a few lines of code:
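
The snippet itself is not included here; as a rough pseudocode sketch of what such a run might look like (all names are hypothetical, not the actual neural-lifetimes API; see the documentation for the real interface):

```
# hypothetical sketch -- not the actual neural-lifetimes interface
data    = load_events("transactions.csv")   # your event log
model   = EventModel(...)                   # GRU-based sequence model
trainer = Trainer(max_epochs=10)            # PyTorch-Lightning-style trainer
trainer.fit(model, data)
```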

Data

We provide a set of tools to:

  • Load data in batches from a database
  • Handle sequential data
  • Load data from interfaces such as Pandas, ClickHouse, Postgres, VAEX, and more

We further provide a simulated dataset based on the BTYD model for exploring this package, along with tutorials explaining the mechanics of this model.
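
To illustrate the underlying idea, here is a toy, stdlib-only sketch of a BTYD-style ("buy till you die") simulator: customers purchase with exponentially distributed gaps and have a fixed chance of churning after each purchase. This is not the package's simulator, just the general mechanism:

```python
import random
from datetime import datetime, timedelta

def simulate_btyd_sequence(start, end, purchase_rate=0.2, churn_prob=0.05, seed=None):
    """Simulate one customer's transaction timestamps under a toy BTYD process:
    exponentially distributed inter-purchase gaps (mean 1/purchase_rate days)
    and a fixed probability of churning after each purchase."""
    rng = random.Random(seed)
    events, t = [], start
    while True:
        t += timedelta(days=rng.expovariate(purchase_rate))
        if t > end:
            break  # observation window ended
        events.append(t)
        if rng.random() < churn_prob:
            break  # the customer "dies" and never purchases again
    return events

# One simulated transaction sequence per customer.
customers = [
    simulate_btyd_sequence(datetime(2021, 1, 1), datetime(2021, 4, 1), seed=i)
    for i in range(100)
]
```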

Models

We provide a simple GRU-based model that embeds any data and predicts sequences of transactions.

Model Inference

The class inference.ModelInference allows you to simulate sequences from scratch or to extend sequences from a model artifact. A sequence is simulated or extended iteratively, adding one event to the end of the sequence at each step. To simulate an event, the current sequence is used as the model input, and the distributions output by the model are used to sample the next event. The sampled event is appended to the sequence, and the resulting sequence is used as the input in the following iteration. The process ends when a sequence reaches the end_date or the customer churns.
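
The iterative loop described above can be sketched in plain Python; sample_next_event below is a toy stand-in for sampling from the model's output distributions, not the package's actual model:

```python
import random
from datetime import datetime, timedelta

def sample_next_event(sequence, rng):
    """Toy stand-in for the model: returns (gap to next event in days, churned?).
    In the package, both would be sampled from the distributions the network
    outputs for the current sequence."""
    return rng.expovariate(0.1), rng.random() < 0.1

def simulate(sequence, end_date, rng=None):
    """Extend `sequence` (a list of datetimes) one event at a time until the
    customer churns or the next event would fall after `end_date`."""
    rng = rng or random.Random(0)
    sequence = list(sequence)
    while True:
        gap_days, churned = sample_next_event(sequence, rng)
        if churned:
            break
        next_event = sequence[-1] + timedelta(days=gap_days)
        if next_event > end_date:
            break
        sequence.append(next_event)
    return sequence
```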

To initialize the ModelInference class, you need to provide the file path of a trained model artifact:

inference = ModelInference(
    model_filename = "/logs/artifacts/version_1/epoch=0-step=1-val_loss_total=1.0.ckpt"
)

ModelInference has two main methods:

  • simulate_sequences: simulates n sequences from scratch. Each sequence starts with an event randomly sampled between start_date and start_date_limit. The sequence is initialized with a starting-token event, and subsequent events are built by sampling from the model's output distributions. A sequence ends when either the user churns or an event falls after the end_date.
simulate_sequences = inference.simulate_sequences(
    n = 10,
    start_date = datetime.datetime(2021, 1, 1, 0, 0, 0),
    start_date_limit = datetime.datetime(2021, 2, 1, 0, 0, 0),
    end_date = datetime.datetime(2021, 4, 1, 0, 0, 0),
    start_token_discr = 'StartToken',
    start_token_cont = 0
)
  • extend_sequence: takes a ml_utils.torch.sequence_loader.SequenceLoader loader and the start and end dates of the simulation. The method processes the loader in batches. The start_date must be after every event in every sequence. Customers might already have churned after their last event, so we first need to infer each customer's churn status. To infer it, we input a sequence into the model and sample from the output distributions. If the churn status after the last event is True, or the next event would have happened before start_date, we infer that the customer has churned. For all customer sequences that haven't churned, we extend the sequences as in simulate_sequences.
raw_data, extended_seq = inference.extend_sequence(
    loader,
    start_date = datetime.datetime(2021, 1, 1, 0, 0, 0),
    end_date = datetime.datetime(2021, 4, 1, 0, 0, 0),
    return_input = True
)
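
The churn-status check described above can be sketched as follows; both sampled quantities are toy stand-ins for the model's output distributions, not the package's actual model:

```python
import random
from datetime import datetime, timedelta

def infer_churn(last_event, start_date, rng):
    """Toy version of the churn check: sample one step from a stand-in model
    and treat the customer as churned if the sampled churn flag is True, or
    if the sampled next event would fall before `start_date`."""
    churn_flag = rng.random() < 0.1                                  # stand-in for the model's churn output
    next_event = last_event + timedelta(days=rng.expovariate(0.1))   # stand-in for the time-to-event output
    return churn_flag or next_event < start_date
```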

The extend_sequence method can also return the original sequences if return_input = True. extended_seq contains a list of dicts, where each dict is a processed batch with two keys: 'extended_sequences' and 'inferred_churn'. 'extended_sequences' contains the extended sequences that were inferred NOT to have churned; 'inferred_churn' contains the sequences that were inferred to have churned.
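
Assuming the batch structure described above, consuming extended_seq might look like this (the data here is mock, for illustration only):

```python
# Mock data with the structure described above: one dict per processed batch.
extended_seq = [
    {"extended_sequences": [["e1", "e2", "e3"]], "inferred_churn": [["e1"]]},
    {"extended_sequences": [], "inferred_churn": [["e1", "e2"]]},
]

active, churned = [], []
for batch in extended_seq:
    active.extend(batch["extended_sequences"])   # customers inferred NOT to have churned
    churned.extend(batch["inferred_churn"])      # customers inferred to have churned
```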

Documentation

The documentation for this repository is available at

TODO Add Link

Install

You may install the package from PyPI:

pip install neural-lifetimes

Alternatively, you may install from git to get access to the latest commits:

pip install git+https://github.com/transferwise/neural-lifetimes

Getting started

The documentation includes a getting-started tutorial.

TODO add link

#TODO add google colab notebook to start

Useful Resources

Contribute

We welcome all contributions to this repository. Please read the Contributing Guide first.

If you have any questions or comments, please raise a GitHub issue.
