Consistently partitions a dataset into a training set and a test set

Project description

Data Partitioner

Simple project that can be used to consistently partition a data set into two parts: a training set and a test set. There are also helper methods for partitioning a data set into more than two groups of elements.

Installation

The easiest way to install this module is to install it via pip:

$ pip install data_partitioner

Usage

Using this module is dead simple. The main class (DatasetSuplier) offers two methods that return the training set (training_set()) and the test set (test_set()). Both methods are consistent: no matter how many times you call them on the same object, they return the same set of elements.
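The consistency guarantee can be illustrated with a standalone sketch (this is not the library's internal implementation, just the idea): seed a pseudorandom shuffle with a fixed value so every call produces the same partition.

```python
import random

def consistent_split(dataset, training_percent=0.8, seed=42):
    """Deterministically split a dataset into (training, test).

    Because the shuffle is seeded, repeated calls on the same
    dataset always return the same two partitions.
    """
    indices = list(range(len(dataset)))
    random.Random(seed).shuffle(indices)  # reproducible shuffle
    cutoff = int(len(dataset) * training_percent)
    training = [dataset[i] for i in indices[:cutoff]]
    test = [dataset[i] for i in indices[cutoff:]]
    return training, test

data = list(range(10))
assert consistent_split(data) == consistent_split(data)  # same sets every call
```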

You have two configuration options you can specify:

  • training_percent - the fraction of the dataset used for the training set. It defaults to 0.8.
  • partitioning_function - the function that's used to partition the dataset.
      • It defaults to data_partitioner.pseudorandom_function, which randomly assigns every element of the dataset to either the test set or the training set.
      • Another useful option is data_partitioner.LinearFakeRandomFunction, which ensures that no element of the training set comes after any element of the test set.
      • You can also supply your own callable; it takes one parameter as input: the index of the element currently under consideration.
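If you write your own callable, check the library's documentation for the exact contract. The sketch below assumes a plausible protocol: the callable maps an element's index to a value in [0, 1), and the element joins the training set when that value is below training_percent. Both every_fifth_to_test and partition are hypothetical helpers written for illustration, not part of data_partitioner.

```python
def every_fifth_to_test(index):
    # Hypothetical callable under the assumed protocol: send every
    # fifth element to the test set, everything else to training.
    return 0.99 if index % 5 == 4 else 0.0

def partition(dataset, fn, training_percent=0.8):
    # Standalone helper applying the assumed protocol.
    training = [x for i, x in enumerate(dataset) if fn(i) < training_percent]
    test = [x for i, x in enumerate(dataset) if fn(i) >= training_percent]
    return training, test

training, test = partition(list(range(10)), every_fifth_to_test)
# training == [0, 1, 2, 3, 5, 6, 7, 8], test == [4, 9]
```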

Example

from data_partitioner import DatasetSuplier

dataset = [
    ('Alice', 10, 23, 401),
    ('Bob', 20, 40, 812),
    ('Christine', 41, 92, 533),
    ('Dave', 843, 12, -5),
    ('Elizabeth', 682, 33, -7),
    ('Fred', 95, 642, 34),
]
suplier = DatasetSuplier(dataset)

# do_train and do_evaluate stand in for your own training and
# evaluation routines; element[1] is the row's second column.
for iteration in range(100):
    for element in suplier.training_set():
        do_train(element[1])

for element in suplier.test_set():
    do_evaluate(element[1])

Project details


Release history

0.1

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

  • data_partitioner-0.1-py3-none-any.whl (6.0 kB): Wheel, py3
  • data_partitioner-0.1.tar.gz (3.5 kB): Source
