Project Description

This repository will contain several variants of decision tree and ensemble classification algorithms, written in an object-oriented style. My immediate goal is to reproduce some of the results from the canonical correlation forests paper, testing against the same datasets used there.

Where possible, external parameter names will match those in scikit-learn's implementations of decision trees and random forests.

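For reference, these are the scikit-learn estimators whose parameter names are being mirrored. The snippet below only shows scikit-learn itself; any overlap with oo_trees constructor arguments should be treated as an assumption rather than a documented API:

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# scikit-learn parameter names that oo_trees aims to reuse where possible,
# e.g. max_depth, min_samples_split, n_estimators, max_features.
sk_tree = DecisionTreeClassifier(max_depth=5, min_samples_split=2)
sk_forest = RandomForestClassifier(n_estimators=100, max_features='sqrt')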

One major difference from scikit-learn is that datasets and their attributes are treated as first-class objects. Additionally, all classifiers must be initialized with their training dataset, rather than trained with a separate fit call:

from oo_trees.dataset import Dataset
from oo_trees.decision_tree import DecisionTree
from oo_trees.random_forest import RandomForest

X = examples  # numpy 2D array of attribute values (one row per example)
y = outcomes  # numpy 1D array of outcomes

dataset = Dataset(X, y)

# Randomly split the dataset into training and test portions.
training_dataset, test_dataset = dataset.random_split(0.75)

d_tree = DecisionTree(training_dataset)
forest = RandomForest(training_dataset)

# Each classifier reports a confusion matrix for the held-out examples.
d_tree_confusion_matrix = d_tree.performance_on(test_dataset)
forest_confusion_matrix = forest.performance_on(test_dataset)

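The confusion matrices give a direct way to compare the two classifiers. As a minimal sketch, assuming each result can be viewed as a square numpy array of counts (the exact oo_trees confusion-matrix interface is not shown here), overall accuracy is the trace divided by the total count:

import numpy as np

def accuracy_from_counts(confusion_counts):
    # Fraction of correctly classified examples: diagonal / total.
    counts = np.asarray(confusion_counts, dtype=float)
    return np.trace(counts) / counts.sum()

# e.g. accuracy_from_counts([[50, 3], [4, 43]]) -> 0.93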

When initializing a dataset, all attributes of the training examples are assumed to be categorical. If that is not the case, you can pass an additional attribute_types argument when initializing the dataset:

from oo_trees.dataset import Dataset
from oo_trees.attribute import NumericAttribute, CategoricalAttribute

X = examples
y = outcomes

attributes = [
  NumericAttribute(index=0, name='age'),
  CategoricalAttribute(index=1, name='sex'),
  NumericAttribute(index=2, name='income'),
]

dataset = Dataset(X, y, attributes)

The logic for finding the best split differs for each attribute type, and in the future there may be additional type-specific parameters (such as importance or number-to-name mappings) useful for classification or display.

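As a generic illustration of that difference (not the oo_trees implementation), a numeric attribute is typically split on a threshold between sorted values, while a categorical attribute simply partitions the examples by value. The sketch below uses Gini impurity as an example criterion:

import numpy as np

def gini(labels):
    # Gini impurity of a 1D array of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def weighted_gini(groups):
    # Impurity of a candidate split: group impurities weighted by group size.
    n = sum(len(g) for g in groups)
    return sum(len(g) / n * gini(g) for g in groups if len(g) > 0)

def best_numeric_split(values, labels):
    # Numeric attribute: try thresholds midway between consecutive distinct values.
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    best_score, best_threshold = np.inf, None
    for i in range(1, len(values)):
        if values[i] == values[i - 1]:
            continue
        threshold = (values[i] + values[i - 1]) / 2.0
        score = weighted_gini([labels[:i], labels[i:]])
        if score < best_score:
            best_score, best_threshold = score, threshold
    return best_threshold, best_score

def categorical_split_score(values, labels):
    # Categorical attribute: one branch per distinct value, no threshold search.
    groups = [labels[values == v] for v in np.unique(values)]
    return weighted_gini(groups)

ages = np.array([22, 35, 35, 58, 64])
labels = np.array([0, 0, 1, 1, 1])
print(best_numeric_split(ages, labels))  # best threshold is 46.5 for this toy data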