A Python module for constructing a decision tree from multidimensional training data and for using the decision tree to classify new data
Version 2.1 is a cleaned-up version of Version 2.0. This new version should run faster on large training data files.
Version 2.0 is a major rewrite of the DecisionTree module. This revision was prompted by requests from a number of users who wanted to see numeric features incorporated in the construction of decision trees. So here it is! This version allows you to use purely symbolic features, purely numeric features, or a mixture of the two. (A feature is numeric if it can take any floating-point value over an interval.)
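To make the symbolic/numeric distinction concrete, here is a minimal sketch of how a cardinality threshold, in the spirit of the symbolic_to_numeric_cardinality_threshold constructor parameter shown in the usage example below, might separate the two kinds of features. This is an illustration only, not the module's internal logic; the helper name and the threshold value are made up for this sketch.

    def looks_numeric(values, cardinality_threshold=10):
        """Hypothetical helper: treat a feature as numeric if every value
        parses as a float and the feature takes more distinct values than
        the threshold; otherwise treat it as symbolic."""
        try:
            distinct = {float(v) for v in values}
        except ValueError:
            return False                    # a non-numeric string was found: symbolic
        return len(distinct) > cardinality_threshold

    print(looks_numeric(['2.3', '4.1', '1.7'] * 5))           # False: only 3 distinct values
    print(looks_numeric([str(x / 10.0) for x in range(40)]))  # True: 40 distinct float values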
As for the purpose of the module: assuming you have arranged your training data as a table in a text file, all you have to do is supply the name of the training data file to this module and it does the rest for you without much effort on your part.

A decision tree classifier consists of feature tests arranged in the form of a tree. The feature test associated with the root node is the one that can be expected to maximally disambiguate the possible class labels for an unlabeled data record. From the root node hangs a set of child nodes, one for each value of the feature tested at the root. At each such child node, the feature test selected is the one that is most class-discriminative given that the feature test at the root has already been applied and its value observed. This process continues until you reach the leaf nodes of the tree. The leaf nodes correspond either to the maximum depth desired for the decision tree or to the point at which you run out of features to test.
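The greedy, entropy-driven feature selection described above can be pictured with a short standalone sketch. The code below is only an illustration of the idea, not the module's own construction logic, and the toy records, labels, and helper functions are invented for the example.

    from collections import Counter
    from math import log2

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def best_feature(records, labels, features):
        """Return the feature whose test most reduces class-label entropy,
        i.e., the most class-discriminative feature for these records."""
        base = entropy(labels)
        def gain(f):
            remainder = 0.0
            for value in {r[f] for r in records}:
                subset = [lab for r, lab in zip(records, labels) if r[f] == value]
                remainder += (len(subset) / len(labels)) * entropy(subset)
            return base - remainder
        return max(features, key=gain)

    # Toy data: 'grade' perfectly separates the two classes, 'ploidy' does not.
    records = [{'grade': '2', 'ploidy': 'diploid'},
               {'grade': '3', 'ploidy': 'diploid'},
               {'grade': '2', 'ploidy': 'aneuploid'},
               {'grade': '3', 'ploidy': 'aneuploid'}]
    labels  = ['benign', 'malignant', 'benign', 'malignant']
    print(best_feature(records, labels, ['grade', 'ploidy']))   # prints: grade

The same greedy choice is then applied recursively at each child node, restricted to the training records that reached that node.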
Typical usage syntax:
    import DecisionTree

    training_datafile = "stage3cancer.csv"

    dt = DecisionTree.DecisionTree(
            training_datafile = training_datafile,
            csv_class_column_index = 2,
            csv_columns_for_features = [3,4,5,6,7,8],
            entropy_threshold = 0.01,
            max_depth_desired = 8,
            symbolic_to_numeric_cardinality_threshold = 10,
         )

    dt.get_training_data()
    dt.calculate_first_order_probabilities()
    dt.calculate_class_priors()
    dt.show_training_data()

    root_node = dt.construct_decision_tree_classifier()
    root_node.display_decision_tree(" ")

    test_sample = ['g2 = 4.2',
                   'grade = 2.3',
                   'gleason = 4',
                   'eet = 1.7',
                   'age = 55.0',
                   'ploidy = diploid']

    classification = dt.classify(root_node, test_sample)
    print("Classification:", classification)
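What you do with the returned classification is up to you. As a hedged follow-up sketch only: the exact return format is not spelled out here, but if the result were a plain dictionary mapping class labels to probabilities, it could be ranked like this (the dictionary below is a made-up placeholder, not actual output from the module):

    # Hypothetical classification result used only to illustrate post-processing:
    classification = {'class=1': 0.72, 'class=2': 0.21, 'class=3': 0.07}
    for label, prob in sorted(classification.items(), key=lambda kv: kv[1], reverse=True):
        print("%-12s %.4f" % (label, prob))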