
FeatureSelectionGA

Feature Selection using Genetic Algorithm (DEAP Framework)

Choosing the right features to maximize accuracy is hard, especially when a dataset has many of them. There are currently plenty of feature selection methods, but most struggle once the feature space gets large. A genetic algorithm is one solution: it searches the space of feature subsets for one that attains high accuracy.

Requirements:

pip install deap
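
The package itself is published on PyPI (its distribution files are listed below), so it should be installable the same way:

pip install feature-selection-ga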

Usage:

from sklearn.datasets import make_classification
from sklearn import linear_model
from feature_selection_ga import FeatureSelectionGA
import fitness_function as ff

# Toy multiclass dataset with 15 features
X, y = make_classification(n_samples=100, n_features=15, n_classes=3,
                           n_informative=4, n_redundant=1, n_repeated=2,
                           random_state=1)

model = linear_model.LogisticRegression(solver='lbfgs', multi_class='auto')
fsga = FeatureSelectionGA(model, X, y, ff_obj=ff.FitnessFunction())
pop = fsga.generate(100)  # evolve a population of 100 individuals

#print(pop)
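
Each individual in the returned population encodes a candidate feature subset. Below is a minimal sketch of using one as a column mask, assuming each individual is a 0/1 list with one gene per feature (this layout is an assumption about the library's representation, not documented API):

import numpy as np

# Assumption: pop[i] is a 0/1 list over the 15 features
mask = np.asarray(pop[0], dtype=bool)
X_selected = X[:, mask]  # keep only the columns this individual selected
print(X_selected.shape)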

Usage (Advanced):

By default, FeatureSelectionGA uses its own fitness function class. We can also define a custom FitnessFunction class; it only needs to follow this interface:

class FitnessFunction:
    def __init__(self, n_splits=5, *args, **kwargs):
        """
        Parameters
        ----------
        n_splits : int
            Number of splits for cross-validation.
        """
        self.n_splits = n_splits

    def calculate_fitness(self, model, x, y):
        pass
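
For instance, a custom fitness could simply be the mean cross-validated accuracy. A minimal sketch using standard scikit-learn utilities (the class name AccuracyFitness is ours, not part of the library):

from sklearn.model_selection import cross_val_score

class AccuracyFitness:
    def __init__(self, n_splits=5, *args, **kwargs):
        self.n_splits = n_splits

    def calculate_fitness(self, model, x, y):
        # Higher mean CV accuracy -> fitter feature subset
        return cross_val_score(model, x, y, cv=self.n_splits).mean()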

With this, we can design our own fitness function by implementing calculate_fitness. Consider the following example from Vieira, Mendonça, Sousa, et al. (2013): $f(X) = \alpha(1-P) + (1-\alpha)\left(1 - \dfrac{N_f}{N_t}\right)$, where $P$ is the classifier performance, $N_f$ the number of selected features, and $N_t$ the total number of features.
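
To make the tradeoff concrete: with $\alpha = 0.05$, $P = 0.8$, $N_f = 5$, and $N_t = 15$, we get $f(X) = 0.05(0.2) + 0.95\left(1 - \dfrac{5}{15}\right) = 0.01 + 0.6\overline{3} \approx 0.643$; since $\alpha$ is small, the subset-size term dominates.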

Define the constructor __init__ with the needed parameters: alpha and n_total_features (N_t).

class FitnessFunction:
    def __init__(self, n_total_features, n_splits=5, alpha=0.01, *args, **kwargs):
        """
        Parameters
        ----------
        n_total_features : int
            Total number of features N_t.
        n_splits : int, default = 5
            Number of splits for cross-validation.
        alpha : float, default = 0.01
            Tradeoff between the classifier performance P and the size of
            the feature subset N_f with respect to the total number of
            features N_t.
        """
        self.n_splits = n_splits
        self.alpha = alpha
        self.n_total_features = n_total_features

Next, we define the fitness function itself; the method must be named calculate_fitness.
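The method body below uses NumPy and two scikit-learn helpers; these imports are assumed at module level:

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

With those in scope, the method reads: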

    def calculate_fitness(self, model, x, y):
        alpha = self.alpha
        total_features = self.n_total_features

        # Collect out-of-fold predictions across all CV splits
        cv_set = np.repeat(-1.0, x.shape[0])
        skf = StratifiedKFold(n_splits=self.n_splits)
        for train_index, test_index in skf.split(x, y):
            x_train, x_test = x[train_index], x[test_index]
            y_train, y_test = y[train_index], y[test_index]
            if x_train.shape[0] != y_train.shape[0]:
                raise ValueError("x_train and y_train differ in length")
            model.fit(x_train, y_train)
            cv_set[test_index] = model.predict(x_test)

        # P is the cross-validated accuracy; x.shape[1] plays the role of N_f
        P = accuracy_score(y, cv_set)
        fitness = alpha * (1.0 - P) + (1.0 - alpha) * (1.0 - x.shape[1] / total_features)
        return fitness

Example (see also example2.py):

X, y = make_classification(n_samples=100, n_features=15, n_classes=3,
                           n_informative=4, n_redundant=1, n_repeated=2,
                           random_state=1)
# Define the model
model = linear_model.LogisticRegression(solver='lbfgs', multi_class='auto')
# Define the fitness function object
ff = FitnessFunction(n_total_features=X.shape[1], n_splits=3, alpha=0.05)
fsga = FeatureSelectionGA(model, X, y, ff_obj=ff)
pop = fsga.generate(100)
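
Before running the GA, you can sanity-check the fitness function on the full feature set, using only the objects defined above:

# With all 15 features selected, N_f == N_t, so the size term is zero
# and the fitness reduces to alpha * (1 - P).
print(ff.calculate_fitness(model, X, y))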

Example adapted from pyswarms.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

feature-selection-ga-0.1.0.tar.gz (7.0 kB)

Built Distribution

feature_selection_ga-0.1.0-py2.py3-none-any.whl (7.2 kB)

File details

Details for the file feature-selection-ga-0.1.0.tar.gz.

File metadata

  • Download URL: feature-selection-ga-0.1.0.tar.gz
  • Upload date:
  • Size: 7.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.14.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.1.0 requests-toolbelt/0.9.1 tqdm/4.48.2 CPython/3.8.5

File hashes

Hashes for feature-selection-ga-0.1.0.tar.gz

  • SHA256: 6c87f03236ee5d7da50bec4b8572d58040817db48d936fafdab9bee06d2a371a
  • MD5: 35aea4e3a05f36054dc31654b529e9e4
  • BLAKE2b-256: 434aa72b1431d02d157214aaec7adecf90b7b3eec5413d3b59094ad4e3a84f1d


File details

Details for the file feature_selection_ga-0.1.0-py2.py3-none-any.whl.

File metadata

  • Download URL: feature_selection_ga-0.1.0-py2.py3-none-any.whl
  • Upload date:
  • Size: 7.2 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.14.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.1.0 requests-toolbelt/0.9.1 tqdm/4.48.2 CPython/3.8.5

File hashes

Hashes for feature_selection_ga-0.1.0-py2.py3-none-any.whl

  • SHA256: 0be258ddfcc6d2d2f6276c3001f43fd07e9d6de9a89cf0cc55d206a94f86a6c2
  • MD5: 2fd40c3a15333b02da5bebf2352c6969
  • BLAKE2b-256: 18cbdb6adcc4a9f2cf238b7b81f3ba8bc5a60f449306c64be170a6c187102da9

