
MLgebra

A machine learning tool for Python, in Python.

Project details

https://pypi.org/project/mlgebra/

GitHub

License: MIT

pip install mlgebra

This project is a continuation of this repository, which is itself a continuation of the vectorgebra library. MLgebra therefore requires vectorgebra to be installed. Development is ongoing and the code is probably unstable.

MLgebra is a basic machine learning tool built on vectorgebra. All internal operations are imported from vectorgebra, and the machine learning algorithms are built on top of them.

There are two main classes: Node and Model.

Node represents a neuron in the neural network. Each node holds, as a vector, the weights connecting it to the previous layer of the model.

Model

The Model class has all the functionality needed to build and train a model. Initialization takes a name, a choice of whether to use the decimal library, and a choice of whether to use multiprocessing. Multiprocessing should only be used for large models and large data batches; otherwise, linear program flow is faster.

The model name must be unique. The model's weights and biases are saved to a file named after it, and initialization raises an error if a weight file with the same name already exists in the same directory.

Printing the model object prints details about the model.

Model.addDense(amount)

The only layer type currently available is "dense". This method appends a layer of "amount" neurons to the model.

Model.saveModel()

Saves the weight file of the model.

Model.readWeightFile(path)

Loads the weight file at "path" into the model. This must be done without any bias or weight finalization; the layers, however, must already be configured.

Model.readMNIST(path1, path2, path3, path4)

Reads the training/test data and labels of the MNIST database and returns them in order. The paths are, in order: training data, training labels, test data, test labels. The returned objects are lists of vectors. They can be passed directly to .singleTrain(), or you can group them into batches and use them in parallel.
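MNIST files are distributed in the IDX binary format. As background for what a reader like this has to do, here is a minimal pure-Python sketch of parsing an IDX3 image file; it is independent of mlgebra, and the function name is hypothetical.

```python
import struct

def read_idx_images(raw: bytes):
    """Parse an IDX3 image file (the MNIST image format) from raw bytes.

    Returns a list of images, each a flat list of pixel values in [0, 255].
    """
    # Header: magic number, image count, row count, column count (big-endian).
    magic, count, rows, cols = struct.unpack(">IIII", raw[:16])
    assert magic == 0x00000803, "not an IDX3 image file"
    size = rows * cols
    pixels = raw[16:]
    # Slice the flat pixel stream into one list per image.
    return [list(pixels[i * size:(i + 1) * size]) for i in range(count)]

# Tiny synthetic example: one 2x2 "image".
raw = struct.pack(">IIII", 0x00000803, 1, 2, 2) + bytes([0, 128, 255, 64])
images = read_idx_images(raw)
```

mlgebra's readMNIST additionally wraps each image and label into a Vectorgebra.Vector.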

Model.finalize(generation="flat", a=-2, b=2)

This method "finalizes" the model, so it should be called after everything else is configured. It initializes all the layer weights according to the given "generation" method, which can be uxavier, nxavier, he, naive, or flat. uxavier is uniform Xavier initialization; nxavier is normal Xavier initialization; he is He initialization; naive draws from a normal distribution, with "a" as the standard deviation; flat draws random floats from the range [a, b).
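The formulas below are the standard definitions of these initialization schemes; mlgebra's exact implementation may differ, and the helper function is hypothetical.

```python
import math
import random

def init_weight(method: str, n_in: int, n_out: int,
                a: float = -2, b: float = 2) -> float:
    """Draw one weight using the named generation method."""
    if method == "uxavier":   # uniform Xavier: U(-r, r), r = sqrt(6 / (n_in + n_out))
        r = math.sqrt(6 / (n_in + n_out))
        return random.uniform(-r, r)
    if method == "nxavier":   # normal Xavier: N(0, sqrt(2 / (n_in + n_out)))
        return random.gauss(0, math.sqrt(2 / (n_in + n_out)))
    if method == "he":        # He: N(0, sqrt(2 / n_in))
        return random.gauss(0, math.sqrt(2 / n_in))
    if method == "naive":     # normal distribution with "a" as the standard deviation
        return random.gauss(0, a)
    if method == "flat":      # uniform float in [a, b)
        return random.uniform(a, b)
    raise ValueError(f"unknown generation method: {method}")
```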

Model.includeBias(generation="flat", a=-2, b=2)

This method adds bias vectors to the model. It can be omitted if you do not want to use biases. That is usually not a good idea, but being able to omit them makes the library easier to experiment with.

flat, zero and constant are the possible generation methods. flat draws random floats from the range [a, b); zero initializes all biases to 0; constant initializes them all to the constant value given by "a".
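A minimal sketch of these three bias generation methods, assuming a plain list stands in for a Vectorgebra.Vector (the function name is hypothetical):

```python
import random

def init_bias(method: str, n: int, a: float = -2, b: float = 2) -> list:
    """Generate a bias vector of length n using the named method."""
    if method == "flat":      # random floats in [a, b)
        return [random.uniform(a, b) for _ in range(n)]
    if method == "zero":      # all biases 0
        return [0.0] * n
    if method == "constant":  # all biases equal to the constant "a"
        return [float(a)] * n
    raise ValueError(f"unknown generation method: {method}")
```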

Model.configureMethods(**kwargs)

This method configures all remaining parameters of the model: the activation function, input normalization, operations applied to the output logits, and the parameters of all related functions.

All keys: input, output, activator, cutoff, leak

All possible choices: minmax, softmax, sigmoid and relu; cutoff and leak accept numerical values.

These options will become more diverse in upcoming versions; currently, these are all of them.
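For reference, these are the standard definitions of the listed operations, sketched in pure Python. How mlgebra applies "cutoff" and "leak" to relu is an assumption here: this sketch treats cutoff as the threshold and leak as the slope below it.

```python
import math

def minmax(v):
    """Input normalization: scale values linearly into [0, 1]."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def softmax(v):
    """Output operation: exponentiate and normalize to a probability distribution."""
    m = max(v)  # subtract the max for numerical stability
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def sigmoid(x):
    """Activation: squash a logit into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def relu(x, cutoff=0.0, leak=0.0):
    """Activation: pass x above the cutoff, scale it by "leak" below (assumed semantics)."""
    return x if x > cutoff else leak * x
```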

Model.updateMatrices()

This is a helper function called by the training methods; self.last_output must be non-empty when it is called. You can use this method if you want to implement different training approaches with your own function definitions. Currently, the only optimizer used by the training functions is stochastic gradient descent.
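The stochastic gradient descent update used by such training functions is the standard rule w ← w − η · ∂L/∂w. A minimal sketch over plain lists (the function name is hypothetical, not part of mlgebra):

```python
def sgd_step(weights, gradients, learning_rate):
    """One stochastic-gradient-descent update: w <- w - lr * dL/dw."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

updated = sgd_step([1.0, 2.0], [0.5, -0.5], 0.1)
```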

Model.singleTrain(data, label, learning_rate)

This function trains the model on a single piece of data. "data" must be either a Vectorgebra.Vector or a Vectorgebra.Matrix, and "label" must be a Vectorgebra.Vector. You can choose "learning_rate" as you wish.

Model.train(dataset, labelset, learning_rate)

This method trains the model on data batches. "dataset" must be a list of Vectorgebra.Vector or Vectorgebra.Matrix objects, and "labelset" must be a list of Vectorgebra.Vector objects; the two lists must have equal length. If multiprocessing was set to True at initialization, this method trains in parallel.

Model.produce(data)

This method produces a result from the model. "data" must be either a Vectorgebra.Vector or a Vectorgebra.Matrix. The result is a Vectorgebra.Vector holding the values of the output layer logits.
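To illustrate what a call like this computes, here is a pure-Python sketch of a dense forward pass, with each layer stored as one weight vector per neuron (matching the Node description above); lists stand in for Vectorgebra vectors and the function name is hypothetical.

```python
def forward(layers, biases, data, activation=lambda x: max(0.0, x)):
    """Propagate "data" through dense layers and return the output logits.

    layers: list of layers; each layer is a list of per-neuron weight vectors.
    biases: one bias list per layer.
    activation: applied to each neuron's weighted sum (ReLU by default here).
    """
    out = data
    for weights, bias in zip(layers, biases):
        # Each neuron: dot product of its weight vector with the previous
        # layer's output, plus its bias, passed through the activation.
        out = [activation(sum(w * x for w, x in zip(row, out)) + b)
               for row, b in zip(weights, bias)]
    return out
```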


Exceptions

FileStructureError

This exception is raised when a file's extension is wrong.

ConfigError

This exception is raised when a problem with the model configuration occurs at any point of the structuring.
