Library to help implement a complex-valued neural network (CVNN) using TensorFlow as back-end.
Complex-Valued Neural Networks (CVNN)
Done by @NEGU93 - J. Agustin Barrachina
Using this library, the only difference from standard TensorFlow code is that you should use the cvnn.layers module instead of tf.keras.layers.
This library uses TensorFlow as a back-end to build complex-valued neural networks, as CVNNs are barely supported by TensorFlow and were not yet supported by PyTorch (which is why I decided to use TensorFlow for this library). To the author's knowledge, this is the first library that actually works with complex data types, instead of real-valued vectors that are interpreted as real and imaginary parts.
Update:
- Since v1.12 (28 June 2022), PyTorch supports complex32 and complex convolutions.
- Since v0.2 (25 Jan 2021), complexPyTorch uses the complex64 dtype.
- Since v1.6 (28 July 2020), PyTorch supports complex vectors and complex gradients as BETA, but it still has the same issues that TensorFlow has, so there is no reason to migrate yet.
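As a quick illustration of the complex-dtype support mentioned above, here is a minimal sketch (random synthetic data; layer names as used in the examples below) showing that the layers consume and return complex tensors directly:

import numpy as np
import cvnn.layers as complex_layers

# Random complex-valued input: 4 samples with 3 complex features each
x = (np.random.rand(4, 3) + 1j * np.random.rand(4, 3)).astype(np.complex64)

# The layer works on the complex tensor itself; no real/imaginary splitting needed
layer = complex_layers.ComplexDense(8, activation='cart_relu')
y = layer(x)
print(y.dtype)  # complex64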
Documentation
Please Read the Docs
Installation Guide:
Using Anaconda
conda install -c negu93 cvnn
Using PIP
The vanilla version installs only the minimum dependencies.
pip install cvnn
The plotter version adds the ability to plot the training results with several plotting libraries.
pip install cvnn[plotter]
The full version installs all features.
pip install cvnn[full]
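To check that the installation worked, a quick sanity check (a sketch; it only assumes the package is importable, and uses the standard importlib.metadata for the version lookup):

import importlib.metadata

import cvnn.layers  # should import without errors

print(importlib.metadata.version("cvnn"))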
Short example
From "outside" everything is the same as when using Tensorflow.
import numpy as np
import tensorflow as tf

# Assume you already have complex data, e.g. numpy arrays of dtype np.complex64
(train_images, train_labels), (test_images, test_labels) = get_dataset()  # to be done by each user
model = get_model()  # Get your model

# Compile as any TensorFlow model
model.compile(optimizer='adam', metrics=['accuracy'],
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()

# Train and evaluate
epochs = 10  # choose your own number of epochs
history = model.fit(train_images, train_labels, epochs=epochs, validation_data=(test_images, test_labels))
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
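If you do not have complex data at hand yet, a purely hypothetical get_dataset such as the following sketch (synthetic random data, with shapes matching the Sequential example below) is enough to run the snippet end to end:

import numpy as np

def get_dataset(n_train=1000, n_test=200, n_classes=10):
    # Synthetic complex-valued "images", purely for illustration
    def make_split(n):
        images = (np.random.rand(n, 32, 32, 3) + 1j * np.random.rand(n, 32, 32, 3)).astype(np.complex64)
        labels = np.random.randint(0, n_classes, size=n)
        return images, labels
    return make_split(n_train), make_split(n_test)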
The main difference is that you will be using cvnn layers instead of TensorFlow layers.
There are several options for how to do this, as shown here:
Sequential API
import tensorflow as tf
import cvnn.layers as complex_layers

def get_model():
    model = tf.keras.models.Sequential()
    model.add(complex_layers.ComplexInput(input_shape=(32, 32, 3)))  # Always use ComplexInput at the start
    model.add(complex_layers.ComplexConv2D(32, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexAvgPooling2D((2, 2)))
    model.add(complex_layers.ComplexConv2D(64, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexMaxPooling2D((2, 2)))
    model.add(complex_layers.ComplexConv2D(64, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexFlatten())
    model.add(complex_layers.ComplexDense(64, activation='cart_relu'))
    # An activation that casts to real must be used at the last layer,
    # because the loss function cannot minimize a complex number.
    model.add(complex_layers.ComplexDense(10, activation='convert_to_real_with_abs'))
    return model
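As a quick check (a sketch with random data, not part of the library): build the model above and pass a batch of complex images through it; the convert_to_real_with_abs activation at the end should yield a real-valued output.

import numpy as np

model = get_model()
model.summary()

# Random complex batch matching the (32, 32, 3) ComplexInput above
x = (np.random.rand(4, 32, 32, 3) + 1j * np.random.rand(4, 32, 32, 3)).astype(np.complex64)
y = model(x)
print(y.shape, y.dtype)  # expected: (4, 10) and a real (float) dtype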
Functional API
import tensorflow as tf
import cvnn.layers as complex_layers

def get_model():
    inputs = complex_layers.complex_input(shape=(128, 128, 3))
    c0 = complex_layers.ComplexConv2D(32, activation='cart_relu', kernel_size=3)(inputs)
    c1 = complex_layers.ComplexConv2D(32, activation='cart_relu', kernel_size=3)(c0)
    c2 = complex_layers.ComplexMaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid')(c1)
    t01 = complex_layers.ComplexConv2DTranspose(5, kernel_size=2, strides=(2, 2), activation='cart_relu')(c2)
    concat01 = tf.keras.layers.concatenate([t01, c1], axis=-1)
    c3 = complex_layers.ComplexConv2D(4, activation='cart_relu', kernel_size=3)(concat01)
    out = complex_layers.ComplexConv2D(4, activation='cart_relu', kernel_size=3)(c3)
    return tf.keras.Model(inputs, out)
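Note that this functional model ends with a complex-valued output (no cast to real), so either your loss must handle complex outputs or you append a real-casting activation as in the Sequential example. A quick shape check, as a sketch assuming Keras-like 'valid' default padding (so the spatial size shrinks at each convolution):

import numpy as np

model = get_model()

x = (np.random.rand(2, 128, 128, 3) + 1j * np.random.rand(2, 128, 128, 3)).astype(np.complex64)
y = model(x)
print(y.shape, y.dtype)  # expected: (2, 120, 120, 4), complex64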
About me & Motivation
I am a PhD student at Ecole CentraleSupelec, with a scholarship from ONERA and the DGA.
My PhD topic is Complex-Valued Neural Networks. Needing to make my code more flexible, I built this library so I would not have to repeat the same code over and over for small changes, thereby speeding up my work.
Cite Me
Always prefer the Zenodo citation.
Below is a template, but be sure to change the version and date accordingly.
@software{j_agustin_barrachina_2021_4452131,
  author    = {J Agustin Barrachina},
  title     = {Complex-Valued Neural Networks (CVNN)},
  month     = jan,
  year      = 2021,
  publisher = {Zenodo},
  version   = {v1.0.3},
  doi       = {10.5281/zenodo.4452131},
  url       = {https://doi.org/10.5281/zenodo.4452131}
}
Issues
For any issues, please report them here.
This library is tested using pytest.