NextGenJAX
Overview
NextGenJAX is an advanced neural network library built on top of JAX, designed to surpass the capabilities of existing libraries such as Google DeepMind's Haiku and Optax. It leverages the flexibility and performance of JAX and Flax to provide a modular, high-performance, and easy-to-use framework for building and training neural networks.
Framework Compatibility
NextGenJAX now supports both TensorFlow and PyTorch, allowing users to choose their preferred deep learning framework. This compatibility enables seamless integration with existing TensorFlow or PyTorch workflows while leveraging the advanced features of NextGenJAX.
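The library's internals are not shown in this README, so as a rough sketch only, the `framework` argument used under Usage suggests a dispatch pattern like the following (the builder functions and registry here are hypothetical stand-ins, not NextGenJAX's actual implementation):

```python
# Hypothetical sketch of backend selection via a `framework` argument.
# The stand-in builders below only illustrate the dispatch pattern; the
# real NextGenModel would construct a tf.keras model or torch.nn.Module.

def _build_tensorflow(**kwargs):
    return ("tensorflow", kwargs)  # stand-in for a TensorFlow model

def _build_pytorch(**kwargs):
    return ("pytorch", kwargs)     # stand-in for a PyTorch model

_BACKENDS = {"tensorflow": _build_tensorflow, "pytorch": _build_pytorch}

def build_model(framework, **kwargs):
    # Look up the requested backend and fail loudly on an unknown name.
    try:
        return _BACKENDS[framework](**kwargs)
    except KeyError:
        raise ValueError(f"unsupported framework: {framework!r}")
```

A registry like this keeps backend-specific code behind a single entry point, which is one common way to offer a framework-agnostic API.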
Features
- Modular design with customizable layers and activation functions
- Support for various optimizers, including custom optimizers
- Flexible training loop with support for custom loss functions
- Integration with JAX and Flax for high performance and scalability
- Comprehensive test suite to ensure model correctness and performance
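To make the "custom optimizers" point concrete: in its simplest form, a custom optimizer is just an update rule mapping parameters and gradients to new parameters. The sketch below uses plain NumPy and is not NextGenJAX's optimizer interface, which this README does not document:

```python
import numpy as np

def make_sgd(learning_rate):
    # A minimal custom optimizer: a closure over the learning rate that
    # maps (params, grads) -> updated params.
    def update(params, grads):
        return params - learning_rate * grads
    return update

sgd = make_sgd(0.1)
params = np.array([1.0, -2.0])
grads = np.array([0.5, 0.5])
params = sgd(params, grads)  # -> array([ 0.95, -2.05])
```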
Installation
To install NextGenJAX, you can use pip:

```bash
pip install nextgenjax
```
For development, clone the repository and install the required dependencies:

```bash
git clone https://github.com/VishwamAI/NextGenJAX.git
cd NextGenJAX
pip install -r requirements.txt
```
TensorFlow and PyTorch are not installed automatically; install whichever backend you intend to use:

```bash
# For TensorFlow
pip install tensorflow

# For PyTorch
pip install torch
```
Usage
Users choose their preferred framework (TensorFlow or PyTorch) when initializing the model.
Creating a Model
To create a model using NextGenJAX, choose your framework and initialize the model:

```python
from src.model import NextGenModel

# Initialize the model with TensorFlow
tf_model = NextGenModel(framework='tensorflow', num_layers=6,
                        hidden_size=512, num_heads=8, dropout_rate=0.1)

# Initialize the model with PyTorch
pytorch_model = NextGenModel(framework='pytorch', num_layers=6,
                             hidden_size=512, num_heads=8, dropout_rate=0.1)
```
Training the Model
The training process is similar for both frameworks. Here's an example using TensorFlow:

```python
import tensorflow as tf

from src.train import create_train_state, train_model

# Define the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

# Create the training state
train_state = create_train_state(tf_model, optimizer)

# Define the training data and loss function
train_data = ...  # Your training data here
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

# Train the model
train_model(train_state, train_data, loss_fn, num_epochs=10)
```
For PyTorch, the process is similar, but you'll use PyTorch-specific optimizers and loss functions.
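The README does not show the bodies of `create_train_state` and `train_model`, so the following is only a framework-agnostic sketch of the shape such a training loop typically takes; the stand-in implementations and the toy gradient function are assumptions, illustrated with plain NumPy gradient descent:

```python
import numpy as np

def create_train_state(params, learning_rate):
    # Hypothetical stand-in for src.train.create_train_state (the real
    # function takes a model and an optimizer); here it just bundles
    # parameters with a learning rate.
    return {"params": np.asarray(params, dtype=float), "lr": learning_rate}

def train_model(state, train_data, loss_grad_fn, num_epochs):
    # Hypothetical stand-in for src.train.train_model: one plain
    # gradient-descent step per example per epoch.
    for _ in range(num_epochs):
        for x, y in train_data:
            grad = loss_grad_fn(state["params"], x, y)
            state["params"] = state["params"] - state["lr"] * grad
    return state

# Toy problem: fit y = w * x with squared loss; d/dw (w*x - y)^2 = 2*(w*x - y)*x.
def squared_loss_grad(w, x, y):
    return 2.0 * (w * x - y) * x

state = create_train_state([0.0], learning_rate=0.1)
state = train_model(state, [(1.0, 2.0)], squared_loss_grad, num_epochs=50)
# state["params"] converges toward [2.0]
```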
Note: The core functionality remains the same for both frameworks, allowing users to leverage either TensorFlow or PyTorch based on their preference or specific use case.
Development Setup
To set up a development environment:
- Clone the repository
- Install development dependencies:

```bash
pip install -r requirements-dev.txt
```

- Run tests using pytest:

```bash
pytest tests/
```
We use GitHub Actions for continuous integration and deployment. Our CI/CD workflow runs tests on Python 3.9 to ensure compatibility and code quality.
Community and Support
We welcome community engagement and support for the NextGenJAX project:
- Discussions: Join our community discussions at NextGenJAX Discussions
- Contact: For additional support or inquiries, you can reach us at aivishwam@gmail.com
Contributing
We welcome contributions to NextGenJAX! Please follow these steps:
- Fork the repository
- Create a new branch (`git checkout -b feature/your-feature`)
- Make your changes and commit them (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature/your-feature`)
- Create a new pull request using the Pull Request Template
Please adhere to our coding standards:
- Follow PEP 8 guidelines
- Write unit tests for new features
- Update documentation as necessary
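As an illustration of the unit-test expectation above, a contribution might include a small pytest-style test like this (the `clip_rate` helper is a hypothetical example, not part of NextGenJAX):

```python
# tests/test_example.py -- minimal pytest-style unit test for a
# hypothetical helper; pytest collects any function named test_*.

def clip_rate(rate, low=0.0, high=1.0):
    """Clamp a dropout rate into the [low, high] interval."""
    return max(low, min(high, rate))

def test_clip_rate():
    assert clip_rate(0.5) == 0.5   # in-range values pass through
    assert clip_rate(-0.1) == 0.0  # below range clamps to low
    assert clip_rate(1.7) == 1.0   # above range clamps to high
```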
For more detailed guidelines, please refer to the CONTRIBUTING.md file.
Reporting Issues
If you encounter any issues or have suggestions for improvements, please open an issue in the repository using the appropriate issue template. Provide as much detail as possible to help us understand and address the problem.
License
NextGenJAX is licensed under the MIT License. See the LICENSE file for more information.
Acknowledgements
NextGenJAX is inspired by the work of Google DeepMind and the JAX and Flax communities. We thank them for their contributions to the field of machine learning.
Contact Information
For support or questions about NextGenJAX, please reach out to:
- Email: aivishwam@gmail.com
- GitHub Issues: NextGenJAX Issues
- Community Forum: NextGenJAX Discussions
Last updated: 2023-05-10 12:00:00 UTC