unitorch provides efficient implementations of popular unified NLU / NLG / CV / CTR / MM / RL models with PyTorch.
Introduction
🔥 unitorch is a library that simplifies and accelerates the development of unified models for natural language understanding, natural language generation, computer vision, click-through rate prediction, multimodal learning, and reinforcement learning. It is built on top of PyTorch and integrates seamlessly with popular frameworks such as transformers, peft, diffusers, and fastseq. With unitorch, you can use a single command-line tool or a one-line import unitorch to leverage state-of-the-art models and datasets without sacrificing performance or accuracy.
Supported Models
- SDXL released with the paper SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, Robin Rombach.
- LLaMA released with the paper LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
- ControlNet released with the paper Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala.
- BLOOM released with the paper BLOOM: A 176B-Parameter Open-Access Multilingual Language Model by BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow...
- PEGASUS-X released with the paper Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao, Peter J. Liu.
- BLIP released with the paper BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
- BEiT released with the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Songhao Piao, Furu Wei.
- Swin Transformer released with the paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
- CLIP released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
- mT5 released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
- Vision Transformer (ViT) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
- DeBERTa-V2 released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
- DeBERTa released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
- MBart released with the paper Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
- PEGASUS released with the paper PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu.
- BART released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
- T5 released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
- VisualBERT released with the paper VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
- RoBERTa released together with the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
- BERT released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
Features
- User-Friendly Python Package
- Faster & Streamlined Train/Inference
- DeepSpeed Integration for Large-Scale Models
- CUDA Optimization
- Extensive SOTA Model & Task Support
Installation
pip3 install unitorch
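To reproduce the exact release described on this page (version 0.0.0.20, the version listed under the file details below), the package version can be pinned:
pip3 install unitorch==0.0.0.20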
Quick Examples
Source Code
import unitorch
# import bart model
from unitorch.models.bart import BartForGeneration
model = BartForGeneration("path/to/bart/config.json")
# use the configuration class
from unitorch.cli import CoreConfigureParser
config = CoreConfigureParser("path/to/config.ini")
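Since unitorch is built on top of PyTorch, the model created above behaves like a regular torch.nn.Module. A minimal sketch of the usual follow-up steps, assuming the snippet above has already run (device placement and the parameter count shown here are generic PyTorch usage, not unitorch-specific API):
import torch
# move the model to GPU if one is available and switch to inference mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()
# count the trainable parameters of the BART generation model
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {num_params:,}")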
Multi-GPU Training
torchrun --no_python --nproc_per_node 4 \
unitorch-train examples/configs/generation/bart.ini \
--train_file path/to/train.tsv --dev_file path/to/dev.tsv
Single-GPU Inference
unitorch-infer examples/configs/generation/bart.ini --test_file path/to/test.tsv
Find more details in the Tutorials section of the documentation.
License
Code released under the MIT license.
Download files
Source Distribution: unitorch-0.0.0.20.tar.gz
Built Distribution: unitorch-0.0.0.20-py3-none-any.whl
File details
Details for the file unitorch-0.0.0.20.tar.gz.
File metadata
- Download URL: unitorch-0.0.0.20.tar.gz
- Upload date:
- Size: 1.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3d4784b3a6407f4037fe074a7ae89624ad918ddd53757ae564a745783179fc21
MD5 | 9379c652ca12a0e81f727466738d7ac9
BLAKE2b-256 | d3e966e6ec1591d6eabc1941beb500b07c024300c856fabc70e72f341a7a6a35
File details
Details for the file unitorch-0.0.0.20-py3-none-any.whl.
File metadata
- Download URL: unitorch-0.0.0.20-py3-none-any.whl
- Upload date:
- Size: 737.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 179a4812610cbdb3b98d1cea83f2eaf0cebbc12607338b11da8055989442432e
MD5 | d273ddfedb929a8e93f62089d545fa17
BLAKE2b-256 | 97aae85393e267f046e444f42df18a616621f4525c91d04aa40606bfb489cca6
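After downloading either file, the SHA256 digests above can be checked locally. A minimal sketch using only the Python standard library (the file name and digest below are the ones published for the source distribution; substitute the wheel's values as needed):
import hashlib

# expected digest for unitorch-0.0.0.20.tar.gz, copied from the table above
EXPECTED_SHA256 = "3d4784b3a6407f4037fe074a7ae89624ad918ddd53757ae564a745783179fc21"

with open("unitorch-0.0.0.20.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("match" if digest == EXPECTED_SHA256 else "MISMATCH")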