OSLO: Open Source framework for Large-scale transformer Optimization
Project description
What's New:
- December 30, 2021: Added the Deployment Launcher.
- December 21, 2021: Released OSLO 1.0.
What is OSLO about?
OSLO is a framework that provides various GPU-based optimization technologies for large-scale modeling. Its key features are 3D parallelism and kernel fusion, which are useful when training a large model like EleutherAI/gpt-j-6B. OSLO makes these technologies easy to use through seamless compatibility with Hugging Face Transformers, which has become the de facto standard library for transformer models. Architectures such as GPT2, GPTNeo, and GPTJ are currently supported, and we plan to support more soon.
Installation
OSLO can be easily installed with the pip package manager. All dependencies, such as torch, transformers, dacite, ninja, and pybind11, are installed automatically by the following command. Note that the PyPI project name contains 'core' (oslo-core), while the package is imported as oslo.
pip install oslo-core
Some features rely on C++ extensions, so we provide an environment variable, CPP_AVAILABLE, that controls whether they are installed.
- If C++ is available:
CPP_AVAILABLE=1 pip install oslo-core
- If C++ is not available:
CPP_AVAILABLE=0 pip install oslo-core
Note that CPP_AVAILABLE defaults to 0 on Windows and 1 on Linux.
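For intuition, here is a minimal sketch of how an environment variable like CPP_AVAILABLE can gate the build of C++ extensions in a setup script. This is illustrative only: the file and extension names are hypothetical, not OSLO's actual build configuration.

# Illustrative sketch: gating C++ extension builds on an environment
# variable. Names below are hypothetical, not OSLO's actual setup.py.
import os
from setuptools import setup

ext_modules = []
cmdclass = {}
# Default simplified to "1" here; OSLO picks the default per operating system.
if os.environ.get("CPP_AVAILABLE", "1") == "1":
    from torch.utils.cpp_extension import BuildExtension, CppExtension

    ext_modules.append(
        CppExtension(name="fused_kernels", sources=["kernels/fused.cpp"])
    )
    cmdclass["build_ext"] = BuildExtension

setup(
    name="example-package",
    version="0.0.1",
    ext_modules=ext_modules,
    cmdclass=cmdclass,
)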
Key Features
import deepspeed
from oslo import GPTJForCausalLM

# 1. 3D Parallelism
model = GPTJForCausalLM.from_pretrained_with_parallel(
    "EleutherAI/gpt-j-6B", tensor_parallel_size=2, pipeline_parallel_size=2,
)

# 2. Kernel Fusion
model = model.fuse()

# 3. DeepSpeed Support
engines = deepspeed.initialize(
    model=model.gpu_modules(), model_parameters=model.gpu_parameters(), ...,
)

# 4. Data Processing
from oslo import (
    DatasetPreprocessor,
    DatasetBlender,
    DatasetForCausalLM,
    ...
)

# 5. Deployment Launcher
model = GPTJForCausalLM.from_pretrained_with_parallel(..., deployment=True)
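To make step 3 concrete, the following is a hedged sketch of a standard DeepSpeed training loop, continuing from the snippet above. The config values and the dataloader are illustrative, and the `config` keyword argument is an assumption based on current DeepSpeed; the single-engine form shown here also sidesteps the per-stage engines that pipeline parallelism would add.

# Hedged sketch of a generic DeepSpeed training loop, continuing from the
# snippet above. Not OSLO-specific guidance; values are illustrative.
ds_config = {
    "train_batch_size": 8,              # illustrative values
    "zero_optimization": {"stage": 2},  # ZeRO data parallelism
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model.gpu_modules(),            # mirrors the call shown above
    model_parameters=model.gpu_parameters(),
    config=ds_config,                     # `config` kwarg: assumption per current DeepSpeed
)

for batch in dataloader:                  # `dataloader` assumed to yield dicts that include labels
    loss = engine(**batch).loss           # Hugging Face models return .loss when labels are given
    engine.backward(loss)                 # DeepSpeed-managed backward pass
    engine.step()                         # optimizer step plus ZeRO bookkeeping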
OSLO offers the following features.
- 3D Parallelism: The state-of-the-art technique for training a large-scale model with multiple GPUs.
- Kernel Fusion: A GPU optimization method to increase training and inference speed.
- DeepSpeed Support: We support DeepSpeed, which provides ZeRO data parallelism.
- Data Processing: Various utilities for efficient large-scale data processing.
- Deployment Launcher: A launcher for easily deploying a parallelized model to a web server.
See USAGE.md to learn how to use them.
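As a quick orientation before diving into USAGE.md, here is a minimal, hedged inference sketch combining parallel loading and kernel fusion. It assumes OSLO model classes keep the standard Hugging Face generate() API, which the compatibility described above suggests but this example does not verify; the parallel sizes are illustrative and must match your GPU count.

# Minimal, hedged inference sketch. Assumes OSLO models keep the standard
# Hugging Face generate() API; parallel sizes below are illustrative.
from transformers import AutoTokenizer
from oslo import GPTJForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

model = GPTJForCausalLM.from_pretrained_with_parallel(
    "EleutherAI/gpt-j-6B",
    tensor_parallel_size=2,    # tensor * pipeline size should match available GPUs
    pipeline_parallel_size=2,
)
model = model.fuse()           # enable fused kernels for faster inference

inputs = tokenizer("Large-scale language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))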
Administrative Notes
Citing OSLO
If you find our work useful, please consider citing:
@misc{oslo,
  author       = {Ko, Hyunwoong and Kim, Soohwan and Park, Kyubyong},
  title        = {OSLO: Open Source framework for Large-scale transformer Optimization},
  howpublished = {\url{https://github.com/tunib-ai/oslo}},
  year         = {2021},
}
Licensing
The code of the OSLO project is licensed under the terms of the Apache License 2.0.
Copyright 2021 TUNiB Inc. (http://www.tunib.ai) All Rights Reserved.
Acknowledgements
The OSLO project is built with GPU support from the AICA (Artificial Intelligence Industry Cluster Agency).
File details
Details for the file oslo-core-2.0.2.tar.gz (source distribution).
File metadata
- Download URL: oslo-core-2.0.2.tar.gz
- Upload date:
- Size: 115.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/3.7.3 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.64.0 CPython/3.7.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 3506396c7f0c507535ec6d7f664e6e17acd434a6fe122c9afb2ff97205407137
MD5 | 94dbd177eac3547149a2d0609706acce
BLAKE2b-256 | 188db9b4c87517abb02a6766e7aa75e732bb042a4d3bc00f8a9ed9f047969c00