
An open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies

Project description

InternEvo

👋 Join us on Discord and WeChat

Latest News 🔥

  • 2024/01/17: To delve deeper into the InternLM series of models, please check InternLM in our organization.

Introduction

InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies. With a single codebase, it supports pre-training on large-scale clusters with thousands of GPUs and fine-tuning on a single GPU, while achieving significant performance optimizations. InternEvo achieves nearly 90% acceleration efficiency when training on 1024 GPUs.

Based on the InternEvo training framework, we are continually releasing a variety of large language models, including the InternLM-7B and InternLM-20B series, which significantly outperform numerous renowned open-source LLMs such as LLaMA, as well as other leading models in the field.

Quick Start

Please refer to the Usage Tutorial to get started with InternEvo installation, data processing, pre-training, and fine-tuning.
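
As a quick taste of what a run looks like, here is a minimal, hypothetical sketch of a training config. InternEvo is installable from PyPI (pip install InternEvo), and training runs are driven by Python config files in the style of the repository's configs/7B_sft.py; the keys shown below are illustrative and may differ between versions, so treat the Usage Tutorial as authoritative.

    # Hypothetical excerpt of an InternEvo training config
    # (modeled on the repository's configs/7B_sft.py; keys are illustrative).
    SEQ_LEN = 2048

    model = dict(
        num_layers=32,            # 7B-class model depth
        hidden_size=4096,
        num_attention_heads=32,
    )

    data = dict(
        seq_len=SEQ_LEN,
        micro_num=4,              # micro-batches accumulated per optimizer step
    )

    parallel = dict(
        zero1=8,                  # Hybrid ZeRO: shard optimizer states across 8 ranks
        tensor=1,                 # tensor-parallel degree
        pipeline=dict(size=1),    # pipeline-parallel degree
    )

A config like this is typically passed to the repository's train.py entry point under a distributed launcher; the exact launch commands are given in the tutorial.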

For more details, please check internevo.readthedocs.io

System Architecture

Please refer to the System Architecture document for architecture details.

Performance

InternEvo deeply integrates Flash-Attention, Apex, and other high-performance model operators to improve training efficiency. Its Hybrid Zero technique achieves efficient overlap of computation and communication, significantly reducing cross-node communication traffic during training. InternEvo supports scaling training of the 7B model from 8 GPUs up to 1024 GPUs, with an acceleration efficiency of up to 90% at the thousand-GPU scale, a training throughput of over 180 TFLOPS per GPU, and an average of over 3600 tokens per GPU per second. The following table shows InternEvo's scalability test data at different configurations:

GPU Number    8     16    32    64    128   256   512   1024
TGS           4078  3939  3919  3944  3928  3920  3835  3625
TFLOPS        193   191   188   188   187   185   186   184

TGS denotes the average number of tokens processed per GPU per second. For more performance test data, please refer to the Training Performance document.
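
The headline numbers can be checked directly against the table: scaling efficiency here is the per-GPU token throughput (TGS) at a given scale divided by the 8-GPU baseline. A short sketch:

    # Reproduce the "nearly 90%" scaling efficiency quoted above,
    # using the TGS column from the table (tokens per GPU per second).
    tgs = {8: 4078, 16: 3939, 32: 3919, 64: 3944,
           128: 3928, 256: 3920, 512: 3835, 1024: 3625}

    baseline = tgs[8]
    for gpus, t in tgs.items():
        print(f"{gpus:>4} GPUs: TGS={t}, efficiency={t / baseline:.1%}")
    # 1024 GPUs: TGS=3625, efficiency=88.9%  -> "nearly 90%"

The same table also confirms the "over 3600 tokens per GPU per second" claim at the 1024-GPU scale.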

Contribution

We appreciate all the contributors for their efforts to improve and enhance InternEvo. Community users are highly encouraged to participate in the project. Please refer to the contribution guidelines for instructions on how to contribute to the project.

Acknowledgements

The InternEvo codebase is an open-source project contributed by Shanghai AI Laboratory and researchers from different universities and companies. We would like to thank all the contributors for their support in adding new features to the project, and the users for providing valuable feedback. We hope that this toolkit and benchmark can provide the community with flexible and efficient tools for fine-tuning models with InternEvo and developing their own models, thereby continuously contributing to the open-source community. Special thanks to the two open-source projects, flash-attention and ColossalAI.

Citation

@misc{2023internlm,
    title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities},
    author={InternLM Team},
    howpublished = {\url{https://github.com/InternLM/InternLM}},
    year={2023}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

InternEvo-0.1.2.tar.gz (185.3 kB)


Built Distribution

InternEvo-0.1.2-py3-none-any.whl (225.5 kB)


File details

Details for the file InternEvo-0.1.2.tar.gz.

File metadata

  • Download URL: InternEvo-0.1.2.tar.gz
  • Upload date:
  • Size: 185.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.13

File hashes

Hashes for InternEvo-0.1.2.tar.gz
Algorithm    Hash digest
SHA256       e388809136fcf8d449cb0b38295927d5d1962a87ec3d0a91180499930390fc5a
MD5          6578b108afd4aa7adf0509bd9534e55d
BLAKE2b-256  b682126f290d9d66c46514c1af9ac904bec427297372b8f4a62fce1b72cf5c33

See more details on using hashes here.
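
As an illustration of using these digests, a downloaded archive can be checked against the SHA256 value listed above with Python's standard hashlib module (the local file path below is illustrative; use wherever your browser or pip saved the file):

    # Verify the sdist against the SHA256 digest published on this page.
    import hashlib

    EXPECTED = "e388809136fcf8d449cb0b38295927d5d1962a87ec3d0a91180499930390fc5a"

    with open("InternEvo-0.1.2.tar.gz", "rb") as f:   # path to the downloaded file
        digest = hashlib.sha256(f.read()).hexdigest()

    assert digest == EXPECTED, f"hash mismatch: {digest}"
    print("SHA256 verified")

The same check applies to the wheel below, using its own digest.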

File details

Details for the file InternEvo-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: InternEvo-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 225.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.13

File hashes

Hashes for InternEvo-0.1.2-py3-none-any.whl
Algorithm    Hash digest
SHA256       be11a73189b46441c724e7d2aac849f138c049425a5ec0f1c23d81d5d9f2b1d9
MD5          95385d44211c1eaf7dcc6d1dcd321d5e
BLAKE2b-256  2928715baa435a6da4d417111f43b0012b7625d4e2b8cde9ad65f5d7077e54a6

See more details on using hashes here.
