
Core training module for the Open Language Model (OLMo)

Project description


OLMo-core

Building blocks for OLMo modeling and training


Installation

First install PyTorch according to the instructions specific to your operating system and hardware.

For development, we recommend installing from source:

git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core
pip install -e .[all]

Or you can install from PyPI with:

pip install ai2-olmo-core

There are also a number of optional dependencies that must be installed to use certain functionality.

The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:

  • They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
  • They may not work on your own cluster if you have different hardware or driver/CUDA versions.

If the published images do not work for your use case for any of the above reasons, you can adapt our Dockerfile to build your own images.

Official training scripts

Official training scripts for released models can be found in src/scripts/official/. These scripts are meant to be launched with torchrun, or with OLMo-core's Beaker launch CLI if you have access to Beaker.

For example:

torchrun --nproc-per-node=8 src/scripts/official/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints

You can override most configuration options from the command-line. For example, to override the learning rate you could launch the script like this:

torchrun --nproc-per-node=8 src/scripts/official/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints \
  --train_module.optim.lr=6e-3
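Overrides like `--train_module.optim.lr=6e-3` are dotted paths into the nested training configuration. As a rough, hypothetical sketch of how such a flag could be applied to a nested config (illustrative only; not OLMo-core's actual parser), using a plain dict:

```python
def apply_override(config: dict, override: str) -> None:
    """Apply a single --a.b.c=value style override to a nested dict."""
    key_path, raw_value = override.lstrip("-").split("=", 1)
    keys = key_path.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    # Best-effort type coercion: try numeric, fall back to string.
    try:
        value = float(raw_value) if "." in raw_value or "e" in raw_value.lower() else int(raw_value)
    except ValueError:
        value = raw_value
    node[keys[-1]] = value

config = {"train_module": {"optim": {"lr": 3e-4}}}
apply_override(config, "--train_module.optim.lr=6e-3")
print(config["train_module"]["optim"]["lr"])  # 0.006
```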

To continue annealing from a checkpoint, we use a separate script, which can be launched like this:

torchrun --nproc-per-node=8 src/scripts/official/OLMo-2-0325-32B-anneal.py \
  --save-folder=/path/to/save/checkpoints \
  --checkpoint=https://olmo-checkpoints.org/ai2-llm/peteish32/step721901

OLMo-2 Model Training

OLMo-2 32B pretraining follows a two-stage training procedure. In the first stage, we train on large amounts of mostly web-based data: OLMo-mix-1124. In the second stage, we train on a smaller amount of high-quality, targeted data: Dolmino-mix-0324 (releasing soon).

| Stage | Model Size | Training | Checkpoint | Monitoring |
|-------|------------|----------|------------|------------|
| stage 1 | 32B | 6T tokens | stage1-step721901-tokens6056B | comet.ml/OLMo2-32B |
| stage 2 | 32B | random seed 1110, 100B tokens | stage2-ingredient1-step11921-tokens101B | comet.ml/OLMo2-32B |
| stage 2 | 32B | random seed 2662, 100B tokens | stage2-ingredient2-step11921-tokens101B | comet.ml/OLMo2-32B |
| stage 2 | 32B | random seed 2662, 300B tokens | stage2-ingredient3-step35763-tokens301B | comet.ml/OLMo2-32B |
| final souped model | 32B | no config, weights averaged in Python | main | |
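The final model is produced by "souping", i.e. averaging the weights of the stage-2 ingredient checkpoints. A minimal sketch of that averaging step, with plain Python lists standing in for parameter tensors (the real procedure operates on full model state dicts):

```python
def soup(state_dicts):
    """Average corresponding parameters across checkpoints (a uniform soup)."""
    n = len(state_dicts)
    return {
        key: [sum(vals) / n for vals in zip(*(sd[key] for sd in state_dicts))]
        for key in state_dicts[0]
    }

# Toy "checkpoints" with two parameters each:
ingredient1 = {"w": [1.0, 2.0], "b": [0.0]}
ingredient2 = {"w": [3.0, 4.0], "b": [2.0]}
souped = soup([ingredient1, ingredient2])
print(souped)  # {'w': [2.0, 3.0], 'b': [1.0]}
```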

The table below lists the checkpoints for Stage 1 and Stage 2 of OLMo-2, along with their corresponding Hugging Face format.

| Variant | OLMo Format (Stage 1) | OLMo Format (Stage 2) | Hugging Face Format |
|---------|-----------------------|-----------------------|---------------------|
| OLMo-2 32B | OLMo-2 32B | OLMo-2 32B | Hugging Face for the 32B variant |

Note: OLMo-2 7B and 13B models were trained using the old OLMo trainer. All related checkpoints, configs, and scripts for these models can be found in that repository. While you can train 7B and 13B models with this trainer, please note that the configs and scripts in the old training codebase are not compatible with this repo.

Inference

You can use our Hugging Face integration to run inference on the OLMo transformers checkpoints:

from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0325-32B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0325-32B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# Optional: run on GPU.
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
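The `generate` call above samples with `top_k=50` and `top_p=0.95`. As a rough illustration of what those two parameters do, here is a stdlib-only sketch of top-k plus nucleus (top-p) filtering over a toy next-token distribution (hypothetical tokens and probabilities; not the transformers implementation):

```python
import random

def filter_top_k_top_p(probs, top_k, top_p):
    """Keep only the top-k most likely tokens, then the smallest prefix of
    those whose cumulative probability reaches top_p; renormalize the rest."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.05}
filtered = filter_top_k_top_p(probs, top_k=3, top_p=0.9)
# Sample the next token from the filtered, renormalized distribution.
token = random.choices(list(filtered), weights=list(filtered.values()))[0]
```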

Alternatively, with the Hugging Face pipeline abstraction:

from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-2-0325-32B")
print(olmo_pipe("Language modeling is"))

Quantization

import torch
from transformers import AutoModelForCausalLM

olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B", torch_dtype=torch.float16, load_in_8bit=True
)  # requires bitsandbytes

Evaluation

Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.

Development

The Python library source code is located in src/olmo_core. The corresponding tests are located in src/test. The library docs are located in docs. You can build the docs locally with make docs.

Code checks:

  • We use pytest to run tests. You can run all tests with pytest -v src/test. You can also point pytest at a specific test file to run it individually.
  • We use isort and black for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run make style-check.
  • We use ruff as our primary linter. You can run it with make lint-check.
  • We use mypy as our type checker. You can run it with make type-check.

Citing

@misc{olmo20242olmo2furious,
      title={{2 OLMo 2 Furious}},
      author={{Team OLMo} and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2501.00656},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.00656},
}



Download files

Download the file for your platform.

Source Distribution

ai2_olmo_core-2.3.0.tar.gz (345.1 kB)


Built Distribution


ai2_olmo_core-2.3.0-py3-none-any.whl (404.3 kB)


File details

Details for the file ai2_olmo_core-2.3.0.tar.gz.

File metadata

  • Download URL: ai2_olmo_core-2.3.0.tar.gz
  • Size: 345.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.19

File hashes

Hashes for ai2_olmo_core-2.3.0.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 7745fea1268cf0d882d28c25fbbfbedc969e2f9bf4f187d8c042105429d0dff7 |
| MD5 | df4e9e3d488e32c21815b71045422b74 |
| BLAKE2b-256 | 68f80a55822e471a1143fc4e55dbe6995259141260ed9c460c7eac448e9e9acd |

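The published hashes can be used to verify a downloaded distribution before installing it. A small sketch using Python's `hashlib` (the local file path is a placeholder):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the digest published above before installing, e.g.:
expected = "7745fea1268cf0d882d28c25fbbfbedc969e2f9bf4f187d8c042105429d0dff7"
# assert sha256_of("ai2_olmo_core-2.3.0.tar.gz") == expected
```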

File details

Details for the file ai2_olmo_core-2.3.0-py3-none-any.whl.

File metadata

  • Download URL: ai2_olmo_core-2.3.0-py3-none-any.whl
  • Size: 404.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.19

File hashes

Hashes for ai2_olmo_core-2.3.0-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | 7b22d09aaaf8e49000c808833b377b7cef774f10e93114fb3c6e61b74cb75df8 |
| MD5 | aa384e605d4c5de9c1a4316fa5c0ec1a |
| BLAKE2b-256 | 7dd2353853a4de4ccf209dd04d3ceb28e2d8ce11c06816a2517eb6b412524294 |

