
The official Cartesia PyTorch library.


Models

This repository contains the PyTorch implementation of the Rene and Llamba models, which are large-scale language models trained by Cartesia.

Rene

Rene is a 1.3-billion-parameter language model, the first in a series trained by Cartesia. It has a hybrid architecture based on Mamba-2, with feedforward and sliding-window attention layers interspersed, and uses the allenai/OLMo-1B-hf tokenizer. Rene was pretrained on 1.5 trillion tokens of the Dolma-1.7 dataset. For more details, see our blog post.
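To illustrate the sliding-window attention component of the hybrid architecture, here is a minimal pure-Python sketch of the attention mask such a layer uses (illustrative only, not Cartesia's implementation; the window size is arbitrary):

```python
def sliding_window_mask(seq_len, window):
    """Boolean causal mask where position i may attend only to
    positions j with i - window < j <= i (sliding-window attention)."""
    return [
        [(i - window < j <= i) for j in range(seq_len)]
        for i in range(seq_len)
    ]

# Each query attends to at most `window` keys, so attention cost grows
# linearly with sequence length instead of quadratically.
mask = sliding_window_mask(seq_len=5, window=3)
for row in mask:
    print(["x" if m else "." for m in row])
```

Interleaving such layers with Mamba-2 blocks keeps per-token cost bounded while still giving each position direct access to its recent context.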

Llamba

The Llamba model series is a family of highly efficient recurrent language models distilled from meta-llama/Llama-3.x into the Mamba-2 architecture, developed in collaboration between Cartesia and CMU’s Goomba Lab. The series includes Llamba-1B, Llamba-3B, and Llamba-8B, delivering high inference throughput while maintaining competitive benchmark performance. Llamba models scale linearly with input length and were trained on 8B, 10B, and 12B tokens, respectively, demonstrating the effectiveness of distillation in large language models.
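The linear scaling comes from the recurrent formulation: a state-space layer carries a fixed-size state forward one token at a time, so total cost is proportional to sequence length. A minimal scalar sketch of such a recurrence (illustrative only; the actual models use the multi-dimensional Mamba-2 kernels):

```python
def ssm_scan(x, a=0.9, b=1.0, c=1.0):
    """Minimal scalar state-space recurrence:
    h_t = a * h_{t-1} + b * x_t,   y_t = c * h_t.
    One O(1) update per token gives O(len(x)) total work and O(1) state,
    unlike attention's O(len(x)^2) pairwise scores."""
    h, ys = 0.0, []
    for xt in x:
        h = a * h + b * xt
        ys.append(c * h)
    return ys
```

At inference time only `h` must be kept between tokens, which is what makes recurrent models like Llamba cheap at long context lengths.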

Usage

This is the PyTorch version of the package, and it's intended to run on CUDA devices. For use on Mac computers, please install the native MLX version instead.

Installation

The models are provided by the cartesia-pytorch package, which can be installed with pip as follows (the --no-binary :all: flag forces a build from source):

pip install --no-binary :all: cartesia-pytorch

Generation example

python -m evals.generation \
--model Rene \
--prompt "Rene Descartes was" \
--promptlen 100 \
--genlen 100 \
--dtype bfloat16 \
--temperature 1.0 \
--top_k 1 \
--top_p 0.99 \
--min_p 0.0 \
--repetition_penalty 1.0
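With --temperature 1.0 and --top_k 1, the command above effectively decodes greedily; the flags correspond to standard sampling filters. A rough pure-Python sketch of how top-k/top-p filtering narrows the candidate set (illustrative, not this repository's sampler; `filter_logits` is a hypothetical helper):

```python
def filter_logits(probs, top_k=0, top_p=1.0):
    """Keep the top_k most probable tokens, then the smallest prefix
    whose cumulative probability reaches top_p; drop the rest and
    renormalize. `probs` maps token -> probability."""
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        items = items[:top_k]
    kept, total = [], 0.0
    for tok, p in items:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    z = sum(p for _, p in kept)  # renormalize the surviving tokens
    return {tok: p / z for tok, p in kept}

# top_k=1 always keeps only the argmax token, i.e. greedy decoding.
```

Raising top_k or top_p widens the candidate set and makes generation more diverse; repetition_penalty down-weights tokens that have already appeared.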

To generate using another model, replace Rene with the desired model name, e.g., Llamba-8B:

python -m evals.generation \
--model Llamba-8B \
--prompt "My favorite book is" \
--promptlen 100 \
--genlen 100 \
--dtype bfloat16 \
--temperature 1.0 \
--top_k 1 \
--top_p 0.99 \
--min_p 0.0 \
--repetition_penalty 1.0298

Evaluation example

You can use our cartesia_lm_eval wrapper around the Language Model Evaluation Harness to evaluate our models on standard text benchmarks. Example commands (clone this repo and run the following from within the cartesia-pytorch directory):

python -m evals.cartesia_lm_eval --model rene_ssm --model_args pretrained=cartesia-ai/Rene-v0.1-1.3b-pytorch,trust_remote_code=True --trust_remote_code --tasks copa,hellaswag,piqa,arc_easy,arc_challenge,winogrande,openbookqa --cache_requests true --batch_size auto:4 --output_path outputs/rene_evals/
python -m evals.cartesia_lm_eval --model llamba_ssm --model_args pretrained=cartesia-ai/Llamba-8B,trust_remote_code=True --trust_remote_code --tasks hellaswag,piqa,arc_easy,arc_challenge,winogrande,mmlu --cache_requests true --batch_size auto:4 --output_path outputs/llamba_evals/
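The harness writes a JSON results file under --output_path. A hedged sketch of summarizing per-task accuracy from that structure (the exact schema varies across harness versions; the `summarize` helper and the sample dict below are illustrative):

```python
def summarize(results_json):
    """Extract one accuracy number per task from an
    lm-eval-harness-style results dict:
    {"results": {task: {metric: value, ...}}} (schema is illustrative)."""
    rows = {}
    for task, metrics in results_json["results"].items():
        # Prefer plain accuracy when present; fall back to the first metric.
        acc = metrics.get("acc,none", metrics.get("acc", next(iter(metrics.values()))))
        rows[task] = acc
    return rows

# Sample data standing in for a loaded results file.
example = {"results": {"piqa": {"acc,none": 0.78}, "winogrande": {"acc,none": 0.69}}}
for task, acc in summarize(example).items():
    print(f"{task}: {acc:.3f}")
```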

About Cartesia

At Cartesia, we're building real-time multimodal intelligence for every device.
