
🤗 AutoTrain Advanced

AutoTrain Advanced: faster and easier training and deployment of state-of-the-art machine learning models. AutoTrain Advanced is a no-code solution that allows you to train machine learning models in just a few clicks. Please note that you must upload data in the correct format for a project to be created. For help with the proper data format and pricing, check out the documentation.

NOTE: AutoTrain is free! You only pay for the resources you use if you decide to run AutoTrain on Hugging Face Spaces. When running locally, you only pay for the resources you use on your own infrastructure.

Supported Tasks

Task | Python Notebook | Example Config
---- | --------------- | --------------
LLM SFT Finetuning | Open In Colab | llm_sft_finetune.yaml
LLM ORPO Finetuning | Open In Colab | llm_orpo_finetune.yaml
LLM DPO Finetuning | Open In Colab | llm_dpo_finetune.yaml
LLM Reward Finetuning | Open In Colab | llm_reward_finetune.yaml
LLM Generic/Default Finetuning | Open In Colab | llm_generic_finetune.yaml
Text Classification | Open In Colab | text_classification.yaml
Text Regression | Open In Colab | text_regression.yaml
Token Classification | Coming Soon | token_classification.yaml
Seq2Seq | Coming Soon | seq2seq.yaml
Extractive Question Answering | Coming Soon | extractive_qa.yaml
Image Classification | Coming Soon | image_classification.yaml
Image Scoring/Regression | Coming Soon | image_regression.yaml
VLM 🟥 | Coming Soon | vlm.yaml

Running UI on Colab or Hugging Face Spaces

  • Deploy AutoTrain on Hugging Face Spaces: Deploy on Spaces

  • Run AutoTrain UI on Colab via ngrok: Open In Colab
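If you prefer to wire this up yourself instead of using the linked notebook, the following is a minimal sketch (not the official notebook's code) that starts the AutoTrain UI inside a Colab runtime and exposes it through ngrok using the pyngrok package; the NGROK_AUTH_TOKEN value is a placeholder you must replace with your own token.

# Minimal sketch: expose the local AutoTrain UI from a Colab runtime via ngrok.
# Assumes `pip install autotrain-advanced pyngrok` has already been run and that
# NGROK_AUTH_TOKEN below is replaced with your own ngrok auth token (placeholder).
import subprocess

from pyngrok import ngrok

NGROK_AUTH_TOKEN = "your-ngrok-auth-token"  # placeholder
ngrok.set_auth_token(NGROK_AUTH_TOKEN)

# Start the AutoTrain UI in the background on localhost:8080.
ui_process = subprocess.Popen(
    ["autotrain", "app", "--port", "8080", "--host", "127.0.0.1"]
)

# Open a public tunnel to the local UI and print the URL to visit.
tunnel = ngrok.connect(8080)
print("AutoTrain UI available at:", tunnel.public_url)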

Local Installation

You can install the AutoTrain Advanced Python package via pip. Please note that you will need Python >= 3.10 for AutoTrain Advanced to work properly.

pip install autotrain-advanced

Please make sure that you have git lfs installed. Check out the instructions here: https://github.com/git-lfs/git-lfs/wiki/Installation

You also need to install torch, torchaudio and torchvision.

The best way to run autotrain is in a conda environment. You can create a new conda environment with the following command:

conda create -n autotrain python=3.10
conda activate autotrain
pip install autotrain-advanced
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-12.1.0" cuda-nvcc
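
As an optional sanity check before training, you can confirm that the environment has a CUDA-enabled PyTorch build:

# Optional sanity check: confirm that PyTorch was installed with CUDA support.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))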

Once done, you can start the application using:

autotrain app --port 8080 --host 127.0.0.1
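
The UI will then be reachable at http://127.0.0.1:8080 in your browser; adjust the --host and --port flags if you need to bind to a different address.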

If you prefer not to use the UI, you can train with AutoTrain config files from the command line, or use the AutoTrain CLI directly.

To use a config file for training, run the following command:

autotrain --config <path_to_config_file>

You can find sample config files in the configs directory of this repository.

Example config file for finetuning SmolLM2:

task: llm-sft
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
project_name: autotrain-smollm2-finetune
log: tensorboard
backend: local

data:
  path: HuggingFaceH4/no_robots
  train_split: train
  valid_split: null
  chat_template: tokenizer
  column_mapping:
    text_column: messages

params:
  block_size: 2048
  model_max_length: 4096
  epochs: 2
  batch_size: 1
  lr: 1e-5
  peft: true
  quantization: int4
  target_modules: all-linear
  padding: right
  optimizer: paged_adamw_8bit
  scheduler: linear
  gradient_accumulation: 8
  mixed_precision: bf16
  merge_adapter: true

hub:
  username: ${HF_USERNAME}
  token: ${HF_TOKEN}
  push_to_hub: true

To fine-tune a model using the config file above, you can use the following command:

$ export HF_USERNAME=<your_hugging_face_username>
$ export HF_TOKEN=<your_hugging_face_write_token>
$ autotrain --config <path_to_config_file>
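
For this config, your own training data should mirror the messages column of HuggingFaceH4/no_robots, i.e. each row holds a list of chat turns with role and content fields, which the chat_template: tokenizer setting formats using the model tokenizer's chat template. The snippet below is a small, optional sketch for inspecting that column with the datasets library; see the documentation for the authoritative data format.

# Optional sketch: inspect the chat-format `messages` column that the config
# above maps to `text_column`. Requires `pip install datasets`.
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/no_robots", split="train")

# Each entry is expected to be a list of {"role": ..., "content": ...} turns.
print(ds[0]["messages"])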

Documentation

Documentation is available at https://hf.co/docs/autotrain/

Citation

@inproceedings{thakur-2024-autotrain,
    title = "{A}uto{T}rain: No-code training for state-of-the-art models",
    author = "Thakur, Abhishek",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-demo.44",
    pages = "419--423",
    abstract = "With the advancements in open-source models, training (or finetuning) models on custom datasets has become a crucial part of developing solutions which are tailored to specific industrial or open-source applications. Yet, there is no single tool which simplifies the process of training across different types of modalities or tasks. We introduce AutoTrain (aka AutoTrain Advanced){---}an open-source, no code tool/library which can be used to train (or finetune) models for different kinds of tasks such as: large language model (LLM) finetuning, text classification/regression, token classification, sequence-to-sequence task, finetuning of sentence transformers, visual language model (VLM) finetuning, image classification/regression and even classification and regression tasks on tabular data. AutoTrain Advanced is an open-source library providing best practices for training models on custom datasets. The library is available at https://github.com/huggingface/autotrain-advanced. AutoTrain can be used in fully local mode or on cloud machines and works with tens of thousands of models shared on Hugging Face Hub and their variations.",
}


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

autotrain_advanced-0.8.36.tar.gz (211.9 kB)

Uploaded Source

Built Distribution

autotrain_advanced-0.8.36-py3-none-any.whl (341.3 kB)

Uploaded Python 3

File details

Details for the file autotrain_advanced-0.8.36.tar.gz.

File metadata

  • Download URL: autotrain_advanced-0.8.36.tar.gz
  • Upload date:
  • Size: 211.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.14

File hashes

Hashes for autotrain_advanced-0.8.36.tar.gz
Algorithm | Hash digest
--------- | -----------
SHA256 | 11346133200acb5aa8de118264a6955c96856406b203140e14c17e292cb698f3
MD5 | 658fd6c34d0436ede89523356086d574
BLAKE2b-256 | 97f6bbe5a0d78c3fe60546be3731c99f262d3d7201411574fb5708a023268432

See more details on using hashes here.
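
For example, here is a minimal sketch for checking the SHA256 digest of the downloaded sdist against the value listed above (it assumes the file sits in the current directory):

# Minimal sketch: verify the SHA256 of the downloaded sdist against the
# published digest listed above.
import hashlib

EXPECTED = "11346133200acb5aa8de118264a6955c96856406b203140e14c17e292cb698f3"

with open("autotrain_advanced-0.8.36.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "MISMATCH")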

File details

Details for the file autotrain_advanced-0.8.36-py3-none-any.whl.


File hashes

Hashes for autotrain_advanced-0.8.36-py3-none-any.whl
Algorithm | Hash digest
--------- | -----------
SHA256 | 03e5400bce4fb4c0a14114cdf2e1b4614e44b8949ec9ecf17eaccb9e61da0a06
MD5 | 0092e69fe5b78efca4fed86f9a397d76
BLAKE2b-256 | 18fa81e7f46e903ade3955ef8143aa12fae6a59fca67b13e76aa02df821d02ff

See more details on using hashes here.
