
FlagEmbedding


News | Projects | Model List | Contributor | Citation | License

English | 中文

Hiring: We're seeking experienced NLP researchers and research interns focusing on dense retrieval and retrieval-augmented LLMs. If you're interested, please feel free to reach out to us via email at zhengliu1026@gmail.com.

FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:

News

  • 11/23/2023: Release LM-Cocktail, a method to maintain general capabilities during fine-tuning by merging multiple language models. Technical Report :fire:
  • 10/12/2023: Release LLM-Embedder, a unified embedding model to support diverse retrieval augmentation needs for LLMs. Technical Report
  • 09/15/2023: The technical report of BGE has been released
  • 09/15/2023: The massive training data of BGE has been released
  • 09/12/2023: New models:
    • New reranker models: released the cross-encoder models BAAI/bge-reranker-base and BAAI/bge-reranker-large, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
    • Updated embedding models: released the bge-*-v1.5 embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without an instruction.
  • 09/07/2023: Updated the fine-tuning code: added a script to mine hard negatives and support for adding an instruction during fine-tuning.
  • 08/09/2023: BGE models are integrated into LangChain; you can use them like this. The C-MTEB leaderboard is available.
  • 08/05/2023: Released base-scale and small-scale models with the best performance among models of the same size 🤗
  • 08/02/2023: Released the bge-large-* models (bge is short for BAAI General Embedding), which rank 1st on the MTEB and C-MTEB benchmarks! :tada: :tada:
  • 08/01/2023: We released the Chinese Massive Text Embedding Benchmark (C-MTEB), consisting of 31 test datasets.

Projects

LM-Cocktail

Model merging has been used to improve the performance of a single model. We find that this method is also useful for large language models and dense embedding models, and we design the LM-Cocktail strategy, which automatically merges fine-tuned models and the base model using a simple function to compute the merging weights. LM-Cocktail can improve performance on a target domain without decreasing general capabilities beyond that domain. It can also be used to generate a model for new tasks without fine-tuning. You can use it to merge LLMs (e.g., Llama) or embedding models. For more details, please refer to our report: LM-Cocktail and code.
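The core operation is a weighted average of parameters between a base model and a fine-tuned model of the same architecture. Below is a minimal illustrative sketch of that idea; the model paths and the fixed 0.5/0.5 weights are placeholders, whereas LM-Cocktail computes the merging weights automatically (see the report and code for the actual tooling).

```python
# Illustrative sketch of the LM-Cocktail idea: merge a fine-tuned model with its
# base model by a weighted average of their parameters. The fixed 0.5/0.5 weights
# and the model paths are placeholders; LM-Cocktail computes the weights
# automatically from a small amount of example data.
from transformers import AutoModel

base = AutoModel.from_pretrained("BAAI/bge-base-en-v1.5")
tuned = AutoModel.from_pretrained("path/to/fine-tuned-model")  # placeholder path

tuned_state = tuned.state_dict()
merged_state = {
    name: 0.5 * param + 0.5 * tuned_state[name]  # merging weights sum to 1
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("merged-model")  # placeholder output directory
```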

LLM Embedder

LLM Embedder is fine-tuned based on feedback from LLMs. It supports the retrieval augmentation needs of large language models, including knowledge retrieval, memory retrieval, exemplar retrieval, and tool retrieval. It is fine-tuned over 6 tasks: Question Answering, Conversational Search, Long Conversation, Long-Range Language Modeling, In-Context Learning, and Tool Learning. For more details, please refer to ./FlagEmbedding/llm_embedder/README.md
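As a rough usage sketch, the model can be queried through this package's FlagModel class. The instruction string below is a placeholder; the actual task-specific instructions (for QA, in-context learning, chat, tools, etc.) are listed in ./FlagEmbedding/llm_embedder/README.md.

```python
# Rough retrieval sketch with BAAI/llm-embedder via FlagModel (FlagEmbedding).
# The instruction string is a placeholder; use the task-specific instructions
# from ./FlagEmbedding/llm_embedder/README.md in practice.
from FlagEmbedding import FlagModel

model = FlagModel("BAAI/llm-embedder", use_fp16=True)

instruction = "Represent this query for retrieving relevant documents: "  # placeholder
queries = [instruction + "What is dense retrieval?"]
passages = ["Dense retrieval encodes queries and documents as vectors and matches them by inner product."]

q_emb = model.encode(queries)
p_emb = model.encode(passages)
print(q_emb @ p_emb.T)  # inner-product relevance scores
```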

BGE Reranker

A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., bi-encoder) but also more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by an embedding model. We train the cross-encoder on multilingual pair data. The data format is the same as for the embedding model, so you can fine-tune it easily following our example. For more details, please refer to ./FlagEmbedding/reranker/README.md
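A minimal re-ranking sketch, assuming the FlagReranker class shipped with this package; the query and candidate passages are made-up examples.

```python
# Minimal re-ranking sketch with a BGE reranker (cross-encoder).
from FlagEmbedding import FlagReranker

reranker = FlagReranker("BAAI/bge-reranker-base", use_fp16=True)

query = "what is a cross-encoder"
candidates = [
    "A cross-encoder attends jointly over the query and the passage.",
    "Paris is the capital of France.",
]

# Score each (query, passage) pair; higher scores mean more relevant.
scores = reranker.compute_score([[query, passage] for passage in candidates])
reranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
print(reranked)
```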

BGE Embedding

BGE embedding is a general embedding model. We pre-train the models using RetroMAE and then train them on large-scale pair data using contrastive learning. You can fine-tune the embedding model on your data following our examples. We also provide a pre-training example. Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first. For more training details for bge, see baai_general_embedding.
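A minimal retrieval sketch with a bge-*-v1.5 model, assuming this package's FlagModel class; encode_queries prepends the query instruction to queries, while passages are encoded without it.

```python
# Minimal retrieval sketch with a BGE v1.5 embedding model.
from FlagEmbedding import FlagModel

model = FlagModel(
    "BAAI/bge-base-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
    use_fp16=True,
)

queries = ["how are bge embeddings trained"]
passages = ["BGE models are pre-trained with RetroMAE and trained on large-scale pair data with contrastive learning."]

q_emb = model.encode_queries(queries)  # instruction is added to queries only
p_emb = model.encode(passages)         # passages need no instruction
print(q_emb @ p_emb.T)                 # similarity scores (embeddings are normalized)
```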

Model List

bge is short for BAAI general embedding.

Model | Language | Description | query instruction for retrieval
LM-Cocktail | English | fine-tuned models (Llama and BGE) that can be used to reproduce the results of LM-Cocktail |
BAAI/llm-embedder | English | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See README
BAAI/bge-reranker-large | Chinese and English | a cross-encoder model, more accurate but less efficient |
BAAI/bge-reranker-base | Chinese and English | a cross-encoder model, more accurate but less efficient |
BAAI/bge-large-en-v1.5 | English | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages:
BAAI/bge-base-en-v1.5 | English | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages:
BAAI/bge-small-en-v1.5 | English | version 1.5 with a more reasonable similarity distribution | Represent this sentence for searching relevant passages:
BAAI/bge-large-zh-v1.5 | Chinese | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章:
BAAI/bge-base-zh-v1.5 | Chinese | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章:
BAAI/bge-small-zh-v1.5 | Chinese | version 1.5 with a more reasonable similarity distribution | 为这个句子生成表示以用于检索相关文章:
BAAI/bge-large-en | English | embedding model that maps text to a vector | Represent this sentence for searching relevant passages:
BAAI/bge-base-en | English | a base-scale model with similar ability to bge-large-en | Represent this sentence for searching relevant passages:
BAAI/bge-small-en | English | a small-scale model with competitive performance | Represent this sentence for searching relevant passages:
BAAI/bge-large-zh | Chinese | embedding model that maps text to a vector | 为这个句子生成表示以用于检索相关文章:
BAAI/bge-base-zh | Chinese | a base-scale model with similar ability to bge-large-zh | 为这个句子生成表示以用于检索相关文章:
BAAI/bge-small-zh | Chinese | a small-scale model with competitive performance | 为这个句子生成表示以用于检索相关文章:
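The query instructions in the last column are plain text prefixes, so the models can also be used outside this package, e.g. via sentence-transformers. The sketch below assumes that route: prepend the instruction to queries only, not to passages.

```python
# Sketch of applying the "query instruction for retrieval" from the table above
# with sentence-transformers instead of FlagModel.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
instruction = "Represent this sentence for searching relevant passages: "

q_emb = model.encode(instruction + "what is retrieval augmentation", normalize_embeddings=True)
p_emb = model.encode(
    "Retrieval augmentation supplies a language model with externally retrieved passages.",
    normalize_embeddings=True,
)
print(float(q_emb @ p_emb))  # cosine similarity, since the embeddings are normalized
```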

Contributors:

Citation

If you find this repository useful, please consider giving it a star :star: and a citation

@misc{cocktail,
      title={LM-Cocktail: Resilient Tuning of Language Models via Model Merging}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Xingrun Xing},
      year={2023},
      eprint={2311.13534},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{llm_embedder,
      title={Retrieve Anything To Augment Large Language Models}, 
      author={Peitian Zhang and Shitao Xiao and Zheng Liu and Zhicheng Dou and Jian-Yun Nie},
      year={2023},
      eprint={2310.07554},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

License

FlagEmbedding is licensed under the MIT License.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

FlagEmbedding-1.1.8.tar.gz (26.8 kB)

Uploaded Source

File details

Details for the file FlagEmbedding-1.1.8.tar.gz.

File metadata

  • Download URL: FlagEmbedding-1.1.8.tar.gz
  • Upload date:
  • Size: 26.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for FlagEmbedding-1.1.8.tar.gz
Algorithm Hash digest
SHA256 780a6ed3fe267238fef07356c2579c4ae6b560ad012dfd327a119874e5163e69
MD5 72a52642d652236c1fd705472eb5ea66
BLAKE2b-256 90d09e2c1270118a32ff7f22083754bf1cad2e19e970fb70c46cd1597d68af43

See more details on using hashes here.
