LLM plugin for embedding evaluation with semantic scoring
llm-embedding-eval
An LLM plugin that compares two embeddings and scores their similarity/relevance using various metrics.
It also supports SemScore, as proposed in the paper at https://arxiv.org/abs/2401.17072.
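At its core, SemScore measures semantic similarity as the cosine similarity between embeddings of the two texts. A minimal sketch of that underlying metric (an illustration, not the plugin's actual implementation):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```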
Installation
Install this plugin in the same environment as LLM:
llm install llm-embedding-eval
Usage
The plugin adds a new command, llm eval.
Usage: llm eval [OPTIONS] EMBEDDING1 EMBEDDING2
Evaluate similarity between two embeddings
This command compares two embeddings using various similarity metrics. For semantic scoring (semscore), the original texts must be provided, or available in the database when using DB files.
Supports both binary embedding files and SQLite DB files (.db extension).
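Embeddings stored by the LLM tool are packed arrays of little-endian float32 values; a sketch of decoding such a blob back into a Python list (assuming that storage format):

```python
import struct

def decode_embedding(blob: bytes) -> list[float]:
    # Assumes the blob is a packed array of little-endian float32 values,
    # which is how binary embedding files and LLM's SQLite DBs store vectors.
    n = len(blob) // 4
    return list(struct.unpack("<" + "f" * n, blob))

# Round-trip a small example vector through the binary format.
blob = struct.pack("<3f", 0.5, -1.0, 2.0)
print(decode_embedding(blob))
```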
Example usage:
# Semscore with auto-detection of the text column
llm eval --query "DB semantics" --metric semscore docs.db docs1.db
# Cosine similarity
llm eval --query "How similar are these?" --metric cosine docs.db docs1.db
# Custom prompt with the llm metric
llm eval --metric llm -m llama3.2 --query "Are the contents similar?" --prompt "Query: {query}\n\nMetrics for first text: {metricsA}\nMetrics for second text: {metricsB}\n\nBased on the semscore of {semscore}, are these texts similar? Give a detailed explanation." docs.db docs1.db
Note: the available prompt template variables are {query}, {metricsA}, {metricsB}, and {semscore}.
The default prompt used is:
Given the query: {query}
Compare these two embedding results:
Embedding A metrics: {metricsA}
Embedding B metrics: {metricsB}
SemScore: {semscore:.4f}
Which embedding is more relevant to the query? Answer with "Embedding A" or "Embedding B".
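The template is rendered with standard Python str.format semantics, which is why {semscore:.4f} formats the score to four decimal places. An illustration with made-up metric values:

```python
# Hypothetical illustration of rendering the default prompt template
# with str.format; the metric values here are invented for the example.
default_prompt = (
    "Given the query: {query}\n"
    "Compare these two embedding results:\n"
    "Embedding A metrics: {metricsA}\n"
    "Embedding B metrics: {metricsB}\n"
    "SemScore: {semscore:.4f}\n"
    'Which embedding is more relevant to the query? '
    'Answer with "Embedding A" or "Embedding B".'
)
prompt = default_prompt.format(
    query="DB semantics",
    metricsA={"cosine": 0.91},
    metricsB={"cosine": 0.74},
    semscore=0.8321,
)
print(prompt)
```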
Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-embedding-eval
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
python -m pytest
File details
Details for the file llm_embedding_eval-0.1.tar.gz.
File metadata
- Download URL: llm_embedding_eval-0.1.tar.gz
- Upload date:
- Size: 11.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6cd32f2638c23af04e76566eea74c3922efe349bc279abbfcbf2a7c73d11b775 |
| MD5 | 7631614938979e69bed785a1e03ce27c |
| BLAKE2b-256 | 7e789e458bb328ff3b396d714a54eeaa58a42acd6910c68f43c6b0040e17fbe4 |
File details
Details for the file llm_embedding_eval-0.1-py3-none-any.whl.
File metadata
- Download URL: llm_embedding_eval-0.1-py3-none-any.whl
- Upload date:
- Size: 11.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | cbcec621f20032e3b913f1958d2f56bf0779919a7d5a69686484a89d275541c6 |
| MD5 | b809715ea43bb6f5aa68bdf6f70fd199 |
| BLAKE2b-256 | 2ff7248fdc47512d313bc01edbb6e8a38de2cb9a2d3d039e1d348a727c575902 |