A framework for evaluating language models, packaged by NVIDIA.
NVIDIA NeMo Evaluator
The goal of NVIDIA NeMo Evaluator is to advance and refine state-of-the-art methodologies for model evaluation, and deliver them as modular evaluation packages (evaluation containers and pip wheels) that teams can use as standardized building blocks.
Quick start guide
NVIDIA NeMo Evaluator provides evaluation clients that are purpose-built to evaluate model endpoints using our Standard API.
Launching an evaluation for an LLM
1. Install the package:

pip install nvidia-lm-eval

2. (Optional) Set a token to your API endpoint if it's protected:

export MY_API_KEY="your_api_key_here"

3. List the available evaluations:

$ nemo-evaluator ls
Available tasks:
* mmlu (in lm-evaluation-harness)
* ifeval (in lm-evaluation-harness)
* mmlu_pro (in lm-evaluation-harness)
* math (in lm-evaluation-harness)
...

4. Run the evaluation of your choice:

nemo-evaluator run_eval \
  --eval_type mmlu_pro \
  --model_id meta/llama-3.1-70b-instruct \
  --model_url https://integrate.api.nvidia.com/v1/chat/completions \
  --model_type chat \
  --api_key_name MY_API_KEY \
  --output_dir /workspace/results

5. Gather the results:

cat /workspace/results/results.yml
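Since results.yml is plain YAML, the results can also be loaded programmatically. Here is a minimal Python sketch, assuming only that the file parses as YAML (the exact schema depends on the task):

import json

import yaml  # requires: pip install pyyaml

# Load the evaluation results written by nemo-evaluator.
with open("/workspace/results/results.yml") as f:
    results = yaml.safe_load(f)

# Pretty-print the full structure; the exact keys depend on the task.
print(json.dumps(results, indent=2, default=str))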
Command-Line Tool
Each package comes pre-installed with a set of command-line tools designed to simplify the execution of evaluation tasks. Below are the available commands and their usage for the lm_eval package (lm-evaluation-harness):
Commands
1. List Evaluation Types
nemo-evaluator ls
Displays the evaluation types available within the harness.
2. Run an Evaluation
The nemo-evaluator run_eval command executes the evaluation process. Below are the flags and their descriptions:
Required flags
- --eval_type <string>: The type of evaluation to perform.
- --model_id <string>: The name or identifier of the model to evaluate.
- --model_url <url>: The API endpoint where the model is accessible.
- --model_type <string>: The type of the model to evaluate; currently one of "chat", "completions", or "vlm".
- --output_dir <directory>: The directory to use as the working directory for the evaluation. The results, including the results.yml output file, are saved here.
Optional flags
- --api_key_name <string>: The name of the environment variable that stores the Bearer token for the API, if authentication is required.
- --run_config <path>: The path to a YAML file containing the evaluation definition.
Example
nemo-evaluator run_eval \
--eval_type ifeval \
--model_id my_model \
--model_type chat \
--model_url http://localhost:8000 \
--output_dir ./evaluation_results
If the model API requires authentication, set the API key in an environment variable and reference it using the --api_key_name flag:
export MY_API_KEY="your_api_key_here"
nemo-evaluator run_eval \
--eval_type ifeval \
--model_id my_model \
--model_type chat \
--model_url http://localhost:8000 \
--api_key_name MY_API_KEY \
--output_dir ./evaluation_results
Configuring evaluations via YAML
Evaluations in NVIDIA NeMo Evaluator are configured using YAML files that define the parameters and settings required for the evaluation process. These configuration files follow a standard API, which ensures consistency across evaluations.
Example of a YAML config:
config:
type: ifeval
params:
parallelism: 50
limit_samples: 20
target:
api_endpoint:
model_id: meta/llama-3.1-8b-instruct
type: chat
url: https://integrate.api.nvidia.com/v1/chat/completions
api_key: NVIDIA_API_KEY
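To run an evaluation from such a file, save it (config.yml below is a hypothetical filename) and pass it via the --run_config flag. A minimal sketch; the required flags are repeated here, and per the override rules described below, values given on the command line take precedence over the file:

nemo-evaluator run_eval \
  --eval_type ifeval \
  --model_id meta/llama-3.1-8b-instruct \
  --model_type chat \
  --model_url https://integrate.api.nvidia.com/v1/chat/completions \
  --api_key_name NVIDIA_API_KEY \
  --run_config config.yml \
  --output_dir ./evaluation_results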
The priority of overrides is as follows, from highest to lowest:
- command line arguments
- user config (as seen above)
- task defaults (defined per task type)
- framework defaults
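Conceptually, the final run configuration is a layered merge of these four sources. The following Python sketch illustrates the idea (it is not the actual implementation, and the parameter values are made up):

import copy

def merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` on top of `base`."""
    out = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

framework_defaults = {"params": {"parallelism": 10, "timeout": 30}}
task_defaults = {"params": {"parallelism": 32}}
user_config = {"params": {"parallelism": 50, "limit_samples": 20}}
cli_args = {"params": {"limit_samples": 5}}

# Apply layers from lowest to highest priority.
final = framework_defaults
for layer in (task_defaults, user_config, cli_args):
    final = merge(final, layer)
print(final)  # {'params': {'parallelism': 50, 'timeout': 30, 'limit_samples': 5}}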
The --dry_run option prints the final run configuration and command without executing the evaluation.
Example:
nemo-evaluator run_eval \
--eval_type mmlu_pro_instruct \
--model_id my_model \
--model_type chat \
--model_url http://localhost:8000 \
--output_dir .evaluation_results \
--dry_run
Output:
Rendered config:
command: '{% if target.api_endpoint.api_key is not none %}OPENAI_API_KEY=${{target.api_endpoint.api_key}}{%
endif %} lm-eval --tasks {{config.params.task}}{% if config.params.extra.num_fewshot
is defined %} --num_fewshot {{ config.params.extra.num_fewshot }}{% endif %} --model
{% if target.api_endpoint.type == "completions" %}local-completions{% elif target.api_endpoint.type
== "chat" %}local-chat-completions{% endif %} --model_args "base_url={{target.api_endpoint.url}},model={{target.api_endpoint.model_id}},tokenized_requests=false,{%
if target.api_endpoint.type == "completions" %}tokenizer={{config.params.extra.tokenizer}}{%
endif %},num_concurrent={{config.params.parallelism}}{% if config.params.max_new_tokens
is not none %},max_gen_toks={{ config.params.max_new_tokens }}{% endif %},timeout={{
config.params.timeout }},max_retries={{ config.params.max_retries }},stream={{ target.api_endpoint.stream
}}" --log_samples --output_path {{config.output_dir}} --use_cache {{config.output_dir}}/lm_cache
{% if config.params.limit_samples is not none %}--limit {{config.params.limit_samples}}{%
endif %} {% if target.api_endpoint.type == "chat" %}--fewshot_as_multiturn --apply_chat_template
{% endif %} {% if config.params.extra.args is defined %} {{config.params.extra.args}}
{% endif %} {% if config.params.temperature is not none or config.params.top_p is
not none %}--gen_kwargs="{% if config.params.temperature is not none %}temperature={{
config.params.temperature }},{% endif %}{% if config.params.top_p is not none %}top_p={{
config.params.top_p}}{% endif %}"{% endif %}'
framework_name: lm-evaluation-harness
pkg_name: lm_eval
config:
output_dir: .evaluation_results
params:
limit_samples: null
max_new_tokens: 1024
max_retries: 5
parallelism: 10
task: mmlu_pro
temperature: 1.0e-07
timeout: 30
top_p: 0.9999999
extra:
tokenizer: meta-llama/Llama-3.1-70B-Instruct
num_fewshot: 0
supported_endpoint_types:
- chat
type: mmlu_pro_instruct
target:
api_endpoint:
api_key: null
model_id: my_model
stream: false
type: chat
url: http://localhost:8000
Rendered command:
lm-eval --tasks mmlu_pro --num_fewshot 0 --model local-chat-completions --model_args "base_url=http://localhost:8000,model=my_model,tokenized_requests=false,,num_concurrent=10,max_gen_toks=1024,timeout=30,max_retries=5,stream=False" --log_samples --output_path .evaluation_results --use_cache .evaluation_results/lm_cache --fewshot_as_multiturn --apply_chat_template --gen_kwargs="temperature=1e-07,top_p=0.9999999"
FAQ
Deploying a model as an endpoint
NVIDIA NeMo Evaluator uses a client-server communication architecture to interact with the model. As a prerequisite, the model must be deployed as an endpoint with a NIM-compatible API.
Users can deploy the model using their own infrastructure and tooling.
Servers with APIs that conform to the OpenAI/NIM API standard are expected to work seamlessly out of the box.
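As a quick sanity check that a deployment speaks the expected OpenAI-compatible chat API, you can send it a single request before launching an evaluation. A minimal Python sketch; the URL, model name, and MY_API_KEY environment variable are placeholders for your own deployment:

import os

import requests  # requires: pip install requests

# Placeholders: point these at your own deployment.
url = "http://localhost:8000/v1/chat/completions"
headers = {"Content-Type": "application/json"}
api_key = os.environ.get("MY_API_KEY")
if api_key:
    headers["Authorization"] = f"Bearer {api_key}"

# Standard OpenAI-style chat completions payload.
payload = {
    "model": "my_model",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 16,
}
resp = requests.post(url, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])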