evaluate

Anaconda Verified

HuggingFace community-driven, open-source library for evaluation

Installation

To install this package, run the following:

Conda
$ conda install main::evaluate

Usage Tracking

Downloads (Last 6 months): 0

Description

Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.

It currently contains:

  • implementations of dozens of popular metrics: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like accuracy = load("accuracy"), you get any of these metrics ready to evaluate an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX).
  • comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
  • an easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with evaluate-cli create [metric name], which lets you easily compare different metrics and their outputs for the same sets of references and predictions.

About

Summary

HuggingFace community-driven, open-source library for evaluation

Last Updated

Oct 30, 2025 at 19:10

License

Apache-2.0

Total Downloads

1.8K

Supported Platforms

macOS-arm64
linux-64
linux-aarch64
win-64

Unsupported Platforms

linux-ppc64le (last supported version: 0.4.0)
macOS-64 (last supported version: 0.4.3)