
Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. It currently contains:

- implementations of dozens of popular metrics: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, any of these metrics is ready to use for evaluating an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX).
- comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
- an easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with `evaluate-cli create [metric name]`, which lets you easily compare different metrics and their outputs for the same sets of references and predictions.
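
A minimal usage sketch of the loading pattern described above, assuming evaluate 0.3.0 and scikit-learn (required by the `accuracy` metric script) are installed:

```python
# Minimal sketch: load a metric from the Hub and score a toy set of predictions.
import evaluate

# Downloads and caches the "accuracy" module on first use.
accuracy = evaluate.load("accuracy")

# Plain Python lists work; NumPy arrays, pandas Series, and framework tensors do too.
results = accuracy.compute(references=[0, 1, 0, 1], predictions=[1, 0, 0, 1])
print(results)  # {'accuracy': 0.5}

# Comparisons and measurements are loaded the same way by passing module_type,
# e.g. evaluate.load("mcnemar", module_type="comparison").
```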

Uploaded Mon Mar 31 21:34:00 2025
arch x86_64
build py310h06a4308_0
depends cookiecutter, datasets >=2.0.0, dill, fsspec >=2021.05.0, huggingface_hub >=0.7.0, multiprocess, numpy >=1.17,<2.0a0, packaging, pandas, python >=3.10,<3.11.0a0, python-xxhash, requests >=2.19.0, responses <0.19, tqdm >=4.62.1
license Apache-2.0
license_family Apache
md5 a3b5e7179fbf7a8de5f258a781da5639
name evaluate
platform linux
sha1 9e15f3b751310a74606419bbee7d1111b8d873e1
sha256 b236686f561584baca94ce7143f4ac8bb2dbb6506aab4e0134c11a85e10a4138
size 105142
subdir linux-64
timestamp 1668622833770
version 0.3.0
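
The md5 and sha256 fields above can be used to verify a downloaded artifact. A short sketch, assuming the standard conda file naming derived from the name/version/build fields (the local path is a placeholder):

```python
# Sketch: verify a downloaded conda package against the checksums listed above.
import hashlib

def file_digest(path: str, algo: str) -> str:
    """Stream the file through the given hash algorithm and return the hex digest."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file name, assembled from the name/version/build fields above.
path = "evaluate-0.3.0-py310h06a4308_0.tar.bz2"

assert file_digest(path, "md5") == "a3b5e7179fbf7a8de5f258a781da5639"
assert file_digest(path, "sha256") == "b236686f561584baca94ce7143f4ac8bb2dbb6506aab4e0134c11a85e10a4138"
print("checksums match")
```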