This project is about explaining what machine learning classifiers (or models) are doing. At the moment, it supports explaining individual predictions for text classifiers, classifiers that act on tables (NumPy arrays of numerical or categorical data), or image classifiers, via a package called lime (short for local interpretable model-agnostic explanations).
| Field | Value |
| --- | --- |
| Uploaded | Mon Mar 31 22:31:01 2025 |
| arch | x86_64 |
| build | py311h06a4308_0 |
| depends | matplotlib-base, numpy <2.0a0, python >=3.11,<3.12.0a0, scikit-image >=0.12, scikit-learn >=0.18, scipy, tqdm |
| license | BSD-2-Clause |
| license_family | BSD |
| md5 | c3274cc1032ca1984a542842b929cec8 |
| name | lime |
| platform | linux |
| sha1 | 0e946d63986ab853a864371ec74b04f0e8b95258 |
| sha256 | e9aa92276f4ede50d6db6c7cecd030f8aad991f68f78d846ec6b77828952283b |
| size (bytes) | 314652 |
| subdir | linux-64 |
| timestamp | 1692608686828 |
| version | 0.2.0.1 |