This project is about explaining what machine learning classifiers (or models) are doing. It currently supports explaining individual predictions for text classifiers, for classifiers that act on tables (NumPy arrays of numerical or categorical data), and for image classifiers, via a package called lime (short for local interpretable model-agnostic explanations).
| field | value |
| --- | --- |
| uploaded | Sun Mar 30 23:45:46 2025 |
| arch | x86_64 |
| build | py310h06a4308_0 |
| depends | matplotlib-base, numpy <2.0a0, python >=3.10,<3.11.0a0, scikit-image >=0.12, scikit-learn >=0.18, scipy, tqdm |
| license | BSD-2-Clause |
| license_family | BSD |
| md5 | 34d78c8637d41ac5f48ed508dd3daacd |
| name | lime |
| platform | linux |
| sha1 | bec09b5303b5f7912ab83eb890801f318d856815 |
| sha256 | 5cf43f442b463ee50127005974087697021eaf15535f55d01ae686728f148143 |
| size | 291500 |
| subdir | linux-64 |
| timestamp | 1692608524672 |
| version | 0.2.0.1 |