Local Interpretable Model-Agnostic Explanations (DataScience.com fork)
Install from the conda-forge channel (the labelled variants are alternatives for older builds):

conda install conda-forge::ds-lime
conda install conda-forge/label/cf201901::ds-lime
conda install conda-forge/label/cf202003::ds-lime
conda install conda-forge/label/gcc7::ds-lime
This project is about explaining what machine learning classifiers (or models) are doing. At the moment, it supports explaining individual predictions for text classifiers and for classifiers that act on tabular data (numpy arrays of numerical or categorical features), via a package called lime (short for local interpretable model-agnostic explanations). This is the DataScience.com fork of lime used in the Skater package (https://github.com/datascienceinc/Skater).
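As a minimal sketch of explaining one individual prediction on tabular data, assuming the fork keeps the upstream `lime` module layout (`lime.lime_tabular.LimeTabularExplainer`) and that scikit-learn is available for the demo model:

```python
# Sketch: explain a single tabular prediction with LIME.
# Assumes the ds-lime fork exposes the same API as upstream lime.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Any classifier with a predict_proba method works; the explainer is model-agnostic.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

# Explain one instance: which features pushed the model toward its prediction?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

The explainer perturbs the instance, queries the model on the perturbed samples, and fits a local linear model whose weights are reported by `as_list()`. Text classifiers are handled analogously via `lime.lime_text.LimeTextExplainer`.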