Package: lime 0.5.3.9000

Emil Hvitfeldt

lime: Local Interpretable Model-Agnostic Explanations

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.
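The two-step workflow described above (fit an explainer to the training data, then explain individual predictions with locally perturbed samples) can be sketched as follows. This is a minimal illustration, assuming the caret and randomForest packages are installed; the model choice and `n_features` value are arbitrary:

```r
library(caret)
library(lime)

# Train any supported black box model (here a random forest via caret)
model <- train(iris[, 1:4], iris[[5]], method = "rf")

# Step 1: create an explainer from the training data and the model
explainer <- lime(iris[, 1:4], model)

# Step 2: explain specific predictions by sampling perturbations
# around each observation and fitting a simple local model
explanation <- explain(iris[1:4, 1:4], explainer,
                       n_labels = 1, n_features = 2)

# Visualise which features drove each prediction
plot_features(explanation)
```

`explain()` returns a data frame with one row per explained feature, so the result can also be inspected directly instead of plotted.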

Authors: Emil Hvitfeldt [aut, cre], Thomas Lin Pedersen [aut], Michaël Benesty [aut]

lime_0.5.3.9000.tar.gz
lime_0.5.3.9000.zip (r-4.5) | lime_0.5.3.9000.zip (r-4.4) | lime_0.5.3.9000.zip (r-4.3)
lime_0.5.3.9000.tgz (r-4.4-x86_64) | lime_0.5.3.9000.tgz (r-4.4-arm64) | lime_0.5.3.9000.tgz (r-4.3-x86_64) | lime_0.5.3.9000.tgz (r-4.3-arm64)
lime_0.5.3.9000.tar.gz (r-4.5-noble) | lime_0.5.3.9000.tar.gz (r-4.4-noble)
lime_0.5.3.9000.tgz (r-4.4-emscripten) | lime_0.5.3.9000.tgz (r-4.3-emscripten)
lime.pdf | lime.html
lime/json (API)
NEWS

# Install 'lime' in R:
install.packages('lime', repos = c('https://thomasp85.r-universe.dev', 'https://cloud.r-project.org'))


Bug tracker: https://github.com/thomasp85/lime/issues

Uses libs:
  • C++ – GNU Standard C++ Library v3
Datasets: stop_words_sentences, test_sentences, train_sentences

On CRAN:

caret, model-checking, model-evaluation, modeling

18 exports | 481 stars | 10.99 score | 39 dependencies | 1 dependent | 8 mentions | 700 scripts | 1.6k downloads

Last updated 2 years ago from 301be637ef. Checks: OK: 9. Indexed: yes.

Target              | Result | Date
Doc / Vignettes     | OK     | Sep 07 2024
R-4.5-win-x86_64    | OK     | Sep 07 2024
R-4.5-linux-x86_64  | OK     | Sep 07 2024
R-4.4-win-x86_64    | OK     | Sep 07 2024
R-4.4-mac-x86_64    | OK     | Sep 07 2024
R-4.4-mac-aarch64   | OK     | Sep 07 2024
R-4.3-win-x86_64    | OK     | Sep 07 2024
R-4.3-mac-x86_64    | OK     | Sep 07 2024
R-4.3-mac-aarch64   | OK     | Sep 07 2024

Exports: .load_image_example, .load_text_example, as_classifier, as_regressor, default_tokenize, explain, interactive_text_explanations, lime, model_type, plot_explanations, plot_features, plot_image_explanation, plot_superpixels, plot_text_explanations, predict_model, render_text_explanations, slic, text_explanations_output

Dependencies: assertthat, cli, codetools, colorspace, fansi, farver, foreach, ggplot2, glmnet, glue, gower, gtable, isoband, iterators, labeling, lattice, lifecycle, magrittr, MASS, Matrix, mgcv, munsell, nlme, pillar, pkgconfig, R6, RColorBrewer, Rcpp, RcppEigen, rlang, scales, shape, stringi, survival, tibble, utf8, vctrs, viridisLite, withr

Understanding lime

Rendered from Understanding_lime.Rmd using knitr::rmarkdown on Sep 07 2024.

Last update: 2021-02-24
Started: 2017-09-11

Readme and manuals

Help Manual

Help page                                                  | Topics
lime: Local Interpretable Model-Agnostic Explanations      | lime-package, _PACKAGE
Indicate model type to lime                                | as_classifier, as_regressor
Default function to tokenize                               | default_tokenize
Explain model predictions                                  | explain, explain.character, explain.data.frame, explain.imagefile
Interactive explanations                                   | interactive_text_explanations, render_text_explanations, text_explanations_output
Create a model explanation function based on training data | lime, lime.character, lime.data.frame, lime.imagefile
Methods for extending lime's model support                 | model_support, model_type, predict_model
Plot a condensed overview of all explanations              | plot_explanations
Plot the features in an explanation                        | plot_features
Display image explanations as superpixel areas             | plot_image_explanation
Test super pixel segmentation                              | plot_superpixels
Plot text explanations                                     | plot_text_explanations
Stop words list                                            | stop_words_sentences
Sentence corpus - test part                                | test_sentences
Sentence corpus - train part                               | train_sentences
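The model_support help page above covers extending lime to model classes it does not recognise out of the box, via S3 methods for model_type() and predict_model(). A minimal sketch, assuming a hypothetical model class "my_model" (the class name and the wrapped predict call are illustrative, not from the package docs):

```r
library(lime)

# Tell lime whether the custom model is a classifier or a regressor
model_type.my_model <- function(x, ...) {
  "classification"
}

# Tell lime how to get predictions from the custom model.
# For classification, lime expects a data.frame of class probabilities
# with one column per class.
predict_model.my_model <- function(x, newdata, type, ...) {
  # Hypothetical: delegate to the model's own predict method
  as.data.frame(predict(x$fit, newdata = newdata, type = "prob"))
}
```

With these two methods registered, an object of class "my_model" can be passed to lime() and explain() like any natively supported model; alternatively, as_classifier() and as_regressor() can wrap a model when only the type indication is missing.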