Package: lime 0.5.3.9000
lime: Local Interpretable Model-Agnostic Explanations
When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used to explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arxiv:1602.04938>.
Authors: Thomas Lin Pedersen [aut, cre], Michaël Benesty [aut]
Downloads:
- Source: lime_0.5.3.9000.tar.gz
- Windows binaries: lime_0.5.3.9000.zip (r-4.5), lime_0.5.3.9000.zip (r-4.4), lime_0.5.3.9000.zip (r-4.3)
- macOS binaries: lime_0.5.3.9000.tgz (r-4.4-x86_64), lime_0.5.3.9000.tgz (r-4.4-arm64), lime_0.5.3.9000.tgz (r-4.3-x86_64), lime_0.5.3.9000.tgz (r-4.3-arm64)
- Linux binaries: lime_0.5.3.9000.tar.gz (r-4.5-noble), lime_0.5.3.9000.tar.gz (r-4.4-noble)
- WebAssembly: lime_0.5.3.9000.tgz (r-4.4-emscripten), lime_0.5.3.9000.tgz (r-4.3-emscripten)
Documentation: lime.pdf | lime.html
lime/json (API)
NEWS
# Install 'lime' in R:
install.packages('lime', repos = c('https://thomasp85.r-universe.dev', 'https://cloud.r-project.org'))
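After installation, the typical workflow is to build an explainer from the training data with `lime()` and then explain individual predictions with `explain()`. A minimal sketch, assuming a classifier trained with 'caret' (the iris split and random-forest model here are illustrative, not prescribed by the package):

```r
library(caret)
library(lime)

# Illustrative split: hold out a few rows of iris to explain later
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris$Species[-(1:5)]

# Train any model lime supports out of the box (caret models are supported)
model <- train(iris_train, iris_lab, method = "rf")

# Build the explainer from the training data, then explain new predictions
explainer   <- lime(iris_train, model)
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Plot which features drove each individual prediction
plot_features(explanation)
```

`explain()` returns a data frame with one row per feature per explained case, which the `plot_*` functions visualize.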
Bug tracker: https://github.com/thomasp85/lime/issues
- stop_words_sentences - Stop words list
- test_sentences - Sentence corpus - test part
- train_sentences - Sentence corpus - train part
Keywords: caret, model-checking, model-evaluation, modeling
Last updated 2 years ago from: 301be637ef. Checks: OK: 9. Indexed: yes.
Target | Result | Date |
---|---|---|
Doc / Vignettes | OK | Nov 06 2024 |
R-4.5-win-x86_64 | OK | Nov 06 2024 |
R-4.5-linux-x86_64 | OK | Nov 06 2024 |
R-4.4-win-x86_64 | OK | Nov 06 2024 |
R-4.4-mac-x86_64 | OK | Nov 06 2024 |
R-4.4-mac-aarch64 | OK | Nov 06 2024 |
R-4.3-win-x86_64 | OK | Nov 06 2024 |
R-4.3-mac-x86_64 | OK | Nov 06 2024 |
R-4.3-mac-aarch64 | OK | Nov 06 2024 |
Exports: .load_image_example, .load_text_example, as_classifier, as_regressor, default_tokenize, explain, interactive_text_explanations, lime, model_type, plot_explanations, plot_features, plot_image_explanation, plot_superpixels, plot_text_explanations, predict_model, render_text_explanations, slic, text_explanations_output
Dependencies: assertthat, cli, codetools, colorspace, fansi, farver, foreach, ggplot2, glmnet, glue, gower, gtable, isoband, iterators, labeling, lattice, lifecycle, magrittr, MASS, Matrix, mgcv, munsell, nlme, pillar, pkgconfig, R6, RColorBrewer, Rcpp, RcppEigen, rlang, scales, shape, stringi, survival, tibble, utf8, vctrs, viridisLite, withr
Readme and manuals
Help Manual
Help page | Topics |
---|---|
lime: Local Interpretable Model-Agnostic Explanations | lime-package _PACKAGE |
Indicate model type to lime | as_classifier as_regressor |
Default function to tokenize | default_tokenize |
Explain model predictions | explain explain.character explain.data.frame explain.imagefile |
Interactive explanations | interactive_text_explanations render_text_explanations text_explanations_output |
Create a model explanation function based on training data | lime lime.character lime.data.frame lime.imagefile |
Methods for extending lime's model support | model_support model_type predict_model |
Plot a condensed overview of all explanations | plot_explanations |
Plot the features in an explanation | plot_features |
Display image explanations as superpixel areas | plot_image_explanation |
Test super pixel segmentation | plot_superpixels |
Plot text explanations | plot_text_explanations |
Stop words list | stop_words_sentences |
Sentence corpus - test part | test_sentences |
Sentence corpus - train part | train_sentences |
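The bundled sentence corpora above support the text-explanation workflow, and `default_tokenize` is the tokenizer used when perturbing text inputs. A small sketch of inspecting them (the column layout shown in the comment is an assumption; check `?train_sentences` for the authoritative description):

```r
library(lime)

# Labelled sentence corpora shipped with the package for text demos
data(train_sentences)
data(test_sentences)
str(train_sentences)  # expected: a data.frame of sentences with class labels

# The default tokenizer splits a string into the tokens lime will perturb
default_tokenize("Local surrogate models explain individual predictions")
```

Custom tokenizers with the same signature can be passed to `explain()` for text input in place of `default_tokenize`.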