Showing 6 of 6 results.
giuseppec/iml: Interpretable Machine Learning
Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <doi:10.48550/arXiv.1801.01489>, accumulated local effects plots described by Apley (2018) <doi:10.48550/arXiv.1612.08468>, partial dependence plots described by Friedman (2001) <www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman et al. <doi:10.1214/07-AOAS148>, and tree surrogate models.
Maintained by Giuseppe Casalicchio. Last updated 1 month ago.
495 stars 13.05 score 642 scripts 4 dependents
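As a quick illustration of the workflow (a minimal sketch, not taken from the package documentation; it assumes the 'randomForest' and 'MASS' packages are installed), most iml methods operate on a single Predictor wrapper around a fitted model:

```r
library(iml)
library(randomForest)

# Fit any model; iml is model-agnostic, a random forest is just an example.
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap model and data in a Predictor object; the iml methods consume this wrapper.
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Accumulated local effects for one feature (Apley 2018)
ale <- FeatureEffect$new(predictor, feature = "rm", method = "ale")
plot(ale)

# Shapley values for a single observation (Strumbelj et al. 2014)
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)
```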
holgstr/fmeffects: Model-Agnostic Interpretations with Forward Marginal Effects
Create local, regional, and global explanations for any machine learning model with forward marginal effects. You provide a model and data, and 'fmeffects' computes feature effects. The package is based on the theory in: C. A. Scholbeck, G. Casalicchio, C. Molnar, B. Bischl, and C. Heumann (2022) <doi:10.48550/arXiv.2201.08837>.
Maintained by Holger Löwe. Last updated 5 months ago.
2 stars 5.73 score 6 scripts
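A hedged sketch of the advertised "model plus data in, feature effects out" workflow (not from the package documentation): it assumes the 'bikes' example data shipped with fmeffects, a target column named 'count', an mlr3 'regr.ranger' learner, and a features = c(temp = 1) interface for fme(); check ?fme for the exact argument structure in your version.

```r
library(fmeffects)
library(mlr3verse)  # provides as_task_regr() and lrn()

# Example data assumed to ship with fmeffects; target column assumed to be 'count'.
data("bikes", package = "fmeffects")

task   <- as_task_regr(bikes, target = "count", id = "bikes")
forest <- lrn("regr.ranger")$train(task)

# Forward marginal effect of raising 'temp' by one unit, for every observation
# (the 'features' argument is assumed to take named step sizes; see ?fme).
effects <- fme(model = forest, data = bikes, features = c(temp = 1))
plot(effects)
```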
f-rousset/spaMM: Mixed-Effect Models, with or without Spatial Random Effects
Inference based on models with or without spatially-correlated random effects, multivariate responses, or non-Gaussian random effects (e.g., Beta). Variation in residual variance (heteroscedasticity) can itself be represented by a mixed-effect model. Both classical geostatistical models (Rousset and Ferdy 2014 <doi:10.1111/ecog.00566>), and Markov random field models on irregular grids (as considered in the 'INLA' package, <https://www.r-inla.org>), can be fitted, with distinct computational procedures exploiting the sparse matrix representations for the latter case and other autoregressive models. Laplace approximations are used for likelihood or restricted likelihood. Penalized quasi-likelihood and other variants discussed in the h-likelihood literature (Lee and Nelder 2001 <doi:10.1093/biomet/88.4.987>) are also implemented.
Maintained by François Rousset. Last updated 10 months ago.
4.94 score 208 scripts 5 dependents
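A minimal sketch of the fitting interface, assuming the 'blackcap' and 'wafers' example data sets that ship with spaMM (variable names as used in the package documentation):

```r
library(spaMM)

# Spatial mixed model: Matern-correlated random effect over coordinates.
data("blackcap")
spatial_fit <- fitme(migStatus ~ means + Matern(1 | longitude + latitude),
                     data = blackcap)
summary(spatial_fit)

# Heteroscedastic residuals: the residual dispersion is itself modelled
# via 'resid.model' as a function of a covariate.
data("wafers")
hetero_fit <- fitme(y ~ X1 + (1 | batch),
                    family = Gamma(log),
                    resid.model = ~ X3,
                    data = wafers)
summary(hetero_fit)
```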
forestry-labs/distillML: Model Distillation and Interpretability Methods for Machine Learning Models
Provides several methods for model distillation and interpretability for general black-box machine learning models and treatment effect estimation methods. For details on the algorithms implemented, see <https://forestry-labs.github.io/distillML/index.html> (Brian Cho, Theo F. Saarinen, Jasjeet S. Sekhon, and Simon Walter).
Maintained by Theo Saarinen. Last updated 2 years ago.
bart, distillation-model, explainable-machine-learning, explainable-ml, interpretability, interpretable-machine-learning, machine-learning, model, random-forest, xgboost
7 stars 3.92 score 12 scripts
dyfanjones/sagemaker: R SDK for AWS SageMaker
A library for training and deploying machine learning models on Amazon SageMaker <https://aws.amazon.com/sagemaker/> from R through the `paws` SDK.
Maintained by Dyfan Jones. Last updated 3 years ago.
amazon-sagemaker, aws, machine-learning, sagemaker, sdk
12 stars 2.78 score 6 scripts
dyfanjones/sagemaker.mlcore: SageMaker Machine Learning Core Classes and Methods
Core classes and methods for the `sagemaker` machine learning R SDK.
Maintained by Dyfan Jones. Last updated 3 years ago.
amazon-sagemaker, aws, machine-learning, sagemaker, sdk
2.65 score 3 dependents