Showing 5 of 5 results
semtree: Recursive Partitioning for Structural Equation Models
SEM Trees and SEM Forests -- an extension of model-based decision trees and forests to Structural Equation Models (SEM). SEM trees hierarchically split empirical data into homogeneous groups each sharing similar data patterns with respect to a SEM by recursively selecting optimal predictors of these differences. SEM forests are an extension of SEM trees. They are ensembles of SEM trees each built on a random sample of the original data. By aggregating over a forest, we obtain measures of variable importance that are more robust than measures from single trees. A description of the method was published by Brandmaier, von Oertzen, McArdle, & Lindenberger (2013) <doi:10.1037/a0030001> and Arnold, Voelkle, & Brandmaier (2020) <doi:10.3389/fpsyg.2020.564403>.
Maintained by Andreas M. Brandmaier. Last updated 3 months ago.
bigdata, decision-tree, forest, multivariate, randomforest, recursive-partitioning, sem, statistical-modeling, structural-equation-modeling, structural-equation-models
15 stars 8.56 score 68 scripts
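A minimal sketch of the workflow the description outlines, assuming semtree's lavaan interface; the variable names and simulated data here are illustrative, not from the package:

```r
# Sketch: grow a SEM tree and a SEM forest from a one-factor model.
library(lavaan)
library(semtree)

set.seed(1)
n  <- 400
df <- data.frame(
  x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n),
  age   = sample(20:80, n, replace = TRUE),        # candidate split predictor
  group = factor(sample(c("a", "b"), n, TRUE))     # candidate split predictor
)

# Template model: one latent factor measured by x1-x3.
fit <- lavaan::cfa("f =~ x1 + x2 + x3", data = df)

# Recursively partition df on the covariates; each leaf is a
# homogeneous subgroup with its own SEM parameter estimates.
tree <- semtree(model = fit, data = df)

# A forest aggregates trees grown on resamples of the data and
# yields the more robust variable-importance measures mentioned above.
forest <- semforest(model = fit, data = df)
vim    <- varimp(forest)
```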
SPONGE: Sparse Partial Correlations On Gene Expression (Bioconductor)
This package provides methods to efficiently detect competitive endogenous RNA interactions between two genes. Such interactions are mediated by one or several miRNAs, so both gene and miRNA expression data for a large number of samples are needed as input. The SPONGE package now also includes spongEffects: ceRNA modules offer patient-specific insights into the miRNA regulatory landscape.
Maintained by Markus List. Last updated 5 months ago.
GeneExpression, Transcription, GeneRegulation, NetworkInference, Transcriptomics, SystemsBiology, Regression, RandomForest, MachineLearning
6.66 score 38 scripts 1 dependents
forestError: A Unified Framework for Random Forest Prediction Error Estimation
Estimates the conditional error distributions of random forest predictions and common parameters of those distributions, including conditional misclassification rates, conditional mean squared prediction errors, conditional biases, and conditional quantiles, by out-of-bag weighting of out-of-bag prediction errors as proposed by Lu and Hardin (2021). This package is compatible with several existing packages that implement random forests in R.
Maintained by Benjamin Lu. Last updated 4 years ago.
inference, intervals, machine-learning, machinelearning, prediction, random-forest, randomforest, statistics
26 stars 4.62 score 16 scripts
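A sketch of the out-of-bag error estimation described above, assuming forestError's quantForestError() interface on a randomForest fit; the simulated data are illustrative:

```r
# Sketch: estimate conditional prediction-error quantities
# for a random forest via out-of-bag weighting.
library(randomForest)
library(forestError)

set.seed(1)
n <- 300
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
y <- X$x1 + 0.5 * X$x2 + rnorm(n)
train <- 1:200
test  <- 201:n

# keep.inbag = TRUE is needed so out-of-bag weights can be formed.
rf <- randomForest(X[train, ], y[train], keep.inbag = TRUE)

# Returns, per test point, estimated MSPE, bias, and prediction
# intervals built from out-of-bag prediction errors.
err <- quantForestError(rf,
                        X.train = X[train, ],
                        X.test  = X[test, ],
                        Y.train = y[train])
head(err$estimates)
```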
forestControl: Approximate False Positive Rate Control in Selection Frequency for Random Forest
Approximate false positive rate control in selection frequency for random forests, using the methods described by Ender Konukoglu and Melanie Ganz (2014) <arXiv:1410.2838>. Provides methods for calculating the selection-frequency threshold at a given false positive rate, and for feature selection based on selection-frequency false positive rates.
Maintained by Tom Wilson. Last updated 3 years ago.
2 stars 4.00 score 7 scripts
broomstick: Convert Decision Tree Objects into Tidy Data Frames
Convert decision tree objects into tidy data frames using the framework laid out by the broom package. This means that decision tree output can be easily reshaped, processed, and combined with tools like 'dplyr', 'tidyr', and 'ggplot2'. Like broom, broomstick provides three S3 generics: tidy, to summarise decision-tree-specific features (tidy returns the variable importance table); augment, which adds columns to the original data such as predictions and residuals; and glance, which provides a one-row summary of model-level statistics.
Maintained by Nicholas Tierney. Last updated 1 year ago.
broom, decision-trees, gbm, machine-learning, randomforest, rpart, statistical-learning
29 stars 3.59 score 27 scripts
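The three generics described above can be sketched on an rpart tree; this assumes broomstick's broom-style interface, and uses the kyphosis data that ships with rpart:

```r
# Sketch: broom-style tidying of a decision tree with broomstick.
library(rpart)
library(broomstick)

fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis)

tidy(fit)     # variable importance table, one row per predictor
augment(fit)  # original data plus predictions and residuals
glance(fit)   # one-row summary of model-level statistics
```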