Showing 19 of 19 results
pbastide
PhylogeneticEM:Automatic Shift Detection using a Phylogenetic EM
Implementation of the automatic shift detection method for Brownian Motion (BM) or Ornstein–Uhlenbeck (OU) models of trait evolution on phylogenies. Some tools to handle equivalent shift configurations are also available. See Bastide et al. (2017) <doi:10.1111/rssb.12206> and Bastide et al. (2018) <doi:10.1093/sysbio/syy005>.
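As context for the BM trait-evolution model named in the description, here is a minimal Brownian-motion simulation on a phylogeny. This is an illustrative Python sketch only, not the package's R API; the dict-based tree encoding and the function name `simulate_bm` are assumptions for the example.

```python
import random

def simulate_bm(tree, root_value=0.0, sigma2=1.0, seed=0):
    """Simulate a Brownian-motion (BM) trait along a phylogeny.

    `tree` maps node -> (parent, branch_length); the root's parent is None.
    The trait change along a branch is Normal(0, sigma2 * branch_length).
    Assumes parents appear before their children in insertion order.
    (Hypothetical encoding for illustration; PhylogeneticEM works on
    standard R phylo objects.)
    """
    rng = random.Random(seed)
    values = {}
    for node, (parent, length) in tree.items():
        if parent is None:
            values[node] = root_value  # trait value at the root
        else:
            # accumulate a Gaussian increment along the branch
            values[node] = values[parent] + rng.gauss(0.0, (sigma2 * length) ** 0.5)
    return values
```

A "shift" in the package's sense is an abrupt change of the process's parameters on one branch; detecting such shifts from tip values is what the EM method automates.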
Maintained by Paul Bastide. Last updated 1 month ago.
17 stars 6.81 score 47 scripts
bioc
scPCA:Sparse Contrastive Principal Component Analysis
A toolbox for sparse contrastive principal component analysis (scPCA) of high-dimensional biological data. scPCA combines the stability and interpretability of sparse PCA with contrastive PCA's ability to disentangle biological signal from unwanted variation through the use of control data. Also implements and extends cPCA.
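The contrastive step the description refers to can be sketched in a few lines: eigendecompose the difference between the target and background covariance matrices. This is an illustrative Python sketch of plain contrastive PCA under stated assumptions; the actual package is an R/Bioconductor toolbox and additionally imposes sparsity on the loadings.

```python
import numpy as np

def contrastive_pca(target, background, alpha=1.0, n_components=2):
    """Sketch of contrastive PCA: find directions of high target variance
    and low background variance by eigendecomposing the contrastive
    covariance C_target - alpha * C_background."""
    c_target = np.cov(target, rowvar=False)
    c_background = np.cov(background, rowvar=False)
    contrast = c_target - alpha * c_background
    # eigh returns eigenvalues in ascending order; take the largest ones
    vals, vecs = np.linalg.eigh(contrast)
    order = np.argsort(vals)[::-1][:n_components]
    components = vecs[:, order]
    return target @ components  # target data projected onto the contrast
```

The contrast parameter `alpha` trades off maximizing target variance against minimizing background (control) variance, which is how unwanted variation shared with the control data is removed.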
Maintained by Philippe Boileau. Last updated 2 months ago.
principalcomponentgeneexpressiondifferentialexpressionsequencingmicroarrayrnaseqbioconductorcontrastive-learningdimensionality-reduction
12 stars 5.94 score 29 scripts
topepo
sparseLDA:Sparse Discriminant Analysis
Performs sparse linear discriminant analysis for Gaussians and mixture of Gaussian models.
Maintained by Max Kuhn. Last updated 9 years ago.
7 stars 5.50 score 45 scripts 3 dependents
bioc
MOSClip:Multi Omics Survival Clip
A topological pathway analysis tool able to integrate multi-omics data. It finds survival-associated modules or significant modules for two-class analysis. The tool has two main methods: pathway tests and module tests. The latter allows the user to dig inside the pathways themselves.
Maintained by Paolo Martini. Last updated 5 months ago.
softwarestatisticalmethodgraphandnetworksurvivalregressiondimensionreductionpathwaysreactome
5.34 score 5 scripts
treynkens
rospca:Robust Sparse PCA using the ROSPCA Algorithm
Implementation of robust sparse PCA using the ROSPCA algorithm of Hubert et al. (2016) <DOI:10.1080/00401706.2015.1093962>.
Maintained by Tom Reynkens. Last updated 4 months ago.
13 stars 4.77 score 45 scripts
jfukuyama
treeDA:Tree-Based Discriminant Analysis
Performs sparse discriminant analysis on a combination of node and leaf predictors when the predictor variables are structured according to a tree, as described in Fukuyama et al. (2017) <doi:10.1371/journal.pcbi.1005706>.
Maintained by Julia Fukuyama. Last updated 4 years ago.
3.70 score 9 scripts
kangjian2016
brainKCCA:Region-Level Connectivity Network Construction via Kernel Canonical Correlation Analysis
Designed to calculate connectivity between (or among) brain regions and to plot connection lines. A summary function is also included to summarize group-level connectivity networks. See Kang, Jian (2016) <doi:10.1016/j.neuroimage.2016.06.042>.
Maintained by Jian Kang. Last updated 6 years ago.
3.70 score 5 scripts
acsala
sRDA:Sparse Redundancy Analysis
Sparse redundancy analysis for high dimensional (biomedical) data. Directional multivariate analysis to express the maximum variance in the predicted data set by a linear combination of variables of the predictive data set. Implemented in a partial least squares framework, for more details see Csala et al. (2017) <doi:10.1093/bioinformatics/btx374>.
Maintained by Attila Csala. Last updated 11 months ago.
3 stars 3.18 score 5 scripts
ronho
prinvars:Principal Variables
Provides methods for reducing the number of features within a data set. See Bauer JO (2021) <doi:10.1145/3475827.3475832> and Bauer JO, Drabant B (2021) <doi:10.1016/j.jmva.2021.104754> for more information on principal loading analysis.
Maintained by Ron Holzapfel. Last updated 2 years ago.
2 stars 3.00 score 2 scripts
jaydevine
pheble:Classifying High-Dimensional Phenotypes with Ensemble Learning
A system for binary and multi-class classification of high-dimensional phenotypic data using ensemble learning. By combining predictions from different classification models, this package attempts to improve performance over individual learners. The pre-processing, training, validation, and testing are performed end-to-end to minimize user input and simplify the process of classification.
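The prediction-combining idea described above can be illustrated with the simplest ensemble rule, a majority vote across classifiers. This is a hedged Python sketch only; pheble is an R package with its own pre-processing and ensemble strategies, and `majority_vote` is a hypothetical helper name.

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions from several classifiers by majority vote.

    `predictions` has shape (n_models, n_samples); ties go to the label
    that np.unique lists first (i.e., the smallest label).
    """
    predictions = np.asarray(predictions)
    combined = []
    for sample in predictions.T:  # one column per sample
        labels, counts = np.unique(sample, return_counts=True)
        combined.append(labels[np.argmax(counts)])
    return np.array(combined)
```

Voting (or averaging predicted probabilities) is the basic mechanism by which an ensemble can outperform its individual learners when their errors are not perfectly correlated.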
Maintained by Jay Devine. Last updated 2 years ago.
2.70 score
cran
bujar:Buckley-James Regression for Survival Data with High-Dimensional Covariates
Buckley-James regression for right-censored survival data with high-dimensional covariates. Implementations for survival data include boosting with componentwise linear least squares, componentwise smoothing splines, regression trees and MARS. Other high-dimensional tools include penalized regression for survival data. See Wang and Wang (2010) <doi:10.2202/1544-6115.1550>.
Maintained by Zhu Wang. Last updated 2 years ago.
2.30 score
guangbaog
SOPC:The Sparse Online Principal Component Estimation Algorithm
The sparse online principal component method not only processes online (streaming) data sets but also obtains a sparse solution for them. The philosophy of the package is described in Guo G. (2022) <doi:10.1007/s00180-022-01270-z>.
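A minimal sketch of the online ingredient: maintain a running mean and covariance as batches arrive and re-extract principal components on demand. Illustrative Python only, under the stated assumptions, and without the sparsity constraint that SOPC adds to the loadings; the class name `OnlinePCA` is hypothetical.

```python
import numpy as np

class OnlinePCA:
    """Online PCA sketch: Welford-style running mean/covariance updates,
    with components recomputed from the current covariance estimate."""

    def __init__(self, n_features):
        self.n = 0
        self.mean = np.zeros(n_features)
        self.cov = np.zeros((n_features, n_features))  # unnormalized scatter

    def update(self, batch):
        # incorporate each new observation without storing past data
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov += np.outer(delta, x - self.mean)

    def components(self, k=2):
        # eigenvectors of the current sample covariance, largest first
        vals, vecs = np.linalg.eigh(self.cov / max(self.n - 1, 1))
        return vecs[:, np.argsort(vals)[::-1][:k]]
```

The point of the online formulation is constant memory in the number of observations: the stream is summarized by (n, mean, scatter) alone.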
Maintained by Guangbao Guo. Last updated 2 years ago.
2.30 score 11 scripts 1 dependent
dcauseur
FADA:Variable Selection for Supervised Classification in High Dimension
The functions provided in the FADA (Factor Adjusted Discriminant Analysis) package aim at performing supervised classification of high-dimensional and correlated profiles. The procedure combines a decorrelation step, based on a factor model of the dependence among covariates, with a classification method. The available methods are lasso-regularized logistic regression (see Friedman et al. (2010)), sparse linear discriminant analysis (see Clemmensen et al. (2011)), and shrinkage linear and diagonal discriminant analysis (see Ahdesmaki et al. (2010)). Additional classification methods can be applied to the decorrelated data provided by the package.
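The decorrelate-then-classify pipeline described above can be sketched generically: transform the covariates so they are (approximately) uncorrelated, then run any classifier on the transformed data. This illustrative Python sketch whitens with the training covariance and uses a nearest-centroid classifier; FADA itself estimates a factor model for the decorrelation and offers the sparse/shrinkage classifiers listed above, so both choices here are stand-in assumptions.

```python
import numpy as np

def decorrelate_then_classify(train_x, train_y, test_x):
    """Whiten covariates with the training covariance, then classify
    by nearest centroid in the decorrelated space (illustration only)."""
    d = train_x.shape[1]
    cov = np.cov(train_x, rowvar=False) + 1e-6 * np.eye(d)  # ridge for stability
    # inverse matrix square root of the covariance via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    whiten = vecs @ np.diag(vals ** -0.5) @ vecs.T
    zx, zt = train_x @ whiten, test_x @ whiten
    centroids = {c: zx[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(zt - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```

The rationale is the same as in FADA: strong correlation among covariates degrades high-dimensional classifiers, so removing (or modeling) it first makes the downstream classifier's independence-style assumptions more tenable.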
Maintained by David Causeur. Last updated 5 years ago.
1.90 score 6 scripts
cran
funFEM:Clustering in the Discriminative Functional Subspace
The funFEM algorithm (Bouveyron et al., 2014) clusters functional data by modeling the curves within a common and discriminative functional subspace.
Maintained by Charles Bouveyron. Last updated 3 years ago.
1.48 score 1 dependent
cran
LINselect:Selection of Linear Estimators
Estimates the mean of a Gaussian vector by choosing among a large collection of estimators, following the method developed by Y. Baraud, C. Giraud and S. Huet (2014) <doi:10.1214/13-AIHP539>. In particular, it solves the variable selection problem by choosing the best predictor among predictors emanating from different methods such as the lasso, elastic net, adaptive lasso, PLS, and random forests. Moreover, it can be applied to choose the tuning parameter in a Gauss-lasso procedure.
Maintained by Benjamin Auder. Last updated 1 year ago.
1.48 score 1 dependent
cran
FisherEM:The FisherEM Algorithm to Simultaneously Cluster and Visualize High-Dimensional Data
The FisherEM algorithm, proposed by Bouveyron & Brunet (2012) <doi:10.1007/s11222-011-9249-9>, is an efficient method for the clustering of high-dimensional data. FisherEM models and clusters the data in a discriminative and low-dimensional latent subspace. It also provides a low-dimensional representation of the clustered data. A sparse version of the Fisher-EM algorithm is also provided.
Maintained by Charles Bouveyron. Last updated 5 years ago.
1.00 score
cran
funLBM:Model-Based Co-Clustering of Functional Data
The funLBM algorithm simultaneously clusters the rows and columns of a data matrix in which each entry is a function or a time series.
Maintained by Charles Bouveyron. Last updated 3 years ago.
1.00 score
guangbaog
DLEGFM:Distributed Loading Estimation for General Factor Model
The loading estimation method is based on a general factor model and solves for estimates of the loadings and specific variances. The philosophy of the package is described in Guo G. (2022) <doi:10.1007/s00180-022-01270-z>.
Maintained by Guangbao Guo. Last updated 1 year ago.
1.00 score