Showing 178 of total 178 results
skoestlmeier
monotonicity:Test for Monotonicity in Expected Asset Returns, Sorted by Portfolios
Test for monotonicity in financial variables sorted by portfolios. It is conventional practice in empirical research to form portfolios of assets ranked by a certain sort variable. A t-test is then used to consider the mean return spread between the portfolios with the highest and lowest values of the sort variable. Yet comparing only the average returns on the top and bottom portfolios does not provide a sufficient way to test for a monotonic relation between expected returns and the sort variable. This package provides nonparametric tests for the full set of monotonic patterns by Patton, A. and Timmermann, A. (2010) <doi:10.1016/j.jfineco.2010.06.006> and compares the proposed results with extant alternatives such as t-tests, Bonferroni bounds, and multivariate inequality tests through empirical applications and simulations.
Maintained by Siegfried Köstlmeier. Last updated 3 years ago.
monotonicity portfolio-analysis
69.0 match 10 stars 3.70 score 10 scripts
cran
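The monotonic relation (MR) test of Patton and Timmermann (2010) described above can be sketched as follows. This is a rough illustration under my own naming (`mr_test` is not the package's API), and a plain iid bootstrap stands in for the stationary block bootstrap used in the paper:

```python
import numpy as np

def mr_test(returns, n_boot=1000, seed=0):
    """Sketch of a Patton-Timmermann-style monotonic relation (MR) test.

    returns: (T, N) array of period returns for N portfolios ordered by the
    sort variable. H0: no monotonic pattern; H1: strictly increasing expected
    returns. Reject H0 when the returned p-value is small.
    """
    rng = np.random.default_rng(seed)
    d = np.diff(returns, axis=1)              # adjacent return spreads, (T, N-1)
    j_obs = d.mean(axis=0).min()              # J = smallest adjacent mean spread
    t_len = d.shape[0]
    exceed = 0
    for _ in range(n_boot):
        idx = rng.integers(0, t_len, t_len)   # iid resample of time periods
        d_b = d[idx] - d.mean(axis=0)         # recentre spreads under H0
        if d_b.mean(axis=0).min() > j_obs:
            exceed += 1
    return j_obs, exceed / n_boot
```

With portfolio means that genuinely increase in the sort variable, the observed J is positive and few recentred bootstrap draws exceed it.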
scam:Shape Constrained Additive Models
Generalized additive models under shape constraints on the component functions of the linear predictor. Models can include multiple shape-constrained (univariate and bivariate) and unconstrained terms. Routines of the package 'mgcv' are used to set up the model matrix, print, and plot the results. Multiple smoothing parameter estimation by the Generalized Cross Validation or similar. See Pya and Wood (2015) <doi:10.1007/s11222-013-9448-7> for an overview. A broad selection of shape-constrained smoothers, linear functionals of smooths with shape constraints, and Gaussian models with AR1 residuals.
Maintained by Natalya Pya. Last updated 2 months ago.
29.9 match 5 stars 5.24 score 24 dependents
jamesramsay5
fda:Functional Data Analysis
These functions were developed to support functional data analysis as described in Ramsay, J. O. and Silverman, B. W. (2005) Functional Data Analysis. New York: Springer and in Ramsay, J. O., Hooker, Giles, and Graves, Spencer (2009). Functional Data Analysis with R and Matlab (Springer). The package includes data sets and script files working through many examples, including all but one of the 76 figures in the latter book. Matlab versions are available by ftp from <https://www.psych.mcgill.ca/misc/fda/downloads/FDAfuns/>.
Maintained by James Ramsay. Last updated 4 months ago.
11.7 match 3 stars 12.29 score 2.0k scripts 143 dependents
busingfmta
monotone:Performs Monotone Regression
The monotone package contains a fast up-and-down-blocks implementation for the pool-adjacent-violators algorithm for simple linear ordered monotone regression, including two spin-off functions for unimodal and bivariate monotone regression (see <doi:10.18637/jss.v102.c01>).
Maintained by Frank Busing. Last updated 3 years ago.
64.9 match 1.78 score 10 scripts 2 dependents
danielmork
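The pool-adjacent-violators algorithm (PAVA) behind this package can be sketched in a few lines. This is a minimal Python illustration of the idea, not the package's fast up-and-down-blocks implementation:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares nondecreasing fit (sketch)."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [mean, total weight, count of pooled points]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge the last two blocks while they violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)  # expand each block back to its pooled points
    return out
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, giving `[1, 2.5, 2.5, 4]`.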
dlmtree:Bayesian Treed Distributed Lag Models
Estimation of distributed lag models (DLMs) based on a Bayesian additive regression trees framework. Includes several extensions of DLMs: treed DLMs and distributed lag mixture models (Mork and Wilson, 2023) <doi:10.1111/biom.13568>; treed distributed lag nonlinear models (Mork and Wilson, 2022) <doi:10.1093/biostatistics/kxaa051>; heterogeneous DLMs (Mork et al., 2024) <doi:10.1080/01621459.2023.2258595>; monotone DLMs (Mork and Wilson, 2024) <doi:10.1214/23-BA1412>. The package also includes visualization tools and a 'shiny' interface to help interpret results.
Maintained by Daniel Mork. Last updated 1 month ago.
18.1 match 21 stars 5.40 score 17 scripts
gadenbuie
xaringanthemer:Custom 'xaringan' CSS Themes
Create beautifully color-coordinated and customized themes for your 'xaringan' slides, without writing any CSS. Complete your slide theme with 'ggplot2' themes that match the font and colors used in your slides. Customized styles can be created directly in your slides' 'R Markdown' source file or in a separate external script.
Maintained by Garrick Aden-Buie. Last updated 6 months ago.
css presentation remarkjs slides themes xaringan
7.9 match 444 stars 11.01 score 4.3k scripts 1 dependent
andrija-djurovic
monobin:Monotonic Binning for Credit Rating Models
Performs monotonic binning of numeric risk factors in credit rating model (PD, LGD, EAD) development. All functions handle both binary and continuous target variables. Functions that use isotonic regression in the first stage of the binning process have an additional feature for correcting the minimum percentage of observations and minimum target rate per bin. Additionally, the monotonic trend can be identified from the raw data or, if known in advance, forced via a function argument. Missing values and other possible special values are treated separately from so-called complete cases.
Maintained by Andrija Djurovic. Last updated 3 years ago.
16.0 match 5 stars 4.99 score 43 scripts 3 dependents
harrelfe
Hmisc:Harrell Miscellaneous
Contains many functions useful for data analysis, high-level graphics, utility operations, functions for computing sample size and power, simulation, importing and annotating datasets, imputing missing values, advanced table making, variable clustering, character string manipulation, conversion of R objects to LaTeX and html code, recoding variables, caching, simplified parallel computing, encrypting and decrypting data using a safe workflow, general moving window statistical estimation, and assistance in interpreting principal component analysis.
Maintained by Frank E Harrell Jr. Last updated 2 days ago.
3.0 match 210 stars 17.61 score 17k scripts 750 dependents
njtierney
brolgar:Browse Over Longitudinal Data Graphically and Analytically in R
Provides a framework of tools to summarise, visualise, and explore longitudinal data. It builds upon the tidy time series data frames used in the 'tsibble' package, and is designed to integrate within the 'tidyverse', and 'tidyverts' (for time series) ecosystems. The methods implemented include calculating features for understanding longitudinal data, including calculating summary statistics such as quantiles, medians, and numeric ranges, sampling individual series, identifying individual series representative of a group, and extending the facet system in 'ggplot2' to facilitate exploration of samples of data. These methods are fully described in the paper "brolgar: An R package to Browse Over Longitudinal Data Graphically and Analytically in R", Nicholas Tierney, Dianne Cook, Tania Prvan (2020) <doi:10.32614/RJ-2022-023>.
Maintained by Nicholas Tierney. Last updated 2 months ago.
6.0 match 109 stars 8.73 score 141 scripts
hjopia
smbinning:Scoring Modeling and Optimal Binning
A set of functions to build a scoring model from beginning to end, leading the user through an efficient and organized development process and significantly reducing the time spent on data exploration, variable selection, feature engineering, binning, and model selection, among other recurrent tasks. The package also incorporates monotonic and customized binning, scaling capabilities that transform logistic coefficients into points for better business understanding, and calculates and visualizes classic performance metrics of a classification model.
Maintained by Herman Jopia. Last updated 6 years ago.
12.5 match 20 stars 3.74 score 124 scripts
tidymodels
recipes:Preprocessing and Feature Engineering Steps for Modeling
A recipe prepares your data for modeling. The package provides an extensible framework for pipeable sequences of feature engineering steps that apply preprocessing tools to data. Statistical parameters for the steps can be estimated from an initial data set and then applied to other data sets. The resulting processed output can then be used as inputs for statistical or machine learning models.
Maintained by Max Kuhn. Last updated 9 hours ago.
2.3 match 584 stars 18.73 score 7.2k scripts 382 dependents
statcompute
mob:Monotonic Optimal Binning
Generate the monotonic binning and perform the woe (weight of evidence) transformation for the logistic regression used in the consumer credit scorecard development. The woe transformation is a piecewise transformation that is linear to the log odds. For a numeric variable, all of its monotonic functional transformations will converge to the same woe transformation.
Maintained by WenSui Liu. Last updated 4 years ago.
20.4 match 1.88 score 15 scripts
rbgramacy
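The WOE transformation described above has a simple closed form: for bin i, WOE_i = ln((g_i / g_tot) / (b_i / b_tot)), which is linear in the bin's log odds. A minimal sketch (the `woe` helper is illustrative, not the package's API):

```python
import math

def woe(bins):
    """Weight-of-evidence per bin (sketch).

    bins: list of (n_good, n_bad) counts per bin.
    Returns ln((good share) / (bad share)) for each bin, so that
    log(g_i / b_i) = WOE_i + log(g_tot / b_tot), i.e. linear in log odds.
    """
    g_tot = sum(g for g, _ in bins)
    b_tot = sum(b for _, b in bins)
    return [math.log((g / g_tot) / (b / b_tot)) for g, b in bins]
```

A bin with a higher-than-average good rate gets a positive WOE, and monotonic binning orders the bins so these values change monotonically with the risk factor.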
monomvn:Estimation for MVN and Student-t Data with Monotone Missingness
Estimation of multivariate normal (MVN) and student-t data of arbitrary dimension where the pattern of missing data is monotone. See Pantaleo and Gramacy (2010) <doi:10.48550/arXiv.0907.2135>. Through the use of parsimonious/shrinkage regressions (plsr, pcr, lasso, ridge, etc.), where standard regressions fail, the package can handle a nearly arbitrary amount of missing data. The current version supports maximum likelihood inference and a full Bayesian approach employing scale-mixtures for Gibbs sampling. Monotone data augmentation extends this Bayesian approach to arbitrary missingness patterns. A fully functional standalone interface to the Bayesian lasso (from Park & Casella), Normal-Gamma (from Griffin & Brown), Horseshoe (from Carvalho, Polson, & Scott), and ridge regression with model selection via Reversible Jump, and student-t errors (from Geweke) is also provided.
Maintained by Robert B. Gramacy. Last updated 6 months ago.
10.5 match 4 stars 3.14 score 127 scripts
paul-buerkner
brms:Bayesian Regression Models using 'Stan'
Fit Bayesian generalized (non-)linear multivariate multilevel models using 'Stan' for full Bayesian inference. A wide range of distributions and link functions are supported, allowing users to fit -- among others -- linear, robust linear, count data, survival, response times, ordinal, zero-inflated, hurdle, and even self-defined mixture models all in a multilevel context. Further modeling options include both theory-driven and data-driven non-linear terms, auto-correlation structures, censoring and truncation, meta-analytic standard errors, and quite a few more. In addition, all parameters of the response distribution can be predicted in order to perform distributional regression. Prior specifications are flexible and explicitly encourage users to apply prior distributions that actually reflect their prior knowledge. Models can easily be evaluated and compared using several methods assessing posterior or prior predictions. References: Bürkner (2017) <doi:10.18637/jss.v080.i01>; Bürkner (2018) <doi:10.32614/RJ-2018-017>; Bürkner (2021) <doi:10.18637/jss.v100.i05>; Carpenter et al. (2017) <doi:10.18637/jss.v076.i01>.
Maintained by Paul-Christian Bürkner. Last updated 5 days ago.
bayesian-inference brms multilevel-models stan statistical-models
1.9 match 1.3k stars 16.61 score 13k scripts 34 dependents
eagerai
tfaddons:Interface to 'TensorFlow SIG Addons'
'TensorFlow SIG Addons' <https://www.tensorflow.org/addons> is a repository of community contributions that conform to well-established API patterns, but implement new functionality not available in core 'TensorFlow'. 'TensorFlow' natively supports a large number of operators, layers, metrics, losses, optimizers, and more. However, in a fast moving field like Machine Learning, there are many interesting new developments that cannot be integrated into core 'TensorFlow' (because their broad applicability is not yet clear, or it is mostly used by a smaller subset of the community).
Maintained by Turgut Abdullayev. Last updated 3 years ago.
deep-learning keras neural-networks tensorflow tensorflow-addons tfa
6.0 match 20 stars 5.20 score 16 scripts
rvaradhan
SQUAREM:Squared Extrapolation Methods for Accelerating EM-Like Monotone Algorithms
Algorithms for accelerating the convergence of slow, monotone sequences from smooth contraction mappings such as the EM algorithm. It can be used to accelerate any smooth, linearly convergent fixed-point iteration. A tutorial-style introduction to this package is available in a vignette on the CRAN download page or, when the package is loaded in an R session, with vignette("SQUAREM"). Refer to the J Stat Software article: <doi:10.18637/jss.v092.i07>.
Maintained by Ravi Varadhan. Last updated 4 years ago.
3.3 match 2 stars 9.26 score 84 scripts 502 dependents
adrian-bowman
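The squared-extrapolation idea can be sketched for a generic fixed-point map F: take two map evaluations, form the residual r and the change-of-residual v, choose a steplength from their norms, and extrapolate. This is a rough SqS3-style step under my own naming, not the package's interface:

```python
import numpy as np

def squarem_step(fixptfn, x):
    """One SQUAREM-style acceleration step for the fixed-point map fixptfn (sketch)."""
    x1 = fixptfn(x)
    x2 = fixptfn(x1)
    r = x1 - x                 # first residual
    v = (x2 - x1) - r          # change in residual
    if np.allclose(v, 0):
        return x2              # already (numerically) converged
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)
    x_acc = x - 2 * alpha * r + alpha**2 * v
    return fixptfn(x_acc)      # stabilising extra map evaluation
```

For a linear contraction such as F(x) = 0.5 x + 1 (fixed point 2), one accelerated step lands on the fixed point exactly, whereas plain iteration only halves the error per step.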
sm:Smoothing Methods for Nonparametric Regression and Density Estimation
Software linked to the book 'Applied Smoothing Techniques for Data Analysis: The Kernel Approach with S-Plus Illustrations', Oxford University Press.
Maintained by Adrian Bowman. Last updated 1 year ago.
4.1 match 1 star 6.99 score 732 scripts 36 dependents
avdrark
mokken:Conducts Mokken Scale Analysis
Contains functions for performing Mokken scale analysis on test and questionnaire data. It includes an automated item selection algorithm, and various checks of model assumptions.
Maintained by L. Andries van der Ark. Last updated 9 months ago.
8.3 match 2 stars 3.45 score 68 scripts
vlyubchich
lawstat:Tools for Biostatistics, Public Policy, and Law
Statistical tests widely utilized in biostatistics, public policy, and law. Along with the well-known tests for equality of means and variances, randomness, and measures of relative variability, the package contains new robust tests of symmetry, omnibus and directional tests of normality, and their graphical counterparts such as robust QQ plot, robust trend tests for variances, etc. All implemented tests and methods are illustrated by simulations and real-life examples from legal statistics, economics, and biostatistics.
Maintained by Yulia R. Gel. Last updated 2 years ago.
3.6 match 7.17 score 484 scripts 6 dependents
berwinturlach
MonoPoly:Functions to Fit Monotone Polynomials
Functions for fitting monotone polynomials to data. Detailed discussion of the methodologies used can be found in Murray, Mueller and Turlach (2013) <doi:10.1007/s00180-012-0390-5> and Murray, Mueller and Turlach (2016) <doi:10.1080/00949655.2016.1139582>.
Maintained by Berwin A. Turlach. Last updated 6 years ago.
13.9 match 1.82 score 22 scripts 1 dependent
tagteam
riskRegression:Risk Regression Models and Prediction Scores for Survival Analysis with Competing Risks
Implementation of the following methods for event history analysis. Risk regression models for survival endpoints also in the presence of competing risks are fitted using binomial regression based on a time sequence of binary event status variables. A formula interface for the Fine-Gray regression model and an interface for the combination of cause-specific Cox regression models. A toolbox for assessing and comparing performance of risk predictions (risk markers and risk prediction models). Prediction performance is measured by the Brier score and the area under the ROC curve for binary possibly time-dependent outcome. Inverse probability of censoring weighting and pseudo values are used to deal with right censored data. Lists of risk markers and lists of risk models are assessed simultaneously. Cross-validation repeatedly splits the data, trains the risk prediction models on one part of each split and then summarizes and compares the performance across splits.
Maintained by Thomas Alexander Gerds. Last updated 19 days ago.
1.9 match 46 stars 13.00 score 736 scripts 35 dependents
bioc
lumi:BeadArray Specific Methods for Illumina Methylation and Expression Microarrays
The lumi package provides an integrated solution for Illumina microarray data analysis. It includes functions for Illumina BeadStudio (GenomeStudio) data input, quality control, BeadArray-specific variance stabilization, normalization, and gene annotation at the probe level. It also includes functions for processing Illumina methylation microarrays, especially Illumina Infinium methylation microarrays.
Maintained by Lei Huang. Last updated 5 months ago.
microarray onechannel preprocessing dnamethylation qualitycontrol twochannel
3.9 match 6.26 score 294 scripts 5 dependents
hwborchers
pracma:Practical Numerical Math Functions
Provides a large number of functions from numerical analysis and linear algebra, numerical optimization, differential equations, time series, plus some well-known special mathematical functions. Uses 'MATLAB' function names where appropriate to simplify porting.
Maintained by Hans W. Borchers. Last updated 1 year ago.
1.9 match 29 stars 12.34 score 6.6k scripts 931 dependents
marius-cp
calibrationband:Calibration Bands
Package to assess the calibration of probabilistic classifiers using confidence bands for monotonic functions. Besides testing the classical goodness-of-fit null hypothesis of perfect calibration, the confidence bands calculated within that package facilitate inverted goodness-of-fit tests whose rejection allows for a sought-after conclusion of a sufficiently well-calibrated model. The package creates flexible graphical tools to perform these tests. For construction details see also Dimitriadis, Dümbgen, Henzi, Puke, Ziegel (2022) <arXiv:2203.04065>.
Maintained by Marius Puke. Last updated 3 years ago.
6.1 match 11 stars 3.74 score 10 scripts
cran
mgcv:Mixed GAM Computation Vehicle with Automatic Smoothness Estimation
Generalized additive (mixed) models, some of their extensions and other generalized ridge regression with multiple smoothing parameter estimation by (Restricted) Marginal Likelihood, Generalized Cross Validation and similar, or using iterated nested Laplace approximation for fully Bayesian inference. See Wood (2017) <doi:10.1201/9781315370279> for an overview. Includes a gam() function, a wide variety of smoothers, 'JAGS' support and distributions beyond the exponential family.
Maintained by Simon Wood. Last updated 1 year ago.
1.8 match 32 stars 12.71 score 17k scripts 7.8k dependents
baolong281
MonotonicityTest:Nonparametric Bootstrap Test for Regression Monotonicity
Implements nonparametric bootstrap tests for detecting monotonicity in regression functions from Hall, P. and Heckman, N. (2000) <doi:10.1214/aos/1016120363>. Includes tools for visualizing results using Nadaraya-Watson kernel regression and supports efficient computation with 'C++'.
Maintained by Dylan Huynh. Last updated 11 days ago.
5.5 match 4.08 score 2 scripts 1 dependent
bioc
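The Nadaraya-Watson estimator used for visualization in the entry above is simply a kernel-weighted local average of the responses. A minimal sketch with a Gaussian kernel (names are illustrative, not the package's API):

```python
import numpy as np

def nadaraya_watson(x, y, grid, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel (sketch).

    x, y: observed data; grid: points at which to evaluate the fit;
    h: bandwidth controlling the amount of smoothing.
    """
    grid = np.asarray(grid, dtype=float)[:, None]         # (G, 1)
    w = np.exp(-0.5 * ((grid - np.asarray(x, dtype=float)) / h) ** 2)  # (G, n)
    return (w * np.asarray(y, dtype=float)).sum(axis=1) / w.sum(axis=1)
```

Because it is a weighted average, a constant response is reproduced exactly, and smaller bandwidths track the data more closely.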
hdxmsqc:An R package for Quality Control for hydrogen deuterium exchange mass spectrometry experiments
The hdxmsqc package enables users to analyse and visualise the quality of HDX-MS experiments, either as a final quality check before downstream analysis and publication or as part of an iterative procedure to determine the quality of the data. The package builds on the QFeatures and Spectra packages to integrate with other mass-spectrometry data.
Maintained by Oliver M. Crook. Last updated 5 months ago.
qualitycontrol dataimport proteomics massspectrometry metabolomics
5.2 match 4.30 score 2 scripts
bioc
goseq:Gene Ontology analyser for RNA-seq and other length biased data
Detects Gene Ontology and/or other user defined categories which are over/under represented in RNA-seq data.
Maintained by Federico Marini. Last updated 5 months ago.
immunooncology sequencing go geneexpression transcription rnaseq differentialexpression annotation genesetenrichment kegg pathways software
2.3 match 1 star 9.67 score 636 scripts 9 dependents
alexkowa
EnvStats:Package for Environmental Statistics, Including US EPA Guidance
Graphical and statistical analyses of environmental data, with focus on analyzing chemical concentrations and physical parameters, usually in the context of mandated environmental monitoring. Major environmental statistical methods found in the literature and regulatory guidance documents, with extensive help that explains what these methods do, how to use them, and where to find them in the literature. Numerous built-in data sets from regulatory guidance documents and environmental statistics literature. Includes scripts reproducing analyses presented in the book "EnvStats: An R Package for Environmental Statistics" (Millard, 2013, Springer, ISBN 978-1-4614-8455-4, <doi:10.1007/978-1-4614-8456-1>).
Maintained by Alexander Kowarik. Last updated 19 days ago.
1.7 match 26 stars 12.80 score 2.4k scripts 46 dependents
lbau7
baskexact:Analytical Calculation of Basket Trial Operating Characteristics
Analytically calculates the operating characteristics of single-stage and two-stage basket trials with equal sample sizes using the power prior design by Baumann et al. (2024) <doi:10.48550/arXiv.2309.06988> and the design by Fujikawa et al. (2020) <doi:10.1002/bimj.201800404>.
Maintained by Lukas Baumann. Last updated 7 months ago.
3.9 match 2 stars 5.22 score 11 scripts
bioc
UMI4Cats:UMI4Cats: Processing, analysis and visualization of UMI-4C chromatin contact data
UMI-4C is a technique that allows characterization of 3D chromatin interactions with a bait of interest, taking advantage of a sonication step to produce unique molecular identifiers (UMIs) that help remove duplication bias, thus allowing a better differential comparison of chromatin interactions between conditions. This package allows processing of UMI-4C data, starting from FastQ files provided by the sequencing facility. It provides two statistical methods for detecting differential contacts and includes a visualization function to plot integrated information from a UMI-4C assay.
Maintained by Mireia Ramos-Rodriguez. Last updated 5 months ago.
qualitycontrol preprocessing alignment normalization visualization sequencing coverage chromatin chromatin-interaction genomics umi4c
3.6 match 5 stars 5.57 score 7 scripts
kazuyanagimoto
quartomonothemer:Monotone Theme Maker for Quarto
Makes a monotone theme for Quarto revealjs slides, ggplot, and gt.
Maintained by Kazuharu Yanagimoto. Last updated 10 months ago.
5.6 match 13 stars 3.41 score 10 scripts
sooahnshin
aihuman:Experimental Evaluation of Algorithm-Assisted Human Decision-Making
Provides statistical methods for analyzing experimental evaluation of the causal impacts of algorithmic recommendations on human decisions developed by Imai, Jiang, Greiner, Halen, and Shin (2023) <doi:10.1093/jrsssa/qnad010> and Ben-Michael, Greiner, Huang, Imai, Jiang, and Shin (2024) <doi:10.48550/arXiv.2403.12108>. The data used for this paper, and made available here, are interim, based on only half of the observations in the study and (for those observations) only half of the study follow-up period. We use them only to illustrate methods, not to draw substantive conclusions.
Maintained by Sooahn Shin. Last updated 3 months ago.
4.1 match 2 stars 4.60 score 8 scripts
cran
cequre:Censored Quantile Regression & Monotonicity-Respecting Restoring
Perform censored quantile regression of Huang (2010) <doi:10.1214/09-AOS771>, and restore monotonicity respecting via adaptive interpolation for dynamic regression of Huang (2017) <doi:10.1080/01621459.2016.1149070>. The monotonicity-respecting restoration applies to general dynamic regression models including (uncensored or censored) quantile regression model, additive hazards model, and dynamic survival models of Peng and Huang (2007) <doi:10.1093/biomet/asm058>, among others.
Maintained by Yijian Huang. Last updated 2 years ago.
9.4 match 2.00 score
decisionpatterns
ordering:Test, Check, Verify, Investigate the Monotonic Properties of Vectors
Functions to test/check/verify/investigate the ordering of vectors. The 'is_[strictly_]*' family of functions test vectors for 'sorted', 'monotonic', 'increasing', 'decreasing' order; 'is_constant' and 'is_incremental' test for the degree of ordering. `ordering` provides a numeric indication of ordering from -2 (strictly decreasing) to 2 (strictly increasing).
Maintained by Christopher Brown. Last updated 6 years ago.
6.4 match 1 star 2.93 score 17 scripts
cran
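The numeric ordering degree described above is easy to sketch by checking the adjacent pairs of a vector. Note that the exact encoding of ties and constant vectors below is my own assumption for illustration, not necessarily the package's:

```python
def ordering(x):
    """Ordering degree (sketch): 2 strictly increasing, 1 nondecreasing,
    0 unordered or constant, -1 nonincreasing, -2 strictly decreasing."""
    pairs = list(zip(x, x[1:]))
    if not pairs:
        return 0                       # empty or singleton: no ordering info
    if all(a < b for a, b in pairs):
        return 2
    if all(a > b for a, b in pairs):
        return -2
    if all(a == b for a, b in pairs):
        return 0
    if all(a <= b for a, b in pairs):
        return 1
    if all(a >= b for a, b in pairs):
        return -1
    return 0
```

Strict checks fail on ties, so `[1, 1, 2]` scores 1 (nondecreasing) while `[1, 2, 3]` scores 2.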
fdrtool:Estimation of (Local) False Discovery Rates and Higher Criticism
Estimates both tail area-based false discovery rates (Fdr) as well as local false discovery rates (fdr) for a variety of null models (p-values, z-scores, correlation coefficients, t-scores). The proportion of null values and the parameters of the null distribution are adaptively estimated from the data. In addition, the package contains functions for non-parametric density estimation (Grenander estimator), for monotone regression (isotonic regression and antitonic regression with weights), for computing the greatest convex minorant (GCM) and the least concave majorant (LCM), for the half-normal and correlation distributions, and for computing empirical higher criticism (HC) scores and the corresponding decision threshold.
Maintained by Korbinian Strimmer. Last updated 7 months ago.
2.3 match 3 stars 8.24 score 844 scripts 118 dependents
bioc
dyebias:The GASSCO method for correcting for slide-dependent gene-specific dye bias
Many two-colour hybridizations suffer from a dye bias that is both gene-specific and slide-specific. The former depends on the content of the nucleotide used for labeling; the latter depends on the labeling percentage. The slide-dependency was hitherto not recognized, and made addressing the artefact impossible. Given a reasonable number of dye-swapped pairs of hybridizations, or of same vs. same hybridizations, both the gene- and slide-biases can be estimated and corrected using the GASSCO method (Margaritis et al., Mol. Sys. Biol. 5:266 (2009), doi:10.1038/msb.2009.21)
Maintained by Philip Lijnzaad. Last updated 5 months ago.
microarray twochannel qualitycontrol preprocessing
5.6 match 3.30 score 10 scripts
nicholasjclark
mvgam:Multivariate (Dynamic) Generalized Additive Models
Fit Bayesian Dynamic Generalized Additive Models to multivariate observations. Users can build nonlinear State-Space models that can incorporate semiparametric effects in observation and process components, using a wide range of observation families. Estimation is performed using Markov Chain Monte Carlo with Hamiltonian Monte Carlo in the software 'Stan'. References: Clark & Wells (2023) <doi:10.1111/2041-210X.13974>.
Maintained by Nicholas J Clark. Last updated 2 days ago.
bayesian-statistics dynamic-factor-models ecological-modelling forecasting gaussian-process generalised-additive-models generalized-additive-models joint-species-distribution-modelling multilevel-models multivariate-timeseries stan time-series-analysis timeseries vector-autoregression vectorautoregression cpp
1.9 match 139 stars 9.85 score 117 scripts
bioc
MsCoreUtils:Core Utils for Mass Spectrometry Data
MsCoreUtils defines low-level functions for mass spectrometry data and is independent of any high-level data structures. These functions include mass spectra processing functions (noise estimation, smoothing, binning, baseline estimation), quantitative aggregation functions (median polish, robust summarisation, ...), missing data imputation, data normalisation (quantiles, vsn, ...), misc helper functions, that are used across high-level data structure within the R for Mass Spectrometry packages.
Maintained by RforMassSpectrometry Package Maintainer. Last updated 7 days ago.
infrastructure proteomics massspectrometry metabolomics bioconductor mass-spectrometry utils
1.8 match 16 stars 10.52 score 41 scripts 71 dependents
scottkosty
monreg:Nonparametric Monotone Regression
Estimates monotone regression and variance functions in a nonparametric model, based on Dette, Neumeyer, and Pilz (2006) <doi:10.3150/bj/1151525131>.
Maintained by Scott Kostyshak. Last updated 5 years ago.
9.1 match 2.00 score 1 script
alexanderrobitzsch
sirt:Supplementary Item Response Theory Models
Supplementary functions for item response models aiming to complement existing R packages. The functionality includes among others multidimensional compensatory and noncompensatory IRT models (Reckase, 2009, <doi:10.1007/978-0-387-89976-3>), MCMC for hierarchical IRT models and testlet models (Fox, 2010, <doi:10.1007/978-1-4419-0742-4>), NOHARM (McDonald, 1982, <doi:10.1177/014662168200600402>), Rasch copula model (Braeken, 2011, <doi:10.1007/s11336-010-9190-4>; Schroeders, Robitzsch & Schipolowski, 2014, <doi:10.1111/jedm.12054>), faceted and hierarchical rater models (DeCarlo, Kim & Johnson, 2011, <doi:10.1111/j.1745-3984.2011.00143.x>), ordinal IRT model (ISOP; Scheiblechner, 1995, <doi:10.1007/BF02301417>), DETECT statistic (Stout, Habing, Douglas & Kim, 1996, <doi:10.1177/014662169602000403>), local structural equation modeling (LSEM; Hildebrandt, Luedtke, Robitzsch, Sommer & Wilhelm, 2016, <doi:10.1080/00273171.2016.1142856>).
Maintained by Alexander Robitzsch. Last updated 3 months ago.
item-response-theory openblas cpp
1.8 match 23 stars 10.01 score 280 scripts 22 dependents
nchenderson
daarem:Damped Anderson Acceleration with Epsilon Monotonicity for Accelerating EM-Like Monotone Algorithms
Implements the DAAREM method for accelerating the convergence of slow, monotone sequences from smooth, fixed-point iterations such as the EM algorithm. For further details about the DAAREM method, see Henderson, N.C. and Varadhan, R. (2019) <doi:10.1080/10618600.2019.1594835>.
Maintained by Nicholas Henderson. Last updated 3 years ago.
6.6 match 2.71 score 17 scripts 1 dependent
bioc
PharmacoGx:Analysis of Large-Scale Pharmacogenomic Data
Contains a set of functions to perform large-scale analysis of pharmaco-genomic data. These include the PharmacoSet object for storing the results of pharmacogenomic experiments, as well as a number of functions for computing common summaries of drug-dose response and correlating them with the molecular features in a cancer cell-line.
Maintained by Benjamin Haibe-Kains. Last updated 2 months ago.
geneexpression pharmacogenetics pharmacogenomics software classification datasets pharmacogenomic pharmacogx cpp
1.6 match 68 stars 11.39 score 442 scripts 3 dependents
ichxw
mixtox:Dose Response Curve Fitting and Mixture Toxicity Assessment
Curve fitting of monotonic (sigmoidal) and non-monotonic (J-shaped) dose-response data. Predicts mixture toxicity based on reference models such as 'concentration addition', 'independent action', and 'generalized concentration addition'.
Maintained by Xiangwei Zhu. Last updated 6 months ago.
4.4 match 3 stars 3.97 score 31 scripts
wenchao-ma
GDINA:The Generalized DINA Model Framework
A set of psychometric tools for cognitive diagnosis modeling based on the generalized deterministic inputs, noisy and gate (G-DINA) model by de la Torre (2011) <DOI:10.1007/s11336-011-9207-7> and its extensions, including the sequential G-DINA model by Ma and de la Torre (2016) <DOI:10.1111/bmsp.12070> for polytomous responses, and the polytomous G-DINA model by Chen and de la Torre <DOI:10.1177/0146621613479818> for polytomous attributes. Joint attribute distribution can be independent, saturated, higher-order, loglinear smoothed or structured. Q-matrix validation, item and model fit statistics, model comparison at test and item level and differential item functioning can also be conducted. A graphical user interface is also provided. For tutorials, please check Ma and de la Torre (2020) <DOI:10.18637/jss.v093.i14>, Ma and de la Torre (2019) <DOI:10.1111/emip.12262>, Ma (2019) <DOI:10.1007/978-3-030-05584-4_29> and de la Torre and Akbay (2019).
Maintained by Wenchao Ma. Last updated 1 month ago.
cdm cognitive-diagnosis dcm dina-model dino estimation-models gdina item-response-theory psychometrics openblas cpp
1.9 match 30 stars 8.92 score 94 scripts 6 dependents
dgbonett
statpsych:Statistical Methods for Psychologists
Implements confidence interval and sample size methods that are especially useful in psychological research. The methods can be applied in 1-group, 2-group, paired-samples, and multiple-group designs and to a variety of parameters including means, medians, proportions, slopes, standardized mean differences, standardized linear contrasts of means, plus several measures of correlation and association. Confidence interval and sample size functions are given for single parameters as well as differences, ratios, and linear contrasts of parameters. The sample size functions can be used to approximate the sample size needed to estimate a parameter or function of parameters with desired confidence interval precision or to perform a variety of hypothesis tests (directional two-sided, equivalence, superiority, noninferiority) with desired power. For details see: Statistical Methods for Psychologists, Volumes 1-4, <https://dgbonett.sites.ucsc.edu/>.
Maintained by Douglas G. Bonett. Last updated 3 months ago.
3.4 match 6 stars 4.83 score 15 scripts 1 dependents
robjhyndman
demography:Forecasting Mortality, Fertility, Migration and Population Data
Functions for demographic analysis including lifetable calculations; Lee-Carter modelling; functional data analysis of mortality rates, fertility rates, net migration numbers; and stochastic population forecasting.
Maintained by Rob Hyndman. Last updated 3 months ago.
actuarial demography forecasting
2.0 match 74 stars 8.21 score 241 scripts 6 dependents
stephenslab
EbayesThresh:Empirical Bayes Thresholding and Related Methods
Empirical Bayes thresholding using the methods developed by I. M. Johnstone and B. W. Silverman. The basic problem is to estimate a mean vector given a vector of observations of the mean vector plus white noise, taking advantage of possible sparsity in the mean vector. Within a Bayesian formulation, the elements of the mean vector are modelled as having, independently, a distribution that is a mixture of an atom of probability at zero and a suitable heavy-tailed distribution. The mixing parameter can be estimated by a marginal maximum likelihood approach. This leads to an adaptive thresholding approach on the original data. Extensions of the basic method, in particular to wavelet thresholding, are also implemented within the package.
Maintained by Peter Carbonetto. Last updated 7 years ago.
1.8 match 5 stars 8.95 score 54 scripts 14 dependents
kevinstadler
cultevo:Tools, Measures and Statistical Tests for Cultural Evolution
Provides tools and statistics useful for analysing data from artificial language experiments. It implements the information-theoretic measure of the compositionality of signalling systems due to Spike (2016) <http://hdl.handle.net/1842/25930>, the Mantel test for distance matrix correlation (after Dietz 1983, <doi:10.1093/sysbio/32.1.21>), functions for computing string and meaning distance matrices as well as an implementation of the Page test for monotonicity of ranks (Page 1963) <doi:10.1080/01621459.1963.10500843> with exact p-values up to k = 22.
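The Page statistic itself is easy to compute from a subjects-by-conditions matrix of ranks; a minimal Python sketch of the idea (illustration only, not the package's R implementation, and without the exact p-values cultevo provides):

```python
def row_ranks(row):
    """Rank the values in one subject's row (1 = smallest; no tie handling)."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0] * len(row)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def page_l(data):
    """Page's L statistic for n subjects x k ordered conditions.

    L = sum_j j * R_j, where R_j is the rank sum of condition j.
    Large L supports a monotonically increasing trend across conditions.
    """
    ranks = [row_ranks(row) for row in data]
    k = len(data[0])
    col_sums = [sum(r[j] for r in ranks) for j in range(k)]
    return sum((j + 1) * col_sums[j] for j in range(k))
```

For n subjects and k conditions, L attains its maximum n·Σj² exactly when every subject's ranks increase with the predicted order.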
Maintained by Kevin Stadler. Last updated 1 year ago.
2.4 match 8 stars 6.50 score 131 scripts 1 dependents
iiasa
ibis.iSDM:Modelling framework for integrated biodiversity distribution scenarios
An integrated framework for modelling the distribution of species and ecosystems in a habitat-suitability framing. This package allows the estimation of integrated species distribution models (iSDM) based on several sources of evidence and provided presence-only and presence-absence datasets. It makes heavy use of point-process models for estimating habitat suitability and allows spatial latent effects and priors to be included in the estimation. To do so, 'ibis.iSDM' supports a number of engines for Bayesian and non-parametric machine learning estimation. Further, 'ibis.iSDM' is specifically customized to support spatial-temporal projections of habitat suitability into the future.
Maintained by Martin Jung. Last updated 4 months ago.
bayesian biodiversity integrated-framework poisson-process scenarios sdm spatial-grain spatial-predictions species-distribution-modelling
3.5 match 21 stars 4.36 score 12 scripts 1 dependents
rh8liuqy
MSIMST:Bayesian Monotonic Single-Index Regression Model with the Skew-T Likelihood
Incorporates a Bayesian monotonic single-index mixed-effect model with a multivariate skew-t likelihood, specifically designed to handle survey weights adjustments. Features include a simulation program and an associated Gibbs sampler for model estimation. The single-index function is constrained to be monotonic increasing, utilizing a customized Gaussian process prior for precise estimation. The model assumes random effects follow a canonical skew-t distribution, while residuals are represented by a multivariate Student-t distribution. Offers robust Bayesian adjustments to integrate survey weight information effectively.
Maintained by Qingyang Liu. Last updated 6 months ago.
3.6 match 2 stars 4.30 score
distancedevelopment
mrds:Mark-Recapture Distance Sampling
Animal abundance estimation via conventional, multiple covariate and mark-recapture distance sampling (CDS/MCDS/MRDS). Detection function fitting is performed via maximum likelihood. Also included are diagnostics and plotting for fitted detection functions. Abundance estimation is via a Horvitz-Thompson-like estimator.
Maintained by Laura Marshall. Last updated 2 months ago.
1.9 match 4 stars 8.05 score 78 scripts 7 dependents
elray1
distfromq:Reconstruct a Distribution from a Collection of Quantiles
Given a set of predictive quantiles from a distribution, estimate the distribution and create `d`, `p`, `q`, and `r` functions to evaluate its density function, distribution function, and quantile function, and generate random samples. On the interior of the provided quantiles, an interpolation method such as a monotonic cubic spline is used; the tails are approximated by a location-scale family.
Maintained by Evan Ray. Last updated 6 months ago.
3.7 match 4.05 score 37 scripts 2 dependents
oucru-modelling
serosv:Model Infectious Disease Parameters from Serosurveys
An easy-to-use and efficient tool to estimate infectious diseases parameters using serological data. Implemented models include SIR models (basic_sir_model(), static_sir_model(), mseir_model(), sir_subpops_model()), parametric models (polynomial_model(), fp_model()), nonparametric models (lp_model()), semiparametric models (penalized_splines_model()), hierarchical models (hierarchical_bayesian_model()). The package is based on the book "Modeling Infectious Disease Parameters Based on Serological and Social Contact Data: A Modern Statistical Perspective" (Hens, Niel & Shkedy, Ziv & Aerts, Marc & Faes, Christel & Damme, Pierre & Beutels, Philippe., 2013) <doi:10.1007/978-1-4614-4072-7>.
Maintained by Anh Phan Truong Quynh. Last updated 1 month ago.
2.3 match 6.58 score 24 scripts
nwaller
FMP:Filtered Monotonic Polynomial IRT Models
Estimates Filtered Monotonic Polynomial IRT Models as described by Liang and Browne (2015) <DOI:10.3102/1076998614556816>.
Maintained by Niels G. Waller. Last updated 9 years ago.
8.7 match 1.70 score 10 scripts
mc-schaaf
mousetRajectory:Mouse Trajectory Analyses for Behavioural Scientists
Helping psychologists and other behavioural scientists to analyze mouse movement (and other 2-D trajectory) data. Bundles together several functions that compute spatial measures (e.g., maximum absolute deviation, area under the curve, sample entropy) or provide a shorthand for procedures that are frequently used (e.g., time normalization, linear interpolation, extracting initiation and movement times). For more information, see Pfister et al. (2024) <doi:10.20982/tqmp.20.3.p217>.
Maintained by Roland Pfister. Last updated 6 months ago.
3.6 match 2 stars 4.00 score 5 scripts
kelliejarcher
countgmifs:Discrete Response Regression for High-Dimensional Data
Provides a function for fitting Poisson and negative binomial regression models when the number of parameters exceeds the sample size, using the generalized monotone incremental forward stagewise method.
Maintained by Kellie Archer. Last updated 3 years ago.
3.8 match 3.70 score 1 scripts
arne-henningsen
micEcon:Microeconomic Analysis and Modelling
Various tools for microeconomic analysis and microeconomic modelling, e.g. estimating quadratic, Cobb-Douglas and Translog functions, calculating partial derivatives and elasticities of these functions, and calculating Hessian matrices, checking curvature and preparing restrictions for imposing monotonicity of Translog functions.
Maintained by Arne Henningsen. Last updated 3 years ago.
4.4 match 3 stars 3.18 score 39 scripts 3 dependents
cran
ecotoxicology:Methods for Ecotoxicology
Implementation of the EPA's Ecological Exposure Research Division (EERD) tools (discontinued in 1999) for Probit and Trimmed Spearman-Karber Analysis. Probit and Spearman-Karber methods from Finney's book "Probit analysis: a statistical treatment of the sigmoid response curve" with options for most accurate results or identical results to the book. Probit and all the tables from Finney's book (code-generated, not copied) with the generating functions included. Control correction: Abbott, Schneider-Orelli, Henderson-Tilton, Sun-Shepard. Toxicity scales: Horsfall-Barratt, Archer, Gauhl-Stover, Fullerton-Olsen, etc.
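Of the control corrections listed, Abbott's is the simplest; a minimal Python sketch of the formula (illustration only, with a hypothetical function name; the package's own R interface differs):

```python
def abbott(treated_pct, control_pct):
    """Abbott's control correction: mortality observed in the treatment,
    rescaled to the part of the population that would have survived the
    control conditions anyway. Inputs and output are percentages."""
    if control_pct >= 100.0:
        raise ValueError("control mortality must be below 100%")
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Example: 55% treated mortality with 10% control mortality -> 50% corrected.
```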
Maintained by Jose Gama. Last updated 9 years ago.
7.4 match 3 stars 1.89 score 26 scripts
cran
MonoInc:Monotonic Increasing
Various imputation methods are utilized in this package, where one can flag and impute non-monotonic data that is outside of a prespecified range.
Maintained by Michele Josey. Last updated 9 years ago.
13.7 match 1.00 score
harrison4192
autostats:Auto Stats
Automatically do statistical exploration. Create formulas using 'tidyselect' syntax, and then determine cross-validated model accuracy and variable contributions using 'glm' and 'xgboost'. Contains additional helper functions to create and modify formulas. Has a flagship function to quickly determine relationships between categorical and continuous variables in the data set.
Maintained by Harrison Tietze. Last updated 14 days ago.
2.0 match 6 stars 6.76 score 5 scripts 2 dependents
vlyubchich
funtimes:Functions for Time Series Analysis
Nonparametric estimators and tests for time series analysis. The functions use bootstrap techniques and robust nonparametric difference-based estimators to test for the presence of possibly non-monotonic trends and for synchronicity of trends in multiple time series.
Maintained by Vyacheslav Lyubchich. Last updated 2 years ago.
2.0 match 7 stars 6.69 score 93 scripts
dsalfran
ImputeRobust:Robust Multiple Imputation with Generalized Additive Models for Location Scale and Shape
Provides new imputation methods for the 'mice' package based on generalized additive models for location, scale, and shape (GAMLSS) as described in de Jong, van Buuren and Spiess <doi:10.1080/03610918.2014.911894>.
Maintained by Daniel Salfran. Last updated 6 years ago.
imputation missing-data multiple-imputation
3.5 match 9 stars 3.65 score 4 scripts
cran
FuzzyNumbers.Ext.2:Apply Two Fuzzy Numbers on a Monotone Function
One can easily draw the membership function of f(x,y) with package 'FuzzyNumbers.Ext.2', where f(.,.) is assumed monotone and x and y are two fuzzy numbers. This is possible using the function f2apply(), an extension of the function fapply() from package 'FuzzyNumbers' to two-variable monotone functions. Moreover, this package can compute the core, support and alpha-cuts of the fuzzy-valued final result.
Maintained by Abbas Parchami. Last updated 8 years ago.
5.4 match 2.32 score 7 dependents
fabrice-rossi
mixvlmc:Variable Length Markov Chains with Covariates
Estimates Variable Length Markov Chains (VLMC) models and VLMC with covariates models from discrete sequences. Supports model selection via information criteria and simulation of new sequences from an estimated model. See Bühlmann, P. and Wyner, A. J. (1999) <doi:10.1214/aos/1018031204> for VLMC and Zanin Zambom, A., Kim, S. and Lopes Garcia, N. (2022) <doi:10.1111/jtsa.12615> for VLMC with covariates.
Maintained by Fabrice Rossi. Last updated 11 months ago.
machine-learning markov-chain markov-model statistics time-series cpp
2.0 match 2 stars 6.23 score 20 scripts
vochr
TapeS:Tree Taper Curves and Sorting Based on 'TapeR'
Providing new Germany-wide 'TapeR' models and functions for their evaluation. Included are the most common tree species in Germany (Norway spruce, Scots pine, European larch, Douglas fir, Silver fir as well as European beech, Common/Sessile oak and Red oak). Many other species are mapped to them so that 36 tree species / groups can be processed. Single trees are defined by species code, one or multiple diameters in arbitrary measuring height and tree height. The functions then provide information on diameters along the stem, bark thickness, height of diameters, volume of the total or parts of the trunk and total and component above-ground biomass. It is also possible to calculate assortments from the taper curves. Uncertainty information is provided for diameter, volume and component biomass estimation.
Maintained by Christian Vonderach. Last updated 1 month ago.
3.1 match 3.90 score 1 scripts
alexcannon
monmlp:Multi-Layer Perceptron Neural Network with Optional Monotonicity Constraints
Train and make predictions from a multi-layer perceptron neural network with optional partial monotonicity constraints.
Maintained by Alex J. Cannon. Last updated 7 years ago.
5.1 match 2.30 score 33 scripts 1 dependents
tmsalab
hmcdm:Hidden Markov Cognitive Diagnosis Models for Learning
Fitting hidden Markov models of learning under the cognitive diagnosis framework. The estimation of the hidden Markov diagnostic classification model, the first order hidden Markov model, the reduced-reparameterized unified learning model, and the joint learning model for responses and response times.
Maintained by Sunbeom Kwon. Last updated 2 years ago.
cognitive-diagnostic-models psychometrics rcpp rcpparmadillo openblas cpp openmp
2.0 match 7 stars 5.70 score 12 scripts
ivanfernandezval
Rearrangement:Monotonize Point and Interval Functional Estimates by Rearrangement
The rearrangement operator (Hardy, Littlewood, and Polya 1952) for univariate, bivariate, and trivariate point estimates of monotonic functions. The package additionally provides a function that creates simultaneous confidence intervals for univariate functions and applies the rearrangement operator to these confidence intervals.
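On a finite grid, the increasing rearrangement of a point estimate is just the sorted vector of fitted values, and a confidence band is monotonized by rearranging its two endpoint curves separately. A minimal Python sketch of the idea (illustration only, not the package's R code):

```python
def rearrange(fhat):
    """Increasing rearrangement of a function estimate on a grid: the sorted
    values. An already-monotone estimate is returned unchanged; otherwise the
    rearranged version is monotone and, by the Hardy-Littlewood-Polya
    inequality, no farther from the true monotone function in Lp norm."""
    return sorted(fhat)

def rearrange_band(lower, upper):
    """Monotonize a confidence band by rearranging each endpoint curve."""
    return sorted(lower), sorted(upper)

# A non-monotone estimate [0.1, 0.3, 0.2, 0.5] rearranges to
# the monotone [0.1, 0.2, 0.3, 0.5].
```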
Maintained by Ivan Fernandez-Val. Last updated 9 years ago.
3.4 match 3.26 score 20 scripts 3 dependents
leahfeuerstahler
flexmet:Flexible Latent Trait Metrics using the Filtered Monotonic Polynomial Item Response Model
Application of the filtered monotonic polynomial (FMP) item response model to flexibly fit item response models. The package includes tools that allow the item response model to be built on any monotonic transformation of the latent trait metric, as described by Feuerstahler (2019) <doi:10.1007/s11336-018-9642-9>.
Maintained by Leah Feuerstahler. Last updated 4 years ago.
3.5 match 1 stars 3.18 score 15 scripts
cran
prevtoinc:Prevalence to Incidence Calculations for Point-Prevalence Studies in a Nosocomial Setting
Functions to simulate point prevalence studies (PPSs) of healthcare-associated infections (HAIs) and to convert prevalence to incidence in steady state setups. Companion package to the preprint Willrich et al., "From prevalence to incidence - a new approach in the hospital setting" <doi:10.1101/554725>, where methods are explained in detail.
Maintained by Niklas Willrich. Last updated 6 years ago.
3.4 match 3.18 score 1 dependents
freezenik
bamlss:Bayesian Additive Models for Location, Scale, and Shape (and Beyond)
Infrastructure for estimating probabilistic distributional regression models in a Bayesian framework. The distribution parameters may capture location, scale, shape, etc. and every parameter may depend on complex additive terms (fixed, random, smooth, spatial, etc.) similar to a generalized additive model. The conceptual and computational framework is introduced in Umlauf, Klein, Zeileis (2019) <doi:10.1080/10618600.2017.1407325> and the R package in Umlauf, Klein, Simon, Zeileis (2021) <doi:10.18637/jss.v100.i04>.
Maintained by Nikolaus Umlauf. Last updated 5 months ago.
1.8 match 1 stars 5.76 score 239 scripts 5 dependents
wlenhard
cNORM:Continuous Norming
A comprehensive toolkit for generating continuous test norms in psychometrics and biometrics, and analyzing model fit. The package offers both distribution-free modeling using Taylor polynomials and parametric modeling using the beta-binomial distribution. Originally developed for achievement tests, it is applicable to a wide range of mental, physical, or other test scores dependent on continuous or discrete explanatory variables. The package provides several advantages: It minimizes deviations from representativeness in subsamples, interpolates between discrete levels of explanatory variables, and significantly reduces the required sample size compared to conventional norming per age group. cNORM enables graphical and analytical evaluation of model fit, accommodates a wide range of scales including those with negative and descending values, and even supports conventional norming. It generates norm tables including confidence intervals. It also includes methods for addressing representativeness issues through Iterative Proportional Fitting.
Maintained by Wolfgang Lenhard. Last updated 4 months ago.
beta-binomial biometrics continuous-norming growth-curve norm-scores norm-tables normalization-techniques percentile psychometrics regression-based-norming taylor-series
1.9 match 2 stars 5.49 score 75 scripts
john-d-fox
norm:Analysis of Multivariate Normal Datasets with Missing Values
An integrated set of functions for the analysis of multivariate normal datasets with missing values, including implementation of the EM algorithm, data augmentation, and multiple imputation.
Maintained by John Fox. Last updated 2 years ago.
1.7 match 5.99 score 106 scripts 33 dependents
cran
smdi:Perform Structural Missing Data Investigations
An easy to use implementation of routine structural missing data diagnostics with functions to visualize the proportions of missing observations, investigate missing data patterns and conduct various empirical missing data diagnostic tests. Reference: Weberpals J, Raman SR, Shaw PA, Lee H, Hammill BG, Toh S, Connolly JG, Dandreo KJ, Tian F, Liu W, Li J, Hernández-Muñoz JJ, Glynn RJ, Desai RJ. smdi: an R package to perform structural missing data investigations on partially observed confounders in real-world evidence studies. JAMIA Open. 2024 Jan 31;7(1):ooae008. <doi:10.1093/jamiaopen/ooae008>.
Maintained by Janick Weberpals. Last updated 6 months ago.
3.3 match 3.00 score
yujian-wu
MonotoneHazardRatio:Nonparametric Estimation and Inference of a Monotone Hazard Ratio Function
A tool for nonparametric estimation and inference of a non-decreasing monotone hazard ratio from a right-censored survival dataset. The estimator is based on a generalized Grenander-type estimator, and the inference procedure relies on direct plugin estimation of a first-order derivative. For more details, please refer to the paper "Nonparametric inference under a monotone hazard ratio order" by Y. Wu and T. Westling (2023).
Maintained by Yujian Wu. Last updated 5 months ago.
3.6 match 2.70 score 2 scripts
jassler
socialranking:Social Ranking Solutions for Power Relations on Coalitions
The notion of power index has been widely used in literature to evaluate the influence of individual players (e.g., voters, political parties, nations, stockholders, etc.) involved in a collective decision situation like an electoral system, a parliament, a council, a management board, etc., where players may form coalitions. Traditionally this ranking is determined through numerical evaluation. More often than not however only ordinal data between coalitions is known. The package 'socialranking' offers a set of solutions to rank players based on a transitive ranking between coalitions, including through CP-Majority, ordinal Banzhaf or lexicographic excellence solution summarized by Tahar Allouche, Bruno Escoffier, Stefano Moretti and Meltem Öztürk (2020, <doi:10.24963/ijcai.2020/3>).
Maintained by Felix Fritz. Last updated 4 days ago.
1.9 match 6 stars 5.08 score 6 scripts
alidalba
mazeinda:Monotonic Association on Zero-Inflated Data
Methods for calculating and testing the significance of pairwise monotonic association in zero-inflated data, based on the work of Pimentel (2009) <doi:10.4135/9781412985291.n2>. Computation of association of vectors from one or multiple sets can be performed in parallel thanks to the packages 'foreach' and 'doMC'.
Maintained by Alice Albasi. Last updated 3 years ago.
3.5 match 2.70 score 10 scripts
cran
monoreg:Bayesian Monotonic Regression Using a Marked Point Process Construction
An extended version of the nonparametric Bayesian monotonic regression procedure described in Saarela & Arjas (2011) <DOI:10.1111/j.1467-9469.2010.00716.x>, allowing for multiple additive monotonic components in the linear predictor, and time-to-event outcomes through case-base sampling. The extension and its applications, including estimation of absolute risks, are described in Saarela & Arjas (2015) <DOI:10.1111/sjos.12125>. The package also implements the nonparametric ordinal regression model described in Saarela, Rohrbeck & Arjas <DOI:10.1214/22-BA1310>.
Maintained by Olli Saarela. Last updated 2 years ago.
9.3 match 1.00 score 8 scripts
neurodata
causalBatch:Causal Batch Effects
Software which provides numerous functionalities for detecting and removing group-level effects from high-dimensional scientific data which, when combined with additional assumptions, allow for causal conclusions, as described in our manuscripts Bridgeford et al. (2024) <doi:10.1101/2021.09.03.458920> and Bridgeford et al. (2023) <doi:10.48550/arXiv.2307.13868>. Also provides a number of useful utilities for generating simulations and balancing covariates across multiple groups/batches of data via matching and propensity trimming for more than two groups.
Maintained by Eric W. Bridgeford. Last updated 5 days ago.
1.9 match 4 stars 4.70 score 23 scripts
biooss
mistral:Methods in Structural Reliability
Various reliability analysis methods for rare event inference (computing failure probability and quantile from model/function outputs).
Maintained by Bertrand Iooss. Last updated 1 year ago.
3.6 match 1 stars 2.43 score 27 scripts
skranz
gtree:gtree basic functionality to model and solve games
gtree basic functionality to model and solve games
Maintained by Sebastian Kranz. Last updated 4 years ago.
economic-experimentseconomicsgambitgame-theorynash-equilibrium
2.3 match 18 stars 3.79 score 23 scripts 1 dependents
snehatody1
logiBin:Binning Variables to Use in Logistic Regression
Fast binning of multiple variables using parallel processing. A summary of all the variables binned is generated which provides the information value, entropy, an indicator of whether the variable follows a monotonic trend or not, etc. It supports rebinning of variables to force a monotonic trend as well as manual binning based on pre-specified cuts. The cut points of the bins are based on conditional inference trees as implemented in the partykit package. The conditional inference framework is described by Hothorn T, Hornik K, Zeileis A (2006) <doi:10.1198/106186006X133933>.
Maintained by Sneha Tody. Last updated 7 years ago.
4.2 match 2.00 score 8 scripts
shubhra-opensource
LMD:A Self-Adaptive Approach for Demodulating Multi-Component Signal
Local Mean Decomposition is an iterative and self-adaptive approach for demodulating, processing, and analyzing multi-component amplitude modulated and frequency modulated signals. This R package is based on the approach suggested by Smith (2005) <doi:10.1098/rsif.2005.0058> and the 'Python' library 'PyLMD'.
Maintained by Shubhra Prakash. Last updated 2 years ago.
2.3 match 3.70 score 1 scripts
kelliejarcher
ordinalgmifs:Ordinal Regression for High-Dimensional Data
Provides a function for fitting cumulative link, adjacent category, forward and backward continuation ratio, and stereotype ordinal response models when the number of parameters exceeds the sample size, using the generalized monotone incremental forward stagewise method.
Maintained by Kellie J. Archer. Last updated 2 years ago.
2.2 match 3.70 score 6 scripts
argju
gpmap:Analysing and Plotting Genotype-Phenotype Maps
Tools for studying genotype-phenotype maps for bi-allelic loci underlying quantitative phenotypes. The 0.1 version is released in connection with the publication of Gjuvsland et al (2013) and implements basic line plots and the monotonicity measures for GP maps presented in the paper. Reference: Gjuvsland AB, Wang Y, Plahte E and Omholt SW (2013) Monotonicity is a key feature of genotype-phenotype maps. Frontiers in Genetics 4:216 <doi:10.3389/fgene.2013.00216>.
Maintained by Arne B. Gjuvsland. Last updated 4 years ago.
6.1 match 1.34 score 11 scripts
talegari
ggisotonic:'ggplot2' Friendly Isotonic or Monotonic Regression Curves
Provides stat_isotonic() to add weighted univariate isotonic regression curves.
Maintained by Komala Sheshachala Srikanth. Last updated 3 years ago.
2.9 match 1 stars 2.70 score 3 scripts
arne-henningsen
micEconAids:Demand Analysis with the Almost Ideal Demand System (AIDS)
Functions and tools for analysing consumer demand with the Almost Ideal Demand System (AIDS) suggested by Deaton and Muellbauer (1980).
Maintained by Arne Henningsen. Last updated 3 years ago.
2.3 match 7 stars 3.41 score 37 scripts
bioc
combi:Compositional omics model based visual integration
This explorative ordination method combines quasi-likelihood estimation, compositional regression models and latent variable models for integrative visualization of several omics datasets. Both unconstrained and constrained integration are available. The results are shown as interpretable, compositional multiplots.
Maintained by Stijn Hawinkel. Last updated 5 months ago.
metagenomics dimensionreduction microbiome visualization metabolomics
1.7 match 1 stars 4.48 score 7 scripts
cran
mistr:Mixture and Composite Distributions
A flexible computational framework for mixture distributions with the focus on the composite models.
Maintained by Lukas Sablica. Last updated 2 years ago.
2.3 match 3.38 score 4 dependents
cran
iAdapt:Two-Stage Adaptive Dose-Finding Clinical Trial Design
Simulate and implement early phase two-stage adaptive dose-finding design for binary and quasi-continuous toxicity endpoints. See Chiuzan et al. (2018) for further reading <DOI:10.1080/19466315.2018.1462727>.
Maintained by Alyssa Vanderbeek. Last updated 4 years ago.
2.4 match 3.18 score 7 scripts
assaforon
cir:Centered Isotonic Regression and Dose-Response Utilities
Isotonic regression (IR) and its improvement: centered isotonic regression (CIR). CIR is recommended in particular with small samples. Also, interval estimates for both, and additional utilities such as plotting dose-response data. For dev version and change history, see GitHub assaforon/cir.
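Ordinary isotonic regression, the baseline that CIR refines, is computed by the pool-adjacent-violators algorithm (PAVA); a minimal Python sketch (plain IR only, not the package's centered variant):

```python
def pava(y, w=None):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y,
    with optional observation weights w. Adjacent blocks whose means
    violate monotonicity are merged into their weighted average."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    means, weights, counts = [], [], []   # one entry per merged block
    for yi, wi in zip(y, w):
        means.append(yi); weights.append(wi); counts.append(1)
        # merge backwards while the last two block means are out of order
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            wm = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wm)
            weights.append(wm); counts.append(c1 + c2)
    fit = []
    for m, c in zip(means, counts):
        fit.extend([m] * c)               # expand blocks back to n values
    return fit

# pava([1, 3, 2, 4]) pools the violating pair (3, 2) into 2.5, 2.5.
```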
Maintained by Assaf P. Oron. Last updated 2 months ago.
1.6 match 4.45 score 19 scripts 1 dependents
cran
crov:Constrained Regression Model for an Ordinal Response and Ordinal Predictors
Fits a constrained regression model for an ordinal response with ordinal predictors and possibly others, Espinosa and Hennig (2019) <DOI:10.1007/s11222-018-9842-2>. The parameter estimates associated with an ordinal predictor are constrained to be monotonic. If a monotonicity direction (isotonic or antitonic) is not specified for an ordinal predictor by the user, then one of the available methods will either establish it or drop the monotonicity assumption. Two monotonicity tests are also available to test the null hypothesis of monotonicity over a set of parameters associated with an ordinal predictor.
Maintained by Javier Espinosa. Last updated 2 years ago.
6.9 match 1.00 score
bioc
Icens:NPMLE for Censored and Truncated Data
Many functions for computing the NPMLE for censored and truncated data.
Maintained by Bioconductor Package Maintainer. Last updated 5 months ago.
1.8 match 3.83 score 16 scripts 7 dependents
kolassa-dev
PHInfiniteEstimates:Tools for Inference in the Presence of a Monotone Likelihood
Proportional hazards estimation in the presence of a partially monotone likelihood has difficulties, in that finite estimators do not exist. These difficulties are related to those arising from logistic and multinomial regression. References for methods are given in the separate function documents. Supported by grant NSF DMS 1712839.
Maintained by John E. Kolassa. Last updated 1 years ago.
6.8 match 1.00 score
aljacq
LorenzRegression:Lorenz and Penalized Lorenz Regressions
Inference for the Lorenz and penalized Lorenz regressions. More broadly, the package proposes functions to assess inequality and graphically represent it. The Lorenz Regression procedure is introduced in Heuchenne and Jacquemain (2022) <doi:10.1016/j.csda.2021.107347> and in Jacquemain, A., C. Heuchenne, and E. Pircalabelu (2024) <doi:10.1214/23-EJS2200>.
Maintained by Alexandre Jacquemain. Last updated 13 days ago.
1.7 match 1 stars 3.95 score
cran
qrnn:Quantile Regression Neural Network
Fit quantile regression neural network models with optional left censoring, partial monotonicity constraints, generalized additive model constraints, and the ability to fit multiple non-crossing quantile functions following Cannon (2011) <doi:10.1016/j.cageo.2010.07.005> and Cannon (2018) <doi:10.1007/s00477-018-1573-6>.
Maintained by Alex J. Cannon. Last updated 1 year ago.
2.1 match 8 stars 3.08 score 1 dependents
vitomuggeo
quantregGrowth:Non-Crossing Additive Regression Quantiles and Non-Parametric Growth Charts
Fits non-crossing regression quantiles as a function of linear covariates and multiple smooth terms, including varying coefficients, via B-splines with L1-norm difference penalties. Random intercepts and variable selection are allowed via the lasso penalties. The smoothing parameters are estimated as part of the model fitting, see Muggeo and others (2021) <doi:10.1177/1471082X20929802>. Monotonicity and concavity constraints on the fitted curves are allowed, see Muggeo and others (2013) <doi:10.1007/s10651-012-0232-1>; see also <doi:10.13140/RG.2.2.12924.85122> or <doi:10.13140/RG.2.2.29306.21445> for some code examples.
Maintained by Vito M. R. Muggeo. Last updated 10 months ago.
2.3 match 1 stars 2.82 score 22 scripts 1 dependents
hezibu
alien:Estimate Invasive and Alien Species (IAS) Introduction Rates
Easily estimate the introduction rates of alien species given first records data. It specializes in addressing the role of sampling on the pattern of discoveries, thus providing better estimates than using Generalized Linear Models which assume perfect immediate detection of newly introduced species.
Maintained by Yehezkel Buba. Last updated 9 months ago.
1.3 match 1 stars 5.08 score 10 scripts
snthomas99
clinDR:Simulation and Analysis Tools for Clinical Dose Response Modeling
Bayesian and ML Emax model fitting, graphics and simulation for clinical dose response. The summary data from the dose response meta-analyses in Thomas, Sweeney, and Somayaji (2014) <doi:10.1080/19466315.2014.924876>, Thomas and Roy (2016) <doi:10.1080/19466315.2016.1256229>, and Wu, Banerjee, Jin, Menon, Martin, and Heatherington (2017) <doi:10.1177/0962280216684528> are included in the package. The prior distributions for the Bayesian analyses default to the posterior predictive distributions derived from these references.
Maintained by Neal Thomas. Last updated 2 years ago.
3.4 match 1 stars 1.85 score 71 scripts
toduckhanh
bcROCsurface:Bias-Corrected Methods for Estimating the ROC Surface of Continuous Diagnostic Tests
The bias-corrected estimation methods for the receiver operating characteristic (ROC) surface and the volume under the ROC surface (VUS) under the missing at random (MAR) assumption.
Maintained by Duc-Khanh To. Last updated 1 years ago.
1.8 match 3.45 score 14 scripts
cran
UtilityFrailtyPH12:Implementing EFF-TOX and Monotone Utility Based Phase 12 Trials
Contains functions for simulating the phase 1-2 trial designs described by Chapple and Thall (2019), including simulation of the EFF-TOX trial and simulation and implementation of the U12 trial. Functions for implementing the EFF-TOX trial are found in the package 'Phase123'.
Maintained by Andrew G Chapple. Last updated 6 years ago.
6.1 match 1.00 score 6 scripts
bioc
miRcomp:Tools to assess and compare miRNA expression estimatation methods
Based on a large miRNA dilution study, this package provides tools to read in the raw amplification data and use these data to assess the performance of methods that estimate expression from the amplification curves.
Maintained by Matthew N. McCall. Last updated 5 months ago.
software qpcr preprocessing qualitycontrol
1.8 match 3.30 score 1 scripts
schoonees
cds:Constrained Dual Scaling for Detecting Response Styles
This is an implementation of constrained dual scaling for detecting response styles in categorical data, including utility functions. The procedure involves adding additional columns to the data matrix representing the boundaries between the rating categories. The resulting matrix is then doubled and analyzed by dual scaling. One-dimensional solutions are sought which provide optimal scores for the rating categories. These optimal scores are constrained to follow monotone quadratic splines. Clusters are introduced within which the response styles can vary. The type of response style present in a cluster can be diagnosed from the optimal scores for said cluster, and this can be used to construct an imputed version of the data set which adjusts for response styles.
Maintained by Pieter Schoonees. Last updated 9 years ago.
2.2 match 2.65 score 37 scripts 1 dependents
waternumbers
dynatop:An Implementation of Dynamic TOPMODEL Hydrological Model in R
An R implementation and enhancement of the Dynamic TOPMODEL semi-distributed hydrological model originally proposed by Beven and Freer (2001) <doi:10.1002/hyp.252>. The 'dynatop' package implements code for simulating models which can be created using the 'dynatopGIS' package.
Maintained by Paul Smith. Last updated 5 months ago.
1.1 match 3 stars 5.08 score 9 scripts
cran
CMLS:Constrained Multivariate Least Squares
Solves multivariate least squares (MLS) problems subject to constraints on the coefficients, e.g., non-negativity, orthogonality, equality, inequality, monotonicity, unimodality, smoothness, etc. Includes flexible functions for solving MLS problems subject to user-specified equality and/or inequality constraints, as well as a wrapper function that implements 24 common constraint options. Also does k-fold or generalized cross-validation to tune constraint options for MLS problems. See ten Berge (1993, ISBN:9789066950832) for an overview of MLS problems, and see Goldfarb and Idnani (1983) <doi:10.1007/BF02591962> for a discussion of the underlying quadratic programming algorithm.
Maintained by Nathaniel E. Helwig. Last updated 2 years ago.
2.3 match 2.48 score 5 dependents
fernandotusell
cat:Analysis and Imputation of Categorical-Variable Datasets with Missing Values
Performs analysis of categorical-variable datasets with missing values. Implements methods from Schafer, J. L., Analysis of Incomplete Multivariate Data, Chapman and Hall.
Maintained by Fernando Tusell. Last updated 2 years ago.
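A minimal sketch of the classic Schafer-style workflow in 'cat' (preliminary pass, EM fit, then imputation); the toy matrix below is made up for illustration:

```r
library(cat)

# toy categorical data (categories 1/2) with missing entries
x <- matrix(c(1, 2,
              2, 1,
              1, NA,
              NA, 2,
              2, 2,
              1, 1), ncol = 2, byrow = TRUE)

s <- prelim.cat(x)         # preliminary manipulations (grouping, missingness patterns)
theta <- em.cat(s)         # ML estimate of the cell probabilities via EM
rngseed(1234)              # 'cat' requires seeding its own RNG before imputation
ximp <- imp.cat(s, theta)  # impute missing categories under the fitted model
```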
1.7 match 3.27 score 52 scripts 2 dependents
jcatwood
VeccTMVN:Multivariate Normal Probabilities using Vecchia Approximation
Under a different representation of the multivariate normal (MVN) probability, we can use the Vecchia approximation to sample the integrand at a linear complexity with respect to n. Additionally, both the SOV algorithm from Genz (1992) and the exponential-tilting method from Botev (2017) can be adapted to linear complexity. The reference for the method implemented in this package is Jian Cao and Matthias Katzfuss (2024) "Linear-Cost Vecchia Approximation of Multivariate Normal Probabilities" <doi:10.48550/arXiv.2311.09426>. Two major references for the development of our method are Alan Genz (1992) "Numerical Computation of Multivariate Normal Probabilities" <doi:10.1080/10618600.1992.10477010> and Z. I. Botev (2017) "The Normal Law Under Linear Restrictions: Simulation and Estimation via Minimax Tilting" <doi:10.48550/arXiv.1603.04166>.
Maintained by Jian Cao. Last updated 4 months ago.
normal-distribution sampling-methods statistics fortran openblas cpp openmp
1.6 match 2 stars 3.56 score 36 scripts
cran
DIconvex:Finding Patterns of Monotonicity and Convexity in Data
Given an initial set of points, this package minimizes the number of elements to discard from this set such that there exists at least one monotonic and convex mapping within pre-specified upper and lower bounds.
Maintained by Liudmila Karagyaur. Last updated 6 years ago.
5.2 match 1.00 score 1 scripts
ugroempi
ic.infer:Inequality Constrained Inference in Linear Normal Situations
Implements inequality constrained inference. This includes parameter estimation in normal (linear) models under linear equality and inequality constraints, as well as normal likelihood ratio tests involving inequality-constrained hypotheses. For inequality-constrained linear models, averaging over R-squared for different orderings of regressors is also included.
Maintained by Ulrike Groemping. Last updated 1 year ago.
1.7 match 3.03 score 17 scripts 2 dependents
amishra-stats
gofar:Generalized Co-Sparse Factor Regression
Divide and conquer approach for estimating the low-rank and sparse coefficient matrix in generalized co-sparse factor regression. For more details, please refer to the manuscript: Mishra, Aditya, Dipak K. Dey, Yong Chen, and Kun Chen. Generalized co-sparse factor regression. Computational Statistics & Data Analysis 157 (2021): 107127.
Maintained by Aditya Mishra. Last updated 3 years ago.
1.7 match 2 stars 3.00 score 1 scripts
yili-hong
MICsplines:The Computing of Monotonic Spline Bases and Constrained Least-Squares Estimates
Provides a C implementation for computing monotonic spline bases, including M-splines, I-splines, and C-splines, denoted MIC splines. The definitions of the spline bases are described in Meyer (2008) <doi:10.1214/08-AOAS167>. The package also computes constrained least-squares estimates when a subset of, or all of, the regression coefficients are constrained to be non-negative.
Maintained by Yili Hong. Last updated 4 years ago.
5.0 match 1.00 score
skranz
phack:Detecting p-Hacking using Elliott et al. (2022)
Implements the tests from Elliott et al. (2022) for detecting p-hacking. The package is essentially a simple wrapper around the code provided in the article's code and data supplement, with some cosmetic changes. Reference: Elliott, G., Kudrin, N., & Wüthrich, K. (2022). Detecting p-Hacking. Econometrica, 90(2), 887-906.
Maintained by Sebastian Kranz. Last updated 2 years ago.
1.6 match 2 stars 3.00 score
daniel-gerhard
goric:Generalized Order-Restricted Information Criterion
Generalized Order-Restricted Information Criterion (GORIC) value for a set of hypotheses in multivariate linear models and generalised linear models.
Maintained by Daniel Gerhard. Last updated 4 years ago.
information-criterion linear-models
1.3 match 3.81 score 13 scripts
cran
NHPoisson:Modelling and Validation of Non Homogeneous Poisson Processes
Tools for modelling, ML estimation, validation analysis and simulation of non-homogeneous Poisson processes in time.
Maintained by Ana C. Cebrian. Last updated 5 years ago.
1.7 match 2 stars 2.71 score 43 scripts 2 dependents
vivianephilipps
weightQuant:Weights for Incomplete Longitudinal Data and Quantile Regression
Estimation of observation-specific weights for incomplete longitudinal data and bootstrap procedure for weighted quantile regressions. See Jacqmin-Gadda, Rouanet, Mba, Philipps, Dartigues (2020) for details <doi:10.1177/0962280220909986>.
Maintained by Viviane Philipps. Last updated 3 years ago.
1.7 match 1 stars 2.70 score 3 scripts
distancedevelopment
Distance:Distance Sampling Detection Function and Abundance Estimation
A simple way of fitting detection functions to distance sampling data for both line and point transects. Adjustment term selection, left and right truncation as well as monotonicity constraints and binning are supported. Abundance and density estimates can also be calculated (via a Horvitz-Thompson-like estimator) if survey area information is provided. See Miller et al. (2019) <doi:10.18637/jss.v089.i01> for more information on methods and <https://examples.distancesampling.org/> for example analyses.
Maintained by Laura Marshall. Last updated 1 day ago.
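A hedged sketch of fitting a detection function with `ds()`; the distances below are simulated for illustration and are not a package dataset:

```r
library(Distance)

set.seed(1)
# simulated perpendicular distances for a line-transect survey
d <- data.frame(distance = abs(rnorm(200, sd = 10)))

# half-normal key, no adjustment terms, right-truncated at 25
fit <- ds(d, truncation = 25, key = "hn", adjustment = NULL)
summary(fit)
```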
0.5 match 11 stars 8.96 score 358 scripts 3 dependents
jcatwood
nntmvn:Draw Samples of Truncated Multivariate Normal Distributions
Draw samples from truncated multivariate normal distribution using the sequential nearest neighbor (SNN) method introduced in "Scalable Sampling of Truncated Multivariate Normals Using Sequential Nearest-Neighbor Approximation" <doi:10.48550/arXiv.2406.17307>.
Maintained by Jian Cao. Last updated 1 month ago.
1.6 match 2.85 score 3 scripts
eliebs
depcoeff:Dependency Coefficients
Functions to compute coefficients measuring the dependence of two or more than two variables. The functions can be deployed to gain information about functional dependencies of the variables with emphasis on monotone functions. The statistics describe how well one response variable can be approximated by a monotone function of other variables. In regression analysis the variable selection is an important issue. In this framework the functions could be useful tools in modeling the regression function. Detailed explanations on the subject can be found in papers Liebscher (2014) <doi:10.2478/demo-2014-0004>; Liebscher (2017) <doi:10.1515/demo-2017-0012>; Liebscher (2019, submitted).
Maintained by Eckhard Liebscher. Last updated 5 years ago.
4.3 match 1.00 score
xliaosdsu
csurvey:Constrained Regression for Survey Data
Domain mean estimation with monotonicity or block monotone constraints. See Xu X, Meyer MC and Opsomer JD (2021) <doi:10.1016/j.jspi.2021.02.004> for more details.
Maintained by Xiyue Liao. Last updated 26 days ago.
4.3 match 1.00 score
rahmasarina
NMTox:Dose-Response Relationship Analysis of Nanomaterial Toxicity
Perform an exploration and a preliminary analysis of the dose-response relationship of nanomaterial toxicity. Several functions are provided for data exploration, including functions for creating a subset of a dataset, frequency tables, and plots. Inference for order-restricted dose-response data is performed by testing the significance of a monotonic dose-response relationship, using the Williams, Marcus, M, Modified M, and Likelihood ratio tests. Several methods of multiplicity adjustment are also provided. Description of the methods can be found in <https://github.com/rahmasarina/dose-response-analysis/blob/main/Methodology.pdf>.
Maintained by Rahmasari Nur Azizah. Last updated 3 years ago.
3.9 match 1.00 score
sokbae
ciccr:Causal Inference in Case-Control and Case-Population Studies
Estimation and inference methods for causal relative and attributable risk in case-control and case-population studies under the monotone treatment response and monotone treatment selection assumptions. For more details, see the paper by Jun and Lee (2023), "Causal Inference under Outcome-Based Sampling with Monotonicity Assumptions," <arXiv:2004.08318 [econ.EM]>, accepted for publication in Journal of Business & Economic Statistics.
Maintained by Sokbae Lee. Last updated 1 year ago.
case-control-studies causal-inference partial-identification treatment-effects
0.9 match 2 stars 4.00 score 4 scripts
insitro
AllelicSeries:Allelic Series Test
Implementation of gene-level rare variant association tests targeting allelic series: genes where increasingly deleterious mutations have increasingly large phenotypic effects. The COding-variant Allelic Series Test (COAST) operates on the benign missense variants (BMVs), deleterious missense variants (DMVs), and protein truncating variants (PTVs) within a gene. COAST uses a set of adjustable weights that tailor the test towards rejecting the null hypothesis for genes where the average magnitude of effect increases monotonically from BMVs to DMVs to PTVs. See McCaw ZR, O’Dushlaine C, Somineni H, Bereket M, Klein C, Karaletsos T, Casale FP, Koller D, Soare TW. (2023) "An allelic series rare variant association test for candidate gene discovery" <doi:10.1016/j.ajhg.2023.07.001>.
Maintained by Zachary McCaw. Last updated 1 month ago.
0.5 match 13 stars 6.97 score 8 scripts
andrewyyp
NMMIPW:Inverse Probability Weighting under Non-Monotone Missing
Fits the inverse probability weighting (IPW) estimator and the augmented inverse probability weighting (AIPW) estimator for non-monotone missing-at-random data.
Maintained by Andrew Ying. Last updated 3 years ago.
3.5 match 1.00 score
micecon
micEconDistRay:Econometric Production Analysis with Ray-Based Distance Functions
Econometric analysis of multiple-input-multiple-output production technologies with ray-based input distance functions as suggested by Price and Henningsen (2022): "A Ray-Based Input Distance Function to Model Zero-Valued Output Quantities: Derivation and an Empirical Application", <https://ideas.repec.org/p/foi/wpaper/2022_03.html>.
Maintained by Arne Henningsen. Last updated 2 years ago.
1.7 match 2.00 score
cran
contoureR:Contouring of Non-Regular Three-Dimensional Data
Create contour lines for a non-regular series of points, potentially from a non-regular canvas.
Maintained by Nicholas Hamilton. Last updated 10 years ago.
1.7 match 2 stars 2.00 score
georgheinze
coxphf:Cox Regression with Firth's Penalized Likelihood
Implements Firth's penalized maximum likelihood bias reduction method for Cox regression, which has been shown to provide a solution in the case of monotone likelihood (non-convergence of the likelihood function); see Heinze and Schemper (2001) and Heinze and Dunkler (2008). The program fits profile penalized likelihood confidence intervals, which have been shown to outperform Wald confidence intervals.
Maintained by Georg Heinze. Last updated 2 years ago.
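A hedged sketch of `coxphf()`, which mirrors the `coxph()` formula interface; the toy dataset is constructed so the binary covariate perfectly separates events, i.e., a monotone-likelihood case where an unpenalized fit would not converge:

```r
library(survival)
library(coxphf)

# toy data: all events occur in the x = 1 group (monotone likelihood)
d <- data.frame(time   = c(4, 6, 8, 10, 12, 14),
                status = c(1, 1, 1, 0, 0, 0),
                x      = c(1, 1, 1, 0, 0, 0))

fit <- coxphf(Surv(time, status) ~ x, data = d)
summary(fit)  # finite estimate with profile penalized likelihood CIs
```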
0.5 match 2 stars 5.63 score 36 scripts 1 dependents
forestry-labs
Rforestry:Random Forests, Linear Trees, and Gradient Boosting for Inference and Interpretability
Provides fast implementations of Honest Random Forests, Gradient Boosting, and Linear Random Forests, with an emphasis on inference and interpretability. Additionally contains methods for variable importance, out-of-bag prediction, regression monotonicity, and several methods for missing data imputation.
Maintained by Theo Saarinen. Last updated 7 days ago.
0.5 match 5.57 score 82 scripts 1 dependents
r-forge
Gifi:Multivariate Analysis with Optimal Scaling
Implements categorical principal component analysis ('PRINCALS'), multiple correspondence analysis ('HOMALS'), monotone regression analysis ('MORALS'). It replaces the 'homals' package.
Maintained by Patrick Mair. Last updated 3 months ago.
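A hedged sketch of categorical PCA via `princals()`; the choice of `mtcars` columns is illustrative only (any few-leveled, ordered variables would do):

```r
library(Gifi)

# treat a few discrete mtcars columns as ordered categorical variables
fit <- princals(mtcars[, c("cyl", "gear", "carb")])
fit        # prints loss function value and eigenvalues
plot(fit)  # transformation plots for the optimally scaled variables
```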
0.5 match 4.90 score 37 scripts 1 dependents
cran
pkmon:Least-Squares Estimator under k-Monotony Constraint for Discrete Functions
We implement two least-squares estimators under k-monotony constraint using a method based on the Support Reduction Algorithm from Groeneboom et al (2008) <DOI:10.1111/j.1467-9469.2007.00588.x>. The first one is a projection estimator on the set of k-monotone discrete functions. The second one is a projection on the set of k-monotone discrete probabilities. This package provides functions to generate samples from the spline basis from Lefevre and Loisel (2013) <DOI:10.1239/jap/1378401239>, and from mixtures of splines.
Maintained by Francois Deslandes. Last updated 2 years ago.
2.5 match 1.00 score
andrija-djurovic
PDtoolkit:Collection of Tools for PD Rating Model Development and Validation
The goal of this package is to cover the most common steps in probability of default (PD) rating model development and validation. The main procedures available are those that refer to univariate, bivariate, and multivariate analysis, calibration, and validation. Along with the accompanying 'monobin' and 'monobinShiny' packages, 'PDtoolkit' provides functions suitable for different data transformation and modeling tasks such as: imputations, monotonic binning of numeric risk factors, binning of categorical risk factors, weights of evidence (WoE) and information value (IV) calculations, WoE coding (replacement of risk factor modalities with WoE values), risk factor clustering, area under curve (AUC) calculation, and others. Additionally, the package provides a set of validation functions for testing the homogeneity, heterogeneity, discriminatory power, and predictive power of the model.
Maintained by Andrija Djurovic. Last updated 1 year ago.
0.5 match 14 stars 4.78 score 86 scripts
aryapoddar
scorecardModelUtils:Credit Scorecard Modelling Utils
Provides infrastructure functionalities such as missing value treatment, information value calculation, Gini calculation, etc., which are used for developing a traditional credit scorecard as well as a machine learning based model. The functionalities defined are standard steps for any credit underwriting scorecard development, extensively used in the financial domain.
Maintained by Arya Poddar. Last updated 6 years ago.
1.6 match 1.46 score 29 scripts
fernandalschumacher
skewlmm:Scale Mixture of Skew-Normal Linear Mixed Models
It fits scale mixture of skew-normal linear mixed models using either an expectation–maximization (EM) type algorithm or its accelerated version (Damped Anderson Acceleration with Epsilon Monotonicity, DAAREM), including some possibilities for modeling the within-subject dependence. Details can be found in Schumacher, Lachos and Matos (2021) <doi:10.1002/sim.8870>.
Maintained by Fernanda L. Schumacher. Last updated 2 months ago.
0.5 match 6 stars 4.43 score 10 scripts
wjbraun
sharpData:Data Sharpening
Functions and data sets inspired by data sharpening - data perturbation to achieve improved performance in nonparametric estimation, as described in Choi, E., Hall, P. and Rousson, V. (2000). Capabilities for enhanced local linear regression function and derivative estimation are included, as well as an asymptotically correct iterated data sharpening estimator for any degree of local polynomial regression estimation. A cross-validation-based bandwidth selector is included which, in concert with the iterated sharpener, will often provide superior performance, according to a median integrated squared error criterion. Sample data sets are provided to illustrate function usage.
Maintained by W.J. Braun. Last updated 4 years ago.
2.0 match 1.00 score
tkmckenzie
snfa:Smooth Non-Parametric Frontier Analysis
Fitting of non-parametric production frontiers for use in efficiency analysis. Methods are provided for both a smooth analogue of Data Envelopment Analysis (DEA) and a non-parametric analogue of Stochastic Frontier Analysis (SFA). Frontiers are constructed for multiple inputs and a single output using constrained kernel smoothing as in Racine et al. (2009), which allow for the imposition of monotonicity and concavity constraints on the estimated frontier.
Maintained by Taylor McKenzie. Last updated 5 years ago.
0.5 match 3.70 score 8 scripts
rvaradhan
turboEM:A Suite of Convergence Acceleration Schemes for EM, MM and Other Fixed-Point Algorithms
Algorithms for accelerating the convergence of slow, monotone sequences from smooth contraction mappings such as the EM and MM algorithms. It can be used to accelerate any smooth, linearly convergent fixed-point iteration. A tutorial-style introduction to this package is available in a vignette on the CRAN download page or, when the package is loaded in an R session, with vignette("turboEM").
Maintained by Ravi Varadhan. Last updated 4 years ago.
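A hedged sketch of `turboem()` on a toy contraction mapping (not from the package) with fixed point 0.5 and linear convergence rate 0.9, the kind of slow iteration the acceleration schemes target:

```r
library(turboEM)

# toy contraction: p -> 0.5 + 0.9 * (p - 0.5), fixed point at 0.5
fixptfn <- function(p) 0.5 + 0.9 * (p - 0.5)

# compare plain fixed-point iteration ("em") against SQUAREM acceleration
res <- turboem(par = 0, fixptfn = fixptfn, method = c("em", "squarem"))
res$pars  # both schemes converge toward 0.5; squarem needs far fewer map evaluations
```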
0.5 match 3.64 score 24 scripts 6 dependents
mihaiconstantin
powerly:Sample Size Analysis for Psychological Networks and More
An implementation of the sample size computation method for network models proposed by Constantin et al. (2021) <doi:10.31234/osf.io/j5v7u>. The implementation takes the form of a three-step recursive algorithm designed to find an optimal sample size given a model specification and a performance measure of interest. It starts with a Monte Carlo simulation step for computing the performance measure and a statistic at various sample sizes selected from an initial sample size range. It continues with a monotone curve-fitting step for interpolating the statistic across the entire sample size range. The final step employs stratified bootstrapping to quantify the uncertainty around the fitted curve.
Maintained by Mihai Constantin. Last updated 2 years ago.
network-models power-analysis psychology sample-size-calculation
0.5 match 8 stars 3.60 score 3 scripts
cran
ORCME:Order Restricted Clustering for Microarray Experiments
Provides clustering of genes with similar dose-response (or time-course) profiles. It implements the method described by Lin et al. (2012).
Maintained by Rudradev Sengupta. Last updated 10 years ago.
1.8 match 1.00 score
cran
weightedRank:Sensitivity Analysis Using Weighted Rank Statistics
Performs a sensitivity analysis using weighted rank tests in observational studies with I blocks of size J; see Rosenbaum (2024) <doi:10.1080/01621459.2023.2221402>. The package can perform adaptive inference in block designs; see Rosenbaum (2012) <doi:10.1093/biomet/ass032>. The main functions are wgtRank(), wgtRankCI() and wgtRanktt().
Maintained by Paul Rosenbaum. Last updated 9 months ago.
1.8 match 1.00 score
laubok
PStrata:Principal Stratification Analysis in R
Estimating causal effects in the presence of post-treatment confounding using principal stratification. 'PStrata' allows for customized monotonicity assumptions and exclusion restriction assumptions, with automatic full Bayesian inference supported by 'Stan'. The main function to use in this package is PStrata(), which provides posterior estimates of principal causal effect with uncertainty quantification. Visualization tools are also provided for diagnosis and interpretation. See Liu and Li (2023) <arXiv:2304.02740> for details.
Maintained by Bo Liu. Last updated 1 year ago.
0.5 match 6 stars 3.48 score
s-baumann
schumaker:Schumaker Shape-Preserving Spline
This is a shape-preserving spline <doi:10.1137/0720057> which is guaranteed to be monotonic and concave (or convex) whenever the data are monotonic and concave (or convex). It does not use any optimisation and is therefore quick, and it converges smoothly to a fixed point in economic dynamics problems, including value function iteration. It also automatically gives the first two derivatives of the spline, with options for determining behaviour when the spline is evaluated outside the interpolation domain.
Maintained by Stuart Baumann. Last updated 4 years ago.
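A hedged sketch of `Schumaker()`: given monotone, concave data it returns the spline and its first two derivatives as plain R functions (the log data below are illustrative):

```r
library(schumaker)

x <- 1:10
y <- log(x)  # monotone increasing and concave

sp <- Schumaker(x, y)
sp$Spline(2.5)            # interpolated value; monotonicity/concavity preserved
sp$DerivativeSpline(2.5)  # first derivative of the spline at 2.5
```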
0.8 match 2.26 score 18 scripts
cran
sonar:Fundamental Formulas for Sonar
Formulas for calculating sound velocity, water pressure, depth, density, absorption and sonar equations.
Maintained by Jose Gama. Last updated 9 years ago.
1.8 match 1.00 score
cran
assist:A Suite of R Functions Implementing Spline Smoothing Techniques
Fit various smoothing spline models. Includes an ssr() function for smoothing spline regression, an nnr() function for nonparametric nonlinear regression, an snr() function for semiparametric nonlinear regression, an slm() function for semiparametric linear mixed-effects models, and an snm() function for semiparametric nonlinear mixed-effects models. See Wang (2011) <doi:10.1201/b10954> for an overview.
Maintained by Yuedong Wang. Last updated 2 years ago.
1.8 match 1.00 score
finyang
flap:Forecast Linear Augmented Projection
The Forecast Linear Augmented Projection (flap) method reduces forecast variance by adjusting the forecasts of multivariate time series to be consistent with the forecasts of linear combinations (components) of the series by projecting all forecasts onto the space where the linear constraints are satisfied. The forecast variance can be reduced monotonically by including more components. For a given number of components, the flap method achieves maximum forecast variance reduction among linear projections.
Maintained by Yangzhuoran Fin Yang. Last updated 9 months ago.
0.5 match 1 stars 3.30 score 2 scripts
florianjansen
eHOF:Extended HOF (Huisman-Olff-Fresco) Models
Extended and enhanced hierarchical logistic regression models (called Huisman-Olff-Fresco models in biology; see Huisman et al. 1993, Journal of Vegetation Science <doi:10.1111/jvs.12050>). Response curves along one-dimensional gradients, including no-response, monotone, plateau, unimodal and bimodal models.
Maintained by Florian Jansen. Last updated 3 months ago.
0.5 match 3.16 score 24 scripts
sangillee
CBSr:Fits Cubic Bezier Spline Functions to Intertemporal and Risky Choice Data
Uses monotonically constrained Cubic Bezier Splines (CBS) to approximate latent utility functions in intertemporal choice and risky choice data. For more information, see Lee, Glaze, Bradlow, and Kable <doi:10.1007/s11336-020-09723-4>.
Maintained by Sangil Lee. Last updated 4 years ago.
0.5 match 2.70 score
nabipoor
ROCFTP.MMS:Perfect Sampling
The algorithm provided in this package generates a perfect sample for unimodal or multimodal posteriors. Read-Once Coupling From The Past with the Metropolis multishift is used to generate a perfect sample from a given posterior density, based on the two extreme starting paths: the minimum and maximum of the range of most interest of the posterior. It uses the monotone random operation of the multishift coupler, which makes it possible to sandwich the entire state space into one point, so that the Markov chains starting from the maximum and the minimum will coalesce. The generated sample is independent of the starting points. It is useful for mixture distributions too. The output of this function is a real value that is an exact draw from the posterior distribution.
Maintained by Majid Nabipoor. Last updated 3 years ago.
0.5 match 2.70 score 2 scripts
bklamer
rankdifferencetest:Kornbrot's Rank Difference Test
Implements Kornbrot's rank difference test as described in <doi:10.1111/j.2044-8317.1990.tb00939.x>. This method is a modified Wilcoxon signed-rank test which produces consistent and meaningful results for ordinal or monotonically-transformed data.
Maintained by Brett Klamer. Last updated 6 months ago.
0.5 match 2.18 score 4 scripts
uwemenzel
RMThreshold:Signal-Noise Separation in Random Matrices by using Eigenvalue Spectrum Analysis
An algorithm which can be used to determine an objective threshold for signal-noise separation in large random matrices (correlation matrices, mutual information matrices, network adjacency matrices) is provided. The package makes use of the results of Random Matrix Theory (RMT). The algorithm increments a suppositional threshold monotonically, thereby recording the eigenvalue spacing distribution of the matrix. According to RMT, that distribution undergoes a characteristic change when the threshold properly separates signal from noise. By using the algorithm, the modular structure of a matrix - or of the corresponding network - can be unraveled.
Maintained by Uwe Menzel. Last updated 9 years ago.
0.5 match 4 stars 2.16 score 18 scripts
sscogges
noncomplyR:Bayesian Analysis of Randomized Experiments with Non-Compliance
Functions for Bayesian analysis of data from randomized experiments with non-compliance. The functions are based on the models described in Imbens and Rubin (1997) <doi:10.1214/aos/1034276631>. Currently only two types of outcome models are supported: binary outcomes and normally distributed outcomes. Models can be fit with and without the exclusion restriction and/or the strong access monotonicity assumption. Models are fit using the data augmentation algorithm as described in Tanner and Wong (1987) <doi:10.2307/2289457>.
Maintained by Scott Coggeshall. Last updated 8 years ago.
0.5 match 2.00 score 7 scripts
cran
SurrogateParadoxTest:Empirical Testing of Surrogate Paradox Assumptions
Provides functions to nonparametrically assess assumptions necessary to prevent the surrogate paradox through hypothesis tests of stochastic dominance, monotonicity of regression functions, and non-negative residual treatment effects. More details are available in Hsiao et al 2025 (under review). A tutorial for this package can be found at <https://laylaparast.com/home/SurrogateParadoxTest.html>.
Maintained by Emily Hsiao. Last updated 2 months ago.
0.5 match 1.30 score
cran
isoboost:Isotonic Boosting Classification Rules
In classification problems a monotone relation between some predictors and the classes may be assumed. In this package 'isoboost' we propose new boosting algorithms, based on LogitBoost, that incorporate this isotonicity information, yielding more accurate and easily interpretable rules.
Maintained by David Conde. Last updated 4 years ago.
0.5 match 1.00 score
cran
GeneF:Package for Generalized F-Statistics
Implementation of several generalized F-statistics. The current version includes a generalized F-statistic based on the flexible isotonic/monotonic regression or order restricted hypothesis testing. Based on: Y. Lai (2011) <doi:10.1371/journal.pone.0019754>.
Maintained by Yinglei Lai. Last updated 3 years ago.
0.5 match 1.00 score
rh8liuqy
DNNSIM:Single-Index Neural Network for Skewed Heavy-Tailed Data
Provides a deep neural network model with a monotonic increasing single index function tailored for periodontal disease studies. The residuals are assumed to follow a skewed T distribution, a skewed normal distribution, or a normal distribution. More details can be found at Liu, Huang, and Bai (2024) <doi:10.1016/j.csda.2024.108012>.
Maintained by Qingyang Liu. Last updated 2 months ago.
0.5 match 1.00 score
cran
LIStest:Tests of independence based on the Longest Increasing Subsequence
Tests for independence between X and Y computed from a paired sample (x1,y1),...,(xn,yn) of (X,Y), using one of the following statistics: (a) the Longest Increasing Subsequence (Ln), (b) JLn, a Jackknife version of Ln, or (c) JLMn, a Jackknife version of the longest monotonic subsequence. This family of tests can be applied under the assumption of continuity of X and Y.
Maintained by J. E. Garcia. Last updated 11 years ago.
0.5 match 1.00 score
yanliu5
PenIC:Semiparametric Regression Analysis of Interval-Censored Data using Penalized Splines
Currently incorporates the generalized odds-rate model (a type of linear transformation model) for interval-censored data based on penalized monotonic B-splines. More methods under other semiparametric models, such as the cure model or the additive model, will be included in future versions. For more details see Lu, M., Liu, Y., Li, C. and Sun, J. (2019) <arXiv:1912.11703>.
Maintained by Yan Liu. Last updated 5 years ago.
0.5 match 1.00 score
guangbaog
DIRMR:Distributed Imputation for Random Effects Models with Missing Responses
By adding an over-relaxation factor to the PXEM (Parameter Expanded Expectation Maximization) method, the MOPXEM (Monotonically Overrelaxed Parameter Expanded Expectation Maximization) method is obtained, which is compared with existing EM (Expectation-Maximization)-like methods. The five methods are then distributed, processed, and compared, achieving good performance in convergence speed and result quality. The philosophy of the package is described in Guo G. (2022) <doi:10.1007/s00180-022-01270-z>.
Maintained by Guangbao Guo. Last updated 4 months ago.
0.5 match 1.00 score
numbersman77
OrdFacReg:Least Squares, Logistic, and Cox-Regression with Ordered Predictors
In biomedical studies, researchers are often interested in assessing the association between one or more ordinal explanatory variables and an outcome variable, at the same time adjusting for covariates of any type. The outcome variable may be continuous, binary, or represent censored survival times. In the absence of a precise knowledge of the response function, using monotonicity constraints on the ordinal variables improves efficiency in estimating parameters, especially when sample sizes are small. This package implements an active set algorithm that efficiently computes such estimators.
Maintained by Kaspar Rufibach. Last updated 10 years ago.
0.5 match 1.00 score
cran
optiscale:Optimal Scaling
Optimal scaling of a data vector, relative to a set of targets, is obtained through a least-squares transformation subject to appropriate measurement constraints. The targets are usually predicted values from a statistical model. If the data are nominal level, then the transformation must be identity-preserving. If the data are ordinal level, then the transformation must be monotonic. If the data are discrete, then tied data values must remain tied in the optimal transformation. If the data are continuous, then tied data values can be untied in the optimal transformation.
Maintained by Dave Armstrong. Last updated 10 months ago.
0.5 match 1.00 score
cran
Taba:Taba Robust Correlations
Calculates the robust Taba linear, Taba rank (monotonic), TabWil, and TabWil rank correlations. Test statistics as well as one sided or two sided p-values are provided for all correlations. Multiple correlations and p-values can be calculated simultaneously across multiple variables. In addition, users will have the option to use the partial, semipartial, and generalized partial correlations; where the partial and semipartial correlations use linear, logistic, or Poisson regression to modify the specified variable.
Maintained by Derek Wilus. Last updated 4 years ago.
0.5 match 1.00 score