Showing 84 of 84 results
r-forge
Matrix:Sparse and Dense Matrix Classes and Methods
A rich hierarchy of sparse and dense matrix classes, including general, symmetric, triangular, and diagonal matrices with numeric, logical, or pattern entries. Efficient methods for operating on such matrices, often wrapping the 'BLAS', 'LAPACK', and 'SuiteSparse' libraries.
Maintained by Martin Maechler. Last updated 7 days ago.
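Example (a minimal sketch of the sparse workflow; the 3 x 3 matrix is an arbitrary positive-definite illustration):
  library(Matrix)
  A <- Matrix(c(4, 1, 0,
                1, 5, 2,
                0, 2, 6), nrow = 3, sparse = TRUE)  # symmetric positive-definite, stored sparsely
  R <- chol(A)               # sparse Cholesky factor
  x <- solve(A, c(1, 2, 3))  # solve A x = b using the factorization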
16.6 match 1 stars 17.23 score 33k scripts 12k dependents
laplacesdemonr
LaplacesDemon:Complete Environment for Bayesian Inference
Provides a complete environment for Bayesian inference using a variety of different samplers (see ?LaplacesDemon for an overview).
Maintained by Henrik Singmann. Last updated 12 months ago.
17.9 match 93 stars 13.45 score 1.8k scripts 60 dependents
gzt
CholWishart:Cholesky Decomposition of the Wishart Distribution
Sampling from the Cholesky factorization of a Wishart random variable, sampling from the inverse Wishart distribution, sampling from the Cholesky factorization of an inverse Wishart random variable, sampling from the pseudo Wishart distribution, sampling from the generalized inverse Wishart distribution, computing densities for the Wishart and inverse Wishart distributions, and computing the multivariate gamma and digamma functions. Provides a header file so the C functions can be called directly from other programs.
Maintained by Geoffrey Thompson. Last updated 6 months ago.
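Example (a rough sketch of the samplers; the names rCholWishart() and rInvWishart() and their (n, df, Sigma) arguments are assumptions to be checked against the package documentation):
  library(CholWishart)
  S <- diag(3)
  L <- rCholWishart(2, 5, S)  # assumed: 2 draws of Cholesky factors of a Wishart(df = 5, S) variable
  W <- rInvWishart(2, 5, S)   # assumed: 2 draws from the inverse Wishart distribution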
cholesky-decomposition, cholesky-factorization, digamma-functions, gamma, multivariate, pseudo-wishart, wishart, wishart-distributions, openblas
22.4 match 7 stars 7.05 score 41 scripts 13 dependents
mlverse
torch:Tensors and Neural Networks with 'GPU' Acceleration
Provides functionality to define and train neural networks similar to 'PyTorch' by Paszke et al (2019) <doi:10.48550/arXiv.1912.01703> but written entirely in R using the 'libtorch' library. Also supports low-level tensor operations and 'GPU' acceleration.
Maintained by Daniel Falbel. Last updated 6 days ago.
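Example (a minimal sketch; assumes 'torch' is installed and its backend has been downloaded, e.g. via install_torch()):
  library(torch)
  x <- torch_randn(16, 3)   # 16 random observations with 3 features
  layer <- nn_linear(3, 1)  # a single dense layer, as in 'PyTorch'
  y <- layer(x)             # forward pass on the CPU (or GPU if available)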
6.3 match 520 stars 16.52 score 1.4k scripts 38 dependents
faosorios
fastmatrix:Fast Computation of some Matrices Useful in Statistics
Small set of functions for fast computation of some matrices and operations useful in statistics and econometrics. Currently, there are functions for efficient computation of duplication, commutation and symmetrizer matrices with minimal storage requirements. Some commonly used matrix decompositions (LU and LDL), basic matrix operations (for instance, Hadamard and Kronecker products and the Sherman-Morrison formula) and iterative solvers for linear systems are also available. In addition, the package includes a number of common statistical procedures such as the sweep operator, weighted mean and covariance matrix using an online algorithm, linear regression (using Cholesky, QR, SVD, sweep operator and conjugate gradients methods), ridge regression (with optimal selection of the ridge parameter considering several procedures), omnibus tests for univariate normality, functions to compute the multivariate skewness, kurtosis, the Mahalanobis distance (checking positive definiteness), and the Wilson-Hilferty transformation of gamma variables. Furthermore, the package provides interfaces to C code callable from C code in other R packages.
Maintained by Felipe Osorio. Last updated 1 year ago.
commutation-matrix, jarque-bera-test, ldl-factorization, lu-factorization, matrix-api-for-r-packages, matrix-norms, modified-cholesky, ols-regression, power-method, ridge-regression, sherman-morrison, statistics, sweep-operator, symmetrizer-matrix, fortran, openblas
11.8 match 19 stars 6.27 score 37 scripts 10 dependents
rstudio
tfprobability:Interface to 'TensorFlow Probability'
Interface to 'TensorFlow Probability', a 'Python' library built on 'TensorFlow' that makes it easy to combine probabilistic models and deep learning on modern hardware ('TPU', 'GPU'). 'TensorFlow Probability' includes a wide selection of probability distributions and bijectors, probabilistic layers, variational inference, Markov chain Monte Carlo, and optimizers such as Nelder-Mead, BFGS, and SGLD.
Maintained by Tomasz Kalinowski. Last updated 3 years ago.
8.0 match 54 stars 8.63 score 221 scripts 3 dependents
cran
mgcv:Mixed GAM Computation Vehicle with Automatic Smoothness Estimation
Generalized additive (mixed) models, some of their extensions and other generalized ridge regression with multiple smoothing parameter estimation by (Restricted) Marginal Likelihood, Generalized Cross Validation and similar, or using iterated nested Laplace approximation for fully Bayesian inference. See Wood (2017) <doi:10.1201/9781315370279> for an overview. Includes a gam() function, a wide variety of smoothers, 'JAGS' support and distributions beyond the exponential family.
Maintained by Simon Wood. Last updated 1 year ago.
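Example (a minimal smooth fit with gam(); gamSim() generates the package's standard simulated example data):
  library(mgcv)
  dat <- gamSim(1, n = 200, verbose = FALSE)                  # simulated example data
  fit <- gam(y ~ s(x0) + s(x1), data = dat, method = "REML")  # smooths with REML smoothness selection
  summary(fit)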
5.3 match 32 stars 12.71 score 17k scripts 7.8k dependents
tbates
umx:Structural Equation Modeling and Twin Modeling in R
Quickly create, run, and report structural equation models, and twin models. See '?umx' for help, and umx_open_CRAN_page("umx") for NEWS. Timothy C. Bates, Michael C. Neale, Hermine H. Maes, (2019). umx: A library for Structural Equation and Twin Modelling in R. Twin Research and Human Genetics, 22, 27-41. <doi:10.1017/thg.2019.2>.
Maintained by Timothy C. Bates. Last updated 2 days ago.
behavior-genetics, genetics, openmx, psychology, sem, statistics, structural-equation-modeling, tutorials, twin-models, umx
6.7 match 44 stars 9.45 score 472 scripts
friendly
matlib:Matrix Functions for Teaching and Learning Linear Algebra and Multivariate Statistics
A collection of matrix functions for teaching and learning matrix linear algebra as used in multivariate statistical methods. Many of these functions are designed for tutorial purposes in learning matrix algebra ideas using R. In some cases, functions are provided for concepts available elsewhere in R, but where the function call or name is not obvious. In other cases, functions are provided to show or demonstrate an algorithm. In addition, a collection of functions are provided for drawing vector diagrams in 2D and 3D and for rendering matrix expressions and equations in LaTeX.
Maintained by Michael Friendly. Last updated 2 days ago.
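Example (a small sketch of the teaching helpers; the function names showEqn(), echelon() and inv() are taken from the package's documented vocabulary and should be treated as assumptions):
  library(matlib)
  A <- matrix(c(2, 1, 1, 3), 2, 2)
  b <- c(1, 2)
  showEqn(A, b)  # display the system A x = b in equation form
  echelon(A, b)  # reduce the augmented matrix to row-echelon form
  inv(A)         # matrix inverse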
diagrams, linear-equations, matrix, matrix-functions, matrix-visualizer, vector, vignette
4.9 match 65 stars 12.89 score 900 scripts 11 dependents
joeguinness
GpGp:Fast Gaussian Process Computation Using Vecchia's Approximation
Functions for fitting and doing predictions with Gaussian process models using Vecchia's (1988) approximation. Package also includes functions for reordering input locations, finding ordered nearest neighbors (with help from 'FNN' package), grouping operations, and conditional simulations. Covariance functions for spatial and spatial-temporal data on Euclidean domains and spheres are provided. The original approximation is due to Vecchia (1988) <http://www.jstor.org/stable/2345768>, and the reordering and grouping methods are from Guinness (2018) <doi:10.1080/00401706.2018.1437476>. Model fitting employs a Fisher scoring algorithm described in Guinness (2019) <doi:10.48550/arXiv.1905.08374>.
Maintained by Joseph Guinness. Last updated 5 months ago.
9.1 match 10 stars 6.16 score 160 scripts 6 dependents
freezenik
bamlss:Bayesian Additive Models for Location, Scale, and Shape (and Beyond)
Infrastructure for estimating probabilistic distributional regression models in a Bayesian framework. The distribution parameters may capture location, scale, shape, etc. and every parameter may depend on complex additive terms (fixed, random, smooth, spatial, etc.) similar to a generalized additive model. The conceptual and computational framework is introduced in Umlauf, Klein, Zeileis (2019) <doi:10.1080/10618600.2017.1407325> and the R package in Umlauf, Klein, Simon, Zeileis (2021) <doi:10.18637/jss.v100.i04>.
Maintained by Nikolaus Umlauf. Last updated 5 months ago.
8.5 match 1 stars 5.76 score 239 scripts 5 dependents
alexkz
kernlab:Kernel-Based Machine Learning Lab
Kernel-based machine learning methods for classification, regression, clustering, novelty detection, quantile regression and dimensionality reduction. Among other methods 'kernlab' includes Support Vector Machines, Spectral Clustering, Kernel PCA, Gaussian Processes and a QP solver.
Maintained by Alexandros Karatzoglou. Last updated 7 months ago.
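Example (a minimal classification sketch using the built-in iris data):
  library(kernlab)
  fit <- ksvm(Species ~ ., data = iris, kernel = "rbfdot")  # SVM with a Gaussian (RBF) kernel
  predict(fit, head(iris))                                  # predicted classes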
3.9 match 21 stars 12.26 score 7.8k scripts 487 dependents
greta-dev
greta:Simple and Scalable Statistical Modelling in R
Write statistical models in R and fit them by MCMC and optimisation on CPUs and GPUs, using Google 'TensorFlow'. greta lets you write your own model like in BUGS, JAGS and Stan, except that you write models right in R, it scales well to massive datasets, and it’s easy to extend and build on. See the website for more information, including tutorials, examples, package documentation, and the greta forum.
Maintained by Nicholas Tierney. Last updated 6 days ago.
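Example (a minimal regression sketch; assumes a working 'TensorFlow' backend has been set up for greta):
  library(greta)
  x <- rnorm(50)
  y <- 2 * x + rnorm(50)
  beta  <- normal(0, 10)                        # prior on the slope
  sigma <- cauchy(0, 3, truncation = c(0, Inf)) # half-Cauchy prior on the residual sd
  distribution(y) <- normal(beta * x, sigma)    # likelihood
  draws <- mcmc(model(beta, sigma), n_samples = 500)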
3.8 match 566 stars 12.53 score 396 scripts 6 dependents
rudjer
SparseM:Sparse Linear Algebra
Some basic linear algebra functionality for sparse matrices is provided, including Cholesky decomposition and backsolving, as well as standard R subsetting and Kronecker products.
Maintained by Roger Koenker. Last updated 8 months ago.
3.9 match 3 stars 11.47 score 306 scripts 1.5k dependents
reinhardfurrer
spam:SPArse Matrix
Set of functions for sparse matrix algebra. Differences with other sparse matrix packages are: (1) we only support (essentially) one sparse matrix format, (2) it is based on transparent and simple structure(s), (3) it is tailored for MCMC calculations within G(M)RF, and (4) it is fast and scalable (with the extension package spam64). Documentation about 'spam' is provided by vignettes included in this package; see also Furrer and Sain (2010) <doi:10.18637/jss.v036.i10> and 'citation("spam")' for details.
Maintained by Reinhard Furrer. Last updated 2 months ago.
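Example (a minimal sketch of the spam format and its sparse Cholesky; the small matrix is an arbitrary symmetric positive-definite illustration):
  library(spam)
  A <- as.spam(matrix(c(4, 1, 0,
                        1, 5, 2,
                        0, 2, 6), 3, 3))  # coerce a dense matrix to the spam format
  U <- chol(A)                            # sparse Cholesky factorization
  x <- solve(A, rep(1, 3))                # solve A x = b using the factor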
4.8 match 1 stars 9.26 score 420 scripts 433 dependents
nk027
sanic:Solving Ax = b Nimbly in C++
Routines for solving large systems of linear equations and eigenproblems in R. Direct and iterative solvers from the Eigen C++ library are made available. Solvers include Cholesky, LU, QR, and Krylov subspace methods (Conjugate Gradient, BiCGSTAB). Dense and sparse problems are supported.
Maintained by Nikolas Kuschnig. Last updated 2 years ago.
bicgstab, cholesky, conjugate-gradient, eigen, linear-equations, solvers, cpp
10.5 match 9 stars 4.13 score 1 scripts 1 dependents
cran
bdsmatrix:Routines for Block Diagonal Symmetric Matrices
This is a special case of sparse matrices, used by coxme.
Maintained by Terry Therneau. Last updated 1 year ago.
5.4 match 1 stars 5.91 score 202 dependents
jrmccombs
RHPCBenchmark:Benchmarks for High-Performance Computing Environments
Microbenchmarks for determining the run time performance of aspects of the R programming environment and packages relevant to high-performance computation. The benchmarks are divided into three categories: dense matrix linear algebra kernels, sparse matrix linear algebra kernels, and machine learning functionality.
Maintained by James McCombs. Last updated 8 years ago.
10.1 match 3.02 score 21 scripts
stan-dev
posterior:Tools for Working with Posterior Distributions
Provides useful tools for both users and developers of packages for fitting Bayesian models or working with output from Bayesian models. The primary goals of the package are to: (a) Efficiently convert between many different useful formats of draws (samples) from posterior or prior distributions. (b) Provide consistent methods for operations commonly performed on draws, for example, subsetting, binding, or mutating draws. (c) Provide various summaries of draws in convenient formats. (d) Provide lightweight implementations of state of the art posterior inference diagnostics. References: Vehtari et al. (2021) <doi:10.1214/20-BA1221>.
Maintained by Paul-Christian Bürkner. Last updated 11 days ago.
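Example (a minimal sketch using the small draws object shipped with the package):
  library(posterior)
  draws <- example_draws()   # example posterior draws included in the package
  summarise_draws(draws)     # means, quantiles, rhat, and effective sample sizes
  head(as_draws_df(draws))   # convert to a data-frame representation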
1.9 match 168 stars 16.13 score 3.3k scripts 342 dependents
nago2020
depCensoring:Statistical Methods for Survival Data with Dependent Censoring
Several statistical methods for analyzing survival data under various forms of dependent censoring are implemented in the package. In addition to accounting for dependent censoring, it offers tools to adjust for unmeasured confounding factors. The implemented approaches allow users to estimate the dependency between survival time and dependent censoring time, based solely on observed survival data. For more details on the methods, refer to Deresa and Van Keilegom (2021) <doi:10.1093/biomet/asaa095>, Czado and Van Keilegom (2023) <doi:10.1093/biomet/asac067>, Crommen et al. (2024) <doi:10.1007/s11749-023-00903-9>, Deresa and Van Keilegom (2024) <doi:10.1080/01621459.2022.2161387>, Rutten et al. (2024+) <doi:10.48550/arXiv.2403.11860> and Ding and Van Keilegom (2024).
Maintained by Negera Wakgari Deresa. Last updated 11 days ago.
10.3 match 2.78 score 5 scripts
helske
ramcmc:Robust Adaptive Metropolis Algorithm
Function for adapting the shape of the random walk Metropolis proposal as specified by the robust adaptive Metropolis algorithm of Vihola (2012) <doi:10.1007/s11222-011-9269-5>. The package also includes fast functions for rank-one Cholesky update and downdate. These functions can be used directly from R, or the corresponding C++ header files can be easily linked to other R packages.
Maintained by Jouni Helske. Last updated 3 years ago.
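Example (a rough sketch of the rank-one update helper; the name chol_update() and its use of a lower-triangular factor are assumptions to be checked against the package documentation):
  library(ramcmc)
  S <- crossprod(matrix(rnorm(9), 3, 3)) + diag(3)  # a positive-definite matrix
  L <- t(chol(S))                                   # lower-triangular Cholesky factor
  u <- rnorm(3)
  L_up <- chol_update(L, u)  # assumed: factor of S + u %*% t(u) without refactorizing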
4.1 match 6 stars 6.21 score 8 scripts 12 dependents
rstudio
keras3:R Interface to 'Keras'
Interface to 'Keras' <https://keras.io>, a high-level neural networks API. 'Keras' was developed with a focus on enabling fast experimentation, supports both convolution based networks and recurrent networks (as well as combinations of the two), and runs seamlessly on both CPU and GPU devices.
Maintained by Tomasz Kalinowski. Last updated 4 hours ago.
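Example (a minimal model definition sketch; assumes the Python 'Keras' backend has been installed, e.g. via install_keras()):
  library(keras3)
  model <- keras_model_sequential(input_shape = 4) |>
    layer_dense(units = 8, activation = "relu") |>
    layer_dense(units = 3, activation = "softmax")
  model |> compile(optimizer = "adam", loss = "categorical_crossentropy")
  summary(model)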
1.7 match 845 stars 13.60 score 264 scripts 2 dependents
nimble-dev
nimble:MCMC, Particle Filtering, and Programmable Hierarchical Modeling
A system for writing hierarchical statistical models largely compatible with 'BUGS' and 'JAGS', writing nimbleFunctions to operate models and do basic R-style math, and compiling both models and nimbleFunctions via custom-generated C++. 'NIMBLE' includes default methods for MCMC, Laplace Approximation, Monte Carlo Expectation Maximization, and some other tools. The nimbleFunction system makes it easy to do things like implement new MCMC samplers from R, customize the assignment of samplers to different parts of a model from R, and compile the new samplers automatically via C++ alongside the samplers 'NIMBLE' provides. 'NIMBLE' extends the 'BUGS'/'JAGS' language by making it extensible: New distributions and functions can be added, including as calls to external compiled code. Although most people think of MCMC as the main goal of the 'BUGS'/'JAGS' language for writing models, one can use 'NIMBLE' for writing arbitrary other kinds of model-generic algorithms as well. A full User Manual is available at <https://r-nimble.org>.
Maintained by Christopher Paciorek. Last updated 5 days ago.
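Example (a minimal BUGS-style sketch; model compilation assumes a working C++ toolchain):
  library(nimble)
  code <- nimbleCode({
    mu ~ dnorm(0, sd = 10)
    for (i in 1:N) y[i] ~ dnorm(mu, sd = 1)
  })
  samples <- nimbleMCMC(code, constants = list(N = 20),
                        data = list(y = rnorm(20, 1)), inits = list(mu = 0),
                        niter = 2000, nburnin = 500)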
bayesian-inference, bayesian-methods, hierarchical-models, mcmc, probabilistic-programming, openblas, cpp
1.8 match 169 stars 12.97 score 2.6k scripts 19 dependents
nashjc
optimx:Expanded Replacement and Extension of the 'optim' Function
Provides a replacement and extension of the optim() function to call several function minimization codes in R in a single statement. These methods handle smooth, possibly box constrained functions of several or many parameters. Note that function 'optimr()' was prepared to simplify the incorporation of minimization codes going forward. Also implements some utility codes and some extra solvers, including safeguarded Newton methods. Many methods previously separate are now included here. This is the version for CRAN.
Maintained by John C Nash. Last updated 2 months ago.
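Example (a minimal sketch of optimr() on the Rosenbrock test function):
  library(optimx)
  rosen <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2  # Rosenbrock function
  res <- optimr(par = c(-1.2, 1), fn = rosen, method = "BFGS")
  res$par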
1.7 match 2 stars 12.87 score 1.8k scripts 89 dependents
sachaepskamp
psychonetrics:Structural Equation Modeling and Confirmatory Network Analysis
Multi-group (dynamical) structural equation models in combination with confirmatory network models from cross-sectional, time-series and panel data <doi:10.31234/osf.io/8ha93>. Allows for confirmatory testing and fit as well as exploratory model search.
Maintained by Sacha Epskamp. Last updated 13 days ago.
3.0 match 51 stars 6.82 score 41 scripts 1 dependents
dylanb95
statespacer:State Space Modelling in 'R'
A tool that makes estimating models in state space form a breeze. See "Time Series Analysis by State Space Methods" by Durbin and Koopman (2012, ISBN: 978-0-19-964117-8) for details about the algorithms implemented.
Maintained by Dylan Beijers. Last updated 2 years ago.
cpp, dynamic-linear-model, forecasting, gaussian-models, kalman-filter, mathematical-modelling, state-space, statistical-inference, statistical-models, structural-analysis, time-series, openblas, cpp, openmp
3.0 match 15 stars 6.14 score 37 scripts
marcinjurek
GPvecchia:Scalable Gaussian-Process Approximations
Fast scalable Gaussian process approximations, particularly well suited to spatial (aerial, remote-sensed) and environmental data, described in more detail in Katzfuss and Guinness (2017) <arXiv:1708.06302>. Package also contains a fast implementation of the incomplete Cholesky decomposition (IC0), based on Schaefer et al. (2019) <arXiv:1706.02205> and MaxMin ordering proposed in Guinness (2018) <arXiv:1609.05372>.
Maintained by Marcin Jurek. Last updated 1 year ago.
4.0 match 4.26 score 61 scripts 2 dependents
nlmixr2
nlmixr2est:Nonlinear Mixed Effects Models in Population PK/PD, Estimation Routines
Fit and compare nonlinear mixed-effects models in differential equations with flexible dosing information commonly seen in pharmacokinetics and pharmacodynamics (Almquist, Leander, and Jirstrand 2015 <doi:10.1007/s10928-015-9409-1>). Differential equation solving is by compiled C code provided in the 'rxode2' package (Wang, Hallow, and James 2015 <doi:10.1002/psp4.12052>).
Maintained by Matthew Fidler. Last updated 26 days ago.
1.9 match 9 stars 8.26 score 26 scripts 9 dependents
lbelzile
TruncatedNormal:Truncated Multivariate Normal and Student Distributions
A collection of functions to deal with the truncated univariate and multivariate normal and Student distributions, described in Botev (2017) <doi:10.1111/rssb.12162> and Botev and L'Ecuyer (2015) <doi:10.1109/WSC.2015.7408180>.
Maintained by Leo Belzile. Last updated 17 days ago.
gaussian, student-distributions, truncated, openblas, cpp, openmp
1.8 match 8 stars 8.38 score 116 scripts 18 dependents
biometris
LMMsolver:Linear Mixed Model Solver
An efficient and flexible system to solve sparse mixed model equations. Important applications are the use of splines to model spatial or temporal trends as described in Boer (2023) <doi:10.1177/1471082X231178591>.
Maintained by Bart-Jan van Rossum. Last updated 2 months ago.
1.7 match 11 stars 8.14 score 66 scripts 3 dependents
fbertran
bigalgebra:'BLAS' and 'LAPACK' Routines for Native R Matrices and 'big.matrix' Objects
Provides arithmetic functions for R matrix and 'big.matrix' objects as well as functions for QR factorization, Cholesky factorization, general eigenvalue decomposition, and singular value decomposition (SVD). A matrix multiplication method and arithmetic methods (matrix addition, matrix difference) allow mixed-type operations (a matrix class object with a big.matrix class object) and pure-type operations on two big.matrix class objects.
Maintained by Frederic Bertrand. Last updated 6 months ago.
2.8 match 4 stars 4.81 score 54 scripts 2 dependents
alexanderlange53
svars:Data-Driven Identification of SVAR Models
Implements data-driven identification methods for structural vector autoregressive (SVAR) models as described in Lange et al. (2021) <doi:10.18637/jss.v097.i05>. Based on an existing VAR model object (provided by e.g. VAR() from the 'vars' package), the structural impact matrix is obtained via data-driven identification techniques (i.e. changes in volatility (Rigobon, R. (2003) <doi:10.1162/003465303772815727>), patterns of GARCH (Normadin, M., Phaneuf, L. (2004) <doi:10.1016/j.jmoneco.2003.11.002>), independent component analysis (Matteson, D. S, Tsay, R. S., (2013) <doi:10.1080/01621459.2016.1150851>), least dependent innovations (Herwartz, H., Ploedt, M., (2016) <doi:10.1016/j.jimonfin.2015.11.001>), smooth transition in variances (Luetkepohl, H., Netsunajev, A. (2017) <doi:10.1016/j.jedc.2017.09.001>) or non-Gaussian maximum likelihood (Lanne, M., Meitz, M., Saikkonen, P. (2017) <doi:10.1016/j.jeconom.2016.06.002>)).
Maintained by Alexander Lange. Last updated 2 years ago.
1.7 match 46 stars 7.22 score 130 scripts
stla
EigenR:Complex Matrix Algebra with 'Eigen'
Matrix algebra using the 'Eigen' C++ library: determinant, rank, inverse, pseudo-inverse, kernel and image, QR decomposition, Cholesky decomposition, Schur decomposition, Hessenberg decomposition, linear least-squares problems. Also provides matrix functions such as exponential, logarithm, power, sine and cosine. Complex matrices are supported.
Maintained by Stéphane Laurent. Last updated 11 months ago.
2.5 match 5 stars 4.78 score 27 scripts 1 dependents
cysouw
qlcMatrix:Utility Sparse Matrix Functions for Quantitative Language Comparison
Extension of the functionality of the 'Matrix' package for using sparse matrices. Some of the functions are very general, while others are highly specific to the special data formats used for quantitative language comparison.
Maintained by Michael Cysouw. Last updated 9 months ago.
1.7 match 6 stars 6.98 score 256 scripts 1 dependents
mandymejia
templateICAr:Estimate Brain Networks and Connectivity with ICA and Empirical Priors
Implements the template ICA (independent components analysis) model proposed in Mejia et al. (2020) <doi:10.1080/01621459.2019.1679638> and the spatial template ICA model proposed in Mejia et al. (2022) <doi:10.1080/10618600.2022.2104289>. Both models estimate subject-level brain networks as deviations from known population-level networks, which are estimated using standard ICA algorithms. Both models employ an expectation-maximization algorithm for estimation of the latent brain networks and unknown model parameters. Includes direct support for 'CIFTI', 'GIFTI', and 'NIFTI' neuroimaging file formats.
Maintained by Amanda Mejia. Last updated 3 days ago.
1.9 match 10 stars 6.35 score 25 scripts
cran
tensorA:Advanced Tensor Arithmetic with Named Indices
Provides convenience functions for advanced linear algebra with tensors and computation with data sets of tensors on a higher level abstraction. It includes Einstein and Riemann summing conventions, dragging, co- and contravariate indices, parallel computations on sequences of tensors.
Maintained by K. Gerald van den Boogaart. Last updated 1 year ago.
2.0 match 5.83 score 399 dependents
pachadotdev
cpp11armadillo:An 'Armadillo' Interface
Provides function declarations and inline function definitions that facilitate communication between R and the 'Armadillo' 'C++' library for linear algebra and scientific computing. This implementation is detailed in Vargas Sepulveda and Schneider Malamud (2024) <doi:10.48550/arXiv.2408.11074>.
Maintained by Mauricio Vargas Sepulveda. Last updated 26 days ago.
armadillo, cpp, cpp11, hacktoberfest, linear-algebra
1.3 match 9 stars 9.14 score 1 scripts 16 dependents
cran
MultiPhen:A Package to Test for Multi-Trait Association
Performs genetic association tests between SNPs (one-at-a-time) and multiple phenotypes (separately or in a joint model).
Maintained by Lachlan Coin. Last updated 5 years ago.
3.0 match 3.70 score
ygeunkim
bvhar:Bayesian Vector Heterogeneous Autoregressive Modeling
Tools to model and forecast multivariate time series including Bayesian Vector heterogeneous autoregressive (VHAR) model by Kim & Baek (2023) (<doi:10.1080/00949655.2023.2281644>). 'bvhar' can model Vector Autoregressive (VAR), VHAR, Bayesian VAR (BVAR), and Bayesian VHAR (BVHAR) models.
Maintained by Young Geun Kim. Last updated 17 days ago.
bayesian, bayesian-econometrics, bvar, eigen, forecasting, har, pybind11, python, rcppeigen, time-series, vector-autoregression, cpp, openmp
1.7 match 6 stars 6.42 score 25 scripts
trn000
norMmix:Direct MLE for Multivariate Normal Mixture Distributions
Multivariate Normal (i.e. Gaussian) Mixture Models (S3) Classes. Fitting models to data using 'MLE' (maximum likelihood estimation) for multivariate normal mixtures via smart parametrization using the 'LDL' (Cholesky) decomposition, see McLachlan and Peel (2000, ISBN:9780471006268), Celeux and Govaert (1995) <doi:10.1016/0031-3203(94)00125-6>.
Maintained by Nicolas Trutmann. Last updated 6 months ago.
gaussian-mixture-models, maximum-likelihood-estimation, r-language
2.5 match 4.18 score 3 scripts
loelschlaeger
oeli:Utilities for Developing Data Science Software
Some general helper functions that I (and maybe others) find useful when developing data science software.
Maintained by Lennart Oelschläger. Last updated 4 months ago.
1.9 match 2 stars 5.42 score 1 scripts 4 dependents
mlysy
SuperGauss:Superfast Likelihood Inference for Stationary Gaussian Time Series
Likelihood evaluations for stationary Gaussian time series are typically obtained via the Durbin-Levinson algorithm, which scales as O(n^2) in the number of time series observations. This package provides a "superfast" O(n log^2 n) algorithm written in C++, crossing over with Durbin-Levinson around n = 300. Efficient implementations of the score and Hessian functions are also provided, leading to superfast versions of inference algorithms such as Newton-Raphson and Hamiltonian Monte Carlo. The C++ code provides a Toeplitz matrix class packaged as a header-only library, to simplify low-level usage in other packages and outside of R.
Maintained by Martin Lysy. Last updated 1 month ago.
1.8 match 2 stars 5.60 score 33 scripts 2 dependents
morrowcj
remotePARTS:Spatiotemporal Autoregression Analyses for Large Data Sets
These tools were created to test map-scale hypotheses about trends in large remotely sensed data sets but any data with spatial and temporal variation can be analyzed. Tests are conducted using the PARTS method for analyzing spatially autocorrelated time series (Ives et al., 2021: <doi:10.1016/j.rse.2021.112678>). The method's unique approach can handle extremely large data sets that other spatiotemporal models cannot, while still appropriately accounting for spatial and temporal autocorrelation. This is done by partitioning the data into smaller chunks, analyzing chunks separately and then combining the separate analyses into a single, correlated test of the map-scale hypotheses.
Maintained by Clay Morrow. Last updated 2 years ago.
autocorrelation, big-data, remote-sensing-in-r, statistical-analysis, cpp, openmp
1.9 match 22 stars 5.25 score 16 scripts
span-18
spStack:Bayesian Geostatistics Using Predictive Stacking
Fits Bayesian hierarchical spatial process models for point-referenced Gaussian, Poisson, binomial, and binary data using stacking of predictive densities. It involves sampling from analytically available posterior distributions conditional upon some candidate values of the spatial process parameters and subsequently assimilating inference from these individual posterior distributions using Bayesian predictive stacking. Our algorithm is highly parallelizable and hence much faster than traditional Markov chain Monte Carlo algorithms while delivering competitive predictive performance. See Zhang, Tang, and Banerjee (2024) <doi:10.48550/arXiv.2304.12414>, and Pan, Zhang, Bradley, and Banerjee (2024) <doi:10.48550/arXiv.2406.04655> for details.
Maintained by Soumyakanti Pan. Last updated 11 days ago.
1.9 match 4.95 score 6 scripts
andrewzm
sparseinv:Computation of the Sparse Inverse Subset
Creates a wrapper for the 'SuiteSparse' routines that execute the Takahashi equations. These equations compute the elements of the inverse of a sparse matrix at locations where its Cholesky factor is structurally non-zero. The resulting matrix is known as a sparse inverse subset. Some helper functions are also implemented. Support for spam matrices is currently limited and will be implemented in the future. See Rue and Martino (2007) <doi:10.1016/j.jspi.2006.07.016> and Zammit-Mangion and Rougier (2018) <doi:10.1016/j.csda.2018.02.001> for the application of these equations to statistics.
Maintained by Andrew Zammit-Mangion. Last updated 7 years ago.
2.3 match 4.10 score 14 scripts 6 dependents
cran
FastBandChol:Fast Estimation of a Covariance Matrix by Banding the Cholesky Factor
Fast and numerically stable estimation of a covariance matrix by banding the Cholesky factor using a modified Gram-Schmidt algorithm implemented in RcppArmadillo. See <http://stat.umn.edu/~molst029> for details on the algorithm.
Maintained by Aaron Molstad. Last updated 10 years ago.
8.6 match 1.00 score
mhunter1
EasyMx:Easy Model-Builder Functions for 'OpenMx'
Utilities for building certain kinds of common matrices and models in the extended structural equation modeling package, 'OpenMx'.
Maintained by Michael D. Hunter. Last updated 2 years ago.
3.6 match 2.32 score 21 scripts
r-forge
mvtnorm:Multivariate Normal and t Distributions
Computes multivariate normal and t probabilities, quantiles, random deviates, and densities. Log-likelihoods for multivariate Gaussian models and Gaussian copulae parameterised by Cholesky factors of covariance or precision matrices are implemented for interval-censored and exact data, or a mix thereof. Score functions for these log-likelihoods are available. A class representing multiple lower triangular matrices and corresponding methods are part of this package.
Maintained by Torsten Hothorn. Last updated 18 days ago.
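Example (a minimal sketch of the probability and random-deviate interface):
  library(mvtnorm)
  S <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
  pmvnorm(lower = c(-Inf, -Inf), upper = c(0, 0), sigma = S)  # orthant probability
  rmvnorm(3, mean = c(0, 0), sigma = S)                       # random deviates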
0.5 match 15.84 score 13k scripts 2.6k dependents
cran
VCA:Variance Component Analysis
ANOVA and REML estimation of linear mixed models is implemented, once following Searle et al. (1991, ANOVA for unbalanced data), once making use of the 'lme4' package. The primary objective of this package is to perform a variance component analysis (VCA) according to CLSI EP05-A3 guideline "Evaluation of Precision of Quantitative Measurement Procedures" (2014). There are plotting methods for visualization of an experimental design, plotting random effects and residuals. For ANOVA type estimation two methods for computing ANOVA mean squares are implemented (SWEEP and quadratic forms). The covariance matrix of variance components can be derived, which is used in estimating confidence intervals. Linear hypotheses of fixed effects and LS means can be computed. LS means can be computed at specific values of covariables and with custom weighting schemes for factor variables. See ?VCA for a more comprehensive description of the features.
Maintained by Andre Schuetzenmeister. Last updated 1 year ago.
1.7 match 2 stars 4.51 score 5 dependents
stla
matrixsampling:Simulations of Matrix Variate Distributions
Provides samplers for various matrix variate distributions: Wishart, inverse-Wishart, normal, t, inverted-t, Beta type I, Beta type II, Gamma, confluent hypergeometric. Allows one to simulate the noncentral Wishart distribution without the integer restriction on the degrees of freedom.
Maintained by Stéphane Laurent. Last updated 6 years ago.
1.8 match 3 stars 4.22 score 37 scripts 1 dependents
reinhardfurrer
spam64:64-Bit Extension of the SPArse Matrix R Package 'spam'
Provides the Fortran code of the R package 'spam' with 64-bit integers. Loading this package together with the R package spam enables the sparse matrix class spam to handle huge sparse matrices with more than 2^31-1 non-zero elements. Documentation is provided in Gerber, Moesinger and Furrer (2017) <doi:10.1016/j.cageo.2016.11.015>.
Maintained by Reinhard Furrer. Last updated 1 year ago.
2.9 match 2.58 score 25 scripts 3 dependents
boennecd
mmcif:Mixed Multivariate Cumulative Incidence Functions
Fits the mixed cumulative incidence functions model suggested by <doi:10.1093/biostatistics/kxx072> which decomposes within cluster dependence of risk and timing. The estimation method supports computation in parallel using a shared memory C++ implementation. A sandwich estimator of the covariance matrix is available. Natural cubic splines are used to provide a flexible model for the cumulative incidence functions.
Maintained by Benjamin Christoffersen. Last updated 2 years ago.
competing-risk, composite-likelihood, mixed-models, sandwich-estimator, survival-analysis, fortran, openblas, cpp, openmp
1.8 match 4.00 score 10 scripts
topepo
sparsediscrim:Sparse and Regularized Discriminant Analysis
A collection of sparse and regularized discriminant analysis methods intended for small-sample, high-dimensional data sets. The package features the High-Dimensional Regularized Discriminant Analysis classifier from Ramey et al. (2017) <arXiv:1602.01182>. Other classifiers include those from Dudoit et al. (2002) <doi:10.1198/016214502753479248>, Pang et al. (2009) <doi:10.1111/j.1541-0420.2009.01200.x>, and Tong et al. (2012) <doi:10.1093/bioinformatics/btr690>.
Maintained by Max Kuhn. Last updated 4 years ago.
1.7 match 3 stars 4.11 score 86 scripts
matthewwolak
gremlin:Mixed-Effects REML Incorporating Generalized Inverses
Fit linear mixed-effects models using restricted (or residual) maximum likelihood (REML) and with generalized inverse matrices to specify covariance structures for random effects. In particular, the package is suited to fit quantitative genetic mixed models, often referred to as 'animal models'. Implements the average information algorithm as the main tool to maximize the restricted log-likelihood, but with other algorithms available.
Maintained by Matthew Wolak. Last updated 4 months ago.
average-information, linear-mixed-models, maximum-likelihood, mixed-models, cpp
1.8 match 5 stars 3.85 score 14 scripts
alexcannon
MBC:Multivariate Bias Correction of Climate Model Outputs
Calibrate and apply multivariate bias correction algorithms for climate model simulations of multiple climate variables. Three methods described by Cannon (2016) <doi:10.1175/JCLI-D-15-0679.1> and Cannon (2018) <doi:10.1007/s00382-017-3580-6> are implemented — (i) MBC Pearson correlation (MBCp), (ii) MBC rank correlation (MBCr), and (iii) MBC N-dimensional PDF transform (MBCn) — as is the Rank Resampling for Distributions and Dependences (R2D2) method.
Maintained by Alex J. Cannon. Last updated 4 months ago.
1.8 match 6 stars 3.76 score 16 scripts 1 dependents
jcatwood
VeccTMVN:Multivariate Normal Probabilities using Vecchia Approximation
Under a different representation of the multivariate normal (MVN) probability, we can use the Vecchia approximation to sample the integrand at a linear complexity with respect to n. Additionally, both the SOV algorithm from Genz (1992) and the exponential-tilting method from Botev (2017) can be adapted to linear complexity. The reference for the method implemented in this package is Jian Cao and Matthias Katzfuss (2024) "Linear-Cost Vecchia Approximation of Multivariate Normal Probabilities" <doi:10.48550/arXiv.2311.09426>. Two major references for the development of our method are Alan Genz (1992) "Numerical Computation of Multivariate Normal Probabilities" <doi:10.1080/10618600.1992.10477010> and Z. I. Botev (2017) "The Normal Law Under Linear Restrictions: Simulation and Estimation via Minimax Tilting" <doi:10.48550/arXiv.1603.04166>.
Maintained by Jian Cao. Last updated 4 months ago.
normal-distribution, sampling-methods, statistics, fortran, openblas, cpp, openmp
1.7 match 2 stars 3.56 score 36 scripts
adamjrothman
PDSCE:Positive Definite Sparse Covariance Estimators
Compute and tune some positive definite and sparse covariance estimators.
Maintained by Adam J. Rothman. Last updated 3 years ago.
3.4 match 1 stars 1.62 score 14 scripts 1 dependents
paciorek
bigGP:Distributed Gaussian Process Calculations
Distributes Gaussian process calculations across nodes in a distributed memory setting, using Rmpi. The bigGP class provides high-level methods for maximum likelihood with normal data, prediction, calculation of uncertainty (i.e., posterior covariance calculations), and simulation of realizations. In addition, bigGP provides an API for basic matrix calculations with distributed covariance matrices, including Cholesky decomposition, back/forwardsolve, crossproduct, and matrix multiplication.
Maintained by Christopher Paciorek. Last updated 2 years ago.
2.4 match 2.02 score 21 scripts
jmleach-bst
sim2Dpredictr:Simulate Outcomes Using Spatially Dependent Design Matrices
Provides tools for simulating spatially dependent predictors (continuous or binary), which are used to generate scalar outcomes in a (generalized) linear model framework. Continuous predictors are generated using traditional multivariate normal distributions or Gauss Markov random fields with several correlation function approaches (e.g., see Rue (2001) <doi:10.1111/1467-9868.00288> and Furrer and Sain (2010) <doi:10.18637/jss.v036.i10>), while binary predictors are generated using a Boolean model (see Cressie and Wikle (2011, ISBN: 978-0-471-69274-4)). Parameter vectors exhibiting spatial clustering can also be easily specified by the user.
Maintained by Justin Leach. Last updated 1 year ago.
1.8 match 2.70 score 2 scripts
hjboonstra
mcmcsae:Markov Chain Monte Carlo Small Area Estimation
Fit multi-level models with possibly correlated random effects using Markov Chain Monte Carlo simulation. Such models allow smoothing over space and time and are useful in, for example, small area estimation.
Maintained by Harm Jan Boonstra. Last updated 3 months ago.
1.9 match 2.48 score 8 scripts
cran
cPCG:Efficient and Customized Preconditioned Conjugate Gradient Method for Solving System of Linear Equations
Solves system of linear equations using (preconditioned) conjugate gradient algorithm, with improved efficiency using Armadillo templated 'C++' linear algebra library, and flexibility for user-specified preconditioning method. Please check <https://github.com/styvon/cPCG> for latest updates.
Maintained by Yongwen Zhuang. Last updated 6 years ago.
2.0 match 2.28 score 19 scripts
antongagin
BBEST:Bayesian Estimation of Incoherent Neutron Scattering Backgrounds
We implemented a Bayesian-statistics approach for subtraction of incoherent scattering from neutron total-scattering data. In this approach, the estimated background signal associated with incoherent scattering maximizes the posterior probability, which combines the likelihood of this signal in reciprocal and real spaces with the prior that favors smooth lines. A description of the corresponding approach can be found in Gagin and Levin (2014) <DOI:10.1107/S1600576714023796>.
Maintained by Anton Gagin. Last updated 4 years ago.
2.3 match 2.00 score 4 scripts
cran
optR:Optimization Toolbox for Solving Linear Systems
Solves linear systems of the form Ax=b via Gauss elimination, LU decomposition, Gauss-Seidel, the Conjugate Gradient Method (CGM) and Cholesky methods.
Maintained by Prakash. Last updated 8 years ago.
4.2 match 1 stars 1.00 score
functionaldata
frechet:Statistical Analysis for Random Objects and Non-Euclidean Data
Provides implementation of statistical methods for random objects lying in various metric spaces, which are not necessarily linear spaces. The core of this package is Fréchet regression for random objects with Euclidean predictors, which allows one to perform regression analysis for non-Euclidean responses under some mild conditions. Examples include distributions in L^2-Wasserstein space, covariance matrices endowed with power metric (with Frobenius metric as a special case), Cholesky and log-Cholesky metrics. References: Petersen, A., & Müller, H.-G. (2019) <doi:10.1214/17-AOS1624>.
Maintained by Yaqing Chen. Last updated 6 months ago.
0.8 match 11 stars 4.82 score 20 scripts
maurobernardi
fastQR:Fast QR Decomposition and Update
Efficient algorithms for performing, updating, and downdating the QR decomposition, R decomposition, or the inverse of the R decomposition of a matrix as rows or columns are added or removed. It also includes functions for solving linear systems of equations, normal equations for linear regression models, and normal equations for linear regression with a RIDGE penalty. For a detailed introduction to these methods, see Golub and Van Loan (2013, <doi:10.1007/978-3-319-05089-8>).
Maintained by Mauro Bernardi. Last updated 1 month ago.
3.6 match 1.00 score
david-cortes
cmfrec:Collective Matrix Factorization for Recommender Systems
Collective matrix factorization (a.k.a. multi-view or multi-way factorization, Singh, Gordon, (2008) <doi:10.1145/1401890.1401969>) tries to approximate a (potentially very sparse or having many missing values) matrix 'X' as the product of two low-dimensional matrices, optionally aided with secondary information matrices about rows and/or columns of 'X', which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing value imputation. Implements extensions of the original model (Cortes, (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted 'implicit-feedback' model (Hu, Koren, Volinsky, (2008) <doi:10.1109/ICDM.2008.22>), the 'weighted-lambda-regularization' model, (Zhou, Wilkinson, Schreiber, Pan, (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with 'implicit features' (Rendle, Zhang, Koren, (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating-least squares procedures (Koren, Bell, Volinsky, (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate gradient solver (Takacs, Pilaszy, Tikk, (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate descent solver (Franc, Hlavac, Navara, (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
Maintained by David Cortes. Last updated 2 months ago.
cold-start, collaborative-filtering, collective-matrix-factorization, openblas, openmp
0.5 match 120 stars 6.84 score 23 scripts
papgeo
BNSP:Bayesian Non- And Semi-Parametric Model Fitting
MCMC algorithms & processing functions for: 1. single response multiple regression, see Papageorgiou, G. (2018) <doi: 10.32614/RJ-2018-069>, 2. multivariate response multiple regression, with nonparametric models for the means, the variances and the correlation matrix, with variable selection, see Papageorgiou, G. and Marshall, B. C. (2020) <doi: 10.1080/10618600.2020.1739534>, 3. joint mean-covariance models for multivariate responses, see Papageorgiou, G. (2022) <doi: 10.1002/sim.9376>, and 4. Dirichlet process mixtures, see Papageorgiou, G. (2019) <doi: 10.1111/anzs.12273>.
Maintained by Georgios Papageorgiou. Last updated 2 years ago.
3.4 match 1 stars 1.00 score
bstpourcain
grmsem:Genetic-Relationship-Matrix Structural Equation Modelling (GRMSEM)
Quantitative genetics tool supporting the modelling of multivariate genetic variance structures in quantitative data. It allows fitting different models through multivariate genetic-relationship-matrix (GRM) structural equation modelling (SEM) in unrelated individuals, using a maximum likelihood approach. Specifically, it combines genome-wide genotyping information, as captured by GRMs, with twin-research-based SEM techniques, St Pourcain et al. (2017) <doi:10.1016/j.biopsych.2017.09.020>, Shapland et al. (2020) <doi:10.1101/2020.08.14.251199>.
Maintained by Beate StPourcain. Last updated 4 years ago.
1.5 match 2.00 score 1 scripts
skranz
mlogitExtras:Some extras for the mlogit package
Some extras for the mlogit package. I started this package because I could not make the predict function in mlogit work robustly without errors for mixed logit models. The function ml_predict can be used as a replacement (but it still only works for a subset of random parameter specifications).
Maintained by Sebastian Kranz. Last updated 2 years ago.
1.7 match 1.70 score
jfrench
gear:Geostatistical Analysis in R
Implements common geostatistical methods in a clean, straightforward, efficient manner. The methods are discussed in Schabenberger and Gotway (2004, <ISBN:9781584883227>) and Waller and Gotway (2004, <ISBN:9780471387718>).
Maintained by Joshua French. Last updated 5 years ago.
1.9 match 1.43 score 27 scripts
stla
RationalMatrix:Exact Matrix Algebra for Rational Matrices
Provides functions to deal with matrix algebra for matrices with rational entries: determinant, rank, image and kernel, inverse, Cholesky decomposition. All computations are exact.
Maintained by Stéphane Laurent. Last updated 2 years ago.
0.5 match 4.03 score 12 scripts 6 dependents
cran
gek:Gradient-Enhanced Kriging
Gradient-Enhanced Kriging as an emulator for computer experiments based on Maximum-Likelihood estimation.
Maintained by Carmen van Meegen. Last updated 3 days ago.
2.0 match 1.00 score
cran
assist:A Suite of R Functions Implementing Spline Smoothing Techniques
Fit various smoothing spline models. Includes an ssr() function for smoothing spline regression, an nnr() function for nonparametric nonlinear regression, an snr() function for semiparametric nonlinear regression, an slm() function for semiparametric linear mixed-effects models, and an snm() function for semiparametric nonlinear mixed-effects models. See Wang (2011) <doi:10.1201/b10954> for an overview.
Maintained by Yuedong Wang. Last updated 2 years ago.
2.0 match 1.00 score
nmmarquez
ar.matrix:Simulate Auto Regressive Data from Precision Matrices
Uses sparse precision matrices and Cholesky factorization to simulate auto-regressive data.
Maintained by Neal Marquez. Last updated 6 years ago.
0.6 match 5 stars 3.60 score 16 scripts
cran
tmvnsim:Truncated Multivariate Normal Simulation
Importance sampling from the truncated multivariate normal using the GHK (Geweke-Hajivassiliou-Keane) simulator. Unlike Gibbs sampling which can get stuck in one truncation sub-region depending on initial values, this package allows truncation based on disjoint regions that are created by truncation of absolute values. The GHK algorithm uses simple Cholesky transformation followed by recursive simulation of univariate truncated normals hence there are also no convergence issues. Importance sample is returned along with sampling weights, based on which, one can calculate integrals over truncated regions for multivariate normals.
Maintained by Samsiddhi Bhattacharjee. Last updated 8 years ago.
0.5 match 2.79 score 4 dependents
cran
whitening:Whitening and High-Dimensional Canonical Correlation Analysis
Implements the whitening methods (ZCA, PCA, Cholesky, ZCA-cor, and PCA-cor) discussed in Kessy, Lewin, and Strimmer (2018) "Optimal whitening and decorrelation", <doi:10.1080/00031305.2016.1277159>, as well as the whitening approach to canonical correlation analysis allowing negative canonical correlations described in Jendoubi and Strimmer (2019) "A whitening approach to probabilistic canonical correlation analysis for omics data integration", <doi:10.1186/s12859-018-2572-9>. The package also offers functions to simulate random orthogonal matrices, compute (correlation) loadings and explained variation. It also contains four example data sets (extended UCI wine data, TCGA LUSC data, nutrimouse data, extended pitprops data).
Maintained by Korbinian Strimmer. Last updated 3 years ago.
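Example (a brief sketch; the whiten() name and its method argument naming the transforms listed above are assumptions to be checked against the package documentation):
  library(whitening)
  X <- matrix(rnorm(200), 50, 4)
  Z <- whiten(X, method = "ZCA-cor")  # assumed: returns the whitened data matrix
  cov(Z)                              # approximately the identity matrix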
0.5 match 2.59 score 65 scripts 2 dependents
mneal4
CDGHMM:Hidden Markov Models for Multivariate Panel Data
Estimates hidden Markov models from the family of Cholesky-decomposed Gaussian hidden Markov models (CDGHMM) under various missingness schemes. This family improves upon estimation of traditional Gaussian HMMs by introducing parsimony, as well as controlling for dropped-out observations and non-random missingness. See Neal, Sochaniwsky and McNicholas (2024) <DOI:10.1007/s11222-024-10462-0>.
Maintained by Mackenzie R. Neal. Last updated 5 months ago.
0.5 match 1.30 score
cran
randcorr:Generate a Random p x p Correlation Matrix
Implements the algorithm by Pourahmadi and Wang (2015) <doi:10.1016/j.spl.2015.06.015> for generating a random p x p correlation matrix. Briefly, the idea is to represent the correlation matrix using Cholesky factorization and p(p-1)/2 hyperspherical coordinates (i.e., angles), sample the angles from a particular distribution and then convert to the standard correlation matrix form. The angles are sampled from a distribution with pdf proportional to sin^k(theta) (0 < theta < pi, k >= 1) using the efficient sampling algorithm described in Enes Makalic and Daniel F. Schmidt (2018) <arXiv:1809.05212>.
Maintained by Daniel F. Schmidt. Last updated 6 years ago.
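Example (a one-line sketch; the exported name randcorr(p) is an assumption to be checked against the package documentation):
  library(randcorr)
  R <- randcorr(5)  # assumed: one random 5 x 5 correlation matrix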
0.5 match 1 stars 1.26 score 18 scripts
cran
qape:Quantile of Absolute Prediction Errors
Estimates the QAPE using bootstrap procedures. The residual, parametric and double bootstrap procedures are used. A test of normality using the Cholesky decomposition is added. Y pop is defined.
Maintained by Alicja Wolny-Dominiak. Last updated 2 years ago.
0.5 match 1.00 score
cran
longclust:Model-Based Clustering and Classification for Longitudinal Data
Clustering or classification of longitudinal data based on a mixture of multivariate t or Gaussian distributions with a Cholesky-decomposed covariance structure. Details in McNicholas and Murphy (2010) <doi:10.1002/cjs.10047> and McNicholas and Subedi (2012) <doi:10.1016/j.jspi.2011.11.026>.
Maintained by Paul D. McNicholas. Last updated 1 year ago.
0.5 match 1 stars 1.00 score