Showing 200 of 1,148 total results

raphaelhartmann

WienR:Derivatives of the First-Passage Time Density and Cumulative Distribution Function, and Random Sampling from the (Truncated) First-Passage Time Distribution

First, we provide functions to calculate the partial derivatives of the first-passage time diffusion probability density function (PDF) and cumulative distribution function (CDF) with respect to the first-passage time t (PDF only), the upper barrier a, the drift rate v, the relative starting point w, the non-decision time t0, the inter-trial variability of the drift rate sv, the inter-trial variability of the relative starting point sw, and the inter-trial variability of the non-decision time st0. The PDF and CDF themselves are also provided. Most calculations are done on the logarithmic scale for numerical stability. Since the PDF, CDF, and their derivatives are represented as infinite series, the user can control the approximation error with the argument 'precision'. For numerical integration we use the C library cubature by Johnson, S. G. (2005-2013) <https://github.com/stevengj/cubature>. Numerical integration is required whenever sv, sw, and/or st0 is not zero. Note that numerical integration slows down the computation and the precision can no longer be guaranteed; therefore, whenever numerical integration is used, an estimate of the approximation error is included in the output list. Note: the large number of contributors (ctb) is due to the many C/C++ code chunks copied from the GNU Scientific Library (GSL). Second, we provide methods to sample from the first-passage time distribution, with or without user-defined truncation from above. The first method is a new adaptive rejection sampler building on the work of Gilks and Wild (1992; <doi:10.2307/2347565>) and Hartmann and Klauer (in press). The second method is a rejection sampler provided by Drugowitsch (2016; <doi:10.1038/srep20490>). The third method is an inverse transformation sampler. The fourth method is a "pseudo" adaptive rejection sampler that builds on the first method. For more details see the corresponding help files.
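
A minimal usage sketch: the function names (WienerPDF, WienerCDF, sampWiener) and their arguments are recalled from the package and should be checked against its help files.

    library(WienR)
    # PDF and CDF of the first-passage time at t = 1 for the upper boundary,
    # with drift-rate variability sv (which triggers numerical integration)
    WienerPDF(t = 1, response = "upper", a = 1.5, v = 0.8, w = 0.5, sv = 0.2)
    WienerCDF(t = 1, response = "upper", a = 1.5, v = 0.8, w = 0.5, sv = 0.2)
    # draw samples from the first-passage time distribution
    sampWiener(N = 100, a = 1.5, v = 0.8, w = 0.5)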

Maintained by Raphael Hartmann. Last updated 1 year ago.

cpp

30.7 match 2 stars 3.48 score 3 scripts 1 dependent

functionaldata

fdapace:Functional Data Analysis and Empirical Dynamics

A versatile package that provides implementations of various methods for Functional Data Analysis (FDA) and Empirical Dynamics. The core of the package is Functional Principal Component Analysis (FPCA), a key technique for functional data analysis, for sparsely or densely sampled random trajectories and time courses, via the Principal Analysis by Conditional Estimation (PACE) algorithm. This core algorithm yields covariance and mean functions, eigenfunctions and principal component scores, for both functional data and their derivatives, and for both dense (functional) and sparse (longitudinal) sampling designs. For sparse designs, it provides fitted continuous trajectories with confidence bands, even for subjects with very few longitudinal observations. PACE is a viable and flexible alternative to random effects modeling of longitudinal data. There is also a Matlab version (PACE) that contains some methods not available in fdapace and vice versa. Updates to fdapace were supported by grants from NIH ECHO and NSF DMS-1712864 and DMS-2014626. Please cite our package if you use it (you may run the command citation("fdapace") to get the citation format and BibTeX entry). References: Wang, J.L., Chiou, J., Müller, H.G. (2016) <doi:10.1146/annurev-statistics-041715-033624>; Chen, K., Zhang, X., Petersen, A., Müller, H.G. (2017) <doi:10.1007/s12561-015-9137-5>.
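
A minimal sketch of the core FPCA workflow on simulated sparse data, closely following the package's own examples.

    library(fdapace)
    set.seed(1)
    # simulate 20 sparsely observed Wiener-process trajectories on [0, 1]
    pts    <- seq(0, 1, by = 0.05)
    dense  <- Wiener(n = 20, pts = pts)
    sparse <- Sparsify(dense, pts, sparsity = 1:5)   # 1-5 observations per subject
    # PACE: FPCA from lists of observations (Ly) and observation times (Lt)
    res <- FPCA(sparse$Ly, sparse$Lt)
    plot(res)   # default diagnostic plots: design, mean, eigenfunctions, fits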

Maintained by Yidong Zhou. Last updated 9 months ago.

cpp

6.8 match 31 stars 11.46 score 474 scripts 25 dependents

welch-lab

cytosignal:What the Package Does (One Line, Title Case)

What the package does (one paragraph).

Maintained by Jialin Liu. Last updated 6 days ago.

openblas cpp

12.4 match 16 stars 5.95 score 6 scripts

bioc

Biobase:Base functions for Bioconductor

Functions that are needed by many other packages or which replace R functions.
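
A minimal sketch of the package's central data structure, the ExpressionSet.

    library(Biobase)
    # a minimal ExpressionSet: 4 features x 5 samples
    m <- matrix(rnorm(20), nrow = 4,
                dimnames = list(paste0("gene", 1:4), paste0("sample", 1:5)))
    eset <- ExpressionSet(assayData = m)
    exprs(eset)[1:2, 1:3]   # accessor for the expression matrix
    pData(eset)             # phenotype data (empty here)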

Maintained by Bioconductor Package Maintainer. Last updated 5 months ago.

infrastructure bioconductor-package core-package

3.4 match 9 stars 16.45 score 6.6k scripts 1.8k dependents

jasjeetsekhon

rgenoud:R Version of GENetic Optimization Using Derivatives

A genetic algorithm plus derivative optimizer.
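
A minimal sketch of the main function, genoud(); the toy objective is illustrative only.

    library(rgenoud)
    # minimise a bumpy one-dimensional function: the genetic search explores
    # the domain while a derivative-based (BFGS) step refines candidate solutions
    f <- function(x) sin(10 * x) + x^2
    out <- genoud(fn = f, nvars = 1, max = FALSE, pop.size = 500,
                  Domains = matrix(c(-2, 2), nrow = 1))
    out$par    # best parameter found
    out$value  # objective value at that point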

Maintained by Jasjeet Singh Sekhon. Last updated 2 years ago.

5.4 match 12 stars 9.76 score 251 scripts 29 dependents

poissonconsulting

embr:Model Builder Utility Functions and Virtual Classes

Utility functions and virtual classes shared by model builder packages such as tmbr, jmbr and smbr.

Maintained by Joe Thorley. Last updated 1 month ago.

analyses mbr

10.5 match 3 stars 4.61 score 4 scripts 3 dependents

bioc

mixOmics:Omics Data Integration Project

Multivariate methods are well suited to large omics data sets where the number of variables (e.g. genes, proteins, metabolites) is much larger than the number of samples (patients, cells, mice). They have the appealing property of reducing the dimension of the data by using instrumental variables (components), which are defined as combinations of all variables. Those components are then used to produce useful graphical outputs that enable a better understanding of the relationships and correlation structures between the different data sets that are integrated. mixOmics offers a wide range of multivariate methods for the exploration and integration of biological datasets, with a particular focus on variable selection. The package proposes several sparse multivariate models we have developed to identify the key variables that are highly correlated and/or explain the biological outcome of interest. The data that can be analysed with mixOmics may come from high-throughput sequencing technologies, such as omics data (transcriptomics, metabolomics, proteomics, metagenomics, etc.), but also from beyond the realm of omics (e.g. spectral imaging). The methods implemented in mixOmics can also handle missing values without having to delete entire rows with missing data. A non-exhaustive list of methods includes variants of generalised Canonical Correlation Analysis, sparse Partial Least Squares, and sparse Discriminant Analysis. Recently we implemented integrative methods to combine multiple data sets: N-integration with variants of Generalised Canonical Correlation Analysis and P-integration with variants of multi-group Partial Least Squares.
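
A minimal sparse-PLS sketch using the liver.toxicity example data shipped with the package; the variable names follow the package documentation.

    library(mixOmics)
    data(liver.toxicity)               # example data shipped with the package
    X <- liver.toxicity$gene           # gene expression matrix
    Y <- liver.toxicity$clinic         # clinical measurements
    # sparse PLS keeping 50 genes per component
    fit <- spls(X, Y, ncomp = 2, keepX = c(50, 50))
    plotIndiv(fit)                     # samples projected on the components
    plotVar(fit)                       # correlation circle of the selected variables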

Maintained by Eva Hamrud. Last updated 4 days ago.

immunooncology microarray sequencing metabolomics metagenomics proteomics geneprediction multiplecomparison classification regression bioconductor genomics genomics-data genomics-visualization multivariate-analysis multivariate-statistics omics r-pkg r-project

3.5 match 182 stars 13.71 score 1.3k scripts 22 dependents

polinasuter

BiDAG:Bayesian Inference for Directed Acyclic Graphs

Implementation of a collection of MCMC methods for Bayesian structure learning of directed acyclic graphs (DAGs), from both continuous and discrete data. For efficient inference on larger DAGs, the space of DAGs is pruned according to the data. To filter the search space, the algorithm employs a hybrid approach, combining constraint-based learning with search and score. A reduced search space is initially defined on the basis of a skeleton obtained by means of the PC algorithm, and is then iteratively improved with search and score. Search and score is performed following two approaches: Order MCMC or Partition MCMC. The BGe score is implemented for continuous data and the BDe score for binary or categorical data. The algorithms may provide the maximum a posteriori (MAP) graph or a sample (a collection of DAGs) from the posterior distribution given the data. All algorithms are also applicable to structure learning and sampling for dynamic Bayesian networks. References: J. Kuipers, P. Suter, G. Moffa (2022) <doi:10.1080/10618600.2021.2020127>, N. Friedman and D. Koller (2003) <doi:10.1023/A:1020249912095>, J. Kuipers and G. Moffa (2017) <doi:10.1080/01621459.2015.1133426>, M. Kalisch et al. (2012) <doi:10.18637/jss.v047.i11>, D. Geiger and D. Heckerman (2002) <doi:10.1214/aos/1035844981>, P. Suter, J. Kuipers, G. Moffa, N. Beerenwinkel (2023) <doi:10.18637/jss.v105.i09>.
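
A minimal sketch on simulated continuous data; the scoreparameters()/orderMCMC() calls follow the package documentation, and the $DAG slot name is an assumption worth verifying against the help files.

    library(BiDAG)
    set.seed(1)
    # toy continuous data: 200 observations of 5 variables
    d <- matrix(rnorm(1000), ncol = 5)
    colnames(d) <- paste0("V", 1:5)
    sp  <- scoreparameters("bge", d)   # BGe score for continuous data
    fit <- orderMCMC(sp)               # Order MCMC on the pruned search space
    fit$DAG                            # maximum a posteriori DAG (adjacency matrix)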

Maintained by Polina Suter. Last updated 2 years ago.

cpp

13.6 match 4 stars 3.29 score 81 scripts 2 dependents

murrayefford

openCR:Open Population Capture-Recapture

Non-spatial and spatial open-population capture-recapture analysis.

Maintained by Murray Efford. Last updated 5 months ago.

cpp

7.3 match 4 stars 5.98 score 53 scripts

r-cas

Ryacas:R Interface to the 'Yacas' Computer Algebra System

Interface to the 'yacas' computer algebra system (<http://www.yacas.org/>).
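
A minimal sketch of the two interfaces: raw yacas strings via yac_str() and R-side symbols via ysym().

    library(Ryacas)
    # pass yacas code as strings and get simplified strings back
    yac_str("Simplify((x^2 - 1)/(x - 1))")   # "x+1"
    yac_str("D(x) x^2 + 2*x + 1")            # "2*x+2"
    # or keep expressions as R-side symbols that support ordinary arithmetic
    p <- ysym("x^2 + 2*x + 1")
    p * 2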

Maintained by Mikkel Meyer Andersen. Last updated 2 years ago.

cpp

3.9 match 40 stars 10.15 score 167 scripts 14 dependents

qsbase

qs:Quick Serialization of R Objects

Provides functions for quickly writing and reading any R object to and from disk.
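
A minimal round-trip sketch with qsave()/qread().

    library(qs)
    x <- list(a = rnorm(1e5), b = letters)
    qsave(x, "x.qs")      # fast, compressed serialization to disk
    y <- qread("x.qs")    # read it back
    identical(x, y)       # TRUE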

Maintained by Travers Ching. Last updated 10 days ago.

compression data-storage encoding serialization libzstd lz4 cpp

2.3 match 414 stars 13.91 score 2.5k scripts 51 dependents

daijiang

rtrees:Deriving Phylogenies from Synthesis Trees

To facilitate generating phylogenies from synthesis trees.
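
A minimal sketch; the get_tree() call and its sp_list/taxon arguments are recalled from the package README and should be verified.

    library(rtrees)
    # derive a phylogeny for a small species list from the plant megatree
    sp <- c("Quercus rubra", "Acer saccharum", "Pinus strobus")
    tree <- get_tree(sp_list = sp, taxon = "plant")
    plot(tree)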

Maintained by Daijiang Li. Last updated 9 months ago.

phylogenetic-trees

5.0 match 33 stars 5.68 score 73 scripts

fauvernierma

survPen:Multidimensional Penalized Splines for (Excess) Hazard Models, Relative Mortality Ratio Models and Marginal Intensity Models

Fits (excess) hazard, relative mortality ratio, or marginal intensity models with multidimensional penalized splines, allowing for time-dependent effects, non-linear effects, and interactions between several continuous covariates. In survival and net survival analysis, in addition to modelling the effect of time (via the baseline hazard), one often has to deal with several continuous covariates and model their functional forms, their time-dependent effects, and their interactions. Model specification therefore becomes a complex problem, and penalized regression splines represent an appealing solution: splines offer the required flexibility while penalization limits overfitting. Current implementations of penalized survival models can be slow or unstable, and sometimes lack key features such as taking expected mortality into account to provide net survival and excess hazard estimates. In contrast, survPen provides an automated, fast, and stable implementation (thanks to explicit calculation of the derivatives of the likelihood) and offers a unified framework for multidimensional penalized hazard and excess hazard models. Later versions (>2.0.0) include penalized models for the relative mortality ratio and for the marginal intensity in the recurrent-event setting. survPen may be of interest to those who 1) analyse any kind of time-to-event data (mortality, disease relapse, machinery breakdown, unemployment, etc.) and wish to describe the associated hazard; 2) wish to understand which predictors impact its dynamics; 3) wish to model the relative mortality ratio between a cohort and a reference population; 4) wish to describe the marginal intensity for recurrent event data. See Fauvernier et al. (2019a) <doi:10.21105/joss.01434> for an overview of the package and Fauvernier et al. (2019b) <doi:10.1111/rssc.12368> for the method.
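
A minimal sketch of a penalized hazard fit; the datCancer dataset and its fu/dead/age variables are recalled from the package examples and may need checking.

    library(survPen)
    data(datCancer)   # simulated cancer cohort shipped with the package
    # penalized hazard model with smooth effects of follow-up time and age
    mod <- survPen(~ smf(fu) + smf(age), data = datCancer, t1 = fu, event = dead)
    summary(mod)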

Maintained by Mathieu Fauvernier. Last updated 4 months ago.

cpp

4.1 match 12 stars 6.82 score 85 scripts 1 dependent

cran

circular:Circular Statistics

Circular statistics, from "Topics in Circular Statistics" (2001) by S. Rao Jammalamadaka and A. SenGupta, World Scientific.
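
A minimal sketch of simulating and summarising circular data.

    library(circular)
    # 100 angles from a von Mises distribution, then circular summaries
    theta <- rvonmises(n = 100, mu = circular(pi/4), kappa = 3)
    mean(theta)           # circular mean direction
    rho.circular(theta)   # mean resultant length
    plot(theta, stack = TRUE)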

Maintained by Eduardo García-Portugués. Last updated 7 months ago.

fortran

3.4 match 7 stars 7.76 score 1.1k scripts 40 dependents

dnychka

fields:Tools for Spatial Data

For curve, surface and function fitting with an emphasis on splines, spatial data, geostatistics, and spatial statistics. The major methods include cubic and thin plate splines, Kriging, and compactly supported covariance functions for large data sets. The spline and Kriging methods are supported by functions that can determine the smoothing parameter (nugget and sill variance) and other covariance function parameters by cross validation and also by restricted maximum likelihood. For Kriging, there is an easy-to-use function that also estimates the correlation scale (range parameter). A major feature is that any covariance function implemented in R and following a simple format can be used for spatial prediction. There are also many useful functions for plotting and working with spatial data as images. The package also contains an implementation of sparse matrix methods for large spatial data sets and currently requires the sparse matrix package spam. Use help(fields) to get started and for an overview. The fields source code is deliberately commented and provides useful explanations of numerical details as a companion to the manual pages; the commented source can be viewed by expanding the source code version and looking in the R subdirectory. The reference for fields can be generated by the citation function in R and has DOI <doi:10.5065/D6W957CT>. Development of this package was supported in part by National Science Foundation Grant 1417857, the National Center for Atmospheric Research, and the Colorado School of Mines. See the fields URL for a vignette on using the package and some background on spatial statistics.
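
A minimal sketch of thin plate spline fitting and the Kriging wrapper, using the ozone2 data shipped with the package.

    library(fields)
    data(ozone2)                  # ozone monitoring data shipped with fields
    x  <- ozone2$lon.lat          # station coordinates
    y  <- ozone2$y[16, ]          # ozone measurements on one day
    ok <- !is.na(y)
    fit <- Tps(x[ok, ], y[ok])    # thin plate spline, smoothing chosen by GCV
    surface(fit)                  # plot the fitted surface
    fit2 <- spatialProcess(x[ok, ], y[ok])   # Kriging wrapper that also estimates the range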

Maintained by Douglas Nychka. Last updated 9 months ago.

fortran

2.0 match 15 stars 12.60 score 7.7k scripts 295 dependents

cran

propagate:Propagation of Uncertainty

Propagation of uncertainty using higher-order Taylor expansion and Monte Carlo simulation.
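
A minimal sketch; the expected data layout (one column per input, first row means, second row standard deviations) is recalled from the package documentation and may need checking.

    library(propagate)
    # uncertainty of f = a * b^2, inputs given as (mean, sd) columns
    EXPR <- expression(a * b^2)
    DAT  <- cbind(a = c(5, 0.1), b = c(2, 0.05))   # row 1 = means, row 2 = standard deviations
    res  <- propagate(expr = EXPR, data = DAT, nsim = 100000)
    res   # first/second-order Taylor results alongside the Monte Carlo simulation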

Maintained by Andrej-Nikolai Spiess. Last updated 7 years ago.

cpp

4.9 match 2 stars 4.82 score 183 scripts 3 dependents