Showing 200 of 219 total results

mu-sigma

HVT:Constructing Hierarchical Voronoi Tessellations and Overlay Heatmaps for Data Analysis

Facilitates building topology preserving maps for data analysis.

Maintained by Mu Sigma, Inc. Last updated 1 month ago.

23.4 match 4 stars 6.26 score 1 script

dnychka

fields:Tools for Spatial Data

For curve, surface and function fitting with an emphasis on splines, spatial data, geostatistics, and spatial statistics. The major methods include cubic and thin plate splines, Kriging, and compactly supported covariance functions for large data sets. The splines and Kriging methods are supported by functions that can determine the smoothing parameter (nugget and sill variance) and other covariance function parameters by cross validation and also by restricted maximum likelihood. For Kriging there is an easy-to-use function that also estimates the correlation scale (range parameter). A major feature is that any covariance function implemented in R and following a simple format can be used for spatial prediction. There are also many useful functions for plotting and working with spatial data as images. This package also contains an implementation of sparse matrix methods for large spatial data sets and currently requires the sparse matrix (spam) package. Use help(fields) to get started and for an overview. The fields source code is deliberately commented and provides useful explanations of numerical details as a companion to the manual pages. The commented source code can be viewed by expanding the source code version and looking in the R subdirectory. The reference for fields can be generated by the citation function in R and has DOI <doi:10.5065/D6W957CT>. Development of this package was supported in part by the National Science Foundation Grant 1417857, the National Center for Atmospheric Research, and Colorado School of Mines. See the Fields URL for a vignette on using this package and some background on spatial statistics.

Maintained by Douglas Nychka. Last updated 9 months ago.

fortran

5.8 match 15 stars 12.60 score 7.7k scripts 295 dependents
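A minimal sketch of the thin plate spline workflow the fields description mentions, using the ChicagoO3 ozone data shipped with the package (assumes fields and its spam dependency are installed):

```r
library(fields)

data(ChicagoO3)                       # monitoring station locations and ozone values
fit <- Tps(ChicagoO3$x, ChicagoO3$y)  # thin plate spline; smoothing parameter chosen by cross validation
summary(fit)                          # effective degrees of freedom, estimated lambda
surface(fit)                          # plot the fitted surface as an image with contours
```

For Kriging with an estimated range parameter, the package's spatialProcess() function follows the same fit-then-plot pattern.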

moshagen

semPower:Power Analyses for SEM

Provides a priori, post hoc, and compromise power analyses for structural equation models (SEM).

Maintained by Morten Moshagen. Last updated 1 month ago.

10.8 match 8 stars 6.30 score 18 scripts
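A sketch of the a priori analysis the semPower description refers to, following the package's documented interface (the argument names are taken from the package manual and may differ across versions): determine the sample size needed to detect a given model misfit.

```r
library(semPower)

# A priori power analysis: required N to detect a misfit of RMSEA = .05
# with alpha = .05 and power = .80 for a model with df = 100.
ap <- semPower.aPriori(effect = .05, effect.measure = "RMSEA",
                       alpha = .05, power = .80, df = 100)
summary(ap)
```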

alexiosg

rugarch:Univariate GARCH Models

Univariate GARCH models with ARFIMA, in-mean, and external-regressor specifications and various GARCH flavors, with methods for fitting, forecasting, simulation, inference, and plotting.

Maintained by Alexios Galanos. Last updated 3 months ago.

cpp

3.3 match 26 stars 12.13 score 1.3k scripts 15 dependents
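The rugarch specify-fit-forecast workflow can be sketched as follows, using the S&P 500 return series shipped with the package (a minimal example, assuming rugarch is installed):

```r
library(rugarch)

# Standard GARCH(1,1) variance equation with an ARMA(1,1) mean equation
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model = list(armaOrder = c(1, 1), include.mean = TRUE))

data(sp500ret)                            # daily S&P 500 returns shipped with rugarch
fit <- ugarchfit(spec, data = sp500ret)   # maximum likelihood fit
fc  <- ugarchforecast(fit, n.ahead = 10)  # 10-step conditional volatility forecast
```

Swapping "sGARCH" for "eGARCH" or "gjrGARCH" in the spec selects the other GARCH flavors the description mentions.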

cran

nlme:Linear and Nonlinear Mixed Effects Models

Fit and compare Gaussian linear and nonlinear mixed-effects models.

Maintained by R Core Team. Last updated 2 months ago.

fortran

2.3 match 6 stars 13.00 score 13k scripts 8.7k dependents
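The nlme fit-and-compare workflow can be sketched with the Orthodont dental growth data that ships with the package:

```r
library(nlme)

# Random-intercept model: each subject gets its own baseline
fm1 <- lme(distance ~ age, data = Orthodont, random = ~ 1 | Subject)

# Random-slope model: each subject also gets its own growth rate
fm2 <- lme(distance ~ age, data = Orthodont, random = ~ age | Subject)

anova(fm1, fm2)  # likelihood ratio test and AIC/BIC comparison
```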

ramnathv

htmlwidgets:HTML Widgets for R

A framework for creating HTML widgets that render in various contexts including the R console, 'R Markdown' documents, and 'Shiny' web applications.

Maintained by Carson Sievert. Last updated 1 year ago.

1.3 match 791 stars 19.05 score 7.4k scripts 3.1k dependents

mqbssppe

fabMix:Overfitting Bayesian Mixtures of Factor Analyzers with Parsimonious Covariance and Unknown Number of Components

Model-based clustering of multivariate continuous data using Bayesian mixtures of factor analyzers (Papastamoulis (2019) <DOI:10.1007/s11222-019-09891-z>, (2018) <DOI:10.1016/j.csda.2018.03.007>). The number of clusters is estimated using overfitting mixture models (Rousseau and Mengersen (2011) <DOI:10.1111/j.1467-9868.2011.00781.x>): suitable prior assumptions ensure that asymptotically the extra components will have zero posterior weight, so inference is based on the ``alive'' components. A Gibbs sampler is implemented in order to (approximately) sample from the posterior distribution of the overfitting mixture. A prior parallel tempering scheme is also available, which allows running multiple chains with different prior distributions on the mixture weights. These chains run in parallel and can swap states using a Metropolis-Hastings move. Eight different parameterizations give rise to parsimonious representations of the covariance per cluster (following McNicholas and Murphy (2008) <DOI:10.1007/s11222-008-9056-0>). The model parameterization and number of factors are selected according to the Bayesian Information Criterion. Identifiability issues related to label switching are dealt with by post-processing the simulated output with the Equivalence Classes Representatives algorithm (Papastamoulis and Iliopoulos (2010) <DOI:10.1198/jcgs.2010.09008>, Papastamoulis (2016) <DOI:10.18637/jss.v069.c01>).

Maintained by Panagiotis Papastamoulis. Last updated 1 year ago.

openblas, cpp, openmp

10.4 match 2.09 score 41 scripts 1 dependent

bioc

PROcess:Ciphergen SELDI-TOF Processing

A package for processing protein mass spectrometry data.

Maintained by Xiaochun Li. Last updated 5 months ago.

immunooncology, massspectrometry, proteomics

3.3 match 6.04 score 552 scripts

lukejharmon

geiger:Analysis of Evolutionary Diversification

Methods for fitting macroevolutionary models to phylogenetic trees; see Pennell (2014) <doi:10.1093/bioinformatics/btu181>.

Maintained by Luke Harmon. Last updated 2 years ago.

openblas, cpp

2.3 match 1 star 7.84 score 2.3k scripts 28 dependents
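A sketch of fitting and comparing the macroevolutionary models the geiger description refers to, using the Darwin's finch example data shipped with the package (the element names $phy and $dat follow geiger 2.x packaging and may differ in older versions):

```r
library(geiger)

data(geospiza)                               # finch phylogeny and trait data
td <- treedata(geospiza$phy, geospiza$dat)   # drop tips/rows that do not match

# Fit Brownian motion and Ornstein-Uhlenbeck models to wing length
bm <- fitContinuous(td$phy, td$data[, "wingL"], model = "BM")
ou <- fitContinuous(td$phy, td$data[, "wingL"], model = "OU")

c(BM = bm$opt$aicc, OU = ou$opt$aicc)        # compare by small-sample AIC
```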

bioc

gcrma:Background Adjustment Using Sequence Information

Background adjustment using sequence information.

Maintained by Z. Wu. Last updated 5 months ago.

microarray, onechannel, preprocessing

2.3 match 7.28 score 164 scripts 11 dependents

pachadotdev

LSTS:Locally Stationary Time Series

A set of functions for stationary and locally stationary time series analysis.

Maintained by Mauricio Vargas. Last updated 1 year ago.

1.8 match 3 stars 5.54 score 51 scripts 5 dependents

flr

AAP:Aarts and Poos Stock Assessment Model that Estimates Bycatch

FLR version of Aarts and Poos stock assessment model.

Maintained by Iago Mosqueira. Last updated 1 year ago.

3.0 match 2.70 score 5 scripts

georgekoliopanos

modgo:MOck Data GeneratiOn

Generation of mock data from a real dataset using rank normal inverse transformation.

Maintained by George Koliopanos. Last updated 9 months ago.

1.8 match 1 star 4.00 score 3 scripts

flaviobarros

IQCC:Improved Quality Control Charts

Builds statistical control charts with exact limits for univariate and multivariate cases.

Maintained by Flavio Barros. Last updated 6 years ago.

quality-control

1.7 match 2 stars 3.92 score 28 scripts 1 dependent

marlonecobos

nichevol:Tools for Ecological Niche Evolution Assessment Considering Uncertainty

A collection of tools that allow users to perform critical steps in the process of assessing ecological niche evolution over phylogenies, with uncertainty incorporated explicitly in reconstructions. The method proposed here for ancestral reconstruction of ecological niches characterizes species' niches using a bin-based approach that incorporates uncertainty in estimations. Compared to other existing methods, the approaches presented here reduce risk of overestimation of amounts and rates of ecological niche evolution. The main analyses include: initial exploration of environmental data in occurrence records and accessible areas, preparation of data for phylogenetic analyses, executing comparative phylogenetic analyses of ecological niches, and plotting for interpretations. Details on the theoretical background and methods used can be found in: Owens et al. (2020) <doi:10.1002/ece3.6359>, Peterson et al. (1999) <doi:10.1126/science.285.5431.1265>, Soberón and Peterson (2005) <doi:10.17161/bi.v2i0.4>, Peterson (2011) <doi:10.1111/j.1365-2699.2010.02456.x>, Barve et al. (2011) <doi:10.1111/ecog.02671>, Machado-Stredel et al. (2021) <doi:10.21425/F5FBG48814>, Owens et al. (2013) <doi:10.1016/j.ecolmodel.2013.04.011>, Saupe et al. (2018) <doi:10.1093/sysbio/syx084>, and Cobos et al. (2021) <doi:10.1111/jav.02868>.

Maintained by Marlon E. Cobos. Last updated 2 years ago.

1.7 match 14 stars 3.85 score 2 scripts

thomaschln

kgraph:Knowledge Graphs Constructions and Visualizations

Knowledge graphs make it possible to efficiently visualize and gain insights into large-scale data analysis results, such as p-values from multiple studies or embedding data matrices. The usual workflow is a user providing a data frame of association study results and specifying target nodes, e.g. phenotypes, to visualize. The knowledge graph then shows all the features which are significantly associated with the phenotype, with the edges being proportional to the association scores. As the user adds several target nodes and grouping information about the nodes such as biological pathways, the construction of such graphs soon becomes complex. The 'kgraph' package aims to enable users to easily build such knowledge graphs, and provides two main features: first, building a knowledge graph based on a data frame of concept relationships, be they p-values or cosine similarities; second, determining an appropriate cut-off on cosine similarities from a complete embedding matrix, so that a knowledge graph can be built directly from an embedding matrix. The 'kgraph' package provides several display, layout and cut-off options, and has already proven useful to researchers for visualizing large sets of p-value associations with various phenotypes and for quickly visualizing embedding results. Two example datasets are provided to demonstrate these behaviors, and several live 'shiny' applications are hosted by the CELEHS laboratory and Parse Health, such as the KESER Mental Health application <https://keser-mental-health.parse-health.org/> based on Hong C. (2021) <doi:10.1038/s41746-021-00519-z>.

Maintained by Thomas Charlon. Last updated 25 days ago.

1.3 match 4.85 score

bristol-vaccine-centre

testerror:Uncertainty in Multiplex Panel Testing

Provides methods to support the estimation of epidemiological parameters based on the results of multiplex panel tests.

Maintained by Robert Challen. Last updated 12 months ago.

1.8 match 1 star 3.40 score 4 scripts

prajual

bqror:Bayesian Quantile Regression for Ordinal Models

Provides functions for estimating Bayesian quantile regression with ordinal outcomes, computing the covariate effects, model comparison measures, and the inefficiency factor. The generic ordinal model with 3 or more outcomes (labeled OR1 model) is estimated by a combination of Gibbs sampling and the Metropolis-Hastings algorithm, whereas an ordinal model with exactly 3 outcomes (labeled OR2 model) is estimated using Gibbs sampling only. For each model framework, there is a specific function for estimation. The summary output produces estimates for regression quantiles and two measures of model comparison: the log of the marginal likelihood and the Deviance Information Criterion (DIC). The package also has specific functions for computing the covariate effects and other functions that aid either estimation or inference in quantile ordinal models. Rahman, M. A. (2016). “Bayesian Quantile Regression for Ordinal Models.” Bayesian Analysis, 11(1): 1-24 <doi:10.1214/15-BA939>. Yu, K., and Moyeed, R. A. (2001). “Bayesian Quantile Regression.” Statistics and Probability Letters, 54(4): 437-447 <doi:10.1016/S0167-7152(01)00124-9>. Koenker, R., and Bassett, G. (1978). “Regression Quantiles.” Econometrica, 46(1): 33-50 <doi:10.2307/1913643>. Chib, S. (1995). “Marginal Likelihood from the Gibbs Output.” Journal of the American Statistical Association, 90(432): 1313-1321 <doi:10.1080/01621459.1995.10476635>. Chib, S., and Jeliazkov, I. (2001). “Marginal Likelihood from the Metropolis-Hastings Output.” Journal of the American Statistical Association, 96(453): 270-281 <doi:10.1198/016214501750332848>.

Maintained by Prajual Maheshwari. Last updated 3 years ago.

1.9 match 2.01 score 4 scripts

pvkabaila

ciuupi:Confidence Intervals Utilizing Uncertain Prior Information

Computes a confidence interval for a specified linear combination of the regression parameters in a linear regression model with iid normal errors with known variance when there is uncertain prior information that a distinct specified linear combination of the regression parameters takes a given value. This confidence interval, found by numerical nonlinear constrained optimization, has the required minimum coverage and utilizes this uncertain prior information through desirable expected length properties. This confidence interval has the following three practical applications. Firstly, if the error variance has been accurately estimated from previous data then it may be treated as being effectively known. Secondly, for sufficiently large (dimension of the response vector) minus (dimension of regression parameter vector), greater than or equal to 30 (say), if we replace the assumed known value of the error variance by its usual estimator in the formula for the confidence interval then the resulting interval has, to a very good approximation, the same coverage probability and expected length properties as when the error variance is known. Thirdly, some more complicated models can be approximated by the linear regression model with error variance known when certain unknown parameters are replaced by estimates. This confidence interval is described in Mainzer, R. and Kabaila, P. (2019) <doi:10.32614/RJ-2019-026>, and is a member of the family of confidence intervals proposed by Kabaila, P. and Giri, K. (2009) <doi:10.1016/j.jspi.2009.03.018>.

Maintained by Paul Kabaila. Last updated 9 months ago.

1.6 match 2.00 score 8 scripts

edmhlin

BAYSTAR:On Bayesian Analysis of Threshold Autoregressive Models

Fit two-regime threshold autoregressive (TAR) models by Markov chain Monte Carlo methods.

Maintained by Edward M.H. Lin. Last updated 3 years ago.

2.3 match 2 stars 1.30 score 4 scripts