Showing 35 of 35 results
neotomadb
neotoma2:Working with the Neotoma Paleoecology Database
Access and manipulation of data using the Neotoma Paleoecology Database. <https://api.neotomadb.org/api-docs/>.
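A minimal sketch of a typical neotoma2 workflow, using the package's documented functions (get_sites, get_datasets, get_downloads, samples); the site-name pattern and limit are illustrative only.

```r
library(neotoma2)

sites    <- get_sites(sitename = "%Lake%", limit = 5)  # search sites by name pattern
datasets <- get_datasets(sites)                        # datasets available at those sites
records  <- get_downloads(datasets)                    # retrieve the sample data
samples(records)                                       # flatten to a sample data frame
```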
Maintained by Dominguez Vidana Socorro. Last updated 8 months ago.
earthcube, neotoma, nsf, paleoecology
11.0 match 8 stars 5.35 score 56 scripts
ropensci
neotoma:Access to the Neotoma Paleoecological Database Through R
NOTE: This package is deprecated. Please use the neotoma2 package described at https://github.com/NeotomaDB/neotoma2. Access paleoecological datasets from the Neotoma Paleoecological Database using the published API (<http://wnapi.neotomadb.org/>), which contains only datasets uploaded prior to June 2020. The functions in this package access various pre-built API functions and attempt to return the results from Neotoma in a usable format for researchers and the public.
Maintained by Simon J. Goring. Last updated 2 years ago.
neotoma, neotoma-apis, neotoma-database, nsf, paleoecology
11.0 match 30 stars 5.04 score 145 scripts
enricoschumann
NMOF:Numerical Methods and Optimization in Finance
Functions, examples and data from the first and the second edition of "Numerical Methods and Optimization in Finance" by M. Gilli, D. Maringer and E. Schumann (2019, ISBN:978-0128150658). The package provides implementations of optimisation heuristics (Differential Evolution, Genetic Algorithms, Particle Swarm Optimisation, Simulated Annealing and Threshold Accepting), and other optimisation tools, such as grid search and greedy search. There are also functions for the valuation of financial instruments such as bonds and options, for portfolio selection and functions that help with stochastic simulations.
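A hedged sketch of one of the optimisation heuristics the description lists, Differential Evolution via NMOF's DEopt; the objective function and algorithm settings are illustrative, not tuned.

```r
library(NMOF)

## minimise the two-dimensional Rosenbrock function
rosenbrock <- function(x)
    100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

algo <- list(nP  = 50L,           # population size
             nG  = 300L,          # number of generations
             min = c(-2, -2),     # lower bounds
             max = c( 2,  2))     # upper bounds

sol <- DEopt(OF = rosenbrock, algo = algo)
sol$xbest    # best solution found, near c(1, 1)
sol$OFvalue  # objective value at the best solution
```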
Maintained by Enrico Schumann. Last updated 30 days ago.
black-scholes, differential-evolution, genetic-algorithm, grid-search, heuristics, implied-volatility, local-search, optimization, particle-swarm-optimization, simulated-annealing, threshold-accepting
3.3 match 36 stars 9.56 score 101 scripts 4 dependents
cboettig
contentid:An Interface for Content-Based Identifiers
An interface for creating, registering, and resolving content-based identifiers for data management. Content-based identifiers rely on cryptographic hashes to refer to the files they identify; thus, anyone possessing the file can compute the identifier using a well-known standard algorithm such as 'SHA256'. By registering a URL at which the content is accessible to a public archive (such as Hash Archive) or depositing data in a scientific repository such as 'Zenodo', 'DataONE' or 'SoftwareHeritage', the content identifier can serve many functions typically associated with a Digital Object Identifier ('DOI'). Unlike location-based identifiers like 'DOIs', content-based identifiers permit the same content to be registered in many locations.
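A small sketch of the create-and-resolve cycle the description outlines, using contentid's content_id() and resolve() on the example file shipped with the package.

```r
library(contentid)

## compute a content identifier (a hash://sha256/... URI) for a local file
f  <- system.file("extdata", "vostok.icecore.co2", package = "contentid")
id <- content_id(f)
id

## resolve the identifier back to a readable location for the content
path <- resolve(id)
```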
Maintained by Carl Boettiger. Last updated 2 months ago.
2.4 match 46 stars 7.59 score 108 scripts 3 dependents
dcgerard
segtest:Tests for Segregation Distortion in Polyploids
Provides a suite of tests for segregation distortion in F1 polyploid populations (for now, just tetraploids) under different assumptions about meiosis. Details of these methods are described in Gerard et al. (2025) <doi:10.1007/s00122-025-04816-z>. This material is based upon work supported by the National Science Foundation under Grant No. 2132247. The opinions, findings, and conclusions or recommendations expressed are those of the author and do not necessarily reflect the views of the National Science Foundation.
Maintained by David Gerard. Last updated 2 months ago.
2.4 match 1 star 4.85 score 3 scripts
ropensci
awardFindR:awardFindR
Queries a number of scientific awards databases. Collects relevant results based on keyword and date parameters, returns list of projects that fit those criteria as a data frame. Sources include: Arnold Ventures, Carnegie Corp, Federal RePORTER, Gates Foundation, MacArthur Foundation, Mellon Foundation, NEH, NIH, NSF, Open Philanthropy, Open Society Foundations, Rockefeller Foundation, Russell Sage Foundation, Robert Wood Johnson Foundation, Sloan Foundation, Social Science Research Council, John Templeton Foundation, and USASpending.gov.
Maintained by Michael McCall. Last updated 12 months ago.
2.4 match 16 stars 4.38 score 3 scripts
gorelab
waves:Vis-NIR Spectral Analysis Wrapper
Originally designed for application in the context of resource-limited plant research and breeding programs, 'waves' provides an open-source solution to spectral data processing and model development by bringing useful packages together into a streamlined pipeline. This package is a wrapper for functions related to the analysis of point visible and near-infrared reflectance measurements. It includes visualization, filtering, aggregation, preprocessing, cross-validation set formation, model training, and prediction functions to enable open-source association of spectral and reference data. This package is documented in a peer-reviewed manuscript in the Plant Phenome Journal <doi:10.1002/ppj2.20012>. Specialized cross-validation schemes are described in detail in Jarquín et al. (2017) <doi:10.3835/plantgenome2016.12.0130>. Example data are from Ikeogu et al. (2017) <doi:10.1371/journal.pone.0188918>.
Maintained by Jenna Hershberger. Last updated 11 months ago.
1.6 match 6 stars 5.98 score 53 scripts
lter
lterpalettefinder:Extract Color Palettes from Photos and Pick Official LTER Palettes
Allows identification of palettes derived from LTER (Long Term Ecological Research) photographs based on user criteria. Also facilitates extraction of palettes from users' photos directly.
Maintained by Nicholas J Lyon. Last updated 2 years ago.
color-palette-generator, data-science
1.5 match 27 stars 5.43 score 7 scripts
mhahsler
stream:Infrastructure for Data Stream Mining
A framework for data stream modeling and associated data mining tasks such as clustering and classification. The development of this package was supported in part by NSF IIS-0948893, NSF CMMI 1728612, and NIH R21HG005912. Hahsler et al (2017) <doi:10.18637/jss.v076.i14>.
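A hedged sketch of the stream clustering task the description mentions, pairing a data stream generator (DSD) with an online clustering algorithm (DSC); the parameter values are illustrative.

```r
library(stream)

dsd <- DSD_Gaussians(k = 3, d = 2)   # simulated stream of 3 Gaussian clusters in 2-d
dsc <- DSC_DBSTREAM(r = 0.05)        # DBSTREAM micro-cluster algorithm
update(dsc, dsd, n = 500)            # cluster 500 points from the stream online
dsc                                  # summary of micro-clusters found
```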
Maintained by Michael Hahsler. Last updated 4 days ago.
data-stream-clustering, datastream, stream-mining, cpp
0.8 match 39 stars 10.05 score 132 scripts 3 dependents
nceas
scicomptools:Tools Developed by the NCEAS Scientific Computing Support Team
Set of tools to import, summarize, wrangle, and visualize data. These functions were originally written based on the needs of the various synthesis working groups that were supported by the National Center for Ecological Analysis and Synthesis (NCEAS). These tools are meant to be useful inside and outside of the context for which they were designed.
Maintained by Angel Chen. Last updated 5 months ago.
1.5 match 9 stars 5.26 score 6 scripts
mikejohnson51
AHGestimation:An R package for Computing Robust, Mass Preserving Hydraulic Geometries and Rating Curves
Compute mass preserving 'At a station Hydraulic Geometry' (AHG) fits from river measurements.
Maintained by Mike Johnson. Last updated 3 months ago.
1.6 match 6 stars 5.02 score 10 scripts
lter
ltertools:Tools Developed by the Long Term Ecological Research Community
Set of the data science tools created by various members of the Long Term Ecological Research (LTER) community. These functions were initially written largely as standalone operations and have later been aggregated into this package.
Maintained by Nicholas Lyon. Last updated 23 days ago.
1.5 match 3 stars 4.95 score 4 scripts
cotterell
TDCM:The Transition Diagnostic Classification Model Framework
Estimate the transition diagnostic classification model (TDCM) described in Madison & Bradshaw (2018) <doi:10.1007/s11336-018-9638-5>, a longitudinal extension of the log-linear cognitive diagnosis model (LCDM) in Henson, Templin & Willse (2009) <doi:10.1007/s11336-008-9089-5>. As the LCDM subsumes many other diagnostic classification models (DCMs), many other DCMs can be estimated longitudinally via the TDCM. The 'TDCM' package includes functions to estimate the single-group and multigroup TDCM, summarize results of interest including item parameters, growth proportions, transition probabilities, transitional reliability, attribute correlations, model fit, and growth plots.
Maintained by Michael E. Cotterell. Last updated 4 months ago.
1.5 match 4.54 score 5 scripts
projectmosaic
mosaic:Project MOSAIC Statistics and Mathematics Teaching Utilities
Data sets and utilities from Project MOSAIC (<http://www.mosaic-web.org>) used to teach mathematics, statistics, computation and modeling. Funded by the NSF, Project MOSAIC is a community of educators working to tie together aspects of quantitative work that students in science, technology, engineering and mathematics will need in their professional lives, but which are usually taught in isolation, if at all.
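A brief sketch of mosaic's formula interface for descriptive statistics, using the HELPrct data set from the companion mosaicData package; function names are from mosaic's documented API.

```r
library(mosaic)

mean(~ age, data = HELPrct)      # mean of one variable
mean(age ~ sex, data = HELPrct)  # group means of age by sex
favstats(~ age, data = HELPrct)  # standard summary-statistics table
```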
Maintained by Randall Pruim. Last updated 1 year ago.
0.5 match 93 stars 13.32 score 7.2k scripts 7 dependents
wmay
oc:Optimal Classification Roll Call Analysis Software
Estimates optimal classification (Poole 2000) <doi:10.1093/oxfordjournals.pan.a029814> scores from roll call votes supplied through a 'rollcall' object from package 'pscl'.
Maintained by William May. Last updated 2 years ago.
fortran, ideal-points, political-science, openblas
1.5 match 2 stars 4.30 score 50 scripts
mikejohnson51
nwmTools:nwmTools
Tools for working with operational and historic National Water Model Output.
Maintained by Mike Johnson. Last updated 3 months ago.
1.6 match 19 stars 3.06 score 12 scripts
projectmosaic
mosaicData:Project MOSAIC Data Sets
Data sets from Project MOSAIC (<http://www.mosaic-web.org>) used to teach mathematics, statistics, computation and modeling. Funded by the NSF, Project MOSAIC is a community of educators working to tie together aspects of quantitative work that students in science, technology, engineering and mathematics will need in their professional lives, but which are usually taught in isolation, if at all.
Maintained by Randall Pruim. Last updated 1 year ago.
0.5 match 6 stars 8.33 score 632 scripts 8 dependents
cran
NPHazardRate:Nonparametric Hazard Rate Estimation
Provides functions and examples for histogram, kernel (classical, variable bandwidth and transformations based), discrete and semiparametric hazard rate estimators.
Maintained by Dimitrios Bagkavos. Last updated 6 years ago.
3.3 match 1.18 score 15 scripts
plfjohnson
adaptivetau:Tau-Leaping Stochastic Simulation
Implements adaptive tau leaping to approximate the trajectory of a continuous-time stochastic process as described by Cao et al. (2007) The Journal of Chemical Physics <doi:10.1063/1.2745299> (aka. the Gillespie stochastic simulation algorithm). This package is based upon work supported by NSF DBI-0906041 and NIH K99-GM104158 to Philip Johnson and NIH R01-AI049334 to Rustom Antia.
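A minimal sketch of a stochastic SIR epidemic simulated with the package's ssa.adaptivetau() function; the rates and initial values are illustrative.

```r
library(adaptivetau)

init <- c(S = 999, I = 1, R = 0)
transitions <- list(c(S = -1, I = +1),   # infection event
                    c(I = -1, R = +1))   # recovery event

rates <- function(x, params, t)
    c(params$beta * x["S"] * x["I"] / sum(x),  # infection rate
      params$gamma * x["I"])                   # recovery rate

out <- ssa.adaptivetau(init, transitions, rates,
                       params = list(beta = 0.5, gamma = 0.1), tf = 100)
head(out)  # matrix of time and the S, I, R counts
```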
Maintained by Philip Johnson. Last updated 5 months ago.
0.5 match 7.23 score 138 scripts 2 dependents
dcgerard
tensr:Covariance Inference and Decompositions for Tensor Datasets
A collection of functions for Kronecker structured covariance estimation and testing under the array normal model. For estimation, maximum likelihood and Bayesian equivariant estimation procedures are implemented. For testing, a likelihood ratio testing procedure is available. This package also contains additional functions for manipulating and decomposing tensor data sets. This work was partially supported by NSF grant DMS-1505136. Details of the methods are described in Gerard and Hoff (2015) <doi:10.1016/j.jmva.2015.01.020> and Gerard and Hoff (2016) <doi:10.1016/j.laa.2016.04.033>.
Maintained by David Gerard. Last updated 2 years ago.
0.5 match 5 stars 6.53 score 56 scripts 4 dependents
laurabruckman
netSEM:Network Structural Equation Modeling
Network structural equation modeling conducts a network statistical analysis on a data frame of coincident observations of multiple continuous variables [1]. It builds a pathway model by exploring a pool of domain-knowledge-guided candidate statistical relationships between each of the variable pairs, selecting the 'best fit' on the basis of a specific criterion such as the adjusted r-squared value. This material is based upon work supported by the U.S. National Science Foundation Awards EEC-2052776 and EEC-2052662 for the MDS-Rely IUCRC Center, under NSF Solicitation NSF 20-570, Industry-University Cooperative Research Centers Program. [1] Bruckman, Laura S., Nicholas R. Wheeler, Junheng Ma, Ethan Wang, Carl K. Wang, Ivan Chou, Jiayang Sun, and Roger H. French (2013) <doi:10.1109/ACCESS.2013.2267611>.
Maintained by Laura S. Bruckman. Last updated 2 years ago.
0.8 match 3.72 score 13 scripts
mhahsler
rEMM:Extensible Markov Model for Modelling Temporal Relationships Between Clusters
Implements TRACDS (Temporal Relationships between Clusters for Data Streams), a generalization of Extensible Markov Model (EMM). TRACDS adds a temporal or order model to data stream clustering by superimposing a dynamically adapting Markov Chain. Also provides an implementation of EMM (TRACDS on top of tNN data stream clustering). Development of this package was supported in part by NSF IIS-0948893 and R21HG005912 from the National Human Genome Research Institute. Hahsler and Dunham (2010) <doi:10.18637/jss.v035.i05>.
Maintained by Michael Hahsler. Last updated 7 months ago.
clustering, data-stream, sequence-analysis
0.5 match 2 stars 4.79 score 31 scripts
gloewing
sMTL:Sparse Multi-Task Learning
Implements L0-constrained Multi-Task Learning and domain generalization algorithms. The algorithms are coded in Julia, allowing for fast implementations of the coordinate descent and local combinatorial search algorithms. For more details, see a preprint of the paper: Loewinger et al. (2022) <arXiv:2212.08697>.
Maintained by Gabriel Loewinger. Last updated 2 years ago.
1.5 match 1.00 score 8 scripts
yili-hong
SPREDA:Statistical Package for Reliability Data Analysis
The Statistical Package for REliability Data Analysis (SPREDA) implements recently developed statistical methods for the analysis of reliability data. Modern technological developments, such as sensors and smart chips, allow us to dynamically track product/system usage as well as other environmental variables, such as temperature and humidity. We refer to these variables as dynamic covariates. The package contains functions for the analysis of time-to-event data with dynamic covariates and degradation data with dynamic covariates. The package also contains functions that can be used for analyzing time-to-event data with right censoring, and with left truncation and right censoring. Financial support from NSF and DuPont is acknowledged.
Maintained by Yili Hong. Last updated 6 years ago.
0.5 match 1.43 score 27 scripts
kolassa-dev
MultNonParam:Multivariate Nonparametric Methods
A collection of multivariate nonparametric methods, selected in part to support an MS-level course in nonparametric statistical methods. Methods include adjustments for multiple comparisons, implementation of multivariate Mann-Whitney-Wilcoxon testing, inversion of these tests to produce a confidence region, some permutation tests for linear models, and some algorithms for calculating exact probabilities associated with one- and two-stage testing involving Mann-Whitney-Wilcoxon statistics. Supported by grant NSF DMS 1712839. See Kolassa and Seifu (2013) <doi:10.1016/j.acra.2013.03.006>.
Maintained by John E. Kolassa. Last updated 2 years ago.
0.5 match 1.18 score 15 scripts
kolassa-dev
PHInfiniteEstimates:Tools for Inference in the Presence of a Monotone Likelihood
Proportional hazards estimation in the presence of a partially monotone likelihood presents difficulties, in that finite estimators do not exist. These difficulties are related to those arising in logistic and multinomial regression. References for the methods are given in the separate function documents. Supported by grant NSF DMS 1712839.
Maintained by John E. Kolassa. Last updated 1 year ago.
0.5 match 1.00 score
cran
twingp:A Fast Global-Local Gaussian Process Approximation
A global-local approximation framework for large-scale Gaussian process modeling. Please see Vakayil and Joseph (2024) <doi:10.1080/00401706.2023.2296451> for details. This work is supported by U.S. NSF grants CMMI-1921646 and DMREF-1921873.
Maintained by Akhil Vakayil. Last updated 6 months ago.
0.5 match 1.00 score
cran
supercompress:Supervised Compression of Big Data
A supervised compression method that incorporates the response for reducing big data to a carefully selected subset. Please see Joseph and Mak (2021) <doi:10.1002/sam.11508>. This research is supported by a U.S. National Science Foundation (NSF) grant CMMI-1921646.
Maintained by Chaofan Huang. Last updated 3 years ago.
0.5 match 1 star 1.00 score