Showing 200 of 431 total results

usepa

httk:High-Throughput Toxicokinetics

Pre-made models that can be rapidly tailored to various chemicals and species using chemical-specific in vitro data and physiological information. These tools allow incorporation of chemical toxicokinetics ("TK") and in vitro-in vivo extrapolation ("IVIVE") into bioinformatics, as described by Pearce et al. (2017) (<doi:10.18637/jss.v079.i04>). Chemical-specific in vitro data characterizing toxicokinetics have been obtained from relatively high-throughput experiments. The chemical-independent ("generic") physiologically-based ("PBTK") and empirical (for example, one compartment) "TK" models included here can be parameterized with in vitro data or in silico predictions which are provided for thousands of chemicals, multiple exposure routes, and various species. High throughput toxicokinetics ("HTTK") is the combination of in vitro data and generic models. We establish the expected accuracy of HTTK for chemicals without in vivo data through statistical evaluation of HTTK predictions for chemicals where in vivo data do exist. The models are systems of ordinary differential equations that are developed in MCSim and solved using compiled (C-based) code for speed. A Monte Carlo sampler is included for simulating human biological variability (Ring et al., 2017 <doi:10.1016/j.envint.2017.06.004>) and propagating parameter uncertainty (Wambaugh et al., 2019 <doi:10.1093/toxsci/kfz205>). Empirically calibrated methods are included for predicting tissue:plasma partition coefficients and volume of distribution (Pearce et al., 2017 <doi:10.1007/s10928-017-9548-7>). These functions and data provide a set of tools for using IVIVE to convert concentrations from high-throughput screening experiments (for example, Tox21, ToxCast) to real-world exposures via reverse dosimetry (also known as "RTK") (Wetmore et al., 2015 <doi:10.1093/toxsci/kfv171>).

Maintained by John Wambaugh. Last updated 2 months ago.

comptox ord

27 stars 10.22 score 307 scripts 1 dependent
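A minimal sketch of the IVIVE/reverse-dosimetry workflow the description outlines. `solve_pbtk()` and `calc_mc_css()` are httk functions; the chemical and argument choices here are illustrative assumptions, not an endorsed protocol.

```r
# Sketch, assuming install.packages("httk") and a chemical with in vitro data.
library(httk)

# Solve the generic PBTK model over ten days for an example chemical:
out <- solve_pbtk(chem.name = "bisphenol a", days = 10)

# Reverse dosimetry: Monte Carlo estimate of steady-state plasma concentration
# at the 95th percentile of simulated human variability, used to convert
# ToxCast/Tox21 bioactive concentrations into administered-dose equivalents:
css <- calc_mc_css(chem.name = "bisphenol a", which.quantile = 0.95)
```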

robinhankin

hypergeo:The Gauss Hypergeometric Function

The Gaussian hypergeometric function for complex numbers.

Maintained by Robin K. S. Hankin. Last updated 6 days ago.

cpp

2 stars 8.97 score 109 scripts 77 dependents
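A short sketch of the package's main entry point, `hypergeo(A, B, C, z)`, which evaluates the Gauss hypergeometric function 2F1(A, B; C; z) for complex z; the identity used as a sanity check is standard.

```r
# Sketch, assuming install.packages("hypergeo").
library(hypergeo)

# 2F1(a, b; c; z) at a complex argument:
hypergeo(A = 1.1, B = 2.2, C = 3.3, z = 0.5 + 0.2i)

# Sanity check against the closed form 2F1(1, 1; 2; z) = -log(1 - z) / z:
z <- 0.3 + 0.1i
hypergeo(1, 1, 2, z)   # should agree with -log(1 - z) / z
```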

samhforbes

PupillometryR:A Unified Pipeline for Pupillometry Data

Provides a unified pipeline to clean, prepare, plot, and run basic analyses on pupillometry experiments.

Maintained by Samuel Forbes. Last updated 2 years ago.

44 stars 7.58 score 288 scripts 1 dependent

jrdnmdhl

flexsurvcure:Flexible Parametric Cure Models

Flexible parametric mixture and non-mixture cure models for time-to-event data.

Maintained by Jordan Amdahl. Last updated 1 month ago.

16 stars 6.82 score 20 scripts 2 dependents
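A hedged sketch of fitting a mixture cure model with the package's `flexsurvcure()` function; the formula interface follows flexsurv conventions, and the dataset and distribution choice here are illustrative assumptions.

```r
# Sketch, assuming install.packages("flexsurvcure"); flexsurv and survival
# are loaded as dependencies.
library(flexsurvcure)

# Weibull (PH) mixture cure model on the ovarian trial data from survival;
# mixture = TRUE requests the mixture rather than non-mixture formulation:
fit <- flexsurvcure(Surv(futime, fustat) ~ 1,
                    data    = survival::ovarian,
                    dist    = "weibullPH",
                    mixture = TRUE)
```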

pik-piam

mrwater:madrat based MAgPIE water Input Data Library

Provides functions for MAgPIE cellular input data generation and stand-alone water calculations.

Maintained by Felicitas Beier. Last updated 5 months ago.

6.45 score 4 scripts 3 dependents

pik-piam

mrremind:MadRat REMIND Input Data Package

The mrremind package contains data preprocessing for the REMIND model.

Maintained by Lavinia Baumstark. Last updated 2 days ago.

4 stars 6.25 score 15 scripts 1 dependent

pik-piam

mrland:MadRaT land data package

The package provides land related data via the madrat framework.

Maintained by Jan Philipp Dietrich. Last updated 10 days ago.

5.59 score 3 scripts 4 dependents

pik-piam

mrmagpie:madrat based MAgPIE Input Data Library

Provides functions for MAgPIE country and cellular input data generation.

Maintained by Kristine Karstens. Last updated 11 days ago.

5.12 score 2 dependents

pik-piam

mrlandcore:One-line description of this awesome package

One-paragraph description of this awesome package.

Maintained by Felicitas Beier. Last updated 2 months ago.

1 star 5.11 score 16 dependents

pik-piam

mrvalidation:madrat data preparation for validation purposes

Package contains routines to prepare data for validation exercises.

Maintained by Benjamin Leon Bodirsky. Last updated 6 days ago.

4.81 score 9 scripts 1 dependent

jatanrt

eprscope:Processing and Analysis of Electron Paramagnetic Resonance Data and Spectra in Chemistry

Processing, analysis and plotting of Electron Paramagnetic Resonance (EPR) spectra in chemistry. Even though the package is mainly focused on continuous wave (CW) EPR/ENDOR, many functions may also be used for the integrated forms of 1D PULSED EPR spectra. It is able to find the most important spectral characteristics like g-factor, linewidth, maximum of derivative or integral intensities and single/double integrals. This is especially important in spectral (time) series consisting of many EPR spectra, such as during variable-temperature experiments or electrochemical or photochemical radical generation and/or decay. The package also enables processing of data/spectra for analytical (quantitative) purposes, namely, how many radicals or paramagnetic centers can be found in the analyte/sample. The goal is to evaluate rate constants, considering different kinetic models, to describe the radical reactions. The key feature of the package resides in processing of universal ASCII text formats (such as '.txt', '.csv' or '.asc') from scratch. No proprietary formats are used (except the MATLAB EasySpin outputs), and in that respect the package is in accordance with the FAIR data principles. Upon 'reading' (also providing automatic procedures for the most common EPR spectrometers), the spectral data are transformed into the universal R 'data frame' format. Subsequently, the EPR spectra can be visualized and are fully consistent either with the 'ggplot2' package or with interactive formats based on 'plotly'. Additionally, simulations and fitting of isotropic EPR spectra are also included in the package. Advanced simulation parameters provided by the MATLAB-EasySpin toolbox and results from quantum chemical calculations, like the g-factor and hyperfine splitting/coupling constants (a/A), can be compared and summarized in table format in order to analyze the EPR spectra in the most effective way.

Maintained by Ján Tarábek. Last updated 2 days ago.

chemistry data-analysis data-visualization epr esr fitting optimization programming-language reproducible-research scientific-plotting spectroscopy openjdk

4.76 score 7 scripts

cran

ftsa:Functional Time Series Analysis

Functions for visualizing, modeling, forecasting and hypothesis testing of functional time series.

Maintained by Han Lin Shang. Last updated 1 month ago.

6 stars 4.61 score 10 dependents

mrc-ide

gonovax:Deterministic Compartmental Model of Gonorrhoea with Vaccination

Model for gonorrhoea vaccination, using odin.

Maintained by Lilith Whittles. Last updated 16 days ago.

3 stars 4.56 score

r-forge

stops:Structure Optimized Proximity Scaling

Methods that use flexible variants of multidimensional scaling (MDS) which incorporate parametric nonlinear distance transformations and trade off goodness of fit with structure considerations to find optimal hyperparameters, also known as structure optimized proximity scaling (STOPS) (Rusch, Mair & Hornik, 2023, <doi:10.1007/s11222-022-10197-w>). The package contains various functions, wrappers, methods and classes for fitting, plotting and displaying different 1-way MDS models with ratio, interval or ordinal optimal scaling in a STOPS framework. These cover essentially the functionality of the package smacofx, including Torgerson (classical) scaling with power transformations of dissimilarities, SMACOF MDS with powers of dissimilarities, Sammon mapping with powers of dissimilarities, elastic scaling with powers of dissimilarities, spherical SMACOF with powers of dissimilarities, (ALSCAL) s-stress MDS with powers of dissimilarities, r-stress MDS, MDS with powers of dissimilarities and configuration distances, elastic scaling with powers of dissimilarities and configuration distances, Sammon mapping with powers of dissimilarities and configuration distances, power stress MDS (POST-MDS), approximate power stress, Box-Cox MDS, local MDS, Isomap, curvilinear component analysis (CLCA), curvilinear distance analysis (CLDA), and sparsified (power) multidimensional scaling and (power) multidimensional distance analysis (experimental models from smacofx influenced by CLCA). All of these models can also be fit by optimizing over hyperparameters based on goodness of fit only (i.e., no structure considerations). The package further contains functions for optimization, specifically the adaptive Luus-Jaakola algorithm and a wrapper for Bayesian optimization with a treed Gaussian process with jumps to linear models, and functions for various c-structuredness indices.

Maintained by Thomas Rusch. Last updated 3 months ago.

openjdk

1 star 4.48 score 23 scripts

weiguonimh

mMARCH.AC:Processing of Accelerometry Data with 'GGIR' in mMARCH

Mobile Motor Activity Research Consortium for Health (mMARCH) is a collaborative network of studies of clinical and community samples that employ common clinical, biological, and digital mobile measures across the involved studies. One of the main scientific goals of mMARCH sites is developing a better understanding of the inter-relationships between accelerometry-measured physical activity (PA), sleep (SL), and circadian rhythmicity (CR) and mental and physical health in children, adolescents, and adults. Currently, there is no consensus on a standard procedure for a data processing pipeline of raw accelerometry data, and few open-source tools exist to facilitate its development. The R package 'GGIR' is the most prominent open-source software package that offers great functionality and tremendous user flexibility for processing raw accelerometry data. However, even with 'GGIR', processing done in a harmonized and reproducible fashion requires a non-trivial amount of expertise combined with a careful implementation. In addition, novel accelerometry-derived features of PA/SL/CR capturing multiscale, time-series, functional, distributional and other complementary aspects of accelerometry data are constantly being proposed and becoming available via non-GGIR R implementations. To address these issues, mMARCH developed a streamlined, harmonized and reproducible pipeline for loading and cleaning raw accelerometry data, extracting features available through 'GGIR' as well as through non-GGIR R packages, implementing several data and feature quality checks, merging all features of PA/SL/CR together, and performing multiple analyses including Joint and Individual Variation Explained (JIVE), an unsupervised machine learning dimension reduction technique that identifies latent factors capturing variation that is joint across, and individual to, each of the three domains of PA/SL/CR. In detail, the pipeline generates all necessary R/Rmd/shell files for data processing after running 'GGIR' (v2.4.0) on accelerometer data.
In module 1, all csv files in the 'GGIR' output directory are read, transformed and then merged. In module 2, the 'GGIR' output files are checked and summarized in one Excel sheet. In module 3, the merged data is cleaned according to the number of valid hours per night and the number of valid days per subject. In module 4, the cleaned activity data is imputed by the average Euclidean norm minus one (ENMO) over all valid days for each subject. Finally, a comprehensive report of the data processing is created using R Markdown; the report includes a few exploratory plots and multiple commonly used features extracted from minute-level actigraphy data. Reference: Guo W, Leroux A, Shou S, Cui L, Kang S, Strippoli MP, Preisig M, Zipunnikov V, Merikangas K (2022) Processing of accelerometry data with GGIR in Motor Activity Research Consortium for Health (mMARCH). Journal for the Measurement of Physical Behaviour, 6(1): 37-44.

Maintained by Wei Guo. Last updated 2 years ago.

openjdk

2 stars 4.41 score 26 scripts

anttonalberdi

hilldiv:Integral Analysis of Diversity Based on Hill Numbers

Tools for analysing, comparing, visualising and partitioning diversity based on Hill numbers. 'hilldiv' is an R package that provides a set of functions to assist analysis of diversity for diet reconstruction, microbial community profiling or more general ecosystem characterisation analyses based on Hill numbers, using OTU/ASV tables and associated phylogenetic trees as inputs. The package includes functions for (phylo)diversity measurement, (phylo)diversity profile plotting, (phylo)diversity comparison between samples and groups, (phylo)diversity partitioning and (dis)similarity measurement. All of these are grounded in abundance-based and incidence-based Hill numbers. The statistical framework developed around Hill numbers encompasses many of the most broadly employed diversity (e.g. richness, Shannon index, Simpson index), phylogenetic diversity (e.g. Faith's PD, Allen's H, Rao's quadratic entropy) and dissimilarity (e.g. Sorensen index, Unifrac distances) metrics. This enables the most common analyses of diversity to be performed while grounded in a single statistical framework. The methods are described in Jost et al. (2007) <DOI:10.1890/06-1736.1>, Chao et al. (2010) <DOI:10.1098/rstb.2010.0272> and Chiu et al. (2014) <DOI:10.1890/12-0960.1>; and reviewed in the framework of molecularly characterised biological systems in Alberdi & Gilbert (2019) <DOI:10.1111/1755-0998.13014>.

Maintained by Antton Alberdi. Last updated 4 years ago.

11 stars 4.35 score 41 scripts
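A minimal sketch of computing Hill-number diversity with hilldiv's `hill_div()` function; the `otu_table` matrix below is a hypothetical random example standing in for a real OTU/ASV table, not data shipped with the package.

```r
# Sketch, assuming install.packages("hilldiv"); OTUs in rows, samples in columns.
library(hilldiv)

set.seed(1)
otu_table <- matrix(rpois(40, lambda = 10), nrow = 8,
                    dimnames = list(paste0("OTU", 1:8), paste0("S", 1:5)))

# Effective number of OTUs per sample: qvalue = 0 gives richness,
# qvalue = 1 the exponential of Shannon entropy, qvalue = 2 the
# inverse Simpson index:
hill_div(otu_table, qvalue = 1)
```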

angelospsy

condir:Computation of P Values and Bayes Factors for Conditioning Data

Set of functions for the easy analysis of conditioning data.

Maintained by Angelos-Miltiadis Krypotos. Last updated 2 years ago.

2 stars 4.34 score 11 scripts

tanja819

TreeSim:Simulating Phylogenetic Trees

Simulation methods for phylogenetic trees where (i) all tips are sampled at one time point or (ii) tips are sampled sequentially through time. (i) For sampling at one time point, simulations are performed under a constant rate birth-death process, conditioned on having a fixed number of final tips (sim.bd.taxa()), or a fixed age (sim.bd.age()), or a fixed age and number of tips (sim.bd.taxa.age()). When conditioning on the number of final tips, the method allows for shifts in rates and mass extinction events during the birth-death process (sim.rateshift.taxa()). The function sim.bd.age() (and sim.rateshift.taxa() without extinction) allow the speciation rate to change in a density-dependent way. The LTT plots of the simulations can be displayed using LTT.plot(), LTT.plot.gen() and LTT.average.root(). TreeSim further samples trees with n final tips from a set of trees generated by the common sampling algorithm stopping when a fixed number m>>n of tips is first reached (sim.gsa.taxa()). This latter method is appropriate for m-tip trees generated under a broad class of models (details in the sim.gsa.taxa() man page). For incomplete phylogenies, the missing speciation events can be added through simulations (corsim()). (ii) sim.rateshifts.taxa() is generalized to sim.bdsky.stt() for serially sampled trees, where the trees are conditioned on either the number of sampled tips or the age. Furthermore, for a multitype-branching process with sequential sampling, trees on a fixed number of tips can be simulated using sim.bdtypes.stt.taxa(). This function further allows simulating under epidemiological models with an exposed class. The function sim.genespeciestree() simulates coalescent gene trees within birth-death species trees, and sim.genetree() simulates coalescent gene trees.

Maintained by Tanja Stadler. Last updated 6 years ago.

4.19 score 172 scripts 3 dependents
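The conditioning options above, sketched with two of the functions the description names (`sim.bd.taxa()` and `sim.bd.age()`); rate values are illustrative assumptions.

```r
# Sketch, assuming install.packages("TreeSim"); ape is loaded as a dependency.
library(TreeSim)

# Ten birth-death trees conditioned on 20 extant tips,
# speciation rate lambda = 2, extinction rate mu = 0.5:
trees <- sim.bd.taxa(n = 20, numbsim = 10, lambda = 2, mu = 0.5)

# The same process conditioned on a fixed tree age instead:
trees_age <- sim.bd.age(age = 3, numbsim = 10, lambda = 2, mu = 0.5)
```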

vmoprojs

GeoModels:Procedures for Gaussian and Non Gaussian Geostatistical (Large) Data Analysis

Functions for Gaussian and non-Gaussian (bivariate) spatial and spatio-temporal data analysis are provided for a) (fast) simulation of random fields, b) inference for random fields using standard likelihood and a likelihood approximation method called weighted composite likelihood based on pairs, and c) prediction using (local) best linear unbiased prediction. Weighted composite likelihood can be very efficient for estimating massive datasets. Both regression and spatial (temporal) dependence analysis can be jointly performed. Flexible covariance models for spatial and spatial-temporal data on Euclidean domains and spheres are provided. There are also many useful functions for plotting and performing diagnostic analysis. Different non-Gaussian random fields can be considered in the analysis, among them random fields with marginal distributions such as Skew-Gaussian, Student-t, Tukey-h, Sinh-Arcsinh, Two-piece, Weibull, Gamma, Log-Gaussian, Binomial, Negative Binomial and Poisson. See the URL for the papers associated with this package, for instance, Bevilacqua and Gaetan (2015) <doi:10.1007/s11222-014-9460-6>, Bevilacqua et al. (2016) <doi:10.1007/s13253-016-0256-3>, Vallejos et al. (2020) <doi:10.1007/978-3-030-56681-4>, Bevilacqua et al. (2020) <doi:10.1002/env.2632>, Bevilacqua et al. (2021) <doi:10.1111/sjos.12447>, Bevilacqua et al. (2022) <doi:10.1016/j.jmva.2022.104949>, Morales-Navarrete et al. (2023) <doi:10.1080/01621459.2022.2140053>, and a large class of examples and tutorials.

Maintained by Moreno Bevilacqua. Last updated 2 months ago.

fortran openblas glibc

3 stars 4.17 score 83 scripts