Showing 5 of 5 results
Matrix: Sparse and Dense Matrix Classes and Methods
A rich hierarchy of sparse and dense matrix classes, including general, symmetric, triangular, and diagonal matrices with numeric, logical, or pattern entries. Efficient methods for operating on such matrices, often wrapping the 'BLAS', 'LAPACK', and 'SuiteSparse' libraries.
Maintained by Martin Maechler. Last updated 20 days ago.
1 star 17.23 score 33k scripts 12k dependents
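As a minimal sketch of the sparse classes described above (the triplet values here are invented for illustration; `sparseMatrix()` is the standard constructor in Matrix):

```r
library(Matrix)

# A 4 x 4 sparse matrix from (row, column, value) triplets; cells not listed
# are structural zeros and are not stored.
m <- sparseMatrix(i = c(1, 2, 4), j = c(2, 3, 1),
                  x = c(10, 20, 30), dims = c(4, 4))
class(m)  # "dgCMatrix": double, general, compressed sparse column

# Methods dispatch on the sparse classes; the result stays sparse.
crossprod(m)  # t(m) %*% m, returned as a sparse symmetric matrix
```

For large, mostly-zero matrices this representation is far smaller than the equivalent dense `matrix`, and operations such as the cross-product above dispatch to sparse methods automatically.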
ModelMetrics: Rapid Calculation of Model Metrics
Collection of metrics for evaluating models written in C++ using 'Rcpp'. Popular metrics include area under the curve, log loss, root mean square error, etc.
Maintained by Tyler Hunt. Last updated 4 years ago.
auc, logloss, machine-learning, metrics, model-evaluation, model-metrics, cpp
29 stars 11.83 score 1.3k scripts 306 dependents
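The metrics named above take a vector of observed outcomes and a vector of predictions; a minimal sketch with made-up data:

```r
library(ModelMetrics)

actual    <- c(0, 0, 1, 1, 1, 0)   # observed binary outcomes
predicted <- c(0.1, 0.4, 0.8, 0.7, 0.9, 0.3)  # predicted probabilities

auc(actual, predicted)      # area under the ROC curve
logLoss(actual, predicted)  # negative average log-likelihood
rmse(actual, predicted)     # root mean square error
```

Because the metrics are implemented in C++ via 'Rcpp', they stay fast on long vectors, which is why several modeling packages depend on this one.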
lrd: A Package for Processing Lexical Response Data
lrd (lexical response data) is a package for processing cued-recall, free-recall, and sentence responses from memory experiments.
Maintained by Nicholas Maxwell. Last updated 3 years ago.
3 stars 5.30 score 33 scripts
h2otools: Machine Learning Model Evaluation for 'h2o' Package
Enhances the H2O platform by providing tools for detailed evaluation of machine learning models. It includes functions for bootstrapped performance evaluation, extended F-score calculations, and various other metrics, aimed at improving model assessment.
Maintained by E. F. Haghish. Last updated 13 days ago.
2 stars 4.10 score 14 scripts 1 dependent
rhoR: Rho for Inter Rater Reliability
Rho is used to test the generalization of inter-rater reliability (IRR) statistics. Calculating rho starts by generating a large number of simulated, fully coded data sets: a sizable collection of hypothetical populations, all of which have a kappa value below a given threshold, indicating unacceptable agreement. Kappa is then calculated on a sample from each set in the collection to see whether it is equal to or higher than the kappa observed in the real sample. If less than five percent of the samples drawn from the simulated data sets yield a kappa greater than the actually observed kappa, the null hypothesis is rejected, and one can conclude that had the two raters coded the rest of the data, agreement would have been acceptable (kappa above the threshold).
Maintained by Cody L Marquart. Last updated 5 years ago.
2.18 score 8 scripts 1 dependent
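The resampling procedure described in the rhoR entry can be sketched in base R. This is only an illustration of the logic, not the rhoR API: the `cohens_kappa` and `simulate_population` helpers, the base rate, and the rater-accuracy value are all invented for the example.

```r
# Cohen's kappa for two binary coding vectors.
cohens_kappa <- function(r1, r2) {
  po <- mean(r1 == r2)                                     # observed agreement
  pe <- mean(r1) * mean(r2) + mean(1 - r1) * mean(1 - r2)  # chance agreement
  (po - pe) / (1 - pe)
}

# A hypothetical fully coded population: rater 1 codes the truth, rater 2
# matches it with the given accuracy.
simulate_population <- function(n, base_rate, accuracy) {
  truth  <- rbinom(n, 1, base_rate)
  rater2 <- ifelse(rbinom(n, 1, accuracy) == 1, truth, 1 - truth)
  cbind(truth, rater2)
}

set.seed(1)
observed_kappa <- 0.72  # kappa computed on the real, partially coded sample
threshold      <- 0.65  # minimum acceptable agreement
sample_size    <- 80

sampled_kappas <- replicate(1000, {
  # Keep only populations whose full-data kappa is below the threshold,
  # i.e. populations with unacceptable agreement.
  repeat {
    pop <- simulate_population(10000, base_rate = 0.3, accuracy = 0.8)
    if (cohens_kappa(pop[, 1], pop[, 2]) < threshold) break
  }
  idx <- sample(nrow(pop), sample_size)
  cohens_kappa(pop[idx, 1], pop[idx, 2])
})

# rho: the fraction of "unacceptable" populations whose sample looks at least
# as good as the real one. Below 0.05, reject the null of unacceptable agreement.
rho <- mean(sampled_kappas >= observed_kappa)
```

The key idea is that a single sampled kappa above the threshold is weak evidence on its own; rho quantifies how often an unacceptable population could have produced a sample that looks that good by chance.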