r-bayeslogit
|
public |
Tools for sampling from the Polya-Gamma distribution based on Polson, Scott, and Windle (2013) <doi:10.1080/01621459.2013.829001>. Useful for logistic regression. A short sampling sketch follows this entry.
|
2025-03-25 |
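A minimal sketch of drawing Polya-Gamma variates; the rpg() sampler and its num/h/z arguments are assumptions about the package interface and are not stated in the entry above.

    # Sketch assuming 'BayesLogit' exports rpg(num, h, z) for PG(h, z) draws.
    library(BayesLogit)
    omega <- rpg(10000, h = 1, z = 0.5)   # 10,000 draws from PG(1, 0.5)
    mean(omega)                           # should be close to tanh(z / 2) / (2 * z)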
r-bayesiantools
|
public |
General-purpose MCMC and SMC samplers, as well as plot and diagnostic functions for Bayesian statistics, with a particular focus on calibrating complex system models. Implemented samplers include various Metropolis MCMC variants (including adaptive and/or delayed rejection MH), the T-walk, two differential evolution MCMCs, two DREAM MCMCs, and a sequential Monte Carlo (SMC) particle filter. A brief usage sketch follows this entry.
|
2025-03-25 |
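A minimal sketch of running one of the samplers on a toy target; createBayesianSetup(), runMCMC(), and the "DEzs" sampler name come from the package's documented interface but are assumptions relative to the entry above.

    library(BayesianTools)
    # Toy log-likelihood: independent standard normals in three dimensions.
    ll <- function(par) sum(dnorm(par, mean = 0, sd = 1, log = TRUE))
    setup <- createBayesianSetup(likelihood = ll,
                                 lower = rep(-10, 3), upper = rep(10, 3))
    out <- runMCMC(bayesianSetup = setup, sampler = "DEzs",
                   settings = list(iterations = 9000))
    summary(out)   # convergence diagnostics and posterior summaries
    plot(out)      # trace and marginal density plots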
r-bayesfm
|
public |
Collection of procedures to perform Bayesian analysis on a variety of factor models. Currently, it includes: "Bayesian Exploratory Factor Analysis" (befa) from G. Conti, S. Frühwirth-Schnatter, J.J. Heckman, R. Piatek (2014) <doi:10.1016/j.jeconom.2014.06.008>, an approach to dedicated factor analysis with stochastic search on the structure of the factor loading matrix. The number of latent factors, as well as the allocation of the manifest variables to the factors, are not fixed a priori but determined during MCMC sampling.
|
2025-03-25 |
r-bayesfactor
|
public |
A suite of functions for computing various Bayes factors for simple designs, including contingency tables, one- and two-sample designs, one-way designs, general ANOVA designs, and linear regression. A short example follows this entry.
|
2025-03-25 |
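A minimal sketch of a two-sample Bayes factor; the ttestBF() function is an assumption about the package's interface rather than something stated in the entry above.

    library(BayesFactor)
    set.seed(1)
    x <- rnorm(30, mean = 0.3)   # group 1
    y <- rnorm(30, mean = 0.0)   # group 2
    ttestBF(x = x, y = y)        # Bayes factor for the two-sample design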
r-bayesdfa
|
public |
Implements Bayesian dynamic factor analysis with 'Stan'. Dynamic factor analysis is a dimension reduction tool for multivariate time series. 'bayesdfa' extends conventional dynamic factor models in several ways. First, extreme events may be estimated in the latent trend by modeling process error with a Student-t distribution. Second, alternative constraints (including proportions) are allowed. Third, the estimated dynamic factors can be analyzed with hidden Markov models to evaluate support for latent regimes.
|
2025-03-25 |
r-batchtools
|
public |
As a successor of the packages 'BatchJobs' and 'BatchExperiments', this package provides a parallel implementation of the Map function for high performance computing systems managed by the schedulers 'IBM Spectrum LSF' (<https://www.ibm.com/products/hpc-workload-management>), 'OpenLava' (<https://www.openlava.org/>), 'Univa Grid Engine'/'Oracle Grid Engine' (<https://www.univa.com/>), 'Slurm' (<https://slurm.schedmd.com/>), 'TORQUE/PBS' (<https://adaptivecomputing.com/cherry-services/torque-resource-manager/>), or 'Docker Swarm' (<https://docs.docker.com/engine/swarm/>). Multicore and socket modes allow parallelization on a local machine, and multiple machines can be hooked up via SSH to create a makeshift cluster. Moreover, the package provides an abstraction mechanism to define large-scale computer experiments in a well-organized and reproducible way. A brief usage sketch follows this entry.
|
2025-03-25 |
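A minimal sketch of the Map-style workflow; makeRegistry(), batchMap(), submitJobs(), and reduceResultsList() reflect the package's documented interface but are assumptions relative to the entry above, and a temporary registry without further configuration runs jobs sequentially in the current session rather than on a scheduler.

    library(batchtools)
    reg <- makeRegistry(file.dir = NA)          # temporary registry; no scheduler configured
    batchMap(fun = function(i) i^2, i = 1:10)   # one job per element, Map-style
    submitJobs()                                # with a configured backend this would go to LSF, Slurm, ...
    waitForJobs()
    unlist(reduceResultsList())                 # collect the ten squared values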
r-basefun
|
public |
Some very simple infrastructure for basis functions.
|
2025-03-25 |
r-bamlss
|
public |
Infrastructure for estimating probabilistic distributional regression models in a Bayesian framework. The distribution parameters may capture location, scale, shape, etc. and every parameter may depend on complex additive terms (fixed, random, smooth, spatial, etc.) similar to a generalized additive model. The conceptual and computational framework is introduced in Umlauf, Klein, Zeileis (2019) <doi:10.1080/10618600.2017.1407325> and the R package in Umlauf, Klein, Simon, Zeileis (2021) <doi:10.18637/jss.v100.i04>.
|
2025-03-25 |
r-autothresholdr
|
public |
Algorithms for automatically finding appropriate thresholds for numerical data, with special functions for thresholding images. Provides the 'ImageJ' 'Auto Threshold' plugin functionality to R users. See <https://imagej.net/plugins/auto-threshold> and Landini et al. (2017) <DOI:10.1111/jmi.12474>.
|
2025-03-25 |
r-ashr
|
public |
The R package 'ashr' implements an Empirical Bayes approach for large-scale hypothesis testing and false discovery rate (FDR) estimation based on the methods proposed in M. Stephens, 2016, "False discovery rates: a new deal", <DOI:10.1093/biostatistics/kxw041>. These methods can be applied whenever two sets of summary statistics---estimated effects and standard errors---are available, just as 'qvalue' can be applied to previously computed p-values. Two main interfaces are provided: ash(), which is more user-friendly; and ash.workhorse(), which has more options and is geared toward advanced users. The ash() and ash.workhorse() functions also provide a flexible modeling interface that can accommodate a variety of likelihoods (e.g., normal, Poisson) and mixture priors (e.g., uniform, normal). A short example of ash() follows this entry.
|
2025-03-25 |
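A minimal sketch of the ash() interface named above; the simulated inputs, the argument order (estimates, then standard errors), and the get_lfsr() accessor are assumptions.

    library(ashr)
    set.seed(1)
    beta    <- c(rep(0, 80), rnorm(20))     # mostly null effects
    se      <- rep(1, 100)
    betahat <- beta + rnorm(100, sd = se)   # noisy effect estimates
    fit <- ash(betahat, se)                 # user-friendly interface
    head(get_lfsr(fit))                     # local false sign rates per effect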
r-arfima
|
public |
Simulates, fits, and predicts long-memory and anti-persistent time series, possibly mixed with ARMA, regression, and transfer-function components. Exact methods (MLE, forecasting, simulation) are used. Bug reports should be filed via GitHub (at <https://github.com/JQVeenstra/arfima>), where the development version of this package lives; it can be installed using devtools.
|
2025-03-25 |
r-apollo
|
public |
Choice models are a widely used technique across numerous scientific disciplines. The Apollo package is a very flexible tool for the estimation and application of choice models in R. Users are able to write their own model functions or use a mix of already available ones. Random heterogeneity, both continuous and discrete and at the level of individuals and choices, can be incorporated for all models. There is support for both standalone models and hybrid model structures. Both classical and Bayesian estimation are available, and multiple discrete-continuous models are covered in addition to discrete choice. Multi-threaded processing is supported for estimation, and a large number of pre- and post-estimation routines, including routines for computing posterior (individual-level) distributions, are available. For examples, a manual, and a support forum, visit <http://www.ApolloChoiceModelling.com>. For more information on choice models see Train, K. (2009) <isbn:978-0-521-74738-7> and Hess, S. & Daly, A.J. (2014) <isbn:978-1-781-00314-5> for an overview of the field.
|
2025-03-25 |
r-archive
|
public |
Bindings to 'libarchive' <http://www.libarchive.org>, the multi-format archive and compression library. Offers R connections and direct extraction for many archive formats including 'tar', 'ZIP', '7-zip', 'RAR', 'CAB' and compression formats including 'gzip', 'bzip2', 'compress', 'lzma' and 'xz'. A brief usage sketch follows this entry.
|
2025-03-25 |
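A minimal sketch of listing and extracting an archive; archive(), archive_extract(), and archive_read() are assumptions about the package's interface, and 'files.zip' is a hypothetical archive.

    library(archive)
    archive("files.zip")                         # hypothetical archive: list its contents
    archive_extract("files.zip", dir = "out")    # extract everything into ./out
    con <- archive_read("files.zip", file = 1)   # R connection to the first entry
    readLines(con, n = 5)                        # read it like any other text connection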
r-anticlust
|
public |
The method of anticlustering partitions a pool of elements into groups (i.e., anticlusters) with the goal of maximizing between-group similarity or within-group heterogeneity. The anticlustering approach thereby reverses the logic of cluster analysis that strives for high within-group homogeneity and clear separation between groups. Computationally, anticlustering is accomplished by maximizing instead of minimizing a clustering objective function, such as the intra-cluster variance (used in k-means clustering) or the sum of pairwise distances within clusters. The main function anticlustering() gives access to exact and heuristic anticlustering methods described in Papenberg and Klau (2021; <doi:10.1037/met0000301>), Brusco et al. (2020; <doi:10.1111/bmsp.12186>), and Papenberg (2023; <doi:10.1111/bmsp.12315>); a short usage sketch follows this entry. The exact algorithms require that an integer linear programming solver is installed, either the GNU linear programming kit (<https://www.gnu.org/software/glpk/glpk.html>) together with the interface package 'Rglpk' (<https://cran.R-project.org/package=Rglpk>), or the SYMPHONY ILP solver (<https://github.com/coin-or/SYMPHONY>) together with the interface package 'Rsymphony' (<https://cran.r-project.org/package=Rsymphony>). Full access to the bicriterion anticlustering method proposed by Brusco et al. (2020) is given via the function bicriterion_anticlustering(), while kplus_anticlustering() implements the full functionality of the k-plus anticlustering approach proposed by Papenberg (2023). Some other functions are available to solve classical clustering problems. The function balanced_clustering() applies a cluster analysis under size constraints, i.e., creates equal-sized clusters. The function matching() can be used for (unrestricted, bipartite, or K-partite) matching. The function wce() can be used to optimally solve the (weighted) cluster editing problem, also known as correlation clustering, the clique partitioning problem, or transitivity clustering.
|
2025-03-25 |
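A minimal sketch of anticlustering() as named above, applied to the built-in iris data; the K and objective arguments are assumptions about the interface.

    library(anticlust)
    groups <- anticlustering(iris[, 1:4], K = 3, objective = "variance")
    table(groups)                       # three equal-sized groups
    by(iris[, 1:4], groups, colMeans)   # group means should be nearly identical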
r-aorsf
|
public |
Fit, interpret, and make predictions with oblique random survival forests. Oblique decision trees are notoriously slow compared to their axis-based counterparts, but 'aorsf' runs as fast or faster than axis-based decision tree algorithms for right-censored time-to-event outcomes. Methods to accelerate and interpret the oblique random survival forest are described in Jaeger et al. (2023) <DOI:10.1080/10618600.2023.2231048>.
|
2025-03-25 |
r-analogue
|
public |
Fits Modern Analogue Technique and Weighted Averaging transfer function models for prediction of environmental data from species data, and related methods used in palaeoecology.
|
2025-03-25 |
r-alphashape3d
|
public |
Implementation in R of the alpha-shape of a finite set of points in three-dimensional space. The alpha-shape generalizes the convex hull and allows the shape of non-convex and even non-connected sets in 3D to be recovered, given a random sample of points taken from them. Besides the computation of the alpha-shape, this package provides users with functions to compute the volume of the alpha-shape, identify the connected components, and facilitate the three-dimensional graphical visualization of the estimated set.
|
2025-03-25 |
r-adlift
|
public |
Adaptive wavelet lifting transforms for signal denoising using optimal local neighbourhood regression, from Nunes et al. (2006) <doi:10.1007/s11222-006-6560-y>.
|
2025-03-25 |
r-adespatial
|
public |
Tools for the multiscale spatial analysis of multivariate data. Several methods are based on the use of a spatial weighting matrix and its eigenvector decomposition (Moran's Eigenvector Maps, MEM). Several approaches are described in the review Dray et al (2012) <doi:10.1890/11-1183.1>.
|
2025-03-25 |
r-adehabitatlt
|
public |
A collection of tools for the analysis of animal movements.
|
2025-03-25 |
r-adephylo
|
public |
Multivariate tools to analyze comparative data, i.e. a phylogeny and some traits measured for each taxon. The package contains functions to represent comparative data, compute phylogenetic proximities, perform multivariate analysis with phylogenetic constraints and test for the presence of phylogenetic autocorrelation. The package is described in Jombart et al (2010) <doi:10.1093/bioinformatics/btq292>.
|
2025-03-25 |
r-adehabitatma
|
public |
A collection of tools to deal with raster maps.
|
2025-03-25 |
r-adehabitaths
|
public |
A collection of tools for the analysis of habitat selection.
|
2025-03-25 |
r-adehabitathr
|
public |
A collection of tools for the estimation of animal home ranges.
|
2025-03-25 |
r-adegenet
|
public |
Toolset for the exploration of genetic and genomic data. Adegenet provides formal (S4) classes for storing and handling various genetic data, including genetic markers with varying ploidy and hierarchical population structure ('genind' class), allele counts by population ('genpop'), and genome-wide SNP data ('genlight'). It also implements original multivariate methods (DAPC, sPCA), graphics, statistical tests, simulation tools, distance and similarity measures, and several spatial methods. A range of both empirical and simulated datasets is also provided to illustrate various methods.
|
2025-03-25 |
r-adbcdrivermanager
|
public |
Provides a developer-facing interface to 'Arrow' Database Connectivity ('ADBC') for the purposes of driver development, driver testing, and building high-level database interfaces for users. 'ADBC' <https://arrow.apache.org/adbc/> is an API standard for database access libraries that uses 'Arrow' for result sets and query parameters.
|
2025-03-25 |
r-abess
|
public |
Extremely efficient toolkit for solving the best subset selection problem <https://www.jmlr.org/papers/v23/21-1060.html>. This package is its R interface. The package implements and generalizes algorithms designed in <doi:10.1073/pnas.2014241117> that exploit a novel sequencing-and-splicing technique to guarantee exact support recovery and a globally optimal solution in polynomial time for the linear model. It also supports best subset selection for logistic regression, Poisson regression, the Cox proportional hazards model, Gamma regression, multiple-response regression, multinomial logistic regression, ordinal regression, (sequential) principal component analysis, and robust principal component analysis. Other valuable features, such as best subset of group selection <doi:10.1287/ijoc.2022.1241> and sure independence screening <doi:10.1111/j.1467-9868.2008.00674.x>, are also provided. A short example follows this entry.
|
2025-03-25 |
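A minimal sketch of best subset selection on simulated data; the abess() function and its x/y/family arguments are assumptions about the interface rather than details stated in the entry above.

    library(abess)
    set.seed(1)
    n <- 100; p <- 20
    x <- matrix(rnorm(n * p), n, p)
    y <- drop(x[, 1:3] %*% c(3, -2, 1.5)) + rnorm(n)   # only the first three predictors matter
    fit <- abess(x, y, family = "gaussian")            # searches over candidate support sizes
    fit                                                # prints the fitted path and tuning values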
r-yulab.utils
|
public |
Miscellaneous functions commonly used by 'YuLab-SMU'.
|
2025-03-25 |
r-xaringan
|
public |
Create HTML5 slides with R Markdown and the JavaScript library 'remark.js' (<https://remarkjs.com>).
|
2025-03-25 |
r-worrms
|
public |
Client for World Register of Marine Species (<https://www.marinespecies.org/>). Includes functions for each of the API methods, including searching for taxonomic names by name, date, and common name; searching using external identifiers; fetching synonyms; and fetching taxonomic children and taxonomic classification.
|
2025-03-25 |
r-wikitaxa
|
public |
'Taxonomic' information from 'Wikipedia', 'Wikicommons', 'Wikispecies', and 'Wikidata'. Functions included for getting taxonomic information from each of the sources just listed, as well as performing taxonomic search.
|
2025-03-25 |
r-wikidatar
|
public |
Read from, interrogate, and write to Wikidata <https://www.wikidata.org> - the multilingual, interdisciplinary, semantic knowledge base. Includes functions to: read from Wikidata (single items or properties); query Wikidata (retrieving all items that match a set of criteria via the Wikidata SPARQL query service); write to Wikidata (adding new items or statements via QuickStatements); and handle and manipulate Wikidata objects (as lists and tibbles). Uses the Wikidata and QuickStatements APIs.
|
2025-03-25 |
r-wikidataqueryservicer
|
public |
An API client for the 'Wikidata Query Service' <https://query.wikidata.org/>. A brief query example follows this entry.
|
2025-03-25 |
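A minimal sketch of a SPARQL query (requires an internet connection); query_wikidata() is an assumption about the package's main function, and the query asks for a handful of items that are instances of house cat (Q146).

    library(WikidataQueryServiceR)
    query_wikidata('
      SELECT ?item ?itemLabel WHERE {
        ?item wdt:P31 wd:Q146 .                                 # instance of house cat
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
      } LIMIT 10')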
r-usmapdata
|
public |
Provides a container for data used by the 'usmap' package. The data used by 'usmap' has been extracted into this package so that the file size of the 'usmap' package can be reduced greatly. The data in this package will be updated roughly once per year (plus bug fixes) as new shape files are provided by the US Census Bureau.
|
2025-03-25 |
r-usdata
|
public |
Demographic data on the United States at the county and state levels spanning multiple years.
|
2025-03-25 |
r-urlshortener
|
public |
Allows using two URL shortening services, which also provide expanding and analytic functions. Specifically developed for 'Bit.ly' (which requires OAuth 2.0) and 'is.gd' (no API key).
|
2025-03-25 |
r-univariateml
|
public |
User-friendly maximum likelihood estimation (Fisher (1921) <doi:10.1098/rsta.1922.0009>) of univariate densities.
|
2025-03-25 |
r-tsclust
|
public |
A set of measures of dissimilarity between time series to perform time series clustering. Metrics based on raw data, on generating models and on the forecast behavior are implemented. Some additional utilities related to time series clustering are also provided, such as clustering algorithms and cluster evaluation metrics.
|
2025-03-25 |
r-truncsp
|
public |
Semi-parametric estimation of truncated linear regression models.
|
2025-03-25 |
r-truncdist
|
public |
A collection of tools to evaluate probability density functions, cumulative distribution functions, quantile functions, and random numbers for truncated random variables. Functions are also provided to compute the expected value and variance of truncated variables. Nadarajah and Kotz (2006) developed most of the functions. QQ plots can be produced. All the probability functions in the stats, stats4 and evd packages are automatically available for truncation. A short usage sketch follows this entry.
|
2025-03-25 |
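A minimal sketch for a normal distribution truncated to [0, 2]; the d/p/q/rtrunc naming, the spec/a/b arguments, and the extrunc()/vartrunc() helpers are assumptions about the interface.

    library(truncdist)
    dtrunc(1, spec = "norm", a = 0, b = 2)    # density at 1
    ptrunc(1, spec = "norm", a = 0, b = 2)    # CDF at 1
    rtrunc(5, spec = "norm", a = 0, b = 2)    # five random draws
    extrunc(spec = "norm", a = 0, b = 2)      # expected value of the truncated variable
    vartrunc(spec = "norm", a = 0, b = 2)     # variance of the truncated variable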
r-tidycpp
|
public |
Core parts of the C API of R are wrapped in a C++ namespace via a set of inline functions, giving a tidier representation of the underlying data structures and functionality, using a header-only implementation without additional dependencies.
|
2025-03-25 |
r-tfautograph
|
public |
Translate R control flow expressions into 'Tensorflow' graphs.
|
2025-03-25 |
r-taxize
|
public |
Interacts with a suite of web 'APIs' for taxonomic tasks, such as getting database specific taxonomic identifiers, verifying species names, getting taxonomic hierarchies, fetching downstream and upstream taxonomic names, getting taxonomic synonyms, converting scientific to common names and vice versa, and more. A brief usage sketch follows this entry.
|
2025-03-25 |
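A minimal sketch of two common tasks (both hit remote web APIs, so an internet connection is required); classification() and synonyms() are assumptions about the function names, matching the hierarchy and synonym tasks described above.

    library(taxize)
    classification("Helianthus annuus", db = "itis")   # full taxonomic hierarchy from ITIS
    synonyms("Poa annua", db = "itis")                 # synonyms from the same database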
r-systemfit
|
public |
Econometric estimation of simultaneous systems of linear and nonlinear equations using Ordinary Least Squares (OLS), Weighted Least Squares (WLS), Seemingly Unrelated Regressions (SUR), Two-Stage Least Squares (2SLS), Weighted Two-Stage Least Squares (W2SLS), and Three-Stage Least Squares (3SLS) as suggested, e.g., by Zellner (1962) <doi:10.2307/2281644>, Zellner and Theil (1962) <doi:10.2307/1911287>, and Schmidt (1990) <doi:10.1016/0304-4076(90)90127-F>. A short example follows this entry.
|
2025-03-25 |
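A minimal sketch of a seemingly unrelated regressions (SUR) fit; the systemfit() call and the bundled Kmenta demand/supply data are assumptions drawn from the package's usual examples rather than from the entry above.

    library(systemfit)
    data("Kmenta", package = "systemfit")    # classic demand/supply data
    demand <- consump ~ price + income
    supply <- consump ~ price + farmPrice + trend
    fit <- systemfit(list(demand = demand, supply = supply),
                     method = "SUR", data = Kmenta)
    summary(fit)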
r-svmpath
|
public |
Computes the entire regularization path for the two-class SVM classifier with essentially the same cost as a single SVM fit.
|
2025-03-25 |
r-survivalmpl
|
public |
Estimate the regression coefficients and the baseline hazard of proportional hazard Cox models with left, right or interval censored survival data using maximum penalised likelihood. A 'non-parametric' smooth estimate of the baseline hazard function is provided.
|
2025-03-25 |
r-survivalroc
|
public |
Computes time-dependent ROC curves from censored survival data using the Kaplan-Meier (KM) or Nearest Neighbor Estimation (NNE) method of Heagerty, Lumley & Pepe (Biometrics, Vol 56 No 2, 2000, pp. 337-344). A brief example follows this entry.
|
2025-03-25 |
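A minimal sketch of a time-dependent ROC curve at one year; the survivalROC() call and the bundled mayo data set (with time, censor, and mayoscore5 columns) are assumptions drawn from the package's usual examples.

    library(survivalROC)
    data(mayo)                                           # Mayo PBC marker data
    roc <- survivalROC(Stime = mayo$time, status = mayo$censor,
                       marker = mayo$mayoscore5,
                       predict.time = 365, method = "KM")
    roc$AUC                                              # AUC at t = 365 days
    plot(roc$FP, roc$TP, type = "l", xlab = "FP", ylab = "TP")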
r-survey
|
public |
Summary statistics, two-sample tests, rank tests, generalised linear models, cumulative link models, Cox models, loglinear models, and general maximum pseudolikelihood estimation for multistage stratified, cluster-sampled, unequally weighted survey samples. Variances by Taylor series linearisation or replicate weights. Post-stratification, calibration, and raking. Two-phase subsampling designs. Graphics. PPS sampling without replacement. A short usage sketch follows this entry.
|
2025-03-25 |
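A minimal sketch of a design-based analysis; svydesign(), svymean(), svytotal(), and the bundled api school samples are assumptions drawn from the package's usual examples rather than from the entry above.

    library(survey)
    data(api)                                            # California API school samples
    dclus1 <- svydesign(id = ~dnum, weights = ~pw, fpc = ~fpc, data = apiclus1)
    svymean(~api00, dclus1)                              # design-based mean and standard error
    svytotal(~enroll, dclus1)                            # design-based total enrolment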
r-stepreg
|
public |
The three most common types of stepwise regression, namely linear regression, logistic regression, and Cox proportional hazards regression, can be performed to select the best model using forward selection, backward elimination, bidirectional selection, or best subset selection. Widely used selection criteria are available for variable selection.
|
2025-03-25 |
r-stars
|
public |
Reading, manipulating, writing and plotting spatiotemporal arrays (raster and vector data cubes) in 'R', using 'GDAL' bindings provided by 'sf', and 'NetCDF' bindings by 'ncmeta' and 'RNetCDF'. A brief example follows this entry.
|
2025-03-25 |
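A minimal sketch of reading and plotting a raster data cube; read_stars() and the Landsat sample file shipped with the package are assumptions drawn from its usual examples rather than from the entry above.

    library(stars)
    tif <- system.file("tif/L7_ETMs.tif", package = "stars")   # sample Landsat-7 image
    x <- read_stars(tif)
    x            # a stars object: two spatial dimensions plus a band dimension
    plot(x)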