JASP (Jeffreys’s Amazing Statistics Program) is a free and open-source program for statistical analysis supported by the University of Amsterdam. It is designed to be easy to use, and familiar to users of SPSS. It offers standard analysis procedures in both their classical and Bayesian form. JASP generally produces APA style results tables and plots to ease publication. It promotes open science by integration with the Open Science Framework and reproducibility by integrating the analysis settings into the results. The development of JASP is financially supported by several universities and research funds.
Analyses
JASP offers frequentist inference and Bayesian inference on the same statistical models. Frequentist inference uses p-values and confidence intervals to control error rates in the limit of infinite perfect replications. Bayesian inference uses credible intervals and Bayes factors to estimate credible parameter values and model evidence given the available data and prior knowledge.
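The contrast between the two approaches can be illustrated on a single dataset. The sketch below (Python with NumPy/SciPy, not JASP's implementation) computes a frequentist p-value and confidence interval for a mean, then a Bayesian credible interval and a Savage–Dickey Bayes factor; the N(0, 1) prior and the known unit variance are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=40)

# Frequentist: p-value and 95% confidence interval from a one-sample t-test
res = stats.ttest_1samp(data, popmean=0.0)
ci = res.confidence_interval(confidence_level=0.95)

# Bayesian: grid posterior for the mean, assuming known sigma = 1
# and an N(0, 1) prior (both assumptions are for illustration only)
mu = np.linspace(-2.0, 2.0, 4001)
log_post = stats.norm.logpdf(mu, 0.0, 1.0)  # log prior
log_post = log_post + np.array([stats.norm.logpdf(data, m, 1.0).sum() for m in mu])
w = np.exp(log_post - log_post.max())
w /= w.sum()

# 95% credible interval from the posterior CDF
cdf = np.cumsum(w)
credible = (mu[np.searchsorted(cdf, 0.025)], mu[np.searchsorted(cdf, 0.975)])

# Savage–Dickey density ratio: BF10 = prior density at 0 / posterior density at 0
density = w / (mu[1] - mu[0])               # grid weights -> density
bf10 = stats.norm.pdf(0.0, 0.0, 1.0) / density[np.argmin(np.abs(mu))]
```

The p-value and confidence interval summarize long-run error control; the credible interval and Bayes factor summarize what the data and prior jointly say about the parameter.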
A wide range of standard analyses is available in JASP.
Other features
* Descriptive statistics.
* Assumption checks for all analyses, including Levene's test, the Shapiro–Wilk test, and Q–Q plots.
* Imports SPSS files and comma-separated files.
* Open Science Framework integration.
* Data filtering: Use either R code or a drag-and-drop GUI to select cases of interest.
* Create columns: Use either R code or a drag-and-drop GUI to create new variables from existing ones.
* Copy tables in LaTeX format.
* Plot editing, raincloud plots.
* PDF export of results.
* Importing SQL databases (since v0.16.4).
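JASP's filters and computed columns are R expressions evaluated against the loaded dataset. The same two operations, sketched in Python with pandas (column names are hypothetical, chosen only for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical dataset standing in for a loaded JASP spreadsheet
df = pd.DataFrame({
    "age": [17, 25, 34, 52],
    "rt_ms": [412.0, 388.5, 455.2, 501.9],
    "group": ["control", "treatment", "control", "treatment"],
})

# Data filtering: keep only cases of interest
# (comparable to a JASP filter expression such as `age >= 18`)
adults = df[df["age"] >= 18]

# Computed column: derive a new variable from an existing one
# (comparable to a JASP computed column such as `log(rt_ms)`)
adults = adults.assign(log_rt=lambda d: np.log(d["rt_ms"]))
```

In JASP both operations are non-destructive: the original data stay intact and the filter or formula can be edited at any time.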
Modules
# Audit: Planning, selection and evaluation of statistical audit samples.
# Summary statistics: Bayesian inference from frequentist summary statistics for t-test, regression, and binomial tests.
# Bain: Bayesian informative hypotheses evaluation for t-test, ANOVA, ANCOVA, and linear regression.
# Network: Network Analysis allows the user to analyze the network structure of variables.
# Meta Analysis: Includes techniques for fixed and random effects analysis, fixed and mixed effects meta-regression, forest and funnel plots, tests for funnel plot asymmetry, trim-and-fill and fail-safe N analysis.
# Machine Learning: The Machine Learning module contains 19 analyses for supervised and unsupervised learning:
#*Regression
#*#Boosting Regression
#*#Decision Tree Regression
#*#K-Nearest Neighbors Regression
#*#Neural Network Regression
#*#Random Forest Regression
#*#Regularized Linear Regression
#*#Support Vector Machine Regression
#*Classification
#*#Boosting Classification
#*#Decision Tree Classification
#*#K-Nearest Neighbors Classification
#*#Neural Network Classification
#*#Linear Discriminant Classification
#*#Random Forest Classification
#*#Support Vector Machine Classification
#*Clustering
#*#Density-Based Clustering
#*#Fuzzy C-Means Clustering
#*#Hierarchical Clustering
#*#Neighborhood-based Clustering (i.e., K-Means Clustering, K-Medians Clustering, K-Medoids Clustering)
#*#Random Forest Clustering
# SEM: Structural equation modeling.
# JAGS module.
# Discover distributions.
# Equivalence testing.
# Cochrane meta-analyses.
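The Summary Statistics module computes Bayesian quantities directly from reported test statistics rather than raw data. One common formulation for a one-sample t-test is the JZS Bayes factor of Rouder et al. (2009), which needs only the t-value and sample size. The sketch below is an illustrative implementation of that published formula with a unit-scale prior, not JASP's own code:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n):
    """JZS Bayes factor BF10 for a one-sample t-test from summary
    statistics (t-value and sample size), per Rouder et al. (2009)."""
    v = n - 1                                    # degrees of freedom
    # Marginal likelihood under H0 (up to a constant shared with H1)
    m0 = (1 + t**2 / v) ** (-(v + 1) / 2)

    # Marginal likelihood under H1: integrate over the mixing parameter g,
    # which gives the Cauchy prior on effect size as a scale mixture
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    m1, _ = integrate.quad(integrand, 0, np.inf, limit=200)
    return m1 / m0
```

A BF10 above 1 favors the alternative, below 1 the null; the module applies the same idea to regression and binomial tests.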