In the context of artificial neural networks, the rectifier or ReLU (rectified linear unit) activation function is an activation function defined as the non-negative part of its argument, i.e., the ramp function:
: f(x) = x^+ = \max(0, x) = \frac{x + |x|}{2} = \begin{cases} x & \text{if } x > 0, \\ 0 & \text{otherwise,} \end{cases}
where x is the input to a neuron. This is analogous to half-wave rectification in electrical engineering.
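As an illustration of the definition, ReLU can be computed elementwise with a single comparison. The following is an informal NumPy sketch (the names are illustrative, not taken from any particular library):

<syntaxhighlight lang="python">
import numpy as np

def relu(x):
    """Rectified linear unit: the non-negative part of x, applied elementwise."""
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))              # [0.   0.   0.   0.5  2. ]
print((x + np.abs(x)) / 2)  # the closed form (x + |x|)/2 gives the same values
</syntaxhighlight>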
ReLU is one of the most popular activation functions for artificial neural networks, and finds application in computer vision and speech recognition[Andrew L. Maas, Awni Y. Hannun, Andrew Y. Ng (2014). "Rectifier Nonlinearities Improve Neural Network Acoustic Models".] using deep neural nets, and in computational neuroscience.
History
The ReLU was first used by Alston Householder in 1941 as a mathematical abstraction of biological neural networks. Kunihiko Fukushima used ReLU in 1969 in the context of visual feature extraction in hierarchical neural networks. Thirty years later, Hahnloser et al. argued that ReLU approximates the biological relationship between neural firing rates and input current, in addition to enabling recurrent neural network dynamics to stabilise under weaker criteria.
Prior to 2010, most activation functions used were the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more numerically efficient counterpart, the hyperbolic tangent. Around 2010, the use of ReLU became common again.
Jarrett et al. (2009) noted that rectification by either absolute value or ReLU (which they called "positive part") was critical for object recognition in convolutional neural networks (CNNs), specifically because it allows average pooling without neighboring filter outputs cancelling each other out. They hypothesized that the use of sigmoid or tanh was responsible for poor performance in previous CNNs.
Nair and Hinton (2010) made a theoretical argument that the softplus activation function should be used, in that the softplus function numerically approximates the sum of an exponential number of linear models that share parameters. They then proposed ReLU as a good approximation to it. Specifically, they began by considering a single binary neuron in a Boltzmann machine that takes x as input and produces 1 as output with probability \sigma(x). They then considered extending its range of output by making infinitely many copies of it that all take the same input but have their biases shifted by different fixed amounts, so that the i-th copy outputs 1 with probability \sigma(x - i + 0.5); the outputs are then added together as \sum_{i=1}^\infty \sigma(x - i + 0.5). They then demonstrated that this sum is approximately equal to the softplus \log(1 + e^x), which in turn is approximately equal to \max(0, x + \mathcal{N}(0, \sigma(x))), where \mathcal{N}(0, \sigma(x)) stands for a sample from the Gaussian distribution with mean 0 and variance \sigma(x).
They also argued for another reason to use ReLU: it allows "intensity equivariance" in image recognition, in the sense that multiplying the input image by a positive constant scales the output by the same factor. In contrast, this does not hold for other activation functions such as sigmoid or tanh. They found that ReLU activation allowed good empirical performance in restricted Boltzmann machines.[Nair, Vinod, and Geoffrey E. Hinton. "Rectified Linear Units Improve Restricted Boltzmann Machines." ''Proceedings of the 27th International Conference on Machine Learning (ICML-10)''. 2010.]
Glorot et al. (2011) argued that ReLU has the following advantages over sigmoid or tanh: ReLU is more similar to biological neurons' responses in their main operating regime; it avoids vanishing gradients; it is cheaper to compute; and it naturally produces sparse representations, because many hidden units output exactly zero for a given input. They also found empirically that deep networks trained with ReLU can achieve strong performance ''without'' unsupervised pre-training, especially on large, purely supervised tasks.
Advantages
Advantages of ReLU include:
* Sparse activation: for example, in a randomly initialized network, only about 50% of hidden units are activated (i.e., have a non-zero output); see the sketch after this list.
* Better gradient propagation: fewer vanishing gradient problems compared to sigmoidal activation functions that saturate in both directions.
* Efficiency: only requires comparison and addition.
* Scale-invariant (homogeneous, or exhibiting "intensity equivariance"):
: \max(0, ax) = a \max(0, x) \text{ for } a \ge 0.
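The sparse-activation property can be checked empirically. The sketch below assumes a single randomly initialized layer with zero-mean weights and no bias (all names are illustrative); roughly half of its ReLU outputs are zero for a random input:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)           # random input vector
W = rng.standard_normal((1024, 512))   # zero-mean random weight matrix
h = np.maximum(0.0, W @ x)             # ReLU hidden layer
print(f"fraction of active units: {(h > 0).mean():.2f}")  # close to 0.5
</syntaxhighlight>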
Potential problems
Potential downsides include:
* Non-differentiability at zero (however, it is differentiable everywhere else, and the value of the derivative at zero can be chosen to be 0 or 1 arbitrarily).
* Not zero-centered: ReLU outputs are always non-negative. This can make it harder for the network to learn during backpropagation, because gradient updates tend to push weights in one direction (positive or negative). Batch normalization can help address this.
* ReLU is unbounded.
* Redundancy of the parametrization: because ReLU is scale-invariant, the network computes the exact same function if the weights and biases in front of a ReLU activation are scaled by a positive constant k and the weights after it are scaled by 1/k (see the sketch after this list).
* Dying ReLU: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state (it "dies"). This is a form of the vanishing gradient problem. In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity and potentially even halting the learning process. This problem typically arises when the learning rate is set too high. It may be mitigated by using "leaky" ReLU instead, where a small positive slope is assigned for x < 0. However, depending on the task, performance may be reduced.
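The redundancy of the parametrization can be verified numerically. The sketch below (a single hidden layer with illustrative names) rescales the incoming weights and biases by a positive constant and the outgoing weights by its reciprocal, leaving the computed function unchanged:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
W2 = rng.standard_normal((4, 16))

def net(W1, b1, W2, x):
    # one ReLU hidden layer followed by a linear output layer
    return W2 @ np.maximum(0.0, W1 @ x + b1)

k = 3.7  # any positive constant
y_original = net(W1, b1, W2, x)
y_rescaled = net(k * W1, k * b1, W2 / k, x)
print(np.allclose(y_original, y_rescaled))  # True: the same function
</syntaxhighlight>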
Variants
Piecewise-linear variants
Leaky ReLU (2014) allows a small, positive gradient when the unit is inactive, helping to mitigate the vanishing gradient problem. This gradient is defined by a parameter \alpha, typically set to 0.01–0.3.
: f(x) = \begin{cases} x & \text{if } x > 0, \\ \alpha x & \text{otherwise.} \end{cases}
The same function can also be expressed without the piecewise notation as:
: f(x) = \frac{1+\alpha}{2}x + \frac{1-\alpha}{2}|x|.
Parametric ReLU (PReLU, 2016) takes this idea further by making \alpha a learnable parameter along with the other network parameters.
Note that for \alpha \le 1, this is equivalent to
: f(x) = \max(x, \alpha x)
and thus has a relation to "maxout" networks.
Concatenated ReLU (CReLU, 2016) preserves positive and negative phase information by returning two values:
: f(x) = [\max(0, x), \max(0, -x)].
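The piecewise-linear variants above differ only in how the negative half-line is treated. A brief NumPy sketch (illustrative; here \alpha is treated as a fixed number, whereas PReLU would learn it during training):

<syntaxhighlight lang="python">
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: small slope alpha for negative inputs."""
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    """Parametric ReLU: same form, valid for alpha <= 1."""
    return np.maximum(x, alpha * x)

def crelu(x):
    """Concatenated ReLU: returns the positive and negative phases."""
    return np.concatenate([np.maximum(0.0, x), np.maximum(0.0, -x)], axis=-1)

x = np.array([-2.0, -0.1, 0.0, 1.5])
print(leaky_relu(x))        # [-0.02  -0.001  0.     1.5  ]
print(prelu(x, alpha=0.2))  # [-0.4   -0.02   0.     1.5  ]
print(crelu(x))             # [0.  0.  0.  1.5  2.  0.1  0.  0. ]
</syntaxhighlight>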
Smooth variants
Softplus

A smooth approximation to the rectifier is the analytic function
: f(x) = \ln(1 + e^x),
which is called the ''softplus'' (2000) or ''SmoothReLU'' function. For large negative x it is roughly \ln 1, so just above 0, while for large positive x it is roughly \ln(e^x), so just above x.
This function can be approximated as:
: \ln(1 + e^x) \approx \begin{cases} \ln 2, & x = 0, \\ \frac{x}{1 - e^{-x/\ln 2}}, & x \neq 0. \end{cases}
By making the change of variables x = y \ln(2), this is equivalent to
: \log_2(1 + 2^y) \approx \begin{cases} 1, & y = 0, \\ \frac{y}{1 - e^{-y}}, & y \neq 0. \end{cases}
A sharpness parameter k may be included:
: f(x) = \frac{\ln(1 + e^{kx})}{k}.
The derivative of softplus is the logistic function. This can in turn be viewed as a smooth approximation of the derivative of the rectifier, the Heaviside step function.
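In floating-point arithmetic, evaluating \ln(1 + e^x) directly overflows for large positive x, so implementations typically use the algebraically identical form \max(x, 0) + \ln(1 + e^{-|x|}). A brief sketch (illustrative, not taken from any specific library):

<syntaxhighlight lang="python">
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x)) without overflow."""
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def softplus_derivative(x):
    """The derivative of softplus is the logistic (sigmoid) function."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-1000.0, -1.0, 0.0, 1.0, 1000.0])
print(softplus(x))                                      # [0. 0.3133 0.6931 1.3133 1000.]
print(softplus_derivative(np.array([-5.0, 0.0, 5.0])))  # [0.0067 0.5 0.9933]
</syntaxhighlight>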
The multivariable generalization of the single-variable softplus is the LogSumExp with the first argument set to zero:
: \operatorname{LSE_0}(x_1, \dots, x_n) := \operatorname{LSE}(0, x_1, \dots, x_n) = \ln(1 + e^{x_1} + \cdots + e^{x_n}).
The LogSumExp function is
: \operatorname{LSE}(x_1, \dots, x_n) = \ln(e^{x_1} + \cdots + e^{x_n}),
and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
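In practice LogSumExp is evaluated stably by factoring out the maximum before exponentiating; with the first argument fixed at zero it reduces to the softplus. A short sketch (illustrative):

<syntaxhighlight lang="python">
import numpy as np

def logsumexp(xs):
    """log(exp(x_1) + ... + exp(x_n)), computed stably by factoring out the maximum."""
    m = np.max(xs)
    return m + np.log(np.sum(np.exp(xs - m)))

x = 3.0
print(logsumexp(np.array([0.0, x])))  # softplus(3) = log(1 + e^3), about 3.0486
print(np.log1p(np.exp(x)))            # same value
</syntaxhighlight>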
ELU
Exponential linear units (2015) smoothly allow negative values. This is an attempt to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs.
: f(x) = \begin{cases} x & \text{if } x > 0, \\ \alpha \left(e^x - 1\right) & \text{otherwise.} \end{cases}
In these formulas, \alpha is a hyperparameter to be tuned with the constraint \alpha \ge 0.
Given the same interpretation of \alpha, ELU can be viewed as a smoothed version of a shifted ReLU (SReLU), which has the form f(x) = \max(-\alpha, x).
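A sketch of ELU under this definition, together with the shifted ReLU it smooths (\alpha treated as a fixed hyperparameter; names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha*(exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def shifted_relu(x, alpha=1.0):
    """SReLU: the identity clipped below at -alpha."""
    return np.maximum(-alpha, x)

x = np.array([-3.0, -1.0, 0.0, 2.0])
print(elu(x))           # [-0.9502 -0.6321  0.      2.    ]
print(shifted_relu(x))  # [-1.     -1.      0.      2.    ]
</syntaxhighlight>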
Gaussian-error linear unit (GELU)
GELU (2016) is a smooth approximation to the rectifier:
: f(x) = x \Phi(x),
: f'(x) = x \Phi'(x) + \Phi(x),
where \Phi(x) is the cumulative distribution function of the standard normal distribution.
This activation function is illustrated in the figure at the start of this article. It is non-monotonic, with a "bump" (a region of negative derivative) for x < 0. It serves as the default activation for many transformer models such as BERT.
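Since \Phi is the standard normal CDF, GELU can be written with the error function as x \cdot \tfrac{1}{2}(1 + \operatorname{erf}(x/\sqrt{2})). The sketch below (illustrative) uses this exact form alongside the commonly used tanh-based approximation:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import erf  # error function, used to evaluate the normal CDF

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_tanh(x):
    """A widely used tanh-based approximation of GELU."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(gelu(x))       # note the small negative values ("bump") for x < 0
print(gelu_tanh(x))  # close to the exact values
</syntaxhighlight>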
SiLU

The SiLU (sigmoid linear unit) or swish function is another smooth approximation, which uses the sigmoid (logistic) function. It was first introduced in the 2016 GELU paper:
: f(x) = x \operatorname{sigmoid}(x),
: f'(x) = x \operatorname{sigmoid}'(x) + \operatorname{sigmoid}(x).
It is cheaper to compute than GELU, and it likewise has a non-monotonic "bump".
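A sketch of SiLU (illustrative), showing that it only requires a sigmoid and a multiplication:

<syntaxhighlight lang="python">
import numpy as np

def silu(x):
    """SiLU / swish: x * sigmoid(x), written as a single expression."""
    return x / (1.0 + np.exp(-x))

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(silu(x))  # [-0.0719 -0.2689  0.      0.7311  3.9281]
</syntaxhighlight>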
Mish
The mish function (2019) can also be used as a smooth approximation of the rectifier. It is defined as
: f(x) = x \tanh\big(\operatorname{softplus}(x)\big),
where \tanh is the hyperbolic tangent and \operatorname{softplus}(x) = \ln(1 + e^x) is the softplus function.
Mish was obtained by experimenting with functions similar to Swish (SiLU, see above). It is non-monotonic (has a "bump") like Swish. The main new feature is that it exhibits a "self-regularizing" behavior attributed to a term in its first derivative.
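A sketch of mish composed from the tanh and softplus definitions above (illustrative; reuses a numerically stable softplus):

<syntaxhighlight lang="python">
import numpy as np

def softplus(x):
    # stable log(1 + exp(x))
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def mish(x):
    """Mish: x * tanh(softplus(x))."""
    return x * np.tanh(softplus(x))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(mish(x))  # non-monotonic: slightly negative for moderately negative x
</syntaxhighlight>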
Squareplus
Squareplus (2021) is the function
: \operatorname{squareplus}_b(x) = \frac{x + \sqrt{x^2 + b}}{2},
where b \ge 0 is a hyperparameter that determines the "size" of the curved region near x = 0. (For example, letting b = 0 yields ReLU, and letting b = 4 yields the metallic mean function.)
Squareplus shares many properties with softplus: it is monotonic, strictly positive, approaches 0 as x \to -\infty, approaches the identity as x \to +\infty, and is smooth. However, squareplus can be computed using only algebraic functions, making it well-suited for settings where computational resources or instruction sets are limited. Additionally, squareplus requires no special consideration to ensure numerical stability when x is large.
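Because it uses only algebraic operations, squareplus is straightforward to evaluate and does not overflow for large inputs, unlike a naive softplus implementation. A brief sketch (illustrative, with the hyperparameter written as b):

<syntaxhighlight lang="python">
import numpy as np

def squareplus(x, b=4.0):
    """Squareplus: (x + sqrt(x^2 + b)) / 2; b = 0 reduces to ReLU."""
    return 0.5 * (x + np.sqrt(x * x + b))

x = np.array([-1e8, -1.0, 0.0, 1.0, 1e8])
print(squareplus(x))         # well-behaved even for large |x|
print(squareplus(x, b=0.0))  # identical to max(0, x)
</syntaxhighlight>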
DELU
ExtendeD Exponential Linear Unit (DELU, 2023) is an activation function which is smoother within the neighborhood of zero and sharper for bigger values, allowing better allocation of neurons in the learning process for higher performance. Thanks to its unique design, it has been shown that DELU may obtain higher classification accuracy than ReLU and ELU.
:
In these formulas, the hyperparameters can be set to the default constraints chosen in the original work.
See also
* Softmax function
* Sigmoid function
* Tobit model
* Layer (deep learning)
References
{{Artificial intelligence navbox}}
Artificial neural networks