Gated recurrent units (GRUs) are a gating mechanism in
recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a
long short-term memory (LSTM) with a forget gate,
but has fewer parameters than an LSTM, as it lacks an output gate.
The GRU's performance on certain tasks of polyphonic music modeling, speech signal modeling, and natural language processing was found to be similar to that of the LSTM.
GRUs have been shown to perform better on certain smaller and less frequent datasets.
Architecture
There are several variations on the full gated unit, with gating done using the previous hidden state and the bias in various combinations, and a simplified form called minimal gated unit.
The operator $\odot$ denotes the Hadamard product in the following.
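For reference, the Hadamard product is simply the elementwise product of two equally sized vectors or matrices. A brief NumPy sketch (the values here are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Hadamard (elementwise) product, written with the ⊙ operator in the equations below.
print(a * b)  # [ 4. 10. 18.]
```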
Fully gated unit

Initially, for $t = 0$, the output vector is $h_0 = 0$.

$$
\begin{aligned}
z_t &= \sigma_g(W_z x_t + U_z h_{t-1} + b_z) \\
r_t &= \sigma_g(W_r x_t + U_r h_{t-1} + b_r) \\
\hat{h}_t &= \phi_h(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \hat{h}_t
\end{aligned}
$$
Variables
* $x_t$: input vector
* $h_t$: output vector
* $\hat{h}_t$: candidate activation vector
* $z_t$: update gate vector
* $r_t$: reset gate vector
* $W$, $U$ and $b$: parameter matrices and vector
Activation functions
* $\sigma_g$: The original is a sigmoid function.
* $\phi_h$: The original is a hyperbolic tangent.
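As an illustration of the equations above, the following is a minimal NumPy sketch of a single fully gated GRU forward step. The function name `gru_step`, the parameter dictionary layout, and the dimensions are assumptions made for this example only, not part of any standard implementation.

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid, the original choice for the gate activations.
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One forward step of a fully gated GRU.

    x_t:    input vector x_t, shape (d_in,)
    h_prev: previous output vector h_{t-1}, shape (d_hid,)
    params: dict of weight matrices W_*, U_* and bias vectors b_*
    """
    W_z, U_z, b_z = params["W_z"], params["U_z"], params["b_z"]
    W_r, U_r, b_r = params["W_r"], params["U_r"], params["b_r"]
    W_h, U_h, b_h = params["W_h"], params["U_h"], params["b_h"]

    # Update gate z_t and reset gate r_t.
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)

    # Candidate activation; the reset gate masks the previous state elementwise.
    h_hat = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev) + b_h)

    # New output: elementwise blend of the previous state and the candidate.
    h_t = (1.0 - z_t) * h_prev + z_t * h_hat
    return h_t

# Example usage with random parameters; h_0 = 0 as in the definition above.
d_in, d_hid = 4, 3
rng = np.random.default_rng(0)
params = {}
for gate in ("z", "r", "h"):
    params[f"W_{gate}"] = rng.standard_normal((d_hid, d_in))
    params[f"U_{gate}"] = rng.standard_normal((d_hid, d_hid))
    params[f"b_{gate}"] = np.zeros(d_hid)

h = np.zeros(d_hid)
for x in rng.standard_normal((5, d_in)):  # a sequence of 5 input vectors
    h = gru_step(x, h, params)
```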
Alternative activation functions are possible, provided that