Subnormal number

In computer science, subnormal numbers are the subset of denormalized numbers (sometimes called denormals) that fill the underflow gap around zero in floating-point arithmetic. Any non-zero number with magnitude smaller than the smallest normal number is subnormal.

Usage note: in some older documents (especially standards documents such as the initial releases of IEEE 754 and the C language), "denormal" is used to refer exclusively to subnormal numbers. This usage persists in various standards documents, especially when discussing hardware that is incapable of representing any other denormalized numbers, but the discussion here uses the term subnormal in line with the 2008 revision of IEEE 754.

In a normal floating-point value, there are no leading zeros in the significand (mantissa); rather, leading zeros are removed by adjusting the exponent (for example, the number 0.0123 would be written as 1.23 × 10⁻²). Conversely, a denormalized floating-point value has a significand with a leading digit of zero. Of these, the subnormal numbers represent values which, if normalized, would have exponents below the smallest representable exponent (the exponent having a limited range).

The significand (or mantissa) of an IEEE floating-point number is the part of the number that represents the significant digits. For a positive normalised number it can be represented as m₀.m₁m₂m₃...mₚ₋₂mₚ₋₁ (where m represents a significant digit and p is the precision), with non-zero m₀. Notice that for a binary radix, the leading binary digit is always 1. In a subnormal number, since the exponent is the least that it can be, zero is the leading significant digit (0.m₁m₂m₃...mₚ₋₂mₚ₋₁), allowing the representation of numbers closer to zero than the smallest normal number. A floating-point number may be recognized as subnormal whenever its exponent is the least value possible.

By filling the underflow gap like this, significant digits are lost, but not as abruptly as with the flush-to-zero-on-underflow approach (which discards all significant digits when underflow is reached). Hence the production of a subnormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small.

In IEEE 754-2008, denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly.

Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such any finite-sized normal float cannot include zero. The subnormal floats are a linearly spaced set of values which span the gap between the negative and positive normal floats.
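As a concrete illustration (a minimal C sketch, assuming IEEE 754 binary64 doubles and a platform that does not flush subnormals to zero), the following program classifies the smallest normal value, the smallest subnormal value, and a value produced by gradual underflow:

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        double smallest_normal    = DBL_MIN;            // 2^-1022, smallest positive normal double
        double smallest_subnormal = nextafter(0.0, 1.0); // 2^-1074, smallest positive subnormal

        printf("smallest normal    = %a (%s)\n", smallest_normal,
               fpclassify(smallest_normal) == FP_NORMAL ? "normal" : "other");
        printf("smallest subnormal = %a (%s)\n", smallest_subnormal,
               fpclassify(smallest_subnormal) == FP_SUBNORMAL ? "subnormal" : "other");

        // Halving the smallest normal number lands in the subnormal range
        // instead of underflowing straight to zero (gradual underflow).
        double half = smallest_normal / 2.0;
        printf("DBL_MIN / 2        = %a (%s)\n", half,
               fpclassify(half) == FP_SUBNORMAL ? "subnormal" : "other");
        return 0;
    }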


Background

Subnormal numbers provide the guarantee that addition and subtraction of floating-point numbers never underflows; two nearby floating-point numbers always have a representable non-zero difference. Without gradual underflow, the subtraction a − b can underflow and produce zero even though the values are not equal. This can, in turn, lead to division-by-zero errors that cannot occur when gradual underflow is used.

Subnormal numbers were implemented in the Intel 8087 while the IEEE 754 standard was being written. They were by far the most controversial feature in the K-C-S format proposal that was eventually adopted, but this implementation demonstrated that subnormal numbers could be supported in a practical implementation. Some implementations of floating-point units do not directly support subnormal numbers in hardware, but rather trap to some kind of software support. While this may be transparent to the user, it can result in calculations that produce or consume subnormal numbers being much slower than similar calculations on normal numbers.
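A minimal C sketch of this guarantee (assuming IEEE 754 binary64 doubles and hardware that does not flush subnormals to zero): two adjacent tiny values have a representable, subnormal difference, so dividing by that difference cannot become a division by zero.

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        double a = DBL_MIN;            // smallest positive normal double, 2^-1022
        double b = nextafter(a, 1.0);  // next representable double above a

        double diff = b - a;           // with gradual underflow this is 2^-1074, not 0
        printf("diff = %a, subnormal? %d\n", diff, fpclassify(diff) == FP_SUBNORMAL);

        // Because diff is non-zero, this quotient is well defined (here 2^52).
        printf("a / (b - a) = %g\n", a / diff);
        return 0;
    }

With flush-to-zero underflow, diff would instead round to exactly zero even though a != b, and the final division would be a division by zero.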


Performance issues

Some systems handle subnormal values in hardware, in the same way as normal values. Others leave the handling of subnormal values to system software ("assist"), only handling normal values and zero in hardware. Handling subnormal values in software always leads to a significant decrease in performance. When subnormal values are entirely computed in hardware, implementation techniques exist to allow their processing at speeds comparable to normal numbers. However, the speed of computation remains significantly reduced on many modern x86 processors; in extreme cases, instructions involving subnormal operands may take as many as 100 additional clock cycles, causing the fastest instructions to run as much as six times slower.

This speed difference can be a security risk. Researchers have shown that it provides a timing side channel that allows a malicious web site to extract page content from another site inside a web browser.

Some applications need to contain code to avoid subnormal numbers, either to maintain accuracy or to avoid the performance penalty on some processors. For instance, in audio processing applications, subnormal values usually represent a signal so quiet that it is out of the human hearing range. Because of this, a common measure to avoid subnormals on processors where there would be a performance penalty is to cut the signal to zero once it reaches subnormal levels, or to mix in an extremely quiet noise signal. Other methods of preventing subnormal numbers include adding a DC offset, quantizing the numbers, or adding a Nyquist signal. Since the SSE2 processor extension, Intel has provided such functionality in CPU hardware, which rounds subnormal numbers to zero.
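The "cut to zero" workaround can be sketched in a few lines of C. This is an illustrative example only (the function name and the threshold 1e-30f are hypothetical choices, picked far below audibility but well above FLT_MIN ≈ 1.2e-38):

    #include <math.h>
    #include <stddef.h>

    // Flush very small samples to exact zero so that decaying feedback paths
    // (e.g. filter or reverb tails) never drift into the subnormal range.
    static void flush_tiny_to_zero(float *buf, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            if (fabsf(buf[i]) < 1e-30f) {
                buf[i] = 0.0f;
            }
        }
    }

Calling such a routine on each processing block keeps subsequent arithmetic entirely within the normal range, so the hardware penalty for subnormal operands is never triggered.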


Disabling subnormal floats at the code level


Intel SSE

Intel's C and Fortran compilers enable the DAZ (denormals-are-zero) and FTZ (flush-to-zero) flags for SSE by default for optimization levels higher than -O0. The effect of DAZ is to treat subnormal input arguments to floating-point operations as zero, and the effect of FTZ is to return zero instead of a subnormal float for operations that would result in a subnormal float, even if the input arguments are not themselves subnormal. clang and gcc have varying default states depending on platform and optimization level.

A non-C99-compliant method of enabling the DAZ and FTZ flags on targets supporting SSE is given below, but it is not widely supported. It is known to work on Mac OS X since at least 2006.

    #include <fenv.h>
    #pragma STDC FENV_ACCESS ON

    // Sets DAZ and FTZ, clobbering other CSR settings.
    // See https://opensource.apple.com/source/Libm/Libm-287.1/Source/Intel/, fenv.c and fenv.h.
    fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);

    // fesetenv(FE_DFL_ENV)  // Disable both, clobbering other CSR settings.

For other x86-SSE platforms where the C library has not yet implemented this flag, the following may work:

    #include <xmmintrin.h>

    _mm_setcsr(_mm_getcsr() | 0x0040);   // DAZ
    _mm_setcsr(_mm_getcsr() | 0x8000);   // FTZ
    _mm_setcsr(_mm_getcsr() | 0x8040);   // Both
    _mm_setcsr(_mm_getcsr() & ~0x8040);  // Disable both

The _MM_SET_DENORMALS_ZERO_MODE and _MM_SET_FLUSH_ZERO_MODE macros wrap a more readable interface for the code above.

    // To enable DAZ
    #include <pmmintrin.h>
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

    // To enable FTZ
    #include <xmmintrin.h>
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

Most compilers already provide the previous macro by default; otherwise the following code snippet can be used (the definition for FTZ is analogous):

    #define _MM_DENORMALS_ZERO_MASK   0x0040
    #define _MM_DENORMALS_ZERO_ON     0x0040
    #define _MM_DENORMALS_ZERO_OFF    0x0000

    #define _MM_SET_DENORMALS_ZERO_MODE(mode) _mm_setcsr((_mm_getcsr() & ~_MM_DENORMALS_ZERO_MASK) | (mode))
    #define _MM_GET_DENORMALS_ZERO_MODE()     (_mm_getcsr() & _MM_DENORMALS_ZERO_MASK)

The default denormalization behavior is mandated by the ABI, and therefore well-behaved software should save and restore the denormalization mode before returning to the caller or calling code in other libraries.
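Following that recommendation, a minimal sketch of how a library routine might save and restore the caller's MXCSR-based denormal settings around its own FTZ/DAZ-enabled processing (the function name and the loop body are hypothetical; x86 with SSE assumed):

    #include <xmmintrin.h>

    // Hypothetical library routine: enables FTZ+DAZ for its own hot loop,
    // then restores whatever mode the caller had, as the ABI expects.
    void process_block(float *buf, int n) {
        unsigned int saved_csr = _mm_getcsr();  // save the caller's MXCSR
        _mm_setcsr(saved_csr | 0x8040);         // set FTZ (0x8000) and DAZ (0x0040)

        for (int i = 0; i < n; ++i) {
            buf[i] *= 0.5f;                     // placeholder for the real processing
        }

        _mm_setcsr(saved_csr);                  // restore the caller's mode before returning
    }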


ARM

The AArch32 NEON (SIMD) FPU always uses a flush-to-zero mode, which is the same as FTZ. For the scalar FPU and in the AArch64 SIMD, the flush-to-zero behavior is optional and controlled by the FZ bit of the control register – FPSCR in Arm32 and FPCR in AArch64. Some ARM processors have hardware handling of subnormals.
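As an illustrative sketch (assuming AArch64 with GCC/Clang inline assembly, and that FZ is bit 24 of the FPCR), flush-to-zero can be toggled by reading and writing the FPCR directly; the helper names here are hypothetical:

    #include <stdint.h>

    #define FPCR_FZ (1ULL << 24)   // flush-to-zero bit in the AArch64 FPCR

    static inline uint64_t read_fpcr(void) {
        uint64_t v;
        __asm__ volatile("mrs %0, fpcr" : "=r"(v));
        return v;
    }

    static inline void write_fpcr(uint64_t v) {
        __asm__ volatile("msr fpcr, %0" : : "r"(v));
    }

    // Enable flush-to-zero for scalar and AArch64 SIMD operations.
    static inline void enable_flush_to_zero(void) {
        write_fpcr(read_fpcr() | FPCR_FZ);
    }

    // Restore the default (subnormal-preserving) behavior.
    static inline void disable_flush_to_zero(void) {
        write_fpcr(read_fpcr() & ~FPCR_FZ);
    }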


See also

* Logarithmic number system


Further reading

* See also various papers on William Kahan's web site for examples of where subnormal numbers help improve the results of calculations.