FLOPS

In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases, it is a more accurate measure than measuring instructions per second.


Floating-point arithmetic

Floating-point arithmetic is needed for very large or very small real numbers, or computations that require a large dynamic range. Floating-point representation is similar to scientific notation, except everything is carried out in base two, rather than base ten. The encoding scheme stores the sign, the exponent (in base two for Cray and VAX, base two or ten for IEEE floating point formats, and base 16 for IBM Floating Point Architecture) and the significand (number after the radix point). While several similar formats are in use, the most common is ANSI/IEEE Std. 754-1985. This standard defines the format for 32-bit numbers called ''single precision'', as well as 64-bit numbers called ''double precision'' and longer numbers called ''extended precision'' (used for intermediate results). Floating-point representations can support a much wider range of values than fixed-point, with the ability to represent very small numbers and very large numbers.
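
To make the encoding concrete, the following minimal Python sketch unpacks the three fields of a double-precision (binary64) value; the helper name fp64_fields is purely illustrative and not part of any library discussed here.

 import struct

 def fp64_fields(x):
     """Unpack an IEEE 754 double into its sign, biased exponent, and significand fields."""
     (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # raw 64-bit pattern
     sign = bits >> 63                     # 1 sign bit
     exponent = (bits >> 52) & 0x7FF       # 11-bit exponent, stored with a bias of 1023
     significand = bits & ((1 << 52) - 1)  # 52-bit fraction (the leading 1 is implicit)
     return sign, exponent, significand

 print(fp64_fields(12.345))  # sign 0, biased exponent 1026 (i.e. 2^3), plus the fraction bits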


Dynamic range and precision

The exponentiation inherent in floating-point computation assures a much larger dynamic range – the largest and smallest numbers that can be represented – which is especially important when processing data sets where some of the data may have an extremely large range of numerical values or where the range may be unpredictable. As such, floating-point processors are ideally suited for computationally intensive applications.
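
A quick way to see this dynamic range is to query the double-precision limits exposed by a language runtime, as in the following sketch (the printed bounds are the standard binary64 limits):

 import sys

 # Largest and smallest normal positive doubles, and the relative precision near 1.0.
 print(sys.float_info.max)      # about 1.8e308
 print(sys.float_info.min)      # about 2.2e-308 (smallest normal value)
 print(sys.float_info.epsilon)  # about 2.2e-16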


Computational performance

FLOPS and MIPS are units of measure for the numerical computing performance of a computer. Floating-point operations are typically used in fields such as scientific computational research. The unit MIPS measures integer performance of a computer. Examples of integer operations include data movement (A to B) or value testing (if A = B, then C). MIPS as a performance benchmark is adequate when a computer is used in database queries, word processing, spreadsheets, or to run multiple virtual operating systems. Frank H. McMahon, of the Lawrence Livermore National Laboratory, invented the terms FLOPS and MFLOPS (megaFLOPS) so that he could compare the supercomputers of the day by the number of floating-point calculations they performed per second. This was much better than using the prevalent MIPS to compare computers, as this statistic usually had little bearing on the arithmetic capability of the machine. FLOPS on an HPC system can be calculated using the following equation ("Nodes, Sockets, Cores and FLOPS, Oh, My" by Dr. Mark R. Fernandez):
: \text{FLOPS} = \text{racks} \times \frac{\text{nodes}}{\text{rack}} \times \frac{\text{sockets}}{\text{node}} \times \frac{\text{cores}}{\text{socket}} \times \frac{\text{cycles}}{\text{second}} \times \frac{\text{FLOPs}}{\text{cycle}}.

This can be simplified to the most common case, a computer that has exactly one CPU:

: \text{FLOPS} = \text{cores} \times \frac{\text{cycles}}{\text{second}} \times \frac{\text{FLOPs}}{\text{cycle}}.
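
As an illustration of how the equation is applied, the sketch below computes a theoretical peak; the machine figures used (8 cores, 3.0 GHz, 16 FLOPs per cycle) are hypothetical and not those of any particular processor.

 def peak_flops(racks, nodes_per_rack, sockets_per_node,
                cores_per_socket, cycles_per_second, flops_per_cycle):
     """Theoretical peak FLOPS of an HPC system, following the equation above."""
     return (racks * nodes_per_rack * sockets_per_node *
             cores_per_socket * cycles_per_second * flops_per_cycle)

 # Single-CPU case: cores x cycles/second x FLOPs/cycle.
 # A hypothetical 8-core CPU at 3.0 GHz retiring 16 double-precision FLOPs per cycle:
 print(peak_flops(1, 1, 1, 8, 3.0e9, 16) / 1e9, "GFLOPS")  # 384.0 GFLOPS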
FLOPS can be recorded in different measures of precision. For example, the TOP500 supercomputer list ranks computers by 64-bit (double-precision floating-point format) operations per second, abbreviated to ''FP64''. Similar measures are available for 32-bit (''FP32'') and 16-bit (''FP16'') operations.
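
Measured throughput depends on the workload and on the precision used. The rough sketch below estimates a rate from a dense matrix multiplication (about 2n³ floating-point operations), in the spirit of dense linear-algebra benchmarks such as LINPACK, for FP64 and FP32; it assumes NumPy is installed and is only an illustration, not a substitute for the HPL benchmark.

 import time
 import numpy as np

 def rough_gflops(n=2000, dtype=np.float64):
     """Estimate GFLOPS from an n-by-n matrix multiply (~2*n**3 floating-point operations)."""
     a = np.random.rand(n, n).astype(dtype)
     b = np.random.rand(n, n).astype(dtype)
     start = time.perf_counter()
     a @ b
     elapsed = time.perf_counter() - start
     return 2 * n**3 / elapsed / 1e9

 print("FP64:", round(rough_gflops(dtype=np.float64), 1), "GFLOPS")
 print("FP32:", round(rough_gflops(dtype=np.float32), 1), "GFLOPS")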


Floating-point operations per clock cycle for various processors


Performance records


Single computer records

In June 1997, Intel's ASCI Red was the world's first computer to achieve one teraFLOPS and beyond. Sandia director Bill Camp said that ASCI Red had the best reliability of any supercomputer ever built, and "was supercomputing's high-water mark in longevity, price, and performance". NEC's SX-9 supercomputer was the world's first vector processor to exceed 100 gigaFLOPS per single core.

In June 2006, a new computer was announced by the Japanese research institute RIKEN, the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, almost two times faster than the Blue Gene/L, but MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the Top500.org list. It has special-purpose pipelines for simulating molecular dynamics. By 2007, Intel Corporation unveiled the experimental multi-core POLARIS chip, which achieves 1 teraFLOPS at 3.13 GHz. The 80-core chip can raise this result to 2 teraFLOPS at 6.26 GHz, although the thermal dissipation at this frequency exceeds 190 watts.

In June 2007, Top500.org reported the fastest computer in the world to be the IBM Blue Gene/L supercomputer, measuring a peak of 596 teraFLOPS. The Cray XT4 hit second place with 101.7 teraFLOPS. On June 26, 2007, IBM announced the second generation of its top supercomputer, dubbed Blue Gene/P and designed to continuously operate at speeds exceeding one petaFLOPS, faster than the Blue Gene/L. When configured to do so, it can reach speeds in excess of three petaFLOPS. On October 25, 2007, NEC Corporation of Japan issued a press release announcing its SX series model SX-9, claiming it to be the world's fastest vector supercomputer. The SX-9 features the first CPU capable of a peak vector performance of 102.4 gigaFLOPS per single core.

On February 4, 2008, the NSF and the University of Texas at Austin opened full-scale research runs on an AMD, Sun supercomputer named Ranger, the most powerful supercomputing system in the world for open science research, which operates at a sustained speed of 0.5 petaFLOPS. On May 25, 2008, an American supercomputer built by IBM, named Roadrunner, reached the computing milestone of one petaFLOPS. It headed the June 2008 and November 2008 TOP500 lists of the most powerful supercomputers (excluding grid computers). The computer is located at Los Alamos National Laboratory in New Mexico, and its name refers to the New Mexico state bird, the greater roadrunner (''Geococcyx californianus''). In June 2008, AMD released the ATI Radeon HD 4800 series, reported to be the first GPUs to achieve one teraFLOPS. On August 12, 2008, AMD released the ATI Radeon HD 4870X2 graphics card with two Radeon R770 GPUs totaling 2.4 teraFLOPS.

In November 2008, an upgrade to the Cray Jaguar supercomputer at the Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) raised the system's computing power to a peak 1.64 petaFLOPS, making Jaguar the world's first petaFLOPS system dedicated to open research. In early 2009 the supercomputer was named after a mythical creature, Kraken. Kraken was declared the world's fastest university-managed supercomputer and sixth fastest overall in the 2009 TOP500 list; it was upgraded again in 2010, making it faster and more powerful. In 2009, the Cray Jaguar performed at 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list.

In October 2010, China unveiled the Tianhe-1, a supercomputer that operates at a peak computing rate of 2.5 petaFLOPS. Around that time, the fastest PC processor, the Intel Core i7 980 XE, reached 109 gigaFLOPS in double-precision calculations. GPUs are considerably more powerful: for example, Nvidia Tesla C2050 GPU computing processors perform around 515 gigaFLOPS in double-precision calculations, and the AMD FireStream 9270 peaks at 240 gigaFLOPS.

In November 2011, it was announced that Japan had achieved 10.51 petaFLOPS with its K computer. It has 88,128 SPARC64 VIIIfx processors in 864 racks, with a theoretical performance of 11.28 petaFLOPS. It is named after the Japanese word "kei", which stands for 10 quadrillion, corresponding to the target speed of 10 petaFLOPS. On November 15, 2011, Intel demonstrated a single x86-based processor, code-named "Knights Corner", sustaining more than a teraFLOPS on a wide range of DGEMM operations. Intel emphasized during the demonstration that this was a sustained teraFLOPS (not "raw teraFLOPS" used by others to get higher but less meaningful numbers), and that it was the first general-purpose processor ever to cross a teraFLOPS.

On June 18, 2012, IBM's Sequoia supercomputer system, based at the U.S. Lawrence Livermore National Laboratory (LLNL), reached 16 petaFLOPS, setting the world record and claiming first place in the latest TOP500 list. On November 12, 2012, the TOP500 list certified Titan as the world's fastest supercomputer per the LINPACK benchmark, at 17.59 petaFLOPS. It was developed by Cray Inc. at the Oak Ridge National Laboratory and combines AMD Opteron processors with "Kepler" NVIDIA Tesla graphics processing unit (GPU) technologies. On June 10, 2013, China's Tianhe-2 was ranked the world's fastest with 33.86 petaFLOPS.

On June 20, 2016, China's Sunway TaihuLight was ranked the world's fastest with 93 petaFLOPS on the LINPACK benchmark (out of 125 peak petaFLOPS). The system, which is almost exclusively based on technology developed in China, is installed at the National Supercomputing Center in Wuxi and delivers more performance than the next five most powerful systems on the TOP500 list combined. In June 2019, Summit, an IBM-built supercomputer now running at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 148.6 petaFLOPS on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each one equipped with two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs.


Distributed computing records

Distributed computing uses the Internet to link personal computers to achieve more FLOPS:

* The Folding@home network has over 2.3 exaFLOPS of total computing power. It is the most powerful distributed computer network, being the first ever to break 1 exaFLOPS of total computing power. This level of performance is primarily enabled by the cumulative effort of a vast array of powerful GPU and CPU units.
* The entire BOINC network averages about 31 petaFLOPS.
* SETI@home, employing the BOINC software platform, averages 896 teraFLOPS.
* Einstein@Home, a project using the BOINC network, is crunching at 3 petaFLOPS.
* MilkyWay@home, using the BOINC infrastructure, computes at 847 teraFLOPS.
* GIMPS, searching for Mersenne primes, is sustaining 1,354 teraFLOPS.


Cost of computing


Hardware costs


See also

* Computer performance by orders of magnitude
* Gordon Bell Prize
* LINPACK benchmarks
* Moore's law
* Multiply–accumulate operation
* Performance per watt (FLOPS per watt)
* Exascale computing
* SPECfp
* SPECint
* SUPS
* TOP500

