The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
Description
Given a parallel computation exhibiting speedup \psi on p processors, where p > 1, the experimentally determined serial fraction e is defined to be the Karp–Flatt metric, viz:

: e = \frac{\frac{1}{\psi} - \frac{1}{p}}{1 - \frac{1}{p}}

The lower the value of e, the better the parallelization.
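As a sketch of how the metric might be computed in practice, the following helper (the function name and the sample speedup values are illustrative, not from Karp and Flatt's paper) evaluates e from a measured speedup:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for a measured
    speedup psi obtained on p processors (requires p > 1)."""
    if p <= 1:
        raise ValueError("p must be greater than 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# A perfectly parallel code (speedup equal to p) yields e = 0:
print(karp_flatt(4.0, 4))              # 0.0

# A measured speedup of 3.57 on 8 processors implies e of about 0.177:
print(round(karp_flatt(3.57, 8), 3))   # 0.177
```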
Justification
There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp–Flatt metric reveals aspects of the performance that are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's law, which can be written as:

: T(p) = T_s + \frac{T_p}{p}
where:
* T(p) is the total time taken for code execution in a p-processor system
* T_s is the time taken for the serial part of the code to run
* T_p is the time taken for the parallel part of the code to run in one processor
* p is the number of processors
with the result T(1) = T_s + T_p obtained by substituting p = 1. If we define the serial fraction e = \frac{T_s}{T(1)}, then the equation can be rewritten as:

: T(p) = T(1) e + \frac{T(1) (1 - e)}{p}
In terms of the speedup \psi = \frac{T(1)}{T(p)}:

: \frac{1}{\psi} = e + \frac{1 - e}{p}
Solving for the serial fraction, we get the Karp–Flatt metric as above. Note that this is not a "derivation" from Amdahl's law, as the left-hand side represents a metric rather than a mathematically derived quantity. The treatment above merely shows that the Karp–Flatt metric is consistent with Amdahl's law.
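To illustrate this consistency, a small worked example (the timing values are hypothetical) starts from Amdahl's model, computes the resulting speedup, and then recovers the same serial fraction via the metric:

```python
# Hypothetical timings: serial part runs in Ts = 2 s, and the
# parallel part takes Tp = 8 s on a single processor.
Ts, Tp = 2.0, 8.0
T1 = Ts + Tp                 # T(1) = 10 s
e_defined = Ts / T1          # serial fraction as defined: 0.2

p = 4
Tp_sys = Ts + Tp / p         # Amdahl's law: T(4) = 2 + 8/4 = 4 s
psi = T1 / Tp_sys            # speedup: 10 / 4 = 2.5

# The Karp-Flatt metric recovers the same serial fraction:
e_metric = (1 / psi - 1 / p) / (1 - 1 / p)
print(e_defined, e_metric)   # both 0.2, up to floating-point rounding
```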
Use
While the serial fraction e is often mentioned in computer science literature, it was rarely used as a diagnostic tool the way speedup and efficiency are. Karp and Flatt hoped to correct this by proposing the metric, which addresses the inadequacies of the other laws and quantities used to measure the parallelization of computer code. In particular, Amdahl's law does not take load balancing issues into account, nor does it consider overhead. Using the serial fraction as a metric poses definite advantages over the others, particularly as the number of processors grows.
For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By using the serial fraction obtained experimentally with the Karp–Flatt metric, we can determine whether the efficiency decrease is due to limited opportunities for parallelism or to increases in algorithmic or architectural overhead.
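This diagnostic use can be sketched as follows; the measured speedups below are invented purely for illustration:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e (requires p > 1)."""
    return (1 / speedup - 1 / p) / (1 - 1 / p)

# Hypothetical measured speedups for a fixed-size problem:
measured = {2: 1.9, 4: 3.4, 8: 5.2, 16: 6.9}

for p, psi in sorted(measured.items()):
    print(f"p = {p:2d}  e = {karp_flatt(psi, p):.3f}")

# Here e grows with p (roughly 0.053, 0.059, 0.077, 0.088), which points
# to parallel overhead; an e that stays roughly constant as p increases
# would instead indicate that limited parallelism is the bottleneck.
```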
External links
* Lecture Notes on Karp–Flatt metric – Virginia Tech