Automatic vectorization
Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once. For example, modern conventional computers, including specialized supercomputers, typically have vector operations that simultaneously perform operations such as the following four additions (via SIMD or SPMD hardware):

\begin{align}
c_1 &= a_1 + b_1 \\
c_2 &= a_2 + b_2 \\
c_3 &= a_3 + b_3 \\
c_4 &= a_4 + b_4
\end{align}

However, in most programming languages one typically writes loops that sequentially perform additions of many numbers. Here is an example of such a loop, written in C:

for (i = 0; i < n; i++)
    c[i] = a[i] + b[i];

A vectorizing compiler transforms such loops into sequences of vector operations. These vector operations perform additions on blocks of elements from the arrays a, b and c. Automatic vectorization is a major research topic in computer science.


Background

Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time. Computer languages and programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations.

Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization is implemented in Intel's MMX, SSE, and AVX, in Power ISA's AltiVec, and in ARM's NEON, SVE and SVE2 instruction sets.

Many constraints prevent or hinder vectorization. Sometimes vectorization can slow down execution, for example because of pipeline synchronization or data-movement timing. Loop dependence analysis identifies loops that can be vectorized, relying on the data dependence of the instructions inside loops.


Guarantees

Automatic vectorization, like any loop optimization or other compile-time optimization, must exactly preserve program behavior.


Data dependencies

All dependencies must be respected during execution to prevent incorrect results. In general, loop-invariant dependencies and lexically forward dependencies can be easily vectorized, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, to ensure that the dependencies between all statements remain true to the original. Cyclic dependencies must be processed independently of the vectorized instructions.
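
For illustration, here is a minimal sketch (the loop, arrays, and statement labels S1/S2 are ours, not from the source) of turning a lexically backward dependence into a lexically forward one by reordering statements:

int i, n = 128;
float a[128], b[128], c[128];

/* Lexically backward: S2 (below) writes a[i + 1], which S1 (above)
   reads on the next iteration, so the dependence source sits
   lexically after its sink. */
for (i = 0; i < n - 1; i++) {
    b[i]     = a[i] + 1;    /* S1: reads a[i]      */
    a[i + 1] = c[i] * 2;    /* S2: writes a[i + 1] */
}

/* Swapping the statements is safe (S1 and S2 are independent within
   one iteration) and makes the dependence lexically forward, so each
   statement can now be vectorized. */
for (i = 0; i < n - 1; i++) {
    a[i + 1] = c[i] * 2;    /* S2 produces first                    */
    b[i]     = a[i] + 1;    /* S1 consumes a value already computed */
}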


Data precision

Integer precision (bit-size) must be kept during vector instruction execution. The correct vector instruction must be chosen based on the size and behavior of the internal integers. Also, with mixed integer types, extra care must be taken to promote/demote them correctly without losing precision. Special care must be taken with sign extension (because multiple integers are packed inside the same register) and during shift operations, or operations with carry bits that would otherwise be taken into account.

Floating-point precision must be kept as well, unless IEEE-754 compliance is turned off, in which case operations will be faster but the results may vary slightly. Large variations, even when ignoring IEEE-754, usually signify programmer error.
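
As a hedged illustration (the function and names are ours), here is mixed-width integer code where the compiler must select a widening vector multiply and sign-extend the narrow lanes correctly:

#include <stdint.h>

/* Products of two int8_t arrays stored as int16_t: each product must be
   formed at 16-bit width, and negative inputs must be sign-extended when
   the 8-bit lanes are unpacked into 16-bit lanes. */
void widen_mul(const int8_t *a, const int8_t *b, int16_t *c, int n)
{
    for (int i = 0; i < n; i++)
        c[i] = (int16_t)a[i] * (int16_t)b[i];
}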


Theory

To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and realign them, if necessary. Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions, changing appropriate candidates to vector instructions, which operate on multiple data items.


Building the dependency graph

The first step is to build the dependency graph, identifying which statements depend on which other statements. This involves examining each statement and identifying every data item that the statement accesses, mapping array access modifiers to functions and checking every access's dependency to all others in all statements. Alias analysis can be used to determine whether different variables access (or intersect) the same region in memory.

The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits, and the array type is 32 bits, the vector size is 128/32 = 4. All other non-cyclic dependencies should not invalidate vectorization, since there won't be any concurrent access in the same vector instruction.

Suppose the vector size is the same as 4 ints:

for (i = 0; i < 128; i++) {
    a[i] = a[i - 16];   // distance 16 > 4: cannot fall within one vector operation, safe to ignore
    a[i] = a[i - 1];    // distance 1 <= 4: stays in the dependency graph
}


Clustering

Using the graph, the optimizer can then cluster the strongly connected components (SCC) and separate vectorizable statements from the rest. For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, in which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.
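
A hedged sketch of the resulting loop distribution (s1, s2, s3 and s2_vec are placeholder names, not from the source):

/* Original loop: three statement groups per iteration. */
for (i = 0; i < n; i++) {
    s1(i);   /* SCC1+SCC2: not vectorizable */
    s2(i);   /* SCC3: vectorizable          */
    s3(i);   /* SCC4: not vectorizable      */
}

/* After distribution: one loop per group, original order preserved;
   only the middle loop becomes vector code. */
for (i = 0; i < n; i++)    s1(i);
for (i = 0; i < n; i += 4) s2_vec(i);   /* vectorized form of s2 */
for (i = 0; i < n; i++)    s3(i);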


Detecting idioms

Some non-obvious dependencies can be further optimized based on specific idioms. For instance, the following self-data-dependency can be vectorized because the values on the right-hand side (RHS) are fetched before the left-hand value is stored, so there is no way the data can change within the assignment:

a[i] = a[i] + a[i + 1];

Self-dependence by scalars can be vectorized by variable elimination.
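
For example (our sketch, not from the source), the scalar self-dependence in a sum reduction can be removed by expanding the scalar into per-lane partial sums:

/* Scalar self-dependence: every iteration reads and writes s. */
s = 0;
for (i = 0; i < n; i++)
    s = s + a[i];

/* After expanding s into four partial sums (assuming n % 4 == 0),
   the four lanes are independent; one final reduction combines them. */
s0 = s1 = s2 = s3 = 0;
for (i = 0; i < n; i += 4) {
    s0 += a[i];
    s1 += a[i + 1];
    s2 += a[i + 2];
    s3 += a[i + 3];
}
s = s0 + s1 + s2 + s3;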


General framework

The general framework for loop vectorization is split into four stages; a sketch in C follows the list.

* Prelude: where the loop-independent variables are prepared to be used inside the loop. This normally involves moving them to vector registers with specific patterns that will be used in vector instructions. This is also the place to insert the run-time dependence check. If the check decides vectorization is not possible, branch to Cleanup.
* Loop(s): all vectorized (or not) loops, separated by SCC clusters in order of appearance in the original code.
* Postlude: return all loop-independent variables, inductions and reductions.
* Cleanup: implement plain (non-vectorized) loops for iterations at the end of a loop that are not a multiple of the vector size, or for when run-time checks prohibit vector processing.
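
A minimal sketch of the four stages for an array addition (our illustration; the pointer-overlap test and the strip of four scalar statements stand in for real vector instructions):

void add_arrays(int *c, const int *a, const int *b, int n)
{
    int i = 0;
    /* Prelude: run-time dependence check; if c may overlap a or b,
       fall through to the cleanup loop. */
    if ((c + n <= a || a + n <= c) && (c + n <= b || b + n <= c)) {
        /* Loop: the vectorized body, four elements per step. */
        for (; i + 4 <= n; i += 4) {
            c[i]     = a[i]     + b[i];
            c[i + 1] = a[i + 1] + b[i + 1];
            c[i + 2] = a[i + 2] + b[i + 2];
            c[i + 3] = a[i + 3] + b[i + 3];
        }
        /* Postlude: nothing to reduce here; the induction variable i
           simply carries into the cleanup loop. */
    }
    /* Cleanup: scalar loop for the tail iterations, or for the whole
       range when the run-time check fails. */
    for (; i < n; i++)
        c[i] = a[i] + b[i];
}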


Run-time vs. compile-time

Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on-the-fly. This run-time check is made in the prelude stage and directs the flow to vectorized instructions if possible, otherwise reverting to standard processing, depending on the variables that are being passed in the registers or scalar variables.

The following code can easily be vectorized at compile time, as it doesn't have any dependence on external parameters. Also, the language guarantees that neither array will occupy the same region in memory as any other variable, as they are local variables and live only in the execution stack.

int a[128];
int b[128];
// initialize b

for (i = 0; i < 128; i++)
    a[i] = b[i] + 5;

On the other hand, the code below has no information on memory positions, because the references are pointers and the memory they point to may overlap:

void compute(int *a, int *b)
{
    int i;
    for (i = 0; i < 128; i++)
        a[i] = b[i] + 5;
}

A quick run-time check on the address of both a and b, plus the loop iteration space (128), is enough to tell whether the arrays overlap or not, thus revealing any dependencies. There exist some tools to dynamically analyze existing applications to assess the inherent latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes.
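
A minimal sketch (our code; may_overlap is a hypothetical helper, not part of any compiler) of the run-time check a compiler could emit for compute():

#include <stdint.h>

/* Nonzero if the n-element int ranges starting at a and b overlap. */
static int may_overlap(const int *a, const int *b, int n)
{
    uintptr_t pa = (uintptr_t)a, pb = (uintptr_t)b;
    uintptr_t len = (uintptr_t)n * sizeof(int);
    return !(pa + len <= pb || pb + len <= pa);
}

/* Prelude usage: if (!may_overlap(a, b, 128)) run the vectorized loop,
   otherwise fall back to the scalar loop. */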


Techniques

An example would be a program to multiply two vectors of numeric data. A scalar approach would be something like:

for (i = 0; i < 1024; i++)
    c[i] = a[i] * b[i];

This could be vectorized to look something like:

for (i = 0; i < 1024; i += 4)
    c[i:i+3] = a[i:i+3] * b[i:i+3];

Here, c[i:i+3] represents the four array elements from c[i] to c[i+3], and the vector processor can perform four operations for a single vector instruction. Since the four vector operations complete in roughly the same time as one scalar instruction, the vector approach can run up to four times faster than the original code.

There are two distinct compiler approaches: one based on the conventional vectorization technique and the other based on loop unrolling.


Loop-level automatic vectorization

This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps as follows.

# Find an innermost loop that can be vectorized.
# Transform the loop and generate vector codes.

In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle for vectorization is true data dependency shorter than the vector length. Other obstacles include function calls and short iteration counts.

Once the loop is determined to be vectorizable, the loop is stripmined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. Below, the component transformations for this step are shown using the above example.

* After stripmining:

for (i = 0; i < 1024; i += 4)
    for (j = 0; j < 4; j++)
        c[i + j] = a[i + j] * b[i + j];

* After loop distribution using temporary arrays:

for (i = 0; i < 1024; i += 4)
{
    for (j = 0; j < 4; j++) tA[j] = a[i + j];
    for (j = 0; j < 4; j++) tB[j] = b[i + j];
    for (j = 0; j < 4; j++) tC[j] = tA[j] * tB[j];
    for (j = 0; j < 4; j++) c[i + j] = tC[j];
}

* After replacing with vector codes:

for (i = 0; i < 1024; i += 4)
{
    vA = vec_ld(&a[i]);
    vB = vec_ld(&b[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &c[i]);
}


Basic block level automatic vectorization

This relatively new technique specifically targets modern SIMD architectures with short vector lengths. Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are as follows.

# The innermost loop is unrolled by a factor of the vector length to form a large loop body.
# Isomorphic scalar instructions (that perform the same operation) are packed into a vector instruction if dependencies do not prevent doing so.

To show step-by-step transformations for this approach, the same example is used again.

* After loop unrolling (by the vector length, assumed to be 4 in this case):

for (i = 0; i < 1024; i += 4)
{
    sA1 = ld(&a[i]);     sB1 = ld(&b[i]);     sC1 = sA1 * sB1;  st(sC1, &c[i]);
    sA2 = ld(&a[i + 1]); sB2 = ld(&b[i + 1]); sC2 = sA2 * sB2;  st(sC2, &c[i + 1]);
    sA3 = ld(&a[i + 2]); sB3 = ld(&b[i + 2]); sC3 = sA3 * sB3;  st(sC3, &c[i + 2]);
    sA4 = ld(&a[i + 3]); sB4 = ld(&b[i + 3]); sC4 = sA4 * sB4;  st(sC4, &c[i + 3]);
}

* After packing:

for (i = 0; i < 1024; i += 4)
{
    (sA1, sA2, sA3, sA4) = ld(&a[i:i+3]);
    (sB1, sB2, sB3, sB4) = ld(&b[i:i+3]);
    (sC1, sC2, sC3, sC4) = (sA1, sA2, sA3, sA4) * (sB1, sB2, sB3, sB4);
    st((sC1, sC2, sC3, sC4), &c[i:i+3]);
}

* After code generation:

for (i = 0; i < 1024; i += 4)
{
    vA = vec_ld(&a[i]);
    vB = vec_ld(&b[i]);
    vC = vec_mul(vA, vB);
    vec_st(vC, &c[i]);
}

Here, sA1, sB1, ... represent scalar variables and vA, vB, and vC represent vector variables. Most automatically vectorizing commercial compilers use the conventional loop-level approach, except the IBM XL Compiler, which uses both.


In the presence of control flow

The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates. The following code is used as an example to show these transformations:

for (i = 0; i < 1024; i++)
    if (A[i] > 0)
        C[i] = B[i];
    else
        D[i] = D[i - 1];

* After predication:

for (i = 0; i < 1024; i++)
{
    P = A[i] > 0;
    NP = !P;
    C[i] = B[i];     (P)
    D[i] = D[i - 1]; (NP)
}

where (P) denotes a predicate guarding the statement.

* After vectorization:

for (i = 0; i < 1024; i += 4)
{
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = B[i:i+3];     (vP)
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2];         (NP4)
    D[i+2] = D[i+1];         (NP3)
    D[i+1] = D[i];           (NP2)
    D[i]   = D[i-1];         (NP1)
}

* After removing vector predicates:

for (i = 0; i < 1024; i += 4)
{
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    D[i+3] = D[i+2]; (NP4)
    D[i+2] = D[i+1]; (NP3)
    D[i+1] = D[i];   (NP2)
    D[i]   = D[i-1]; (NP1)
}

* After removing scalar predicates:

for (i = 0; i < 1024; i += 4)
{
    vP  = A[i:i+3] > (0, 0, 0, 0);
    vNP = vec_not(vP);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vP);
    (NP1, NP2, NP3, NP4) = vNP;
    if (NP4) D[i+3] = D[i+2];
    if (NP3) D[i+2] = D[i+1];
    if (NP2) D[i+1] = D[i];
    if (NP1) D[i]   = D[i-1];
}


Reducing vectorization overhead in the presence of control flow

Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down the vector code with respect to the scalar baseline. The more complex the control flow becomes and the more instructions are bypassed in the scalar code, the larger the vectorization overhead becomes. To reduce this vectorization overhead, vector branches can be inserted to bypass vector instructions, similar to the way scalar branches bypass scalar instructions. Below, AltiVec predicates are used to show how this can be achieved.

* Scalar baseline (original code):

for (i = 0; i < 1024; i++)
{
    if (A[i] > 0)
    {
        C[i] = B[i];
        if (B[i] < 0)
            D[i] = E[i];
    }
}

* After vectorization in the presence of control flow:

for (i = 0; i < 1024; i += 4)
{
    vPA = A[i:i+3] > (0, 0, 0, 0);
    C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
    vT = E[i:i+3];
    vPB = B[i:i+3] < (0, 0, 0, 0);
    vPB = vec_and(vPB, vPA);
    D[i:i+3] = vec_sel(D[i:i+3], vT, vPB);
}

* After inserting vector branches:

for (i = 0; i < 1024; i += 4)
    if (vec_any_gt(A[i:i+3], (0, 0, 0, 0)))
    {
        vPA = A[i:i+3] > (0, 0, 0, 0);
        C[i:i+3] = vec_sel(C[i:i+3], B[i:i+3], vPA);
        vPB = B[i:i+3] < (0, 0, 0, 0);
        vPB = vec_and(vPB, vPA);
        if (vec_any_ne(vPB, (0, 0, 0, 0)))
        {
            vT = E[i:i+3];
            D[i:i+3] = vec_sel(D[i:i+3], vT, vPB);
        }
    }

There are two things to note in the final code with vector branches. First, the predicate-defining instruction for vPA is also included within the body of the outer vector branch by using vec_any_gt. Second, the profitability of the inner vector branch for vPB depends on the conditional probability of vPB having false values in all fields given vPA has false values in all fields.

Consider an example where the outer branch in the scalar baseline is always taken, bypassing most instructions in the loop body. The intermediate case above, without vector branches, executes all vector instructions. The final code, with vector branches, executes both the comparison and the branch in vector mode, potentially gaining performance over the scalar baseline.


Manual vectorization

In most C and C++ compilers, it is possible to use intrinsic functions to manually vectorize, at the expense of programmer effort and maintainability.
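
As a hedged illustration (our code, using x86 SSE intrinsics, which most C and C++ compilers provide), here is the earlier multiply loop written with intrinsics:

#include <xmmintrin.h>   /* x86 SSE intrinsics */

void mul_arrays(float *c, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);          /* load 4 floats        */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_mul_ps(va, vb)); /* 4 multiplies at once */
    }
    for (; i < n; i++)                            /* scalar cleanup       */
        c[i] = a[i] * b[i];
}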


See also

* Chaining (vector processing)
* Vector processor

