Automatic vectorization is a major research topic in computer science.
Background
Early computers usually had one logic unit, which executed one instruction on one pair of operands at a time. Computer languages and programs therefore were designed to execute in sequence. Modern computers, though, can do many things at once. So, many optimizing compilers perform automatic vectorization, where parts of sequential programs are transformed into parallel operations. Loop vectorization transforms procedural loops by assigning a processing unit to each pair of operands. Programs spend most of their time within such loops. Therefore, vectorization can significantly accelerate them, especially over large data sets. Loop vectorization is implemented in SIMD instruction set extensions such as Intel's MMX, SSE, and AVX, Power ISA's AltiVec, and ARM's NEON.
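For illustration, consider a sequential loop that adds two arrays element by element, and its vectorized counterpart (a sketch; the slice notation in the second loop is pseudocode for a single vector instruction, assuming a vector width of four elements and n divisible by four):

    // Sequential: one addition per iteration.
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    // Vectorized (pseudocode): four additions per iteration.
    for (i = 0; i < n; i += 4)
        c[i:i+3] = a[i:i+3] + b[i:i+3];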
Guarantees
Automatic vectorization, like any loop optimization or other compile-time optimization, must exactly preserve program behavior.
Data dependencies
All dependencies must be respected during execution to prevent incorrect results. In general, loop-invariant dependencies and lexically forward dependencies can be vectorized easily, and lexically backward dependencies can be transformed into lexically forward dependencies. However, these transformations must be done safely, to ensure that the dependencies between all statements remain true to the original program. Cyclic dependencies must be processed independently of the vectorized instructions.
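As an illustrative sketch (array names are hypothetical), the first loop below contains a lexically backward dependence, and the second a cyclic one:

    for (i = 0; i < n - 1; i++) {
        y[i] = x[i] * 2;   // S1: reads x[i]
        x[i + 1] = z[i];   // S2: writes the element S1 will read in the
                           // next iteration; the source (S2) is lexically
                           // after the sink (S1), a lexically backward
                           // dependence that becomes forward if S2 is
                           // moved above S1
    }

    for (i = 1; i < n; i++)
        r[i] = r[i - 1] + w[i];   // cyclic dependence (a recurrence):
                                  // must be handled outside the
                                  // vectorized instructions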
Data precision
Integer precision (bit-size) must be kept during vector instruction execution: the correct vector instruction must be chosen based on the size and behavior of the underlying integer types, and mixed integer types must be promoted or demoted without losing precision.
Theory
To vectorize a program, the compiler's optimizer must first understand the dependencies between statements and realign them if necessary. Once the dependencies are mapped, the optimizer must properly arrange the implementing instructions, changing appropriate candidates to vector instructions, which operate on multiple data items.
Building the dependency graph
The first step is to build the dependency graph, identifying which statements depend on which other statements. This involves examining each statement, identifying every data item it accesses, mapping array access modifiers to functions, and checking every access's dependencies against all others in all statements. Alias analysis can be used to certify that different variables access (or intersect) the same region in memory. The dependency graph contains all local dependencies with distance not greater than the vector size. So, if the vector register is 128 bits and the array type is 32 bits, the vector size is 128/32 = 4. Other, non-cyclic dependencies do not invalidate vectorization, since there will not be any concurrent access within the same vector instruction. Suppose the vector size is the same as 4 ints:
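The listing that originally followed appears to be missing; the sketch below assumes 128-bit registers and 32-bit ints, so four elements per vector instruction:

    for (i = 16; i < 128; i++) {
        a[i] = a[i - 16];   // dependence distance 16 > 4: the elements
                            // touched by one vector instruction never
                            // overlap, so this does not block vectorization
        b[i] = b[i - 1];    // distance 1 < 4: lanes of a single vector
                            // instruction would need each other's results,
                            // so this dependence enters the graph and
                            // blocks naive vectorization
    }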
Clustering
Using the graph, the optimizer can then cluster the strongly connected components (SCCs) and separate vectorizable statements from the rest. For example, consider a program fragment containing three statement groups inside a loop: (SCC1+SCC2), SCC3 and SCC4, in that order, of which only the second group (SCC3) can be vectorized. The final program will then contain three loops, one for each group, with only the middle one vectorized. The optimizer cannot join the first with the last without violating statement execution order, which would invalidate the necessary guarantees.
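A sketch of such a distribution (the statements and arrays are illustrative, and the slice notation is pseudocode):

    // Original loop: S1 and S2 form one cycle (SCC1+SCC2), S3 is
    // independent (SCC3), and S4 forms its own cycle (SCC4).
    for (i = 1; i < n; i++) {
        a[i] = b[i - 1] + 1;   // S1
        b[i] = a[i] * 2;       // S2
        c[i] = c[i] + 5;       // S3
        d[i] = d[i - 1] * 3;   // S4
    }

    // After distribution: only the middle loop is vectorized.
    for (i = 1; i < n; i++) {
        a[i] = b[i - 1] + 1;
        b[i] = a[i] * 2;
    }
    for (i = 1; i < n; i += 4)         // remainder iterations omitted
        c[i:i+3] = c[i:i+3] + 5;
    for (i = 1; i < n; i++)
        d[i] = d[i - 1] * 3;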
Detecting idioms
Some non-obvious dependencies can be further optimized based on specific idioms. For instance, the following self-data-dependence can be vectorized because the values on the right-hand side (RHS) are fetched and then stored to the left-hand side, so there is no way the data will change within the assignment.
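(The original listing is missing; a minimal sketch of such a self-dependence:)

    for (i = 0; i < n; i++)
        a[i] = a[i] + b[i];   // a[i] appears on both sides, but the RHS is
                              // fully read before the LHS is written, so a
                              // vector load, add, and store preserves the
                              // scalar semantics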
General framework
The general framework for loop vectorization is split into four stages:
* Prelude: where the loop-independent variables are prepared for use inside the loop. This normally involves moving them to vector registers with specific patterns that will be used in vector instructions. This is also the place to insert the run-time dependence check. If the check decides vectorization is not possible, branch to Cleanup.
* Loop(s): all vectorized (or not) loops, separated by SCC clusters in order of appearance in the original code.
* Postlude: return all loop-independent variables, inductions and reductions.
* Cleanup: implement plain (non-vectorized) loops for iterations at the end of a loop that are not a multiple of the vector size, or for when run-time checks prohibit vector processing.
A sketch of this structure is shown below.
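The sketch assumes a loop that scales an array b into a by a loop-invariant k; the run-time check and the broadcast are hypothetical helpers, and the slice notation is pseudocode:

    i = 0;
    // Prelude: run-time dependence check and loop-invariant setup.
    if (regions_may_overlap(a, b, n))   // hypothetical check
        goto cleanup;                   // vectorization rejected
    vk = broadcast(k);                  // hypothetical: splat k into a vector

    // Loop: the vectorized body, stepping by the vector size.
    for (; i + 4 <= n; i += 4)
        a[i:i+3] = b[i:i+3] * vk;

    // Postlude: copy out inductions and reductions (none in this sketch).

    cleanup:
    // Cleanup: scalar code for leftover iterations, or for the whole
    // iteration space if the run-time check failed.
    for (; i < n; i++)
        a[i] = b[i] * k;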
Run-time vs. compile-time
Some vectorizations cannot be fully checked at compile time. For example, library functions can defeat optimization if the data they process is supplied by the caller. Even in these cases, run-time optimization can still vectorize loops on the fly. The run-time check is made in the prelude stage and directs the flow to vector instructions if possible; otherwise execution reverts to standard scalar processing. The following code can easily be vectorized at compile time, as it doesn't have any dependence on external parameters. Also, the language guarantees that neither variable will occupy the same region in memory as any other variable, as they are local variables that live only on the execution stack.
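The listings that originally accompanied this passage are missing; the sketches below illustrate both cases. The first loop uses local arrays and can be vectorized purely at compile time; the second works through pointer parameters, so the compiler must emit a run-time overlap check in the prelude:

    int i;
    int a[128], b[128];
    // ... initialize b ...
    for (i = 0; i < 128; i++)
        a[i] = b[i] + 5;    // a and b are distinct locals: provably no
                            // overlap, vectorizable at compile time

    void compute(int *a, int *b)
    {
        int i;
        for (i = 0; i < 128; i++)
            a[i] = b[i] + 5;   // vectorized only if the run-time check
                               // proves [a, a+128) and [b, b+128) disjoint
    }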
(In C99 and later, declaring the parameters as int *restrict a, int *restrict b tells the compiler that the memory ranges pointed to by a and b do not overlap, leading to the same outcome as the first example above.)
Some tools exist to dynamically analyze existing applications and assess the latent potential for SIMD parallelism, exploitable through further compiler advances and/or via manual code changes.
Techniques
An example would be a program to multiply two vectors of numeric data. A scalar approach would be something like:
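(The original listing is missing; a minimal C sketch with illustrative array names:)

    for (i = 0; i < 1024; i++)
        c[i] = a[i] * b[i];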
Loop-level automatic vectorization
This technique, used for conventional vector machines, tries to find and exploit SIMD parallelism at the loop level. It consists of two major steps:
# Find an innermost loop that can be vectorized.
# Transform the loop and generate vector code.
In the first step, the compiler looks for obstacles that can prevent vectorization. A major obstacle is a true data dependency shorter than the vector length. Other obstacles include function calls and short iteration counts. Once the loop is determined to be vectorizable, it is strip-mined by the vector length and each scalar instruction within the loop body is replaced with the corresponding vector instruction. The component transformations for this step, using the example above, are shown below.
* After strip-mining
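(The transformed listings are missing; a sketch based on the multiply loop above, assuming a vector length of four; the slice notation is pseudocode:)

    // After strip-mining by the vector length:
    for (i = 0; i < 1024; i += 4)
        for (j = 0; j < 4; j++)
            c[i + j] = a[i + j] * b[i + j];

    // After replacing the inner loop with a vector instruction:
    for (i = 0; i < 1024; i += 4)
        c[i:i+3] = a[i:i+3] * b[i:i+3];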
Basic block level automatic vectorization
This relatively new technique specifically targets modern SIMD architectures with short vector lengths. Although loops can be unrolled to increase the amount of SIMD parallelism in basic blocks, this technique exploits SIMD parallelism within basic blocks rather than loops. The two major steps are:
# The innermost loop is unrolled by a factor of the vector length to form a large loop body.
# Isomorphic scalar instructions (those that perform the same operation) are packed into a vector instruction if dependencies do not prevent doing so.
To show the step-by-step transformations for this approach, the same example is used again.
* After loop unrolling (by the vector length, assumed to be 4 in this case)
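(The transformed listings are missing; a sketch, again using the multiply loop and pseudocode slice notation:)

    // After unrolling by the vector length:
    for (i = 0; i < 1024; i += 4) {
        c[i]     = a[i]     * b[i];
        c[i + 1] = a[i + 1] * b[i + 1];
        c[i + 2] = a[i + 2] * b[i + 2];
        c[i + 3] = a[i + 3] * b[i + 3];
    }

    // After packing the four isomorphic multiplies into one vector
    // instruction:
    for (i = 0; i < 1024; i += 4)
        c[i:i+3] = a[i:i+3] * b[i:i+3];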
In the presence of control flow
The presence of if-statements in the loop body requires the execution of instructions in all control paths to merge the multiple values of a variable. One general approach is to go through a sequence of code transformations: predication → vectorization (using one of the above methods) → remove vector predicates → remove scalar predicates. The following code illustrates these transformations:
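(The example listing is missing; a sketch of the first two steps on a hypothetical loop, with the (p)/(!p) predicate annotations as pseudocode:)

    // Original loop with control flow:
    for (i = 0; i < 1024; i++)
        if (a[i] > 0)
            b[i] = a[i] * 2;
        else
            b[i] = a[i] - 1;

    // After predication: both paths execute, guarded by predicates.
    for (i = 0; i < 1024; i++) {
        p = a[i] > 0;
        b[i] = a[i] * 2;   (p)    // takes effect only where p holds
        b[i] = a[i] - 1;   (!p)   // takes effect only where p is false
    }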
Reducing vectorization overhead in the presence of control flow
Having to execute the instructions in all control paths in vector code has been one of the major factors that slow down vector code with respect to the scalar baseline. The more complex the control flow becomes, and the more instructions are bypassed in the scalar code, the larger the vectorization overhead grows. To reduce this overhead, vector branches can be inserted to bypass vector instructions, similar to the way scalar branches bypass scalar instructions. Below, AltiVec predicates are used to show how this can be achieved.
* Scalar baseline (original code)
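(The listings are missing; a sketch using the real AltiVec predicate intrinsic vec_any_gt, with the vector loads, stores, and select left as pseudocode:)

    // Scalar baseline: the body is skipped entirely when the test fails.
    for (i = 0; i < 1024; i++)
        if (a[i] > 0)
            b[i] = a[i] * 2;

    // With a vector branch: vec_any_gt tests all four lanes at once and
    // bypasses the guarded vector work when no lane takes the branch.
    for (i = 0; i < 1024; i += 4) {
        vA = a[i:i+3];                    // pseudocode vector load
        if (vec_any_gt(vA, vZero))        // AltiVec: any lane > 0?
            b[i:i+3] = select(vA > 0, vA * 2, b[i:i+3]);  // pseudocode
    }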
Manual vectorization
In most C and C++ compilers, it is possible to use intrinsic functions to vectorize manually, at the expense of programmer effort and maintainability.
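For example, an elementwise add can be hand-vectorized with the x86 SSE intrinsics from <xmmintrin.h> (a minimal sketch; the function and variable names are illustrative):

    #include <stddef.h>
    #include <xmmintrin.h>

    // Adds b into a, four floats at a time, with a scalar remainder loop.
    void add_arrays(float *a, const float *b, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&a[i], _mm_add_ps(va, vb));
        }
        for (; i < n; i++)   // leftover iterations
            a[i] += b[i];
    }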
See also
* Chaining (vector processing)
* Vector processor