Enabling Transformation
In computer science, an enabling transformation is a compiler optimization that increases the effectiveness of other compiler optimizations. Such an optimization may or may not improve program performance by itself, but it alters the structure of the program in such a way that other optimizations can produce superior results. Typical enabling transformations include:

* Inline expansion: By inserting the body of a function in place of its call site, a large collection of intraprocedural optimizations such as dead code elimination, loop-invariant code motion, and induction variable elimination can take advantage of information from both the caller and the callee.
* Loop skewing: By "skewing" the logical shape of an array, this loop optimization can (when combined with loop interchange) eliminate loop-carried dependencies, allowing an inner loop to be parallelized.
* Loop unswitching: By moving a loop-invariant conditional from inside a loop to outside of it and duplicating the loop body inside each branch of the conditional, it produces specialized loops that are easier to optimize, for example by parallelizing the branch-free copy.
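
As a concrete sketch of the first item (the function and helper names below are invented for illustration, not taken from any particular program): inlining is_debug_enabled by itself only removes a call, but once the callee's body is visible at the call site, constant folding and dead code elimination can remove the whole branch.

    /* hypothetical helpers, declared only so the fragment is complete */
    void log_items(int *data, int n);
    void transform(int *data, int n);

    static int is_debug_enabled(void)
    {
        return 0;                     /* this build has debugging switched off */
    }

    void process(int *data, int n)
    {
        if (is_debug_enabled())       /* before inlining: an opaque call */
            log_items(data, n);
        transform(data, n);
    }

    /* After inline expansion the test reads "if (0)", so constant folding and
       dead code elimination can delete the branch and the log_items call,
       leaving in effect: */
    void process_after_inlining(int *data, int n)
    {
        transform(data, n);
    }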



Computer Science
Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, information theory, and automation) to practical disciplines (including the design and implementation of hardware and software). Computer science is generally considered an area of academic research and distinct from computer programming. Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories o ...


Compiler Optimization
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of ''optimizing transformations'', algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster. It has been shown that some code optimization problems are NP-complete, or even undecidable. In practice, factors such as the programmer's willingness to wait for the compiler to complete its task place upper limits on the optimizations that a compiler might provide. Optimization is generally a very CPU- and memory-intensive process. In the past, computer memory limitations were also a major factor in limiting which optimizations co ...


Inline Expansion
In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function. Inline expansion is similar to macro expansion, but occurs during compilation, without changing the source code (the text), while macro expansion occurs prior to compilation and results in different text that is then processed by the compiler.

Inlining is an important optimization, but it has complicated effects on performance. As a rule of thumb, some inlining will improve speed at a very minor cost in space, but excess inlining will hurt speed, because the inlined code consumes too much of the instruction cache, and will also cost significant space. A survey of the modest academic literature on inlining from the 1980s and 1990s is given in Peyton Jones & Marlow 1999.

Overview

Inline expansion is similar to macro expansion in that the compiler places a new copy of the function in each place it is called. Inlined functions run a little faster ...


Dead Code Elimination
In compiler theory, dead-code elimination (also known as DCE, dead-code removal, dead-code stripping, or dead-code strip) is a compiler optimization that removes code which does not affect the program results. Removing such code has several benefits: it shrinks program size, an important consideration in some contexts, and it allows the running program to avoid executing irrelevant operations, which reduces its running time. It can also enable further optimizations by simplifying program structure. ''Dead code'' includes code that can never be executed (''unreachable code''), and code that only affects ''dead variables'' (written to, but never read again), that is, irrelevant to the program.

Examples

Consider the following example written in C.

    int foo(void)
    {
        int a = 24;
        int b = 25;   /* assignment to a dead variable */
        int c;
        c = a * 4;
        return c;
        b = 24;       /* unreachable code */
        return 0;
    }

Simple analysis of the uses of values would show that the value of b after the first assignment is not used inside foo. Furthermore, b is declared as a local variable inside foo, so its value cannot be used outside foo. Thus, the variable b is dead, and an optimizer can eliminate it and its assignments.
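
After dead-code elimination, the function above might reduce to something like the following sketch (what a particular compiler actually emits depends on its other passes; constant folding, for instance, could shrink this further to return 96;):

    int foo(void)
    {
        int a = 24;
        return a * 4;   /* b, its assignments, and the unreachable tail are gone */
    }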




Loop-invariant Code Motion
In computer programming, loop-invariant code consists of statements or expressions (in an imperative programming language) that can be moved outside the body of a loop without affecting the semantics of the program. Loop-invariant code motion (also called hoisting or scalar promotion) is a compiler optimization that performs this movement automatically.

Example

In the following code sample, two optimizations can be applied.

    int i = 0;
    while (i < n) {
        x = y + z;
        a[i] = 6 * i + x * x;
        ++i;
    }

Although the calculations x = y + z and x * x are loop-invariant, precautions must be taken before moving the code outside the loop. It is possible that the loop condition is false (for example, if n holds a negative value), and in such a case the loop body should not be executed at all. One way of guaranteeing correct behaviour is to use a conditional branch outside of the loop. Evaluating the loop condition can have ...
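
A sketch of what the hoisted loop could look like, with the conditional branch mentioned above guarding the moved code so that nothing is evaluated when the loop would not run (the temporary t1 is an invented name):

    int i = 0;
    if (i < n) {              /* guard: only enter if the loop body runs at least once */
        x = y + z;
        int t1 = x * x;       /* the invariant product, computed once */
        do {
            a[i] = 6 * i + t1;
            ++i;
        } while (i < n);
    }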

Induction Variable Elimination
In computer science, an induction variable is a variable that gets increased or decreased by a fixed amount on every iteration of a loop, or is a linear function of another induction variable. For example, in the following loop, i and j are induction variables:

    for (i = 0; i < 10; ++i) {
        j = 17 * i;
    }


Application to strength reduction

A common compiler optimization is to recognize the existence of induction variables and replace them with simpler computations; for example, the code above could be rewritten by the compiler as follows, on the assumption that the addition of a constant will be cheaper than a multiplication.

    j = -17;
    for (i = 0; i < 10; ++i) {
        j = j + 17;
    }

This optimization is a special case of strength reduction.
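
When i has no other uses, the induction variable i itself can then be eliminated and the loop driven by j directly; a sketch of that final form (the bound 170 follows from ten iterations with step 17):

    for (j = 0; j < 170; j += 17) {
        /* body that uses j goes here; in the minimal example above nothing
           else uses j, so the entire loop could in turn be removed as dead code */
    }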

Loop Skewing
Loop skewing is a loop transformation that reshapes ("skews") the iteration space of a nested loop, typically by adding a multiple of the outer loop index to the inner loop index and adjusting the loop bounds and array subscripts to match. It performs exactly the same computations as the original nest, but it realigns loop-carried dependencies so that, after a subsequent loop interchange, the inner loop no longer carries a dependence and can be parallelized.
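
A sketch of the idea on a classic wavefront-style recurrence (the array a, the bounds N and M, and the function names are invented for this illustration). Each element depends on its upper and left neighbours, so neither original loop can be parallelized directly; skewing the inner index by the outer one and then interchanging the loops leaves the new inner loop free of carried dependencies.

    /* original loop nest: each element depends on the row above and the
       column to the left, so both loops carry dependencies */
    void wavefront_original(int N, int M, double a[N][M])
    {
        for (int i = 1; i < N; i++)
            for (int j = 1; j < M; j++)
                a[i][j] = a[i-1][j] + a[i][j-1];
    }

    /* after skewing (k = i + j) and interchanging so k is the outer loop:
       for a fixed k, the iterations over i touch independent anti-diagonal
       elements, so the inner loop may be run in parallel */
    void wavefront_skewed(int N, int M, double a[N][M])
    {
        for (int k = 2; k <= N + M - 2; k++) {
            int lo = (k - M + 1 > 1) ? k - M + 1 : 1;   /* keep j = k - i within 1..M-1 */
            int hi = (k - 1 < N - 1) ? k - 1 : N - 1;   /* keep i within 1..N-1 */
            for (int i = lo; i <= hi; i++) {
                int j = k - i;
                a[i][j] = a[i-1][j] + a[i][j-1];
            }
        }
    }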



Loop Interchange
In compiler theory, loop interchange is the process of exchanging the order of two iteration variables used by a nested loop. The variable used in the inner loop switches to the outer loop, and vice versa. It is often done to ensure that the elements of a multi-dimensional array are accessed in the order in which they are present in memory, improving locality of reference.

For example, in the code fragment:

    for i from 0 to 10
        for j from 0 to 20
            a[i,j] = i + j

loop interchange would result in:

    for j from 0 to 20
        for i from 0 to 10
            a[i,j] = i + j

On occasion, such a transformation may create opportunities to further optimize, such as automatic vectorization of the array assignments.

The utility of loop interchange

The major purpose of loop interchange is to take advantage of the CPU cache when accessing array elements. When a processor accesses an array element for the first time, it will retrieve an entire block of data from memory to cache. That block is likely to ...



Loop Dependence Analysis
In computer science, loop dependence analysis is a process which can be used to find dependencies within iterations of a loop with the goal of determining different relationships between statements. These dependent relationships are tied to the order in which different statements access memory locations. Using the analysis of these relationships, execution of the loop can be organized to allow multiple processors to work on different portions of the loop in parallel. This is known as parallel processing. In general, loops can consume a lot of processing time when executed as serial code. Through parallel processing, it is possible to reduce the total execution time of a program through sharing the processing load among multiple processors. The process of organizing statements to allow multiple processors to work on different portions of a loop is often referred to as parallelization. In order to see how we can exploit parallelization, we have to first analyze the dependencies with ...
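
As a small illustration (the arrays and bound are invented for this sketch), the first loop below carries a flow dependence from each iteration to the next, so its iterations must run in order, while the second has no loop-carried dependence and can be split across processors:

    void example(int n, const int *b, int *a, int *c)
    {
        /* loop-carried flow dependence: iteration i reads a[i-1],
           which iteration i-1 wrote */
        for (int i = 1; i < n; i++)
            a[i] = a[i - 1] + b[i];

        /* no loop-carried dependence: each iteration touches only
           its own elements */
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }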


Loop Unswitching
Loop unswitching is a compiler optimization. It moves a conditional from inside a loop to outside of it by duplicating the loop's body and placing a version of it inside each of the if and else clauses of the conditional. This can improve the parallelization of the loop. Since modern processors can operate quickly on vectors, this improvement increases the speed of the program.

Here is a simple example. Suppose we want to add the two arrays ''x'' and ''y'' and also do something depending on the variable ''w''. We have the following C code:

    int i, w, x[1000], y[1000];
    for (i = 0; i < 1000; i++) {
        x[i] += y[i];
        if (w)
            y[i] = 0;
    }

The conditional inside this loop makes it difficult to safely parallelize it. When we unswitch the loop, this becomes:
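(a sketch, assuming the loop body shown above)

    int i, w, x[1000], y[1000];
    if (w) {
        for (i = 0; i < 1000; i++) {
            x[i] += y[i];
            y[i] = 0;
        }
    } else {
        for (i = 0; i < 1000; i++) {
            x[i] += y[i];
        }
    }

Each copy of the loop now has a branch-free body, so each can be vectorized or parallelized on its own.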



Loop Invariant
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked within the code by an assertion call. Knowing its invariant(s) is essential in understanding the effect of a loop. In formal program verification, particularly the Floyd-Hoare approach, loop invariants are expressed by formal predicate logic and used to prove properties of loops and by extension algorithms that employ loops (usually correctness properties). The loop invariants will be true on entry into a loop and following each iteration, so that on exit from the loop both the loop invariants and the loop termination condition can be guaranteed. From a programming methodology viewpoint, the loop invariant can be viewed as a more abstract specification of the loop, which characterizes the deeper purpose of the loop beyond the details of this implementation. A survey article covers fundamental algorithms from many a ...
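
As a small illustration, here is a sketch of a summation loop together with its invariant, stated both as a comment and as a run-time assertion call (the helper sum_upto exists only for this sketch):

    #include <assert.h>

    /* returns a[0] + ... + a[k-1]; used only to state the invariant */
    static int sum_upto(const int *a, int k)
    {
        int s = 0;
        for (int t = 0; t < k; t++)
            s += a[t];
        return s;
    }

    int total(const int *a, int n)
    {
        int sum = 0;
        int i = 0;
        /* invariant: sum == a[0] + ... + a[i-1]; true on entry (empty sum) */
        while (i < n) {
            assert(sum == sum_upto(a, i));
            sum += a[i];
            ++i;
            /* the invariant has been re-established for the new i */
        }
        /* on exit, the invariant together with !(i < n) gives
           sum == a[0] + ... + a[n-1] */
        return sum;
    }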