Partial-redundancy Elimination
In compiler theory, partial redundancy elimination (PRE) is a compiler optimization that eliminates expressions that are redundant on some but not necessarily all paths through a program. PRE is a form of common subexpression elimination. An expression is called partially redundant if the value computed by the expression is already available on some but not all paths through a program to that expression. An expression is fully redundant if the value computed by the expression is available on all paths through the program to that expression. PRE can eliminate partially redundant expressions by inserting the partially redundant expression on the paths that do not already compute it, thereby making the partially redundant expression fully redundant. For instance, in the following code:

    if (some_condition) {
        // some code that does not alter x
        y = x + 4;
    }
    else {
        // other code that does not alter x
    }
    z = x + 4;

the expression x + 4 assigned to z is partially redundant because it is computed twice if some_condition is true. PRE would perform code motion on the expres ...
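Sketching the transformation described above (the temporary name t and the wrapper function are illustrative assumptions, not part of the original example), PRE inserts the computation on the path that lacked it, so the recomputation at z becomes a simple reuse:

    int after_pre(int some_condition, int x) {
        int y = 0, z, t;
        if (some_condition) {
            /* some code that does not alter x */
            t = x + 4;
            y = t;
        } else {
            /* other code that does not alter x */
            t = x + 4;   /* inserted by PRE on the path that lacked the computation */
        }
        z = t;           /* the recomputation, now fully redundant, is replaced by a reuse */
        return y + z;
    }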


Compiler Theory
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program (Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman, second edition, 2007). There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes a program that translates from a low-level language to a h ...


Compiler Optimization
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster. It has been shown that some code optimization problems are NP-complete, or even undecidable. In practice, factors such as the programmer's willingness to wait for the compiler to complete its task place upper limits on the optimizations that a compiler might provide. Optimization is generally a very CPU- and memory-intensive process. In the past, computer memory limitations were also a major factor in limiting which optimizations co ...


Expression (programming)
In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation. In simple settings, the resulting value is usually one of various primitive types, such as numerical, string, boolean, complex data type or other types. Expression is often contrasted with statement, a syntactic entity that has no value (an instruction).

Examples
For example, 2 + 3 is both an arithmetic and programming expression, which evaluates to 5. A variable is an expression because it denotes a value in memory, so y + 6 is also an expression. An example of a relational expression is 4 ≠ 4, which eval ...
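Written as C purely for illustration (the variable names are invented), the examples in this passage look like:

    void expression_examples(void) {
        int y = 4;
        int a = 2 + 3;      /* arithmetic expression, evaluates to 5                   */
        int b = y + 6;      /* a variable is itself an expression, so y + 6 is one too */
        int c = (4 != 4);   /* relational expression, evaluates to 0 (false) in C      */
        (void)a; (void)b; (void)c;  /* silence unused-variable warnings */
    }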


Common Subexpression Elimination
In compiler theory, common subexpression elimination (CSE) is a compiler optimization that searches for instances of identical expressions (i.e., they all evaluate to the same value), and analyzes whether it is worthwhile replacing them with a single variable holding the computed value.

Example
In the following code:

    a = b * c + g;
    d = b * c * e;

it may be worth transforming the code to:

    tmp = b * c;
    a = tmp + g;
    d = tmp * e;

if the cost of storing and retrieving tmp is less than the cost of calculating b * c an extra time.

Principle
The possibility to perform CSE is based on available expression analysis (a data flow analysis). An expression b*c is available at a point p in a program if:
* every path from the initial node to p evaluates b*c before reaching p,
* and there are no assignments to b or c after the evaluation but before p.
The cost/benefit analysis performed by an optimizer will calculate whether the cost of the store to tmp is less than the c ...
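A small contrast case may help with the availability condition above; in this invented fragment the second b * c cannot be reused, because b is reassigned between the two evaluations:

    /* Here CSE must not replace the second b * c: it is a different value. */
    int no_cse(int b, int c, int g, int e) {
        int a = b * c + g;
        b = b + 1;          /* kills the availability of b * c */
        int d = b * c * e;  /* must be recomputed */
        return a + d;
    }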




Value (computer science)
In computer science and software programming, a value is the representation of some entity that can be manipulated by a program. The members of a type are the values of that type. The "value of a variable" is given by the corresponding mapping in the environment. In languages with assignable variables, it becomes necessary to distinguish between the r-value (or contents) and the l-value (or location) of a variable. In declarative (high-level) languages, values have to be referentially transparent. This means that the resulting value is independent of the location of the expression needed to compute the value. Only the contents of the location (the bits, whether they are 1 or 0) and their interpretation are significant.

Value category
Despite its name, in the C++ language standards this terminology is used to categorize expressions, not values.

Assignment: l-values and r-values
Some languages use the idea of l-values and r-values, deriving from the typical mode o ...
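A minimal sketch in C (the variable names are invented) of the distinction between the location written and the value read:

    void lvalue_rvalue(void) {
        int x = 1;
        int y = 2;
        x = y + 3;   /* x is used as an l-value: a location to store into        */
                     /* y + 3 is an r-value: a value that is read/computed (5)   */
        (void)x;
    }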


Code Motion
In computer science, code motion, also known as code hoisting, code sinking, loop-invariant code motion, or code factoring, is a blanket term for any process that moves code within a program for performance or size benefits, and is a common optimization performed in most optimizing compilers. It can be difficult to differentiate between different types of code motion, due to the inconsistent meaning of the terms surrounding it.

Uses
Code motion has a variety of uses and benefits, many of which overlap each other in their implementation.

Removing unused/useless operations
Code sinking, also known as lazy code motion, is a term for a technique that reduces wasted instructions by moving instructions to branches in which they are used: if an operation is executed before a branch, and only one of the branch paths uses the result of that operation, then code sinking entails moving that operation into the branch where it will be used. This technique is a form of dead code eliminatio ...
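A minimal before/after sketch of the sinking described above (the function and variable names are invented for illustration):

    /* Before: the multiply runs even when the else path never uses it. */
    int before_sinking(int cond, int a, int b) {
        int t = a * b;
        if (cond)
            return t;
        else
            return 0;
    }

    /* After: the operation is sunk into the only branch that uses its result. */
    int after_sinking(int cond, int a, int b) {
        if (cond) {
            int t = a * b;
            return t;
        } else {
            return 0;
        }
    }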


Loop-invariant Code Motion
In computer programming, loop-invariant code consists of statements or expressions (in an imperative programming language) that can be moved outside the body of a loop without affecting the semantics of the program. Loop-invariant code motion (also called hoisting or scalar promotion) is a compiler optimization that performs this movement automatically.

Example
In the following code sample, two optimizations can be applied.

    int i = 0;
    while (i < n) {
        x = y + z;
        a[i] = 6 * i + x * x;
        ++i;
    }

Although the calculation x = y + z and x * x is loop-invariant, precautions must be taken before moving the code outside the loop. It is possible that the loop condition is false (for example, if n holds a negative value), and in such case, the loop body should not be executed at all. One way of guaranteeing correct behaviour is using a conditional branch outside of the loop. Evaluating the loop condition can have side effects ...
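A sketch of the hoisted form under such a guard (the temporary t1 and the wrapper function are assumptions; the excerpt does not show the article's own continuation):

    void after_licm(int n, int y, int z, int a[]) {
        if (n > 0) {             /* guard: skip the work when the loop would not run   */
            int x = y + z;       /* loop-invariant computation hoisted out of the loop */
            int t1 = x * x;      /* x * x is likewise invariant once x is fixed        */
            int i = 0;
            do {
                a[i] = 6 * i + t1;
                ++i;
            } while (i < n);
        }
    }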

Strength Reduction
In compiler construction, strength reduction is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing. Examples of strength reduction include:
* replacing a multiplication within a loop with an addition
* replacing an exponentiation within a loop with a multiplication

Code analysis
Most of a program's execution time is typically spent in a small section of code (called a hot spot), and that code is often inside a loop that is executed over and over. A compiler uses methods to identify loops and recognize the characteristics of register values within those loops. For strength reduction, the compiler is interested in:
* Loop invariants: the values which do not change within the body of a loop.
* Induction variables: the values which are being iterated ...
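A classic before/after sketch of the multiplication-to-addition case mentioned above (the loop and names are invented for illustration):

    /* Before: a multiplication by the induction variable on every iteration. */
    void before_sr(int n, int a[]) {
        for (int i = 0; i < n; i++)
            a[i] = i * 7;
    }

    /* After: the multiplication is strength-reduced to a running addition. */
    void after_sr(int n, int a[]) {
        int t = 0;
        for (int i = 0; i < n; i++) {
            a[i] = t;
            t = t + 7;
        }
    }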


Static Single Assignment Form
In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a property of an intermediate representation (IR) that requires each variable to be assigned exactly once and defined before it is used. Existing variables in the original IR are split into versions, new variables typically indicated by the original name with a subscript in textbooks, so that every definition gets its own version. In SSA form, use-def chains are explicit and each contains a single element. SSA was proposed by Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck in 1988. Ron Cytron, Jeanne Ferrante and the previous three researchers at IBM developed an algorithm that can compute the SSA form efficiently. One can expect to find SSA in a compiler for Fortran, C or C++, whereas in functional language compilers, such as those for Scheme and ML, continuation-passing style (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excludi ...
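A small illustration of the versioning idea, written as C only so that it compiles; real compilers perform this renaming on an IR, and the subscripted names are spelled y1, y2, x1 here:

    /* Original straight-line code: y is assigned twice. */
    int original(void) {
        int y = 1;
        y = 2;
        int x = y;
        return x;
    }

    /* SSA-style renaming: each definition gets its own version,
       so every "variable" is assigned exactly once. */
    int ssa_form(void) {
        int y1 = 1;
        int y2 = 2;
        int x1 = y2;
        (void)y1;
        return x1;
    }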




Global Value Numbering
Value numbering is a technique of determining when two computations in a program are equivalent and eliminating one of them with a semantics-preserving optimization.

Global value numbering
Global value numbering (GVN) is a compiler optimization based on the static single assignment form (SSA) intermediate representation. It sometimes helps eliminate redundant code that common subexpression elimination (CSE) does not. At the same time, however, CSE may eliminate code that GVN does not, so both are often found in modern compilers. Global value numbering is distinct from local value numbering in that the value-number mappings hold across basic block boundaries as well, and different algorithms are used to compute the mappings. Global value numbering works by assigning a value number to variables and expressions. The same value number is assigned to those variables and expressions which are provably equivalent. For instance, in the following code:

    w := 3
    x := 3
    y := x + 4
    z := ...
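A hedged sketch of what GVN concludes for a fragment like the one above, assuming the cut-off line continues as z := w + 4: w and x receive the same value number because both hold 3, so y and z receive the same value number and one of the two additions can be removed. Written as C for illustration:

    /* Assumed continuation of the fragment, written as C. */
    int before_gvn(void) {
        int w = 3;
        int x = 3;        /* same value number as w */
        int y = x + 4;
        int z = w + 4;    /* same value number as y */
        return y + z;
    }

    /* After GVN: the provably equivalent computation is reused. */
    int after_gvn(void) {
        int w = 3;
        int x = w;
        int y = x + 4;
        int z = y;        /* redundant addition eliminated */
        return y + z;
    }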


Redundant Code
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, such as:
* recomputing a value that has previously been calculated and is still available,
* code that is never executed (known as unreachable code),
* code which is executed but has no external effect (e.g., does not change the output produced by a program; known as dead code).
A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations.

Examples
The following examples are in C.

    int foo(int iX)
    {
        int iY = iX*2;
        return iX*2;
    }

The second iX*2 expression is redundant code and can be replaced by a reference to the variable iY. Alternatively, the definition int iY = iX*2 can instead be removed. Consider:

    #define min(A,B) ((A)<(B)?(A):(B))

    int random(int cutoff ...
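The two repairs the passage describes, sketched for concreteness:

    /* Repair 1: reuse the already-computed value. */
    int foo_reuse(int iX)
    {
        int iY = iX*2;
        return iY;
    }

    /* Repair 2: drop the unused definition instead. */
    int foo_drop(int iX)
    {
        return iX*2;
    }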