GCD Test
In compiler theory, the greatest common divisor test (GCD test) is used in loop optimization and loop dependence analysis to test for dependences between loop statements.

Use
Whenever a sequential loop (such as a for loop) is parallelized so that it can be executed on more than one processor, as in grid computing or cluster computing, certain dependences (for example, the flow (true) dependence of a statement) are checked to determine whether the loop can be parallelized. According to this test, by comparing the indices of the arrays appearing in two or more statements, it can be determined whether it is legal to parallelize the loop.

Rationale
Theorem: A linear Diophantine equation a1*x1 + a2*x2 + ... + an*xn = c has an integer solution for x1, x2, ..., xn if and only if gcd(a1, a2, ..., an) divides c. ...
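As an illustration (not taken from the excerpt above, and with subscripts chosen purely for the example): for a loop whose iterations write a[2*i] and read a[2*i - 1], the dependence equation is 2*i - 2*j = -1; since gcd(2, 2) = 2 does not divide -1, no two iterations can touch the same element, so the loop may be parallelized. A minimal C sketch of that check:

    #include <stdio.h>

    /* Euclid's algorithm for the greatest common divisor. */
    static int gcd(int a, int b)
    {
        while (b != 0) {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    int main(void)
    {
        /* Hypothetical loop body:  a[2*i] = ...;  ... = a[2*i - 1];
         * Write subscript: 2*i, read subscript: 2*j - 1.
         * Dependence equation: 2*i - 2*j = -1  (constant term c = -1). */
        int coeff_write = 2, coeff_read = 2, c = -1;

        if (c % gcd(coeff_write, coeff_read) != 0)
            printf("GCD test: no integer solution, no dependence -> parallelizable\n");
        else
            printf("GCD test: dependence possible, further tests needed\n");
        return 0;
    }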

Compiler Theory
In computing, a compiler is a computer program that translates computer code written in one programming language (the ''source'' language) into another language (the ''target'' language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program (Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman, second edition, 2007). There are many different types of compilers which produce output in different useful forms. A ''cross-compiler'' produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A ''bootstrap compiler'' is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software ...


Statement (programming)
In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements. A statement may have internal components (e.g. expressions). Many programming languages (e.g. Ada, Algol 60, C, Java, Pascal) make a distinction between statements and definitions/declarations. A definition or declaration specifies the data on which a program is to operate, while a statement specifies the actions to be taken with that data. Statements which cannot contain other statements are ''simple''; those which can contain other statements are ''compound''. The appearance of a statement (and indeed a program) is determined by its syntax or grammar. The meaning of a statement is determined by its semantics. Simple statements Simple statements are complete in themselves; these include assignments, subroutine calls, and a few statements which may ...
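As a brief illustration (not part of the excerpt above), in C an assignment or a call is a simple statement, while a braced block or an if with a body is a compound statement that contains other statements:

    #include <stdio.h>

    int main(void)
    {
        int x = 0;                /* declaration with initializer        */

        x = x + 1;                /* simple statement: assignment        */
        printf("x = %d\n", x);    /* simple statement: subroutine call   */

        if (x > 0) {              /* compound statement: contains others */
            x = x * 2;
            printf("doubled: %d\n", x);
        }
        return 0;
    }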


Dependence Analysis
In compiler theory, dependence analysis produces execution-order constraints between statements/instructions. Broadly speaking, a statement ''S2'' depends on ''S1'' if ''S1'' must be executed before ''S2''. Broadly, there are two classes of dependencies: control dependencies and data dependencies. Dependence analysis determines whether it is safe to reorder or parallelize statements.

Control dependencies
Control dependency is a situation in which a program instruction executes only if a previous instruction evaluates in a way that allows its execution. A statement ''S2'' is ''control dependent'' on ''S1'' (written S1 δ^c S2) if and only if ''S2'''s execution is conditionally guarded by ''S1''. Equivalently, ''S2'' is ''control dependent'' on ''S1'' if and only if S1 ∈ PDF(S2), where PDF(S) is the post-dominance frontier of statement S. The following is an example of such a control dependence:

    S1     if x > 2 goto L1
    S2     y := 3
    S3 L1: z := y + 1

Here, ''S2'' only runs if ...
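For the other class mentioned above, a minimal C sketch (not from the excerpt) of the usual kinds of data dependence between statements:

    /* Illustrative only: the three common data dependences. */
    int a, b, c;

    void data_dependences(void)
    {
        a = b + 1;   /* S1: writes a, reads b                                   */
        c = a * 2;   /* S2: reads a that S1 wrote  -> flow (true) dependence    */
        b = 7;       /* S3: overwrites b that S1 read -> antidependence         */
        a = 9;       /* S4: overwrites a that S1 wrote -> output dependence     */
    }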


Banerjee Test
In compiler theory, the Banerjee test is a dependence test. The Banerjee test assumes that all loop indices are independent; however, in reality, this is often not true. The Banerjee test is a conservative test: it will not break a dependence that does not exist. This means that the only thing the test can guarantee is the absence of a dependence.

General form
For a loop of the form:

    for (i = 0; i < n; i++) {
        x[f(i)] = ...;      /* statement s1: writes x[f(i)] */
        ...     = x[g(i)];  /* statement s2: reads  x[g(i)] */
    }

an antidependence exists between statement ''s1'' and statement ''s2'' if and only if

    \exists i, j \in [0, n-1] : i > j ~\textrm{and}~ f(i) = g(j)

and a true dependence exists between statement ''s1'' and statement ''s2'' if and only if

    \exists i, j \in [0, n-1] : i \le j ~\textrm{and}~ f(i) = g(j) ...
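As a concrete instance (illustrative, not from the excerpt): with f(i) = i and g(i) = i - 1, choosing j = i + 1 gives f(i) = g(j) with i ≤ j, so a true dependence exists between consecutive iterations and the loop cannot be naively parallelized:

    /* Loop-carried true dependence: iteration i+1 reads what iteration i wrote. */
    void loop_with_true_dependence(int n, const int *a, const int *b, int *c, int *d)
    {
        for (int i = 1; i < n; i++) {
            c[i] = a[i] + b[i];   /* s1: writes c[f(i)], f(i) = i     */
            d[i] = c[i - 1] * 2;  /* s2: reads  c[g(i)], g(i) = i - 1 */
        }
    }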




Automatic Parallelization
Automatic parallelization, also called auto parallelization or autoparallelization, refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. Fully automatic parallelization of sequential programs is a challenge because it requires complex program analysis, and the best approach may depend upon parameter values that are not known at compilation time. The programming control structures on which autoparallelization places the most focus are loops because, in general, most of the execution time of a program takes place inside some form of loop. There are two main approaches to parallelization of loops: pipelined multi-threading and cyclic multi-threading. For example, consider a loop that on each iteration applies a hundred operations and runs for a thousand iterations. This can be thought of as a grid of 100 columns by 1000 rows, a total of 100,000 operations. Cyc ...
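As an illustration of the cyclic multi-threading style mentioned above (a sketch, assuming an OpenMP-capable C compiler; none of this appears in the excerpt), each thread is handed a different subset of the loop's iterations:

    #include <omp.h>

    /* Each thread executes a disjoint set of iterations of the same loop body. */
    void scale_array(int n, double *x, double factor)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            x[i] = x[i] * factor;   /* iterations are independent, so this is safe */
        }
    }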


Loop Transformation
In compiler theory, loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most execution time of a scientific program is spent on loops; as such, many compiler optimization techniques have been developed to make them faster.

Representation of computation and transformations
Since instructions inside loops can be executed repeatedly, it is frequently not possible to give a bound on the number of instruction executions that will be impacted by a loop optimization. This presents challenges when reasoning about the correctness and benefits of a loop optimization, specifically the representations of the computation being optimized and the optimization(s) being performed. In the book ''Reasoning About Program Transformations'', Jean-Francois Collard discusses in depth the general question of representing exec ...
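As one small example of the kind of transformation meant here (illustrative only; the excerpt does not single out this technique), hoisting a loop-invariant computation out of the loop removes redundant work:

    /* Before: x * y is recomputed on every iteration. */
    void scale_before(int n, double *a, double x, double y)
    {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * (x * y);
    }

    /* After loop-invariant code motion: the product is computed once. */
    void scale_after(int n, double *a, double x, double y)
    {
        double xy = x * y;
        for (int i = 0; i < n; i++)
            a[i] = a[i] * xy;
    }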


Normalized Loop
In computer science, a normalized loop (sometimes called a well-behaved loop) is a loop in which the loop variable starts at 0 (or any constant) and gets incremented by one at every iteration until the exit condition is met. Normalized loops are very important for loop dependence analysis in compiler theory, as they simplify the data dependence analysis.

Well-behaved loops
A well-behaved loop is normally of the form:

    for ( i = 0; i < MAX; i++ )
        a[i] = b[i] + 5;

Because the increment is unitary and constant, it is easy to see that, if both arrays ''a'' and ''b'' have at least MAX elements, this loop will never access memory outside the allocated range.


Non-normalized loops

A non-normalized loop may begin at a different index, increment by a non-unitary amount, and have an exit condition that is complicated to define. Such loops are hard to op ...
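A minimal sketch of what normalization looks like (illustrative; the bounds and step are made up): a loop that starts at 7 and steps by 3 can be rewritten with a fresh index that starts at 0 and steps by 1:

    void copy_plus5(int MAX, int *a, const int *b)
    {
        int i, j;

        /* Non-normalized: starts at 7, increments by 3. */
        for (i = 7; i < MAX; i += 3)
            a[i] = b[i] + 5;

        /* Normalized equivalent: j = 0, 1, 2, ...; the original index is 7 + 3*j. */
        for (j = 0; j < (MAX - 7 + 2) / 3; j++)
            a[7 + 3 * j] = b[7 + 3 * j] + 5;
    }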

Linear Diophantine Equation
In mathematics, the term ''linear'' is used in two distinct senses for two different properties:
* linearity of a ''function'' (or ''mapping'');
* linearity of a ''polynomial''.
An example of a linear function is the function defined by f(x) = (ax, bx) that maps the real line to a line in the Euclidean plane R^2 that passes through the origin. An example of a linear polynomial in the variables X, Y and Z is aX + bY + cZ + d. Linearity of a mapping is closely related to ''proportionality''. Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are ''nonlinear''. Generalized for functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle. Linearity of a polynomial means that its degree is ...
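In symbols (a standard formulation, added here for clarity rather than taken from the excerpt), a map f is linear when it is compatible with addition and scaling:

    f(x + y) = f(x) + f(y) \quad\textrm{and}\quad f(\alpha x) = \alpha f(x),
    \qquad\textrm{equivalently}\qquad
    f(\alpha x + \beta y) = \alpha f(x) + \beta f(y).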

Array Data Structure
In computer science, an array is a data structure consisting of a collection of ''elements'' (values or variables), of the same memory size, each identified by at least one ''array index'' or ''key'', a collection of which may be a tuple, known as an index tuple. An array is stored such that the position (memory address) of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of ten 32-bit (4-byte) integer variables, with indices 0 through 9, may be stored as ten words at memory addresses 2000, 2004, 2008, ..., 2036 (in hexadecimal: 0x7D0, 0x7D4, 0x7D8, ..., 0x7F4), so that the element with index ''i'' has the address 2000 + (''i'' × 4). The memory address of the first element of an array is called the first address, foundation address, or base address. Because the mathematical conc ...
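A small C sketch of the address arithmetic described above (variable names are illustrative; the printed addresses depend on the run):

    #include <stdio.h>

    int main(void)
    {
        int a[10];
        int i = 3;

        /* The element with index i lives at base_address + i * sizeof(int). */
        printf("base address    : %p\n", (void *)&a[0]);
        printf("address of a[%d] : %p\n", i, (void *)&a[i]);
        printf("byte offset     : %zu\n",
               (size_t)((char *)&a[i] - (char *)&a[0]));   /* prints i * sizeof(int) */
        return 0;
    }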

Greatest Common Divisor
In mathematics, the greatest common divisor (GCD), also known as the greatest common factor (GCF), of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers x and y, the greatest common divisor of x and y is denoted gcd(x, y). For example, the GCD of 8 and 12 is 4; that is, gcd(8, 12) = 4. In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor, etc. Historically, other names for the same concept have included greatest common measure. This notion can be extended to polynomials (see ''Polynomial greatest common divisor'') and other commutative rings (see below).

Overview
Definition
The ''greatest common divisor'' (GCD) of integers x and y, at least one of which is nonzero, is the greatest positive integer d such that d is a divisor of both x and y; that is, there are integers e and f such that x = de and y = df ...
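The example above can be checked with the Euclidean algorithm (not described in this excerpt, but standard): repeatedly replace the larger number by its remainder modulo the smaller one until the remainder is zero.

    \gcd(8, 12): \quad 12 = 1 \cdot 8 + 4, \qquad 8 = 2 \cdot 4 + 0, \qquad \textrm{so}~ \gcd(8, 12) = 4.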

Parallelized
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. S.V. Adve ''et al.'' (November 2008), "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF), Parallel@Illinois, University of Illinois at Urbana-Champaign: "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than mak ...