Unreachable Code
In computer programming, unreachable code is part of the source code of a program which can never be executed because there exists no control flow path to the code from the rest of the program. Unreachable code is sometimes also called ''dead code'', although dead code may also refer to code that is executed but has no effect on the output of a program. Unreachable code is generally considered undesirable for several reasons:
* It uses memory unnecessarily.
* It can cause unnecessary use of the CPU's instruction cache.
** This can also decrease data locality.
* Time and effort may be spent testing, maintaining and documenting code which is never used.
** Sometimes an automated test is the only thing using the code.
However, unreachable code can have some legitimate uses, like providing a library of functions for calling or jumping to manually via a debugger while the program is halted after a breakpoint. This is particularly useful for examining and pretty-printing the internal state ...
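A minimal C sketch (illustrative names, not from the excerpt above) of one common source of unreachable code: a statement placed after an unconditional return, which no control flow path can reach:

    #include <stdio.h>

    int safe_divide(int numerator, int denominator)
    {
        if (denominator == 0) {
            return 0;                        /* early exit on invalid input */
            printf("division by zero\n");    /* unreachable: nothing follows a return */
        }
        return numerator / denominator;
    }

Most compilers can warn about such code, and an optimizing compiler will typically omit it from the generated binary.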


Computer Programming
Computer programming is the process of performing a particular computation (or more generally, accomplishing a specific computing result), usually by designing and building an executable computer program. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms (usually in a chosen programming language, commonly referred to as coding). The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus usually requires expertise in several different subjects, including knowledge of the application domain, specialized algori ...


Control Flow Analysis
In computer science, control-flow analysis (CFA) is a static-code-analysis technique for determining the control flow of a program. The control flow is expressed as a control-flow graph (CFG). For both functional programming languages and object-oriented programming languages, the term CFA, and elaborations such as ''k''-CFA, refer to specific algorithms that compute control flow. For many imperative programming languages, the control flow of a program is explicit in a program's source code. As a result, interprocedural control-flow analysis usually refers implicitly to a static analysis technique for determining the receiver(s) of function or method calls in computer programs written in a higher-order programming language. For example, in a programming language with higher-order functions like Scheme, the target of a function call may not be explicit: in the isolated expression (lambda (f) (f x)) it is unclear to which procedure f may refer. A control-flow analysis must co ...
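The same ambiguity arises with function pointers in C. In this hedged sketch (illustrative names, not from the excerpt), the call f(x) inside apply has no syntactically obvious target; a control-flow analysis must compute the set of functions f can point to:

    #include <stdio.h>

    static int twice(int x)  { return 2 * x; }
    static int square(int x) { return x * x; }

    /* Like the Scheme expression (lambda (f) (f x)): the callee of f(x)
       is not explicit in the source; analysis here would determine that
       f may refer to twice or square. */
    static int apply(int (*f)(int), int x)
    {
        return f(x);
    }

    int main(void)
    {
        printf("%d\n", apply(twice, 5));    /* prints 10 */
        printf("%d\n", apply(square, 5));   /* prints 25 */
        return 0;
    }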


Compiler Optimizations
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of ''optimizing transformations'', algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster. It has been shown that some code optimization problems are NP-complete, or even undecidable. In practice, factors such as the programmer's willingness to wait for the compiler to complete its task place upper limits on the optimizations that a compiler might provide. Optimization is generally a very CPU- and memory-intensive process. In the past, computer memory limitations were also a major factor in limiting which optimizations co ...
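As a small illustration of an optimizing transformation (my own sketch, not from the excerpt), loop-invariant code motion rewrites a program into a semantically equivalent one that does less work:

    /* before: x * y is recomputed on every iteration */
    for (int i = 0; i < n; i++)
        a[i] = x * y + i;

    /* after: the invariant product is hoisted out of the loop */
    int t = x * y;
    for (int i = 0; i < n; i++)
        a[i] = t + i;

Both versions produce the same array contents, so the transformation preserves semantics while reducing execution time.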


Oxbow Code
In computer programming, oxbow code refers to fragments of program code that were once needed but which are now never used. Such code is typically formed when a program is modified, either when an item is superseded with a newer version but the old version is not removed, or when an item is removed or replaced, but the item's supporting code is not removed. Such code is normally removed unless sufficiently amusing or educational. Similarly, variables and data structures can be left around after the last code that used them has been removed, though this is more commonly called unused or unreferenced variables. The term is taken by analogy with oxbow lakes which are formed in nature when a bend in a river becomes so pronounced that the water breaks through from before the bend to after it, making the river straight again. When the sides of the new course silt up, a curved lake is left, disconnected from the main stream.

Examples

(from gnash/server/asobj/Global.cpp 1.46) static ...
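A hedged sketch of how oxbow code arises (illustrative names, not the example cited above): during a rewrite a function is superseded but never deleted, leaving it with no remaining callers:

    #include <stdio.h>

    /* Superseded by print_report_v2 during a refactor, but never removed:
       oxbow code with no remaining call sites (compilers often flag such
       unused static functions). */
    static void print_report_v1(int total)
    {
        printf("total=%d\n", total);
    }

    static void print_report_v2(int total)
    {
        printf("Report total: %d\n", total);
    }

    int main(void)
    {
        print_report_v2(42);   /* only the replacement is ever called */
        return 0;
    }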




Dead Code
The term dead code has multiple definitions. Some use the term to refer to code (i.e. instructions in memory) which can never be executed at run-time. In some areas of computer programming, dead code is a section in the source code of a program which is executed but whose result is never used in any other computation. The execution of dead code wastes computation time and memory. While the result of a dead computation may never be used, it may raise exceptions or affect some global state, thus removal of such code may change the output of the program and introduce unintended bugs. Compiler optimizations are typically conservative in their approach to dead-code removal if there is any ambiguity as to whether removal of the dead code will affect the program output. The programmer may aid the compiler in this matter by making additional use of static and/or inline functions and enabling the use of link-time optimization.

Example

    int foo (int iX, int iY)
    ...

In the above example, ...
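The example's body is cut off in this excerpt; a minimal reconstruction consistent with the surrounding description (a computed value that is never used, yet whose computation could still raise an exception) might look like:

    int foo(int iX, int iY)
    {
        int iZ = iX / iY;   /* dead code: iZ is never used, but the division
                               can still trap when iY == 0 */
        return iX * iY;
    }

Removing the iZ line eliminates wasted work, but it also silently changes behaviour for iY == 0, which is why compilers are conservative about such removals.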


Redundant Code
In computer programming, redundant code is source code or compiled code in a computer program that is unnecessary, such as:
* recomputing a value that has previously been calculated and is still available,
* code that is never executed (known as unreachable code),
* code which is executed but has no external effect (e.g., does not change the output produced by a program; known as dead code).
A NOP instruction might be considered to be redundant code that has been explicitly inserted to pad out the instruction stream or introduce a time delay, for example to create a timing loop by "wasting time". Identifiers that are declared, but never referenced, are termed redundant declarations.

Examples

The following examples are in C.

    int foo(int iX)
    ...

The second iX*2 expression is redundant code and can be replaced by a reference to the variable iY. Alternatively, the definition int iY = iX*2 can instead be removed. Consider:

    #define min(A,B) ((A)<(B)?(A):(B))

    int random(int cutoff ...
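The body of the first example above is truncated; a minimal reconstruction matching its description (a second iX*2 that recomputes a value already held in iY) would be:

    int foo(int iX)
    {
        int iY = iX * 2;
        return iX * 2;   /* redundant: the same value is already available in iY */
    }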


Code Coverage
In computer science, test coverage is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high test coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite. Test coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in ''Communications of the ACM'', in 1963.

Coverage criteria

To measure what percentage of code has been executed by a test suite, one or more ''coverage criteria'' are used. These are usually defined as rules or requirements, which a test suite must satisfy. Basic cove ...
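A minimal C sketch (my own, not from the excerpt) of why different coverage criteria demand different test suites:

    int abs_value(int x)
    {
        if (x < 0)
            x = -x;      /* executed only for negative inputs */
        return x;
    }

    /* The single test abs_value(-3) executes every statement (100%
       statement coverage) but exercises only the taken side of the
       branch; adding abs_value(3) covers the not-taken side as well,
       which full branch coverage requires. */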


Heuristic
A heuristic, or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples that employ heuristics include using trial and error, a rule of thumb or an educated guess. Heuristics are strategies derived from previous experiences with similar problems. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. When an individual applies a heuristic in practice, it generally performs as expected. However, it can alternatively cre ...


Profiler (computer Science)
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a ''profiler'' (or ''code profiler''). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods.

Gathering program events

Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters.

Use of profilers

The output of a profiler may be:
* A statistical ''sum ...
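A hedged C sketch (illustrative, not from the excerpt) of the instrumentation idea in its simplest form: wrapping a call with timing code by hand, which is essentially what an instrumenting profiler automates across a whole program:

    #include <stdio.h>
    #include <time.h>

    static long fib(int n)
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        clock_t start = clock();    /* instrumentation: start timer */
        long result = fib(30);
        clock_t end = clock();      /* instrumentation: stop timer */
        printf("fib(30) = %ld in %.3f s\n", result,
               (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }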






Common Subexpression Elimination
In compiler theory, common subexpression elimination (CSE) is a compiler optimization that searches for instances of identical expressions (i.e., they all evaluate to the same value), and analyzes whether it is worthwhile replacing them with a single variable holding the computed value.

Example

In the following code:

    a = b * c + g;
    d = b * c * e;

it may be worth transforming the code to:

    tmp = b * c;
    a = tmp + g;
    d = tmp * e;

if the cost of storing and retrieving tmp is less than the cost of calculating b * c an extra time.

Principle

The possibility to perform CSE is based on available expression analysis (a data flow analysis). An expression b*c is available at a point ''p'' in a program if:
* every path from the initial node to ''p'' evaluates b*c before reaching ''p'',
* and there are no assignments to b or c after the evaluation but before ''p''.
The cost/benefit analysis performed by an optimizer will calculate whether the cost of the store to tmp is less than the c ...
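A small follow-on sketch (mine, not from the excerpt) of the second availability condition above: an assignment to b between the two occurrences "kills" the expression, so CSE must not merge them:

    a = b * c + g;   /* first evaluation of b * c */
    b = k + 1;       /* assignment to b: b * c is no longer available */
    d = b * c * e;   /* must be recomputed; it now denotes a different value */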