In-place algorithm

In computer science, an in-place algorithm is an algorithm which transforms input using no auxiliary data structure. However, a small amount of extra storage space is allowed for auxiliary variables. The input is usually overwritten by the output as the algorithm executes. An in-place algorithm updates its input sequence only through replacement or swapping of elements. An algorithm which is not in-place is sometimes called not-in-place or out-of-place.

In-place can have slightly different meanings. In its strictest form, the algorithm can only have a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited, as simply having an index into an array of length n requires O(log n) bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input but may require a small though nonconstant amount of extra space for its operation. Usually, this space is O(log n), though sometimes anything in o(n) is allowed. Note that space complexity also has varied choices in whether or not to count the index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra log n factor compared to an analysis that ignores the lengths of indices and pointers.

An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When writing the output to write-only memory or a stream, it may be more appropriate to only consider the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is ''write-only'').


Examples

Given an array a of n items, suppose we want an array that holds the same elements in reversed order and to dispose of the original. One seemingly simple way to do this is to create a new array of equal size, fill it with copies from a in the appropriate order and then delete a.

 function reverse(a[0..n - 1])
     allocate b[0..n - 1]
     for i from 0 to n - 1
         b[n - 1 - i] := a[i]
     return b

Unfortunately, this requires O(n) extra space for having the arrays a and b available simultaneously. Also, allocation and deallocation are often slow operations. Since we no longer need a, we can instead overwrite it with its own reversal using this in-place algorithm, which will only need a constant number (2) of integers for the auxiliary variables i and tmp, no matter how large the array is.

 function reverse_in_place(a[0..n - 1])
     for i from 0 to floor((n - 2)/2)
         tmp := a[i]
         a[i] := a[n - 1 - i]
         a[n - 1 - i] := tmp

As another example, many sorting algorithms rearrange arrays into sorted order in-place, including bubble sort, comb sort, selection sort, insertion sort, heapsort, and Shell sort. These algorithms require only a few pointers, so their space complexity is O(log n).
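The same two approaches can be written as a short runnable sketch. This is an illustrative Python version of the pseudocode above (the names reverse_copy and reverse_in_place are ours, not from any particular library):

 def reverse_copy(a):
     # Out-of-place: allocates a second list of the same length, O(n) extra space.
     n = len(a)
     b = [None] * n
     for i in range(n):
         b[n - 1 - i] = a[i]
     return b

 def reverse_in_place(a):
     # In-place: only the loop index and an implicit swap temporary are used.
     n = len(a)
     for i in range(n // 2):          # same bound as "i from 0 to floor((n-2)/2)"
         a[i], a[n - 1 - i] = a[n - 1 - i], a[i]

 data = [1, 2, 3, 4, 5]
 reverse_in_place(data)               # data is now [5, 4, 3, 2, 1]; no copy was made

Here the in-place version overwrites its argument, exactly as the pseudocode overwrites a.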
Quicksort operates in-place on the data to be sorted. However, quicksort requires O(log n) stack space pointers to keep track of the subarrays in its divide and conquer strategy. Consequently, quicksort needs O(log^2 n) additional space. Although this non-constant space technically takes quicksort out of the in-place category, quicksort and other algorithms needing only O(log n) additional pointers are usually considered in-place algorithms.
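To make the stack-space issue concrete, here is a hypothetical in-place quicksort sketch in Python (Lomuto partitioning; not taken from any particular source). It always recurses into the smaller partition and loops on the larger one, so the number of suspended (lo, hi) index pairs, i.e. the recursion depth, stays O(log n):

 def partition(a, lo, hi):
     # Lomuto partition around a[hi]; rearranges a[lo..hi] in place.
     pivot = a[hi]
     i = lo
     for j in range(lo, hi):
         if a[j] <= pivot:
             a[i], a[j] = a[j], a[i]
             i += 1
     a[i], a[hi] = a[hi], a[i]
     return i

 def quicksort(a, lo=0, hi=None):
     # Sorts a in place; the only extra space is the recursion stack.
     if hi is None:
         hi = len(a) - 1
     while lo < hi:
         p = partition(a, lo, hi)
         if p - lo < hi - p:
             quicksort(a, lo, p - 1)   # smaller side: recursion depth stays O(log n)
             lo = p + 1                # larger side: handled by the loop
         else:
             quicksort(a, p + 1, hi)
             hi = p - 1

Because each stored index pair itself takes O(log n) bits, this matches the O(log^2 n) total space figure above.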
Most selection algorithms are also in-place, although some considerably rearrange the input array in the process of finding the final, constant-sized result.

Some text manipulation algorithms such as trim and reverse may be done in-place.


In computational complexity

In computational complexity theory, the strict definition of in-place algorithms includes all algorithms with O(1) space complexity, the class DSPACE(1). This class is very limited; it equals the regular languages. In fact, it does not even include any of the examples listed above.

We usually consider algorithms in L, the class of problems requiring O(log n) additional space, to be in-place. This class is more in line with the practical definition, as it allows numbers of size O(log n) as pointers or indices. This expanded definition still excludes quicksort, however, because of its recursive calls.

Identifying the in-place algorithms with L has some interesting implications; for example, it means that there is a (rather complex) in-place algorithm to determine whether a path exists between two nodes in an undirected graph, a problem that requires O(n) extra space using typical algorithms such as depth-first search (a visited bit for each node). This in turn yields in-place algorithms for problems such as determining if a graph is bipartite or testing whether two graphs have the same number of connected components. See SL for more information.
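For contrast, here is an illustrative Python sketch of the typical depth-first-search approach mentioned above (the graph adj is assumed to be a dictionary from vertex to neighbour list): the visited set is exactly the Θ(n) extra space that the logarithmic-space algorithm avoids.

 def reachable(adj, s, t):
     # Standard DFS with an explicit stack; visited may hold up to n vertices,
     # i.e. O(n) extra space beyond the input graph.
     visited = {s}
     stack = [s]
     while stack:
         v = stack.pop()
         if v == t:
             return True
         for w in adj[v]:
             if w not in visited:
                 visited.add(w)
                 stack.append(w)
     return False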


Role of randomness

In many cases, the space requirements of an algorithm can be drastically cut by using a randomized algorithm. For example, say we wish to know if two vertices in a graph of n vertices are in the same connected component of the graph. There is no known simple, deterministic, in-place algorithm to determine this, but if we simply start at one vertex and perform a random walk of about 20n^3 steps, the chance that we will stumble across the other vertex, provided that it is in the same component, is very high. Similarly, there are simple randomized in-place algorithms for primality testing such as the Miller–Rabin primality test, and there are also simple in-place randomized factoring algorithms such as Pollard's rho algorithm. See RL and BPL for more discussion of this phenomenon.
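The random-walk idea can be sketched in a few lines of illustrative Python (adj maps each vertex to its neighbour list; the step bound 20*n**3 is the figure quoted above). Only the current vertex and a step counter are kept, so the working space is O(log n) bits, at the price of a small one-sided error probability:

 import random

 def probably_connected(adj, s, t):
     # Walk randomly from s; if t is in the same component, a walk of
     # about 20*n**3 steps finds it with high probability.
     n = len(adj)
     v = s
     for _ in range(20 * n ** 3):
         if v == t:
             return True
         if not adj[v]:               # isolated vertex: nothing else reachable
             return False
         v = random.choice(adj[v])
     return v == t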


In functional programming

Functional programming languages often discourage or do not support explicit in-place algorithms that overwrite data, since this is a type of side effect; instead, they only allow new data to be constructed. However, good functional language compilers will often recognize when an object very similar to an existing one is created and then the old one is thrown away, and will optimize this into a simple mutation "under the hood". Note that it is possible in principle to carefully construct in-place algorithms that do not modify data (unless the data is no longer being used), but this is rarely done in practice. See purely functional data structures.


See also

* Table of in-place and not-in-place sorting algorithms

