The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely.
Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as $O(n)$, $O(n \log n)$, $O(n^\alpha)$, $O(2^n)$, etc., where $n$ is a characteristic of the input influencing space complexity.
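For illustration, the following Python sketch (the function names are illustrative, not from any standard library) contrasts two ways of summing $n$ numbers: one that materializes a list of partial sums and therefore needs $O(n)$ space, and one that keeps only a running total in $O(1)$ space.

```python
def sum_with_partials(values):
    """Uses O(n) space: stores one partial sum per input element."""
    partials = []
    total = 0
    for v in values:
        total += v
        partials.append(total)  # the list grows with the input
    return partials[-1] if partials else 0


def sum_running_total(values):
    """Uses O(1) space: a single fixed-size accumulator."""
    total = 0
    for v in values:
        total += v  # no storage grows with the input length
    return total
```

Both functions compute the same result; they differ only in how much memory they hold at once, which is exactly what space complexity measures.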
Space complexity classes
Analogously to the time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use $O(f(n))$ space. The complexity classes PSPACE and NPSPACE allow $f$ to be any polynomial, analogously to P and NP. That is,
: $\mathsf{PSPACE} = \bigcup_{c \in \mathbb{Z}^+} \mathsf{DSPACE}(n^c)$
and
: $\mathsf{NPSPACE} = \bigcup_{c \in \mathbb{Z}^+} \mathsf{NSPACE}(n^c)$
Relationships between classes
The space hierarchy theorem states that, for all space-constructible functions $f(n)$, there exists a problem that can be solved by a machine with $O(f(n))$ memory space, but cannot be solved by a machine with asymptotically less than $f(n)$ space.
The following containments between complexity classes hold.
: $\mathsf{DTIME}(f(n)) \subseteq \mathsf{DSPACE}(f(n)) \subseteq \mathsf{NSPACE}(f(n)) \subseteq \mathsf{DTIME}\left(2^{O(f(n))}\right)$
Furthermore, Savitch's theorem gives the reverse containment that if $f \in \Omega(\log n)$,
: $\mathsf{NSPACE}(f(n)) \subseteq \mathsf{DSPACE}\left((f(n))^2\right)$
As a direct corollary, $\mathsf{PSPACE} = \mathsf{NPSPACE}$. This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.
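The proof of Savitch's theorem is constructive: it decides reachability by recursively guessing a midpoint and halving the path-length budget at each level, so only a logarithmic chain of recursion frames is ever live at once. A minimal Python sketch of the recursion on an explicit graph (the representation and names are illustrative; a true space-bounded machine would enumerate configurations on the fly rather than store the graph):

```python
import math

def reachable(nodes, edges, u, v, k):
    """Is there a path from u to v of length at most 2**k?
    The two recursive calls reuse the same space, so only one chain of
    O(log n) frames, each holding O(log n)-bit variables, is live at a
    time: O((log n)^2) auxiliary space overall."""
    if k == 0:
        return u == v or (u, v) in edges
    # Guess a midpoint w and solve both halves with half the budget.
    return any(reachable(nodes, edges, u, w, k - 1)
               and reachable(nodes, edges, w, v, k - 1)
               for w in nodes)

nodes = {0, 1, 2, 3}
edges = {(0, 1), (1, 2), (2, 3)}
k = math.ceil(math.log2(len(nodes)))  # any simple path has length < n <= 2**k
print(reachable(nodes, edges, 0, 3, k))  # True
```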
The Immerman–Szelepcsényi theorem states that, again for $f \in \Omega(\log n)$, $\mathsf{NSPACE}(f(n))$ is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP.
LOGSPACE
L or LOGSPACE is the set of problems that can be solved by a deterministic Turing machine using only $O(\log n)$ memory space with respect to input size. Even a single counter that can index the entire $n$-bit input requires $\log n$ space, so LOGSPACE algorithms can maintain only a constant number of counters or other variables of similar bit complexity.
LOGSPACE and other sub-linear space complexity classes are useful when processing large data that cannot fit into a computer's RAM. They are related to streaming algorithms, but restrict only how much memory can be used, while streaming algorithms place further constraints on how the input is fed into the algorithm.
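For instance, counting the number of set bits in the input needs only one counter whose value never exceeds $n$, so it fits in $O(\log n)$ bits. A sketch in Python, treating the input as a read-once stream (names are illustrative):

```python
def count_ones(bit_stream):
    """One pass over the input; the only working state is a counter
    bounded by n, i.e. O(log n) bits of auxiliary space."""
    count = 0
    for bit in bit_stream:  # read-only, single pass
        if bit:
            count += 1
    return count

print(count_ones(iter([1, 0, 1, 1, 0])))  # 3
```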
This class also sees use in the field of pseudorandomness and derandomization, where researchers consider the open problem of whether L = RL. The corresponding nondeterministic space complexity class is NL.
Auxiliary space complexity
The term auxiliary space refers to space other than that consumed by the input.
Auxiliary space complexity can be formally defined in terms of a Turing machine with a separate ''input tape'' which cannot be written to, only read, and a conventional working tape which can be written to.
The auxiliary space complexity is then defined (and analyzed) via the working tape.
For example, consider the depth-first search of a balanced binary tree with $n$ nodes: its auxiliary space complexity is $\Theta(\log n)$.
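A recursive implementation makes the bound concrete: the only auxiliary storage is the call stack, whose depth equals the height of the tree, which is $\Theta(\log n)$ when the tree is balanced. A Python sketch (the node class and names are illustrative):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def dfs(node, visit):
    """Depth-first traversal; auxiliary space is the recursion stack,
    whose depth is the tree height: Theta(log n) on a balanced tree."""
    if node is None:
        return
    visit(node.value)
    dfs(node.left, visit)
    dfs(node.right, visit)

# A balanced tree with 7 nodes has height 2, so at most 3 frames are live.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
dfs(root, print)  # prints 4 2 1 3 6 5 7, one value per line
```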