In theoretical computer science, smoothed analysis is a way of measuring the complexity of an algorithm. Since its introduction in 2001, smoothed analysis has been used as a basis for considerable research, on problems ranging from mathematical programming and numerical analysis to machine learning and data mining. It can give a more realistic analysis of the practical performance of an algorithm (e.g., its running time, success rate, or approximation quality) than analysis that uses worst-case or average-case scenarios.

Smoothed analysis is a hybrid of worst-case and average-case analyses that inherits advantages of both. It measures the expected performance of algorithms under slight random perturbations of worst-case inputs. If the smoothed complexity of an algorithm is low, then it is unlikely that the algorithm will take a long time to solve practical instances whose data are subject to slight noise and imprecision. Smoothed complexity results are strong probabilistic statements, roughly saying that, in every large enough neighbourhood of the space of inputs, most inputs are easily solvable. Thus, a low smoothed complexity means that the hardness of inputs is a "brittle" property.

Although worst-case analysis has been widely successful in explaining the practical performance of many algorithms, this style of analysis gives misleading results for a number of problems. Worst-case complexity measures the time it takes to solve any input, although hard-to-solve inputs might never come up in practice. In such cases, the worst-case running time can be much worse than the running time observed in practice. For example, the worst-case complexity of solving a linear program using the simplex algorithm is exponential, although the observed number of steps in practice is roughly linear. The simplex algorithm is in fact much faster than the ellipsoid method in practice, although the latter has polynomial-time worst-case complexity.

Average-case analysis was first introduced to overcome the limitations of worst-case analysis. However, the resulting average-case complexity depends heavily on the probability distribution that is chosen over the inputs. The actual inputs and their distribution may be different in practice from the assumptions made during the analysis: a random input may be very unlike a typical input. Because of this choice of data model, a theoretical average-case result may say little about the practical performance of the algorithm.

Smoothed analysis generalizes both worst-case and average-case analysis and inherits the strengths of both. It is intended to be much more general than average-case complexity, while still allowing low complexity bounds to be proven.


History

ACM and the European Association for Theoretical Computer Science awarded the 2008 Gödel Prize to Daniel Spielman and Shang-Hua Teng for developing smoothed analysis. The name "smoothed analysis" was coined by Alan Edelman. In 2010 Spielman received the Nevanlinna Prize for developing smoothed analysis. Spielman and Teng's JACM paper "Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time" was also one of the three winners of the 2009 Fulkerson Prize, sponsored jointly by the Mathematical Programming Society (MPS) and the American Mathematical Society (AMS).


Examples


Simplex algorithm for linear programming

The simplex algorithm is a very efficient algorithm in practice, and it is one of the dominant algorithms for linear programming. On practical problems, the number of steps taken by the algorithm is linear in the number of variables and constraints. Yet in the theoretical worst case it takes exponentially many steps for most of the pivot rules that have been successfully analyzed. This was one of the main motivations for developing smoothed analysis.

For the perturbation model, we assume that the input data is perturbed by noise from a Gaussian distribution. For normalization purposes, we assume that the unperturbed data \bar{\mathbf A} \in \mathbb{R}^{n \times d}, \bar{\mathbf b} \in \mathbb{R}^n, \mathbf c \in \mathbb{R}^d satisfies \|(\bar{\mathbf a}_i, \bar{b}_i)\|_2 \leq 1 for all rows (\bar{\mathbf a}_i, \bar{b}_i) of the matrix (\bar{\mathbf A}, \bar{\mathbf b}). The noise (\hat{\mathbf A}, \hat{\mathbf b}) has independent entries sampled from a Gaussian distribution with mean 0 and standard deviation \sigma. We set \mathbf A = \bar{\mathbf A} + \hat{\mathbf A} and \mathbf b = \bar{\mathbf b} + \hat{\mathbf b}. The smoothed input data consists of the linear program

    maximize    \mathbf c \cdot \mathbf x
    subject to  \mathbf A \mathbf x \leq \mathbf b.

If the running time of our algorithm on data (\mathbf A, \mathbf b, \mathbf c) is given by T(\mathbf A, \mathbf b, \mathbf c), then the smoothed complexity of the simplex method is

    C_s(n, d, \sigma) := \max_{\bar{\mathbf A}, \bar{\mathbf b}, \mathbf c} \; \mathbb{E}_{\hat{\mathbf A}, \hat{\mathbf b}} \left[ T(\bar{\mathbf A} + \hat{\mathbf A}, \bar{\mathbf b} + \hat{\mathbf b}, \mathbf c) \right] = \mathrm{poly}(d, \log n, \sigma^{-1}).

This bound holds for a specific pivot rule called the shadow vertex rule. The shadow vertex rule is slower than more commonly used pivot rules such as Dantzig's rule or the steepest edge rule, but it has properties that make it well suited to probabilistic analysis.
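As an illustration, the following is a minimal numerical sketch of this perturbation model, not of the analysis itself: it builds a normalized adversarial instance, adds Gaussian noise of standard deviation \sigma, and counts the iterations a standard LP solver needs on the smoothed instance. The use of SciPy's HiGHS-based solver, the problem sizes, and the iteration count as a rough proxy for the number of pivot steps are illustrative assumptions; the polynomial smoothed bound above is proven for the shadow vertex pivot rule, not for this solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, sigma = 50, 10, 0.1  # constraints, variables, noise level (illustrative)

# "Adversarial" unperturbed data, normalized so that every row (a_i, b_i)
# of (A_bar, b_bar) has Euclidean norm at most 1, as in the model above.
A_bar = rng.standard_normal((n, d))
b_bar = rng.standard_normal(n)
row_norms = np.linalg.norm(np.hstack([A_bar, b_bar[:, None]]), axis=1)
A_bar /= row_norms[:, None]
b_bar /= row_norms
c = rng.standard_normal(d)

# Smoothed instance: add independent Gaussian noise of standard deviation sigma.
A = A_bar + rng.normal(scale=sigma, size=(n, d))
b = b_bar + rng.normal(scale=sigma, size=n)

# Solve max c.x subject to Ax <= b (linprog minimizes, so negate c; variables are free).
# Random instances may be infeasible or unbounded, so check the solver status.
res = linprog(-c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
if res.success:
    print("solver iterations on the smoothed instance:", res.nit)
else:
    print("instance was infeasible or unbounded:", res.message)
```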


Local search for combinatorial optimization

A number of local search algorithms have bad worst-case running times but perform well in practice. One example is the 2-opt heuristic for the traveling salesman problem. It can take exponentially many iterations until it finds a locally optimal solution, although in practice the running time is subquadratic in the number of vertices. The approximation ratio, which is the ratio between the length of the output of the algorithm and the length of the optimal solution, tends to be good in practice but can also be bad in the theoretical worst case.

One class of problem instances can be given by n points in the box [0,1]^d, where their pairwise distances come from a norm. Already in two dimensions, the 2-opt heuristic might take exponentially many iterations until finding a local optimum. In this setting, one can analyze the perturbation model where the vertices v_1, \dots, v_n are independently sampled according to probability distributions with probability density functions f_1, \dots, f_n : [0,1]^d \rightarrow [0, \theta]. For \theta = 1, the points are uniformly distributed. When \theta > 1 is large, the adversary has more ability to increase the likelihood of hard problem instances. In this perturbation model, the expected number of iterations of the 2-opt heuristic, as well as the approximation ratio of the resulting output, are bounded by polynomial functions of n and \theta.

Another local search algorithm for which smoothed analysis was successful is the k-means method. Given n points in [0,1]^d, it is NP-hard to find a good partition into clusters with small pairwise distances between points in the same cluster. Lloyd's algorithm is widely used and very fast in practice, although it can take e^{\Omega(n)} iterations in the worst case to find a locally optimal solution. However, assuming that the points have independent Gaussian distributions, each with expectation in [0,1]^d and standard deviation \sigma, the expected number of iterations of the algorithm is bounded by a polynomial in n, d and 1/\sigma.
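As a rough illustration of the k-means setting, the sketch below rests on illustrative assumptions: the unperturbed points are simply drawn uniformly from [0,1]^d (a true adversary could instead place them to force many iterations), each point is then independently perturbed by Gaussian noise of standard deviation sigma, and a plain implementation of Lloyd's algorithm counts iterations until the cluster assignment stops changing. The parameter values and the convergence criterion are arbitrary choices, not part of the analysis above.

```python
import numpy as np

def lloyd_iterations(X, k, rng, max_iter=10_000):
    """Run Lloyd's k-means algorithm and return the number of iterations
    until the assignment of points to clusters stops changing."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = None
    for it in range(1, max_iter + 1):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return it
        labels = new_labels
        # Update step: move each center to the mean of its assigned points
        # (empty clusters keep their previous center).
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > 0:
                centers[j] = pts.mean(axis=0)
    return max_iter

rng = np.random.default_rng(0)
n, d, k, sigma = 500, 2, 10, 0.05                   # illustrative parameters

X_bar = rng.random((n, d))                          # unperturbed points in [0,1]^d
X = X_bar + rng.normal(scale=sigma, size=(n, d))    # smoothed instance

print("Lloyd iterations on the smoothed instance:", lloyd_iterations(X, k, rng))
```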


See also

* Average-case complexity
* Pseudo-polynomial time
* Worst-case complexity


References

* Andrei, Neculai (2004). "On the complexity of MINOS package for linear programming". Studies in Informatics and Control 13 (1): 35–46.
* Amenta, Nina; Ziegler, Günter (1999). "Deformed products and maximal shadows of polytopes". Contemporary Mathematics 223: 10–19. American Mathematical Society. doi:10.1090/conm/223. ISBN 9780821806746. MR 1661377.
* Borgwardt, Karl-Heinz; Damm, Renate; Donig, Rudolf; Joas, Gabriele (1993). "Empirical studies on the average efficiency of simplex variants under rotation symmetry". ORSA Journal on Computing 5 (3): 249–260. doi:10.1287/ijoc.5.3.249.
* Borgwardt, Karl-Heinz (1987). The Simplex Method: A Probabilistic Analysis. Algorithms and Combinatorics 1. Springer-Verlag. doi:10.1007/978-3-642-61578-8. ISBN 978-3-540-17096-9.
* Arthur, David; Manthey, Bodo; Röglin, Heiko (2011). "Smoothed analysis of the k-means method". Journal of the ACM 58 (5): 1–31. doi:10.1145/2027216.2027217.
* Englert, Matthias; Röglin, Heiko; Vöcking, Berthold (2007). "Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP". Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms 68: 190–264. doi:10.1007/s00453-013-9801-4.
* Shamir, Ron (1987). "The efficiency of the simplex method: a survey". Management Science 33 (3): 301–334. doi:10.1287/mnsc.33.3.301.
* Spielman, Daniel; Teng, Shang-Hua (2001). "Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time". Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing. ACM. pp. 296–305. doi:10.1145/380752.380813. ISBN 978-1-58113-349-3. arXiv:cs/0111050.
* Spielman, Daniel; Teng, Shang-Hua (2009). "Smoothed analysis: an attempt to explain the behavior of algorithms in practice". Communications of the ACM 52 (10): 76–84. doi:10.1145/1562764.1562785. http://cs-www.cs.yale.edu/homes/spielman/Research/cacmSmooth.pdf