Information-based complexity (IBC) studies optimal algorithms and computational complexity for the continuous problems that arise in physical science, economics, engineering, and mathematical finance. IBC has studied such continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. All these problems involve functions (typically multivariate) of a real or complex variable. Since one can never obtain a closed-form solution to the problems of interest, one has to settle for a numerical solution. Since a function of a real or complex variable cannot be entered into a digital computer, the solution of continuous problems involves only ''partial'' information. To give a simple illustration, in the numerical approximation of an integral, only samples of the integrand at a finite number of points are available. In the numerical solution of partial differential equations, the functions specifying the boundary conditions and the coefficients of the differential operator can only be sampled. Furthermore, this partial information can be expensive to obtain. Finally, the information is often ''contaminated'' by noise.

The goal of information-based complexity is to create a theory of computational complexity and optimal algorithms for problems with partial, contaminated and priced information, and to apply the results to answering questions in various disciplines. Examples of such disciplines include physics, economics, mathematical finance, computer vision, control theory, geophysics, medical imaging, weather forecasting and climate prediction, and statistics.

The theory is developed over abstract spaces, typically Hilbert or Banach spaces, while the applications are usually for multivariate problems. Since the information is partial and contaminated, only approximate solutions can be obtained. IBC studies computational complexity and optimal algorithms for approximate solutions in various settings. Since the worst case setting often leads to negative results such as unsolvability and intractability, settings with weaker assurances, such as the average, probabilistic and randomized settings, are also studied. A fairly new area of IBC research is continuous quantum computing.
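The idea of partial, contaminated information can be sketched in a few lines of code. The following is a hypothetical illustration (the integrand, noise level, and function names are mine, not from the IBC literature): an algorithm never sees the integrand itself, only a finite number of noisy samples of it.

```python
import math
import random

# Hypothetical illustration: an algorithm for a continuous problem sees only
# partial information -- n samples of the integrand -- and each sample is
# contaminated by noise.

def f(x):
    return math.exp(-x * x)              # the "unknown" integrand

_rng = random.Random(0)

def observe(x, noise=0.01):
    # one piece of information: a sample of f, contaminated by Gaussian noise
    return f(x) + _rng.gauss(0.0, noise)

def approximate_integral(n):
    # combine the n noisy samples with the midpoint rule
    return sum(observe((i + 0.5) / n) for i in range(n)) / n

est = approximate_integral(100)
# the true value of the integral of exp(-x^2) over [0,1] is about 0.7468
print(abs(est - 0.7468) < 0.05)
```

No matter how the n samples are combined, only an approximate solution can be produced, which is exactly the setting IBC analyzes.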


Overview

We illustrate some important concepts with a very simple example, the computation of

:\int_0^1 f(x)\,dx.

For most integrands we cannot use the fundamental theorem of calculus to compute the integral analytically; we have to approximate it numerically. We compute the values of f at ''n'' points:

:f(t_1),\dots,f(t_n).

These ''n'' numbers are the partial information about the true integrand f(x). We combine them by a combinatory algorithm to compute an approximation to the integral. See the monograph ''Complexity and Information'' for particulars.

Because we have only partial information, we can use an ''adversary argument'' to tell us how large ''n'' has to be to compute an \epsilon-approximation. Because of these information-based arguments we can often obtain tight bounds on the complexity of continuous problems. For discrete problems such as integer factorization or the travelling salesman problem we have to settle for conjectures about the complexity hierarchy. The reason is that the input is a number or a vector of numbers and can thus be entered into the computer whole; there is typically no adversary argument at the information level, and the complexity of a discrete problem is rarely known.

The univariate integration problem was for illustration only. Significant for many applications is multivariate integration, where the number of variables is in the hundreds or thousands. The number of variables may even be infinite; we then speak of path integration. The reason that integrals are important in many disciplines is that they occur when we want to know the expected behavior of a continuous process. See, for example, the application to mathematical finance below.

Assume we want to compute an integral in ''d'' dimensions (dimensions and variables are used interchangeably) and that we want to guarantee an error at most \epsilon for any integrand in some class. The computational complexity of the problem is known to be of order \epsilon^{-d}. (Here we are counting the number of function evaluations and the number of arithmetic operations, so this is the time complexity.) This would take many years for even modest values of ''d''. The exponential dependence on ''d'' is called the ''curse of dimensionality''; we say the problem is intractable.

We stated the curse of dimensionality for integration, but exponential dependence on ''d'' occurs for almost every continuous problem that has been investigated. How can we try to vanquish the curse? There are two possibilities:
* We can weaken the guarantee that the error must be less than \epsilon (worst case setting) and settle for a stochastic assurance. For example, we might only require that the expected error be less than \epsilon (average case setting). Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot. There is a large IBC literature on results in various settings; see the References below.
* We can incorporate domain knowledge. See An Example: Mathematical Finance below.
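A back-of-the-envelope sketch makes the curse concrete. This is a hypothetical illustration (a tensor-product rule standing in for a worst-case method, plain Monte Carlo for the randomized setting): a grid with m points per axis needs m^d integrand evaluations, exponential in d, while Monte Carlo uses a budget independent of d.

```python
import random

# Hypothetical sketch of the curse of dimensionality: a product rule with m
# points per axis needs m**d samples, while plain Monte Carlo (the randomized
# setting) uses a sample budget that does not depend on d.

def grid_cost(m, d):
    # number of integrand evaluations for an m-points-per-axis product rule
    return m ** d

def mc_integrate(f, d, n, seed=0):
    # Monte Carlo: n uniform samples over [0,1]^d, expected error O(n**-0.5)
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

print(grid_cost(10, 2))      # 100 evaluations in 2 dimensions
print(grid_cost(10, 360))    # 10**360 evaluations: utterly infeasible

# a fixed budget of 10,000 samples still estimates a 360-dimensional integral
f = lambda x: sum(x) / len(x)            # exact integral over [0,1]^d is 1/2
print(abs(mc_integrate(f, 360, 10_000) - 0.5) < 0.01)
```

The stochastic assurance is the price paid: Monte Carlo's error bound holds only in expectation, not in the worst case.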


An example: mathematical finance

Very high dimensional integrals are common in finance. For example, computing expected cash flows for a collateralized mortgage obligation (CMO) requires the calculation of a number of 360-dimensional integrals, the 360 being the number of months in 30 years. Recall that if a worst case assurance is required, the time is of order \epsilon^{-360} time units. Even if the error tolerance is not small, say \epsilon=10^{-2}, this is 10^{720} time units. People in finance have long been using the Monte Carlo method (MC), an instance of a randomized algorithm. Then in 1994 a research group at Columbia University (Papageorgiou, Paskov, Traub, Woźniakowski) discovered that the quasi-Monte Carlo (QMC) method using low-discrepancy sequences beat MC by one to three orders of magnitude. The results were reported to a number of Wall Street firms, to considerable initial skepticism. The results were first published by Paskov and Traub, ''Faster Valuation of Financial Derivatives'', Journal of Portfolio Management 22, 113–120. Today QMC is widely used in the financial sector to value financial derivatives.

These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives? Here is a possible explanation. The 360 dimensions in the CMO represent monthly future times. Due to the discounted value of money, variables representing times far in the future are less important than variables representing nearby times; thus the integrals are non-isotropic. Sloan and Woźniakowski introduced the very powerful idea of weighted spaces, which is a formalization of this observation. They were able to show that with this additional domain knowledge, high dimensional integrals satisfying certain conditions were tractable even in the worst case! In contrast, the Monte Carlo method gives only a stochastic assurance. See Sloan and Woźniakowski, ''When Are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integration?'', J. Complexity 14, 1–33, 1998. For which classes of integrals is QMC superior to MC? This continues to be a major research problem.
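The MC-versus-QMC comparison can be sketched as follows. This is an illustrative assumption, not the actual experiment: a Halton sequence stands in for the low-discrepancy sequences used in the finance studies, and the "payoff" is a toy weighted integrand whose coordinate weights decay, mimicking the discounted-value-of-money observation, rather than an actual CMO model.

```python
import random

# Sketch comparing MC with quasi-Monte Carlo (QMC). The Halton sequence and
# the toy "discounted" payoff are illustrative assumptions only.

def van_der_corput(i, base):
    # radical inverse of the integer i in the given base, a point in [0, 1)
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def halton(i, d):
    # i-th point of the d-dimensional Halton low-discrepancy sequence
    return [van_der_corput(i, PRIMES[k]) for k in range(d)]

def payoff(x):
    # weights decay with the coordinate index: later "months" matter less,
    # mirroring the non-isotropy discussed in the text
    return sum(xk / (k + 1) ** 2 for k, xk in enumerate(x))

d, n = 10, 4096
true_value = 0.5 * sum(1.0 / (k + 1) ** 2 for k in range(d))

# QMC: deterministic low-discrepancy points
qmc_est = sum(payoff(halton(i, d)) for i in range(1, n + 1)) / n

# MC: pseudo-random points, same sample budget
rng = random.Random(0)
mc_est = sum(payoff([rng.random() for _ in range(d)]) for _ in range(n)) / n

print(f"true={true_value:.5f} qmc={qmc_est:.5f} mc={mc_est:.5f}")
```

On integrands of this weighted, non-isotropic type, the deterministic QMC estimate typically lands much closer to the true value than MC at the same sample budget, in line with the weighted-spaces explanation above.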


Brief history

Precursors to IBC may be found in the 1950s work of Kiefer, Sard, and Nikolskij. In 1959 Traub had the key insight that the optimal algorithm and the computational complexity of solving a continuous problem depend on the available information. He applied this insight to the solution of nonlinear equations, which started the area of optimal iteration theory. This research was published in the 1964 monograph ''Iterative Methods for the Solution of Equations''. The general setting for information-based complexity was formulated by Traub and Woźniakowski in 1980 in ''A General Theory of Optimal Algorithms''. For a list of more recent monographs and pointers to the extensive literature, see the References below.


Prizes

There are a number of prizes for IBC research.

* Prize for Achievement in Information-Based Complexity. This annual prize, which was created in 1999, consists of $3000 and a plaque. It is given for outstanding achievement in information-based complexity. The recipients are listed below; the affiliation is as of the time of the award.
** 1999 Erich Novak, University of Jena, Germany
** 2000 Sergei Pereverzev, Ukrainian Academy of Science, Ukraine
** 2001 G. W. Wasilkowski, University of Kentucky, USA
** 2002 Stefan Heinrich, University of Kaiserslautern, Germany
** 2003 Arthur G. Werschulz, Fordham University, USA
** 2004 Peter Mathe, Weierstrass Institute for Applied Analysis and Stochastics, Germany
** 2005 Ian Sloan, Scientia Professor, University of New South Wales, Sydney, Australia
** 2006 Leszek Plaskota, Department of Mathematics, Informatics and Mechanics, University of Warsaw, Poland
** 2007 Klaus Ritter, Department of Mathematics, TU Darmstadt, Germany
** 2008 Anargyros Papageorgiou, Columbia University, USA
** 2009 Thomas Mueller-Gronbach, Fakultaet fuer Informatik und Mathematik, Universitaet Passau, Germany
** 2010 Boleslaw Z. Kacewicz, Department of Mathematics, AGH University of Science and Technology, Cracow, Poland
** 2011 Aicke Hinrichs, Fakultät für Mathematik und Informatik, FSU Jena, Germany
** 2012 Michael Gnewuch, Department of Computer Science, Christian-Albrechts-Universitaet zu Kiel, Germany and School of Mathematics and Statistics, University of New South Wales, Sydney, Australia
** 2012 (Special prize) Krzysztof Sikorski, Department of Computer Science, University of Utah
** 2013 Co-winners
*** Josef Dick, University of New South Wales, Sydney, Australia
*** Friedrich Pillichshammer, Johannes Kepler University, Linz, Austria
** 2014 Frances Kuo, School of Mathematics, University of New South Wales, Sydney, Australia
** 2015 Peter Kritzer, Department of Financial Mathematics, University of Linz, Austria
** 2016 Fred J. Hickernell, Department of Applied Mathematics, Illinois Institute of Technology, Chicago, USA
** 2017 Co-winners
*** Thomas Kühn, University of Leipzig, Germany
*** Winfried Sickel, University of Jena, Germany
** 2018 Paweł Przybyłowicz, AGH University of Science and Technology in Kraków, Poland
** 2019 Jan Vybíral, Czech Technical University, Prague, Czech Republic

* Information-Based Complexity Young Researcher Award. This annual award, which was created in 2003, consists of $1000 and a plaque. The recipients have been
** 2003 Frances Kuo, School of Mathematics, University of New South Wales, Sydney, Australia
** 2004 Christiane Lemieux, University of Calgary, Calgary, Alberta, Canada, and Josef Dick, University of New South Wales, Sydney, Australia
** 2005 Friedrich Pillichshammer, Institute for Financial Mathematics, University of Linz, Austria
** 2006 Jakob Creutzig, TU Darmstadt, Germany and Dirk Nuyens, Katholieke Universiteit, Leuven, Belgium
** 2007 Andreas Neuenkirch, Universität Frankfurt, Germany
** 2008 Jan Vybíral, University of Jena, Germany
** 2009 Steffen Dereich, TU Berlin, Germany
** 2010 Daniel Rudolf, University of Jena, Germany
** 2011 Peter Kritzer, University of Linz, Austria
** 2012 Pawel Przybylowicz, AGH University of Science and Technology, Cracow, Poland
** 2013 Christoph Aistleitner, Department of Analysis and Computational Number Theory, Technische Universitat Graz, Austria
** 2014 Tino Ullrich, Institute for Numerical Simulation, University of Bonn, Germany
** 2015 Mario Ullrich, Institute of Analysis, Johannes Kepler University Linz, Austria
** 2016 Mario Hefter, TU Kaiserslautern, Germany
** 2017 Co-winners
*** Takashi Goda, University of Tokyo
*** Larisa Yaroslavtseva, University of Passau
** 2018 Arnulf Jentzen, Eidgenössische Technische Hochschule (ETH) Zürich, Switzerland

* Best Paper Award, Journal of Complexity. This annual award, which was created in 1996, consists of $3000 ($4000 since 2015) and a plaque. Many, but by no means all, of the awards have been for research in IBC. The recipients have been
** 1996 Pascal Koiran
** 1997 Co-winners
*** B. Bank, M. Giusti, J. Heintz, and G. M. Mbakop
*** R. DeVore and V. Temlyakov
** 1998 Co-winners
*** Stefan Heinrich
*** P. Kirrinis
** 1999 Arthur G. Werschulz
** 2000 Co-winners
*** Bernard Mourrain and Victor Y. Pan
*** J. Maurice Rojas
** 2001 Erich Novak
** 2002 Peter Hertling
** 2003 Co-winners
*** Markus Blaeser
*** Boleslaw Kacewicz
** 2004 Stefan Heinrich
** 2005 Co-winners
*** Yosef Yomdin
*** Josef Dick and Friedrich Pillichshammer
** 2006 Knut Petras and Klaus Ritter
** 2007 Co-winners
*** Martin Avendano, Teresa Krick and Martin Sombra
*** Istvan Berkes, Robert F. Tichy and the late Walter Philipp
** 2008 Stefan Heinrich and Bernhard Milla
** 2009 Frank Aurzada, Steffen Dereich, Michael Scheutzow and Christian Vormoor
** 2010 Co-winners
*** Aicke Hinrichs
*** Simon Foucart, Alain Pajor, Holger Rauhut, Tino Ullrich
** 2011 Co-winners
*** Thomas Daun
*** Leszek Plaskota, Greg W. Wasilkowski
** 2012 Co-winners
*** Dmitriy Bilyk, V.N. Temlyakov, Rui Yu
*** Lutz Kämmerer, Stefan Kunis, Daniel Potts
** 2013 Co-winners
*** Shu Tezuka
*** Joos Heintz, Bart Kuijpers, Andrés Rojas Paredes
** 2014 Bernd Carl, Aicke Hinrichs, Philipp Rudolph
** 2015 Thomas Müller-Gronbach, Klaus Ritter, Larisa Yaroslavtseva
** 2016 Co-winners
*** David Harvey, Joris van der Hoeven and Grégoire Lecerf
*** Carlos Beltrán, Jordi Marzo and Joaquim Ortega-Cerdà
** 2017 Martijn Baartse and Klaus Meer
** 2018 Co-winners
*** Stefan Heinrich
*** Julian Grote and Christoph Thäle
** 2019 Winners
*** Aicke Hinrichs, , Mario Ullrich


References

* Traub, J. F., ''Iterative Methods for the Solution of Equations'', Prentice Hall, 1964. Reissued Chelsea Publishing Company, 1982; Russian translation MIR, 1985; reissued American Mathematical Society, 1998.
* Traub, J. F., and Woźniakowski, H., ''A General Theory of Optimal Algorithms'', Academic Press, New York, 1980.
* Traub, J. F., Woźniakowski, H., and Wasilkowski, G. W., ''Information, Uncertainty, Complexity'', Addison-Wesley, New York, 1983.
* Novak, E., ''Deterministic and Stochastic Error Bounds in Numerical Analysis'', Lecture Notes in Mathematics, vol. 1349, Springer-Verlag, New York, 1988.
* Traub, J. F., Woźniakowski, H., and Wasilkowski, G. W., ''Information-Based Complexity'', Academic Press, New York, 1988. ISBN 978-0126975451.
* Werschulz, A. G., ''The Computational Complexity of Differential and Integral Equations: An Information-Based Approach'', Oxford University Press, New York, 1991.
* Kowalski, M., Sikorski, K., and Stenger, F., ''Selected Topics in Approximation and Computation'', Oxford University Press, Oxford, UK, 1995.
* Plaskota, L., ''Noisy Information and Computational Complexity'', Cambridge University Press, Cambridge, UK, 1996.
* Traub, J. F., and Werschulz, A. G., ''Complexity and Information'', Oxford University Press, Oxford, UK, 1998.
* Ritter, K., ''Average-Case Analysis of Numerical Problems'', Springer-Verlag, New York, 2000.
* Sikorski, K., ''Optimal Solution of Nonlinear Equations'', Oxford University Press, Oxford, UK, 2001.

Extensive bibliographies may be found in the monographs N (1988), TW (1980), TWW (1988) and TW (1998). The IBC website has a searchable database of some 730 items.


External links


* Journal of Complexity
* Complexity and Information
* Joseph Traub
* J. F. Traub, 1985. An Introduction to Information-Based Complexity