Fixed-point Computation
Fixed-point computation refers to the process of computing an exact or approximate fixed point of a given function. In its most common form, the given function f satisfies the conditions of the Brouwer fixed-point theorem: that is, f is continuous and maps the unit d-cube to itself. The Brouwer fixed-point theorem guarantees that f has a fixed point, but the proof is not constructive. Various algorithms have been devised for computing an approximate fixed point. Such algorithms are used in economics for computing a market equilibrium, in game theory for computing a Nash equilibrium, and in dynamic system analysis.

Definitions
The unit interval is denoted by E := [0, 1], and the unit d-dimensional cube is denoted by E^d. A continuous function f is defined on E^d (from E^d to itself). Often, it is assumed that f is not only continuous but also Lipschitz continuous, that is, for some constant L, |f(x) - f(y)| <= L * |x - y| ...
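
As a minimal sketch of the idea (not one of the specialized algorithms referred to above), the code below runs plain fixed-point iteration x_{k+1} = f(x_k) on the unit d-cube. This converges when f is a contraction, i.e. Lipschitz continuous with L < 1, while general Brouwer maps need the more elaborate algorithms discussed in the article; the example map is an assumption chosen only for illustration.

    import numpy as np

    def fixed_point_iteration(f, x0, tol=1e-8, max_iter=10_000):
        # Iterate x_{k+1} = f(x_k) until two successive iterates agree to
        # within tol.  Convergence is guaranteed when f is a contraction
        # (Lipschitz constant L < 1) on the unit d-cube; for L >= 1 the
        # iteration may cycle or diverge, which is why dedicated
        # fixed-point algorithms exist for general Brouwer maps.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            x_next = f(x)
            if np.max(np.abs(x_next - x)) < tol:
                return x_next
            x = x_next
        raise RuntimeError("no approximate fixed point found")

    # Illustrative contraction on the unit square [0, 1]^2 (L <= 0.5).
    f = lambda x: np.array([0.5 * np.cos(x[1]), 0.25 + 0.5 * np.sin(x[0])])
    print(fixed_point_iteration(f, x0=[0.5, 0.5]))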



Fixed Point (mathematics)
In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Any set of fixed points of a transformation is also an invariant set.

Fixed point of a function
Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c. In particular, f cannot have any fixed point if its domain is disjoint from its codomain. If f is defined on the real numbers, it corresponds, in graphical terms, to a curve in the Euclidean plane, and each fixed point corresponds to an intersection of the curve with the line y = x. For example, if f is defined on the real numbers by f(x) = x^2 - 3x + 4, then 2 is a fixed point of f, because f(2) = 2. Not all functions have fixed points: for example, ...
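
A short check of the worked example above: a fixed point of f(x) = x^2 - 3x + 4 must satisfy f(x) = x, i.e. x^2 - 4x + 4 = (x - 2)^2 = 0, so x = 2 is the only (double) fixed point. The snippet below merely verifies this computation.

    import numpy as np

    f = lambda x: x**2 - 3*x + 4

    # Fixed points solve f(x) = x, i.e. x^2 - 4x + 4 = (x - 2)^2 = 0.
    print(np.roots([1, -4, 4]))   # [2. 2.] -- a double root at x = 2
    print(f(2))                   # 2, confirming f(2) = 2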




Ellipsoid Method
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.

History
The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time ...
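
A minimal sketch of the iteration, assuming the standard central-cut update (the starting ball and the example quadratic are illustrative assumptions, not from the text): at each step a subgradient at the ellipsoid's center defines a half-space that must contain a minimizer, and the ellipsoid is replaced by the smallest ellipsoid containing that half, whose volume is smaller by a fixed factor.

    import numpy as np

    def ellipsoid_minimize(f, grad, center, radius, steps=200):
        # Basic central-cut ellipsoid method in dimension n >= 2.  The
        # ellipsoid {x : (x - c)^T P^{-1} (x - c) <= 1} starts as a ball
        # of the given radius (assumed to contain a minimizer); each step
        # keeps the half containing the minimizer and encloses it in the
        # minimum-volume ellipsoid.
        n = len(center)
        c = np.array(center, dtype=float)
        P = radius ** 2 * np.eye(n)
        best_x, best_val = c.copy(), f(c)
        for _ in range(steps):
            g = grad(c)
            if np.allclose(g, 0):            # already at a minimizer
                break
            g = g / np.sqrt(g @ P @ g)       # normalize w.r.t. the ellipsoid
            Pg = P @ g
            c = c - Pg / (n + 1)
            P = (n ** 2 / (n ** 2 - 1)) * (P - (2 / (n + 1)) * np.outer(Pg, Pg))
            if f(c) < best_val:
                best_x, best_val = c.copy(), f(c)
        return best_x, best_val

    # Illustrative convex problem: minimize |x - (0.3, -0.2)|^2.
    f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
    grad = lambda x: np.array([2 * (x[0] - 0.3), 2 * (x[1] + 0.2)])
    print(ellipsoid_minimize(f, grad, center=[0.0, 0.0], radius=2.0))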


Root-finding Algorithm
In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros. For functions from the real numbers to real numbers or from the complex numbers to the complex numbers, these are expressed either as floating-point numbers without error bounds or as floating-point values together with error bounds. The latter, approximations with error bounds, are equivalent to small isolating intervals for real roots or disks for complex roots. Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) - g(x). Thus root-finding algorithms can be used to solve any equation of continuous functions. However, most root-finding algorithms do not guarantee that they will find all roots of a function, and if such an algorithm does not f ...
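
Bisection is the textbook example of a root-finding algorithm that returns an approximation together with an error bound: after k halvings of an interval on which the continuous function changes sign, the midpoint is within (b - a) / 2^(k+1) of a root. A minimal sketch (the function and interval below are illustrative assumptions):

    def bisect(f, a, b, tol=1e-10, max_iter=200):
        # Find a root of a continuous f in [a, b] by repeated halving.
        # Requires a sign change f(a) * f(b) <= 0; the returned midpoint
        # is an approximation with a guaranteed error bound of (b - a) / 2
        # for the final interval.
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > 2 * tol and max_iter > 0:
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0:
                return m
            if fa * fm < 0:
                b, fb = m, fm
            else:
                a, fa = m, fm
            max_iter -= 1
        return 0.5 * (a + b)

    # Example: the positive root of x^2 - 2, i.e. sqrt(2).
    print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # 1.41421356...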


PPAD-complete
In computer science, PPAD ("Polynomial Parity Arguments on Directed graphs") is a complexity class introduced by Christos Papadimitriou in 1994. PPAD is a subclass of TFNP based on functions that can be shown to be total by a parity argument. The class attracted significant attention in the field of algorithmic game theory because it contains the problem of computing a Nash equilibrium: this problem was shown to be complete for PPAD by Daskalakis, Goldberg and Papadimitriou for games with at least 3 players, and later extended by Chen and Deng to 2-player games.

Definition
PPAD is a subset of the class TFNP, the class of function problems in FNP that are guaranteed to be total. The formal definition of TFNP is as follows: a binary relation P(x, y) is in TFNP if and only if there is a deterministic polynomial-time algorithm that can determine whether P(x, y) holds given both x and y, and for every x, there ex ...
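
As an illustration of the TFNP definition quoted above (using a toy pigeonhole relation, which is total but is not itself a PPAD-complete problem), the sketch below shows the two ingredients: a candidate solution y can be checked in polynomial time, and every instance x is guaranteed to have one.

    def check_P(x, y):
        # Polynomial-time verifier for a toy total relation: x encodes a
        # function g from {0,...,n} to {0,...,n-1} as a list of n+1 values,
        # and y = (i, j) claims a collision g(i) == g(j) with i != j.  By
        # the pigeonhole principle every such x has a valid y, so the
        # relation is total; checking a given y takes polynomial time.
        i, j = y
        n = len(x) - 1
        return (i != j and 0 <= i <= n and 0 <= j <= n
                and all(0 <= v <= n - 1 for v in x) and x[i] == x[j])

    x = [2, 0, 3, 2, 1]          # maps {0,...,4} into {0,...,3}
    print(check_P(x, (0, 3)))    # True: positions 0 and 3 both map to 2
    print(check_P(x, (0, 1)))    # False: x[0] != x[1]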


