Danskin's Theorem

Danskin's Theorem
In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form f(x) = \max_{z \in Z} \phi(x,z). The theorem has applications in optimization, where it is sometimes used to solve minimax problems. The original theorem, given by J. M. Danskin in his 1967 monograph, provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function. An extension to more general conditions was proven in 1971 by Dimitri Bertsekas.

Statement
The following version is proven in "Nonlinear programming" (1991). Suppose \phi(x,z) is a continuous function of two arguments, \phi : \R^n \times Z \to \R, where Z \subset \R^m is a compact set. Further assume that \phi(x,z) is convex in x for every z \in Z. Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function f(x) = \max_{z \in Z} \phi(x,z). To state these results, we define the set of ...
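
As a rough numerical sketch of the gradient formula the theorem leads to (when the maximizer z^* is unique, the gradient of f at x equals \nabla_x \phi(x, z^*)), the toy \phi below and the finite grid standing in for Z are invented for illustration and are not from Danskin's or Bertsekas's treatment:

    import numpy as np

    # Hypothetical example: phi(x, z) = z*x - z**2, maximized over a finite grid Z.
    Z = np.linspace(-2.0, 2.0, 401)

    def phi(x, z):
        return z * x - z ** 2

    def f(x):
        return np.max(phi(x, Z))

    x0 = 1.0
    z_star = Z[np.argmax(phi(x0, Z))]        # maximizer at x0 (unique here)
    danskin_grad = z_star                    # d/dx phi(x, z*) = z*
    numeric_grad = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6

    print(danskin_grad, numeric_grad)        # both close to 0.5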

Convex Analysis
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

Convex sets
A subset C \subseteq X of some vector space X is convex if it satisfies any of the following equivalent conditions:
#If 0 \leq r \leq 1 is real and x, y \in C, then r x + (1 - r) y \in C.
#If 0 < r < 1 is real and x, y \in C with x \neq y, then r x + (1 - r) y \in C.
A map f is a convex function if
:f(r x + (1 - r) y) \leq r f(x) + (1 - r) f(y)
holds for any real 0 < r < 1 and any x, y in its domain. A convex function f is called proper if \operatorname{dom} f \neq \varnothing and f(x) > -\infty for every x \in \operatorname{dom} f. Alternatively, this means that there exists some x in the domain of f at which f(x) \in \mathbb{R} and f is also never equal to -\infty. In words, a function is proper if its domain is not empty, it never takes on the value -\infty, and it is not identically equal to +\infty. If f : \mathbb{R}^n \to [-\infty, \infty] is a proper convex function, then there exist some vector b \in \mathbb{R}^n and some r \in \mathbb{R} such that
:f(x) \geq x \cdot b - r
for every x where ...
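
A small sketch of the first segment condition above, using the closed unit disk in the plane as the convex set C; the set and the sample points are chosen purely for illustration:

    import numpy as np

    # C = closed unit disk in R^2 (a convex set); membership test ||p|| <= 1.
    def in_C(p):
        return np.linalg.norm(p) <= 1.0 + 1e-12

    x = np.array([0.8, 0.0])
    y = np.array([-0.3, 0.9])            # both points lie in C
    rs = np.linspace(0.0, 1.0, 11)

    # Every point r*x + (1-r)*y on the segment should again lie in C.
    print(all(in_C(r * x + (1 - r) * y) for r in rs))   # True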

Semi-differentiability
In calculus, a branch of mathematics, the notions of one-sided differentiability and semi-differentiability of a real-valued function ''f'' of a real variable are weaker than differentiability. Specifically, the function ''f'' is said to be right differentiable at a point ''a'' if, roughly speaking, a derivative can be defined as the function's argument ''x'' moves to ''a'' from the right, and left differentiable at ''a'' if the derivative can be defined as ''x'' moves to ''a'' from the left.

One-dimensional case
In mathematics, a left derivative and a right derivative are derivatives (rates of change of a function) defined for movement in one direction only (left or right; that is, to lower or higher values) by the argument of a function.

Definitions
Let ''f'' denote a real-valued function defined on a subset ''I'' of the real numbers. If ''a'' \in I is a limit point of I \cap [a, \infty) and the one-sided limit
:\partial_+ f(a) := \lim_{x \to a^+} \frac{f(x) - f(a)}{x - a}
exists as a real number, then ''f'' is called right differe ...
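
For instance, the one-sided difference quotients of the absolute value function at a = 0 approach +1 from the right and -1 from the left; the step sizes below are arbitrary:

    f = abs
    a = 0.0
    hs = [10.0 ** (-k) for k in range(1, 8)]

    right = [(f(a + h) - f(a)) / h for h in hs]      # -> +1 (right derivative)
    left  = [(f(a - h) - f(a)) / (-h) for h in hs]   # -> -1 (left derivative)

    print(right[-1], left[-1])   # 1.0 -1.0 : the two one-sided derivatives differ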

Hotelling's Lemma
Hotelling's lemma is a result in microeconomics that relates the supply of a good to the maximum profit of the producer. It was first shown by Harold Hotelling, and is widely used in the theory of the firm. Specifically, it states: ''The rate of an increase in maximized profits with respect to a price increase is equal to the net supply of the good.'' In other words, if the firm makes its choices to maximize profits, then the choices can be recovered from the knowledge of the maximum profit function.

Formal Statement
Let p denote a variable price, and w be a constant cost of each input. Let x(p) be a mapping from the price to a set of feasible input choices X. Let f : X \to \mathbb{R} be the production function, and y(p) \triangleq f(x(p)) be the net supply. The maximum profit can be written as
:\pi(p) = \max_x \, p \cdot y(p) - w \cdot x(p).
Then the lemma states that if the profit \pi is differentiable at p, the maximizing net supply is given by
:y^*(p) = \frac{\partial \pi(p)}{\partial p}.

Proof ...
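
A numerical sketch with an invented single-input firm (production f(x) = \sqrt{x}, input cost w), where the lemma's conclusion can be checked against the closed form y^*(p) = p/(2w):

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical single-input firm: production f(x) = sqrt(x), input cost w.
    w = 2.0

    def profit(x, p):
        return p * np.sqrt(x) - w * x

    def pi(p):  # maximized profit pi(p) = max_x p*sqrt(x) - w*x  (= p**2 / (4*w))
        res = minimize_scalar(lambda x: -profit(x, p), bounds=(0.0, 100.0), method="bounded")
        return -res.fun

    p0 = 4.0
    dpi_dp = (pi(p0 + 1e-4) - pi(p0 - 1e-4)) / 2e-4   # numerical derivative of max profit
    y_star = p0 / (2 * w)                              # analytic maximizing net supply sqrt(x*)
    print(dpi_dp, y_star)                              # both close to 1.0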

Envelope Theorem
In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models. The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions \left\{ f(x, \cdot) \right\}_{x \in X} that are optimized.

Statement
Let f(x,\alpha) and g_j(x,\alpha), j = 1,2, \ldots, m be real-valued continuously differentiable functions on \mathbb{R}^{n+l}, where x \in \mathbb{R}^n are choice variables and \alpha \in \mathbb{R}^l are parameters, and consider the problem of choosing x, for a given \alpha, so as to:
: \max_x f(x, \alpha) subject to g_j(x,\alpha) \geq 0, j = 1,2, \ldots, m an ...
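
A minimal unconstrained sketch (no constraints g_j, toy objective invented for the example): for f(x, \alpha) = -x^2 + \alpha x the maximizer is x^*(\alpha) = \alpha/2, and the envelope theorem predicts V'(\alpha) = \partial f/\partial \alpha evaluated at x^*(\alpha):

    # Toy unconstrained problem: f(x, a) = -x**2 + a*x.
    # Maximizer x*(a) = a/2, value V(a) = a**2/4, and the envelope theorem gives
    # V'(a) = df/da at x*(a), i.e. V'(a) = x*(a).

    def V(a):
        x_star = a / 2.0
        return -x_star ** 2 + a * x_star

    a0 = 3.0
    numeric = (V(a0 + 1e-6) - V(a0 - 1e-6)) / 2e-6   # ~ 1.5
    envelope = a0 / 2.0                               # df/da at the maximizer
    print(numeric, envelope)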

Maximum Theorem
The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959. The theorem is primarily used in mathematical economics and optimal control.

Statement of theorem
Maximum Theorem. Let X and \Theta be topological spaces, f : X \times \Theta \to \mathbb{R} be a continuous function on the product X \times \Theta, and C : \Theta \rightrightarrows X be a compact-valued correspondence such that C(\theta) \ne \emptyset for all \theta \in \Theta. Define the ''marginal function'' (or ''value function'') f^* : \Theta \to \mathbb{R} by
:f^*(\theta) = \sup\{ f(x,\theta) : x \in C(\theta) \}
and the ''set of maximizers'' C^* : \Theta \rightrightarrows X by
:C^*(\theta) = \operatorname{arg}\max\{ f(x,\theta) : x \in C(\theta) \} = \{ x \in C(\theta) : f(x,\theta) = f^*(\theta) \}.
If C is continuous (i.e. both upper and lower hemicontinuous) at \theta, then f^* is continuous and C^* is upper hemicontinuous with nonempty and compact values. As a consequence, the \sup may be replaced by \max ...
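
A toy instance, chosen only for illustration: with f(x, \theta) = \theta x - x^2 and C(\theta) = [0, \theta], the marginal function is \theta^2/4 and the maximizer is \theta/2, both varying continuously with \theta:

    import numpy as np

    # Toy instance: f(x, theta) = theta*x - x**2 with constraint set C(theta) = [0, theta].
    def f_star(theta, n=2001):
        xs = np.linspace(0.0, theta, n)          # grid over C(theta)
        vals = theta * xs - xs ** 2
        return vals.max(), xs[vals.argmax()]     # marginal value and (here unique) maximizer

    for theta in [0.5, 1.0, 1.5, 2.0]:
        v, x_max = f_star(theta)
        print(theta, round(v, 4), round(x_max, 4))   # value theta**2/4, maximizer theta/2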

Convex Hull
In geometry, the convex hull or convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around the subset. Convex hulls of open sets are open, and convex hulls of compact sets are compact. Every compact convex set is the convex hull of its extreme points. The convex hull operator is an example of a closure operator, and every antimatroid can be represented by applying this closure operator to finite sets of points. The algorithmic problems of finding the convex hull of a finite set of points in the plane or other low-dimensional Euclidean spaces, and its dual problem of intersecting half-spaces, are fundamental problems of com ...
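
A minimal sketch of the planar case using SciPy's ConvexHull wrapper around Qhull (the random points are arbitrary):

    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(0)
    points = rng.random((30, 2))          # 30 random points in the unit square

    hull = ConvexHull(points)
    print(hull.vertices)                  # indices of the points on the hull, in order
    print(hull.volume)                    # in 2-D, "volume" is the enclosed area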

Subdifferential
In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization. Let f : I \to \mathbb{R} be a real-valued convex function defined on an open interval of the real line. Such a function need not be differentiable at all points: for example, the absolute value function ''f''(''x'') = |''x''| is nondifferentiable when ''x'' = 0. However, as a graph of such a function illustrates, for any x_0 in the domain of the function one can draw a line which goes through the point (x_0, f(x_0)) and which is everywhere either touching or below the graph of ''f''. The slope of such a line is called a ''subderivative'' (because the line is under the graph of ''f'').

Definition
Rigorously, a ''subderivat ...
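
For example, the subdifferential of the absolute value function at 0 is the interval [-1, 1]; the check below simply tests the defining inequality f(y) \geq f(x_0) + c (y - x_0) on a sample grid (an illustration, not a proof):

    import numpy as np

    f = abs
    ys = np.linspace(-2.0, 2.0, 401)

    def is_subgradient(c, x0=0.0):
        # c is a subderivative at x0 iff f(y) >= f(x0) + c*(y - x0) for all y.
        return np.all(f(ys) >= f(x0) + c * (ys - x0) - 1e-12)

    print([c for c in (-1.5, -1.0, -0.3, 0.0, 0.7, 1.0, 1.5) if is_subgradient(c)])
    # -> [-1.0, -0.3, 0.0, 0.7, 1.0] : exactly the sampled slopes lying in [-1, 1]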

Directional Derivative
In mathematics, the directional derivative of a multivariable differentiable (scalar) function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity specified by v. The directional derivative of a scalar function ''f'' with respect to a vector v at a point (e.g., position) x may be denoted by any of the following:
:\nabla_{\mathbf{v}} f(\mathbf{x}) = f'_{\mathbf{v}}(\mathbf{x}) = D_{\mathbf{v}} f(\mathbf{x}) = Df(\mathbf{x})(\mathbf{v}) = \partial_{\mathbf{v}} f(\mathbf{x}) = \mathbf{v} \cdot \nabla f(\mathbf{x}) = \mathbf{v} \cdot \frac{\partial f(\mathbf{x})}{\partial \mathbf{x}}.
It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the curvilinear coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gateaux derivative.

Definition
The ''directional derivative'' of a scalar function
:f(\mathbf{x}) = f(x_1, x_2, \ldots, x_n)
along a vector
:\mathbf{v} = (v_1, \ldots, v_n)
is the function \nabla_{\mathbf{v}} f defined b ...
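
A small sketch comparing the dot-product form \mathbf{v} \cdot \nabla f with a difference quotient along \mathbf{v}; the sample function and direction are invented for the example:

    import numpy as np

    # Sample scalar function f(x, y) = x**2 * y and its gradient.
    def f(p):
        x, y = p
        return x ** 2 * y

    def grad_f(p):
        x, y = p
        return np.array([2 * x * y, x ** 2])

    p0 = np.array([1.0, 2.0])
    v = np.array([3.0, 4.0]) / 5.0                      # unit direction

    dot_form = v @ grad_f(p0)                            # v . grad f
    limit_form = (f(p0 + 1e-6 * v) - f(p0)) / 1e-6       # difference quotient along v
    print(dot_form, limit_form)                          # both ~ 3.2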

Gradient
In vector calculus, the gradient of a scalar-valued differentiable function of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p is the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:
:df = \nabla f \cdot d\mathbf{r}
where ''df'' is the total infinitesimal change in ''f'' for an infinitesimal displacement d\mathbf{r}, and is seen to be maximal when d\mathbf{r} is in the direction of the gradi ...
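
A minimal sketch of gradient ascent as mentioned above; the concave objective and step size are arbitrary choices for the example:

    import numpy as np

    # Gradient ascent on the concave function f(x, y) = -(x - 1)**2 - 2*(y + 0.5)**2.
    def grad(p):
        x, y = p
        return np.array([-2 * (x - 1), -4 * (y + 0.5)])

    p = np.zeros(2)
    for _ in range(200):
        p = p + 0.1 * grad(p)          # step in the direction of fastest increase

    print(p)                            # converges to the maximizer (1, -0.5)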

Differentiable Function
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp. If x_0 is an interior point in the domain of a function f, then f is said to be ''differentiable at'' x_0 if the derivative f'(x_0) exists. In other words, the graph of f has a non-vertical tangent line at the point (x_0, f(x_0)). f is said to be differentiable on a set if it is differentiable at every point of the set. f is said to be ''continuously differentiable'' if its derivative is also a continuous function over the domain of the function f. Generally speaking, f is said to be of class C^k if its first k derivatives f'(x), f''(x), \ldots, f^{(k)}(x) exist and are continuous over the domain of the func ...
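
One way to probe differentiability at a point numerically is to compare difference quotients from both sides; the two sample functions below are standard illustrations, not from the article:

    # Difference quotients from the right and left at x0 = 0:
    # g(x) = x * abs(x) is differentiable there (both sides -> 0),
    # while abs(x) has a kink (sides -> +1 and -1).
    def one_sided(f, x0, h=1e-7):
        return (f(x0 + h) - f(x0)) / h, (f(x0) - f(x0 - h)) / h

    print(one_sided(lambda x: x * abs(x), 0.0))   # (~0.0, ~0.0)  -> differentiable
    print(one_sided(abs, 0.0))                    # (~1.0, ~-1.0) -> not differentiable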

Convex Function
In mathematics, a real-valued function is called convex if the line segment between any two points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include the quadratic function x^2 and the exponential function e^x. In simple terms, a convex function refers to a function whose graph is shaped like a cup \cup, while a concave function's graph is shaped like a cap \cap. Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a st ...
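
A quick numerical spot-check of the chord characterization for the quadratic x^2; the random sampling is just for illustration and is not a proof:

    import numpy as np

    # Chord test for convexity of f(x) = x**2: the segment between two graph points
    # lies above or on the graph, i.e. f(r*x + (1-r)*y) <= r*f(x) + (1-r)*f(y).
    f = lambda x: x ** 2
    rng = np.random.default_rng(1)

    ok = True
    for _ in range(1000):
        x, y, r = rng.uniform(-5, 5), rng.uniform(-5, 5), rng.uniform(0, 1)
        ok &= f(r * x + (1 - r) * y) <= r * f(x) + (1 - r) * f(y) + 1e-12
    print(ok)   # True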

Theorem
In mathematics, a theorem is a statement that has been proved, or can be proved. The ''proof'' of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems. In the mainstream of mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice, or of a less powerful theory, such as Peano arithmetic. A notable exception is Wiles's proof of Fermat's Last Theorem, which involves the Grothendieck universes whose existence requires the addition of a new axiom to the set theory. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as ''theorems'' only the most important results, and use the terms ''lemma'', ''proposition'' and ...