Gittins Index
The Gittins index is a measure of the reward that can be achieved through a given stochastic process with certain properties: the process has an ultimate termination state and, at each intermediate state, offers the option of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.

Terminology

To illustrate the theory we can take two examples from a developing sector, such as electricity generating technologies: wind power and wave power. If we are presented with the two technologies when they are both proposed as ideas, we cannot say which will be better in the long run, as we have as yet no data on which to base our judgments. It would be easy to say that wave power would be too problematic to develop, as it seems easier to put up many wind turbines than to make the long floating ...
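One common setting in which the index is computed is the Bayesian Bernoulli bandit: an arm whose unknown success probability carries a Beta(a, b) posterior. The sketch below is my own illustration, not taken from the article; it approximates the Gittins index by the calibration idea, searching for the constant per-step payment lam of a "standard" arm at which pulling the real arm and retiring are equally attractive, using a finite-horizon truncation of the dynamic program. The function name, discount factor, horizon cutoff and tolerance are all illustrative choices.

from functools import lru_cache

def gittins_index_bernoulli(a, b, gamma=0.9, horizon=50, tol=1e-4):
    # Approximate Gittins index of a Bernoulli arm with a Beta(a, b) posterior,
    # calibrated against a standard arm paying lam per step (worth lam / (1 - gamma)
    # forever). The horizon truncation makes this an approximation, not the exact index.
    def continue_minus_retire(lam):
        retire = lam / (1.0 - gamma)

        @lru_cache(maxsize=None)
        def value(s, f, steps):
            # Optimal value of the continue-or-retire problem in posterior state (s, f).
            if steps == 0:
                return retire
            p = s / (s + f)  # posterior mean probability of success
            cont = (p * (1.0 + gamma * value(s + 1, f, steps - 1))
                    + (1.0 - p) * gamma * value(s, f + 1, steps - 1))
            return max(retire, cont)

        return value(a, b, horizon) - retire

    lo, hi = 0.0, 1.0  # a Bernoulli reward lies in [0, 1], so the index does too
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if continue_minus_retire(lam) > 1e-12:
            lo = lam  # the arm is still worth pulling, so the index exceeds lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

# A fresh, unexplored arm under a uniform Beta(1, 1) prior: the index exceeds the
# posterior mean of 0.5 because trying the arm also has information value.
print(gittins_index_bernoulli(1, 1))

The returned value is the per-step reward of the standard arm at the point of indifference; a larger index means a state is more worth exploring, which is exactly the comparison the wind-versus-wave example calls for.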


Stochastic Process
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, cryptography and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, ...
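As a concrete, self-contained illustration (not part of the excerpt), the Wiener process mentioned above can be simulated by summing independent Gaussian increments; the step count, interval and seed below are arbitrary choices.

import random

def wiener_path(n_steps=1000, t_total=1.0, seed=0):
    # One sample path of a standard Wiener process on [0, t_total],
    # built from independent N(0, dt) increments.
    rng = random.Random(seed)
    dt = t_total / n_steps
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    return w

path = wiener_path()
print(len(path), path[-1])  # 1001 points; the endpoint is approximately N(0, 1) distributed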


Lagrange Multiplier
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function. The method can be summarized as follows: in order to find the maximum or minimum of a function f(x) subject to the equality constraint g(x) = 0, form the Lagrangian function \mathcal{L}(x, \lambda) = f(x) + \lambda g(x) and find the stationary points of \mathcal{L} considered as a function of x and the Lagrange mu ...
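A small worked example (my own illustration, not part of the excerpt): to maximize f(x, y) = xy subject to g(x, y) = x + y - 1 = 0, form

\mathcal{L}(x, y, \lambda) = xy + \lambda (x + y - 1).

Setting the partial derivatives to zero gives y + \lambda = 0, x + \lambda = 0 and x + y - 1 = 0, hence x = y = 1/2 with \lambda = -1/2, and the constrained maximum is f(1/2, 1/2) = 1/4.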


Decision Theory
Decision theory (or the theory of choice; not to be confused with choice theory) is a branch of applied probability theory concerned with making decisions by assigning probabilities to various factors and numerical consequences to the outcomes. There are three branches of decision theory:
1. Normative decision theory: concerned with the identification of optimal decisions, where optimality is often determined by considering an ideal decision-maker who is able to calculate with perfect accuracy and is in some sense fully rational.
2. Prescriptive decision theory: concerned with describing observed behaviors through the use of conceptual models, under the assumption that those making the decisions are behaving under some consistent rules.
3. Descriptive decision theory: analyzes how individuals actually make the decisions that they do.
Decision theory is closely related to the field of game theory and is an interdisciplinary topic, studied by econom ...


Richard R
Richard is a male given name. It originates, via Old French, from Old Frankish and is a compound of the words descending from Proto-Germanic ''*rīk-'' 'ruler, leader, king' and ''*hardu-'' 'strong, brave, hardy', and it therefore means 'strong in rule'. Nicknames include "Richie", "Dick", "Dickon", "Dickie", "Rich", "Rick", "Rico", "Ricky", and more. Richard is a common English, German and French male name. It is also used in many more languages, particularly Germanic ones such as Norwegian, Danish, Swedish, Icelandic, and Dutch, as well as other languages including Irish, Scottish, Welsh and Finnish. Richard is cognate with variants of the name in other European languages, such as the Swedish "Rickard", the Catalan "Ricard" and the Italian "Riccardo", among others (see comprehensive variant list below).

People named Richard

Multiple people with the same name:
* Richard Andersen (other)
* Richard Anderson (other)
* Richard Cartwright (other)
* Ri ...




Don Berry (statistician)
Donald Arthur Berry (born May 26, 1940) is an American statistician and a practitioner and proponent of Bayesian statistics in medical science. He was the chairman of the Department of Biostatistics and Applied Mathematics at the University of Texas M. D. Anderson Cancer Center from 1999 to 2010, where he played a role in the use of Bayesian methods to develop innovative, adaptive clinical trials. He is best known for the development of statistical theory relating to the design of clinical trials. He is a fellow of the American Statistical Association and the Institute of Mathematical Statistics. He founded Berry Consultants, a statistical consulting group, with Scott Berry in 2000. ("About Us", Berry Consultants. Retrieved 11 May 2009.)


Biography

Berry was born in ...

Michael Katehakis
Michael N. Katehakis (Greek: Μιχαήλ Ν. Κατεχάκης; born 1952) is a Professor of Management Science at Rutgers University. He is noted for his work on the Markov decision process, the Gittins index, the multi-armed bandit, Markov chains and other related fields.

Early life

Katehakis was born and grew up in Heraklion, Greece. He received his BS degree from the School of Physics and Mathematics of the University of Athens, Greece, an M.A. degree in Statistics from the University of South Florida, an M.S. degree in Mathematical Methods in Engineering and Operations Research, and a Ph.D. degree in Operations Research from Columbia University. His dissertation advisor at Columbia was Cyrus Derman. While a graduate student at Columbia, Katehakis worked on developing mathematical simulation programs for the Two-rotor engine and the Tri-rotor engine projects under George Doundoulakis.

Career

After a member-of-technical-staff position at Bell Laboratories, in 1981 he joined the Dep ...


Domain Of A Function
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by \operatorname{dom}(f) or \operatorname{dom} f, where f is the function. More precisely, given a function f\colon X\to Y, the domain of f is X. Note that in modern mathematical language, the domain is part of the definition of a function rather than a property of it. In the special case that X and Y are both subsets of \R, the function can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis. For a function f\colon X\to Y, the set Y is called the codomain, and the set of values attained by the function (which is a subset of Y) is called its range or image. Any function can be restricted to a subset of its domain. The restriction of f \colon X \to Y to A, where A\subseteq X, is written as \left. f \right|_A \colon A \to Y.

Natural domain

If a real function f is giv ...
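For a concrete example (added here for illustration): the formula f(x) = 1/x defines a real function whose natural domain is \R \setminus \{0\}, since the expression is undefined at 0; its restriction \left. f \right|_{(0, \infty)} \colon (0, \infty) \to \R has range (0, \infty).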


Gaussian Elimination
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855), although some special cases of the method, albeit presented without proof, were known to Chinese mathematicians as early as circa 179 AD. To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
* Swapping two rows,
* Multiplying a row by a nonzero number,
* Adding a multiple of one row to another row (subtraction can be achieved by multiplying one row with -1 and adding ...
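The following sketch (my own illustration; the function and variable names are arbitrary) applies the row operations above to solve a small square system, using partial pivoting and back substitution. A production solver would also need to handle singular or ill-conditioned matrices.

def solve_by_row_reduction(a, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix [A | b]
    for col in range(n):
        # Swap in the row with the largest entry in this column (partial pivoting).
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Add multiples of the pivot row to zero out the entries below it.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

print(solve_by_row_reduction([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]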


Linear Programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polytope where ...
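A minimal sketch, assuming SciPy is available (the problem data below are made up for illustration): linprog minimizes, so a maximization objective is negated.

from scipy.optimize import linprog

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, with x, y >= 0.
c = [-3.0, -5.0]                                 # negated because linprog minimizes
A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]      # left-hand sides of the <= constraints
b_ub = [4.0, 12.0, 18.0]                         # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum (2, 6) with value 36, attained at a vertex of the feasible polytope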


Markov Chain
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs ''now''." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics. Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability dist ...
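A toy two-state weather chain (transition probabilities invented for illustration) makes the "what happens next depends only on now" property concrete:

import random

P = {  # transition probabilities out of each state
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def simulate(start, n_steps, seed=0):
    # Sample a path of the discrete-time chain: the next state is drawn
    # using only the current state's row of transition probabilities.
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path

path = simulate("sunny", 100000)
print(path.count("sunny") / len(path))  # long-run fraction of sunny days, roughly 5/6 for these numbers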


Dynamic Programming
Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have ''optimal substructure''. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. (Cormen, T. H.; Leiserson, C. E.; Rives ...)
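A short sketch of optimal substructure (my own example in the spirit of the rod-cutting problem; the price table is illustrative): the best revenue for a rod of length n is the best over a first cut of length i plus the best revenue for the remaining length n - i, so each sub-problem's optimal value is computed once and reused.

from functools import lru_cache

prices = [1, 5, 8, 9, 10, 17, 17, 20]  # prices[i] is the price of a piece of length i + 1

@lru_cache(maxsize=None)
def best_revenue(n):
    # Optimal substructure: the answer for n reuses the optimal answers
    # to the smaller sub-problems n - i, memoized by lru_cache.
    if n == 0:
        return 0
    return max(prices[i - 1] + best_revenue(n - i)
               for i in range(1, min(n, len(prices)) + 1))

print(best_revenue(8))  # 22 for this price table (cut into lengths 2 and 6)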