In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision which both optimizes some criteria chosen by the decision maker, and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in a broad range of areas, from finance to transportation to energy optimization.


Two-stage problems

The basic idea of two-stage stochastic programming is that (optimal) decisions should be based on data available at the time the decisions are made and cannot depend on future observations. The two-stage formulation is widely used in stochastic programming. The general formulation of a two-stage stochastic programming problem is given by
: \min_{x\in X}\left\{ g(x) = f(x) + E_{\xi}[Q(x,\xi)] \right\}
where Q(x,\xi) is the optimal value of the second-stage problem
: \min_{y}\left\{ q(y,\xi) \mid T(\xi)x + W(\xi)y = h(\xi) \right\}.

The classical two-stage linear stochastic programming problem can be formulated as
: \begin{array}{llr}
\min\limits_{x\in \mathbb{R}^n} & g(x) = c^T x + E_{\xi}[Q(x,\xi)] & \\
\text{subject to} & Ax = b & \\
& x \geq 0 &
\end{array}
where Q(x,\xi) is the optimal value of the second-stage problem
: \begin{array}{llr}
\min\limits_{y\in \mathbb{R}^m} & q(\xi)^T y & \\
\text{subject to} & T(\xi)x + W(\xi)y = h(\xi) & \\
& y \geq 0 &
\end{array}

In this formulation x\in \mathbb{R}^n is the first-stage decision variable vector, y\in \mathbb{R}^m is the second-stage decision variable vector, and \xi(q,T,W,h) contains the data of the second-stage problem. At the first stage we have to make a "here-and-now" decision x before the realization of the uncertain data \xi, viewed as a random vector, is known. At the second stage, after a realization of \xi becomes available, we optimize our behavior by solving an appropriate optimization problem.

At the first stage we optimize (minimize in the above formulation) the cost c^T x of the first-stage decision plus the expected cost of the (optimal) second-stage decision. We can view the second-stage problem simply as an optimization problem which describes our supposedly optimal behavior when the uncertain data is revealed, or we can consider its solution as a recourse action, where the term Wy compensates for a possible inconsistency of the system Tx\leq h and q^T y is the cost of this recourse action.

The considered two-stage problem is ''linear'' because the objective functions and the constraints are linear. Conceptually this is not essential, and one can consider more general two-stage stochastic programs. For example, if the first-stage problem is integer, one could add integrality constraints to the first-stage problem so that the feasible set is discrete. Non-linear objectives and constraints could also be incorporated if needed.
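
A minimal numerical sketch (not part of the original article) of how the two-stage objective can be evaluated for a fixed first-stage decision: for each realization of \xi the second-stage (recourse) problem is solved as a small LP, and the recourse costs are averaged. The data below (a production cost, an emergency-purchase cost, and three demand scenarios with probabilities) are made up purely for illustration, \xi is assumed to take finitely many values as discussed in the discretization section below, and SciPy's linprog stands in as the LP solver.

```python
# Sketch: evaluate g(x) = c^T x + sum_k p_k Q(x, xi_k) for a toy recourse problem.
# Q(x, xi) is the optimal value of a tiny second-stage LP solved with SciPy.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0])           # first-stage cost of producing x units (made-up)
q = np.array([2.0, 0.0])      # second-stage costs: emergency purchase y1, disposal y2
scenarios = [1.0, 2.0, 3.0]   # possible demands xi_k (made-up)
probs = [0.3, 0.4, 0.3]       # scenario probabilities p_k

def Q(x, demand):
    """Optimal value of the second-stage LP: min q^T y  s.t.  y1 - y2 = demand - x,  y >= 0."""
    res = linprog(q, A_eq=np.array([[1.0, -1.0]]), b_eq=np.array([demand - x]),
                  bounds=[(0, None), (0, None)], method="highs")
    assert res.success
    return res.fun

def g(x):
    """First-stage cost plus expected recourse cost."""
    return c[0] * x + sum(p * Q(x, d) for p, d in zip(probs, scenarios))

for x in [0.0, 1.0, 2.0, 3.0]:
    print(f"x = {x:.1f}  ->  g(x) = {g(x):.3f}")
```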


Distributional assumption

The formulation of the above two-stage problem assumes that the second-stage data \xi is modeled as a random vector with a ''known'' probability distribution. This would be justified in many situations. For example, the distribution of \xi could be inferred from historical data if one assumes that the distribution does not significantly change over the considered period of time. Also, the empirical distribution of the sample could be used as an approximation to the distribution of the future values of \xi. If one has an a priori model for \xi, one could obtain the posterior distribution via a Bayesian update.


Discretization

To solve the two-stage stochastic problem numerically, one often needs to assume that the random vector \xi has a finite number of possible realizations, called ''scenarios'', say \xi_1,\dots,\xi_K, with respective probability masses p_1,\dots,p_K. Then the expectation in the first-stage problem's objective function can be written as the summation
: E_{\xi}[Q(x,\xi)] = \sum\limits_{k=1}^{K} p_k Q(x,\xi_k)
and, moreover, the two-stage problem can be formulated as one large linear programming problem (this is called the deterministic equivalent of the original problem; see the corresponding section below).

When \xi has an infinite (or very large) number of possible realizations the standard approach is to represent this distribution by scenarios. This approach raises three questions, namely:
# How to construct scenarios (see the section on scenario construction below);
# How to solve the deterministic equivalent. Optimizers such as CPLEX and GLPK can solve large linear/nonlinear problems. The NEOS Server, hosted at the University of Wisconsin, Madison, allows free access to many modern solvers. The structure of a deterministic equivalent is particularly amenable to decomposition methods, such as Benders' decomposition or scenario decomposition;
# How to measure the quality of the obtained solution with respect to the "true" optimum.
These questions are not independent. For example, the number of scenarios constructed will affect both the tractability of the deterministic equivalent and the quality of the obtained solutions.
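
A hedged sketch of one common way to build such a scenario representation when \xi is continuous: replace the distribution by K equiprobable points placed at mid-quantiles. The normal distribution and its parameters below are assumptions chosen only for illustration.

```python
# Sketch: discretize a continuous xi ~ Normal(100, 20^2) (made-up) into K
# equiprobable scenarios taken at the midpoint quantiles of the distribution.
import numpy as np
from scipy.stats import norm

K = 5
mu, sigma = 100.0, 20.0
u = (np.arange(K) + 0.5) / K                    # midpoints of K equal-probability bins
scenarios = norm.ppf(u, loc=mu, scale=sigma)    # scenario values xi_1, ..., xi_K
probs = np.full(K, 1.0 / K)                     # probabilities p_k = 1/K

print("scenarios:    ", np.round(scenarios, 2))
print("probabilities:", probs)
# E_xi[Q(x, xi)] is then approximated by sum_k probs[k] * Q(x, scenarios[k]).
```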


Stochastic linear programming

A stochastic linear program is a specific instance of the classical two-stage stochastic program. A stochastic LP is built from a collection of multi-period linear programs (LPs), each having the same structure but somewhat different data. The k^{th} two-period LP, representing the k^{th} scenario, may be regarded as having the following form:
: \begin{array}{lccccccc}
\text{minimize} & f^T x & + & g^T y & + & h_k^T z_k & & \\
\text{subject to} & Tx & + & Uy & & & = & r \\
& & & V_k y & + & W_k z_k & = & s_k \\
& x & , & y & , & z_k & \geq & 0
\end{array}
The vectors x and y contain the first-period variables, whose values must be chosen immediately. The vector z_k contains all of the variables for subsequent periods. The constraints Tx + Uy = r involve only first-period variables and are the same in every scenario. The other constraints involve variables of later periods and differ in some respects from scenario to scenario, reflecting uncertainty about the future.

Note that solving the k^{th} two-period LP is equivalent to assuming the k^{th} scenario in the second period with no uncertainty. In order to incorporate uncertainties in the second stage, one should assign probabilities to different scenarios and solve the corresponding deterministic equivalent.


Deterministic equivalent of a stochastic problem

With a finite number of scenarios, two-stage stochastic linear programs can be modelled as large linear programming problems. This formulation is often called the deterministic equivalent linear program, or abbreviated to deterministic equivalent. (Strictly speaking a deterministic equivalent is any mathematical program that can be used to compute the optimal first-stage decision, so these will exist for continuous probability distributions as well, when one can represent the second-stage cost in some closed form.)

For example, to form the deterministic equivalent to the above stochastic linear program, we assign a probability p_k to each scenario k=1,\dots,K. Then we can minimize the expected value of the objective, subject to the constraints from all scenarios:
: \begin{array}{lccccccccccccc}
\text{minimize} & f^\top x & + & g^\top y & + & p_1 h_1^\top z_1 & + & p_2 h_2^\top z_2 & + & \cdots & + & p_K h_K^\top z_K & & \\
\text{subject to} & Tx & + & Uy & & & & & & & & & = & r \\
& & & V_1 y & + & W_1 z_1 & & & & & & & = & s_1 \\
& & & V_2 y & & & + & W_2 z_2 & & & & & = & s_2 \\
& & & \vdots & & & & & & \ddots & & & & \vdots \\
& & & V_K y & & & & & & & + & W_K z_K & = & s_K \\
& x & , & y & , & z_1 & , & z_2 & , & \ldots & , & z_K & \geq & 0 \\
\end{array}
We have a different vector z_k of later-period variables for each scenario k. The first-period variables x and y are the same in every scenario, however, because we must make a decision for the first period before we know which scenario will be realized. As a result, the constraints involving just x and y need only be specified once, while the remaining constraints must be given separately for each scenario.
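
The following is a minimal sketch (with made-up data, not from the article) of assembling such a deterministic equivalent in code and handing it to a generic LP solver: the variable vector stacks the first-stage decision with one copy of the second-stage variables per scenario, and each scenario contributes its own block of constraints. SciPy's linprog stands in for a production solver.

```python
# Sketch: deterministic equivalent of a tiny two-stage problem as one LP.
# Variables are ordered as [x, y1_1, y2_1, ..., y1_K, y2_K].
import numpy as np
from scipy.optimize import linprog

scenarios = [1.0, 2.0, 3.0]      # demand realizations (made-up)
probs = [0.3, 0.4, 0.3]
K = len(scenarios)

c_first = 1.0                    # first-stage unit cost
q = np.array([2.0, 0.0])         # second-stage costs (emergency purchase, disposal)

# Objective: c*x + sum_k p_k * q^T y_k
obj = np.concatenate(([c_first], np.concatenate([p * q for p in probs])))

# One balance constraint per scenario: x + y1_k - y2_k = d_k
A_eq = np.zeros((K, 1 + 2 * K))
b_eq = np.array(scenarios)
for k in range(K):
    A_eq[k, 0] = 1.0             # the first-stage variable appears in every scenario
    A_eq[k, 1 + 2 * k] = 1.0     # y1_k
    A_eq[k, 2 + 2 * k] = -1.0    # y2_k

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (1 + 2 * K), method="highs")
print("optimal first-stage decision x* =", round(res.x[0], 3))
print("optimal expected total cost     =", round(res.fun, 3))
```

Note how the block structure mirrors the array above: the first-stage column is shared by all scenario rows, while each scenario's recourse variables appear only in its own rows, which is exactly the structure that decomposition methods exploit.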


Scenario construction

In practice it might be possible to construct scenarios by eliciting experts' opinions on the future. The number of constructed scenarios should be relatively modest so that the obtained deterministic equivalent can be solved with reasonable computational effort. It is often claimed that a solution that is optimal using only a few scenarios provides more adaptable plans than one that assumes a single scenario only. In some cases such a claim could be verified by a simulation. In theory, certain measures can guarantee that an obtained solution solves the original problem with reasonable accuracy. Typically in applications only the ''first stage'' optimal solution x^* has a practical value, since almost always a "true" realization of the random data will be different from the set of constructed (generated) scenarios.

Suppose \xi contains d independent random components, each of which has three possible realizations (for example, future realizations of each random parameter are classified as low, medium and high); then the total number of scenarios is K=3^d. Such ''exponential growth'' of the number of scenarios makes model development using expert opinion very difficult even for moderate values of d. The situation becomes even worse if some random components of \xi have continuous distributions.
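
A small illustration of this exponential growth (the three levels below are the made-up labels from the example above):

```python
# Sketch: with d independent components, each taking the levels low/medium/high,
# the full scenario set has 3**d elements; explicit enumeration is only feasible
# for small d.
from itertools import product

levels = ["low", "medium", "high"]
for d in (2, 5, 10, 20):
    print(f"d = {d:2d}  ->  K = 3**d = {3 ** d}")

d = 3
scenarios = list(product(levels, repeat=d))
print(len(scenarios), "scenarios for d = 3, e.g.", scenarios[0])
```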


Monte Carlo sampling and Sample Average Approximation (SAA) Method

A common approach to reduce the scenario set to a manageable size is by using Monte Carlo simulation. Suppose the total number of scenarios is very large or even infinite. Suppose further that we can generate a sample \xi^1,\xi^2,\dots,\xi^N of N replications of the random vector \xi. Usually the sample is assumed to be independent and identically distributed (an i.i.d. sample). Given a sample, the expectation function q(x)=E[Q(x,\xi)] is approximated by the sample average
: \hat{q}_N(x) = \frac{1}{N} \sum_{j=1}^{N} Q(x,\xi^j)
and consequently the first-stage problem is given by
: \begin{array}{llcl}
\min\limits_{x\in \mathbb{R}^n} & c^T x + \frac{1}{N} \sum\limits_{j=1}^{N} Q(x,\xi^j) & & \\
\text{subject to} & Ax &=& b \\
& x &\geq& 0
\end{array}
This formulation is known as the ''Sample Average Approximation'' method. The SAA problem is a function of the considered sample and in that sense is random. For a given sample \xi^1,\xi^2,\dots,\xi^N the SAA problem is of the same form as a two-stage stochastic linear programming problem with the scenarios \xi^j, j=1,\dots,N, each taken with the same probability p_j=\frac{1}{N}.
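
A hedged sketch of the SAA method for a toy problem in which Q(x,\xi) = 2\max(\xi - x, 0) is available in closed form, so no inner LP has to be solved; the demand distribution, the cost coefficient, and the sample sizes are all assumptions made only for illustration.

```python
# Sketch: draw N i.i.d. samples of xi and minimize the SAA objective
# c*x + (1/N) * sum_j Q(x, xi^j) over x >= 0.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
c = 1.0

def Q(x, xi):
    return 2.0 * np.maximum(xi - x, 0.0)                # closed-form recourse cost (toy)

def saa_solution(N):
    sample = rng.normal(100.0, 20.0, size=N)            # xi^1, ..., xi^N (i.i.d.)
    g_hat = lambda x: c * x + Q(x, sample).mean()       # SAA objective
    res = minimize_scalar(g_hat, bounds=(0.0, 300.0), method="bounded")
    return res.x, res.fun

for N in (10, 100, 10_000):
    x_N, v_N = saa_solution(N)
    print(f"N = {N:6d}:  SAA solution x = {x_N:7.2f},  SAA optimal value = {v_N:7.2f}")
# For this toy problem the true solution is the median of the demand (here 100),
# so the SAA solutions should cluster around it as N grows.
```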


Statistical inference

Consider the following stochastic programming problem
\min\limits_{x\in X}\left\{ g(x) = f(x) + E[Q(x,\xi)] \right\}
Here X is a nonempty closed subset of \mathbb{R}^n, \xi is a random vector whose probability distribution P is supported on a set \Xi \subset \mathbb{R}^d, and Q: X \times \Xi \rightarrow \mathbb{R}. In the framework of two-stage stochastic programming, Q(x,\xi) is given by the optimal value of the corresponding second-stage problem. Assume that g(x) is well defined and ''finite valued'' for all x\in X. This implies that for every x\in X the value Q(x,\xi) is finite almost surely. Suppose that we have a sample \xi^1,\dots,\xi^N of N realizations of the random vector \xi. This random sample can be viewed as historical data of N observations of \xi, or it can be generated by Monte Carlo sampling techniques. Then we can formulate a corresponding ''sample average approximation''
\min\limits_{x\in X}\left\{ \hat{g}_N(x) = f(x) + \frac{1}{N} \sum_{j=1}^{N} Q(x,\xi^j) \right\}
By the Law of Large Numbers we have that, under some regularity conditions, \frac{1}{N} \sum_{j=1}^{N} Q(x,\xi^j) converges pointwise with probability 1 to E[Q(x,\xi)] as N \rightarrow \infty. Moreover, under mild additional conditions the convergence is uniform. We also have E[\hat{g}_N(x)] = g(x), i.e., \hat{g}_N(x) is an ''unbiased'' estimator of g(x). Therefore, it is natural to expect that the optimal value and optimal solutions of the SAA problem converge to their counterparts of the true problem as N \rightarrow \infty.
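
A small numerical check (with made-up data, not from the article) of this pointwise convergence and unbiasedness: for the toy recourse cost Q(x,\xi) = 2\max(\xi - x, 0) with \xi normally distributed, the exact value E[Q(x,\xi)] is available in closed form, so the sample average can be compared against it as N grows.

```python
# Sketch: watch the sample average approach the exact expectation for a fixed x.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, x = 100.0, 20.0, 90.0          # assumed distribution of xi and a fixed x

# Closed form: E[(xi - x)+] = (mu - x) * Phi((mu - x)/sigma) + sigma * phi((mu - x)/sigma)
z = (mu - x) / sigma
true_value = 2.0 * ((mu - x) * norm.cdf(z) + sigma * norm.pdf(z))

for N in (10, 1_000, 100_000):
    sample = rng.normal(mu, sigma, size=N)
    estimate = 2.0 * np.maximum(sample - x, 0.0).mean()
    print(f"N = {N:7d}:  sample average = {estimate:8.4f}   (exact value {true_value:.4f})")
```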


Consistency of SAA estimators

Suppose the feasible set X of the SAA problem is fixed, i.e., it is independent of the sample. Let \vartheta^* and S^* be the optimal value and the set of optimal solutions, respectively, of the true problem and let \hat{\vartheta}_N and \hat{S}_N be the optimal value and the set of optimal solutions, respectively, of the SAA problem.
# Let g: X \rightarrow \mathbb{R} and \hat{g}_N: X \rightarrow \mathbb{R} be a sequence of (deterministic) real valued functions. The following two properties are equivalent:
#* for any \overline{x}\in X and any sequence \{x_N\}\subset X converging to \overline{x} it follows that \hat{g}_N(x_N) converges to g(\overline{x});
#* the function g(\cdot) is continuous on X and \hat{g}_N(\cdot) converges to g(\cdot) uniformly on any compact subset of X.
# If the objective of the SAA problem \hat{g}_N(x) converges to the true problem's objective g(x) with probability 1, as N \rightarrow \infty, uniformly on the feasible set X, then \hat{\vartheta}_N converges to \vartheta^* with probability 1 as N \rightarrow \infty.
# Suppose that there exists a compact set C \subset \mathbb{R}^n such that
#* the set S^* of optimal solutions of the true problem is nonempty and is contained in C;
#* the function g(x) is finite valued and continuous on C;
#* the sequence of functions \hat{g}_N(x) converges to g(x) with probability 1, as N \rightarrow \infty, uniformly in x\in C;
#* for N large enough the set \hat{S}_N is nonempty and \hat{S}_N \subset C with probability 1;
:: then \hat{\vartheta}_N \rightarrow \vartheta^* and \mathbb{D}(S^*,\hat{S}_N)\rightarrow 0 with probability 1 as N\rightarrow \infty. Note that \mathbb{D}(A,B) denotes the ''deviation of set A from set B'', defined as
\mathbb{D}(A,B) := \sup_{x\in A} \left\{ \inf_{x'\in B} \|x-x'\| \right\}
In some situations the feasible set X of the SAA problem is itself estimated; the corresponding SAA problem then takes the form
\min_{x\in X_N} \hat{g}_N(x)
where X_N is a subset of \mathbb{R}^n depending on the sample and therefore random. Nevertheless, consistency results for SAA estimators can still be derived under some additional assumptions:
# Suppose that there exists a compact set C \subset \mathbb{R}^n such that
#* the set S^* of optimal solutions of the true problem is nonempty and is contained in C;
#* the function g(x) is finite valued and continuous on C;
#* the sequence of functions \hat{g}_N(x) converges to g(x) with probability 1, as N \rightarrow \infty, uniformly in x\in C;
#* for N large enough the set \hat{S}_N is nonempty and \hat{S}_N \subset C with probability 1;
#* if x_N \in X_N and x_N converges with probability 1 to a point x, then x \in X;
#* for some point x \in S^* there exists a sequence x_N \in X_N such that x_N \rightarrow x with probability 1;
:: then \hat{\vartheta}_N \rightarrow \vartheta^* and \mathbb{D}(S^*,\hat{S}_N)\rightarrow 0 with probability 1 as N\rightarrow \infty.


Asymptotics of the SAA optimal value

Suppose the sample \xi^1,\dots,\xi^N is i.i.d. and fix a point x \in X. Then the sample average estimator \hat{g}_N(x) of g(x) is unbiased and has variance \frac{1}{N}\sigma^2(x), where \sigma^2(x) := \mathrm{Var}[Q(x,\xi)] is supposed to be finite. Moreover, by the central limit theorem we have that
: \sqrt{N}\left[\hat{g}_N(x) - g(x)\right] \xrightarrow{\mathcal{D}} Y_x
where \xrightarrow{\mathcal{D}} denotes convergence in ''distribution'' and Y_x has a normal distribution with mean 0 and variance \sigma^2(x), written as \mathcal{N}(0,\sigma^2(x)). In other words, \hat{g}_N(x) has ''asymptotically normal'' distribution, i.e., for large N, \hat{g}_N(x) has approximately normal distribution with mean g(x) and variance \frac{1}{N}\sigma^2(x). This leads to the following (approximate) 100(1-\alpha)\% confidence interval for g(x):
: \left[ \hat{g}_N(x) - z_{\alpha/2} \frac{\hat{\sigma}(x)}{\sqrt{N}},\; \hat{g}_N(x) + z_{\alpha/2} \frac{\hat{\sigma}(x)}{\sqrt{N}} \right]
where z_{\alpha/2} := \Phi^{-1}(1-\alpha/2) (here \Phi(\cdot) denotes the cdf of the standard normal distribution) and
: \hat{\sigma}^2(x) := \frac{1}{N-1} \sum_{j=1}^{N} \left[ Q(x,\xi^j) - \frac{1}{N} \sum_{j'=1}^{N} Q(x,\xi^{j'}) \right]^2
is the sample variance estimate of \sigma^2(x). That is, the error of estimation of g(x) is (stochastically) of order O(1/\sqrt{N}).
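
A hedged sketch of computing this confidence interval from a sample, again for the toy cost Q(x,\xi) = 2\max(\xi - x, 0) with an assumed normal \xi; only the sample average, the sample standard deviation, and a normal quantile are needed.

```python
# Sketch: approximate 100(1-alpha)% confidence interval for g(x) at a fixed x.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x, alpha, N = 90.0, 0.05, 5_000

values = 2.0 * np.maximum(rng.normal(100.0, 20.0, size=N) - x, 0.0)   # Q(x, xi^j)
g_hat = values.mean()                    # sample average estimate of g(x)
sigma_hat = values.std(ddof=1)           # sample standard deviation (N-1 in the denominator)
z = norm.ppf(1.0 - alpha / 2.0)          # z_{alpha/2}
half_width = z * sigma_hat / np.sqrt(N)

print(f"estimate of g(x): {g_hat:.4f}")
print(f"approx. 95% confidence interval: [{g_hat - half_width:.4f}, {g_hat + half_width:.4f}]")
```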


Applications and Examples


Biological applications

Stochastic dynamic programming is frequently used to model animal behaviour in such fields as behavioural ecology. Empirical tests of models of optimal foraging, life-history transitions such as fledging in birds, and egg laying in parasitoid wasps have shown the value of this modelling technique in explaining the evolution of behavioural decision making. These models are typically many-staged, rather than two-staged.


Economic applications

Stochastic dynamic programming is a useful tool in understanding decision making under uncertainty. The accumulation of capital stock under uncertainty is one example; it is often used by resource economists to analyze bioeconomic problems where the uncertainty enters in the form of, for example, weather (Howitt, R., Msangi, S., Reynaud, A. and K. Knapp, 2002, "Using Polynomial Approximations to Solve Stochastic Dynamic Programming Problems: or A 'Betty Crocker' Approach to SDP", University of California, Davis, Department of Agricultural and Resource Economics Working Paper).


Example: multistage portfolio optimization

The following is an example from finance of multi-stage stochastic programming. Suppose that at time t=0 we have initial capital W_0 to invest in n assets. Suppose further that we are allowed to rebalance our portfolio at times t=1,\dots,T-1 but without injecting additional cash into it. At each period t we make a decision about redistributing the current wealth W_t among the n assets. Let x_0=(x_{10},\dots,x_{n0}) be the initial amounts invested in the n assets. We require that each x_{i0} is nonnegative and that the balance equation \sum_{i=1}^{n} x_{i0}=W_0 should hold.

Consider the total returns \xi_t=(\xi_{1t},\dots,\xi_{nt}) for each period t=1,\dots,T. This forms a vector-valued random process \xi_1,\dots,\xi_T. At time period t=1, we can rebalance the portfolio by specifying the amounts x_1=(x_{11},\dots,x_{n1}) invested in the respective assets. At that time the returns in the first period have been realized, so it is reasonable to use this information in the rebalancing decision. Thus, the second-stage decisions, at time t=1, are actually functions of the realization of the random vector \xi_1, i.e., x_1=x_1(\xi_1). Similarly, at time t the decision x_t=(x_{1t},\dots,x_{nt}) is a function x_t=x_t(\xi_{[t]}) of the available information given by \xi_{[t]}=(\xi_1,\dots,\xi_t), the history of the random process up to time t. A sequence of functions x_t=x_t(\xi_{[t]}), t=0,\dots,T-1, with x_0 being constant, defines an ''implementable policy'' of the decision process. It is said that such a policy is ''feasible'' if it satisfies the model constraints with probability 1, i.e., the nonnegativity constraints x_{it}(\xi_{[t]})\geq 0, i=1,\dots,n, t=0,\dots,T-1, and the balance of wealth constraints,
: \sum_{i=1}^{n} x_{it}(\xi_{[t]}) = W_t,
where in period t=1,\dots,T the wealth W_t is given by
: W_t = \sum_{i=1}^{n} \xi_{it} x_{i,t-1}(\xi_{[t-1]}),
which depends on the realization of the random process and the decisions up to time t.

Suppose the objective is to maximize the expected utility of this wealth at the last period, that is, to consider the problem
: \max E[U(W_T)].
This is a multistage stochastic programming problem, where stages are numbered from t=0 to t=T-1. Optimization is performed over all implementable and feasible policies. To complete the problem description one also needs to define the probability distribution of the random process \xi_1,\dots,\xi_T. This can be done in various ways. For example, one can construct a particular scenario tree defining the time evolution of the process. If at every stage the random return of each asset is allowed to have two continuations, independent of other assets, then the total number of scenarios is 2^{nT}.
In order to write dynamic programming equations, consider the above multistage problem backward in time. At the last stage t=T-1, a realization \xi_{[T-1]}=(\xi_1,\dots,\xi_{T-1}) of the random process is known and x_{T-2} has been chosen. Therefore, one needs to solve the following problem
: \begin{array}{lrcl}
\max\limits_{x_{T-1}} & E[U(W_T) \mid \xi_{[T-1]}] & & \\
\text{subject to} & W_T &=& \sum_{i=1}^{n} \xi_{iT} x_{i,T-1} \\
& \sum_{i=1}^{n} x_{i,T-1} &=& W_{T-1} \\
& x_{T-1} &\geq& 0
\end{array}
where E[U(W_T) \mid \xi_{[T-1]}] denotes the conditional expectation of U(W_T) given \xi_{[T-1]}. The optimal value of the above problem depends on W_{T-1} and \xi_{[T-1]} and is denoted Q_{T-1}(W_{T-1},\xi_{[T-1]}).

Similarly, at stages t=T-2,\dots,1, one should solve the problem
: \begin{array}{lrcl}
\max\limits_{x_{t}} & E[Q_{t+1}(W_{t+1},\xi_{[t+1]}) \mid \xi_{[t]}] & & \\
\text{subject to} & W_{t+1} &=& \sum_{i=1}^{n} \xi_{i,t+1} x_{it} \\
& \sum_{i=1}^{n} x_{it} &=& W_{t} \\
& x_{t} &\geq& 0
\end{array}
whose optimal value is denoted by Q_t(W_t,\xi_{[t]}). Finally, at stage t=0, one solves the problem
: \begin{array}{lrcl}
\max\limits_{x_{0}} & E[Q_{1}(W_{1},\xi_{[1]})] & & \\
\text{subject to} & W_{1} &=& \sum_{i=1}^{n} \xi_{i1} x_{i0} \\
& \sum_{i=1}^{n} x_{i0} &=& W_{0} \\
& x_{0} &\geq& 0
\end{array}
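
A hedged sketch of solving one such last-stage problem numerically, under assumptions made only for illustration: two assets, a discrete set of conditional return scenarios for the final period, logarithmic utility, and SciPy's general-purpose solver in place of a dedicated stochastic-programming code.

```python
# Sketch: given wealth W_{T-1} and conditional return scenarios for period T,
# choose nonnegative amounts x_{T-1} summing to W_{T-1} that maximize E[U(W_T)]
# with U = log (all data made up).
import numpy as np
from scipy.optimize import minimize

W_prev = 1.0                                   # wealth W_{T-1} entering the last stage
returns = np.array([[1.25, 1.07],              # total returns xi_T per scenario (2 assets)
                    [0.85, 1.03],
                    [1.05, 1.05]])
probs = np.array([0.3, 0.3, 0.4])

def neg_expected_utility(x):
    wealth = returns @ x                       # W_T in each scenario
    wealth = np.maximum(wealth, 1e-12)         # guard against log(0) at intermediate iterates
    return -np.dot(probs, np.log(wealth))      # minus E[log W_T]

n = returns.shape[1]
budget = [{"type": "eq", "fun": lambda x: x.sum() - W_prev}]   # sum_i x_{i,T-1} = W_{T-1}
bounds = [(0.0, None)] * n                                     # x_{i,T-1} >= 0
x0 = np.full(n, W_prev / n)

res = minimize(neg_expected_utility, x0, bounds=bounds, constraints=budget)
print("optimal last-stage allocation:", np.round(res.x, 4))
print("optimal expected utility Q_{T-1}:", round(-res.fun, 6))
```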


Stagewise independent random process

For a general distribution of the process \xi_t, it may be hard to solve these dynamic programming equations. The situation simplifies dramatically if the process \xi_t is stagewise independent, i.e., \xi_t is (stochastically) independent of \xi_1,\dots,\xi_{t-1} for t=2,\dots,T. In this case, the corresponding conditional expectations become unconditional expectations, and the function Q_t(W_t), t=1,\dots,T-1, does not depend on \xi_{[t]}. That is, Q_{T-1}(W_{T-1}) is the optimal value of the problem
: \begin{array}{lrcl}
\max\limits_{x_{T-1}} & E[U(W_T)] & & \\
\text{subject to} & W_T &=& \sum_{i=1}^{n} \xi_{iT} x_{i,T-1} \\
& \sum_{i=1}^{n} x_{i,T-1} &=& W_{T-1} \\
& x_{T-1} &\geq& 0
\end{array}
and Q_t(W_t) is the optimal value of
: \begin{array}{lrcl}
\max\limits_{x_{t}} & E[Q_{t+1}(W_{t+1})] & & \\
\text{subject to} & W_{t+1} &=& \sum_{i=1}^{n} \xi_{i,t+1} x_{it} \\
& \sum_{i=1}^{n} x_{it} &=& W_{t} \\
& x_{t} &\geq& 0
\end{array}
for t=T-2,\dots,1.


Software tools


Modelling languages

All discrete stochastic programming problems can be represented with any algebraic modeling language, manually implementing explicit or implicit non-anticipativity to make sure the resulting model respects the structure of the information made available at each stage (a standard explicit formulation is sketched below). An instance of an SP problem generated by a general modelling language tends to grow quite large (linearly in the number of scenarios), and its matrix loses the structure that is intrinsic to this class of problems, which could otherwise be exploited at solution time by specific decomposition algorithms. Extensions to modelling languages specifically designed for SP are starting to appear, see:
* AIMMS - supports the definition of SP problems
* EMP SP (Extended Mathematical Programming for Stochastic Programming) - a module of GAMS created to facilitate stochastic programming (includes keywords for parametric distributions, chance constraints and risk measures such as Value at risk and Expected shortfall)
* SAMPL - a set of extensions to AMPL specifically designed to express stochastic programs (includes syntax for chance constraints, integrated chance constraints and Robust Optimization problems)
These extensions can generate the SMPS instance level format, which conveys in a non-redundant form the structure of the problem to the solver.


See also

* Correlation gap
* EMP for Stochastic Programming
* Entropic value at risk
* FortSP
* SAMPL algebraic modeling language
* Scenario optimization
* Stochastic optimization
* Chance-constrained portfolio selection




Further reading

* John R. Birge and François V. Louveaux. ''Introduction to Stochastic Programming''. Springer Verlag, New York, 1997.
* G. Ch. Pflug. ''Optimization of Stochastic Models. The Interface between Simulation and Optimization''. Kluwer, Dordrecht, 1996.
* András Prékopa. ''Stochastic Programming''. Kluwer Academic Publishers, Dordrecht, 1995.
* Andrzej Ruszczynski and Alexander Shapiro (eds.) (2003). ''Stochastic Programming''. Handbooks in Operations Research and Management Science, Vol. 10, Elsevier.
* Stein W. Wallace and William T. Ziemba (eds.) (2005). ''Applications of Stochastic Programming''. MPS-SIAM Book Series on Optimization 5.


External links


Stochastic Programming Community Home Page