Exponential Utility
In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk (sometimes referred to as uncertainty) is present, in which case expected utility is maximized. Formally, exponential utility is given by:
:u(c) = \begin{cases} (1-e^{-ac})/a & a \neq 0 \\ c & a = 0 \end{cases}
c is a variable that the economic decision-maker prefers more of, such as consumption, and a is a constant that represents the degree of risk preference (a > 0 for risk aversion, a = 0 for risk neutrality, or a < 0 for risk seeking). A standard application is the portfolio allocation problem of maximizing the expected exponential utility of final wealth ''W'' subject to
:W = x'r + (W_0 - x'k) \cdot r_f
where the prime sign indicates a vector transpose and where W_0 is initial wealth, ''x'' is a column vector of quantities placed in the ''n'' risky assets, ''r'' is a random vector of stochastic returns on the ''n'' assets, ''k'' is a vector of ones (so W_0 - x'k is the quantity placed in the risk-free asset), and ''r''''f'' is the known scalar return on the risk-free asset ...
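The following is a minimal numerical sketch, not part of the excerpt, of this allocation problem under the additional assumption that returns are jointly normal. In that case maximizing expected exponential utility reduces to maximizing E[W] - (a/2)Var(W), which gives the closed-form holdings x = Σ⁻¹(E[r] - r_f k)/a. All inputs (a, rf, mu, Sigma, W0) are illustrative values, not taken from the article.

```python
import numpy as np

# Illustrative (hypothetical) inputs: 3 risky assets with jointly normal returns.
a = 2.0                       # coefficient of absolute risk aversion (a > 0)
rf = 1.02                     # gross risk-free return
mu = np.array([1.08, 1.05, 1.10])          # expected gross returns of risky assets
Sigma = np.array([[0.040, 0.006, 0.010],   # covariance matrix of returns
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.060]])
W0 = 100.0                    # initial wealth
k = np.ones(3)

def exp_utility(w, a):
    """CARA (exponential) utility: (1 - exp(-a*w))/a, with the a = 0 limit u(w) = w."""
    return w if a == 0 else (1.0 - np.exp(-a * w)) / a

# With r ~ N(mu, Sigma), maximizing E[u(W)] is equivalent to maximizing
# E[W] - (a/2) Var(W), which gives the closed-form allocation below.
x_star = np.linalg.solve(Sigma, mu - rf * k) / a
print("optimal risky holdings:", x_star)
print("risk-free holding:", W0 - x_star @ k)

# Monte Carlo check of expected utility at the optimum (sketch only).
rng = np.random.default_rng(0)
r = rng.multivariate_normal(mu, Sigma, size=200_000)
W = r @ x_star + (W0 - x_star @ k) * rf
print("estimated E[u(W)] at x*:", exp_utility(W, a).mean())
```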
Exponential Utility Function
Exponential may refer to any of several mathematical topics related to exponentiation, including:
*Exponential function, also:
**Matrix exponential, the matrix analogue to the above
*Exponential decay, decrease at a rate proportional to value
*Exponential discounting, a specific form of the discount function, used in the analysis of choice over time
*Exponential growth, where the growth rate of a mathematical function is proportional to the function's current value
*Exponential map (Riemannian geometry), in Riemannian geometry
*Exponential map (Lie theory), in Lie theory
*Exponential notation, also known as scientific notation, or standard form
*Exponential object, in category theory
*Exponential time, in complexity theory
*in probability and statistics:
**Exponential distribution, a family of continuous probability distributions
**Exponentially modified Gaussian distribution, describes the sum of independent normal and exponential random variables
**Exponential family, a parametric ...
Expected Value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable ''X'' is often denoted by E(''X''), E[''X''], or E''X'', with E also often stylized as \mathbb{E}.

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes ''in a fair way'' between two players, who have to end th ...
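As a minimal illustration of the weighted-average definition (not part of the excerpt), the values below model an ordinary fair die; outcomes and probabilities are illustrative.

```python
# A fair die roll as a discrete random variable: each outcome is equally likely.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# Expected value as the probability-weighted average of the outcomes.
expected = sum(p * x for p, x in zip(probs, outcomes))
print(expected)  # 3.5
```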
Isoelastic Utility
In economics, the isoelastic function for utility, also known as the isoelastic utility function, or power utility function, is used to express utility in terms of consumption or some other economic variable that a decision-maker is concerned with. The isoelastic utility function is a special case of hyperbolic absolute risk aversion and at the same time is the only class of utility functions with constant relative risk aversion, which is why it is also called the CRRA utility function. It is
: u(c) = \begin{cases} \frac{c^{1-\eta}-1}{1-\eta} & \eta \ge 0, \eta \neq 1 \\ \ln(c) & \eta = 1 \end{cases}
where c is consumption, u(c) the associated utility, and \eta is a constant that is positive for risk averse agents. Since additive constant terms in objective functions do not affect optimal decisions, the term –1 in the numerator can be, and usually is, omitted (except when establishing the limiting case of \ln(c) as below). When the context involves risk, the utility function is viewed as a von Neumann–Morgen ...
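A short sketch (not from the excerpt) of the function above, including the \eta = 1 branch; the function name and the test values are illustrative.

```python
import numpy as np

def crra_utility(c, eta):
    """Isoelastic (CRRA) utility: (c**(1-eta) - 1)/(1-eta), with the eta = 1 case ln(c)."""
    c = np.asarray(c, dtype=float)
    if eta == 1:
        return np.log(c)
    return (c ** (1.0 - eta) - 1.0) / (1.0 - eta)

# Near eta = 1 the power-utility branch approaches ln(c), which is why the "-1"
# in the numerator is kept when establishing the limiting case.
print(crra_utility(2.0, 1.0))      # ln(2) ~ 0.6931
print(crra_utility(2.0, 1.0001))   # ~ 0.6931 as well
```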
Entropic Risk Measure
In financial mathematics (concerned with mathematical modeling of financial markets), the entropic risk measure is a risk measure which depends on the risk aversion of the user through the exponential utility function. It is a possible alternative to other risk measures such as value-at-risk or expected shortfall. It is a theoretically interesting measure because it provides different risk values for different individuals whose attitudes toward risk may differ. However, in practice it would be difficult to use since quantifying the risk aversion for an individual is difficult to do. The entropic risk measure is the prime example of a convex risk measure which is not coherent. Given the connection to utility functions, it can be used in utility maximization problems.

Mathematical definition

The entropic risk measure with the risk aversion parameter \theta > 0 is defined as
: \rho^{\mathrm{ent}}(X) = \frac{1}{\theta}\log\left(\mathbb{E}\left[e^{-\theta X}\right]\right) = \sup_{Q \in \mathcal{M}_1} \left\{ E^Q[-X] - \frac{1}{\theta} H(Q, P) \right\}\,, where H(Q, P) = E\left[\frac{dQ}{dP}\log\frac{dQ}{dP}\right] ...
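A minimal sketch (not from the excerpt) of estimating the first expression above from samples of a position X; the function name, the normal sample distribution, and the parameter values are illustrative assumptions.

```python
import numpy as np

def entropic_risk(samples, theta):
    """Entropic risk measure (1/theta) * log E[exp(-theta * X)], estimated from samples of X."""
    samples = np.asarray(samples, dtype=float)
    # logsumexp-style computation for numerical stability
    z = -theta * samples
    m = z.max()
    return (m + np.log(np.exp(z - m).mean())) / theta

# Hypothetical P&L samples: the larger the risk aversion theta, the larger the risk value.
rng = np.random.default_rng(1)
x = rng.normal(loc=0.05, scale=0.2, size=100_000)
print(entropic_risk(x, theta=1.0))   # close to -mean + theta*var/2 for normal X
print(entropic_risk(x, theta=5.0))
```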
Matrix Calculus
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics. Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be tre ...
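As a small illustration (not from the excerpt) of the two conventions mentioned above, the derivative of a scalar y with respect to a vector \mathbf{x} = (x_1, \dots, x_n)^{\mathsf T} can be written either as a row vector or as a column vector:
:\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_1} & \cdots & \frac{\partial y}{\partial x_n} \end{bmatrix} \quad \text{(numerator layout, a row vector)}
:\frac{\partial y}{\partial \mathbf{x}} = \begin{bmatrix} \frac{\partial y}{\partial x_1} \\ \vdots \\ \frac{\partial y}{\partial x_n} \end{bmatrix} \quad \text{(denominator layout, a column vector)}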
Covariance Matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself). Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2 \times 2 matrix would be necessary to fully characterize the two-dimensional variation. The covariance matrix of a random vector \mathbf{X} is typically denoted by \operatorname{K}_{\mathbf{X}\mathbf{X}} or \Sigma.

Definition

Throughout this article, boldfaced unsubsc ...
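A brief sketch (not part of the excerpt) estimating a 2 \times 2 covariance matrix from samples of a two-dimensional random vector; the mean and covariance used to generate the data are illustrative.

```python
import numpy as np

# 1,000 two-dimensional observations (rows) of a hypothetical random vector (x, y).
rng = np.random.default_rng(2)
data = rng.multivariate_normal(mean=[0.0, 0.0],
                               cov=[[2.0, 0.8],
                                    [0.8, 1.0]], size=1_000)

# np.cov expects variables in rows, so transpose; the result is the 2 x 2 sample
# covariance matrix: variances on the diagonal, covariances off the diagonal.
K = np.cov(data.T)
print(K)
```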
Joint Normality
In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be ''k''-variate normally distributed if every linear combination of its ''k'' components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.

Definitions

Notation and parameterization

The multivariate normal distribution of a ''k''-dimensional random vector \mathbf{X} = (X_1,\ldots,X_k)^{\mathsf T} can be written in the following notation:
: \mathbf{X}\ \sim\ \mathcal{N}(\boldsymbol\mu,\, \boldsymbol\Sigma),
or to make it explicitly known that ''X'' i ...
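A small sketch (not from the excerpt) checking the defining property quoted above on simulated data; the mean vector, covariance matrix, and combination weights are illustrative.

```python
import numpy as np

# Draw samples from a 3-variate normal with hypothetical mean vector and covariance.
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])
rng = np.random.default_rng(3)
X = rng.multivariate_normal(mu, Sigma, size=50_000)

# Defining property: any linear combination a'X is univariate normal,
# with mean a'mu and variance a' Sigma a.
a = np.array([2.0, -1.0, 0.5])
y = X @ a
print(y.mean(), a @ mu)           # sample mean vs theoretical a'mu
print(y.var(), a @ Sigma @ a)     # sample variance vs theoretical a'Sigma a
```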
Stochastic Process
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, cryptography and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, ...
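A minimal sketch (not part of the excerpt) of the Wiener process mentioned above, simulated on a discrete time grid; the horizon and step count are illustrative.

```python
import numpy as np

# A discretized Wiener (Brownian motion) process: the cumulative sum of
# independent normal increments whose variance equals the time step.
rng = np.random.default_rng(4)
T, n = 1.0, 1_000
dt = T / n
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
W = np.concatenate([[0.0], np.cumsum(increments)])  # W[0] = 0 by convention
print(W[-1])  # value at time T; over many runs it is N(0, T)-distributed
```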
Random Vector
In probability and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of ''an unspecified person'' from within a group would be a random vector. Normally each element of a random vector is a real number. Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, stochastic process, etc. More formally, a multivariate random variable is a column vector \mathbf{X} = (X_1,\dots,X_n)^{\mathsf T} (or its ...
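A small sketch (not part of the excerpt) mirroring the age/height/weight example above with simulated realizations of such a random vector; the distributions and sample size are illustrative.

```python
import numpy as np

# A hypothetical random vector (age, height_cm, weight_kg) for an unspecified person;
# each row of `samples` below is one joint realization of the three components.
rng = np.random.default_rng(5)
ages = rng.integers(18, 80, size=1_000)
heights = rng.normal(170, 10, size=1_000)
weights = rng.normal(70, 12, size=1_000)

samples = np.column_stack([ages, heights, weights])
mean_vector = samples.mean(axis=0)   # componentwise expected value
print(mean_vector)
```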
Transpose
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of a matrix A by producing another matrix, often denoted by A^{\mathsf T} (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation R^{\mathsf T}.

Transpose of a matrix

Definition

The transpose of a matrix A, denoted by A^{\mathsf T} (among other notations), may be constructed by any one of the following methods:
# Reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^{\mathsf T}
# Write the rows of A as the columns of A^{\mathsf T}
# Write the columns of A as the rows of A^{\mathsf T}
Formally, the ''i''-th row, ''j''-th column element of A^{\mathsf T} is the ''j''-th row, ''i''-th column element of A:
:\left[\mathbf{A}^\operatorname{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.
If A is an m \times n matrix, then A^{\mathsf T} is an n \times m matrix. In the case of square matrices, ...
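A brief sketch (not part of the excerpt) of the index rule above on a concrete matrix; the matrix values are illustrative.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # a 2 x 3 matrix

At = A.T                        # its transpose, a 3 x 2 matrix
print(At)
# Element check: the (i, j) entry of A.T equals the (j, i) entry of A.
print(At[2, 1] == A[1, 2])      # True
```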
Column Vector
In linear algebra, a column vector with m elements is an m \times 1 matrix consisting of a single column of m entries, for example,
:\boldsymbol{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.
Similarly, a row vector is a 1 \times n matrix for some n, consisting of a single row of n entries,
:\boldsymbol{a} = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}.
(Throughout this article, boldface is used for both row and column vectors.) The transpose (indicated by T) of any row vector is a column vector, and the transpose of any column vector is a row vector:
:\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}
and
:\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}.
The set of all row vectors with ''n'' entries in a given field (such as the real numbers) forms an ''n''-dimensional vector space; similarly, the set of all column vectors with ''m'' entries forms an ''m''-dimensional vector space. The space of row vectors with ''n'' entries can b ...
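A short sketch (not part of the excerpt) of the m \times 1 versus 1 \times n shapes and the transpose relation above; the entries are illustrative.

```python
import numpy as np

col = np.array([[1.0], [2.0], [3.0]])   # a 3 x 1 column vector
row = col.T                              # its transpose, a 1 x 3 row vector

print(col.shape, row.shape)              # (3, 1) (1, 3)
print(row.T)                             # transposing again recovers the column vector
```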
Modern Portfolio Theory
Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. It uses the variance of asset prices as a proxy for risk. Economist Harry Markowitz introduced MPT in a 1952 essay, for which he was later awarded a Nobel Memorial Prize in Economic Sciences; see Markowitz model.

Mathematical model

Risk and expected return

MPT assumes that investors are risk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensat ...
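A minimal sketch (not part of the excerpt) of the mean-variance bookkeeping behind MPT: portfolio expected return w'mu and variance w'Σw for given weights; the expected returns, covariance matrix, and weights are illustrative.

```python
import numpy as np

# Hypothetical expected returns and covariance matrix for three assets.
mu = np.array([0.08, 0.05, 0.12])
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.090]])

w = np.array([0.5, 0.3, 0.2])            # portfolio weights summing to 1

# Portfolio expected return w'mu and variance w' Sigma w: the variance depends on
# the covariances, which is why an asset's risk is judged by its contribution to
# the portfolio rather than in isolation.
expected_return = w @ mu
variance = w @ Sigma @ w
print(expected_return, np.sqrt(variance))
```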