In mathematics, the Jack function is a generalization of the Jack polynomial, introduced by Henry Jack. The Jack polynomial is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Heckman–Opdam polynomials and Macdonald polynomials.


Definition

The Jack function J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m) of an integer partition \kappa, parameter \alpha, and arguments x_1,x_2,\ldots,x_m can be recursively defined as follows:

For ''m''=1:

: J_{(k)}^{(\alpha)}(x_1)=x_1^k(1+\alpha)(1+2\alpha)\cdots (1+(k-1)\alpha)

For ''m''>1:

: J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m)=\sum_\mu J_\mu^{(\alpha)}(x_1,x_2,\ldots,x_{m-1})\, x_m^{|\kappa/\mu|}\, \beta_{\kappa\mu},

where the summation is over all partitions \mu such that the skew partition \kappa/\mu is a horizontal strip, namely

: \kappa_1\ge\mu_1\ge\kappa_2\ge\mu_2\ge\cdots\ge\kappa_{n-1}\ge\mu_{n-1}\ge\kappa_n (\mu_n must be zero or otherwise J_\mu(x_1,\ldots,x_{m-1})=0) and

: \beta_{\kappa\mu}=\frac{\prod_{(i,j)\in\kappa} B_{\kappa\mu}^\kappa(i,j)}{\prod_{(i,j)\in\mu} B_{\kappa\mu}^\mu(i,j)},

where B_{\kappa\mu}^\nu(i,j) equals \nu_j'-i+\alpha(\nu_i-j+1) if \kappa_j'=\mu_j' and \nu_j'-i+1+\alpha(\nu_i-j) otherwise. The expressions \kappa' and \mu' refer to the conjugate partitions of \kappa and \mu, respectively. The notation (i,j)\in\kappa means that the product is taken over all coordinates (i,j) of boxes in the Young diagram of the partition \kappa.
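The recursion above can be transcribed directly into code. The following Python sketch (ours, not from the references; function names are our own) evaluates J_\kappa^{(\alpha)} at numeric arguments using exact rational arithmetic. It is a minimal illustration of the definition, not an efficient implementation:

```python
from fractions import Fraction
from itertools import product

def conjugate(kappa):
    """Conjugate (transposed) partition: kappa'_j = #{i : kappa_i >= j}."""
    return [sum(1 for k in kappa if k >= j) for j in range(1, kappa[0] + 1)] if kappa else []

def boxes(kappa):
    """1-indexed coordinates (i, j) of the boxes of the Young diagram."""
    return [(i, j) for i, k in enumerate(kappa, 1) for j in range(1, k + 1)]

def part(p, i):
    """p_i, with the convention p_i = 0 beyond the last part."""
    return p[i - 1] if i <= len(p) else 0

def B(kappa, mu, nu, i, j, alpha):
    """The factor B_{kappa mu}^nu(i, j) from the recursion."""
    kc, mc, nc = conjugate(kappa), conjugate(mu), conjugate(nu)
    if part(kc, j) == part(mc, j):
        return part(nc, j) - i + alpha * (part(nu, i) - j + 1)
    return part(nc, j) - i + 1 + alpha * (part(nu, i) - j)

def horizontal_strips(kappa):
    """All mu with kappa/mu a horizontal strip: kappa_i >= mu_i >= kappa_{i+1}."""
    n = len(kappa)
    ranges = [range(kappa[i], (kappa[i + 1] if i + 1 < n else 0) - 1, -1)
              for i in range(n)]
    for mu in product(*ranges):
        yield [m for m in mu if m > 0]

def jack_J(kappa, xs, alpha):
    """Jack polynomial J_kappa^(alpha)(x_1, ..., x_m), J normalization."""
    kappa = [k for k in kappa if k > 0]
    if not kappa:
        return Fraction(1)
    if len(kappa) > len(xs):
        return Fraction(0)                      # more parts than variables
    if len(xs) == 1:                            # base case m = 1
        k, val = kappa[0], Fraction(xs[0]) ** kappa[0]
        for t in range(1, k):
            val *= 1 + alpha * t
        return val
    total = Fraction(0)
    for mu in horizontal_strips(kappa):
        beta = Fraction(1)
        for (i, j) in boxes(kappa):
            beta *= B(kappa, mu, kappa, i, j, alpha)
        for (i, j) in boxes(mu):
            beta /= B(kappa, mu, mu, i, j, alpha)
        total += (jack_J(mu, xs[:-1], alpha)
                  * Fraction(xs[-1]) ** (sum(kappa) - sum(mu)) * beta)
    return total
```

As a sanity check, J_{(2)}^{(\alpha)} = (1+\alpha)(x_1^2+x_2^2) + 2x_1x_2 and J_{(1,1)}^{(\alpha)} = 2x_1x_2 for every \alpha, which the sketch reproduces at, say, (x_1,x_2)=(1,2).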


Combinatorial formula

In 1997, F. Knop and S. Sahi gave a purely combinatorial formula for the Jack polynomials J_\lambda^{(\alpha)} in ''n'' variables:

: J_\lambda^{(\alpha)} = \sum_T d_T(\alpha) \prod_{s\in T} x_{T(s)}.

The sum is taken over all ''admissible'' tableaux of shape \lambda, and

: d_T(\alpha) = \prod_{s\in T\text{ critical}} d_\lambda(\alpha)(s)

with

: d_\lambda(\alpha)(s) = \alpha(a_\lambda(s) +1) + (l_\lambda(s) + 1),

where a_\lambda(s) and l_\lambda(s) denote the arm and leg length of the box ''s''. An ''admissible'' tableau of shape \lambda is a filling of the Young diagram \lambda with numbers 1,2,…,''n'' such that for any box (''i'',''j'') in the tableau,

* T(i,j) \neq T(i',j) whenever i'>i, and
* T(i,j) \neq T(i',j-1) whenever j>1 and i'<i.

A box s = (i,j) \in \lambda is ''critical'' for the tableau ''T'' if j > 1 and T(i,j)=T(i,j-1). This result can be seen as a special case of the more general combinatorial formula for Macdonald polynomials.
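For small shapes the Knop–Sahi formula can be evaluated by brute force, enumerating every filling and discarding the inadmissible ones. The sketch below (ours; exponential in the number of boxes, so only for toy examples) does exactly that:

```python
from fractions import Fraction
from itertools import product

def jack_knop_sahi(shape, xs, alpha):
    """Evaluate J_lambda^(alpha)(x_1..x_n) by enumerating all admissible
    tableaux in the Knop-Sahi combinatorial formula."""
    n = len(xs)
    cells = [(i, j) for i, r in enumerate(shape, 1) for j in range(1, r + 1)]
    conj = [sum(1 for r in shape if r >= j) for j in range(1, shape[0] + 1)]
    arm = {(i, j): shape[i - 1] - j for (i, j) in cells}   # boxes to the right
    leg = {(i, j): conj[j - 1] - i for (i, j) in cells}    # boxes below
    total = Fraction(0)
    for vals in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        # admissibility: T(i,j) != T(i',j) whenever i' > i ...
        if any(T[i, j] == T[k, j] for (i, j) in cells for (k, l) in cells
               if l == j and k > i):
            continue
        # ... and T(i,j) != T(i',j-1) whenever j > 1 and i' < i
        if any(j > 1 and (k, j - 1) in T and T[i, j] == T[k, j - 1]
               for (i, j) in cells for k in range(1, i)):
            continue
        d_T = Fraction(1)
        for (i, j) in cells:
            if j > 1 and T[i, j] == T[i, j - 1]:           # critical box
                d_T *= alpha * (arm[i, j] + 1) + (leg[i, j] + 1)
        mono = Fraction(1)
        for (i, j) in cells:
            mono *= Fraction(xs[T[i, j] - 1])
        total += d_T * mono
    return total
```

For the shape (2) with two variables all four fillings are admissible and the formula recovers (1+\alpha)(x_1^2+x_2^2) + 2x_1x_2, in agreement with the recursive definition.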


C normalization

The Jack functions form an orthogonal basis in a space of symmetric polynomials, with inner product:

: \langle f,g\rangle = \int_{[0,2\pi]^n} f \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right ) \overline{g \left (e^{i\theta_1},\ldots,e^{i\theta_n} \right )} \prod_{1\le j<k\le n} \left | e^{i\theta_j}-e^{i\theta_k} \right |^{2/\alpha} d\theta_1\cdots d\theta_n.

This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the ''J'' normalization. The ''C'' normalization is defined as

: C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = \frac{\alpha^{|\kappa|}(|\kappa|)!}{j_\kappa} J_\kappa^{(\alpha)}(x_1,\ldots,x_n),

where

: j_\kappa=\prod_{(i,j)\in\kappa} \left (\kappa_j'-i+\alpha \left (\kappa_i-j+1 \right ) \right ) \left (\kappa_j'-i+1+\alpha \left (\kappa_i-j \right ) \right ).

For \alpha=2, C_\kappa^{(2)}(x_1,\ldots,x_n) is often denoted by C_\kappa(x_1,\ldots,x_n) and called the zonal polynomial.
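A standard identity, and the reason the ''C'' normalization is used for hypergeometric functions of a matrix argument, is \sum_{\kappa\vdash k} C_\kappa^{(\alpha)}(x_1,\ldots,x_n) = (x_1+\cdots+x_n)^k. The sketch below (ours; it reuses a brute-force Knop–Sahi evaluator for the ''J'' normalization) computes j_\kappa and C_\kappa^{(\alpha)} and can be checked against that identity:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def jack_J_bf(shape, xs, alpha):
    """J normalization via brute-force enumeration of Knop-Sahi tableaux."""
    n, total = len(xs), Fraction(0)
    cells = [(i, j) for i, r in enumerate(shape, 1) for j in range(1, r + 1)]
    conj = [sum(1 for r in shape if r >= j) for j in range(1, shape[0] + 1)]
    for vals in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if any(T[i, j] == T[k, j] for (i, j) in cells for (k, l) in cells
               if l == j and k > i):
            continue                       # repeated entry lower in a column
        if any(j > 1 and (k, j - 1) in T and T[i, j] == T[k, j - 1]
               for (i, j) in cells for k in range(1, i)):
            continue                       # clash with the column to the left
        w = Fraction(1)
        for (i, j) in cells:
            if j > 1 and T[i, j] == T[i, j - 1]:           # critical box
                w *= alpha * (shape[i - 1] - j + 1) + (conj[j - 1] - i + 1)
            w *= Fraction(xs[T[i, j] - 1])
        total += w
    return total

def j_kappa(kappa, alpha):
    """The constant j_kappa from the C normalization."""
    conj = [sum(1 for r in kappa if r >= j) for j in range(1, kappa[0] + 1)]
    out = Fraction(1)
    for i, r in enumerate(kappa, 1):
        for j in range(1, r + 1):
            out *= ((conj[j - 1] - i + alpha * (r - j + 1))
                    * (conj[j - 1] - i + 1 + alpha * (r - j)))
    return out

def jack_C(kappa, xs, alpha):
    """C normalization: C = alpha^|kappa| * |kappa|! / j_kappa * J."""
    k = sum(kappa)
    return alpha ** k * factorial(k) / j_kappa(kappa, alpha) * jack_J_bf(kappa, xs, alpha)
```

For instance, summing jack_C over the partitions (2) and (1,1) at (x_1,x_2)=(1,2) gives (1+2)^2 = 9 for any \alpha.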


P normalization

The ''P'' normalization is given by the identity J_\lambda = H'_\lambda P_\lambda, where

: H'_\lambda = \prod_{s\in\lambda} (\alpha a_\lambda(s) + l_\lambda(s) + 1)

and a_\lambda and l_\lambda denote the arm and leg length respectively. Therefore, for \alpha=1, P_\lambda is the usual Schur function. Similar to Schur polynomials, P_\lambda can be expressed as a sum over Young tableaux. However, one needs to add an extra weight to each tableau that depends on the parameter \alpha. Thus, a formula for the Jack function P_\lambda is given by

: P_\lambda = \sum_T \psi_T(\alpha) \prod_{s\in\lambda} x_{T(s)},

where the sum is taken over all tableaux of shape \lambda, and T(s) denotes the entry in box ''s'' of ''T''. The weight \psi_T(\alpha) can be defined in the following fashion: each tableau ''T'' of shape \lambda can be interpreted as a sequence of partitions

: \emptyset = \nu_0 \to \nu_1 \to \dots \to \nu_n = \lambda,

where \nu_i/\nu_{i-1} defines the skew shape with content ''i'' in ''T''. Then

: \psi_T(\alpha) = \prod_i \psi_{\nu_i/\nu_{i-1}}(\alpha),

where

: \psi_{\lambda/\mu}(\alpha) = \prod_{s} \frac{\alpha a_\mu(s)+l_\mu(s)+1}{\alpha a_\mu(s)+l_\mu(s)+\alpha}\cdot \frac{\alpha a_\lambda(s)+l_\lambda(s)+\alpha}{\alpha a_\lambda(s)+l_\lambda(s)+1}

and the product is taken only over all boxes ''s'' in \lambda such that ''s'' has a box from \lambda/\mu in the same row, but ''not'' in the same column.
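The tableau sum above is equivalent to peeling off one horizontal strip per variable, which is how the following Python sketch (ours; names hypothetical) evaluates P_\lambda, together with the constant H'_\lambda relating it to the ''J'' normalization:

```python
from fractions import Fraction
from itertools import product

def part(p, i):
    """p_i, with p_i = 0 beyond the last part."""
    return p[i - 1] if i <= len(p) else 0

def conj(p):
    """Conjugate partition."""
    return [sum(1 for r in p if r >= j) for j in range(1, p[0] + 1)] if p else []

def psi(lam, mu, alpha):
    """psi_{lam/mu}(alpha): product over boxes s of lam sharing a row,
    but not a column, with a box of the horizontal strip lam/mu."""
    strip_cols = {j for i in range(1, len(lam) + 1)
                  for j in range(part(mu, i) + 1, part(lam, i) + 1)}
    lc, mc = conj(lam), conj(mu)
    out = Fraction(1)
    for i in range(1, len(lam) + 1):
        if part(lam, i) == part(mu, i):
            continue                           # no strip box in this row
        for j in range(1, part(lam, i) + 1):
            if j in strip_cols:
                continue                       # s must avoid the strip's columns
            am, lm = part(mu, i) - j, part(mc, j) - i      # arm/leg in mu
            al, ll = part(lam, i) - j, part(lc, j) - i     # arm/leg in lam
            out *= (alpha * am + lm + 1) / (alpha * am + lm + alpha)
            out *= (alpha * al + ll + alpha) / (alpha * al + ll + 1)
    return out

def jack_P(lam, xs, alpha):
    """P_lambda(x_1..x_n), peeling one horizontal strip per variable."""
    lam = [l for l in lam if l > 0]
    if not lam:
        return Fraction(1)
    if not xs:
        return Fraction(0)
    n = len(lam)
    ranges = [range(lam[i], (lam[i + 1] if i + 1 < n else 0) - 1, -1)
              for i in range(n)]
    total = Fraction(0)
    for mu in product(*ranges):                # all mu with lam/mu a horiz. strip
        mu = [m for m in mu if m > 0]
        total += (psi(lam, mu, alpha) * jack_P(mu, xs[:-1], alpha)
                  * Fraction(xs[-1]) ** (sum(lam) - sum(mu)))
    return total

def H_prime(lam, alpha):
    """H'_lam = prod over boxes of (alpha*arm + leg + 1), so J = H' * P."""
    lc, out = conj(lam), Fraction(1)
    for i, r in enumerate(lam, 1):
        for j in range(1, r + 1):
            out *= alpha * (r - j) + (lc[j - 1] - i) + 1
    return out
```

At \alpha=1 this reduces to the Schur polynomial, e.g. jack_P([2, 1], [1, 2], 1) = s_{(2,1)}(1,2) = 6, and H_prime * jack_P reproduces the ''J'' normalization.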


Connection with the Schur polynomial

When \alpha=1 the Jack function is a scalar multiple of the Schur polynomial:

: J^{(1)}_\kappa(x_1,x_2,\ldots,x_n) = H_\kappa s_\kappa(x_1,x_2,\ldots,x_n),

where

: H_\kappa=\prod_{(i,j)\in\kappa} h_\kappa(i,j)= \prod_{(i,j)\in\kappa} (\kappa_i+\kappa_j'-i-j+1)

is the product of all hook lengths of \kappa.
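The hook product H_\kappa is easy to compute directly; a short sketch (ours), consistent with the ''P'' normalization above since H'_\lambda at \alpha=1 equals H_\lambda:

```python
def hook_product(kappa):
    """H_kappa: product of hook lengths h(i,j) = kappa_i + kappa'_j - i - j + 1."""
    conj = [sum(1 for r in kappa if r >= j) for j in range(1, kappa[0] + 1)]
    H = 1
    for i, r in enumerate(kappa, 1):
        for j in range(1, r + 1):
            H *= r + conj[j - 1] - i - j + 1
    return H
```

For example hook_product([2, 1]) is 3, matching J_{(2,1)}^{(1)} = 3\, s_{(2,1)}.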


Properties

If the partition has more parts than the number of variables, then the Jack function is 0:

: J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m)=0 \quad \text{if } \kappa_{m+1}>0.


Matrix argument

In some texts, especially in random matrix theory, authors have found it more convenient to use a matrix argument in the Jack function. The connection is simple. If X is a matrix with eigenvalues x_1,x_2,\ldots,x_m, then

: J_\kappa^{(\alpha)}(X)=J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_m).
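As an illustration (ours; the helper is hypothetical), for a symmetric 2×2 matrix the eigenvalues follow from the quadratic formula, and the one- and two-box Jack functions have closed forms valid for every \alpha: J_{(1)}(X) = x_1+x_2 = \operatorname{tr} X and J_{(1,1)}(X) = 2x_1x_2 = 2\det X.

```python
from math import sqrt, isclose

def jack_of_matrix_2x2(a, b, d):
    """Evaluate J_(1) and J_(1,1) at a symmetric 2x2 matrix X = [[a, b], [b, d]]
    through its eigenvalues; both closed forms hold for every alpha."""
    tr, det = a + d, a * d - b * b
    disc = sqrt(tr * tr - 4 * det)          # real, since X is symmetric
    x1, x2 = (tr + disc) / 2, (tr - disc) / 2
    return {"J_(1)": x1 + x2, "J_(1,1)": 2 * x1 * x2}
```

For X = [[2, 1], [1, 2]] the eigenvalues are 3 and 1, so J_{(1)}(X) = 4 = tr X and J_{(1,1)}(X) = 6 = 2 det X.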


References

* Stanley, Richard P. (1989), "Some combinatorial properties of Jack symmetric functions", ''Advances in Mathematics'' '''77''' (1): 76–115, doi:10.1016/0001-8708(89)90015-7, MR 1014073.


External links


* Software for computing the Jack function, by Plamen Koev and Alan Edelman.
* SAGE documentation for Jack Symmetric Functions: http://www.sagemath.org/doc/reference/sage/combinat/sf/jack.html