Darwin–Fowler method

In statistical mechanics, the Darwin–Fowler method is used for deriving the distribution functions with mean probability. It was developed by
Charles Galton Darwin
and Ralph H. Fowler in 1922–1923. Distribution functions are used in statistical physics to estimate the mean number of particles occupying an energy level (hence also called occupation numbers). These distributions are mostly derived as those numbers for which the system under consideration is in its state of maximum probability. But one really requires average numbers. These average numbers can be obtained by the Darwin–Fowler method. Of course, for systems in the
thermodynamic limit
(large number of particles), as in statistical mechanics, the results are the same as with maximization.


Darwin–Fowler method

In most texts on statistical mechanics the statistical distribution functions f (in Maxwell–Boltzmann statistics, Bose–Einstein statistics, Fermi–Dirac statistics) are derived by determining those for which the system is in its state of maximum probability. But one really requires those with average or mean probability, although, of course, the results are usually the same for systems with a huge number of elements, as is the case in statistical mechanics. The method for deriving the distribution functions with mean probability was developed by C. G. Darwin and R. H. Fowler (Phil. Mag. 44 (1922) 450–479, 823–842) and is therefore known as the Darwin–Fowler method. This method is the most reliable general procedure for deriving statistical distribution functions. Since the method employs a selector variable (a factor introduced for each element to permit a counting procedure), it is also known as the Darwin–Fowler method of selector variables. Note that a distribution function is not the same as the probability; cf. the Maxwell–Boltzmann distribution, the Bose–Einstein distribution and the Fermi–Dirac distribution. Also note that the distribution function f_i, which is a measure of the fraction of those states actually occupied by elements, is given by f_i = n_i/g_i, or n_i = f_i g_i, where g_i is the degeneracy of energy level i of energy \varepsilon_i and n_i is the number of elements occupying this level (e.g. in Fermi–Dirac statistics 0 or 1). The total energy E and the total number of elements N are then given by E = \sum_i n_i\varepsilon_i and N = \sum_i n_i. The Darwin–Fowler method has been treated in the texts of E. Schrödinger, of R. H. Fowler, of Fowler and E. A. Guggenheim, of K. Huang, and of H. J. W. Müller-Kirsten. The method is also discussed and used for the derivation of Bose–Einstein condensation in the book of R. B. Dingle (Asymptotic Expansions: Their Derivation and Interpretation, Academic Press (1973), pp. 267–271).
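
As a concrete illustration of these definitions, the following short sketch (an added example with arbitrary toy values for g_i, n_i and \varepsilon_i, not taken from the sources above) computes f_i, N and E for a three-level scheme:

# Toy illustration of occupation numbers n_i, degeneracies g_i and the
# distribution function f_i = n_i/g_i, together with E and N.
g   = [2, 4, 6]            # degeneracies g_i of levels i = 0, 1, 2
n   = [2, 3, 1]            # occupation numbers n_i (here 0 <= n_i <= g_i)
eps = [0.0, 1.0, 2.0]      # level energies \varepsilon_i (arbitrary units)

f = [ni / gi for ni, gi in zip(n, g)]          # f_i = n_i / g_i
N = sum(n)                                     # N = sum_i n_i          -> 6
E = sum(ni * ei for ni, ei in zip(n, eps))     # E = sum_i n_i eps_i    -> 5.0
print(f, N, E)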


Classical statistics

For N = \sum_i n_i independent elements with n_i elements on the level with energy \varepsilon_i, and E = \sum_i n_i\varepsilon_i, for a canonical system in a heat bath at temperature T we set

Z = \sum_{\text{arrangements}} e^{-E/kT} = \sum_{\text{arrangements}} \prod_i z_i^{n_i}, \qquad z_i = e^{-\varepsilon_i/kT}.

The average over all arrangements is the mean occupation number

(n_j)_{\text{av}} = \frac{\sum_{\text{arr.}} n_j \prod_i z_i^{n_i}}{\sum_{\text{arr.}} \prod_i z_i^{n_i}} = z_j\frac{\partial}{\partial z_j}\ln Z.

Insert a selector variable \omega by setting

Z_\omega = \sum \prod_i (\omega z_i)^{n_i}.

In classical statistics the N elements are (a) distinguishable and can be arranged in packets of n_i elements on level \varepsilon_i, the number of such arrangements being

\frac{N!}{\prod_i n_i!},

so that in this case

Z_\omega = N!\sum_{n_1,n_2,\ldots}\prod_i\frac{(\omega z_i)^{n_i}}{n_i!}.

Allowing for (b) the degeneracy g_i of level \varepsilon_i this expression becomes

Z_\omega = N!\prod_{i=1}^{\infty}\left(\sum_{n_i=0}^{\infty}\frac{(g_i\omega z_i)^{n_i}}{n_i!}\right) = N!\,e^{\omega\sum_i g_i z_i}.

The selector variable \omega allows one to pick out the coefficient of \omega^N in Z_\omega, which is Z. Thus

Z = \left(\sum_i g_i z_i\right)^N,

and hence

(n_j)_{\text{av}} = z_j\frac{\partial}{\partial z_j}\ln Z = N\,\frac{g_j z_j}{\sum_i g_i z_i}.

This result, which agrees with the most probable value obtained by maximization, does not involve a single approximation and is therefore exact; it thus demonstrates the power of the Darwin–Fowler method.
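
This exactness can be checked directly on a small system. The following sketch (an added illustration, not part of the original derivation; the level energies, degeneracies, particle number and temperature are arbitrary toy values) compares the Darwin–Fowler result N g_j z_j/\sum_i g_i z_i with a brute-force Boltzmann-weighted average over all assignments of N distinguishable elements to the individual states:

# Brute-force check of the exact classical (Maxwell-Boltzmann) result
# (n_j)_av = N g_j z_j / sum_i g_i z_i.  All values below are arbitrary
# toy data chosen for this sketch.
from itertools import product
from math import exp, isclose

eps = [0.0, 1.0, 2.5]      # level energies \varepsilon_i (arbitrary units)
g   = [1, 2, 3]            # degeneracies g_i
N   = 3                    # number of distinguishable elements
kT  = 1.0
z   = [exp(-e / kT) for e in eps]            # z_i = e^{-\varepsilon_i/kT}

# Each distinguishable element occupies one of the g_i states of some level i;
# a configuration carries the Boltzmann weight prod_i z_i^{n_i}.
states = [i for i, gi in enumerate(g) for _ in range(gi)]   # one entry per state
Zsum = 0.0
occ  = [0.0] * len(eps)
for config in product(states, repeat=N):
    w = 1.0
    for level in config:
        w *= z[level]
    Zsum += w
    for j in range(len(eps)):
        occ[j] += w * config.count(j)

brute   = [o / Zsum for o in occ]
S       = sum(gi * zi for gi, zi in zip(g, z))
formula = [N * g[j] * z[j] / S for j in range(len(eps))]
assert all(isclose(a, b, rel_tol=1e-12) for a, b in zip(brute, formula))
print(brute, formula)

Because each distinguishable element selects its state independently, the enumeration reproduces the formula to machine precision, consistent with the statement that no approximation is involved.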


Quantum statistics

We have as above

Z_\omega = \sum \prod_i (\omega z_i)^{n_i}, \qquad z_i = e^{-\varepsilon_i/kT},

where n_i is the number of elements in energy level \varepsilon_i. Since in quantum statistics elements are indistinguishable, no preliminary calculation of the number of ways of dividing the elements into packets n_1, n_2, n_3, \ldots is required. Therefore the sum \sum refers only to the sum over possible values of n_i. In the case of Fermi–Dirac statistics we have

n_i = 0 \quad \text{or} \quad n_i = 1

per state. There are g_i states for energy level \varepsilon_i. Hence we have

Z_\omega = (1+\omega z_1)^{g_1}(1+\omega z_2)^{g_2}\cdots = \prod_i(1+\omega z_i)^{g_i}.

In the case of
Bose–Einstein statistics
we have

n_i = 0, 1, 2, 3, \ldots, \infty.

By the same procedure as before we obtain in the present case

Z_\omega = (1+\omega z_1+(\omega z_1)^2+(\omega z_1)^3+\cdots)^{g_1}(1+\omega z_2+(\omega z_2)^2+\cdots)^{g_2}\cdots.

But

1+\omega z_1+(\omega z_1)^2+\cdots = \frac{1}{1-\omega z_1}.

Therefore

Z_\omega = \prod_i(1-\omega z_i)^{-g_i}.

Summarizing both cases and recalling the definition of Z, we have that Z is the coefficient of \omega^N in

Z_\omega = \prod_i(1\pm\omega z_i)^{\pm g_i},

where the upper signs apply to Fermi–Dirac statistics and the lower signs to Bose–Einstein statistics. Next we have to evaluate the coefficient of \omega^N in Z_\omega. In the case of a function \phi(\omega) which can be expanded as

\phi(\omega) = a_0 + a_1\omega + a_2\omega^2 + \cdots,

the coefficient of \omega^N is, with the help of the residue theorem of Cauchy,

a_N = \frac{1}{2\pi i}\oint \frac{\phi(\omega)\,d\omega}{\omega^{N+1}}.

We note that similarly the coefficient Z in the above can be obtained as

Z = \frac{1}{2\pi i}\oint \frac{Z_\omega}{\omega^{N+1}}\,d\omega \equiv \frac{1}{2\pi i}\int e^{f(\omega)}\,d\omega,

where

f(\omega) = \pm\sum_i g_i\ln(1\pm\omega z_i) - (N+1)\ln\omega.

Differentiating one obtains

f'(\omega) = \frac{1}{\omega}\left[\sum_i\frac{g_i\omega z_i}{1\pm\omega z_i} - (N+1)\right]

and

f''(\omega) = \frac{N+1}{\omega^2} \mp \frac{1}{\omega^2}\sum_i\frac{g_i(\omega z_i)^2}{(1\pm\omega z_i)^2}.

One now evaluates the first and second derivatives of f(\omega) at the stationary point \omega_0 at which f'(\omega_0) = 0. This method of evaluation of Z around the saddle point \omega_0 is known as the
method of steepest descent
. One then obtains

Z = \frac{e^{f(\omega_0)}}{\sqrt{2\pi f''(\omega_0)}}.

We have f'(\omega_0) = 0 and hence

N + 1 = \sum_i\frac{g_i\omega_0 z_i}{1\pm\omega_0 z_i}

(the +1 being negligible since N is large). We shall see in a moment that this last relation is simply the formula

N = \sum_i n_i.

We obtain the mean occupation number (n_j)_{\text{av}} by evaluating

(n_j)_{\text{av}} = z_j\frac{\partial}{\partial z_j}\ln Z = \frac{g_j\omega_0 z_j}{1\pm\omega_0 z_j} = \frac{g_j}{e^{(\varepsilon_j-\mu)/kT}\pm 1}, \qquad \omega_0 = e^{\mu/kT},

\mu being the chemical potential. This expression gives the mean number of elements of the total of N in the volume V which occupy at temperature T the one-particle level \varepsilon_j with degeneracy g_j (see e.g. a priori probability). For the relation to be reliable one should check that higher-order contributions are initially decreasing in magnitude, so that the expansion around the saddle point does indeed yield an asymptotic expansion.
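
The following sketch illustrates these last steps numerically (an added example, not part of the original text; the level energies, degeneracies, particle number and temperature are arbitrary toy values). It solves the saddle-point condition N = \sum_i g_i\omega_0 z_i/(1\pm\omega_0 z_i) for \omega_0 by bisection and compares the resulting mean occupation numbers with the exact values obtained by enumerating the coefficient of \omega^N in Z_\omega:

# Darwin-Fowler saddle-point occupation numbers versus exact enumeration
# for Fermi-Dirac (sign = +1) and Bose-Einstein (sign = -1) statistics.
# All level data are arbitrary toy values chosen for this sketch.
from itertools import product
from math import comb, exp

eps = [0.0, 0.5, 1.0, 2.0]     # level energies \varepsilon_i
g   = [2, 3, 3, 4]             # degeneracies g_i
N   = 5                        # total number of elements
kT  = 1.0
z   = [exp(-e / kT) for e in eps]    # z_i = e^{-\varepsilon_i/kT}

def exact_mean(sign):
    """Exact (n_j)_av from the coefficient of omega^N in prod_i (1 +/- omega z_i)^(+/- g_i)."""
    # weight of an occupation set {n_i}: prod_i C(g_i, n_i) z_i^n_i         (Fermi-Dirac)
    #                                    prod_i C(n_i+g_i-1, n_i) z_i^n_i   (Bose-Einstein)
    ways = (lambda n, gi: comb(gi, n)) if sign == 1 else (lambda n, gi: comb(n + gi - 1, n))
    nmax = [gi if sign == 1 else N for gi in g]
    Z, occ = 0.0, [0.0] * len(g)
    for ns in product(*(range(m + 1) for m in nmax)):
        if sum(ns) != N:
            continue
        w = 1.0
        for n, gi, zi in zip(ns, g, z):
            w *= ways(n, gi) * zi ** n
        Z += w
        for j, n in enumerate(ns):
            occ[j] += w * n
    return [o / Z for o in occ]

def saddle_mean(sign):
    """(n_j)_av = g_j w0 z_j / (1 +/- w0 z_j), with w0 fixed by sum_j (n_j)_av = N."""
    F = lambda w: sum(gi * w * zi / (1 + sign * w * zi) for gi, zi in zip(g, z)) - N
    lo = 1e-12
    hi = (1 - 1e-9) / max(z) if sign == -1 else 1e9   # for bosons keep 1 - w z_i > 0
    for _ in range(200):                               # bisection; F is monotone in w
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    w0 = 0.5 * (lo + hi)
    return [gi * w0 * zi / (1 + sign * w0 * zi) for gi, zi in zip(g, z)]

for sign, name in [(1, "Fermi-Dirac"), (-1, "Bose-Einstein")]:
    print(name, "exact :", [round(x, 3) for x in exact_mean(sign)])
    print(name, "saddle:", [round(x, 3) for x in saddle_mean(sign)])

For such a small N the saddle-point values are only approximate, as expected of an asymptotic expansion; the agreement improves as N grows.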

