Statistical mechanics, one of the pillars of modern physics, describes how macroscopic observations (such as temperature and pressure) are related to microscopic parameters that fluctuate around an average. It connects thermodynamic quantities (such as heat capacity) to microscopic behavior, whereas, in classical thermodynamics, the only available option would be to measure and tabulate such quantities for various materials.[1]

Statistical mechanics is necessary for the fundamental study of any physical system that has many degrees of freedom. The approach is based on statistical methods, probability theory and the microscopic physical laws.[1][2][3][note 1]

It can be used to explain the thermodynamic behaviour of large systems. This branch of statistical mechanics, which treats and extends classical thermodynamics, is known as statistical thermodynamics or equilibrium statistical mechanics.

Statistical mechanics can also be used to study systems that are out of equilibrium. An important sub-branch known as non-equilibrium statistical mechanics (sometimes called statistical dynamics) deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions or flows of particles and heat. The fluctuation–dissipation theorem is a basic result obtained by applying non-equilibrium statistical mechanics to the simplest non-equilibrium situation: a steady-state current flow in a system of many particles.

Principles: mechanics and ensembles

In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:

    1. The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
    2. An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics).

    Using these two concepts, the state at any other time, past or future, can in principle be calculated. There is however a disconnection between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
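
As a minimal illustration of the classical case, the sketch below (a hypothetical Python example, not part of the article) integrates Hamilton's equations for an arbitrarily chosen one-dimensional harmonic oscillator, carrying a single phase point (q, p) forward in time; the unit mass, spring constant, and leapfrog step size are illustrative assumptions.

# Sketch: carrying one complete classical state (a phase point) forward in time
# by integrating Hamilton's equations  dq/dt = p/m,  dp/dt = -dV/dq
# for a 1-D harmonic oscillator V(q) = 0.5*k*q**2 (illustrative parameters).

m, k = 1.0, 1.0          # mass and spring constant (assumed values)
dt, steps = 0.01, 1000   # time step and number of steps

def force(q):
    return -k * q        # -dV/dq for the harmonic potential

q, p = 1.0, 0.0          # the complete state of the system at t = 0
for _ in range(steps):
    # leapfrog (velocity Verlet) update, a simple symplectic integrator
    p += 0.5 * dt * force(q)
    q += dt * p / m
    p += 0.5 * dt * force(q)

print(f"state at t = {steps*dt:.1f}: q = {q:.4f}, p = {p:.4f}")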

    Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinates. In quantum statistical mechanics, the ensemble is a probability distribution over pure states,[note 2] and can be compactly summarized as a density matrix.
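
To make the notion of an ensemble concrete, the following hypothetical sketch represents a classical ensemble by a finite sample of phase points drawn from an assumed Gaussian distribution in phase space, and estimates the ensemble average of an observable (here the oscillator energy) as a sample average. The distribution, observable, and parameters are illustrative choices, not prescribed by the article.

# Sketch: a classical ensemble as a probability distribution over phase points,
# approximated by a finite sample; ensemble averages become sample averages.
import random

random.seed(0)
N = 100_000                      # number of virtual copies of the system
sigma_q, sigma_p = 1.0, 1.0      # assumed widths of the phase-space distribution

def energy(q, p, m=1.0, k=1.0):  # observable evaluated on one phase point
    return p * p / (2 * m) + 0.5 * k * q * q

points = [(random.gauss(0, sigma_q), random.gauss(0, sigma_p)) for _ in range(N)]
avg_E = sum(energy(q, p) for q, p in points) / N
print(f"ensemble average energy ~ {avg_E:.3f}")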

    As is usual for probabilities, the ensemble can be interpreted in different ways:[1]

    • an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
    • the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.

    These two meanings are equivalent for many purposes, and will be used interchangeably in this article.

    However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
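
This member-by-member evolution can be mimicked numerically. In the hypothetical sketch below, every sampled phase point of the illustrative oscillator ensemble is advanced by the same equation of motion, and a simple statistic of the cloud shows the distribution itself changing in time; the initial (non-equilibrium) distribution is an arbitrary assumption.

# Sketch: evolving an ensemble by evolving each virtual system separately.
# Each member follows Hamilton's equations; its probability weight is conserved,
# so the distribution over phase points evolves in time.
import random

random.seed(0)
m, k, dt = 1.0, 1.0, 0.01
members = [(random.gauss(1.0, 0.2), random.gauss(0.0, 0.2)) for _ in range(10_000)]

def step(q, p):
    # one leapfrog step of Hamilton's equations for the illustrative oscillator
    p += 0.5 * dt * (-k * q)
    q += dt * p / m
    p += 0.5 * dt * (-k * q)
    return q, p

for n in range(1, 301):
    members = [step(q, p) for q, p in members]
    if n % 100 == 0:
        mean_q = sum(q for q, _ in members) / len(members)
        print(f"t = {n*dt:.1f}: mean position over the ensemble = {mean_q:.3f}")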

    One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state.[note 3] The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
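
For contrast with the previous sketch, the following hypothetical example starts from a distribution that depends only on the conserved energy of the illustrative oscillator (an assumed choice, anticipating the sufficient condition discussed below). Each member keeps moving, but the ensemble's statistics remain essentially constant: the ensemble is in statistical equilibrium.

# Sketch: a statistical-equilibrium ensemble for the illustrative oscillator.
# Sampling q and p from the same Gaussian gives a distribution that is a
# function of the conserved energy only, so evolving every member leaves the
# ensemble's statistics unchanged even though each member keeps moving.
import random

random.seed(0)
m, k, dt = 1.0, 1.0, 0.01
members = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]

def step(q, p):
    p += 0.5 * dt * (-k * q)
    q += dt * p / m
    p += 0.5 * dt * (-k * q)
    return q, p

for n in range(1, 301):
    members = [step(q, p) for q, p in members]
    if n % 150 == 0:
        var_q = sum(q * q for q, _ in members) / len(members)
        print(f"t = {n*dt:.1f}: <q^2> over the ensemble = {var_q:.3f} (approximately constant)")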

    The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.

    Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.

    A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).[1] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics.[1] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.

    A common approach found in many textbooks is to take the equal a priori probability postulate.[2] This postulate states that

    For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
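
As a toy illustration of the postulate (not an example taken from the article), the sketch below enumerates the microstates of four hypothetical two-level particles, keeps only those consistent with an exactly known total energy, and assigns each the same probability; this is the microcanonical ensemble for that toy model.

# Sketch: the equal a priori probability postulate for a toy model of four
# two-level particles (each with energy 0 or 1). With the total energy known
# exactly, every consistent microstate gets the same probability.
from itertools import product

E_total = 2                                   # exactly known total energy (assumed)
microstates = [s for s in product((0, 1), repeat=4) if sum(s) == E_total]

prob = 1.0 / len(microstates)                 # equal a priori probabilities
print(f"{len(microstates)} accessible microstates, each with probability {prob:.4f}")

# Ensemble average of an observable, e.g. the energy of the first particle:
avg_first = sum(s[0] for s in microstates) * prob
print(f"<E_1> in the microcanonical ensemble = {avg_first:.4f}")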

    The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

    • Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy.

      Other fundamental postulates for statistical mechanics have also been proposed.[5]

      Three thermodynamic ensembles

      There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume.[1] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.

      Microcanonical ensemble
      describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
      Canonical ensemble
      describes a system of fixed composition that is in thermal equilibrium[note 4] with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
      Grand canonical ensemble
      describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature and precise chemical potentials for the various types of particles.

      For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used.[6] The Gibbs theorem about equivalence of ensembles[7] was developed into the theory of concentration of measure phenomenon,[8] which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.[9]
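
For the same hypothetical toy model of two-level particles, the sketch below illustrates the canonical ensemble: each microstate is weighted by a Boltzmann factor exp(-E/kT) and normalized by the partition function. The bath temperature and the convention k = 1 are illustrative assumptions, not values from the article.

# Sketch: canonical ensemble for four two-level particles in contact with a
# heat bath at temperature T. Each microstate s has probability
# exp(-E(s)/T) / Z, where Z is the partition function (units with k_B = 1).
from itertools import product
from math import exp

T = 1.5                                        # assumed bath temperature (k_B = 1)
states = list(product((0, 1), repeat=4))
energies = [sum(s) for s in states]

Z = sum(exp(-E / T) for E in energies)         # partition function
probs = [exp(-E / T) / Z for E in energies]

avg_E = sum(p * E for p, E in zip(probs, energies))
print(f"Z = {Z:.3f}, canonical average energy <E> = {avg_E:.3f}")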

      Important cases where the thermodynamic ensembles do not give identical results include:

      • Microscopic systems.
      • Large systems at a phase transition.
      • Large systems with long-range interactions.

      In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.[2]

      Thermodynamic ensembles[1]
                        Microcanonical   Canonical   Grand canonical
      Fixed variables   N, E, V          N, T, V     μ, T, V