In
theoretical physics
, the term renormalization group (RG) refers to a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In
particle physics
, it reflects the changes in the underlying force laws (codified in a quantum field theory) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the
uncertainty principle
. A change in scale is called a scale transformation. The renormalization group is intimately related to ''scale invariance'' and ''conformal invariance'', symmetries in which a system appears the same at all scales (so-called self-similarity). As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system.

In so-called renormalizable theories, the system at one scale will generally be seen to consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.

For example, in quantum electrodynamics (QED), an electron appears to be composed of electrons, positrons (anti-electrons) and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or ''running'', in the value of the electric charge is determined by the renormalization group equation.


History

The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the
Pythagorean school
,
Euclid
, and up to Galileo. They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced
viscosity
of Osborne Reynolds, as a way to explain turbulence. The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics,
fluid mechanics
,
physical cosmology
, and even nanotechnology. An early article by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function ''h''(''e'') in quantum electrodynamics (QED), which is now called the beta function (see below).


Beginnings

Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter ''g''(''μ'') at the energy scale ''μ'' is effectively given by the (one-dimensional translation) group equation

:g(\mu)=G^{-1}\left(\left(\frac{\mu}{M}\right)^d G(g(M))\right)

or equivalently, G\left(g(\mu)\right)= G(g(M))\left(\frac{\mu}{M}\right)^d, for some function ''G'' (unspecified—nowadays called
Wegner
's scaling function) and a constant ''d'', in terms of the coupling ''g''(''M'') at a reference scale ''M''.

Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as ''μ'', and can vary to define the theory at any other scale:

:g(\kappa)=G^{-1}\left(\left(\frac{\kappa}{\mu}\right)^d G(g(\mu))\right) = G^{-1}\left(\left(\frac{\kappa}{M}\right)^d G(g(M))\right)

The gist of the RG is this group property: as the scale ''μ'' varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings in the mathematical sense (Schröder's equation).

On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function of the coupling parameter ''g'', which they introduced. Like the function ''h''(''e'') of Stueckelberg and Petermann, their function determines the differential change of the coupling ''g''(''μ'') with respect to a small change in energy scale ''μ'' through a differential equation, the ''renormalization group equation'':

:\frac{\partial g}{\partial \ln\mu} = \psi(g) = \beta(g)

The modern name is also indicated, the beta function, introduced by C. Callan and K. Symanzik in 1970. Since it is a mere function of ''g'', integration in ''g'' of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function ''G'' in this perturbative approximation.

The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine structure "constant" of QED was measured to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137.
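The finite group equation can be checked numerically with a toy, invertible choice of the scaling function ''G'' (the particular ''G'' and exponent ''d'' below are arbitrary illustrative choices, not derived from QED):

```python
import math

def G(g):
    """An illustrative, invertible stand-in for Wegner's scaling function."""
    return math.exp(-1.0 / g)

def G_inv(y):
    return -1.0 / math.log(y)

def run(g_ref, scale_ref, scale, d=-1.0):
    """Finite Gell-Mann--Low group equation:
    g(scale) = G^{-1}((scale/scale_ref)^d * G(g_ref))."""
    return G_inv((scale / scale_ref) ** d * G(g_ref))

g_M = 0.3                          # coupling at the reference scale M = 1
g_mu = run(g_M, 1.0, 10.0)         # run M -> mu
via_mu = run(g_mu, 10.0, 100.0)    # then mu -> kappa
direct = run(g_M, 1.0, 100.0)      # or M -> kappa in a single step
print(via_mu, direct)              # the two routes agree: the group property
```

Any invertible ''G'' works here, since the composition property follows from the functional form of the group equation alone.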


Deeper understanding

The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ. The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ.

Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in ''g'' is provided by the above RG equation given ψ(''g''), the self-similarity is expressed by the fact that ψ(''g'') depends explicitly only upon the parameter(s) of the theory, and not upon the scale ''μ''. Consequently, the above renormalization group equation may be solved for (''G'' and thus) ''g''(''μ'').

A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional ''renormalizable'' theories, considers methods where widely different scales of lengths appear simultaneously. It came from
condensed matter physics
:
Leo P. Kadanoff
's paper in 1966 proposed the "block-spin" renormalization group. The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance in the extensive, important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.


Reformulation

Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the
Standard Model
. In 1973, it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that an initial high-energy value of the coupling will grow as the energy is lowered, until a special value of ''μ'' is reached at which the coupling blows up (diverges). This special value is the scale of the strong interactions, ''μ'' = Λ_QCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.

Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.
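Both behaviors follow from the standard one-loop formula for the QCD coupling; a minimal sketch (the value Λ ≈ 0.2 GeV below is illustrative, since its precise value is scheme-dependent):

```python
import math

def alpha_s(mu_gev, lam_gev=0.2, n_f=3):
    """One-loop QCD running coupling:
    alpha_s(mu) = 1 / (b0 * ln(mu^2 / Lambda^2)),  b0 = (33 - 2*n_f)/(12*pi)."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(mu_gev**2 / lam_gev**2))

print(alpha_s(91.2))    # small at high energy: asymptotic freedom
print(alpha_s(0.25))    # grows without bound as mu approaches Lambda_QCD
```

The divergence at ''μ'' → Λ_QCD signals the breakdown of perturbation theory at the strong-interaction scale, not a physical infinity.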


Conformal symmetry

The conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a ''fixed point'' at which ''β''(''g'') = 0. In QCD, the fixed point occurs at short distances where ''g'' → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving
Higgs boson
runs toward a fixed non-zero (non-trivial)
infrared fixed point
, first predicted by Pendleton and Ross (1981), and
C. T. Hill
. The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model, suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.

In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: ''β'' = 0 is a requirement. Here, ''β'' is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of
general relativity
on the geometry. The RG is of fundamental importance to string theory and theories of grand unification. It is also the modern key idea underlying critical phenomena in condensed matter physics. Indeed, the RG has become one of the most important tools of modern physics. It is often used in combination with the
Monte Carlo method
.


Block spin

This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by
Leo P. Kadanoff
in 1966. Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure. Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature ''T''. The strength of their interaction is quantified by a certain coupling ''J''. The physics of the system will be described by a certain formula, say the Hamiltonian ''H''(''T'', ''J'').

Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a ''formula of the same kind'', but with different values for ''T'' and ''J'': ''H''(''T′'', ''J′''). (This isn't exactly true, in general, but it is often a good first approximation.)

Perhaps, the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to ''H''(''T″'', ''J″''), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.

Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the ''long range'' behaviour of the RG transformation which took (''T'', ''J'') → (''T′'', ''J′'') and (''T′'', ''J′'') → (''T″'', ''J″''). Often, when iterated many times, this RG transformation leads to a certain number of fixed points. To be more concrete, consider a
magnetic
system (e.g., the Ising model), in which the coupling ''J'' denotes the trend of neighbour spins to be parallel. The configuration of the system is the result of the tradeoff between the ordering ''J'' term and the disordering effect of temperature. For many models of this kind there are three fixed points:
# ''T'' = 0 and ''J'' → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, in large scales, the system appears to be ordered. We are in a ferromagnetic phase.
# ''T'' → ∞ and ''J'' → 0. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
# A nontrivial point between them, ''T'' = ''T''_c and ''J'' = ''J''_c. At this point, changing the scale does not change the physics, because the system is in a
fractal
state. It corresponds to the Curie
phase transition
, and is also called a critical point. So, if we are given a certain material with given values of ''T'' and ''J'', all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair (''T'', ''J'') until we find the corresponding fixed point.
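The 2×2 block construction above is hard to carry out exactly, but its simplest relative, decimation of every second spin in the 1D Ising chain, gives an exact recursion for the dimensionless coupling ''K'' = ''J''/''kT'' and makes the flow to a fixed point explicit:

```python
import math

def decimate(K):
    """Exact RG step for the 1D Ising chain: summing out every second
    spin yields tanh(K') = tanh(K)**2, where K = J/(k_B T)."""
    return math.atanh(math.tanh(K) ** 2)

K = 2.0                 # start from a fairly strong coupling
for step in range(10):
    K = decimate(K)
print(K)                # flows toward the K = 0 (disordered) fixed point
```

The flow always reaches ''K'' = 0 from any finite starting value, which is the RG way of seeing that the 1D Ising chain has no finite-temperature phase transition; the ordered fixed point ''K'' = ∞ is unstable.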


Elementary theory

In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables \{s_i\} and a certain set of coupling constants \{J_k\}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables \{s_i\}\to \{\tilde s_i\}; the number of \tilde s_i must be lower than the number of s_i. Now let us try to rewrite the Z function ''only'' in terms of the \tilde s_i. If this is achievable by a certain change in the parameters, \{J_k\}\to \{\tilde J_k\}, then the theory is said to be renormalizable.

For some reason, most fundamental theories of physics such as quantum electrodynamics, quantum chromodynamics and the electroweak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from
superconductivity
to fluid turbulence. The change in the parameters is implemented by a certain beta function: \{\tilde J_k\}=\beta(\{J_k\}), which is said to induce a renormalization group flow (or RG flow) on the ''J''-space. The values of ''J'' under the flow are called running couplings. As was stated in the previous section, the most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit
quantum triviality
, possessing what is called a Landau pole, as in quantum electrodynamics. For a ''φ''⁴ interaction, Michael Aizenman proved that this theory is indeed trivial, for space-time dimension ''d'' ≥ 5. For ''d'' = 4, the triviality has yet to be proven rigorously (pending a recent submission to the arXiv), but lattice computations have provided strong evidence for this. This fact is important as
quantum triviality
can be used to bound or even ''predict'' parameters such as the
Higgs boson
mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.

Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see, as an example in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a
semigroup
, as lossiness implies that there is no unique inverse for each element.
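A toy beta function makes the flow and its fixed points concrete (the function below is invented for illustration; it has a trivial fixed point at ''g'' = 0 and a nontrivial attractive one at ''g''* = 1):

```python
def beta(g, a=1.0, b=1.0):
    """Toy beta function with fixed points at g = 0 and g* = sqrt(a/b) = 1."""
    return a * g - b * g**3

def flow(g, steps=2000, dt=0.01):
    """Euler-integrate the RG flow dg/dt = beta(g), with t = ln(scale)."""
    for _ in range(steps):
        g += dt * beta(g)
    return g

# Different microscopic couplings are attracted to the same fixed point:
print(flow(0.1), flow(2.0))   # both approach the nontrivial fixed point g* = 1
```

Because the flow forgets its starting point, the macroscopic state is characterized by the fixed point alone, which is exactly the role the fixed points play in the discussion above.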


Relevant and irrelevant operators and universality classes

Consider a certain observable of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large determines the importance of the observable(s) for the scaling law: A ''relevant'' observable is needed to describe the macroscopic behaviour of the system; ''irrelevant'' observables are not needed. ''Marginal'' observables may or may not need to be taken into account.

A remarkably broad fact is that ''most observables are irrelevant'', i.e., ''the macroscopic physics is dominated by only a few observables in most systems''. As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (the Avogadro number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.

Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second-order phase transition) in very disparate phenomena, such as magnetic systems, the superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition ''depend only on a small number of variables'', such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.

This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by ''irrelevant observables'', while the ''relevant observables'' are shared in common. Hence many macroscopic phenomena may be grouped into a small set of
universality class
es, specified by the shared sets of relevant observables.
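Universality can be caricatured with a linearized RG map near a fixed point: couplings along eigendirections with eigenvalue greater than one are relevant and grow, while the rest are irrelevant and shrink, so systems differing only in irrelevant couplings flow to the same large-scale description. All numbers below are illustrative:

```python
def rg_step(t, u, lam_t=2.0, lam_u=0.5):
    """Linearized RG map near a fixed point: t is a relevant coupling
    (eigenvalue 2 > 1), u an irrelevant one (eigenvalue 0.5 < 1)."""
    return lam_t * t, lam_u * u

# Two "materials" with the same relevant coupling but wildly
# different irrelevant couplings:
a, b = (1e-6, 0.9), (1e-6, -0.4)
for _ in range(12):
    a, b = rg_step(*a), rg_step(*b)
print(a, b)   # the irrelevant parts have died away; the two flows coincide
```

The shared eigenvalue of the relevant direction is what fixes the common critical exponent of the whole universality class.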


Momentum space

Renormalization groups, in practice, come in two main "flavours". The Kadanoff picture explained above refers mainly to the so-called real-space RG. Momentum-space RG, on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the
Fourier modes
of a given field. The RG transformation proceeds by ''integrating out'' a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect to that of the real-space RG.

Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion. This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.

As an example of the physical meaning of RG in particle physics, consider an overview of ''charge renormalization'' in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence the ''dependence of a certain coupling constant (here, the electric charge) on the distance scale''. Momentum and length scales are related inversely, according to the de Broglie relation: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve.
Therefore, momentum-space RG practitioners sometimes speak of ''integrating out'' high momenta or high energy from their theories.


Exact renormalization group equations

An exact renormalization group equation (ERGE) is one that takes
irrelevant
couplings into account. There are several formulations. The Wilson ERGE is the simplest conceptually, but is practically impossible to implement.
Fourier transform
into momentum space after Wick rotating into
Euclidean space
. Insist upon a hard momentum cutoff, p^2 \le \Lambda^2, so that the only degrees of freedom are those with momenta less than \Lambda. The partition function is

:Z=\int_{p^2\le \Lambda^2} \mathcal{D}\phi \, \exp\left(-S_\Lambda[\phi]\right).

For any positive \Lambda' less than \Lambda, define S_{\Lambda'} (a functional over field configurations \phi whose Fourier transform has momentum support within p^2\le \Lambda'^2) as

:\exp\left(-S_{\Lambda'}[\phi]\right)\ \stackrel{\mathrm{def}}{=}\ \int_{\Lambda'\le p\le \Lambda} \mathcal{D}\phi \, \exp\left(-S_\Lambda[\phi]\right).

Obviously,

:Z=\int_{p\le \Lambda'}\mathcal{D}\phi \, \exp\left(-S_{\Lambda'}[\phi]\right).

In fact, this transformation is transitive: if you compute S_{\Lambda'} from S_\Lambda and then compute S_{\Lambda''} from S_{\Lambda'}, this gives you the same Wilsonian action as computing S_{\Lambda''} directly from S_\Lambda.
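For a free (Gaussian) field the modes decouple, so the shell integration factorizes mode by mode, and the transitivity can be verified directly. The handful of discrete modes below is a purely illustrative stand-in for the continuum of momenta:

```python
import math

modes = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # toy discretized mode momenta p

def gaussian_mode(p):
    # One free mode contributes: integral of exp(-p^2 * phi^2 / 2) dphi
    # over the real line, which equals sqrt(2*pi) / p.
    return math.sqrt(2 * math.pi) / p

def integrate_shell(lo, hi):
    """Integrate out all modes in the Wilsonian shell lo < p <= hi."""
    z = 1.0
    for p in modes:
        if lo < p <= hi:
            z *= gaussian_mode(p)
    return z

# Transitivity: lowering the cutoff 3 -> 2 -> 1 in two steps matches
# lowering it 3 -> 1 in a single step.
two_steps = integrate_shell(2.0, 3.0) * integrate_shell(1.0, 2.0)
one_step = integrate_shell(1.0, 3.0)
print(two_steps, one_step)
```

In an interacting theory the shell integration also generates new effective couplings for the remaining modes, but the semigroup composition property is exactly the one checked here.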