Neural Network Quantum States
Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems. Given a many-body quantum state |\Psi\rangle comprising N degrees of freedom and a choice of associated quantum numbers s_1 \ldots s_N , an NQS parameterizes the wave-function amplitudes \langle s_1 \ldots s_N | \Psi; W \rangle = F(s_1 \ldots s_N; W), where F(s_1 \ldots s_N; W) is an artificial neural network with parameters (weights) W , N input variables ( s_1 \ldots s_N ), and one complex-valued output corresponding to the wave-function amplitude. This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest.

Learning the Ground-State Wave Function

One common application of NQS is to find an approxi ...
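As a concrete illustration, the sketch below parameterizes the amplitudes of N spins with a restricted-Boltzmann-machine-style network, the architecture used in the original Carleo–Troyer work; the sizes, random initialization, and variable names are illustrative assumptions, not fixed by the definition above.

import numpy as np

# Minimal RBM-style neural network quantum state (illustrative shapes and names).
# Input: spin configuration s in {-1, +1}^N; output: complex wave-function amplitude.
N, M = 6, 12                      # visible spins, hidden units
rng = np.random.default_rng(0)
a = rng.normal(scale=0.01, size=N) + 1j * rng.normal(scale=0.01, size=N)
b = rng.normal(scale=0.01, size=M) + 1j * rng.normal(scale=0.01, size=M)
W = rng.normal(scale=0.01, size=(M, N)) + 1j * rng.normal(scale=0.01, size=(M, N))

def log_amplitude(s):
    """log F(s; W) for the RBM ansatz: sum_j a_j s_j + sum_i log 2 cosh(b_i + sum_j W_ij s_j)."""
    theta = b + W @ s
    return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

s = rng.choice([-1.0, 1.0], size=N)   # one spin configuration s_1 ... s_N
print("amplitude <s|Psi;W> =", np.exp(log_amplitude(s)))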


Variational Method (Quantum Mechanics)
In quantum mechanics, the variational method is one way of finding approximations to the lowest energy eigenstate or ground state, and to some excited states. This allows calculating approximate wavefunctions such as molecular orbitals. The basis for this method is the variational principle. The method consists of choosing a "trial wavefunction" depending on one or more parameters, and finding the values of these parameters for which the expectation value of the energy is the lowest possible. The wavefunction obtained by fixing the parameters to such values is then an approximation to the ground-state wavefunction, and the expectation value of the energy in that state is an upper bound to the ground-state energy. The Hartree–Fock method, the density matrix renormalization group, and the Ritz method all apply the variational method.

Description

Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian H . Ignoring complications about continuous spectra, w ...
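A minimal numerical sketch of this procedure, using an assumed toy problem (the 1D harmonic oscillator in units hbar = m = omega = 1, with a Gaussian trial wavefunction): every E(a) computed below sits at or above the exact ground-state energy 1/2, with equality at the optimal a = 1/2.

import numpy as np

# Variational principle illustration (assumed toy problem):
# H = -1/2 d^2/dx^2 + 1/2 x^2, trial wavefunction psi_a(x) = exp(-a x^2).
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def energy(a):
    psi = np.exp(-a * x**2)
    dpsi = np.gradient(psi, dx)
    kinetic = 0.5 * np.sum(dpsi**2) * dx        # integration-by-parts form of <T>
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    return (kinetic + potential) / (np.sum(psi**2) * dx)

for a in [0.2, 0.5, 1.0]:
    print(f"a = {a}: E(a) = {energy(a):.4f}  (upper bound on E0 = 0.5)")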


Variational Monte Carlo
In computational physics, variational Monte Carlo (VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of a quantum system. The basic building block is a generic wave function |\Psi(a)\rangle depending on some parameters a . The optimal values of the parameters a are then found upon minimizing the total energy of the system. In particular, given the Hamiltonian \mathcal{H} , and denoting with X a many-body configuration, the expectation value of the energy can be written as:

: E(a) = \frac{\langle \Psi(a) | \mathcal{H} | \Psi(a) \rangle}{\langle \Psi(a) | \Psi(a) \rangle} = \frac{\int |\Psi(X,a)|^2 \frac{\mathcal{H}\Psi(X,a)}{\Psi(X,a)} \, dX}{\int |\Psi(X,a)|^2 \, dX} .

Following the Monte Carlo method for evaluating integrals, we can interpret \frac{|\Psi(X,a)|^2}{\int |\Psi(X,a)|^2 \, dX} as a probability distribution function, sample it, and evaluate the energy expectation value E(a) as the average of the so-called local energy E_{\rm loc}(X) = \frac{\mathcal{H}\Psi(X,a)}{\Psi(X,a)} . Once E(a) is known for a given set of variational parameters a , then optimization is performed in order to minimize the energy and obtain the best possible representation of the g ...
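The sketch below runs this sampling loop for an assumed toy system (the 1D harmonic oscillator with trial Psi(x; a) = exp(-a x^2), whose local energy has a closed form); the Metropolis step size and sample count are arbitrary choices, and burn-in is omitted for brevity.

import numpy as np

# VMC sketch for an assumed toy problem: H = -1/2 d^2/dx^2 + 1/2 x^2,
# trial Psi(x; a) = exp(-a x^2), so E_loc(x) = H Psi / Psi = a + x^2 (1/2 - 2 a^2).
rng = np.random.default_rng(1)

def vmc_energy(a, n_samples=100_000, step=1.0):
    x, e_loc = 0.0, []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance for the density |Psi|^2 = exp(-2 a x^2)
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        e_loc.append(a + x**2 * (0.5 - 2.0 * a**2))
    return np.mean(e_loc)

for a in [0.3, 0.5, 0.8]:
    print(f"a = {a}: E(a) ~ {vmc_energy(a):.3f}")   # minimum near a = 0.5, E = 0.5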


Quantum Mechanics
Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles. It is the foundation of all quantum physics including quantum chemistry, quantum field theory, quantum technology, and quantum information science. Classical physics, the collection of theories that existed before the advent of quantum mechanics, describes many aspects of nature at an ordinary (macroscopic) scale, but is not sufficient for describing them at small (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large (macroscopic) scale. Quantum mechanics differs from classical physics in that energy, momentum, angular momentum, and other quantities of a bound system are restricted to discrete values ( quantization); objects have characteristics of both particles and waves (wave–particle duality); and there are limits to ...


Differentiable Programming
Differentiable programming is a programming paradigm in which a numeric computer program can be differentiated throughout via automatic differentiation. This allows for gradient-based optimization of parameters in the program, often via gradient descent, as well as other learning approaches that are based on higher-order derivative information. Differentiable programming has found use in a wide variety of areas, particularly scientific computing and artificial intelligence. One of the early proposals to adopt such a framework in a systematic fashion to improve upon learning algorithms was made by the Advanced Concepts Team at the European Space Agency in early 2016.

Approaches

Most differentiable programming frameworks work by constructing a graph containing the control flow and data structures in the program. Attempts generally fall into two groups:

* Static, compiled graph-based approaches such as TensorFlow (TensorFlow 1 uses the static graph approach, whereas TensorFlow 2 ...
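As a concrete illustration of what "differentiated throughout via automatic differentiation" means, the toy example below carries a value and its derivative through an ordinary numeric program using dual numbers (forward mode). It is a sketch of the idea, not the API of any particular framework.

# Tiny forward-mode automatic differentiation via operator overloading (illustrative).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot          # value and derivative travel together
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1               # an ordinary numeric program

y = f(Dual(2.0, 1.0))                          # seed dx/dx = 1
print(y.val, y.dot)                            # f(2) = 17, f'(2) = 6*2 + 2 = 14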




Entropy Of Entanglement
The entropy of entanglement (or entanglement entropy) is a measure of the degree of quantum entanglement between two subsystems constituting a two-part composite quantum system. Given a pure bipartite quantum state of the composite system, it is possible to obtain a reduced density matrix describing knowledge of the state of a subsystem. The entropy of entanglement is the von Neumann entropy of the reduced density matrix for either of the subsystems. If it is non-zero, i.e. the subsystem is in a mixed state, it indicates the two subsystems are entangled. More mathematically: if a state describing two subsystems ''A'' and ''B'', |\Psi_{AB}\rangle = |\phi_A\rangle |\phi_B\rangle , is a separable state, then the reduced density matrix \rho_A = \operatorname{Tr}_B |\Psi_{AB}\rangle\langle\Psi_{AB}| = |\phi_A\rangle\langle\phi_A| is a pure state. Thus, the entropy of the state is zero. Similarly, the density matrix of ''B'' would also have zero entropy. A reduced density matrix having a non-zero entropy is there ...
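A short numerical illustration (an assumed two-qubit example, with a hypothetical helper name): trace out subsystem B, diagonalize the reduced density matrix, and sum -lambda ln lambda over its eigenvalues. A product state gives zero; a maximally entangled Bell state gives ln 2.

import numpy as np

# Entanglement entropy of a bipartite pure state (illustrative helper).
def entanglement_entropy(psi, dim_a, dim_b):
    m = psi.reshape(dim_a, dim_b)              # amplitude matrix Psi_{ab}
    rho_a = m @ m.conj().T                     # rho_A = Tr_B |Psi><Psi|
    lam = np.linalg.eigvalsh(rho_a)
    lam = lam[lam > 1e-12]                     # drop numerical zeros
    return float(-np.sum(lam * np.log(lam)))

product = np.kron([1, 0], [1, 0]).astype(complex)          # |00>, separable
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
print(entanglement_entropy(product, 2, 2))  # 0.0
print(entanglement_entropy(bell, 2, 2))     # ln 2 ~ 0.693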


Matrix Product State
Matrix product state (MPS) is a quantum state of many particles (on N sites), written in the following form:

: |\Psi\rangle = \sum_{\{s\}} \operatorname{Tr}\left[ A_1^{(s_1)} A_2^{(s_2)} \cdots A_N^{(s_N)} \right] |s_1 s_2 \ldots s_N\rangle ,

where A_i^{(s_i)} are complex, square matrices of order \chi (this dimension is called the bond dimension). Indices s_i go over states in the computational basis. For qubits, it is s_i \in \{0, 1\} . For qudits (d-level systems), it is s_i \in \{0, 1, \ldots, d-1\} . It is particularly useful for dealing with ground states of one-dimensional quantum spin models (e.g. the quantum Heisenberg model). The parameter \chi is related to the entanglement between particles. In particular, if the state is a product state (i.e. not entangled at all), it can be described as a matrix product state with \chi = 1 . For states that are translationally symmetric, we can choose:

: A_1^{(s)} = A_2^{(s)} = \cdots = A_N^{(s)} \equiv A^{(s)} .

In general, every state can be written in the MPS form (with \chi growing exponentially with the particle number ...
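The trace form above can be contracted directly for small systems. The sketch below reconstructs all d^N amplitudes of a periodic-boundary MPS; the sizes and random tensors are illustrative assumptions.

import numpy as np
from itertools import product

# Full state vector from an MPS: amplitude(s_1 ... s_N) = Tr[A^(s_1) ... A^(s_N)].
N, d, chi = 4, 2, 3                            # sites, local dimension, bond dimension
rng = np.random.default_rng(2)
A = rng.normal(size=(N, d, chi, chi))          # A[i, s] is a chi x chi matrix

amps = np.empty([d] * N)
for s in product(range(d), repeat=N):
    m = np.eye(chi)
    for i, si in enumerate(s):
        m = m @ A[i, si]                       # multiply the matrices along the chain
    amps[s] = np.trace(m)

psi = amps.reshape(-1)
psi /= np.linalg.norm(psi)                     # normalize the d^N amplitudes
print(psi.shape)                               # (16,) for d = 2, N = 4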


Tensor Networks
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system. Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), general relativity (stress–energy tenso ...
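A small numerical example of this multilinear behavior (an assumed illustration, not from the text): a (0,2)-tensor g maps two vectors to a scalar, linearly in each argument.

import numpy as np

# A (0,2)-tensor acting as a bilinear map: g(u, v) = g_ij u^i v^j.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])                     # components in a chosen basis
u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

def apply(g, u, v):
    return np.einsum("ij,i,j->", g, u, v)

# Linearity in the first argument: g(2u + v, v) = 2 g(u, v) + g(v, v)
print(np.isclose(apply(g, 2 * u + v, v), 2 * apply(g, u, v) + apply(g, v, v)))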


Unsupervised Learning
Unsupervised learning is a type of machine learning algorithm that learns patterns from untagged data. The hope is that through mimicry, which is an important mode of learning in people, the machine is forced to build a concise representation of its world and then generate imaginative content from it. In contrast to supervised learning, where data is tagged by an expert, e.g. tagged as a "ball" or "fish", unsupervised methods exhibit self-organization that captures patterns as probability densities or a combination of neural feature preferences encoded in the machine's weights and activations. The other levels in the supervision spectrum are reinforcement learning, where the machine is given only a numerical performance score as guidance, and semi-supervised learning, where a small portion of the data is tagged.

Neural networks

Tasks vs. methods

Neural network tasks are often categorized as discriminative (recognition) or generative (imagination). Often but not always, discriminative tas ...


Stochastic Gradient Descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate. While the basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s, stochastic gradient descent has become an important optimization method in machine learning.

Background

Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum:

: Q(w) = \frac{1}{n} \sum_{i=1}^n Q_i(w) , ...
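A minimal sketch of the idea (the problem and hyperparameters are illustrative assumptions): at each step, pick one term Q_i at random and descend along its gradient instead of the full-sum gradient.

import numpy as np

# SGD on a least-squares objective Q(w) = (1/n) sum_i (x_i . w - y_i)^2.
rng = np.random.default_rng(3)
n, dim = 1000, 3
X = rng.normal(size=(n, dim))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

w, lr = np.zeros(dim), 0.01
for step in range(20_000):
    i = rng.integers(n)                        # sample one term Q_i
    grad_i = 2.0 * (X[i] @ w - y[i]) * X[i]    # gradient of that single term
    w -= lr * grad_i
print(w)                                       # approaches w_true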


Backpropagation
In machine learning, backpropagation (backprop, BP) is a widely used algorithm for training feedforward artificial neural networks. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally. These classes of algorithms are all referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by ...
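A hand-written sketch of one backpropagation step for a one-hidden-layer network (sizes, initialization, and the squared-error loss are illustrative assumptions): the forward pass caches intermediates, and the backward pass reuses them via the chain rule rather than differentiating with respect to each weight separately.

import numpy as np

# One forward/backward pass for a single input-output example.
rng = np.random.default_rng(4)
x, target = rng.normal(size=3), 1.0
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))

# forward pass, caching intermediates
z = W1 @ x
h = np.tanh(z)
y = W2 @ h
loss = 0.5 * (y[0] - target) ** 2

# backward pass: chain rule from the output back to the weights
dy = y[0] - target                             # dL/dy
dW2 = dy * h[None, :]                          # dL/dW2
dh = dy * W2[0]                                # dL/dh
dz = dh * (1.0 - h**2)                         # back through tanh
dW1 = np.outer(dz, x)                          # dL/dW1
print(loss, dW1.shape, dW2.shape)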


Deep Learning
Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised. Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and Transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial ...


Gradient
In vector calculus, the gradient of a scalar-valued differentiable function of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p is the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p , the direction of the gradient is the direction in which the function increases most quickly from p , and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:

: df = \nabla f \cdot d\mathbf{r}

where ''df'' is the total infinitesimal change in ''f'' for an infinitesimal displacement d\mathbf{r} , and is seen to be maximal when d\mathbf{r} is in the direction of the gradi ...
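A quick numerical check of this relation (an assumed example): for f(x, y) = x^2 y the analytic gradient is (2xy, x^2), and for a small displacement dr the change df matches \nabla f \cdot d\mathbf{r} to first order.

import numpy as np

# Verify df = grad(f) . dr for f(x, y) = x^2 y at a sample point.
def f(p):
    x, y = p
    return x**2 * y

p = np.array([1.5, -2.0])
grad = np.array([2 * p[0] * p[1], p[0] ** 2])  # analytic gradient at p
dr = 1e-6 * np.array([0.3, 0.7])               # a small displacement

df = f(p + dr) - f(p)
print(df, grad @ dr)                           # the two values agree to first order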