
Boson sampling is a restricted model of non-universal quantum computation introduced by Scott Aaronson and Alex Arkhipov, after the original work of Lidror Troyansky and Naftali Tishby that explored the possible use of boson scattering to evaluate expectation values of permanents of matrices. The model consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement computing tasks that are hard for classical computers while using far fewer physical resources than a full linear-optical quantum computing setup. This advantage makes it an ideal candidate for demonstrating the power of quantum computation in the near term.


Description

Consider a multimode linear-optical circuit of ''N'' modes that is injected with ''M'' indistinguishable single photons (''N'' > ''M''). Then, the photonic implementation of the boson sampling task consists of generating a sample from the probability distribution of single-photon measurements at the output of the circuit. Specifically, this requires reliable sources of single photons (currently the most widely used ones are parametric down-conversion crystals), as well as a linear interferometer. The latter can be fabricated, e.g., with fused-fiber beam splitters, through silica-on-silicon or laser-written integrated interferometers, or electrically and optically interfaced optical chips. Finally, the scheme also requires high-efficiency single-photon-counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit. Therefore, based on these three ingredients, the boson sampling setup does not require any ancillas, adaptive measurements or entangling operations, as does, e.g., the universal optical scheme by Knill, Laflamme and Milburn (the KLM scheme). This makes it a non-universal model of quantum computation, and reduces the amount of physical resources needed for its practical realization.

Specifically, suppose the linear interferometer is described by an ''N×N'' unitary matrix U, which performs a linear transformation of the creation (annihilation) operators a^\dagger_i (a_i) of the circuit's input modes:

:b^\dagger_j = \sum_{i=1}^{N} U_{ji}\, a^\dagger_i \quad \left(b_j = \sum_{i=1}^{N} U^*_{ji}\, a_i\right).

Here ''i'' (''j'') labels the input (output) modes, and b^\dagger_j (b_j) denotes the creation (annihilation) operators of the output modes (''i'', ''j'' = 1, ..., ''N''). An interferometer characterized by some unitary U naturally induces a unitary evolution \varphi_M(U) on ''M''-photon states. Moreover, the map \varphi_M is a homomorphism between ''N''-dimensional unitary matrices and unitaries acting on the exponentially large Hilbert space of the system: simple counting arguments show that the size of the Hilbert space corresponding to a system of ''M'' indistinguishable photons distributed among ''N'' modes is given by the binomial coefficient \tbinom{M+N-1}{M} (notice that since this homomorphism exists, not all values of \varphi_M(U) are possible).

Suppose the interferometer is injected with an input state of single photons |\psi_{\rm in}\rangle = |s_1, s_2, \ldots, s_N\rangle with \sum_{k=1}^{N} s_k = M (s_k is the number of photons injected into the ''k''th mode). Then, the state at the output of the circuit can be written down as |\psi_{\rm out}\rangle = \varphi_M(U)\,|s_1, s_2, \ldots, s_N\rangle. A simple way to understand the homomorphism between U and \varphi_M(U) is the following: we define an isomorphism for the basis states, |s_1, s_2, \ldots, s_N\rangle \leftrightarrow P_{s_1,\ldots,s_N}(x) \equiv x_1^{s_1} x_2^{s_2} \cdots x_N^{s_N}, and get the following result: \varphi_M(U)\, P_{s_1,\ldots,s_N}(x) = P_{s_1,\ldots,s_N}(Ux).

Consequently, the probability p(t_1, t_2, \ldots, t_N) of detecting t_k photons at the ''k''th output mode is given as

:p(t_1, t_2, \ldots, t_N) = |\langle t_1, t_2, \ldots, t_N|\psi_{\rm out}\rangle|^2 = \frac{|\text{Perm}\,U_{S,T}|^2}{s_1!\cdots s_N!\; t_1!\cdots t_N!}.

In the above expression \text{Perm}\,U_{S,T} stands for the permanent of the matrix U_{S,T}, which is obtained from the unitary U by repeating s_i times its ''i''th column and t_j times its ''j''th row. Usually, in the context of the boson sampling problem the input state is taken of a standard form, denoted as |1_M\rangle, for which each of the first ''M'' modes of the interferometer is injected with a single photon. In this case the above expression reads:

:p(t_1, t_2, \ldots, t_N) = |\langle t_1, t_2, \ldots, t_N|\varphi_M(U)|1_M\rangle|^2 = \frac{|\text{Perm}\,U_T|^2}{t_1!\cdots t_N!},

where the matrix U_T is obtained from U by keeping its first ''M'' columns and repeating t_j times its ''j''th row. Subsequently, the task of boson sampling is to sample either exactly or approximately from the above output distribution, given the unitary U describing the linear-optical circuit as input. As detailed below, the appearance of the permanent in the corresponding statistics of single-photon measurements contributes to the hardness of the boson sampling problem.
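To make the last formula concrete, here is a minimal Python sketch (an illustration added here, not code from the literature) that evaluates p(t_1, ..., t_N) for the standard input |1_M\rangle from a Haar-random unitary; the function names are ours, and Ryser's O(2^M M) inclusion-exclusion formula stands in for the permanent.

```python
import numpy as np
from math import factorial

def ryser_permanent(A):
    """Permanent of a square complex matrix via Ryser's inclusion-exclusion formula."""
    n = A.shape[0]
    total = 0j
    for subset in range(1, 1 << n):                       # all non-empty column subsets
        cols = [k for k in range(n) if (subset >> k) & 1]
        row_sums = A[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

def output_probability(U, t):
    """p(t_1, ..., t_N) for the standard input |1_M>, with M = sum(t)."""
    M = sum(t)
    rows = [j for j, tj in enumerate(t) for _ in range(tj)]   # repeat row j, t_j times
    U_T = U[np.ix_(rows, list(range(M)))]                     # keep the first M columns
    return abs(ryser_permanent(U_T)) ** 2 / np.prod([factorial(tj) for tj in t])

# Usage: probability of one photon in each of the first two output modes of a
# 4-mode interferometer with input |1,1,0,0>.
rng = np.random.default_rng(0)
Z = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
U, _ = np.linalg.qr(Z)   # QR of a Ginibre matrix: Haar-random up to column phases
print(output_probability(U, (1, 1, 0, 0)))
```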


Complexity of the problem

The main reason for the growing interest in the model of boson sampling is that, despite being non-universal, it is strongly believed to perform a computational task that is intractable for a classical computer. One of the main reasons behind this is that the probability distribution which the boson sampling device has to sample from is related to the permanent of complex matrices. The computation of the permanent is in the general case an extremely hard task: it falls in the #P-hard complexity class. Moreover, its approximation to within multiplicative error is a #P-hard problem as well. All current proofs of the hardness of simulating boson sampling on a classical computer rely on the strong computational consequences that its efficient simulation by a classical algorithm would have. Namely, these proofs show that an efficient classical simulation would imply the collapse of the polynomial hierarchy to its third level, a possibility that is considered very unlikely by the computer science community due to its strong computational implications (in line with the strong implications of the P = NP problem).


Exact sampling

The hardness proof of the exact boson sampling problem can be achieved following two distinct paths. Specifically, the first one uses the tools of computational complexity theory and combines the following two facts:
# Approximating the probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome at the output of a linear interferometer to within a multiplicative constant is a #P-hard problem (due to the complexity of the permanent).
# If a polynomial-time classical algorithm for exact boson sampling existed, then the above probability p(t_1, t_2, \ldots, t_N) could be approximated to within a multiplicative constant in the BPP^NP complexity class, i.e., within the third level of the polynomial hierarchy.
When combined with Toda's theorem, these two facts result in the collapse of the polynomial hierarchy, which, as mentioned above, is highly unlikely to occur. This leads to the conclusion that there is no classical polynomial-time algorithm for the exact boson sampling problem.

On the other hand, the alternative proof is inspired by a similar result for another restricted model of quantum computation – the model of instantaneous quantum computing. Namely, the proof uses the KLM scheme, which says that linear optics with adaptive measurements is universal for the class BQP. It also relies on the following facts:
# Linear optics with postselected measurements is universal for PostBQP, i.e., the quantum polynomial-time class with postselection (a straightforward corollary of the KLM construction).
# The class PostBQP is equivalent to PP (the probabilistic polynomial-time class): PostBQP = PP.
# The existence of a classical boson sampling algorithm implies the simulability of postselected linear optics in the PostBPP class (that is, classical polynomial time with postselection, also known as the class BPP_path).
Again, the combination of these three results, as in the previous case, results in the collapse of the polynomial hierarchy. This makes the existence of a classical polynomial-time algorithm for the exact boson sampling problem highly unlikely.

The best proposed classical algorithm for exact boson sampling runs in time O(''n''2^''n'' + ''mn''^2) for a system with ''n'' photons and ''m'' output modes. This algorithm leads to an estimate of 50 photons required to demonstrate quantum supremacy with boson sampling. There is also an open-source implementation in R.
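For intuition about why exact classical sampling is expensive, one can write a brute-force sampler that enumerates all \tbinom{M+N-1}{M} output patterns and their permanent-based probabilities. The sketch below (assuming the ryser_permanent and output_probability helpers from the earlier snippet, and far slower than the O(n 2^n + m n^2) algorithm cited above) does exactly that for tiny systems:

```python
from itertools import combinations_with_replacement
import numpy as np

def brute_force_boson_sampler(U, M, num_samples=5, seed=None):
    """Sample output patterns exactly, by exhaustive enumeration (tiny systems only)."""
    rng = np.random.default_rng(seed)
    N = U.shape[0]
    patterns, probs = [], []
    for modes in combinations_with_replacement(range(N), M):
        t = [0] * N
        for m in modes:
            t[m] += 1                                # occupation pattern for this outcome
        patterns.append(tuple(t))
        probs.append(output_probability(U, t))       # helper from the previous sketch
    probs = np.array(probs)
    probs /= probs.sum()                             # absorb floating-point rounding
    picks = rng.choice(len(patterns), size=num_samples, p=probs)
    return [patterns[i] for i in picks]

print(brute_force_boson_sampler(U, M=2, seed=1))     # U: the 4-mode unitary from above
```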


Approximate sampling

The above hardness proofs are not applicable to the realistic implementation of a boson sampling device, due to the imperfections of any experimental setup (including the presence of noise, decoherence, photon losses, etc.). Therefore, for practical needs one requires a hardness proof for the corresponding approximate task. The latter consists of sampling from a probability distribution that is \varepsilon-close to the one given by p(t_1, t_2, \ldots, t_N), in terms of the total variation distance. The understanding of the complexity of this problem then relies on several additional assumptions, as well as on two as-yet unproven conjectures.

Specifically, the proofs of the exact boson sampling problem cannot be directly applied here, since they are based on the #P-hardness of estimating the exponentially small probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome. Thus, if a sampler "''knew''" which p(t_1, t_2, \ldots, t_N) we wanted to estimate, then it could adversarially choose to corrupt it (as long as the task is approximate). That is why the idea is to "''hide''" the above probability p(t_1, t_2, \ldots, t_N) inside an ''N×N'' random unitary matrix. This can be done knowing that any ''M×M'' submatrix of a unitary U, randomly chosen according to the Haar measure, is close in variation distance to a matrix of i.i.d. complex random Gaussian variables, provided that ''M'' ≤ ''N''^{1/6} (Haar-random matrices can be directly implemented in optical circuits by mapping independent probability density functions for their parameters to optical circuit components, i.e., beam splitters and phase shifters). Therefore, if the linear optical circuit implements a Haar-random unitary matrix, the adversarial sampler will not be able to detect which of the exponentially many probabilities p(t_1, t_2, \ldots, t_N) we care about, and thus will not be able to avoid its estimation. In this case p(t_1, t_2, \ldots, t_N) is proportional to the squared absolute value of the permanent of the ''M×M'' matrix X \sim \mathcal{N}(0,1)^{M\times M}_{\mathbb{C}} of i.i.d. Gaussians, smuggled inside U. These arguments bring us to the first conjecture of the hardness proof of the approximate boson sampling problem – the permanent-of-Gaussians conjecture:
* Approximating the permanent of a matrix X \sim \mathcal{N}(0,1)^{M\times M}_{\mathbb{C}} of i.i.d. Gaussians to within multiplicative error is a #P-hard task.
Moreover, the above conjecture can be linked to the estimation of |\text{Perm}\,X|^2, to which the given probability of a specific measurement outcome is proportional. However, to establish this link one has to rely on another conjecture – the permanent anticoncentration conjecture:
* There exists a polynomial ''Q'' such that for any ''M'' and ''δ'' > 0 the probability over ''M×M'' matrices X \sim \mathcal{N}(0,1)^{M\times M}_{\mathbb{C}} of the following inequality to hold is smaller than ''δ'': |\text{Perm}\,X| < \frac{\sqrt{M!}}{Q(M, 1/\delta)}.
By making use of the above two conjectures (for which several pieces of supporting evidence exist), the final proof eventually states that the existence of a classical polynomial-time algorithm for the approximate boson sampling task implies the collapse of the polynomial hierarchy.

It is also worth mentioning another fact important to the proof of this statement, namely the so-called bosonic birthday paradox (in analogy with the well-known birthday paradox). The latter states that if ''M'' identical bosons are scattered among ''N'' ≫ ''M''^2 modes of a linear interferometer with no two bosons in the same input mode, then with high probability two bosons will not be found in the same output mode either. This property has been experimentally observed with two and three photons in integrated interferometers of up to 16 modes. On the one hand, this feature facilitates the implementation of a restricted boson sampling device. Namely, if the probability of having more than one photon at the output of a linear optical circuit is negligible, one no longer requires photon-number-resolving detectors: on-off detectors are sufficient for the realization of the setup.

Although the probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome at the output of the interferometer is related to the permanent of submatrices of a unitary matrix, a boson sampling machine does not allow its estimation. The main reason is that the corresponding detection probability is usually exponentially small. Thus, in order to collect enough statistics to approximate its value, one has to run the quantum experiment for an exponentially long time. Therefore, the estimate obtained from a boson sampler is no more efficient than running the classical polynomial-time algorithm by Gurvits for approximating the permanent of any matrix to within additive error.
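The anticoncentration conjecture can at least be probed numerically. The following self-contained sketch (an illustration, not a proof, using a naive factorial-time permanent that is fine for small ''M'') draws i.i.d. complex Gaussian matrices and checks how often |\text{Perm}\,X|/\sqrt{M!} falls below a small threshold:

```python
import numpy as np
from math import factorial
from itertools import permutations

def naive_permanent(A):
    """Permanent by summing over all permutations (fine for M <= 7 or so)."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(1)
M, trials, threshold = 5, 1000, 0.05
below = 0
for _ in range(trials):
    # i.i.d. complex Gaussians with E|x|^2 = 1 (real/imag parts each N(0, 1/2))
    X = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
    if abs(naive_permanent(X)) / np.sqrt(factorial(M)) < threshold:
        below += 1
# With this normalization E|Perm X|^2 = M!, so the ratio is O(1) on average;
# anticoncentration says it is rarely polynomially small.
print(f"fraction below {threshold}: {below / trials:.3f}")
```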


Variants


Scattershot boson sampling

As already mentioned above, the implementation of a boson sampling machine requires a reliable source of many indistinguishable photons, and this requirement currently remains one of the main difficulties in scaling up the complexity of the device. Namely, despite recent advances in photon generation techniques using atoms, molecules, quantum dots and color centers in diamond, the most widely used method remains the parametric down-conversion (PDC) mechanism. The main advantages of PDC sources are high photon indistinguishability, high collection efficiency and relatively simple experimental setups. However, one of the drawbacks of this approach is its non-deterministic (heralded) nature. Specifically, suppose the probability of generating a single photon by means of a PDC crystal is ''ε''. Then, the probability of generating ''M'' single photons simultaneously is ''ε''^''M'', which decreases exponentially with ''M''. In other words, in order to generate the input state for the boson sampling machine, one would have to wait for an exponentially long time, which would kill the advantage of the quantum setup over a classical machine. This characteristic subsequently restricted the use of PDC sources to proof-of-principle demonstrations of a boson sampling device.

Recently, however, a new scheme has been proposed to make the best use of PDC sources for the needs of boson sampling, greatly enhancing the rate of ''M''-photon events. This approach has been named scattershot boson sampling, and consists of connecting ''N'' (''N'' > ''M'') heralded single-photon sources to different input ports of the linear interferometer. Then, by pumping all ''N'' PDC crystals with simultaneous laser pulses, the probability of generating ''M'' photons will be given as \tbinom{N}{M}\varepsilon^M. Therefore, for ''N'' ≫ ''M'', this results in an exponential improvement in the single-photon generation rate with respect to the usual, fixed-input boson sampling with ''M'' sources (see the numerical sketch below). This setting can also be seen as a problem of sampling ''N'' two-mode squeezed vacuum states generated from ''N'' PDC sources.

Scattershot boson sampling is still intractable for a classical computer: in the conventional setup we fixed the columns that defined our ''M×M'' submatrix and only varied the rows, whereas now we vary the columns too, depending on which ''M'' out of the ''N'' PDC crystals generated single photons. Therefore, the proof can be constructed here similarly to the original one. Furthermore, scattershot boson sampling has also been implemented with six photon-pair sources coupled to integrated photonic circuits of nine and thirteen modes, an important leap towards a convincing experimental demonstration of quantum computational supremacy. The scattershot boson sampling model can be further generalized to the case where both legs of the PDC sources are subject to linear optical transformations (in the original scattershot case, one of the arms is used for heralding, i.e., it goes through the identity channel). Such a twofold scattershot boson sampling model is also computationally hard, as proven by making use of the symmetry of quantum mechanics under time reversal.
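A short calculation makes the rate enhancement concrete (the numbers below are illustrative assumptions, not values from any experiment):

```python
from math import comb

eps, M, N = 0.01, 5, 50                # assumed per-crystal heralding probability and sizes
fixed_rate = eps ** M                  # fixed-input boson sampling with M sources
scattershot = comb(N, M) * eps ** M    # any M of the N heralded sources may fire
print(f"fixed: {fixed_rate:.2e}  scattershot: {scattershot:.2e}  "
      f"gain: {scattershot / fixed_rate:,.0f}x")   # gain = C(50, 5) = 2,118,760
```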


Gaussian boson sampling

Another photonic implementation of boson sampling concerns Gaussian input states, i.e., states whose quasiprobability Wigner distribution function is a Gaussian one. The hardness of the corresponding sampling task can be linked to that of scattershot boson sampling. Namely, the latter can be embedded into the conventional boson sampling setup with Gaussian inputs. For this, one has to generate two-mode entangled Gaussian states and apply a Haar-random unitary U to their "right halves", while doing nothing to the others. Then we can measure the "left halves" to find out which of the input states contained a photon before we applied U. This is precisely equivalent to scattershot boson sampling, except that the measurement of the herald photons has been deferred until the end of the experiment, instead of happening at the beginning. Therefore, approximate Gaussian boson sampling can be argued to be hard under precisely the same complexity assumption as approximate ordinary or scattershot boson sampling.

Gaussian resources can be employed at the measurement stage as well. Namely, one can define a boson sampling model where a linear optical evolution of input single-photon states is concluded by Gaussian measurements (more specifically, by eight-port homodyne detection that projects each output mode onto a squeezed coherent state). Such a model deals with continuous-variable measurement outcomes, which, under certain conditions, is a computationally hard task. Finally, a linear optics platform for implementing a boson sampling experiment where input single photons undergo an active (non-linear) Gaussian transformation is also available. This setting makes use of a set of two-mode squeezed vacuum states as a prior resource, with no need of single-photon sources or an in-line nonlinear amplification medium. This variant uses the hafnian, a generalization of the permanent.
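For small matrices the hafnian can be computed by direct enumeration of perfect matchings, which is enough to see how it generalizes the permanent. The recursive sketch below is illustrative only; dedicated Gaussian boson sampling libraries use far faster algorithms.

```python
import numpy as np

def hafnian(A):
    """Hafnian of a symmetric 2n x 2n matrix: sum over perfect matchings."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    # Match index 0 with each possible partner j, then recurse on the rest.
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(1, n) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total   # automatically 0 for odd dimension (no perfect matching)

B = np.array([[0, 1, 2, 3],
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]], dtype=float)
print(hafnian(B))  # 1*6 + 2*5 + 3*4 = 28
```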


Classically simulable boson sampling tasks

The above results state that the existence of a polynomial-time classical algorithm for the original boson sampling scheme with indistinguishable single photons (in the exact and approximate cases), for scattershot, as well as for the general Gaussian boson sampling problems, is highly unlikely. Nevertheless, there are some non-trivial realizations of the boson sampling problem that allow for its efficient classical simulation. One such example is when the optical circuit is injected with distinguishable single photons. In this case, instead of summing the probability ''amplitudes'' corresponding to photonic many-particle paths, one has to sum the corresponding probabilities (i.e., the squared absolute values of the amplitudes). Consequently, the detection probability p(t_1, t_2, \ldots, t_N) will be proportional to the permanent of submatrices of the (component-wise) squared absolute value of the unitary U. The latter is now a non-negative matrix. Therefore, although the exact computation of the corresponding permanent is a #P-complete problem, its approximation can be performed efficiently on a classical computer, due to the seminal algorithm by Jerrum, Sinclair and Vigoda. In other words, approximate boson sampling with distinguishable photons is efficiently classically simulable.

Another instance of classically simulable boson sampling setups consists of sampling from the probability distribution of coherent states injected into the linear interferometer. The reason is that at the output of a linear optical circuit coherent states remain coherent, and do not create any quantum entanglement among the modes. More precisely, only their amplitudes are transformed, and the transformation can be efficiently calculated on a classical computer (the computation comprises matrix multiplication). This fact can be used to perform corresponding sampling tasks from another set of states: so-called classical states, whose Glauber–Sudarshan ''P'' function is a well-defined probability distribution. These states can be represented as a mixture of coherent states due to the optical equivalence theorem. Therefore, by picking random coherent states distributed according to the corresponding ''P'' function, one can perform an efficient classical simulation of boson sampling from this set of classical states.
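The reduction of coherent-state boson sampling to matrix multiplication fits in a few lines. The sketch below (a minimal illustration under the assumptions just stated) propagates coherent amplitudes through a random interferometer and then draws the independent Poissonian photon counts of each output mode:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
Z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
U, _ = np.linalg.qr(Z)                        # Haar-random unitary (up to column phases)

alpha_in = np.zeros(N, dtype=complex)
alpha_in[:3] = 1.0                            # coherent states |alpha=1> in the first 3 modes
alpha_out = U @ alpha_in                      # the only "hard" step: one O(N^2) multiplication
counts = rng.poisson(np.abs(alpha_out) ** 2)  # independent Poisson photon counts per mode
print(counts)
```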


Experimental implementations

The above requirements for the photonic boson sampling machine allow for its small-scale construction by means of existing technologies. Consequently, shortly after the theoretical model was introduced, four different groups simultaneously reported its realization. Specifically, this included the implementation of boson sampling with:
* two and three photons scattered by a six-mode linear unitary transformation (represented by two orthogonal polarizations in 3×3 spatial modes of a fused-fiber beam splitter), by a collaboration between the University of Queensland and MIT;
* three photons in different modes of a six-mode silica-on-silicon waveguide circuit, by a collaboration between the universities of Oxford, Shanghai, London and Southampton;
* three photons in a femtosecond laser-written five-mode interferometer, by a collaboration between the universities of Vienna and Jena;
* three photons in a femtosecond laser-written five-mode interferometer implementing a Haar-random unitary transformation, by a collaboration between Milan's Institute of Photonics and Nanotechnology, Universidade Federal Fluminense and Sapienza University of Rome.
Later on, more complex boson sampling experiments have been performed, increasing the number of spatial modes of random interferometers up to 13 and 9 modes, and realizing a 6-mode fully reconfigurable integrated circuit. These experiments altogether constitute the proof-of-principle demonstrations of an operational boson sampling device, and a route towards its larger-scale implementations.


Implementation of scattershot boson sampling

A first scattershot boson sampling experiment has recently been implemented using six photon-pair sources coupled to integrated photonic circuits with 13 modes. The six photon-pair sources were obtained via type-II PDC processes in three different nonlinear crystals (exploiting the polarization degree of freedom). This allowed simultaneous sampling from eight different input states. The 13-mode interferometer was realized by a femtosecond laser-writing technique on alumino-borosilicate glass. This experimental implementation represents a leap towards an experimental demonstration of quantum computational supremacy.


Proposals with alternative photonic platform

There are several other proposals for the implementation of photonic boson sampling. One is the scheme for arbitrarily scalable boson sampling using two nested fiber loops. In this case, the architecture employs time-bin encoding, whereby the incident photons form a pulse train entering the loops, while dynamically controlled loop coupling ratios allow the construction of arbitrary linear interferometers. Moreover, the architecture employs only a single point of interference and may thus be easier to stabilize than other implementations.

Another approach relies on the realization of unitary transformations on temporal modes based on dispersion and pulse shaping. Namely, passing consecutively heralded photons through time-independent dispersion and measuring the output times of the photons is equivalent to a boson sampling experiment. With time-dependent dispersion, it is also possible to implement arbitrary single-particle unitaries. This scheme requires a much smaller number of sources and detectors and does not necessitate a large system of beam splitters.


Certification

The output of a universal quantum computer running, for example, Shor's factoring algorithm, can be efficiently verified classically, as is the case for all problems in the non-deterministic polynomial-time (NP) complexity class. It is, however, not clear that a similar structure exists for the boson sampling scheme. Namely, as the latter is related to the problem of estimating matrix permanents (falling into the #P-hard complexity class), it is not understood how to verify correct operation for large versions of the setup. Specifically, the naive verification of the output of a boson sampler by computing the corresponding measurement probabilities represents a problem intractable for a classical computer.

A first relevant question is whether it is possible to distinguish between uniform and boson-sampling distributions by performing a polynomial number of measurements. The initial argument introduced in Ref. stated that as long as one makes use of symmetric measurement settings the above is impossible (roughly speaking, a symmetric measurement scheme does not allow for labeling the output modes of the optical circuit). However, within current technologies the assumption of a symmetric setting is not justified (the tracking of the measurement statistics is fully accessible), and therefore the above argument does not apply. It is then possible to define a rigorous and efficient test to discriminate the boson sampling statistics from an unbiased probability distribution. The corresponding discriminator is correlated to the permanent of the submatrix associated with a given measurement pattern, but can be efficiently calculated. This test has been applied experimentally to distinguish between a boson sampling and a uniform distribution in the 3-photon regime with integrated circuits of 5, 7, 9 and 13 modes.

The test above does not distinguish between more complex distributions, such as quantum and classical, or between fermionic and bosonic statistics. A physically motivated scenario to be addressed is the unwanted introduction of distinguishability between photons, which destroys quantum interference (this regime is readily accessible experimentally, for example by introducing a temporal delay between photons). The opportunity then exists to tune between ideally indistinguishable (quantum) and perfectly distinguishable (classical) data and measure the change in a suitably constructed metric. This scenario can be addressed by a statistical test which performs a one-on-one likelihood comparison of the output probabilities. This test requires the calculation of a small number of permanents, but does not need the calculation of the full expected probability distribution. Experimental implementation of the test has been successfully reported in integrated laser-written circuits for both the standard boson sampling (3 photons in 7-, 9- and 13-mode interferometers) and the scattershot version (3 photons in 9- and 13-mode interferometers with different input states).

Another possibility is based on the bunching property of indistinguishable photons. One can analyze the probability of finding a ''k''-fold coincidence measurement outcome (without any multiply populated input mode), which is significantly higher for distinguishable particles than for bosons, due to the bunching tendency of the latter. Finally, leaving the space of random matrices, one may focus on specific multimode setups with certain features. In particular, the analysis of the effect of bosonic clouding (the tendency for bosons to favor events with all particles in the same half of the output array of a continuous-time many-particle quantum walk) has been proven to discriminate the behavior of distinguishable and indistinguishable particles in this specific platform.

A different approach to confirm that the boson sampling machine behaves as the theory predicts is to make use of fully reconfigurable optical circuits. With large-scale single-photon and multiphoton interference verified with predictable multimode correlations in a fully characterized circuit, a reasonable assumption is that the system maintains correct operation as the circuit is continuously reconfigured to implement a random unitary operation. To this end, one can exploit quantum suppression laws (the probability of specific input-output combinations is suppressed when the linear interferometer is described by a Fourier matrix or other matrices with relevant symmetries). These suppression laws can be classically predicted in efficient ways. This approach also allows one to exclude other physical models, such as mean-field states, which mimic some collective multiparticle properties (including bosonic clouding). The implementation of a Fourier matrix circuit in a fully reconfigurable 6-mode device has been reported, and experimental observations of the suppression law have been shown for 2 photons in 4- and 8-mode Fourier matrices.
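As a rough illustration of the one-on-one likelihood comparison (a simplified sketch of the idea, not the exact published protocol), one can compare, for each observed collision-free outcome, the bosonic probability |\text{Perm}\,U_T|^2 with the distinguishable-photon probability \text{Perm}\,|U_T|^2 and accumulate a counter that should drift upward for genuine quantum data:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Naive permanent (fine for the few small submatrices this test needs)."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def likelihood_counter(U, samples):
    """samples: collision-free outcomes, e.g. (0, 2, 5) = one photon in modes 0, 2, 5."""
    counter = 0
    for T in samples:
        M = len(T)
        U_T = U[np.ix_(list(T), list(range(M)))]        # rows: detected modes; first M columns
        p_quantum = abs(permanent(U_T)) ** 2            # indistinguishable photons
        p_classical = abs(permanent(np.abs(U_T) ** 2))  # distinguishable photons
        counter += 1 if p_quantum > p_classical else -1
    return counter   # tends to grow with genuine boson sampling data
```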


Alternative implementations and applications

Apart from the photonic realization of the boson sampling task, several other setups have been proposed. This includes, e.g., the encoding of bosons into the local transverse phonon modes of trapped ions. The scheme allows deterministic preparation and high-efficiency readout of the corresponding phonon Fock states, and universal manipulation of the phonon modes through a combination of the inherent Coulomb interaction and individual phase shifts. This scheme is scalable and relies on recent advances in ion trapping techniques (several dozens of ions can be successfully trapped, for example, in linear Paul traps by making use of anharmonic axial potentials).

Another platform for implementing the boson sampling setup is a system of interacting spins: recent observations show that boson sampling with ''M'' particles in ''N'' modes is equivalent to the short-time evolution with ''M'' excitations in the ''XY'' model of 2''N'' spins. Several additional assumptions are needed here, including a small boson bunching probability and efficient error postselection. This scalable scheme, however, is rather promising, in light of the considerable development in the construction and manipulation of coupled superconducting qubits, and specifically the D-Wave machine.

The task of boson sampling shares peculiar similarities with the problem of determining molecular vibronic spectra: a feasible modification of the boson sampling scheme results in a setup that can be used for the reconstruction of a molecule's Franck–Condon profiles (for which no efficient classical algorithm is currently known). Specifically, the task now is to input specific squeezed coherent states into a linear interferometer that is determined by the properties of the molecule of interest. This prominent observation therefore spreads interest in the implementation of the boson sampling task well beyond its fundamental basis.

It has also been suggested to use a boson sampling device built from a network of superconducting resonators as an interferometer. This application is assumed to be practical, as small changes in the couplings between the resonators will change the sampling results. Sensing of variations in the parameters capable of altering the couplings is thus achieved when comparing the sampling results to an unaltered reference.

Variants of the boson sampling model have been used to construct ''classical'' computational algorithms aimed, e.g., at the estimation of certain matrix permanents (for instance, permanents of positive-semidefinite matrices, related to the corresponding open problem in computer science) by combining tools from quantum optics and computational complexity. Coarse-grained boson sampling has been proposed as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications. Gaussian boson sampling has been analyzed as a search component for computing the binding propensity between molecules of pharmacological interest as well.


See also

* Linear optical quantum computing
* KLM protocol
* Cross-entropy benchmarking


References


External links


* QUCHIP project
* Quantum Information Lab – Sapienza: video on boson sampling
* Quantum Information Lab – Sapienza: video on scattershot boson sampling
* The Qubit Lab – Boson Sampling