Imaging process
The overall imaging process can be broken down into four steps:

1. A coherent beam scatters from the sample
2. The modulus of the Fourier transform is measured
3. Computational algorithms are used to retrieve the phases
4. The image is recovered by an inverse Fourier transform

In CDI, the objective lens of a traditional microscope is replaced with computational algorithms and software that convert from reciprocal space into real space. The diffraction pattern recorded by the detector is in reciprocal space, while the final image must be in real space to be of any use to the human eye.

To begin, a highly coherent beam of x-rays, electrons, or other wavelike particles is made incident on an object. Although x-rays are the most common choice, the beam may instead consist of electrons, whose much shorter wavelength allows for higher resolution and thus a clearer final image. The incident beam illuminates a spot on the object and is scattered by it, producing a diffraction pattern representative of the Fourier transform of the object. The complex diffraction pattern is collected by the detector, which records the Fourier-transform magnitudes of the features on the object's surface. Because the diffraction information lies in the frequency domain, it is not interpretable by the human eye and is very different from what is observed with normal microscopy techniques. A reconstructed image is then made using an iterative feedback phase-retrieval algorithm, in which a few hundred of these diffraction patterns are collected and overlapped to provide sufficient redundancy in the reconstruction process. Lastly, a computer algorithm transforms the diffraction information into real space and produces an image observable by the human eye; this image is what one would see by means of traditional microscopy techniques.
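The first two steps, and the reason the later steps are hard, can be illustrated with a short numerical sketch (NumPy is assumed; the object is a made-up test pattern):

```python
import numpy as np

# Hypothetical 2D "object": a small square of density in an empty field.
obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0

# Steps 1-2: the far-field diffraction pattern is modelled by the Fourier
# transform of the object, but the detector records only its modulus
# (intensity), so all phase information is lost at measurement time.
field = np.fft.fft2(obj)
measured_modulus = np.abs(field)

# Steps 3-4 (the hard part): with the true phases, a single inverse FFT
# recovers the object exactly; with the phases discarded, it does not.
with_phases = np.fft.ifft2(field).real
without_phases = np.fft.ifft2(measured_modulus).real

print(np.allclose(with_phases, obj))      # True: modulus + phases suffice
print(np.allclose(without_phases, obj))   # False: modulus alone does not
```

This is why the computational phase-retrieval step exists at all: the inverse Fourier transform is trivial once the phases are known.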
The hope is that using CDI would produce a higher-resolution image due to its aberration-free design and computational algorithms.

The phase problem
There are two relevant parameters for diffracted waves: amplitude and phase. In typical microscopy using lenses there is no phase problem, as phase information is retained when waves are refracted. When a diffraction pattern is collected, the data are described in terms of absolute counts of photons or electrons, a measurement which records amplitudes but loses phase information. This results in an ill-posed inverse problem, known as the phase problem, in which the lost phases must be recovered before the image can be reconstructed.

Reconstruction
In a typical reconstruction the first step is to generate random phases and combine them with the amplitude information from the reciprocal space pattern. Then a Fourier transform is applied back and forth to move between real space and reciprocal space, with the measured magnitudes enforced in reciprocal space and constraints, such as a support region, applied in real space at each iteration.

Algorithms
One of the most important aspects of coherent diffraction imaging is the algorithm that recovers the phase from Fourier magnitudes and reconstructs the image. Several algorithms exist for this purpose, though they each follow a similar format of iterating between the real and reciprocal space of the object (Pham 2020). Furthermore, a support region is frequently defined to separate the object from its surrounding zero-density region (Pham 2020). As mentioned earlier, Fienup developed the initial algorithms of Error Reduction (ER) and Hybrid Input-Output (HIO), which both utilize a support constraint in real space and the Fourier magnitudes as a constraint in reciprocal space (Fienup 1978). The ER algorithm sets both the zero-density region and the negative densities inside the support to zero at each iteration (Fienup 1978). The HIO algorithm relaxes the conditions of ER by gradually reducing the negative densities of the support toward zero with each iteration (Fienup 1978). While HIO allowed for the reconstruction of an image from a noise-free diffraction pattern, it struggled to recover the phase in actual experiments where the Fourier magnitudes were corrupted by noise. This led to further development of algorithms that could better handle noise in image reconstruction. The oversampling smoothness (OSS) algorithm was later created to impose a smoothness constraint on the imaged object: OSS applies Gaussian filters to the zero-density region, which was found to increase robustness to noise and reduce oscillations in the reconstruction (Rodriguez 2013).
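The ER and HIO iterations described above can be sketched as follows. This is a minimal, illustrative implementation: the feedback parameter `beta`, the box support, and the toy object are assumptions for demonstration, not values from the cited papers.

```python
import numpy as np

def phase_retrieval(magnitudes, support, n_iter=200, beta=0.9, method="HIO"):
    """Minimal ER/HIO sketch: alternate between enforcing the measured
    Fourier magnitudes (reciprocal space) and the support (real space)."""
    rng = np.random.default_rng(0)
    # Start from random phases combined with the measured magnitudes.
    g = np.fft.ifft2(magnitudes * np.exp(2j * np.pi * rng.random(magnitudes.shape))).real
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Reciprocal-space constraint: keep the phases, replace the magnitudes.
        G = magnitudes * np.exp(1j * np.angle(G))
        g_new = np.fft.ifft2(G).real
        # Real-space violations: pixels outside the support, or negative
        # densities inside it.
        bad = ~support | (g_new < 0)
        if method == "ER":
            g_new[bad] = 0.0                          # ER: zero out violations
        else:
            g_new[bad] = g[bad] - beta * g_new[bad]   # HIO: feedback update
        g = g_new
    return g

# Toy demo: attempt to recover a small object from its Fourier modulus.
true_obj = np.zeros((32, 32))
true_obj[12:20, 14:18] = 1.0
support = np.zeros((32, 32), dtype=bool)
support[10:22, 10:22] = True
recovered = phase_retrieval(np.abs(np.fft.fft2(true_obj)), support)
```

Note that even a successful reconstruction is only defined up to the usual phase-retrieval ambiguities (e.g. the centrosymmetric twin image), so results are normally compared against the data rather than pixel-by-pixel.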
Generalized proximal smoothness (GPS)

Building upon the success of OSS, a new algorithm called generalized proximal smoothness (GPS) has been developed. GPS addresses noise in real and reciprocal space by incorporating principles of Moreau-Yosida regularization, a method of turning a convex function into a smooth convex function (Moreau 1965; Yosida 1964). The magnitude constraint is relaxed into a least-squares fidelity term as a means of lessening the noise in reciprocal space (Pham 2020). Overall, GPS was found to outperform OSS and HIO in consistency, convergence speed, and robustness to noise. Using the R-factor (relative error) as a measure of effectiveness, GPS was found to reach a lower R-factor in both real and reciprocal space (Pham 2020). Moreover, GPS took fewer iterations to converge toward a lower R-factor than OSS and HIO in both spaces (Pham 2020).
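Moreau-Yosida regularization itself is easy to illustrate: the Moreau envelope of a convex function is a smoothed version of that function. The sketch below is illustrative only and is not the GPS algorithm; the choice of |x| as the test function and the smoothing parameter are assumptions for demonstration. Smoothing |x| yields the differentiable Huber function.

```python
import numpy as np

def moreau_envelope(f, x, mu):
    """Moreau-Yosida regularization of a scalar convex function f:
    M(x) = min_y [ f(y) + (x - y)^2 / (2*mu) ],
    approximated here by a brute-force search over a dense grid of y."""
    y = np.linspace(-5.0, 5.0, 20001)
    return float(np.min(f(y) + (x - y) ** 2 / (2.0 * mu)))

# Smoothing the non-differentiable |x| with mu = 0.5 gives the Huber
# function: x^2/(2*mu) near zero, |x| - mu/2 away from zero.
print(moreau_envelope(np.abs, 0.0, 0.5))  # ≈ 0.0
print(moreau_envelope(np.abs, 0.1, 0.5))  # ≈ 0.01  (quadratic regime)
print(moreau_envelope(np.abs, 2.0, 0.5))  # ≈ 1.75  (|x| - mu/2 regime)
```

The envelope agrees with |x| far from the kink but is quadratic, and hence differentiable, at the origin, which is the sense in which the regularization "smooths" a convex function.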
Coherence

Two wave sources are coherent when their frequency and waveform are identical; this property allows for stationary interference, in which the wave pattern is temporally or spatially constant and the waves add to or subtract from one another. Coherence is important in the context of CDI because it allows the continuous emission of waves to occur. A constant phase difference and the coherence of the wave are necessary in order to obtain any type of interference pattern. Clearly a highly coherent beam is required for CDI to work, since the technique depends on interference of the diffracted waves. Coherent waves must be generated at the source (synchrotron, field emitter, etc.) and must maintain coherence until diffraction. It has been shown that the coherence width of the incident beam needs to be approximately twice the lateral width of the object to be imaged. However, determining the size of the coherent patch, and hence whether the object does or does not meet this criterion, is subject to debate. As the coherence width is decreased, the size of the Bragg peaks in reciprocal space grows and they begin to overlap, leading to decreased image resolution.
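The factor-of-two criterion above is the oversampling condition of CDI in another guise. A short sketch makes the bookkeeping concrete; the relation L = λz/p for the real-space extent sampled by a detector of pixel pitch p is standard Fraunhofer sampling, and all numbers below are made up for illustration:

```python
# Oversampling check for a CDI geometry (illustrative numbers, 1D estimate).
# The real-space field of view sampled by the detector is L = lambda * z / p;
# the oversampling ratio L / D must be at least ~2, which parallels the
# requirement that the coherence width be about twice the object width.

wavelength = 2.5e-10   # ~2.5 angstrom x-rays, assumed
z = 1.0                # sample-to-detector distance in metres, assumed
pixel = 75e-6          # detector pixel pitch in metres, assumed
D = 1e-6               # lateral object width in metres, assumed

L = wavelength * z / pixel          # real-space extent sampled by the detector
oversampling = L / D
coherence_width_needed = 2 * D

print(f"field of view  : {L:.2e} m")           # 3.33e-06 m
print(f"oversampling   : {oversampling:.2f}")  # 3.33 (>= 2, criterion met)
print(f"coherence width: {coherence_width_needed:.2e} m")
```

If the oversampling ratio falls below 2, the speckles of the diffraction pattern are undersampled by the detector, just as an insufficient coherence width blurs them together.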
Energy sources

X-ray
Coherent x-ray diffraction imaging (CXDI or CXD) uses x-rays (typically 0.5–4 keV) to form a diffraction pattern, which may be more attractive for 3D applications than electron diffraction since x-rays typically have better penetration. For imaging surfaces the penetration of x-rays may be undesirable, in which case a glancing-angle geometry such as GISAXS may be used. A typical x-ray CCD is used to record the diffraction pattern. If the sample is rotated about an axis perpendicular to the beam, a three-dimensional image may be reconstructed. Due to radiation damage, resolution is limited (for continuous-illumination set-ups) to about 10 nm for frozen-hydrated biological samples, but resolutions as high as 1 to 2 nm should be possible for inorganic materials less sensitive to damage (using modern synchrotron sources). It has been proposed that radiation damage may be avoided by using ultra-short pulses of x-rays, where the time scale of the destruction mechanism is longer than the pulse duration. This may enable higher-energy and therefore higher-resolution CXDI of organic materials such as proteins. However, without the loss of information "the linear number of detector pixels fixes the energy spread needed in the beam", which becomes increasingly difficult to control at higher energies. In a 2006 report, resolution was 40 nm using the Advanced Photon Source (APS), but the authors suggested this could be improved with higher-power and more coherent x-ray sources such as the x-ray free-electron laser.

Electrons
Coherent electron diffraction imaging works in principle the same as CXDI, except that electrons are the diffracted waves and an imaging plate is used to detect them rather than a CCD. In one published report, a double-walled carbon nanotube (DWCNT) was imaged using nano-area electron diffraction (NAED) with atomic resolution. In principle, electron diffraction imaging should yield a higher-resolution image, because the wavelength of electrons can be much smaller than that of photons without going to very high energies. Electrons also have much weaker penetration, so they are more surface-sensitive than x-rays. However, electron beams are typically more damaging than x-rays, so this technique may be limited to inorganic materials. In Zuo's approach, a low-resolution electron image is used to locate a nanotube. A field-emission electron gun generates a beam with high coherence and high intensity. The beam size is limited to the nano-area with the condenser aperture in order to ensure scattering from only a section of the nanotube of interest. The diffraction pattern is recorded in the far field using electron imaging plates to a resolution of 0.0025 1/Å. Using a typical HIO reconstruction method, an image is produced with ångström resolution in which the DWCNT chirality (lattice structure) can be directly observed. Zuo found that it is possible to start with non-random phases, based on a low-resolution image from an electron microscope, to improve the final image quality.

In situ CDI
Incomplete measurements have been a problem observed across all algorithms in CDI. Since the detector is too sensitive to absorb a particle beam directly, a beamstop or hole must be placed at its center to prevent direct contact (Pham 2020). Furthermore, detectors are often constructed with multiple panels with gaps between them where data again cannot be collected (Pham 2020). Ultimately, these qualities of the detector result in missing data within the diffraction patterns. In situ CDI is a new method of this imaging technology that could increase resistance to incomplete measurements. In situ CDI images a static region alongside a dynamic region that changes over time as a result of external stimuli (Hung Lo 2018). A series of diffraction patterns is collected over time with interference from the static and dynamic regions (Hung Lo 2018). Because of this interference, the static region acts as a time-invariant constraint that phases the patterns together in fewer iterations (Hung Lo 2018). Enforcing the static region as a constraint makes in situ CDI more robust to incomplete data and noise interference in the diffraction patterns (Hung Lo 2018). Overall, in situ CDI provides clearer data collection in fewer iterations than other CDI techniques.
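Within an iterative reconstruction, the time-invariant constraint amounts to one extra projection step per iteration. The sketch below is illustrative only; the function name, the mask layout, and the idea of resetting known pixels inside a surrounding phase-retrieval loop are assumptions in the spirit of the description above, not the published in situ CDI algorithm:

```python
import numpy as np

def apply_static_constraint(estimate, static_mask, static_image):
    """Time-invariant constraint in the spirit of in situ CDI: wherever the
    sample is known not to change, overwrite the current real-space estimate
    with the known static values, leaving the dynamic region untouched."""
    out = estimate.copy()
    out[static_mask] = static_image[static_mask]
    return out

# Toy frame: a known static region plus an unknown dynamic region.
static_image = np.zeros((16, 16))
static_image[:, :8] = 1.0                  # left half is static and known
static_mask = np.zeros((16, 16), dtype=bool)
static_mask[:, :8] = True

estimate = np.random.default_rng(1).random((16, 16))   # current guess
constrained = apply_static_constraint(estimate, static_mask, static_image)
```

Because the static half is pinned to known values at every iteration, only the dynamic half remains to be solved, which is consistent with the reported robustness to missing data and faster convergence.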
Related techniques

Various techniques for CDI have been developed over the years and utilized to study samples in physics, chemistry, materials science, nanoscience, geology, and biology (6); these include, but are not limited to, plane-wave CDI, Bragg CDI, ptychography, reflection CDI, Fresnel CDI, and sparsity CDI.

See also
* Diffraction

References
External links