Author(s): Gaston Dupouy. Publication date: Available online 20 June 2017. Source: Advances in Imaging and Electron Physics. Abstract: In 1968, when this article first appeared, electron microscopy in the megavolt range was still very new. The advantages and problems of operation at such high voltages are set out very readably, and many examples of micrographs, mostly obtained with the Toulouse high-voltage electron microscope, are included. Beam–specimen interactions at high voltage are explored. Detailed descriptions of the construction of such microscopes are provided.

Author(s): Ernst Ruska. Publication date: Available online 16 May 2017. Source: Advances in Imaging and Electron Physics. Abstract: The builder of the first electron microscope with magnetic lenses assesses the difficulties of improving the resolution of the electron microscope, as they appeared in 1966. Each of the obstacles is examined in turn and its importance gauged. The condenser objective is described and clearly illustrated, as is a cooling system designed to limit radiation damage.

Author(s): John C.H. Spence. Publication date: Available online 24 April 2017. Source: Advances in Imaging and Electron Physics. Abstract: The recent invention of the X-ray free-electron laser (XFEL), with its high spatial coherence and ability to outrun radiation damage, has provided unprecedented new opportunities for structural biology. Here, we review the challenges and advances which have occurred over the past 7 years since the first beamtimes, provide their historical context, and describe the underlying principles of the XFEL and of the new techniques used. The main focus is on the achievements and prospects for imaging protein dynamics at near-atomic spatial resolution under physiological and controlled chemical conditions, in the correct thermal bath, together with a summary of the many approaches to this aim. Radiation damage, comparisons of XFEL and synchrotron work, single-particle diffraction, fast solution scattering, pump-probe studies on photosensitive proteins, mixing jets, caged molecules, pH jump, and other reaction initiation methods, and the thermodynamics of molecular machines are all discussed, in addition to data analysis methods for all the instrumental modes. The ability of the XFEL to separate chemical reaction effects in dynamical imaging from radiation-induced effects (by minimizing these), while imaging at the physiological temperatures required for molecular machines, is highlighted.

Author(s): Christopher J. Edgcombe. Publication date: Available online 20 April 2017. Source: Advances in Imaging and Electron Physics. Abstract: A brief survey is given of prespecimen plates used to generate vortex beams, followed by some details of postspecimen plates as now used to provide image intensity modulation from phase objects. Spectral transfer theory applied to some simple model systems shows that the maximum size of object that can be imaged accurately with the Zernike plate depends not only on the object diameter but also on the system parameters. Further analysis suggests that when a Hilbert plate is located exactly on the cylindrical axis, the usual choice of an added phase of π minimizes the linear response of intensity to the phase of a weak-phase object. A linear response may be available if the added phase is reduced to π/2.
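The contrast mechanism at issue can be illustrated with a one-dimensional toy calculation (a sketch under simplifying assumptions, not the chapter's analysis): for a weak-phase object, a Zernike plate that shifts only the unscattered (zero-frequency) beam by π/2 converts the object phase into a linear intensity modulation. The grid size and phase amplitude below are arbitrary illustrative choices.

```python
import numpy as np

# Weak-phase object exp(i*phi) with a small, zero-mean phase profile.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
phi = 0.05 * np.sin(3 * x)
psi = np.exp(1j * phi)

# Zernike plate: phase-shift only the unscattered beam (the DC Fourier
# component) by pi/2, leaving all scattered components untouched.
F = np.fft.fft(psi)
F[0] *= np.exp(1j * np.pi / 2)
intensity = np.abs(np.fft.ifft(F)) ** 2

# To first order I(x) ~ const + 2*phi(x): the intensity now responds
# linearly to the object phase.
assert np.corrcoef(intensity, phi)[0, 1] > 0.99
```

Without the plate, the intensity of a pure weak-phase object is constant to first order, which is why some added phase is needed at all.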

Author(s): Inder Jeet Taneja. Publication date: Available online 20 March 2017. Source: Advances in Imaging and Electron Physics. Abstract: The information-theory literature contains many divergence measures, among them the Jensen difference divergence measure, the J-divergence, and the arithmetic-geometric mean divergence. These are symmetric in the pair of probability distributions, involve logarithmic expressions, and are related by an interesting chain of inequalities. There are also measures without logarithmic expressions, such as Hellinger's distance and triangular discrimination. All of these measures can be unified in three parametric generalizations with many further particular cases. On the other side, the arithmetic, geometric, harmonic, and square-root means are well known in mathematics, as is, in the parametric setting, the generalized Gini mean. New measures can be created from the differences of means arising from the inequalities among these means, and likewise from differences of divergence measures. The aim of this work is to relate the differences arising from the inequalities among the divergence measures and the means, and to find relations among them. Refinement inequalities are also studied.
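For concreteness, the non-logarithmic measures mentioned above are easy to compute directly. The definitions below follow one common convention (constant factors vary across the literature); the final inequality 2h ≤ Δ ≤ 4h is a simple instance of the kind of relation studied, following from the pointwise bounds (p+q) ≤ (√p+√q)² ≤ 2(p+q).

```python
import math

def j_divergence(p, q):
    """Symmetric J-divergence: sum (p_i - q_i) * ln(p_i / q_i)."""
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    """Hellinger discrimination: (1/2) * sum (sqrt(p_i) - sqrt(q_i))^2."""
    return 0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))

def triangular(p, q):
    """Triangular discrimination: sum (p_i - q_i)^2 / (p_i + q_i)."""
    return sum((pi - qi) ** 2 / (pi + qi) for pi, qi in zip(p, q))

# Two strictly positive probability distributions.
p = [0.1, 0.2, 0.3, 0.4]
q = [0.4, 0.3, 0.2, 0.1]

h, d = hellinger(p, q), triangular(p, q)
assert 2 * h <= d <= 4 * h          # an inequality between the two measures
assert j_divergence(p, p) == 0.0    # every divergence vanishes for p = q
```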

Author(s): Ivan Lazić, Eric G.T. Bosch. Publication date: Available online 11 March 2017. Source: Advances in Imaging and Electron Physics. Abstract: Scanning transmission electron microscopy (STEM) imaging, in use for many decades, is analyzed mathematically for thin nonmagnetic samples. The result is a closed-form description of a general STEM image, showing that STEM imaging is, in general, nonlinear (contrast transfer is sample dependent), except when an ideal first-moment detector is used. The closed-form description is then used to optimize STEM imaging. We distinguish between STEM techniques using symmetric scalar detectors and antisymmetric vector detectors and show that in both cases practical experimental techniques can be defined that are approximately linear. The antisymmetric vector detector case yields the newly introduced integrated differential phase contrast (iDPC-STEM) technique. For this technique we present experimental results showing that it can image light and heavy elements together while providing full low-frequency transfer, and we demonstrate that it can be used under low-dose conditions.
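The integration step that turns a two-component (vector) DPC signal into a single scalar image can be sketched generically in Fourier space. This is an illustration under the assumption that the measured vector field is (approximately) the gradient of a phase, not the authors' implementation; grid, units, and names are arbitrary.

```python
import numpy as np

def integrate_dpc(dpc_x, dpc_y):
    """Integrate a (curl-free) 2D vector field in Fourier space:
    if (dpc_x, dpc_y) = grad(phi), this recovers phi up to a constant."""
    ny, nx = dpc_x.shape
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                      # placeholder; the DC term is zeroed below
    integrand = (kx * np.fft.fft2(dpc_x)
                 + ky * np.fft.fft2(dpc_y)) / (2j * np.pi * k2)
    integrand[0, 0] = 0.0               # the constant offset is not recoverable
    return np.real(np.fft.ifft2(integrand))

# Build a known "phase", take its gradient spectrally (standing in for the
# measured first-moment signal), and check that integration recovers it.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
phi = np.cos(X) * np.sin(2 * Y)
kx = np.fft.fftfreq(n)[None, :]
ky = np.fft.fftfreq(n)[:, None]
gx = np.real(np.fft.ifft2(2j * np.pi * kx * np.fft.fft2(phi)))
gy = np.real(np.fft.ifft2(2j * np.pi * ky * np.fft.fft2(phi)))
rec = integrate_dpc(gx, gy)
assert np.allclose(rec - rec.mean(), phi - phi.mean(), atol=1e-10)
```

The division by k² is what restores the low spatial frequencies, which is consistent with the full low-frequency transfer the abstract reports for iDPC-STEM.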

Author(s): Mai Xu, Jie Ren, Zulin Wang. Publication date: Available online 6 March 2017. Source: Advances in Imaging and Electron Physics. Abstract: This chapter addresses the problem of identifying and interpreting the components (e.g., balconies and windows) of the 3D model of a building. First, a voting scheme is presented for component identification in the 3D model. The underlying intuition is that interferences, such as occlusions, rarely occur at the same place when a person looks at a scene from different directions. In this spirit, the voting scheme combines information from multiple-view images to identify and segment the components of a building. For the component identification task, our experiments use multiple-view images with short baselines (from 3 to 11 views per building). An a priori 3D building model with a set of perpendicular, rectangular planes is set up for the identification task. The experimental results show the effectiveness of our scheme in identifying the components of the 3D models of several buildings. With the components identified, we proceed to the interpretation stage using the proposed tower of knowledge (ToK) approach, which automatically labels the 3D components of buildings. Specifically, ToK is designed to discover and encode the logical rules (such as functionalities) for labeling components of the 3D model of a building. We then show how to make labeling decisions using ToK and utility theory. To handle the case of insufficient training data for such decisions, we introduce a recursive version of ToK. Finally, a prototype for labeling components of building scenes is employed to validate the proposed ToK approach.
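The occlusion-robustness intuition behind the voting scheme can be reduced to a toy example (purely illustrative; the chapter's scheme operates on segmented image regions, not on ready-made label strings, and the labels and view counts below are hypothetical).

```python
from collections import Counter

def vote_label(votes):
    """Majority vote over per-view labels for one building component.
    An occlusion corrupts the vote in a single view, but is outvoted
    as long as most views see the component clearly."""
    label, _ = Counter(votes).most_common(1)[0]
    return label

# Hypothetical labels for one facade component seen from 5 views;
# in view 4 the component is occluded by vegetation and mislabeled.
views = ["window", "window", "window", "vegetation", "window"]
assert vote_label(views) == "window"
```

Because occlusions rarely fall on the same component in several views at once, the majority label is the correct one with high probability as the number of views grows.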

Author(s): Igor A. Kopaev, Dmitry Grinfeld, Mikhail A. Monastyrskiy, Roman S. Ablizen, Sergei S. Alimpiev, Andrei A. Trubitsyn. Publication date: Available online 28 February 2017. Source: Advances in Imaging and Electron Physics. Abstract: A variational approach is proposed for the simulation of equilibrium ion distributions in radiofrequency low-vacuum ion traps, with allowance for the Coulomb interaction and collisions of ions with the buffer gas molecules. A unimodal thermodynamic functional (potential) is introduced, the Euler equation for which is equivalent to the Poisson equation for the Coulomb field and the Boltzmann distribution for the ion density. The original problem is thus reduced to the minimization of this thermodynamic potential in a functional space. Using potential theory and Fourier analysis, the infinite-dimensional minimization problem is further reduced to a corresponding finite-dimensional quadratic programming problem, which is solved numerically by the conjugate gradient method. Special emphasis is given to the physical grounds underlying the proposed numerical method. Examples of 2D and 3D calculations are presented and discussed.
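The final numerical step, minimizing a finite-dimensional quadratic form by conjugate gradients, can be sketched generically. The toy symmetric positive-definite system below stands in for the (much larger) discretized thermodynamic potential; it is the textbook algorithm, not the authors' solver.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Minimize the quadratic 0.5 * x^T A x - b^T x for symmetric
    positive-definite A, i.e. solve A x = b, by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual = negative gradient
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # A-conjugate update of the direction
        rs = rs_new
    return x

# Toy 2x2 SPD system as a stand-in for the discretized potential.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```

For an n-dimensional SPD system, exact arithmetic would terminate in at most n iterations, which is why the method scales well to large discretizations.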

Author(s): Luiz H.G. Tizei, Mathieu Kociak. Publication date: Available online 28 February 2017. Source: Advances in Imaging and Electron Physics. Abstract: Quantum optics is a very active field, with applications, such as quantum cryptography, that were once thought impossible; quantum engineering is nevertheless becoming a reality. The basic building blocks of this emerging field, such as single-photon emitters, are typically a few nanometers in size, if not smaller, so subwavelength techniques are necessary to study them. In this chapter, we review the very recent development of cathodoluminescence in scanning transmission electron microscopy applied to quantum nanooptics. The chapter is intended mainly for electron microscopists with no specific knowledge of either quantum optics or cathodoluminescence. We therefore review the basic quantities of interest in nanooptics, such as the time-correlation function, and how they can be measured in practice. We then describe the basics of the electron/matter/photon interactions relevant to the field of nanooptics and finish by describing recent experiments in the field.
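The time-correlation measurement mentioned above is, in practice, a histogram of arrival-time differences between two detectors (the Hanbury Brown-Twiss scheme). The sketch below is a simplified illustration with synthetic uncorrelated arrival times; detector names, rates, and bin settings are arbitrary.

```python
import numpy as np

def delay_histogram(t1, t2, bin_width, max_delay):
    """Histogram of arrival-time differences t2_j - t1_i between two
    detectors. Normalized by its mean, the bin near zero delay
    estimates g2(0); g2(0) < 0.5 is the usual single-photon criterion."""
    edges = np.arange(-max_delay, max_delay + bin_width, bin_width)
    delays = (t2[None, :] - t1[:, None]).ravel()
    counts, _ = np.histogram(delays, bins=edges)
    centers = edges[:-1] + bin_width / 2
    return centers, counts

# Uncorrelated (Poisson-like) light on both detectors: the normalized
# histogram is flat, i.e. g2(tau) ~ 1 at every delay. A true
# single-photon emitter would instead show a dip at tau = 0.
rng = np.random.default_rng(0)
T = 2000.0
t1 = np.sort(rng.uniform(0, T, 2000))
t2 = np.sort(rng.uniform(0, T, 2000))
centers, counts = delay_histogram(t1, t2, bin_width=1.0, max_delay=5.0)
g2 = counts / counts.mean()
assert g2.min() > 0.8 and g2.max() < 1.2
```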

Author(s): Michael Haschke, Stephan Boehm. Publication date: Available online 20 February 2017. Source: Advances in Imaging and Electron Physics. Abstract: Micro-X-ray fluorescence (μ-XRF) is an established analytical method that has been used for approximately 15 years for sensitive elemental analysis. Its development was made possible by the availability of X-ray optics, in particular polycapillary optics, which can concentrate divergent tube radiation to spot sizes down to the 10 μm range. After the introduction of dedicated μ-XRF instruments, this mode of excitation became interesting for scanning electron microscopes (SEMs), which are typically equipped with an energy-dispersive detector; adding X-ray excitation would therefore expand their analytical performance. In particular, the sensitivity for trace elements could be increased, and the characterization of layer systems would become possible. Additionally, sample preparation would be easier, and the stress on the sample from irradiation with high-energy radiation would be reduced. A special benefit is the use of both electron- and X-ray-excited spectra for a combined quantification, making a more complete material characterization possible; the combination of light-element analysis and trace detection improves quantification results.