Subjects -> INSTRUMENTS (Total: 63 journals)
Showing 1 - 61 of 63 Journals sorted alphabetically
Applied Mechanics Reviews     Full-text available via subscription   (Followers: 27)
Computational Visual Media     Open Access   (Followers: 5)
Devices and Methods of Measurements     Open Access  
Documenta & Instrumenta - Documenta et Instrumenta     Open Access  
EPJ Techniques and Instrumentation     Open Access  
European Journal of Remote Sensing     Open Access   (Followers: 18)
Experimental Astronomy     Hybrid Journal   (Followers: 38)
Flow Measurement and Instrumentation     Hybrid Journal   (Followers: 15)
Geoscientific Instrumentation, Methods and Data Systems     Open Access   (Followers: 2)
Geoscientific Instrumentation, Methods and Data Systems Discussions     Open Access   (Followers: 1)
IEEE Journal on Miniaturization for Air and Space Systems     Hybrid Journal   (Followers: 2)
IEEE Sensors Journal     Hybrid Journal   (Followers: 107)
IEEE Sensors Letters     Hybrid Journal   (Followers: 4)
IJEIS (Indonesian Journal of Electronics and Instrumentation Systems)     Open Access   (Followers: 3)
Imaging & Microscopy     Hybrid Journal   (Followers: 7)
InfoTekJar : Jurnal Nasional Informatika dan Teknologi Jaringan     Open Access  
Instrumentation Science & Technology     Hybrid Journal   (Followers: 7)
Instruments and Experimental Techniques     Hybrid Journal   (Followers: 1)
International Journal of Applied Mechanics     Hybrid Journal   (Followers: 8)
International Journal of Instrumentation Science     Open Access   (Followers: 41)
International Journal of Measurement Technologies and Instrumentation Engineering     Full-text available via subscription   (Followers: 1)
International Journal of Metrology and Quality Engineering     Full-text available via subscription   (Followers: 6)
International Journal of Remote Sensing     Hybrid Journal   (Followers: 144)
International Journal of Remote Sensing Applications     Open Access   (Followers: 49)
International Journal of Sensor Networks     Hybrid Journal   (Followers: 2)
International Journal of Testing     Hybrid Journal   (Followers: 1)
Invention Disclosure     Open Access   (Followers: 1)
Journal of Astronomical Instrumentation     Open Access   (Followers: 3)
Journal of Instrumentation     Hybrid Journal   (Followers: 31)
Journal of Instrumentation Technology & Innovations     Full-text available via subscription   (Followers: 2)
Journal of Medical Devices     Full-text available via subscription   (Followers: 4)
Journal of Medical Signals and Sensors     Open Access   (Followers: 1)
Journal of Optical Technology     Full-text available via subscription   (Followers: 4)
Journal of Research of NIST     Open Access   (Followers: 1)
Journal of Sensors and Sensor Systems     Open Access   (Followers: 12)
Journal of Vacuum Science & Technology B     Hybrid Journal   (Followers: 1)
Jurnal Informatika Upgris     Open Access  
Measurement : Sensors     Open Access   (Followers: 5)
Measurement and Control     Open Access   (Followers: 36)
Measurement Instruments for the Social Sciences     Open Access  
Measurement Techniques     Hybrid Journal   (Followers: 3)
Medical Devices & Sensors     Hybrid Journal   (Followers: 1)
Metrology and Instruments / Метрологія та прилади     Open Access  
Metrology and Measurement Systems     Open Access   (Followers: 8)
Microscopy     Hybrid Journal   (Followers: 7)
Modern Instrumentation     Open Access   (Followers: 57)
Optoelectronics, Instrumentation and Data Processing     Hybrid Journal   (Followers: 4)
PFG : Journal of Photogrammetry, Remote Sensing and Geoinformation Science     Hybrid Journal   (Followers: 4)
Photogrammetric Engineering & Remote Sensing     Full-text available via subscription   (Followers: 32)
Remote Sensing     Open Access   (Followers: 57)
Remote Sensing Applications : Society and Environment     Full-text available via subscription   (Followers: 9)
Remote Sensing of Environment     Hybrid Journal   (Followers: 94)
Remote Sensing Science     Open Access   (Followers: 30)
Review of Scientific Instruments     Hybrid Journal   (Followers: 20)
Science of Remote Sensing     Open Access   (Followers: 7)
Sensors International     Open Access   (Followers: 3)
Solid State Nuclear Magnetic Resonance     Hybrid Journal   (Followers: 3)
Standards     Open Access  
Transactions of the Institute of Measurement and Control     Hybrid Journal   (Followers: 12)
Videoscopy     Full-text available via subscription   (Followers: 5)
Труды СПИИРАН (SPIIRAS Proceedings)     Open Access  
Computational Visual Media
Number of Followers: 5  

  This is an Open Access journal
ISSN (Print) 2096-0433 - ISSN (Online) 2096-0662
Published by SpringerOpen
  • Message from the Best Paper Award Committee

    • PubDate: 2022-09-01
  • AOGAN: A generative adversarial network for screen space ambient occlusion

    • Abstract: Ambient occlusion (AO) is a widely-used real-time rendering technique which estimates light intensity on visible scene surfaces. Recently, a number of learning-based AO approaches have been proposed, which bring a new angle to solving screen space shading via a unified learning framework with competitive quality and speed. However, most such methods have high error for complex scenes or tend to ignore details. We propose an end-to-end generative adversarial network for the production of realistic AO, and explore the importance of perceptual loss in the generative model to AO accuracy. An attention mechanism is also described to improve the accuracy of details, whose effectiveness is demonstrated on a wide variety of scenes.
      PubDate: 2022-09-01
  • Progressive edge-sensing dynamic scene deblurring

    • Abstract: Deblurring images of dynamic scenes is a challenging task because blurring occurs due to a combination of many factors. In recent years, the use of multi-scale pyramid methods to recover high-resolution sharp images has been extensively studied. We address the lack of detail recovery in cascade structures with a network that progressively integrates data streams. Our new multi-scale structure and edge feature perception design deals with changes in blurring at different spatial scales and enhances the sensitivity of the network to blurred edges. The coarse-to-fine architecture restores the image structure, first performing global adjustments, and then performing local refinement. In this way, not only is global correlation considered, but also residual information is used to significantly improve image restoration and enhance texture details. Experimental results show quantitative and qualitative improvements over existing methods.
      PubDate: 2022-09-01
  • Scene text removal via cascaded text stroke detection and erasing

    • Abstract: Recent learning-based approaches show promising performance improvement for the scene text removal task but usually leave several remnants of text and provide visually unpleasant results. In this work, a novel end-to-end framework is proposed based on accurate text stroke detection. Specifically, the text removal problem is decoupled into text stroke detection and stroke removal; we design separate networks to solve these two subproblems, the latter being a generative network. These two networks are combined as a processing unit, which is cascaded to obtain our final model for text removal. Experimental results demonstrate that the proposed method substantially outperforms the state-of-the-art for locating and erasing scene text. A new large-scale real-world dataset with 12,120 images has been constructed and is being made available to facilitate research, as current publicly available datasets are mainly synthetic so cannot properly measure the performance of different methods.
      PubDate: 2022-06-01
  • Joint 3D facial shape reconstruction and texture completion from a single image

    • Abstract: Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform improperly in self-occluded regions and can lead to inaccurate correspondences between a 2D input image and a 3D face template, hindering use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes texture with correspondences from a single input face image. In SRTC-Net, we leverage the geometric cues from completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network to identify pixel-wise correspondence between the input 2D image and a 3D template model, and transfers the input 2D image to a U-V texture map. Then we complete the invisible and occluded areas in the U-V texture map using an inpainting network. To get the 3D facial geometries, we predict coarse shape (U-V position maps) from the segmented face from the correspondence network using a shape network, and then refine the 3D coarse shape by regressing the U-V displacement map from the completed U-V texture map in a pixel-to-pixel way. We examine our methods on 3D reconstruction tasks as well as face frontalization and pose invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and in-the-wild datasets (CFP). The qualitative and quantitative results demonstrate the effectiveness of our methods on inferring 3D facial geometry and complete texture; they outperform or are comparable to the state-of-the-art.
      PubDate: 2022-06-01
  • Towards natural object-based image recoloring

    • Abstract: Existing color editing algorithms enable users to edit the colors in an image according to their own aesthetics. Unlike artists who have an accurate grasp of color, ordinary users are inexperienced in color selection and matching, and allowing non-professional users to edit colors arbitrarily may lead to unrealistic editing results. To address this issue, we introduce a palette-based approach for realistic object-level image recoloring. Our data-driven approach consists of an offline learning part that learns the color distributions for different objects in the real world, and an online recoloring part that first recognizes the object category, and then recommends appropriate realistic candidate colors learned in the offline step for that category. We also provide an intuitive user interface for efficient color manipulation. After color selection, image matting is performed to ensure smoothness of the object boundary. Comprehensive evaluation on various color editing examples demonstrates that our approach outperforms existing state-of-the-art color editing algorithms.
      PubDate: 2022-06-01
  • Trajectory distributions: A new description of movement for trajectory prediction

    • Abstract: Trajectory prediction is a fundamental and challenging task for numerous applications, such as autonomous driving and intelligent robots. Current works typically treat pedestrian trajectories as a series of 2D point coordinates. However, in real scenarios, the trajectory often exhibits randomness, and has its own probability distribution. Inspired by this observation and other movement characteristics of pedestrians, we propose a simple and intuitive movement description called a trajectory distribution, which maps the coordinates of the pedestrian trajectory to a 2D Gaussian distribution in space. Based on this novel description, we develop a new trajectory prediction method, which we call the social probability method. The method combines trajectory distributions and powerful convolutional recurrent neural networks. Both the input and output of our method are trajectory distributions, which provide the recurrent neural network with sufficient spatial and random information about moving pedestrians. Furthermore, the social probability method extracts spatio-temporal features directly from the new movement description to generate robust and accurate predictions. Experiments on public benchmark datasets show the effectiveness of the proposed method.
      PubDate: 2022-06-01
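The core idea of the abstract above, mapping a pedestrian's observed 2D coordinates to a 2D Gaussian distribution, can be sketched with plain NumPy. This is an illustrative reading of the description, not the authors' implementation; the function name and toy points are invented for the example.

```python
import numpy as np

def trajectory_to_gaussian(points):
    """Fit a 2D Gaussian (mean, covariance) to trajectory coordinates.

    points: (N, 2) array of observed (x, y) positions for one pedestrian.
    Returns the mean vector and covariance matrix of the distribution.
    """
    points = np.asarray(points, dtype=float)
    mean = points.mean(axis=0)
    # rowvar=False: each row is an observation, each column a variable
    cov = np.cov(points, rowvar=False)
    return mean, cov

# Example: a noisy, roughly diagonal walk
pts = np.array([[0.0, 0.0], [1.0, 1.1], [2.1, 1.9], [3.0, 3.2]])
mu, sigma = trajectory_to_gaussian(pts)
```

In the paper's framing, such (mean, covariance) summaries would feed a recurrent network in place of raw point coordinates.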
  • Co-occurrence based texture synthesis

    • Abstract: As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive, and interpretable latent representation for texture synthesis, which can be used to generate smooth texture morphs between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture by directly using the co-occurrence values.
      PubDate: 2022-06-01
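The co-occurrence statistics this abstract builds on can be computed directly. A minimal gray-level co-occurrence count (a classic texture-analysis primitive, not the paper's differentiable loss) might look like:

```python
import numpy as np

def cooccurrence(img, levels, dx=1, dy=0):
    """Count co-occurrences of gray levels at offset (dx, dy).

    img: 2D integer array with values in [0, levels).
    Returns a (levels, levels) matrix M where M[a, b] counts pixel
    pairs (p, q) with q offset from p by (dy, dx), img[p] == a, img[q] == b.
    """
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=int)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                M[img[y, x], img[y2, x2]] += 1
    return M

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
M = cooccurrence(img, levels=3)
```

Normalizing M gives the joint distribution of neighboring gray levels, which is the kind of local statistic the paper conditions its generator on.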
  • Non-dominated sorting based multi-page photo collage

    • Abstract: The rise of social networking services (SNSs) has brought a surge in image sharing. The sharing mode of multi-page photo collage (MPC), which posts several image collages at a time, can often be observed on many social network platforms; it enables images to be uploaded and arranged in a logical order. This study focuses on the construction of MPC for an image collection and formulates it as a joint optimization problem, which involves not only the arrangement within a single collage but also the arrangement across different collages. Novel balance-aware measurements, which combine graphic features with psychological findings, are introduced. A non-dominated sorting genetic algorithm is adopted to optimize the MPC guided by these measurements. Experiments demonstrate that the proposed method leads to diverse, visually pleasant, and logically clear MPC results, comparable to manually designed MPC results.
      PubDate: 2022-06-01
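Non-dominated sorting, the core of the NSGA family of algorithms the abstract adopts, partitions candidate solutions into Pareto fronts. A naive O(n²)-per-front sketch for minimization (function names and points are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition points into Pareto fronts (front 0 = non-dominated)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point is in the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy bi-objective scores (e.g. imbalance vs. disorder of a collage layout)
pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(pts)
```

In NSGA-II-style optimization, earlier fronts are preferred when selecting the next generation of candidate layouts.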
  • Unsupervised random forest for affinity estimation

    • Abstract: This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large and high-dimensional data. The criterion used for node splitting during forest construction can handle rank-deficiency when measuring cluster compactness. The binary forest-based metric is extended to continuous metrics by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing down data pairs in the forest using a limited number of decision trees. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes affinity measures and overcomes inconsistent leaf assignments. The random-forest-based metric with PLS facilitates the establishment of consistent and point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos and point-wise correspondence. Extensive experiments demonstrate the effectiveness of the proposed method in affinity estimation in a comparison with the state-of-the-art.
      PubDate: 2022-06-01
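The binary forest-based metric described above reduces, in its simplest form, to agreement of leaf assignments across trees: two samples are similar to the extent that they land in the same leaf. A toy sketch under that reading (the leaf table is fabricated for illustration; the paper's continuous extension via shared traversal paths is not shown):

```python
import numpy as np

def forest_affinity(leaf_ids, i, j):
    """Binary forest metric: fraction of trees in which samples i and j
    fall into the same leaf.

    leaf_ids: (n_samples, n_trees) array of leaf indices per tree.
    """
    leaf_ids = np.asarray(leaf_ids)
    return float(np.mean(leaf_ids[i] == leaf_ids[j]))

# Toy leaf assignments for 3 samples across 4 trees
leaves = np.array([[0, 2, 1, 3],
                   [0, 2, 1, 0],
                   [1, 0, 1, 3]])
```

For example, samples 0 and 1 share a leaf in 3 of 4 trees, giving affinity 0.75.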
  • A survey on rendering homogeneous participating media

    • Abstract: Participating media are frequent in real-world scenes, whether milk, fruit juice, oil, or muddy water in a river or the ocean. Incoming light interacts with these participating media in complex ways: refraction at boundaries and scattering and absorption inside volumes. The radiative transfer equation is the key to solving this problem. There are several categories of rendering methods which are all based on this equation but use different solutions. In this paper, we introduce these groups, which include volume density estimation based approaches, virtual point/ray/beam lights, point based approaches, Monte Carlo based approaches, acceleration techniques, accurate single scattering methods, neural network based methods, and spatially-correlated participating media related methods. As well as discussing these methods, we consider the challenges and open problems in this research area.
      PubDate: 2022-06-01
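As a concrete taste of the Monte Carlo family the survey covers: in a homogeneous medium, the distance to the next scattering or absorption event is sampled by inverting the Beer-Lambert transmittance T(d) = exp(-σ_t d). A minimal sketch (standard textbook technique, not code from the survey):

```python
import numpy as np

def sample_free_flight(sigma_t, rng, n):
    """Sample free-flight distances in a homogeneous medium.

    Inverting T(d) = exp(-sigma_t * d) gives d = -ln(1 - u) / sigma_t
    for u ~ Uniform[0, 1). The expected distance is 1 / sigma_t.
    """
    u = rng.random(n)
    return -np.log1p(-u) / sigma_t  # log1p(-u) = log(1 - u), numerically stable

rng = np.random.default_rng(0)
d = sample_free_flight(sigma_t=2.0, rng=rng, n=200_000)
```

With σ_t = 2, the sample mean converges to the mean free path 1/σ_t = 0.5, which is the basic check any volumetric path tracer's distance sampler must pass.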
  • 3D corrective nose reconstruction from a single image

    • Abstract: There is a steadily growing range of applications that can benefit from facial reconstruction techniques, leading to an increasing demand for reconstruction of high-quality 3D face models. While it is an important expressive part of the human face, the nose has received less attention than other expressive regions in the face reconstruction literature. When applying existing reconstruction methods to facial images, the reconstructed nose models are often inconsistent with the desired shape and expression. In this paper, we propose a coarse-to-fine 3D nose reconstruction and correction pipeline to build a nose model from a single image, where 3D and 2D nose curve correspondences are adaptively updated and refined. We first correct the reconstruction result coarsely using constraints of 3D-2D sparse landmark correspondences, and then heuristically update a dense 3D-2D curve correspondence based on the coarsely corrected result. A final refinement step is performed to correct the shape based on the updated 3D-2D dense curve constraints. Experimental results show the advantages of our method for 3D nose reconstruction over existing methods.
      PubDate: 2022-06-01
  • Neighborhood co-occurrence modeling in 3D point cloud segmentation

    • Abstract: A significant performance boost has been achieved in point cloud semantic segmentation by utilization of the encoder-decoder architecture and novel convolution operations for point clouds. However, co-occurrence relationships within a local region which can directly influence segmentation results are usually ignored by current works. In this paper, we propose a neighborhood co-occurrence matrix (NCM) to model local co-occurrence relationships in a point cloud. We generate target NCM and prediction NCM from semantic labels and a prediction map respectively. Then, Kullback-Leibler (KL) divergence is used to maximize the similarity between the target and prediction NCMs to learn the co-occurrence relationship. Moreover, for large scenes where the NCMs for a sampled point cloud and the whole scene differ greatly, we introduce a reverse form of KL divergence which can better handle the difference to supervise the prediction NCMs. We integrate our method into an existing backbone and conduct comprehensive experiments on three datasets: Semantic3D for outdoor space segmentation, and S3DIS and ScanNet v2 for indoor scene segmentation. Results indicate that our method can significantly improve upon the backbone and outperform many leading competitors.
      PubDate: 2022-06-01
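The KL-divergence supervision described above can be illustrated on toy co-occurrence matrices. This sketches the general loss form (normalize both matrices into distributions, then compare), not the paper's actual code; the epsilon guard and matrices are invented for the example:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two co-occurrence matrices, treated as
    flattened, normalized probability distributions."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    p = p / p.sum()
    q = q / q.sum()
    # eps guards against log(0) for empty co-occurrence bins
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

target = np.array([[4.0, 1.0], [1.0, 4.0]])  # NCM from ground-truth labels
pred = np.array([[3.0, 2.0], [2.0, 3.0]])    # NCM from the prediction map
loss = kl_divergence(target, pred)       # forward KL, the usual direction
rev_loss = kl_divergence(pred, target)   # reverse KL, as the paper uses for large scenes
```

KL divergence is asymmetric, which is exactly why the choice between the forward and reverse forms matters when the sampled and whole-scene statistics differ.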
  • NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering
           of portraits

    • Abstract: Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset.
      PubDate: 2022-04-06
  • Rendering discrete participating media using geometrical optics

    • Abstract: We consider the scattering of light in participating media composed of sparsely and randomly distributed discrete particles. The particle size is expected to range from the scale of the wavelength to several orders of magnitude greater, resulting in an appearance with distinct graininess as opposed to the smooth appearance of continuous media. One fundamental issue in the physically-based synthesis of such appearance is to determine the necessary optical properties in every local region. Since these properties vary spatially, we resort to geometrical optics approximation (GOA), a highly efficient alternative to rigorous Lorenz-Mie theory, to quantitatively represent the scattering of a single particle. This enables us to quickly compute bulk optical properties for any particle size distribution. We then use a practical Monte Carlo rendering solution to solve energy transfer in the discrete participating media. Our proposed framework is the first to simulate a wide range of discrete participating media with different levels of graininess, converging to the continuous media case as the particle concentration increases.
      PubDate: 2022-04-01
  • PVT v2: Improved baselines with Pyramid Vision Transformer

    • Abstract: Transformers have recently led to encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs: (i) a linear complexity attention layer, (ii) an overlapping patch embedding, and (iii) a convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linearity and provides significant improvements on fundamental vision tasks such as classification, detection, and segmentation. In particular, PVT v2 achieves comparable or better performance than recent work such as the Swin transformer. We hope this work will facilitate state-of-the-art transformer research in computer vision. Code is available at
      PubDate: 2022-03-16
  • Attention mechanisms in computer vision: A survey

    • Abstract: Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multimodal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them according to approach, such as channel attention, spatial attention, temporal attention, and branch attention; a related repository is dedicated to collecting related work. We also suggest future directions for attention mechanism research.
      PubDate: 2022-03-15
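One of the surveyed families, channel attention, can be sketched in a squeeze-and-excitation style with plain NumPy: pool each channel to a scalar, pass the result through a small gating MLP, and rescale the channels. This is an inference-only illustration with random placeholder weights, not code from the survey:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (inference only).

    x:  (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) weights of the excitation MLP,
        where r is the channel reduction ratio.
    Returns the channel-reweighted feature map.
    """
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return x * scale[:, None, None]                # reweight each channel

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3, 3))
w1 = rng.standard_normal((2, 4))   # reduction ratio r = 2
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)
```

The "dynamic weight adjustment" the survey describes is visible here: the per-channel gate depends on the input features themselves, not on fixed learned constants alone.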
  • ARM3D: Attention-based relation module for indoor 3D object detection

    • Abstract: Relation contexts have proven useful for many challenging vision tasks. In the field of 3D object detection, previous methods have taken advantage of context encoding, graph embedding, or explicit relation reasoning to extract relation contexts. However, redundant relation contexts inevitably arise due to noisy or low-quality proposals. In fact, invalid relation contexts usually indicate underlying scene misunderstanding and ambiguity, which may, on the contrary, reduce performance in complex scenes. Inspired by recent attention mechanisms such as the Transformer, we propose a novel 3D attention-based relation module (ARM3D). It encompasses object-aware relation reasoning to extract pairwise relation contexts among qualified proposals, and an attention module to distribute attention weights towards different relation contexts. In this way, ARM3D can take full advantage of useful relation contexts and filter out less relevant or even confusing contexts, mitigating ambiguity in detection. We have evaluated the effectiveness of ARM3D by plugging it into several state-of-the-art 3D object detectors, showing more accurate and robust detection results. Extensive experiments show the capability and generalization of ARM3D on 3D object detection. Our source code is available at
      PubDate: 2022-03-08
  • Robust and efficient edge-based visual odometry

    • Abstract: Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track camera pose. In contrast to direct methods, we choose reprojection error to construct the optimization energy, which can effectively cope with illumination changes. The distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
      PubDate: 2022-03-07
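The reprojection error such a method optimizes is standard: project a 3D point through the camera model and measure the pixel distance to the observation. A minimal pinhole sketch (all values are illustrative; the paper's weighted edge alignment and sliding-window machinery are not shown):

```python
import numpy as np

def reprojection_error(X, R, t, K, observed):
    """Pixel reprojection error of a 3D point under a pinhole camera.

    X: 3D point in world coordinates; R, t: world-to-camera pose;
    K: 3x3 intrinsic matrix; observed: measured 2D pixel position.
    """
    Xc = R @ X + t            # world -> camera coordinates
    uvw = K @ Xc              # camera -> homogeneous pixel coordinates
    uv = uvw[:2] / uvw[2]     # perspective division
    return float(np.linalg.norm(uv - observed))

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
X = np.array([0.2, -0.1, 2.0])
err = reprojection_error(X, R, t, K, observed=np.array([370.0, 215.0]))
```

Summing such errors over tracked edge pixels (with per-pixel weights) yields the optimization energy the pose solver minimizes over R and t.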
  • High-quality indoor scene 3D reconstruction with RGB-D cameras: A brief review

    • Abstract: High-quality 3D reconstruction is an important topic in computer graphics and computer vision with many applications, such as robotics and augmented reality. The advent of consumer RGB-D cameras has made a profound advance in indoor scene reconstruction. For the past few years, researchers have spent significant effort to develop algorithms to capture 3D models with RGB-D cameras. As depth images produced by consumer RGB-D cameras are noisy and incomplete when surfaces are shiny, bright, transparent, or far from the camera, obtaining high-quality 3D scene models is still a challenge for existing systems. We here review high-quality 3D indoor scene reconstruction methods using consumer RGB-D cameras. In this paper, we make comparisons and analyses from the following aspects: (i) depth processing methods in 3D reconstruction are reviewed in terms of enhancement and completion, (ii) ICP-based, feature-based, and hybrid methods of camera pose estimation methods are reviewed, and (iii) surface reconstruction methods are reviewed in terms of surface fusion, optimization, and completion. The performance of state-of-the-art methods is also compared and analyzed. This survey will be useful for researchers who want to follow best practices in designing new high-quality 3D reconstruction methods.
      PubDate: 2022-03-06
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +44 (0)131 4513762


JournalTOCs © 2009-