Subjects -> INSTRUMENTS (Total: 63 journals)
Showing 1 - 63 of 63 Journals sorted by number of followers
International Journal of Remote Sensing     Hybrid Journal   (Followers: 161)
IEEE Sensors Journal     Hybrid Journal   (Followers: 119)
Remote Sensing of Environment     Hybrid Journal   (Followers: 96)
Journal of Applied Remote Sensing     Hybrid Journal   (Followers: 87)
Modern Instrumentation     Open Access   (Followers: 58)
Remote Sensing     Open Access   (Followers: 57)
International Journal of Remote Sensing Applications     Open Access   (Followers: 48)
International Journal of Instrumentation Science     Open Access   (Followers: 42)
Experimental Astronomy     Hybrid Journal   (Followers: 39)
Measurement and Control     Open Access   (Followers: 36)
Photogrammetric Engineering & Remote Sensing     Full-text available via subscription   (Followers: 34)
Journal of Instrumentation     Hybrid Journal   (Followers: 32)
Remote Sensing Science     Open Access   (Followers: 30)
Applied Mechanics Reviews     Full-text available via subscription   (Followers: 27)
Review of Scientific Instruments     Hybrid Journal   (Followers: 20)
European Journal of Remote Sensing     Open Access   (Followers: 17)
Videoscopy     Full-text available via subscription   (Followers: 15)
Flow Measurement and Instrumentation     Hybrid Journal   (Followers: 15)
Transactions of the Institute of Measurement and Control     Hybrid Journal   (Followers: 12)
Journal of Sensors and Sensor Systems     Open Access   (Followers: 11)
Remote Sensing Applications : Society and Environment     Full-text available via subscription   (Followers: 9)
Instrumentation Science & Technology     Hybrid Journal   (Followers: 8)
International Journal of Applied Mechanics     Hybrid Journal   (Followers: 8)
Imaging & Microscopy     Hybrid Journal   (Followers: 7)
Microscopy     Hybrid Journal   (Followers: 7)
Metrology and Measurement Systems     Open Access   (Followers: 7)
Science of Remote Sensing     Open Access   (Followers: 7)
Optoelectronics, Instrumentation and Data Processing     Hybrid Journal   (Followers: 6)
International Journal of Metrology and Quality Engineering     Full-text available via subscription   (Followers: 5)
Measurement : Sensors     Open Access   (Followers: 5)
PFG : Journal of Photogrammetry, Remote Sensing and Geoinformation Science     Hybrid Journal   (Followers: 5)
Computational Visual Media     Open Access   (Followers: 5)
Journal of Medical Devices     Full-text available via subscription   (Followers: 4)
Sensors and Materials     Open Access   (Followers: 4)
IEEE Sensors Letters     Hybrid Journal   (Followers: 4)
Journal of Astronomical Instrumentation     Open Access   (Followers: 4)
Journal of Optical Technology     Full-text available via subscription   (Followers: 4)
IEEE Journal on Miniaturization for Air and Space Systems     Hybrid Journal   (Followers: 3)
IJEIS (Indonesian Journal of Electronics and Instrumentation Systems)     Open Access   (Followers: 3)
Sensors International     Open Access   (Followers: 3)
Solid State Nuclear Magnetic Resonance     Hybrid Journal   (Followers: 3)
Measurement Techniques     Hybrid Journal   (Followers: 3)
Journal of Instrumentation Technology & Innovations     Full-text available via subscription   (Followers: 3)
International Journal of Sensor Networks     Hybrid Journal   (Followers: 2)
International Journal of Measurement Technologies and Instrumentation Engineering     Full-text available via subscription   (Followers: 2)
Geoscientific Instrumentation, Methods and Data Systems     Open Access   (Followers: 2)
International Journal of Testing     Hybrid Journal   (Followers: 1)
Medical Devices & Sensors     Hybrid Journal   (Followers: 1)
Instruments and Experimental Techniques     Hybrid Journal   (Followers: 1)
Geoscientific Instrumentation, Methods and Data Systems Discussions     Open Access   (Followers: 1)
Journal of Research of NIST     Open Access   (Followers: 1)
Journal of Vacuum Science & Technology B     Hybrid Journal   (Followers: 1)
Invention Disclosure     Open Access   (Followers: 1)
Metrology and Instruments / Метрологія та прилади     Open Access  
Measurement Instruments for the Social Sciences     Open Access  
Труды СПИИРАН (SPIIRAS Proceedings)     Open Access  
Standards     Open Access  
Jurnal Informatika Upgris     Open Access  
InfoTekJar : Jurnal Nasional Informatika dan Teknologi Jaringan     Open Access  
Devices and Methods of Measurements     Open Access  
EPJ Techniques and Instrumentation     Open Access  
Journal of Medical Signals and Sensors     Open Access  
Documenta & Instrumenta     Open Access  
Computational Visual Media
Number of Followers: 5  

  This is an Open Access journal
ISSN (Print) 2096-0433 - ISSN (Online) 2096-0662
Published by SpringerOpen  [228 journals]
  • Automatic location and semantic labeling of landmarks on 3D human body
           models

    • Abstract: Landmarks on human body models are of great significance for applications such as digital anthropometry and clothing design. The diversity of pose and shape of human body models and the semantic gap make landmarking a challenging problem. In this paper, a learning-based method is proposed to locate landmarks on human body models by analyzing the relationship between geometric descriptors and semantic labels of landmarks. A shape alignment algorithm is proposed to align human body models to resolve symmetric ambiguity. A symmetry-aware descriptor is proposed based on the structure of the human body models, which is robust to both pose and shape variations in human body models. An AdaBoost regression algorithm is adopted to establish the correspondence between several descriptors and semantic labels of the landmarks. Quantitative and qualitative analyses and comparisons show that the proposed method can obtain more accurate landmarks and distinguish symmetrical landmarks semantically. Additionally, a dataset of landmarked human body models is also provided, containing 271 human body models collected from current human body datasets; each model has 17 landmarks labeled manually. (See the sketch below.)
      PubDate: 2022-12-01
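    • Sketch: The regression step the abstract describes — mapping geometric descriptors to landmark positions with AdaBoost — can be illustrated with scikit-learn. This is a minimal stand-in, not the authors' model: the descriptor dimension and data are invented, and in practice one such regressor would be fitted per landmark coordinate.

      # Hypothetical sketch (scikit-learn >= 1.2): AdaBoost regression from
      # per-model geometric descriptors to one coordinate of one landmark.
      import numpy as np
      from sklearn.ensemble import AdaBoostRegressor
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(271, 64))   # symmetry-aware descriptors (assumed 64-D)
      y = rng.normal(size=271)         # one coordinate of one manually labeled landmark

      model = AdaBoostRegressor(
          estimator=DecisionTreeRegressor(max_depth=4),  # weak learner
          n_estimators=100,
      )
      model.fit(X, y)
      print(model.predict(X[:3]))      # predicted landmark coordinate for 3 models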
       
  • Joint self-supervised and reference-guided learning for depth inpainting

    • Abstract: Depth information can benefit various computer vision tasks on both images and videos. However, depth maps may suffer from invalid values in many pixels, and also large holes. To improve such data, we propose a joint self-supervised and reference-guided learning approach for depth inpainting. For the self-supervised learning strategy, we introduce an improved spatial convolutional sparse coding module in which total variation regularization is employed to enhance the structural information while preserving edge information. This module alternately learns a convolutional dictionary and sparse coding from a corrupted depth map. Then, both the learned convolutional dictionary and sparse coding are convolved to yield an initial depth map, which is effectively smoothed using local contextual information. The reference-guided learning part is inspired by the fact that adjacent pixels with close colors in the RGB image tend to have similar depth values. We thus construct a hierarchical joint bilateral filter module using the corresponding color image to fill in large holes. In summary, our approach integrates a convolutional sparse coding module to preserve local contextual information and a hierarchical joint bilateral filter module for filling using specific adjacent information. Experimental results show that the proposed approach works well for both invalid value restoration and large hole inpainting. (See the sketch below.)
      PubDate: 2022-12-01
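    • Sketch: The hole-filling intuition — neighbors with similar guide colors vote for the missing depth — can be written as a single-scale joint bilateral filter. This toy NumPy version (brute-force loop over holes, single scale) stands in for the paper's hierarchical module.

      import numpy as np

      def joint_bilateral_fill(depth, guide, radius=5, sigma_s=3.0, sigma_r=10.0):
          """Fill zero-valued (invalid) depth pixels with a weighted average of
          valid neighbors; weights combine spatial closeness and color
          similarity in the RGB guide image. Single-scale toy, not the
          paper's hierarchical filter."""
          h, w = depth.shape
          out = depth.astype(np.float64).copy()
          for y, x in zip(*np.where(depth == 0)):
              y0, y1 = max(0, y - radius), min(h, y + radius + 1)
              x0, x1 = max(0, x - radius), min(w, x + radius + 1)
              d = depth[y0:y1, x0:x1].astype(np.float64)
              g = guide[y0:y1, x0:x1].astype(np.float64)
              gy, gx = np.mgrid[y0:y1, x0:x1]
              w_s = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
              w_r = np.exp(-((g - guide[y, x]) ** 2).sum(-1) / (2 * sigma_r ** 2))
              wgt = w_s * w_r * (d > 0)            # only valid neighbors vote
              if wgt.sum() > 0:
                  out[y, x] = (wgt * d).sum() / wgt.sum()
          return out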
       
  • Constructing self-supporting surfaces with planar quadrilateral elements

    • Abstract: We present a simple yet effective method for constructing 3D self-supporting surfaces with planar quadrilateral (PQ) elements. Starting with a triangular discretization of a self-supporting surface, we first compute the principal curvatures and directions of each triangular face using a new discrete differential geometry approach, yielding more accurate results than existing methods. Then, we smooth the principal direction field to reduce the number of singularities. Next, we partition all faces into two groups in terms of principal curvature difference. For each face with small curvature difference, we compute a stretch matrix that turns the principal directions into a pair of conjugate directions. For the remaining triangular faces, we simply keep their smoothed principal directions. Finally, applying a mixed-integer programming solver to the mixed principal and conjugate direction field, we obtain a planar quadrilateral mesh. Experimental results show that our method is computationally efficient and can yield high-quality PQ meshes that well approximate the geometry of the input surfaces and maintain their self-supporting properties. (See the sketch below.)
      PubDate: 2022-12-01
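    • Sketch: The geometric fact behind PQ extraction is that quads whose edges follow conjugate directions are asymptotically planar. Two tangent directions d_1, d_2 at a surface point are conjugate when the second fundamental form vanishes on the pair,

      \mathrm{II}(d_1, d_2) = d_1^{\top} S\, d_2 = 0,

      where S is the shape operator. The principal directions form a conjugate pair that is also orthogonal; where they are unreliable (small principal-curvature difference), another conjugate pair can be used instead, which presumably is what the stretch matrix above produces. The link to planarity is standard conjugate-net theory; the reading of the stretch matrix is our interpretation of the abstract.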
       
  • Image-guided color mapping for categorical data visualization

    • Abstract: Appropriate color mapping for categorical data visualization can significantly facilitate the discovery of underlying data patterns and effectively bring out visual aesthetics. Some systems suggest predefined palettes for this task. However, a predefined color mapping is not always optimal, failing to consider users’ needs for customization. Given an input categorical data visualization and a reference image, we present an effective method to automatically generate a coloring that resembles the reference while allowing classes to be easily distinguished. We extract a color palette with high perceptual distance between the colors by sampling dominant and discriminable colors from the image’s color space. These colors are assigned to given classes by solving an integer quadratic program to optimize point distinctness of the given chart while preserving the color spatial relations in the source image. We show results on various coloring tasks, with a diverse set of new coloring appearances for the input data. We also compare our approach to state-of-the-art palettes in a controlled user study, which shows that our method achieves comparable performance in class discrimination, while being more similar to the source image. User feedback after using our system verifies its efficiency in automatically generating desirable colorings that meet the user’s expectations when choosing a reference. (See the sketch below.)
      PubDate: 2022-12-01
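    • Sketch: The palette-extraction step — dominant yet mutually discriminable colors sampled from the reference image — can be approximated by clustering pixels and greedily keeping well-separated cluster centers. Everything here (RGB instead of a perceptual space such as CIELAB, the distance threshold) is a simplification; the class assignment itself, an integer quadratic program, is not reproduced.

      import numpy as np
      from sklearn.cluster import KMeans

      def extract_palette(image_rgb, k=6, min_dist=40.0):
          """Cluster pixels to find dominant colors, then greedily keep
          centers at least min_dist apart so classes stay distinguishable.
          Simplified stand-in for the paper's palette sampling."""
          pixels = image_rgb.reshape(-1, 3).astype(np.float64)
          centers = KMeans(n_clusters=2 * k, n_init=4,
                           random_state=0).fit(pixels).cluster_centers_
          palette = [centers[0]]
          for c in centers[1:]:
              if len(palette) == k:
                  break
              if min(np.linalg.norm(c - p) for p in palette) > min_dist:
                  palette.append(c)
          return np.array(palette)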
       
  • BLNet: Bidirectional learning network for point clouds

    • Abstract: The key challenge in processing point clouds lies in the inherent lack of ordering and irregularity of the 3D points. By relying on per-point multi-layer perceptrons (MLPs), most existing point-based approaches only address the first issue yet ignore the second one. Directly convolving kernels with irregular points will result in loss of shape information. This paper introduces a novel point-based bidirectional learning network (BLNet) to analyze irregular 3D points. BLNet optimizes the learning of 3D points through two iterative operations: feature-guided point shifting and feature learning from shifted points, so as to minimise intra-class variances, leading to a more regular distribution. On the other hand, explicitly modeling point positions leads to a new feature encoding with increased structure-awareness. Then, an attention pooling unit selectively combines important features. This bidirectional learning alternately regularizes the point cloud and learns its geometric features, with these two procedures iteratively promoting each other for more effective feature learning. Experiments show that BLNet is able to learn deep point features robustly and efficiently, and outperforms the prior state-of-the-art on multiple challenging tasks. (See the sketch below.)
      PubDate: 2022-12-01
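    • Sketch: The attention pooling unit the abstract mentions can be written generically in PyTorch: score every point, then aggregate per-point features as a softmax-weighted sum. This is the generic pattern, not BLNet's exact layer.

      import torch
      import torch.nn as nn

      class AttentionPool(nn.Module):
          """Selectively combine per-point features via learned weights."""
          def __init__(self, dim):
              super().__init__()
              self.score = nn.Linear(dim, 1)

          def forward(self, feats):                        # feats: (B, N, dim)
              w = torch.softmax(self.score(feats), dim=1)  # (B, N, 1) point weights
              return (w * feats).sum(dim=1)                # (B, dim) pooled feature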
       
  • Light field salient object detection: A review and benchmark

    • Abstract: Salient object detection (SOD) is a long-standing research topic in computer vision with increasing interest in the past decade. Since light fields record comprehensive information of natural scenes that benefit SOD in a number of ways, using light field inputs to improve saliency detection over conventional RGB inputs is an emerging trend. This paper provides the first comprehensive review and a benchmark for light field SOD, which has long been lacking in the saliency community. Firstly, we introduce light fields, including theory and data forms, and then review existing studies on light field SOD, covering ten traditional models, seven deep learning-based models, a comparative study, and a brief review. Existing datasets for light field SOD are also summarized. Secondly, we benchmark nine representative light field SOD models together with several cutting-edge RGB-D SOD models on four widely used light field datasets, providing insightful discussions and analyses, including a comparison between light field SOD and RGB-D SOD models. Due to the inconsistency of current datasets, we further generate complete data and supplement focal stacks, depth maps, and multi-view images for them, making them consistent and uniform. Our supplemental data make a universal benchmark possible. Lastly, light field SOD is a specialised problem, because of its diverse data representations and high dependency on acquisition hardware, so it differs greatly from other saliency detection tasks. We provide nine observations on challenges and future directions, and outline several open issues. All the materials including models, datasets, benchmarking results, and supplemented light field datasets are publicly available at https://github.com/kerenfu/LFSOD-Survey.
      PubDate: 2022-12-01
       
  • Message from the Best Paper Award Committee

    • PubDate: 2022-09-01
       
  • AOGAN: A generative adversarial network for screen space ambient occlusion

    • Abstract: Ambient occlusion (AO) is a widely-used real-time rendering technique which estimates light intensity on visible scene surfaces. Recently, a number of learning-based AO approaches have been proposed, which bring a new angle to solving screen space shading via a unified learning framework with competitive quality and speed. However, most such methods have high error for complex scenes or tend to ignore details. We propose an end-to-end generative adversarial network for the production of realistic AO, and explore the importance of perceptual loss in the generative model to AO accuracy. An attention mechanism is also described to improve the accuracy of details, whose effectiveness is demonstrated on a wide variety of scenes. (See the sketch below.)
      PubDate: 2022-09-01
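    • Sketch: A perceptual loss of the kind the abstract explores is conventionally computed in the feature space of a pretrained network. A minimal VGG-based version follows; the layer cutoff, L1 distance, and the 3-channel replication a single-channel AO map would need are all assumptions, not the paper's choices.

      import torch.nn as nn
      from torchvision.models import vgg16

      class PerceptualLoss(nn.Module):
          """L1 distance between frozen VGG-16 features of prediction and
          target. Single-channel AO maps would be repeated to 3 channels."""
          def __init__(self, layers=9):
              super().__init__()
              self.feat = vgg16(weights="IMAGENET1K_V1").features[:layers].eval()
              for p in self.feat.parameters():
                  p.requires_grad_(False)

          def forward(self, pred, target):                 # (B, 3, H, W) in [0, 1]
              return nn.functional.l1_loss(self.feat(pred), self.feat(target))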
       
  • Progressive edge-sensing dynamic scene deblurring

    • Abstract: Deblurring images of dynamic scenes is a challenging task because blurring occurs due to a combination of many factors. In recent years, the use of multi-scale pyramid methods to recover high-resolution sharp images has been extensively studied. We improve detail recovery in the cascade structure through a network that progressively integrates data streams. Our new multi-scale structure and edge feature perception design deals with changes in blurring at different spatial scales and enhances the sensitivity of the network to blurred edges. The coarse-to-fine architecture restores the image structure, first performing global adjustments, and then performing local refinement. In this way, not only is global correlation considered, but also residual information is used to significantly improve image restoration and enhance texture details. Experimental results show quantitative and qualitative improvements over existing methods.
      PubDate: 2022-09-01
       
  • High fidelity virtual try-on network via semantic adaptation and
           distributed componentization

    • Abstract: Image-based virtual try-on systems have significant commercial value in online garment shopping. However, prior methods fail to appropriately handle details, so are defective in maintaining the original appearance of organizational items including arms, the neck, and in-shop garments. We propose a novel high fidelity virtual try-on network to generate realistic results. Specifically, a distributed pipeline is used for simultaneous generation of organizational items. First, the in-shop garment is warped using thin plate splines (TPS) to give a coarse shape reference, and then a corresponding target semantic map is generated, which can adaptively respond to the distribution of different items triggered by different garments. Second, organizational items are componentized separately using our novel semantic map-based image adjustment network (SMIAN) to avoid interference between body parts. Finally, all components are integrated to generate the overall result by SMIAN. A priori dual-modal information is incorporated in the tail layers of SMIAN to improve the convergence rate of the network. Experiments demonstrate that the proposed method can retain better details of condition information than current methods. Our method achieves convincing quantitative and qualitative results on existing benchmark datasets. (See the sketch below.)
      PubDate: 2022-06-16
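    • Sketch: The TPS warp used for the coarse garment shape has a standard closed form. For 2D control points p_i, the interpolant is

      f(p) = a_0 + A\,p + \sum_{i=1}^{n} w_i\, \phi\!\left(\lVert p - p_i \rVert\right),
      \qquad \phi(r) = r^2 \log r,

      with the affine part (a_0, A) and kernel weights w_i solved from the control-point correspondences under the side conditions \sum_i w_i = 0 and \sum_i w_i p_i^{\top} = 0. This is the textbook thin plate spline, stated only to unpack the abstract's "warped using thin plate splines (TPS)".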
       
  • Self-supervised coarse-to-fine monocular depth estimation using a
           lightweight attention module

    • Abstract: Self-supervised monocular depth estimation has been widely investigated and applied in previous works. However, existing methods suffer from texture-copy, depth drift, and incomplete structure. It is difficult for normal CNN networks to completely understand the relationship between the object and its surrounding environment. Moreover, it is hard to design the depth smoothness loss to balance depth smoothness and sharpness. To address these issues, we propose a coarse-to-fine method with a normalized convolutional block attention module (NCBAM). In the coarse estimation stage, we incorporate the NCBAM into depth and pose networks to overcome the texture-copy and depth drift problems. Then, we use a new network to refine the coarse depth guided by the color image and produce a structure-preserving depth result in the refinement stage. Our method can produce results competitive with state-of-the-art methods. Comprehensive experiments prove the effectiveness of our two-stage method using the NCBAM. (See the sketch below.)
      PubDate: 2022-06-16
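    • Sketch: NCBAM extends the convolutional block attention module (CBAM) family. A plain CBAM — channel attention from pooled statistics, then a spatial attention map — looks like this in PyTorch; the normalization the authors add is not reproduced here.

      import torch
      import torch.nn as nn

      class CBAM(nn.Module):
          """Channel attention followed by spatial attention (plain CBAM)."""
          def __init__(self, c, r=16):
              super().__init__()
              self.mlp = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(),
                                       nn.Linear(c // r, c))
              self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

          def forward(self, x):                            # x: (B, C, H, W)
              a = self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3)))
              x = x * torch.sigmoid(a)[:, :, None, None]   # channel reweighting
              s = torch.cat([x.mean(1, keepdim=True),
                             x.amax(1, keepdim=True)], dim=1)
              return x * torch.sigmoid(self.spatial(s))    # spatial mask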
       
  • Recent advances in glinty appearance rendering

    • Abstract: The interaction between light and materials is key to physically-based realistic rendering. However, it is also complex to analyze, especially when the materials contain a large number of details and thus exhibit “glinty” visual effects. Recent methods of producing glinty appearance are expected to be important in next-generation computer graphics. We provide here a comprehensive survey on recent glinty appearance rendering. We start with a definition of glinty appearance based on microfacet theory, and then summarize research works in terms of representation and practical rendering. We have implemented typical methods using our unified platform and compare them in terms of visual effects, rendering speed, and memory consumption. Finally, we briefly discuss limitations and future research directions. We hope our analysis, implementations, and comparisons will provide insight for readers hoping to choose suitable methods for applications, or carry out research. (See the sketch below.)
      PubDate: 2022-06-16
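    • Sketch: The microfacet theory the survey builds on writes the specular BRDF as

      f_r(\omega_i, \omega_o) =
      \frac{D(\omega_h)\, G(\omega_i, \omega_o)\, F(\omega_i, \omega_h)}
           {4\, (\mathbf{n} \cdot \omega_i)\, (\mathbf{n} \cdot \omega_o)},

      where D is the normal distribution function, G the shadowing-masking term, F the Fresnel term, and \omega_h the half vector. Glinty-appearance methods replace the smooth analytic D with a spatially varying distribution derived from a high-resolution normal map, which is what produces the discrete sparkles.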
       
  • Scene text removal via cascaded text stroke detection and erasing

    • Abstract: Recent learning-based approaches show promising performance improvement for the scene text removal task but usually leave several remnants of text and provide visually unpleasant results. In this work, a novel end-to-end framework is proposed based on accurate text stroke detection. Specifically, the text removal problem is decoupled into text stroke detection and stroke removal; we design separate networks to solve these two subproblems, the latter being a generative network. These two networks are combined as a processing unit, which is cascaded to obtain our final model for text removal. Experimental results demonstrate that the proposed method substantially outperforms the state-of-the-art for locating and erasing scene text. A new large-scale real-world dataset with 12,120 images has been constructed and is being made available to facilitate research, as current publicly available datasets are mainly synthetic so cannot properly measure the performance of different methods. (See the sketch below.)
      PubDate: 2022-06-01
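    • Sketch: The decoupled pipeline can be expressed as a unit that predicts a stroke mask and feeds the image plus mask to a generative eraser, with several units chained. Both subnetworks here are placeholders, not the paper's architectures.

      import torch
      import torch.nn as nn

      class DetectEraseUnit(nn.Module):
          """One detect-then-erase pass of the cascade."""
          def __init__(self, detector: nn.Module, eraser: nn.Module):
              super().__init__()
              self.detector, self.eraser = detector, eraser

          def forward(self, img):
              mask = self.detector(img)                       # stroke mask in [0, 1]
              return self.eraser(torch.cat([img, mask], 1))   # mask-guided removal

      def remove_text(units, img):
          for u in units:              # each pass cleans residual text
              img = u(img)
          return img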
       
  • NPRportrait 1.0: A three-level benchmark for non-photorealistic rendering
           of portraits

    • Abstract: Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level, benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We perform evaluation for a wide variety of image stylisation methods (both portrait-specific and general purpose, and also both traditional NPR approaches and NST) using the new benchmark dataset.
      PubDate: 2022-04-06
       
  • Rendering discrete participating media using geometrical optics
           approximation

    • Abstract: We consider the scattering of light in participating media composed of sparsely and randomly distributed discrete particles. The particle size is expected to range from the scale of the wavelength to several orders of magnitude greater, resulting in an appearance with distinct graininess as opposed to the smooth appearance of continuous media. One fundamental issue in the physically-based synthesis of such appearance is to determine the necessary optical properties in every local region. Since these properties vary spatially, we resort to geometrical optics approximation (GOA), a highly efficient alternative to rigorous Lorenz-Mie theory, to quantitatively represent the scattering of a single particle. This enables us to quickly compute bulk optical properties for any particle size distribution. We then use a practical Monte Carlo rendering solution to solve energy transfer in the discrete participating media. Our proposed framework is the first to simulate a wide range of discrete participating media with different levels of graininess, converging to the continuous media case as the particle concentration increases. (See the sketch below.)
      PubDate: 2022-04-01
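    • Sketch: The Monte Carlo step samples the free-flight distance from the transmittance. With a bulk extinction coefficient \sigma_t (in this paper, derived from the GOA single-particle scattering aggregated over the particle size distribution), the homogeneous-medium transmittance and distance sampling are

      T(t) = e^{-\sigma_t t}, \qquad
      t = -\frac{\ln(1 - \xi)}{\sigma_t}, \quad \xi \sim \mathcal{U}[0, 1).

      This is the standard volumetric sampling recipe, stated here only to make the abstract's "Monte Carlo rendering solution" concrete.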
       
  • PVT v2: Improved baselines with Pyramid Vision Transformer

    • Abstract: Transformers have recently led to encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding three designs: (i) a linear complexity attention layer, (ii) an overlapping patch embedding, and (iii) a convolutional feed-forward network. With these modifications, PVT v2 reduces the computational complexity of PVT v1 to linearity and provides significant improvements on fundamental vision tasks such as classification, detection, and segmentation. In particular, PVT v2 achieves comparable or better performance than recent work such as the Swin transformer. We hope this work will facilitate state-of-the-art transformer research in computer vision. Code is available at https://github.com/whai362/PVT. (See the sketch below.)
      PubDate: 2022-03-16
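    • Sketch: A linear-complexity attention layer in the spirit of design (i) can be obtained by pooling the feature map to a fixed size before computing keys and values, so attention cost grows linearly with the number of query tokens. Pool size, head count, and omitted details (e.g., the depth-wise convolution) are assumptions that deviate from the paper.

      import torch
      import torch.nn as nn

      class LinearSRAttention(nn.Module):
          """Multi-head attention whose K/V come from a pooled feature map."""
          def __init__(self, dim, heads=4, pool=7):
              super().__init__()
              self.h, self.scale = heads, (dim // heads) ** -0.5
              self.q = nn.Linear(dim, dim)
              self.kv = nn.Linear(dim, 2 * dim)
              self.pool = nn.AdaptiveAvgPool2d(pool)   # fixed-size K/V source
              self.proj = nn.Linear(dim, dim)

          def forward(self, x, hgt, wid):              # x: (B, hgt*wid, dim)
              b, n, d = x.shape
              q = self.q(x).view(b, n, self.h, -1).transpose(1, 2)
              p = self.pool(x.transpose(1, 2).reshape(b, d, hgt, wid))
              k, v = self.kv(p.flatten(2).transpose(1, 2)).chunk(2, -1)
              k = k.view(b, -1, self.h, d // self.h).transpose(1, 2)
              v = v.view(b, -1, self.h, d // self.h).transpose(1, 2)
              attn = (q @ k.transpose(-2, -1) * self.scale).softmax(-1)
              return self.proj((attn @ v).transpose(1, 2).reshape(b, n, d))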
       
  • Attention mechanisms in computer vision: A survey

    • Abstract: Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multimodal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them according to approach, such as channel attention, spatial attention, temporal attention, and branch attention; a related repository https://github.com/MenghaoGuo/Awesome-Vision-Attentions is dedicated to collecting related work. We also suggest future directions for attention mechanism research. (See the sketch below.)
      PubDate: 2022-03-15
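    • Sketch: Channel attention, the first category in the survey's taxonomy, is exemplified by the squeeze-and-excitation block: global pooling summarizes each channel, and a small MLP turns the summary into per-channel weights.

      import torch.nn as nn

      class SEBlock(nn.Module):
          """Squeeze-and-excitation channel attention."""
          def __init__(self, c, r=16):
              super().__init__()
              self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(),
                                      nn.Linear(c // r, c), nn.Sigmoid())

          def forward(self, x):                    # x: (B, C, H, W)
              w = self.fc(x.mean(dim=(2, 3)))      # squeeze to (B, C) statistics
              return x * w[:, :, None, None]       # excite: rescale channels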
       
  • ARM3D: Attention-based relation module for indoor 3D object detection

    • Abstract: Relation contexts have proven useful for many challenging vision tasks. In the field of 3D object detection, previous methods have taken advantage of context encoding, graph embedding, or explicit relation reasoning to extract relation contexts. However, there exist inevitably redundant relation contexts due to noisy or low-quality proposals. In fact, invalid relation contexts usually indicate underlying scene misunderstanding and ambiguity, which may, on the contrary, reduce the performance in complex scenes. Inspired by recent attention mechanisms such as the Transformer, we propose a novel 3D attention-based relation module (ARM3D). It encompasses object-aware relation reasoning to extract pair-wise relation contexts among qualified proposals and an attention module to distribute attention weights towards different relation contexts. In this way, ARM3D can take full advantage of the useful relation contexts and filter those less relevant or even confusing contexts, which mitigates the ambiguity in detection. We have evaluated the effectiveness of ARM3D by plugging it into several state-of-the-art 3D object detectors and showing more accurate and robust detection results. Extensive experiments show the capability and generalization of ARM3D on 3D object detection. Our source code is available at https://github.com/lanlan96/ARM3D.
      PubDate: 2022-03-08
       
  • Robust and efficient edge-based visual odometry

    • Abstract: Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track camera pose. In contrast to direct methods, we choose reprojection error to construct the optimization energy, which can effectively cope with illumination changes. The distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust. (See the sketch below.)
      PubDate: 2022-03-07
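    • Sketch: A reprojection-error energy over edge points, of the kind the abstract chooses, commonly has the form

      E(\mathbf{T}) = \sum_{i} \rho\!\left(
      \left\lVert \mathbf{e}_i - \pi\!\left(\mathbf{T}\,\mathbf{P}_i\right) \right\rVert^2
      \right),

      where P_i is a 3D edge point, \pi the camera projection under pose T, e_i the matched image edge location, and \rho a robust weight (cf. the paper's weighted edge alignment). In distance-transform-based trackers, the residual \lVert e_i - \pi(\cdot) \rVert can be read directly from the precomputed distance transform map, which is the efficiency gain the abstract refers to. This is the generic form, not necessarily the paper's exact energy.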
       
  • High-quality indoor scene 3D reconstruction with RGB-D cameras: A brief
           review

    • Abstract: High-quality 3D reconstruction is an important topic in computer graphics and computer vision with many applications, such as robotics and augmented reality. The advent of consumer RGB-D cameras has made a profound advance in indoor scene reconstruction. For the past few years, researchers have spent significant effort to develop algorithms to capture 3D models with RGB-D cameras. As depth images produced by consumer RGB-D cameras are noisy and incomplete when surfaces are shiny, bright, transparent, or far from the camera, obtaining high-quality 3D scene models is still a challenge for existing systems. We here review high-quality 3D indoor scene reconstruction methods using consumer RGB-D cameras. In this paper, we make comparisons and analyses from the following aspects: (i) depth processing methods in 3D reconstruction are reviewed in terms of enhancement and completion, (ii) ICP-based, feature-based, and hybrid methods of camera pose estimation methods are reviewed, and (iii) surface reconstruction methods are reviewed in terms of surface fusion, optimization, and completion. The performance of state-of-the-art methods is also compared and analyzed. This survey will be useful for researchers who want to follow best practices in designing new high-quality 3D reconstruction methods. (See the sketch below.)
      PubDate: 2022-03-06
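    • Sketch: The ICP-based camera pose estimation the review covers typically minimizes the point-to-plane error

      E(\mathbf{R}, \mathbf{t}) = \sum_{i}
      \left( \left( \mathbf{R}\,\mathbf{p}_i + \mathbf{t} - \mathbf{q}_i \right)
      \cdot \mathbf{n}_i \right)^2,

      where p_i is a source depth point, q_i its matched target point with surface normal n_i, and (R, t) the camera rotation and translation. This standard objective is stated only to anchor the "ICP-based" category in the review.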
       
 