Subjects -> INSTRUMENTS (Total: 62 journals)
Showing 1 - 16 of 16 Journals sorted alphabetically
Annali dell'Istituto e Museo di storia della scienza di Firenze     Hybrid Journal  
Applied Mechanics Reviews     Full-text available via subscription   (Followers: 27)
Bulletin of Social Informatics Theory and Application     Open Access   (Followers: 1)
Computational Visual Media     Open Access   (Followers: 4)
Devices and Methods of Measurements     Open Access  
Documenta & Instrumenta - Documenta et Instrumenta     Open Access  
EPJ Techniques and Instrumentation     Open Access  
European Journal of Remote Sensing     Open Access   (Followers: 9)
Experimental Astronomy     Hybrid Journal   (Followers: 39)
Flow Measurement and Instrumentation     Hybrid Journal   (Followers: 18)
Geoscientific Instrumentation, Methods and Data Systems     Open Access   (Followers: 4)
Geoscientific Instrumentation, Methods and Data Systems Discussions     Open Access   (Followers: 1)
IEEE Journal on Miniaturization for Air and Space Systems     Hybrid Journal   (Followers: 2)
IEEE Sensors Journal     Hybrid Journal   (Followers: 103)
IEEE Sensors Letters     Hybrid Journal   (Followers: 3)
IJEIS (Indonesian Journal of Electronics and Instrumentation Systems)     Open Access   (Followers: 3)
Imaging & Microscopy     Hybrid Journal   (Followers: 9)
InfoTekJar : Jurnal Nasional Informatika dan Teknologi Jaringan     Open Access  
Instrumentation Science & Technology     Hybrid Journal   (Followers: 7)
Instruments and Experimental Techniques     Hybrid Journal   (Followers: 1)
International Journal of Applied Mechanics     Hybrid Journal   (Followers: 7)
International Journal of Instrumentation Science     Open Access   (Followers: 40)
International Journal of Measurement Technologies and Instrumentation Engineering     Full-text available via subscription   (Followers: 2)
International Journal of Metrology and Quality Engineering     Full-text available via subscription   (Followers: 4)
International Journal of Remote Sensing     Hybrid Journal   (Followers: 274)
International Journal of Remote Sensing Applications     Open Access   (Followers: 43)
International Journal of Sensor Networks     Hybrid Journal   (Followers: 4)
International Journal of Testing     Hybrid Journal   (Followers: 1)
Journal of Applied Remote Sensing     Hybrid Journal   (Followers: 83)
Journal of Astronomical Instrumentation     Open Access   (Followers: 3)
Journal of Instrumentation     Hybrid Journal   (Followers: 32)
Journal of Instrumentation Technology & Innovations     Full-text available via subscription   (Followers: 1)
Journal of Medical Devices     Full-text available via subscription   (Followers: 5)
Journal of Medical Signals and Sensors     Open Access   (Followers: 3)
Journal of Optical Technology     Full-text available via subscription   (Followers: 5)
Journal of Sensors and Sensor Systems     Open Access   (Followers: 11)
Journal of Vacuum Science & Technology B     Hybrid Journal   (Followers: 2)
Jurnal Informatika Upgris     Open Access  
Measurement : Sensors     Open Access   (Followers: 3)
Measurement and Control     Open Access   (Followers: 36)
Measurement Instruments for the Social Sciences     Open Access  
Measurement Science and Technology     Hybrid Journal   (Followers: 7)
Measurement Techniques     Hybrid Journal   (Followers: 3)
Medical Devices & Sensors     Hybrid Journal  
Medical Instrumentation     Open Access  
Metrology and Measurement Systems     Open Access   (Followers: 6)
Microscopy     Hybrid Journal   (Followers: 8)
Modern Instrumentation     Open Access   (Followers: 50)
Optoelectronics, Instrumentation and Data Processing     Hybrid Journal   (Followers: 4)
PFG : Journal of Photogrammetry, Remote Sensing and Geoinformation Science     Hybrid Journal  
Photogrammetric Engineering & Remote Sensing     Full-text available via subscription   (Followers: 29)
Remote Sensing     Open Access   (Followers: 54)
Remote Sensing Applications : Society and Environment     Full-text available via subscription   (Followers: 8)
Remote Sensing of Environment     Hybrid Journal   (Followers: 93)
Remote Sensing Science     Open Access   (Followers: 24)
Review of Scientific Instruments     Hybrid Journal   (Followers: 22)
Sensors and Materials     Open Access   (Followers: 2)
Solid State Nuclear Magnetic Resonance     Hybrid Journal   (Followers: 3)
Standards     Open Access  
Transactions of the Institute of Measurement and Control     Hybrid Journal   (Followers: 13)
Труды СПИИРАН (SPIIRAS Proceedings)     Open Access  
Computational Visual Media
Number of Followers: 4  

  This is an Open Access journal
ISSN (Print) 2096-0433 - ISSN (Online) 2096-0662
Published by SpringerOpen  [261 journals]
  • Fast raycasting using a compound deep image for virtual point light range
           determination

    • Abstract: The concept of using multiple deep images, under a variety of different names, has been explored as a possible acceleration approach for finding ray-geometry intersections. We leverage recent advances in deep image processing from order-independent transparency for fast building of a compound deep image (CDI) using a coherent memory format well suited for raycasting. We explore the use of a CDI and raycasting for the problem of determining distance between virtual point lights (VPLs) and geometry for indirect lighting, with the key raycasting step being a small fraction of total frametime.
      PubDate: 2019-05-24
       
  • Manufacturable pattern collage along a boundary

    • Abstract: Recent years have shown rapid development of digital fabrication techniques, bringing the manufacture of individual models within reach of ordinary users. Thus, tools for designing customized objects in a user-friendly way are in high demand. In this paper, we tackle the problem of generating a collage of patterns along a given boundary, aimed at digital fabrication. We represent the packing space by a pipe-like closed shape along the boundary and use ellipses as packing elements for computing an initial layout of the patterns. Then we search for the best matching pattern for each ellipse and construct the initial pattern collage in an automatic manner. To facilitate editing the collage, we provide interactive operations which allow the user to adjust the layout at the coarse level. The patterns are fine-tuned based on a spring–mass system after each interaction step. After this interactive process, the collage result is further optimized to enforce connectivity. Finally, we perform structural analysis on the collage and enhance its stability, so that the result can be fabricated. To demonstrate the effectiveness of our method, we show results fabricated by 3D printing and laser cutting.
      PubDate: 2019-05-21
       
  • Deep residual learning for denoising Monte Carlo renderings

    • Abstract: Learning-based techniques have recently been shown to be effective for denoising Monte Carlo rendering methods. However, there remains a quality gap to state-of-the-art handcrafted denoisers. In this paper, we propose a deep residual learning based method that outperforms both state-of-the-art handcrafted denoisers and learning-based denoisers. Unlike the indirect nature of existing learning-based methods (which e.g., estimate the parameters and kernel weights of an explicit feature based filter), we directly map the noisy input pixels to the smoothed output. Using this direct mapping formulation, we demonstrate that even a simple-and-standard ResNet and three common auxiliary features (depth, normal, and albedo) are sufficient to achieve high-quality denoising. This minimal requirement on auxiliary data simplifies both training and integration of our method into most production rendering pipelines. We have evaluated our method on unseen images created by a different renderer. Consistently superior quality denoising is obtained in all cases.
      PubDate: 2019-05-09
       
  • Seamless and non-repetitive 4D texture variation synthesis and real-time
           rendering for measured optical material behavior

    • Abstract: We show how to overcome the single weakness of an existing fully automatic system for acquisition of spatially varying optical material behavior of real object surfaces. While the expression of spatially varying material behavior with spherical dependence on incoming light as a 4D texture (an ABTF material model) allows flexible mapping onto arbitrary 3D geometry, with photo-realistic rendering and interaction in real time, this very method of texture-like representation exposes it to common problems of texturing, manifesting as two disadvantages. Firstly, non-seamless textures create visible artifacts at boundaries. Secondly, even a perfectly seamless texture causes repetition artifacts due to their organised placement in large numbers over a 3D surface. We have solved both problems through our novel texture synthesis method that generates a set of seamless texture variations randomly distributed over the surface at shading time. When compared to regular 2D textures, the inter-dimensional coherence of the 4D ABTF material model poses entirely new challenges to texture synthesis, which includes maintaining the consistency of material behavior throughout the 4D space spanned by the spatial image domain and the angular illumination hemisphere. In addition, we tackle the increased memory consumption caused by the numerous variations through a fitting scheme specifically designed to reconstruct the most prominent effects captured in the material model.
      PubDate: 2019-05-09
       
  • Automated brain tumor segmentation on multi-modal MR image using SegNet

    • Abstract: The potential of improving disease detection and treatment planning comes with accurate and fully automatic algorithms for brain tumor segmentation. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on pathologists’ experience. In this paper, we tackle this problem by applying a fully convolutional neural network SegNet to 3D data sets for four MRI modalities (Flair, T1, T1ce, and T2) for automated segmentation of brain tumor and subtumor parts, including necrosis, edema, and enhancing tumor. To further improve tumor segmentation, the four separately trained SegNet models are integrated by post-processing to produce four maximum feature maps by fusing the machine-learned feature maps from the fully convolutional layers of each trained model. The maximum feature maps and the pixel intensity values of the original MRI modalities are combined to encode interesting information into a feature representation. Taking the combined feature as input, a decision tree (DT) is used to classify the MRI voxels into different tumor parts and healthy brain tissue. Evaluating the proposed algorithm on the dataset provided by the Brain Tumor Segmentation 2017 (BraTS 2017) challenge, we achieved F-measure scores of 0.85, 0.81, and 0.79 for whole tumor, tumor core, and enhancing tumor, respectively. Experimental results demonstrate that using SegNet models with 3D MRI datasets and integrating the four maximum feature maps with pixel intensity values of the original MRI modalities has potential to perform well on brain tumor segmentation.
      PubDate: 2019-04-23
       
  • Optimal and interactive keyframe selection for motion capture

    • Abstract: Motion capture is increasingly used in games and movies, but often requires editing before it can be used, for many reasons. The motion may need to be adjusted to correctly interact with virtual objects or to fix problems that result from mapping the motion to a character of a different size or, beyond such technical requirements, directors can request stylistic changes. Unfortunately, editing is laborious because of the low-level representation of the data. While existing motion editing methods accomplish modest changes, larger edits can require the artist to “re-animate” the motion by manually selecting a subset of the frames as keyframes. In this paper, we automatically find sets of frames to serve as keyframes for editing the motion. We formulate the problem of selecting an optimal set of keyframes as a shortest-path problem, and solve it efficiently using dynamic programming. We create a new simplified animation by interpolating the found keyframes using a naive curve fitting technique. Our algorithm can simplify motion capture to around 10% of the original number of frames while retaining most of its detail. By simplifying animation with our algorithm, we realize a new approach to motion editing and stylization founded on the time-tested keyframe interface. We present results that show our algorithm outperforms both research algorithms and a leading commercial tool.
      PubDate: 2019-04-13
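    The shortest-path formulation described in this abstract can be sketched as follows (an illustration under assumptions, not the authors' implementation; all names are hypothetical, and a 1D signal with linear interpolation stands in for full motion curves and their curve-fitting step). Each frame is a node, an edge from frame i to frame j costs the error of interpolating the frames between them plus a per-keyframe penalty, and dynamic programming finds the cheapest path from the first frame to the last:

    ```python
    import numpy as np

    def segment_error(signal, i, j):
        # Error of replacing frames i..j with linear interpolation
        # between keyframes i and j.
        t = np.linspace(0.0, 1.0, j - i + 1)
        interp = (1 - t) * signal[i] + t * signal[j]
        return float(np.sum((signal[i:j + 1] - interp) ** 2))

    def select_keyframes(signal, penalty=0.1):
        """Shortest path over frames: dp[j] is the cheapest cost of
        simplifying frames 0..j with j as the last keyframe."""
        n = len(signal)
        dp = [np.inf] * n
        back = [0] * n
        dp[0] = 0.0
        for j in range(1, n):
            for i in range(j):
                c = dp[i] + segment_error(signal, i, j) + penalty
                if c < dp[j]:
                    dp[j], back[j] = c, i
        # Backtrack the shortest path to recover the keyframe set.
        keys = [n - 1]
        while keys[-1] != 0:
            keys.append(back[keys[-1]])
        return keys[::-1]
    ```

    The penalty trades keyframe count against reconstruction error: a larger penalty yields fewer keyframes and a coarser simplification.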
       
  • A method for estimating the errors in many-light rendering with
           supersampling

    • Abstract: In many-light rendering, a variety of visual and illumination effects, including anti-aliasing, depth of field, volumetric scattering, and subsurface scattering, can be simulated by generating a number of virtual point lights (VPLs), which simplifies computation of the resulting illumination. Naive approaches that sum the direct illumination from many VPLs are computationally expensive; scalable methods can be computed more efficiently by clustering VPLs, and then estimating their sum by sampling a small number of VPLs. Although significant speed-up has been achieved using scalable methods, clustering leads to uncontrollable errors, resulting in noise in the rendered images. In this paper, we propose a method to improve the estimation accuracy of many-light rendering involving such visual and illumination effects. We demonstrate that our method can improve the estimation accuracy by a factor of 2.3 over the previous method.
      PubDate: 2019-04-11
       
  • Livestock detection in aerial images using a fully convolutional network

    • Abstract: In order to accurately count the number of animals grazing on grassland, we present a livestock detection algorithm using modified versions of U-net and Google Inception-v4 net. This method works well to detect dense and touching instances. We also introduce a dataset for livestock detection in aerial images, consisting of 89 aerial images collected by quadcopter. Each image has resolution of about 3000×4000 pixels, and contains livestock with varying shapes, scales, and orientations. We evaluate our method by comparison against Faster RCNN and Yolo-v3 algorithms using our aerial livestock dataset. The average precision of our method is better than that of Yolo-v3 and comparable to that of Faster RCNN.
      PubDate: 2019-03-30
       
  • No-reference synthetic image quality assessment with convolutional neural
           network and local image saliency

    • Abstract: Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
      PubDate: 2019-03-30
       
  • Message from the Editor-in-Chief

    • PubDate: 2019-03-01
       
  • Recurrent 3D attentional networks for end-to-end active object recognition

    • Abstract: Active vision is inherently attention-driven: an agent actively selects views to attend in order to rapidly perform a vision task while improving its internal representation of the scene being observed. Inspired by the recent success of attention-based models in 2D vision tasks based on single RGB images, we address multi-view depth-based active object recognition using an attention mechanism, by use of an end-to-end recurrent 3D attentional network. The architecture takes advantage of a recurrent neural network to store and update an internal representation. Our model, trained with 3D shape datasets, is able to iteratively attend the best views targeting an object of interest for recognizing it. To realize 3D view selection, we derive a 3D spatial transformer network. It is differentiable, allowing training with backpropagation, and so achieving much faster convergence than the reinforcement learning employed by most existing attention-based models. Experiments show that our method, with only depth input, achieves state-of-the-art next-best-view performance both in terms of time taken and recognition accuracy.
      PubDate: 2019-03-01
       
  • Discernible image mosaic with edge-aware adaptive tiles

    • Abstract: We present a novel method to produce discernible image mosaics, with relatively large image tiles replaced by images drawn from a database, to resemble a target image. Compared to existing works on image mosaics, the novelty of our method is two-fold. Firstly, believing that the presence of visual edges in the final image mosaic strongly supports image perception, we develop an edge-aware photo retrieval scheme which emphasizes the preservation of visual edges in the target image. Secondly, unlike most previous works which apply a pre-determined partition to an input image, our image mosaics are composed of adaptive tiles, whose sizes are determined based on the available images in the database and the objective of maximizing resemblance to the target image. We show discernible image mosaics obtained by our method, using image collections of only moderate size. To evaluate our method, we conducted a user study to validate that the image mosaics generated present both globally and locally appropriate visual impressions to the human observers. Visual comparisons with existing techniques demonstrate the superiority of our method in terms of mosaic quality and perceptibility.
      PubDate: 2019-03-01
       
  • Real-time stereo matching on CUDA using Fourier descriptors and dynamic
           programming

    • Abstract: Computation of stereoscopic depth and disparity map extraction are dynamic research topics. A large variety of algorithms has been developed, among which we cite feature matching, moment extraction, and image representation using descriptors to determine a disparity map. This paper proposes a new method for stereo matching based on Fourier descriptors. The robustness of these descriptors under photometric and geometric transformations provides a better representation of a template or a local region in the image. In our work, we specifically use generalized Fourier descriptors to compute a robust cost function. Then, a box filter is applied for cost aggregation to enforce a smoothness constraint between neighboring pixels. Optimization and disparity calculation are done using dynamic programming, with a cost based on similarity between generalized Fourier descriptors using Euclidean distance. This local cost function is used to optimize correspondences. Our stereo matching algorithm is evaluated using the Middlebury stereo benchmark; our approach has been implemented on parallel high-performance graphics hardware using CUDA to accelerate our algorithm, giving a real-time implementation.
      PubDate: 2019-03-01
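    The box-filter cost aggregation step mentioned in this abstract can be sketched in isolation (a minimal illustration with hypothetical names; the paper's actual per-pixel cost comes from generalized Fourier descriptors and its optimization uses dynamic programming, neither of which is reproduced here). Given a cost volume of shape (H, W, D) holding a matching cost for each pixel and disparity hypothesis, each cost is averaged over a square neighbourhood before the disparity is selected:

    ```python
    import numpy as np

    def aggregate_costs(cost, r=1):
        """Box-filter aggregation of a matching-cost volume of shape
        (H, W, D): each pixel's cost for every disparity hypothesis is
        averaged over a (2r+1) x (2r+1) neighbourhood, enforcing a
        smoothness constraint between neighbouring pixels."""
        H, W, D = cost.shape
        pad = np.pad(cost, ((r, r), (r, r), (0, 0)), mode="edge")
        out = np.zeros((H, W, D), dtype=np.float64)
        for dy in range(2 * r + 1):      # accumulate shifted copies
            for dx in range(2 * r + 1):
                out += pad[dy:dy + H, dx:dx + W]
        return out / (2 * r + 1) ** 2

    def winner_take_all(cost, r=1):
        """Pick, per pixel, the disparity with the lowest aggregated cost."""
        return np.argmin(aggregate_costs(cost, r), axis=2)
    ```

    In a real pipeline the shifted-copy loop would be replaced by a running-sum (integral image) filter so aggregation cost is independent of the window radius.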
       
  • Automated pebble mosaic stylization of images

    • Abstract: Digital mosaics have usually used regular tiles, simulating historical tessellated mosaics. In this paper, we present a method for synthesizing pebble mosaics, a historical mosaic style in which the tiles are rounded pebbles. We address both the tiling problem, of distributing pebbles over the image plane so as to approximate the input image content, and the problem of geometry, creating a smooth rounded shape for each pebble. We adopt simple linear iterative clustering (SLIC) to obtain elongated tiles conforming to image content, and smooth the resulting irregular shapes into shapes resembling pebble cross-sections. Then, we create an interior and exterior contour for each pebble and solve a Laplace equation over the region between them to obtain height-field geometry. The resulting pebble set approximates the input image while representing full geometry that can be rendered and textured for a highly detailed representation of a pebble mosaic.
      PubDate: 2019-03-01
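    The Laplace-equation step described in this abstract has a compact numerical form (a sketch under assumptions, not the authors' solver; the function and argument names are hypothetical). Fixing heights on the two contours and relaxing the pixels between them toward the average of their neighbours yields a smooth height field:

    ```python
    import numpy as np

    def laplace_height_field(fixed, free_mask, n_iter=2000):
        """Jacobi relaxation for the Laplace equation: pixels where
        free_mask is True are repeatedly replaced by the average of
        their four neighbours, while the remaining pixels keep the
        values given in `fixed` (the contour boundary conditions)."""
        h = fixed.astype(np.float64).copy()
        for _ in range(n_iter):
            avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                          np.roll(h, 1, 1) + np.roll(h, -1, 1))
            h[free_mask] = avg[free_mask]
        return h
    ```

    Pinning the exterior contour at height 0 and the interior contour at height 1 then produces the smooth, dome-like pebble cross-section the abstract refers to.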
       
  • ShadowGAN: Shadow synthesis for virtual objects with conditional
           adversarial networks

    • Abstract: We introduce ShadowGAN, a generative adversarial network (GAN) for synthesizing shadows for virtual objects inserted in images. Given a target image containing several existing objects with shadows, and an input source object with a specified insertion position, the network generates a realistic shadow for the source object. The shadow is synthesized by a generator; using the proposed local adversarial and global adversarial discriminators, the synthetic shadow’s appearance is locally realistic in shape, and globally consistent with other objects’ shadows in terms of shadow direction and area. To overcome the lack of training data, we produced training samples based on public 3D models and rendering technology. Experimental results from a user study show that the synthetic shadowed results look natural and authentic.
      PubDate: 2019-03-01
       
  • BING: Binarized normed gradients for objectness estimation at 300fps

    • Abstract: Training a generic objectness measure to produce object proposals has recently become of significant interest. We observe that generic objects with well-defined closed boundaries can be detected by looking at the norm of gradients, with a suitable resizing of their corresponding image windows to a small fixed size. Based on this observation, and for computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g., add, bitwise shift, etc.). To improve localization quality of the proposals while maintaining efficiency, we propose a novel fast segmentation method and demonstrate its effectiveness for improving BING’s localization performance, when used in multi-thresholding straddling expansion (MTSE) post-processing. On the challenging PASCAL VOC2007 dataset, using 1000 proposals per image and intersection-over-union threshold of 0.5, our proposal method achieves a 95.6% object detection rate and 78.6% mean average best overlap in less than 0.005 second per image.
      PubDate: 2019-03-01
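    The 64D feature construction described in this abstract can be sketched as follows (an illustration under assumptions, not the authors' code; the function name is hypothetical, a nearest-neighbour resize stands in for whatever resampling the paper uses, and the binarization step of BING itself is omitted):

    ```python
    import numpy as np

    def normed_gradient_feature(window):
        """Resize an arbitrary grayscale image window to a fixed 8x8,
        then take a gradient magnitude at each of the 64 positions,
        giving a 64D normed-gradient (NG) descriptor."""
        h, w = window.shape
        ys = np.arange(8) * h // 8          # nearest-neighbour resize
        xs = np.arange(8) * w // 8
        small = window[np.ix_(ys, xs)].astype(np.float64)
        gx = np.zeros_like(small)
        gy = np.zeros_like(small)
        gx[:, :-1] = np.abs(np.diff(small, axis=1))
        gy[:-1, :] = np.abs(np.diff(small, axis=0))
        # min(|gx| + |gy|, 255) is a cheap surrogate for the gradient norm.
        return np.minimum(gx + gy, 255).ravel()
    ```

    Because every candidate window collapses to the same 64D vector, a single linear model scored with a few atomic operations can rank windows by objectness, which is what makes the 300fps figure in the title plausible.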
       
  • Removing fences from sweep motion videos using global 3D reconstruction
           and fence-aware light field rendering

    • Abstract: Diminishing the appearance of a fence in an image is a challenging research area due to the characteristics of fences (thinness, lack of texture, etc.) and the need for occluded background restoration. In this paper, we describe a fence removal method for an image sequence captured by a user making a sweep motion, in which occluded background is potentially observed. To make use of geometric and appearance information such as consecutive images, we use two well-known approaches: structure from motion and light field rendering. Results using real image sequences show that our method can stably segment fences and preserve background details for various fence and background combinations. A new video without the fence, with frame coherence, can be successfully provided.
      PubDate: 2019-03-01
       
  • Image-based appearance acquisition of effect coatings

    • Abstract: Paint manufacturers strive to introduce unique visual effects to coatings in order to visually communicate functional properties of products using value-added, customized design. However, these effects often feature complex, angularly dependent, spatially-varying behavior, thus representing a challenge in digital reproduction. In this paper we analyze several approaches to capturing spatially-varying appearances of effect coatings. We compare a baseline approach based on a bidirectional texture function (BTF) with four variants of half-difference parameterization. Through a psychophysical study, we determine minimal sampling along individual dimensions of this parameterization. We conclude that, compared to BTF, bivariate representations better preserve visual fidelity of effect coatings, better characterizing near-specular behavior and significantly restricting the number of images which must be captured.
      PubDate: 2019-03-01
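    The half-difference parameterization compared in this abstract can be illustrated with a small sketch (hypothetical names; azimuthal angles are omitted, the surface normal is assumed to be the z axis, and this is not the authors' capture code). Each (incoming, outgoing) direction pair is mapped to the angle of the half vector to the normal and the angle between the incoming direction and the half vector:

    ```python
    import numpy as np

    def half_diff_angles(wi, wo):
        """Map an (incoming, outgoing) direction pair to (theta_h,
        theta_d): the half vector's angle to the surface normal, and
        the angle between the incoming direction and the half vector."""
        wi = np.asarray(wi, dtype=np.float64)
        wi = wi / np.linalg.norm(wi)
        wo = np.asarray(wo, dtype=np.float64)
        wo = wo / np.linalg.norm(wo)
        h = wi + wo
        h = h / np.linalg.norm(h)
        theta_h = np.arccos(np.clip(h[2], -1.0, 1.0))
        theta_d = np.arccos(np.clip(np.dot(wi, h), -1.0, 1.0))
        return theta_h, theta_d
    ```

    Mirror-like reflection always maps to theta_h ≈ 0 regardless of elevation, so the sharp near-specular lobes of effect coatings line up along one axis of this parameterization; that alignment is what lets a bivariate sampling capture them with far fewer images than a full angular grid.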
       
  • FusionMLS: Highly dynamic 3D reconstruction with consumer-grade RGB-D
           cameras

    • Abstract: Multi-view dynamic three-dimensional reconstruction has typically required the use of custom shutter-synchronized camera rigs in order to capture scenes containing rapid movements or complex topology changes. In this paper, we demonstrate that multiple unsynchronized low-cost RGB-D cameras can be used for the same purpose. To alleviate issues caused by unsynchronized shutters, we propose a novel depth frame interpolation technique that allows synchronized data capture from highly dynamic 3D scenes. To manage the resulting huge number of input depth images, we also introduce an efficient moving least squares-based volumetric reconstruction method that generates triangle meshes of the scene. Our approach does not store the reconstruction volume in memory, making it memory-efficient and scalable to large scenes. Our implementation is completely GPU based and works in real time. The results shown herein, obtained with real data, demonstrate the effectiveness of our proposed method and its advantages compared to state-of-the-art approaches.
      PubDate: 2018-12-01
       
  • Deforming generalized cylinders without self-intersection by means of a
           parametric center curve

    • Abstract: Large-scale deformations of a tubular object, or generalized cylinder, are often defined by a target shape for its center curve, typically using a parametric target curve. This task is non-trivial for free-form deformations or direct manipulation methods because it is hard to manually control the centerline by adjusting control points. Most skeleton-based methods are no better, again due to the small number of manually adjusted control points. In this paper, we propose a method to deform a generalized cylinder based on its skeleton composed of a centerline and orthogonal cross sections. Although we are not the first to use such a skeleton, we propose a novel skeletonization method that tries to minimize the number of intersections between neighboring cross sections by means of a relative curvature condition to detect intersections. The mesh deformation is first defined geometrically by deforming the centerline and mapping the cross sections. Rotation minimizing frames are used during mapping to control twisting. Secondly, given displacements on the cross sections, the deformation is decomposed into finely subdivided regions. We limit distortion at these vertices by minimizing an elastic thin shell bending energy, in linear time. Our method can handle complicated generalized cylinders such as the human colon.
      PubDate: 2018-12-01
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


 

JournalTOCs © 2009-