Subjects -> ENGINEERING (Total: 2688 journals)
    - CHEMICAL ENGINEERING (229 journals)
    - CIVIL ENGINEERING (237 journals)
    - ELECTRICAL ENGINEERING (176 journals)
    - ENGINEERING (1325 journals)
    - ENGINEERING MECHANICS AND MATERIALS (452 journals)
    - HYDRAULIC ENGINEERING (56 journals)
    - INDUSTRIAL ENGINEERING (98 journals)
    - MECHANICAL ENGINEERING (115 journals)

ENGINEERING (1325 journals)

Showing 601 - 800 of 1325 Journals sorted alphabetically
International Journal of Quality Assurance in Engineering and Technology Education     Full-text available via subscription   (Followers: 3)
International Journal of Quality Engineering and Technology     Hybrid Journal   (Followers: 4)
International Journal of Quantum Information     Hybrid Journal   (Followers: 6)
International Journal of Rapid Manufacturing     Hybrid Journal   (Followers: 3)
International Journal of Recent Contributions from Engineering, Science & IT     Open Access  
International Journal of Reliability, Quality and Safety Engineering     Hybrid Journal   (Followers: 14)
International Journal of Renewable Energy Technology     Hybrid Journal   (Followers: 8)
International Journal of Robust and Nonlinear Control     Hybrid Journal   (Followers: 10)
International Journal of Science and Innovative Technology     Open Access  
International Journal of Science Engineering and Advance Technology     Open Access  
International Journal of Sediment Research     Full-text available via subscription   (Followers: 2)
International Journal of Self-Propagating High-Temperature Synthesis     Hybrid Journal  
International Journal of Service Science, Management, Engineering, and Technology     Full-text available via subscription   (Followers: 2)
International Journal of Signal and Imaging Systems Engineering     Hybrid Journal  
International Journal of Six Sigma and Competitive Advantage     Hybrid Journal   (Followers: 4)
International Journal of Social Robotics     Hybrid Journal   (Followers: 3)
International Journal of Software Engineering and Knowledge Engineering     Hybrid Journal   (Followers: 6)
International Journal of Space Science and Engineering     Hybrid Journal   (Followers: 13)
International Journal of Speech Technology     Hybrid Journal   (Followers: 7)
International Journal of Spray and Combustion Dynamics     Full-text available via subscription   (Followers: 15)
International Journal of Student Project Reporting     Hybrid Journal   (Followers: 4)
International Journal of Surface Engineering and Interdisciplinary Materials Science     Full-text available via subscription   (Followers: 1)
International Journal of Surface Science and Engineering     Hybrid Journal   (Followers: 6)
International Journal of Sustainable Engineering     Hybrid Journal   (Followers: 4)
International Journal of Sustainable Lighting     Open Access  
International Journal of Sustainable Manufacturing     Hybrid Journal   (Followers: 4)
International Journal of Systems and Service-Oriented Engineering     Full-text available via subscription  
International Journal of Systems Assurance Engineering and Management     Hybrid Journal  
International Journal of Systems, Control and Communications     Hybrid Journal   (Followers: 6)
International Journal of Technoethics     Full-text available via subscription   (Followers: 2)
International Journal of Technology Management and Sustainable Development     Hybrid Journal   (Followers: 1)
International Journal of Technology Policy and Law     Hybrid Journal   (Followers: 6)
International Journal of Telemedicine and Applications     Open Access   (Followers: 4)
International Journal of Thermal Sciences     Hybrid Journal   (Followers: 18)
International Journal of Thermodynamics     Open Access   (Followers: 15)
International Journal of Transportation Engineering     Open Access   (Followers: 2)
International Journal of Turbomachinery, Propulsion and Power     Open Access   (Followers: 23)
International Journal of Ultra Wideband Communications and Systems     Hybrid Journal  
International Journal of Vehicle Autonomous Systems     Hybrid Journal  
International Journal of Vehicle Design     Hybrid Journal   (Followers: 6)
International Journal of Vehicle Information and Communication Systems     Hybrid Journal   (Followers: 1)
International Journal of Vehicle Noise and Vibration     Hybrid Journal   (Followers: 6)
International Journal of Vehicle Safety     Hybrid Journal   (Followers: 3)
International Journal of Virtual Technology and Multimedia     Hybrid Journal   (Followers: 2)
International Journal of Wavelets, Multiresolution and Information Processing     Hybrid Journal  
International Journal on Artificial Intelligence Tools     Hybrid Journal   (Followers: 9)
International Journal on Smart Sensing and Intelligent Systems     Open Access  
International Nano Letters     Open Access   (Followers: 6)
International Review of Applied Sciences and Engineering     Full-text available via subscription  
International Scholarly Research Notices     Open Access   (Followers: 121)
International Scientific and Vocational Studies Journal     Open Access  
International Scientific Journal of Engineering and Technology (ISJET)     Open Access  
Inventions     Open Access  
Inventum     Open Access  
Inverse Problems in Science and Engineering     Hybrid Journal   (Followers: 3)
Investiga : TEC     Open Access  
Ionics     Hybrid Journal   (Followers: 2)
IPTEK The Journal for Technology and Science     Open Access  
Iranian Journal of Optimization     Open Access   (Followers: 2)
Iranian Journal of Science and Technology, Transactions A : Science     Hybrid Journal  
IRBM News     Full-text available via subscription  
Ironmaking & Steelmaking     Hybrid Journal   (Followers: 4)
ISA Transactions     Full-text available via subscription  
ISSS Journal of Micro and Smart Systems     Hybrid Journal   (Followers: 3)
IT Professional     Full-text available via subscription   (Followers: 24)
Iteckne     Open Access  
J-ENSITEC : Journal Of Engineering and Sustainable Technology     Open Access   (Followers: 4)
Johnson Matthey Technology Review     Open Access  
Journal of Advanced College of Engineering and Management     Open Access  
Journal of Advanced Joining Processes     Open Access  
Journal of Advanced Manufacturing Systems     Hybrid Journal   (Followers: 5)
Journal of Aerodynamics     Open Access   (Followers: 27)
Journal of Aerosol Science     Hybrid Journal   (Followers: 7)
Journal of Aerospace Engineering     Full-text available via subscription   (Followers: 66)
Journal of Alloys and Compounds     Hybrid Journal   (Followers: 16)
Journal of Analytical and Applied Pyrolysis     Hybrid Journal   (Followers: 5)
Journal of Analytical Science & Technology     Open Access   (Followers: 4)
Journal of Analytical Sciences, Methods and Instrumentation     Open Access   (Followers: 4)
Journal of Applied Engineering Sciences     Open Access  
Journal of Applied Physics     Hybrid Journal   (Followers: 69)
Journal of Applied Research and Technology     Open Access  
Journal of Applied Science and Technology     Full-text available via subscription   (Followers: 1)
Journal of Applied Sciences     Open Access   (Followers: 4)
Journal of Architectural and Engineering Research     Open Access   (Followers: 3)
Journal of Architectural Engineering     Full-text available via subscription   (Followers: 4)
Journal of Automation and Control     Open Access   (Followers: 9)
Journal of Aviation Technology and Engineering     Open Access   (Followers: 10)
Journal of Biological Dynamics     Open Access   (Followers: 1)
Journal of Biomedical Science     Open Access   (Followers: 4)
Journal of Biomolecular NMR     Hybrid Journal   (Followers: 6)
Journal of Biosciences     Open Access   (Followers: 1)
Journal of Building Pathology and Rehabilitation     Hybrid Journal  
Journal of Catalysis     Hybrid Journal   (Followers: 11)
Journal of Catalyst & Catalysis     Full-text available via subscription   (Followers: 2)
Journal of Central South University     Hybrid Journal   (Followers: 1)
Journal of Chemical and Petroleum Engineering     Open Access   (Followers: 1)
Journal of China Coal Society     Open Access  
Journal of China Universities of Posts and Telecommunications     Full-text available via subscription  
Journal of Cleaner Production     Hybrid Journal   (Followers: 27)
Journal of Coastal and Riverine Flood Risk (JCRFR)     Open Access   (Followers: 1)
Journal of Cold Regions Engineering     Full-text available via subscription   (Followers: 3)
Journal of Combinatorial Designs     Hybrid Journal   (Followers: 4)
Journal of Combustion     Open Access   (Followers: 40)
Journal of Computational and Nonlinear Dynamics     Full-text available via subscription   (Followers: 6)
Journal of Computational and Theoretical Nanoscience     Full-text available via subscription  
Journal of Computational Biology     Hybrid Journal   (Followers: 9)
Journal of Computational Design and Engineering     Open Access   (Followers: 1)
Journal of Computational Electronics     Hybrid Journal   (Followers: 5)
Journal of Computational Multiphase Flows     Open Access   (Followers: 1)
Journal of Computing and Information Science in Engineering     Full-text available via subscription   (Followers: 1)
Journal of Coupled Systems and Multiscale Dynamics     Full-text available via subscription  
Journal of Dairy Science     Open Access   (Followers: 12)
Journal of Delta Urbanism     Open Access   (Followers: 2)
Journal of Display Technology     Hybrid Journal   (Followers: 3)
Journal of Dynamic Systems, Measurement, and Control     Full-text available via subscription   (Followers: 14)
Journal of Dynamical and Control Systems     Hybrid Journal   (Followers: 7)
Journal of Earthquake Engineering     Hybrid Journal   (Followers: 14)
Journal of Elasticity     Hybrid Journal   (Followers: 7)
Journal of Electroceramics     Hybrid Journal  
Journal of Electromagnetic Waves and Applications     Hybrid Journal   (Followers: 10)
Journal of Electronic Testing     Hybrid Journal   (Followers: 2)
Journal of Electronics Cooling and Thermal Control     Open Access   (Followers: 9)
Journal of Electrostatics     Hybrid Journal   (Followers: 2)
Journal of Energy Engineering     Full-text available via subscription   (Followers: 7)
Journal of Energy Resources Technology     Full-text available via subscription   (Followers: 4)
Journal of Engineering     Open Access  
Journal of Engineering     Open Access   (Followers: 1)
Journal of Engineering and Applied Science     Open Access  
Journal of Engineering and Technological Sciences     Open Access   (Followers: 2)
Journal of Engineering Design     Hybrid Journal   (Followers: 17)
Journal of Engineering Education     Hybrid Journal   (Followers: 7)
Journal of Engineering for Gas Turbines and Power     Full-text available via subscription   (Followers: 15)
Journal of Engineering Mathematics     Hybrid Journal   (Followers: 2)
Journal of Engineering Mechanics     Full-text available via subscription   (Followers: 16)
Journal of Engineering Physics and Thermophysics     Hybrid Journal   (Followers: 2)
Journal of Engineering Research     Open Access   (Followers: 1)
Journal of Engineering Research and Reports     Open Access  
Journal of Engineering Technology and Applied Sciences     Open Access  
Journal of Engineering Thermophysics     Hybrid Journal   (Followers: 4)
Journal of Engineering, Design and Technology     Hybrid Journal   (Followers: 17)
Journal of Engineering, Project, and Production Management     Open Access  
Journal of Environmental & Engineering Geophysics     Hybrid Journal   (Followers: 2)
Journal of Environmental Engineering     Full-text available via subscription   (Followers: 49)
Journal of Environmental Engineering and Landscape Management     Open Access   (Followers: 8)
Journal of Environmental Engineering and Science     Hybrid Journal   (Followers: 2)
Journal of Experimental Nanoscience     Hybrid Journal  
Journal of Fire Sciences     Hybrid Journal   (Followers: 6)
Journal of Flood Risk Management     Hybrid Journal   (Followers: 14)
Journal of Flow Control, Measurement & Visualization     Open Access   (Followers: 1)
Journal of Fluids Engineering     Full-text available via subscription   (Followers: 30)
Journal of Fourier Analysis and Applications     Hybrid Journal   (Followers: 3)
Journal of Fuel Cell Science and Technology     Full-text available via subscription   (Followers: 2)
Journal of Functional Analysis     Full-text available via subscription   (Followers: 3)
Journal of Fundamental and Applied Sciences     Open Access  
Journal of Geological Research     Open Access   (Followers: 1)
Journal of Geotechnical and Geoenvironmental Engineering     Full-text available via subscription   (Followers: 30)
Journal of Geotechnical Engineering     Full-text available via subscription   (Followers: 4)
Journal of Geovisualization and Spatial Analysis     Hybrid Journal  
Journal of Global Optimization     Hybrid Journal   (Followers: 6)
Journal of Graduate School of Natural and Applied Sciences of Mehmet Akif Ersoy University     Open Access  
Journal of Hazardous Materials Advances     Open Access  
Journal of Healthcare Engineering     Open Access   (Followers: 3)
Journal of Heat Transfer     Full-text available via subscription   (Followers: 66)
Journal of Humanitarian Engineering     Open Access   (Followers: 1)
Journal of Hydraulic Engineering     Full-text available via subscription   (Followers: 27)
Journal of Hyperspectral Remote Sensing     Open Access   (Followers: 23)
Journal of Imaging     Open Access   (Followers: 3)
Journal of Industrial and Production Engineering     Hybrid Journal   (Followers: 4)
Journal of Industrial Engineering and Management     Open Access   (Followers: 5)
Journal of Industrial Safety Engineering     Full-text available via subscription   (Followers: 6)
Journal of Inequalities and Applications     Open Access  
Journal of Infrared, Millimeter and Terahertz Waves     Hybrid Journal   (Followers: 3)
Journal of Infrastructure Preservation and Resilience     Open Access   (Followers: 1)
Journal of Institute of Science and Technology     Open Access  
Journal of Intelligent and Connected Vehicles     Open Access   (Followers: 1)
Journal of Intelligent Systems : Theory and Applications     Open Access  
Journal of International Maritime Safety, Environmental Affairs, and Shipping     Open Access   (Followers: 1)
Journal of Iron and Steel Research International     Hybrid Journal   (Followers: 7)
Journal of Irrigation and Drainage Engineering     Full-text available via subscription   (Followers: 24)
Journal of King Saud University - Engineering Sciences     Open Access  
Journal of KONBiN     Open Access   (Followers: 4)
Journal of Liquid Chromatography & Related Technologies     Hybrid Journal   (Followers: 7)
Journal of Management in Engineering     Full-text available via subscription   (Followers: 9)
Journal of Manufacturing Science and Engineering     Full-text available via subscription   (Followers: 32)
Journal of Manufacturing Systems     Full-text available via subscription   (Followers: 3)
Journal of Manufacturing Technology Management     Hybrid Journal   (Followers: 3)
Journal of Mechanical Design and Testing     Open Access   (Followers: 4)
Journal of Mechatronics     Full-text available via subscription   (Followers: 3)
Journal of Mega Infrastructure & Sustainable Development     Hybrid Journal  
Journal of Metallurgy     Open Access   (Followers: 8)
Journal of Mining Institute     Open Access  
Journal of Motor Behavior     Hybrid Journal   (Followers: 8)
Journal of Multivariate Analysis     Hybrid Journal   (Followers: 15)
Journal of Nanoengineering and Nanomanufacturing     Full-text available via subscription  
Journal of Nanoparticle Research     Hybrid Journal   (Followers: 3)
Journal of Nanoscience     Open Access  
Journal of Nanoscience and Nanotechnology     Full-text available via subscription   (Followers: 8)
Journal of NanoScience, NanoEngineering & Applications     Full-text available via subscription  
Journal of Nanotechnology     Open Access   (Followers: 10)
Journal of Nanotechnology in Engineering and Medicine     Full-text available via subscription   (Followers: 4)


Similar Journals
Journal of Imaging
Number of Followers: 3  

  This is an Open Access journal
ISSN (Online) 2313-433X
Published by MDPI  [84 journals]
  • J. Imaging, Vol. 8, Pages 118: Discriminative Shape Feature Pooling in
           Deep Neural Networks

    • Authors: Gang Hu, Chahna Dixit, Guanqiu Qi
      First page: 118
      Abstract: Although deep learning approaches are able to generate generic image features from massive labeled data, discriminative handcrafted features still have advantages in providing explicit domain knowledge and reflecting intuitive visual understanding. Much of the existing research focuses on integrating both handcrafted features and deep networks to leverage the benefits. However, the issues of parameter quality have not been effectively solved in existing applications of handcrafted features in deep networks. In this research, we propose a method that enriches deep network features by utilizing the injected discriminative shape features (generic edge tokens and curve partitioning points) to adjust the network’s internal parameter update process. Thus, the modified neural networks are trained under the guidance of specific domain knowledge, and they are able to generate image representations that incorporate the benefits from both handcrafted and deep learned features. The comparative experiments were performed on several benchmark datasets. The experimental results confirmed our method works well on both large and small training datasets. Additionally, compared with existing models using either handcrafted features or deep network representations, our method not only improves the corresponding performance, but also reduces the computational costs.
      Citation: Journal of Imaging
      PubDate: 2022-04-20
      DOI: 10.3390/jimaging8050118
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 119: X-ray Digital Radiography and Computed
           Tomography

    • Authors: Maria Pia Morigi, Fauzia Albertin
      First page: 119
      Abstract: In recent years, X-ray imaging has rapidly grown and spread beyond the medical field; today, it plays a key role in diverse research areas [...]
      Citation: Journal of Imaging
      PubDate: 2022-04-21
      DOI: 10.3390/jimaging8050119
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 120: Time Synchronization of Multimodal
           Physiological Signals through Alignment of Common Signal Types and Its
           Technical Considerations in Digital Health

    • Authors: Ran Xiao, Cheng Ding, Xiao Hu
      First page: 120
      Abstract: Background: Despite advancements in digital health, it remains challenging to obtain precise time synchronization of multimodal physiological signals collected through different devices. Existing algorithms mainly rely on specific physiological features that restrict the use cases to certain signal types. The present study aims to complement previous algorithms and solve a niche time alignment problem when a common signal type is available across different devices. Methods: We proposed a simple time alignment approach based on the direct cross-correlation of temporal amplitudes, making it agnostic and thus generalizable to different signal types. The approach was tested on a public electrocardiographic (ECG) dataset to simulate the synchronization of signals collected from an ECG watch and an ECG patch. The algorithm was evaluated considering key practical factors, including sample durations, signal quality index (SQI), resilience to noise, and varying sampling rates. Results: The proposed approach requires a short sample duration (30 s) to operate, and demonstrates stable performance across varying sampling rates and resilience to common noise. The lowest synchronization delay achieved by the algorithm is 0.13 s with the integration of SQI thresholding. Conclusions: Our findings help improve the time alignment of multimodal signals in digital health and advance healthcare toward precise remote monitoring and disease prevention.
      Citation: Journal of Imaging
      PubDate: 2022-04-21
      DOI: 10.3390/jimaging8050120
      Issue No: Vol. 8, No. 5 (2022)
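
      A minimal sketch of the alignment idea described above, direct cross-correlation of temporal amplitudes from a common signal type; the common resampling rate, the normalisation step, and the function name are illustrative assumptions rather than the authors' exact pipeline.

        import numpy as np
        from scipy.signal import correlate, resample_poly

        def estimate_lag_seconds(sig_a, fs_a, sig_b, fs_b, fs_common=250):
            """Estimate how far sig_b lags sig_a, in seconds, via cross-correlation."""
            # Bring both recordings to a common (assumed integer) sampling rate.
            a = resample_poly(sig_a, int(fs_common), int(fs_a))
            b = resample_poly(sig_b, int(fs_common), int(fs_b))
            # Normalise amplitudes so devices with different gains stay comparable.
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            xc = correlate(a, b, mode="full")
            lag_samples = int(np.argmax(xc)) - (len(b) - 1)
            return lag_samples / fs_common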
       
  • J. Imaging, Vol. 8, Pages 121: Weakly Supervised Polyp Segmentation in
           Colonoscopy Images Using Deep Neural Networks

    • Authors: Siwei Chen, Gregor Urban, Pierre Baldi
      First page: 121
      Abstract: Colorectal cancer (CRC) is a leading cause of mortality worldwide, and preventive screening modalities such as colonoscopy have been shown to noticeably decrease CRC incidence and mortality. Improving colonoscopy quality remains a challenging task due to limiting factors including the training levels of colonoscopists and the variability in polyp sizes, morphologies, and locations. Deep learning methods have led to state-of-the-art systems for the identification of polyps in colonoscopy videos. In this study, we show that deep learning can also be applied to the segmentation of polyps in real time, and the underlying models can be trained using mostly weakly labeled data, in the form of bounding box annotations that do not contain precise contour information. A novel dataset, Polyp-Box-Seg of 4070 colonoscopy images with polyps from over 2000 patients, is collected, and a subset of 1300 images is manually annotated with segmentation masks. A series of models is trained to evaluate various strategies that utilize bounding box annotations for segmentation tasks. A model trained on the 1300 polyp images with segmentation masks achieves a dice coefficient of 81.52%, which improves significantly to 85.53% when using a weakly supervised strategy leveraging bounding box images. The Polyp-Box-Seg dataset, together with a real-time video demonstration of the segmentation system, are publicly available.
      Citation: Journal of Imaging
      PubDate: 2022-04-22
      DOI: 10.3390/jimaging8050121
      Issue No: Vol. 8, No. 5 (2022)
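
      For reference, the dice coefficient quoted above is the standard overlap score between a predicted and a ground-truth binary mask; a minimal NumPy version (the epsilon guard is an illustrative choice):

        import numpy as np

        def dice_coefficient(pred_mask, true_mask, eps=1e-7):
            """Dice = 2*|A intersect B| / (|A| + |B|) for two binary segmentation masks."""
            pred = pred_mask.astype(bool)
            true = true_mask.astype(bool)
            intersection = np.logical_and(pred, true).sum()
            return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)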
       
  • J. Imaging, Vol. 8, Pages 122: Surreptitious Adversarial Examples through
           Functioning QR Code

    • Authors: Aran Chindaudom, Prarinya Siritanawan, Karin Sumongkayothin, Kazunori Kotani
      First page: 122
      Abstract: The continuous advances in the technology of Convolutional Neural Network (CNN) and Deep Learning have been applied to facilitate various tasks of human life. However, security risks of the users’ information and privacy have been increasing rapidly due to the models’ vulnerabilities. We have developed a novel method of adversarial attack that can conceal its intent from human intuition through the use of a modified QR code. The modified QR code can be consistently scanned with a reader while retaining adversarial efficacy against image classification models. The QR adversarial patch was created and embedded into an input image to generate adversarial examples, which were trained against CNN image classification models. Experiments were performed to investigate the trade-off in different patch shapes and find the patch’s optimal balance of scannability and adversarial efficacy. Furthermore, we have investigated whether particular classes of images are more resistant or vulnerable to the adversarial QR attack, and we also investigated the generality of the adversarial attack across different image classification models.
      Citation: Journal of Imaging
      PubDate: 2022-04-22
      DOI: 10.3390/jimaging8050122
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 123: Microwave Imaging for Early Breast Cancer
           Detection: Current State, Challenges, and Future Directions

    • Authors: Nour AlSawaftah, Salma El-Abed, Salam Dhou, Amer Zakaria
      First page: 123
      Abstract: Breast cancer is the most commonly diagnosed cancer type and is the leading cause of cancer-related death among females worldwide. Breast screening and early detection are currently the most successful approaches for the management and treatment of this disease. Several imaging modalities are currently utilized for detecting breast cancer, of which microwave imaging (MWI) is gaining quite a lot of attention as a promising diagnostic tool for early breast cancer detection. MWI is a noninvasive, relatively inexpensive, fast, convenient, and safe screening tool. The purpose of this paper is to provide an up-to-date survey of the principles, developments, and current research status of MWI for breast cancer detection. This paper is structured into two sections; the first is an overview of current MWI techniques used for detecting breast cancer, followed by an explanation of the working principle behind MWI and its various types, namely, microwave tomography and radar-based imaging. In the second section, a review of the initial experiments along with more recent studies on the use of MWI for breast cancer detection is presented. Furthermore, the paper summarizes the challenges facing MWI as a breast cancer detection tool and provides future research directions. On the whole, MWI has proven its potential as a screening tool for breast cancer detection, both as a standalone or complementary technique. However, there are a few challenges that need to be addressed to unlock the full potential of this imaging modality and translate it to clinical settings.
      Citation: Journal of Imaging
      PubDate: 2022-04-23
      DOI: 10.3390/jimaging8050123
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 124: Extraction and Calculation of Roadway Area
           from Satellite Images Using Improved Deep Learning Model and
           Post-Processing

    • Authors: Varun Yerram, Hiroyuki Takeshita, Yuji Iwahori, Yoshitsugu Hayashi, M. K. Bhuyan, Shinji Fukui, Boonserm Kijsirikul, Aili Wang
      First page: 124
      Abstract: Roadway area calculation is a novel problem in remote sensing and urban planning. This paper models this problem as a two-step problem, roadway extraction, and area calculation. Roadway extraction from satellite images is a problem that has been tackled many times before. This paper proposes a method using pixel resolution to calculate the area of the roads covered in satellite images. The proposed approach uses novel U-net and Resnet architectures called U-net++ and ResNeXt. The state-of-the-art model is combined with the proposed efficient post-processing approach to improve the overlap with ground truth labels. The performance of the proposed road extraction algorithm is evaluated on the Massachusetts dataset and it is shown that the proposed approach outperforms the existing solutions which use models from the U-net family.
      Citation: Journal of Imaging
      PubDate: 2022-04-25
      DOI: 10.3390/jimaging8050124
      Issue No: Vol. 8, No. 5 (2022)
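
      The area-calculation step described above amounts to counting road pixels and scaling by the ground area each pixel covers; a minimal sketch, where the ground-sampling-distance argument is an illustrative name rather than the paper's notation:

        import numpy as np

        def roadway_area_m2(road_mask, ground_sampling_distance_m):
            """Area of road coverage in square metres from a binary road mask."""
            pixel_area_m2 = ground_sampling_distance_m ** 2
            return int(np.count_nonzero(road_mask)) * pixel_area_m2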
       
  • J. Imaging, Vol. 8, Pages 125: Colored Point Cloud Completion for a Head
           Using Adversarial Rendered Image Loss

    • Authors: Yuki Ishida, Yoshitsugu Manabe, Noriko Yata
      First page: 125
      Abstract: Recent advances in depth measurement and its utilization have made point cloud processing more critical. Additionally, the human head is essential for communication, and its three-dimensional data are expected to be utilized in this regard. However, a single RGB-Depth (RGBD) camera is prone to occlusion and depth measurement failure for dark hair colors such as black hair. Recently, point cloud completion, where an entire point cloud is estimated and generated from a partial point cloud, has been studied, but only the shape is learned, rather than the completion of colored point clouds. Thus, this paper proposes a machine learning-based completion method for colored point clouds with XYZ location information and the International Commission on Illumination (CIE) LAB (L*a*b*) color information. The proposed method uses the color difference between point clouds based on the Chamfer Distance (CD) or Earth Mover’s Distance (EMD) of point cloud shape evaluation as a color loss. In addition, an adversarial loss to L*a*b*-Depth images rendered from the output point cloud can improve the visual quality. The experiments examined networks trained using a colored point cloud dataset created by combining two 3D datasets: hairstyles and faces. Experimental results show that using the adversarial loss with the colored point cloud renderer in the proposed method improves the image domain’s evaluation.
      Citation: Journal of Imaging
      PubDate: 2022-04-26
      DOI: 10.3390/jimaging8050125
      Issue No: Vol. 8, No. 5 (2022)
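
      A minimal sketch of a Chamfer-style colour loss of the kind described above: each point is paired with its spatially nearest neighbour in the other cloud and the L*a*b* difference is averaged in both directions. The (N, 6) [x, y, z, L*, a*, b*] layout and the squared-difference choice are illustrative assumptions, not the authors' exact formulation.

        import numpy as np
        from scipy.spatial import cKDTree

        def chamfer_color_loss(pred, target):
            """pred, target: (N, 6) arrays of [x, y, z, L*, a*, b*] points."""
            tree_t = cKDTree(target[:, :3])
            tree_p = cKDTree(pred[:, :3])
            _, idx_pt = tree_t.query(pred[:, :3])    # nearest target point for each predicted point
            _, idx_tp = tree_p.query(target[:, :3])  # nearest predicted point for each target point
            d_pt = np.mean(np.sum((pred[:, 3:] - target[idx_pt, 3:]) ** 2, axis=1))
            d_tp = np.mean(np.sum((target[:, 3:] - pred[idx_tp, 3:]) ** 2, axis=1))
            return d_pt + d_tp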
       
  • J. Imaging, Vol. 8, Pages 126: Airborne Hyperspectral Imagery for Band
           Selection Using Moth–Flame Metaheuristic Optimization

    • Authors: Raju Anand, Sathishkumar Samiaappan, Shanmugham Veni, Ethan Worch, Meilun Zhou
      First page: 126
      Abstract: In this research, we study a new metaheuristic algorithm called Moth–Flame Optimization (MFO) for hyperspectral band selection. With the hundreds of highly correlated narrow spectral bands, the number of training samples required to train a statistical classifier is high. Thus, the problem is to select a subset of bands without compromising the classification accuracy. One of the ways to solve this problem is to model an objective function that measures class separability and utilize it to arrive at a subset of bands. In this research, we studied MFO to select optimal spectral bands for classification. MFO is inspired by the behavior of moths with respect to flames, which is the navigation method of moths in nature called transverse orientation. In MFO, a moth navigates the search space through a process called transverse orientation by keeping a constant angle with the Moon, which is a compelling strategy for traveling long distances in a straight line, considering that the Moon’s distance from the moth is considerably long. Our research tested MFO on three benchmark hyperspectral datasets—Indian Pines, University of Pavia, and Salinas. MFO produced an Overall Accuracy (OA) of 88.98%, 94.85%, and 97.17%, respectively, on the three datasets. Our experimental results indicate that MFO produces better OA and Kappa when compared to state-of-the-art band selection algorithms such as particle swarm optimization, grey wolf, cuckoo search, and genetic algorithms. The analysis results prove that the proposed approach effectively addresses the spectral band selection problem and provides a high classification accuracy.
      Citation: Journal of Imaging
      PubDate: 2022-04-27
      DOI: 10.3390/jimaging8050126
      Issue No: Vol. 8, No. 5 (2022)
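
      A compact sketch of the Moth-Flame Optimization update on a generic continuous objective, following the logarithmic-spiral formulation of the original MFO algorithm rather than this article's band-selection setup; the population size, bounds, sphere test function, and the simplification of rebuilding flames from the current population are illustrative assumptions.

        import numpy as np

        def mfo_minimize(objective, dim=10, n_moths=30, n_iter=200, lb=-5.0, ub=5.0, b=1.0):
            rng = np.random.default_rng(0)
            moths = rng.uniform(lb, ub, size=(n_moths, dim))
            best_x, best_f = None, np.inf
            for it in range(n_iter):
                fitness = np.apply_along_axis(objective, 1, moths)
                order = np.argsort(fitness)
                flames = moths[order].copy()              # simplified flame set: current moths, sorted
                if fitness[order[0]] < best_f:
                    best_x, best_f = flames[0].copy(), float(fitness[order[0]])
                # The flame count shrinks so moths gradually converge on the best solutions.
                n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
                r = -1.0 - it / n_iter                    # spiral parameter, moving from -1 toward -2
                for i in range(n_moths):
                    flame = flames[min(i, n_flames - 1)]
                    dist = np.abs(flame - moths[i])
                    t = (r - 1.0) * rng.random(dim) + 1.0 # t drawn from [r, 1]
                    moths[i] = dist * np.exp(b * t) * np.cos(2 * np.pi * t) + flame
                moths = np.clip(moths, lb, ub)
            return best_x, best_f

        # Example: best_x, best_f = mfo_minimize(lambda x: float(np.sum(x ** 2)))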
       
  • J. Imaging, Vol. 8, Pages 127: A Review of Watershed Implementations for
           Segmentation of Volumetric Images

    • Authors: Anton Kornilov, Ilia Safonov, Ivan Yakimchuk
      First page: 127
      Abstract: Watershed is a widely used image segmentation algorithm. Most researchers understand just an idea of this method: a grayscale image is considered as topographic relief, which is flooded from initial basins. However, frequently they are not aware of the options of the algorithm and the peculiarities of its realizations. There are many watershed implementations in software packages and products. Even if these packages are based on the identical algorithm (watershed by flooding), their outcomes, processing speed, and consumed memory vary greatly. In particular, the difference among various implementations is noticeable for huge volumetric images; for instance, tomographic 3D images, for which low performance and high memory requirements of watershed might be bottlenecks. In our review, we discuss the peculiarities of algorithms with and without waterline generation, the impact of connectivity type and relief quantization level on the result, approaches for parallelization, as well as other method options. We present detailed benchmarking of seven open-source and three commercial software implementations of marker-controlled watershed for semantic or instance segmentation. We compare those software packages for one synthetic and two natural volumetric images. The aim of the review is to provide information and advice for practitioners to select the appropriate version of watershed for their problem solving. In addition, we forecast future directions of software development for 3D image segmentation by watershed.
      Citation: Journal of Imaging
      PubDate: 2022-04-26
      DOI: 10.3390/jimaging8050127
      Issue No: Vol. 8, No. 5 (2022)
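
      As a concrete reference for the kind of open-source implementation such a review benchmarks, a marker-controlled watershed with scikit-image and SciPy can be sketched as follows; it is shown for a binary image, the same calls accept 3-D volumes, and the min_distance value is an illustrative choice.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_touching_objects(binary_image, min_distance=5):
            """Flood the inverted distance map from automatically placed markers."""
            distance = ndi.distance_transform_edt(binary_image)
            coords = peak_local_max(distance, min_distance=min_distance, labels=binary_image)
            peak_mask = np.zeros(distance.shape, dtype=bool)
            peak_mask[tuple(coords.T)] = True
            markers, _ = ndi.label(peak_mask)
            return watershed(-distance, markers, mask=binary_image)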
       
  • J. Imaging, Vol. 8, Pages 128: Elimination of Defects in Mammograms Caused
           by a Malfunction of the Device Matrix

    • Authors: Dmitrii Tumakov, Zufar Kayumov, Alisher Zhumaniezov, Dmitry Chikrin, Diaz Galimyanov
      First page: 128
      Abstract: Today, the processing and analysis of mammograms is quite an important field of medical image processing. Small defects in images can lead to false conclusions. This is especially true when the distortion occurs due to minor malfunctions in the equipment. In the present work, an algorithm for eliminating a defect is proposed, which includes a change in intensity on a mammogram and deteriorations in the contrast of individual areas. The algorithm consists of three stages. The first is the defect identification stage. The second involves improvement and equalization of the contrasts of different parts of the image outside the defect. The third involves restoration of the defect area via a combination of interpolation and an artificial neural network. The mammogram obtained as a result of applying the algorithm shows significantly better image quality and does not contain distortions caused by changes in brightness of the pixels. The resulting images are evaluated using Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) and Naturalness Image Quality Evaluator (NIQE) metrics. In total, 98 radiomics features are extracted from the original and obtained images, and conclusions are drawn about the minimum changes in features between the original image and the image obtained by the proposed algorithm.
      Citation: Journal of Imaging
      PubDate: 2022-05-02
      DOI: 10.3390/jimaging8050128
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 129: Image Classification in JPEG Compression
           Domain for Malaria Infection Detection

    • Authors: Yuhang Dong, W. David Pan
      First page: 129
      Abstract: Digital images are usually stored in compressed format. However, image classification typically takes decompressed images as inputs rather than compressed images. Therefore, performing image classification directly in the compression domain will eliminate the need for decompression, thus increasing efficiency and decreasing costs. However, there has been very sparse work on image classification in the compression domain. In this paper, we studied the feasibility of classifying images in their JPEG compression domain. We analyzed the underlying mechanisms of JPEG as an example and conducted classification on data from different stages during the compression. The images we used were malaria-infected red blood cells and normal cells. The training data include multiple combinations of DCT coefficients, DC values in both decimal and binary forms, the “scan” segment in both binary and decimal form, and the variable length of the entire bitstream. The result shows that LSTM can successfully classify the image in its compressed form, with accuracies around 80%. If using only coded DC values, we can achieve accuracies higher than 90%. This indicates that images from different classes can still be well separated in their JPEG compressed format. Our simulations demonstrate that the proposed compression domain-processing method can reduce the input data, and eliminate the image decompression step, thereby achieving significant savings on memory and computation time.
      Citation: Journal of Imaging
      PubDate: 2022-05-03
      DOI: 10.3390/jimaging8050129
      Issue No: Vol. 8, No. 5 (2022)
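
      A minimal sketch of the kind of sequence classifier the abstract describes, an LSTM fed the coded DC values extracted from each JPEG bitstream; the padding length, layer sizes, and binary (normal vs. infected) setup are illustrative assumptions, not the authors' exact configuration.

        import tensorflow as tf

        MAX_LEN = 1024  # assumed upper bound on the DC-value sequence length

        def build_dc_classifier():
            model = tf.keras.Sequential([
                tf.keras.layers.Masking(mask_value=0.0, input_shape=(MAX_LEN, 1)),
                tf.keras.layers.LSTM(64),
                tf.keras.layers.Dense(1, activation="sigmoid"),  # infected vs. normal cell
            ])
            model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
            return model

        # dc_sequences: list of 1-D arrays of coded DC values, one per JPEG file; labels: 0/1 array
        # x = tf.keras.preprocessing.sequence.pad_sequences(dc_sequences, maxlen=MAX_LEN,
        #                                                   dtype="float32", padding="post")
        # build_dc_classifier().fit(x[..., None], labels, epochs=10, validation_split=0.2)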
       
  • J. Imaging, Vol. 8, Pages 130: Weakly Supervised Tumor Detection in PET
           Using Class Response for Treatment Outcome Prediction

    • Authors: Amine Amyar, Romain Modzelewski, Pierre Vera, Vincent Morard, Su Ruan
      First page: 130
      Abstract: It is proven that radiomic characteristics extracted from the tumor region are predictive. The first step in radiomic analysis is the segmentation of the lesion. However, this task is time consuming and requires a highly trained physician. This process could be automated using computer-aided detection (CAD) tools. Current state-of-the-art methods are trained in a supervised learning setting, which requires a lot of data that are usually not available in the medical imaging field. The challenge is to train one model to segment different types of tumors with only a weak segmentation ground truth. In this work, we propose a prediction framework including a 3D tumor segmentation in positron emission tomography (PET) images, based on a weakly supervised deep learning method, and an outcome prediction based on a 3D-CNN classifier applied to the segmented tumor regions. The key step is to locate the tumor in 3D. We propose to (1) calculate two maximum intensity projection (MIP) images from 3D PET images in two directions, (2) classify the MIP images into different types of cancers, (3) generate the class activation maps through a multitask learning approach with a weak prior knowledge, and (4) segment the 3D tumor region from the two 2D activation maps with a proposed new loss function for the multitask. The proposed approach achieves state-of-the-art prediction results with a small data set and with a weak segmentation ground truth. Our model was tested and validated for treatment response and survival in lung and esophageal cancers on 195 patients, with an area under the receiver operating characteristic curve (AUC) of 67% and 59%, respectively, and a dice coefficient of 73% and 0.77% for tumor segmentation.
      Citation: Journal of Imaging
      PubDate: 2022-05-09
      DOI: 10.3390/jimaging8050130
      Issue No: Vol. 8, No. 5 (2022)
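
      Step (1) above, computing two maximum intensity projections of the PET volume, is a direct reduction over two axes; the (z, y, x) axis convention assumed here is illustrative.

        import numpy as np

        def mip_two_directions(pet_volume):
            """Maximum intensity projections of a 3-D PET volume along two directions."""
            coronal_mip = pet_volume.max(axis=1)   # collapse the assumed anterior-posterior axis
            sagittal_mip = pet_volume.max(axis=2)  # collapse the assumed left-right axis
            return coronal_mip, sagittal_mip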
       
  • J. Imaging, Vol. 8, Pages 131: BI-RADS BERT and Using Section Segmentation
           to Understand Radiology Reports

    • Authors: Grey Kuling, Belinda Curpen, Anne L. Martel
      First page: 131
      Abstract: Radiology reports are one of the main forms of communication between radiologists and other clinicians, and contain important information for patient care. In order to use this information for research and automated patient care programs, it is necessary to convert the raw text into structured data suitable for analysis. State-of-the-art natural language processing (NLP) domain-specific contextual word embeddings have been shown to achieve impressive accuracy for these tasks in medicine, but have yet to be utilized for section structure segmentation. In this work, we pre-trained a contextual embedding BERT model using breast radiology reports and developed a classifier that incorporated the embedding with auxiliary global textual features in order to perform section segmentation. This model achieved 98% accuracy in segregating free-text reports, sentence by sentence, into sections of information outlined in the Breast Imaging Reporting and Data System (BI-RADS) lexicon, which is a significant improvement over the classic BERT model without auxiliary information. We then evaluated whether using section segmentation improved the downstream extraction of clinically relevant information such as modality/procedure, previous cancer, menopausal status, purpose of exam, breast density, and breast MRI background parenchymal enhancement. Using the BERT model pre-trained on breast radiology reports, combined with section segmentation, resulted in an overall accuracy of 95.9% in the field extraction tasks. This is a 17% improvement, compared to an overall accuracy of 78.9% for field extraction with models using classic BERT embeddings and not using section segmentation. Our work shows the strength of using BERT in the analysis of radiology reports and the advantages of section segmentation by identifying the key features of patient factors recorded in breast radiology reports.
      Citation: Journal of Imaging
      PubDate: 2022-05-09
      DOI: 10.3390/jimaging8050131
      Issue No: Vol. 8, No. 5 (2022)
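
      A hedged sketch of sentence-by-sentence section classification with a BERT encoder via Hugging Face transformers; "bert-base-uncased" stands in for the authors' domain-pretrained model, the section label set is illustrative, and the classification head would still need fine-tuning on labelled report sentences.

        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        SECTIONS = ["title", "history", "findings", "impression"]  # assumed section labels

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=len(SECTIONS))  # head is untrained until fine-tuned

        def classify_sentences(sentences):
            inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
            with torch.no_grad():
                logits = model(**inputs).logits
            return [SECTIONS[i] for i in logits.argmax(dim=-1).tolist()]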
       
  • J. Imaging, Vol. 8, Pages 132: Are Social Networks Watermarking Us or Are
           We (Unawarely) Watermarking Ourself?

    • Authors: Flavio Bertini, Rajesh Sharma, Danilo Montesi
      First page: 132
      Abstract: In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. Thus, keeping in mind this scenario, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we firstly investigate how thirteen of the most popular SNs treat uploaded pictures in order to identify a possible implementation of image watermarking techniques by respective SNs. Second, we test the robustness of several image watermarking algorithms on these thirteen SNs. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique, which is usually used in digital forensic or image forgery detection activities, can be successfully used as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method is sufficiently robust, in spite of the fact that pictures are often downgraded during the process of uploading to the SNs. Moreover, in comparison to conventional watermarking methods the proposed method can successfully pass through different SNs, solving related problems such as profile linking and fake profile detection. The results of our analysis on a real dataset of 8400 pictures show that the proposed method is more effective than other watermarking techniques and can help to address serious questions about privacy and security on SNs. Moreover, the proposed method paves the way for the definition of multi-factor online authentication mechanisms based on robust digital features.
      Citation: Journal of Imaging
      PubDate: 2022-05-10
      DOI: 10.3390/jimaging8050132
      Issue No: Vol. 8, No. 5 (2022)
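
      A simplified sketch of the PRNU idea the paper builds on: estimate a fingerprint from the noise residuals of known pictures and verify a test picture by correlating its residual with that fingerprint. The Gaussian filter stands in for the wavelet denoiser normally used, and the decision threshold is illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(img, sigma=1.5):
            img = img.astype(np.float64)
            return img - gaussian_filter(img, sigma)

        def estimate_fingerprint(images):
            """Average the residuals of several same-size grayscale images."""
            return np.mean([noise_residual(im) for im in images], axis=0)

        def normalized_correlation(residual, fingerprint):
            r = residual - residual.mean()
            f = fingerprint - fingerprint.mean()
            return float(np.sum(r * f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

        # A picture is attributed to the profile when the correlation exceeds a threshold
        # chosen on held-out data, e.g. normalized_correlation(res, fp) > 0.05.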
       
  • J. Imaging, Vol. 8, Pages 133: Integration of Deep Learning and Active
           Shape Models for More Accurate Prostate Segmentation in 3D MR Images

    • Authors: Massimo Salvi, Bruno De Santi, Bianca Pop, Martino Bosco, Valentina Giannini, Daniele Regge, Filippo Molinari, Kristen M. Meiburger
      First page: 133
      Abstract: Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, the use of automated algorithms for prostate segmentation allows us to bypass the huge workload of physicians. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images using an initial segmentation of prostate volumes using a custom-made 3D deep network (VNet-T2), followed by refinement using an Active Shape Model (ASM). While the deep network focuses on three-dimensional spatial coherence of the shape, the ASM relies on local image information and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. In the test set, the proposed method shows excellent segmentation performance, achieving a mean dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
      Citation: Journal of Imaging
      PubDate: 2022-05-11
      DOI: 10.3390/jimaging8050133
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 134: LightBot: A Multi-Light Position Robotic
           Acquisition System for Adaptive Capturing of Cultural Heritage Surfaces

    • Authors: Ramamoorthy Luxman, Yuly Emilia Castro, Hermine Chatoux, Marvin Nurit, Amalia Siatou, Gaëtan Le Goïc, Laura Brambilla, Christian Degrigny, Franck Marzani, Alamin Mansouri
      First page: 134
      Abstract: Multi-light acquisitions and modeling are well-studied techniques for characterizing surface geometry, widely used in the cultural heritage field. Current systems that are used to perform this kind of acquisition are mainly free-form or dome-based. Both of them have constraints in terms of reproducibility, limitations on the size of objects being acquired, speed, and portability. This paper presents a novel robotic arm-based system design, which we call LightBot, as well as its applications in reflectance transformation imaging (RTI) in particular. The proposed model alleviates some of the limitations observed in the case of free-form or dome-based systems. It allows the automation and reproducibility of one or a series of acquisitions adapting to a given surface in two-dimensional space.
      Citation: Journal of Imaging
      PubDate: 2022-05-12
      DOI: 10.3390/jimaging8050134
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 135: On Acquisition Parameters and Processing
           Techniques for Interparticle Contact Detection in Granular Packings Using
           Synchrotron Computed Tomography

    • Authors: Fernando Alvarez-Borges, Sharif Ahmed, Robert C. Atwood
      First page: 135
      Abstract: X-ray computed tomography (XCT) is regularly employed in geomechanics to non-destructively measure the solid and pore fractions of soil and rock from reconstructed 3D images. With the increasing availability of high-resolution XCT imaging systems, researchers now seek to measure microfabric parameters such as the number and area of interparticle contacts, which can then be used to inform soil behaviour modelling techniques. However, recent research has evidenced that conventional image processing methods consistently overestimate the number and area of interparticle contacts, mainly due to acquisition-driven image artefacts. The present study seeks to address this issue by systematically assessing the role of XCT acquisition parameters in the accurate detection of interparticle contacts. To this end, synchrotron XCT has been applied to a hexagonal close-packed arrangement of glass pellets with and without a prescribed separation between lattice layers. Different values for the number of projections, exposure time, and rotation range have been evaluated. Conventional global grey value thresholding and novel U-Net segmentation methods have been assessed, followed by local refinements at the presumptive contacts, as per recently proposed contact detection routines. The effect of the different acquisition set-ups and segmentation techniques on contact detection performance is presented and discussed, and optimised workflows are proposed.
      Citation: Journal of Imaging
      PubDate: 2022-05-12
      DOI: 10.3390/jimaging8050135
      Issue No: Vol. 8, No. 5 (2022)
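
      The "conventional global grey value thresholding" baseline mentioned above reduces to a single cut on the reconstructed volume; a minimal version using Otsu's threshold from scikit-image (the choice of Otsu as the global criterion is an illustrative assumption).

        from skimage.filters import threshold_otsu

        def global_threshold_segmentation(volume):
            """Binary solid/pore split: voxels above the global Otsu threshold are solid."""
            return volume > threshold_otsu(volume)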
       
  • J. Imaging, Vol. 8, Pages 136: Data Extraction of Circular-Shaped and
           Grid-like Chart Images

    • Authors: Filip Bajić, Josip Job
      First page: 136
      Abstract: Chart data extraction is a crucial research field in recovering information from chart images. With the recent rise in image processing and computer vision algorithms, researchers presented various approaches to tackle this problem. Nevertheless, most of them use different datasets, often not publicly available to the research community. Therefore, the main focus of this research was to create a chart data extraction algorithm for circular-shaped and grid-like chart types, which will accelerate research in this field and allow uniform result comparison. A large-scale dataset is provided containing 120,000 chart images organized into 20 categories, with corresponding ground truth for each image. Through the undertaken extensive research and to the best of our knowledge, no other author reports the chart data extraction of the sunburst diagrams, heatmaps, and waffle charts. In this research, a new, fully automatic low-level algorithm is also presented that uses a raster image as input and generates an object-oriented structure of the chart of that image. The main novelty of the proposed approach is in chart processing on binary images instead of commonly used pixel counting techniques. The experiments were performed with a synthetic dataset and with real-world chart images. The obtained results demonstrate two things: First, a low-level bottom-up approach can be shared among different chart types. Second, the proposed algorithm achieves superior results on a synthetic dataset. The achieved average data extraction accuracy on the synthetic dataset can be considered state-of-the-art within multiple error rate groups.
      Citation: Journal of Imaging
      PubDate: 2022-05-12
      DOI: 10.3390/jimaging8050136
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 137: Development of a Visualisation Approach for
           Analysing Incipient and Clinically Unrecorded Enamel Fissure Caries Using
           Laser-Induced Contrast Imaging, MicroRaman Spectroscopy and Biomimetic
           Composites: A Pilot Study

    • Authors: Pavel Seredin, Dmitry Goloshchapov, Vladimir Kashkarov, Anna Emelyanova, Nikita Buylov, Yuri Ippolitov, Tatiana Prutskij
      First page: 137
      Abstract: This pilot study presents a practical approach to detecting and visualising the initial forms of caries that are not clinically registered. The use of a laser-induced contrast visualisation (LICV) technique was shown to provide detection of the originating caries based on the separation of emissions from sound tissue, areas with destroyed tissue and regions of bacterial invasion. Adding microRaman spectroscopy to the measuring system enables reliable detection of the transformation of the organic–mineral component in the dental tissue and the spread of bacterial microflora in the affected region. Further laboratory and clinical studies of the comprehensive use of LICV and microRaman spectroscopy enable data extension on the application of this approach for accurate determination of the boundaries in the changed dental tissue as a result of initial caries. The obtained data has the potential to develop an effective preventive medical diagnostic approach and as a result, further personalised medical treatment can be specified.
      Citation: Journal of Imaging
      PubDate: 2022-05-13
      DOI: 10.3390/jimaging8050137
      Issue No: Vol. 8, No. 5 (2022)
       
  • J. Imaging, Vol. 8, Pages 82: Three-Dimensional Pharyngeal Airway Space
           Changes Following Isolated Mandibular Advancement Surgery in 120 Patients:
           A 1-Year Follow-up Study

    • Authors: Sohaib Shujaat, Eman Shaheen, Marryam Riaz, Constantinus Politis, Reinhilde Jacobs
      First page: 82
      Abstract: Lack of evidence exists related to the three-dimensional (3D) pharyngeal airway space (PAS) changes at follow-up after isolated bilateral sagittal split osteotomy (BSSO) advancement surgery. The present study assessed the 3D PAS changes following isolated mandibular advancement at a follow-up period of 1 year. A total of 120 patients (40 males, 80 females, mean age: 26.0 ± 12.2) who underwent BSSO advancement surgery were recruited. Cone-beam computed tomography (CBCT) scans were acquired preoperatively (T0), immediately following surgery (T1), and at 1 year of follow-up (T2). The volume, surface area, and minimal cross-sectional area (mCSA) of the airway were assessed. The total airway showed a 38% increase in volume and 13% increase in surface area from T0 to T1, where the oropharyngeal region showed the maximum immediate change. At T1–T2 follow-up, both volumetric and surface area showed a relapse of less than 7% for all sub-regions. The mCSA showed a significant increase of 71% from T0 to T1 (p < 0.0001), whereas a non-significant relapse was observed at T1–T2 (p = 0.1252). The PAS remained stable at a follow-up period of 1 year. In conclusion, BSSO advancement surgery could be regarded as a stable procedure for widening of the PAS with maintenance of positive space at follow-up.
      Citation: Journal of Imaging
      PubDate: 2022-03-22
      DOI: 10.3390/jimaging8040082
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 83: Generative Adversarial Networks in Brain
           Imaging: A Narrative Review

    • Authors: Maria Elena Laino, Pierandrea Cancian, Letterio Salvatore Politi, Matteo Giovanni Della Porta, Luca Saba, Victor Savevski
      First page: 83
      Abstract: Artificial intelligence (AI) is expected to have a major effect on radiology as it demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative Adversarial Networks have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a new approach to deep learning that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, allowing new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression models, and brain decoding. In this narrative review, we will provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
      Citation: Journal of Imaging
      PubDate: 2022-03-23
      DOI: 10.3390/jimaging8040083
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 84: Addressing Motion Blurs in Brain MRI Scans
           Using Conditional Adversarial Networks and Simulated Curvilinear Motions

    • Authors: Shangjin Li, Yijun Zhao
      First page: 84
      Abstract: In-scanner head motion often leads to degradation in MRI scans and is a major source of error in diagnosing brain abnormalities. Researchers have explored various approaches, including blind and nonblind deconvolutions, to correct the motion artifacts in MRI scans. Inspired by the recent success of deep learning models in medical image analysis, we investigate the efficacy of employing generative adversarial networks (GANs) to address motion blurs in brain MRI scans. We cast the problem as a blind deconvolution task where a neural network is trained to guess a blurring kernel that produced the observed corruption. Specifically, our study explores a new approach under the sparse coding paradigm where every ground truth corrupting kernel is assumed to be a “combination” of a relatively small universe of “basis” kernels. This assumption is based on the intuition that, on small distance scales, patients’ moves follow simple curves and that complex motions can be obtained by combining a number of simple ones. We show that, with a suitably dense basis, a neural network can effectively guess the degrading kernel and reverse some of the damage in the motion-affected real-world scans. To this end, we generated 10,000 continuous and curvilinear kernels in random positions and directions that are likely to uniformly populate the space of corrupting kernels in real-world scans. We further generated a large dataset of 225,000 pairs of sharp and blurred MR images to facilitate training effective deep learning models. Our experimental results demonstrate the viability of the proposed approach evaluated using synthetic and real-world MRI scans. Our study further suggests there is merit in exploring separate models for the sagittal, axial, and coronal planes.
      Citation: Journal of Imaging
      PubDate: 2022-03-23
      DOI: 10.3390/jimaging8040084
      Issue No: Vol. 8, No. 4 (2022)
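
      A minimal Python sketch of the combined-kernel idea summarized above: a few simple curvilinear "basis" kernels are traced on a small grid, mixed through a convex combination, and convolved with an image. The kernel generator, its parameters, and the stand-in image are illustrative assumptions rather than the authors' data pipeline; only NumPy and SciPy are assumed.

      import numpy as np
      from scipy.ndimage import convolve

      def curvilinear_kernel(size=15, curvature=0.0, angle=0.0):
          # trace a short, gently curved path through a size x size grid
          k = np.zeros((size, size))
          c = size // 2
          for t in np.linspace(-c, c, 10 * size):
              x = c + t * np.cos(angle) - curvature * t ** 2 * np.sin(angle)
              y = c + t * np.sin(angle) + curvature * t ** 2 * np.cos(angle)
              xi, yi = int(round(x)), int(round(y))
              if 0 <= xi < size and 0 <= yi < size:
                  k[yi, xi] = 1.0
          return k / k.sum()

      rng = np.random.default_rng(0)
      basis = [curvilinear_kernel(curvature=c, angle=a)
               for c in (0.0, 0.02, 0.05) for a in rng.uniform(0.0, np.pi, 3)]
      weights = rng.dirichlet(np.ones(len(basis)))            # convex combination
      combined = sum(w * k for w, k in zip(weights, basis))   # still sums to 1

      image = rng.normal(size=(128, 128))                     # stand-in for an MR slice
      blurred = convolve(image, combined, mode="nearest")     # simulated motion blur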
       
  • J. Imaging, Vol. 8, Pages 85: Exploring Metrics to Establish an Optimal
           Model for Image Aesthetic Assessment and Analysis

    • Authors: Ying Dai
      First page: 85
      Abstract: To establish an optimal model for photo aesthetic assessment, this paper introduces an internal metric called the disentanglement measure (D-measure), which reflects the degree of disentanglement of the final fully connected (FC) layer nodes of a convolutional neural network (CNN). By combining the F-measure with the D-measure to obtain an FD measure, an algorithm is proposed for determining the optimal model from the many photo score prediction models generated by CNN-based repetitively self-revised learning (RSRL). Furthermore, the aesthetic features of the model regarding the first fixation perspective (FFP) and the assessment interest region (AIR) are defined by means of the feature maps so as to analyze their consistency with human aesthetics. The experimental results show that the proposed method is helpful in improving the efficiency of determining the optimal model. Moreover, extracting the FFP and AIR of the models for an image is useful in understanding the internal properties of these models related to human aesthetics and in validating the external performance of the aesthetic assessment.
      Citation: Journal of Imaging
      PubDate: 2022-03-23
      DOI: 10.3390/jimaging8040085
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 86: Improving Scene Text Recognition for Indian
           Languages with Transfer Learning and Font Diversity

    • Authors: Sanjana Gunna, Rohit Saluja, Cheerakkuzhi Veluthemana Jawahar
      First page: 86
      Abstract: Reading Indian scene texts is complex due to the use of regional vocabulary, multiple fonts/scripts, and text sizes. This work investigates the significant differences between Indian and Latin Scene Text Recognition (STR) systems. Recent STR works rely on synthetic generators that involve diverse fonts to ensure robust reading solutions. We propose utilizing additional non-Unicode fonts alongside the generally employed Unicode fonts to increase font diversity in such synthesizers for Indian languages. We also perform experiments on transfer learning among six different Indian languages. Our transfer learning experiments on synthetic images with common backgrounds provide the interesting insight that Indian scripts can benefit more from each other than from the extensive English datasets. Our evaluations in real settings yield significant improvements over previous methods on four Indian languages from standard datasets like IIIT-ILST and MLT-17, and on a new dataset (which we release) containing 440 scene images with 500 Gujarati and 2535 Tamil words. Further enriching the synthetic dataset with non-Unicode fonts and multiple augmentations helps us achieve a remarkable Word Recognition Rate gain of over 33% on the IIIT-ILST Hindi dataset. We also present the results of lexicon-based transcription approaches for all six languages.
      Citation: Journal of Imaging
      PubDate: 2022-03-23
      DOI: 10.3390/jimaging8040086
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 87: Phase Retardation Analysis in a Rotated
           Plane-Parallel Plate for Phase-Shifting Digital Holography

    • Authors: Igor Shevkunov, Nikolay V. Petrov
      First page: 87
      Abstract: In this paper, we detail a phase-shift implementation in a rotated plane-parallel plate (PPP). Considering the phase-shifting digital holography application, we provide a more precise phase-shift estimation based on PPP thickness, rotation, and mutual inclination of reference and object wavefronts. We show that phase retardation uncertainty implemented by the rotated PPP in a simple configuration is less than the uncertainty of a traditionally used piezoelectric translator. Physical experiments on a phase test target verify the high quality of phase reconstruction.
      Citation: Journal of Imaging
      PubDate: 2022-03-24
      DOI: 10.3390/jimaging8040087
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 88: YOLOv4-Based CNN Model versus Nested
           Contours Algorithm in the Suspicious Lesion Detection on the Mammography
           Image: A Direct Comparison in the Real Clinical Settings

    • Authors: Alexey Kolchev, Dmitry Pasynkov, Ivan Egoshin, Ivan Kliouchkin, Olga Pasynkova, Dmitrii Tumakov
      First page: 88
      Abstract: Background: We directly compared the mammography image processing results obtained with the help of the YOLOv4 convolutional neural network (CNN) model versus those obtained with the help of the nested contours algorithm (NCA) model. Method: We used 1080 images to train the YOLOv4, plus 100 images with proven breast cancer (BC) and 100 images with proven absence of BC to test both models. Results: The rates of true-positive, false-positive and false-negative outcomes were 60, 10 and 40, respectively, for the YOLOv4, and 93, 63 and 7, respectively, for the NCA. The sensitivities of the YOLOv4 and the NCA were comparable to each other for star-like lesions, masses with unclear borders, round- or oval-shaped masses with clear borders and partly visualized masses. On the contrary, the NCA was superior to the YOLOv4 in the case of asymmetric density and of changes invisible against the dense parenchyma background. Radiologists changed their earlier decisions in six cases per 100 for the NCA; YOLOv4 outputs did not influence the radiologists’ decisions. Conclusions: In our set, the NCA surpasses the YOLOv4 to a clinically significant degree.
      Citation: Journal of Imaging
      PubDate: 2022-03-24
      DOI: 10.3390/jimaging8040088
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 89: Union-Retire for Connected Components
           Analysis on FPGA

    • Authors: Donald G. Bailey, Michael J. Klaiber
      First page: 89
      Abstract: The Union-Retire CCA (UR-CCA) algorithm started a new paradigm for connected components analysis. Instead of using directed tree structures, UR-CCA focuses on connectivity. This algorithmic change leads to a reduction in required memory, with no end-of-row processing overhead. In this paper we describe a hardware architecture based on UR-CCA and its realisation on an FPGA. The memory bandwidth and pipelining challenges of hardware UR-CCA are analysed and resolved. It is shown that up to 36% of memory resources can be saved using the proposed architecture. This translates directly to a smaller device for an FPGA implementation.
      Citation: Journal of Imaging
      PubDate: 2022-03-24
      DOI: 10.3390/jimaging8040089
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 90: Object Categorization Capability of
           Psychological Potential Field in Perceptual Assessment Using Line-Drawing
           Images

    • Authors: Naoyuki Awano, Yuki Hayashi
      First page: 90
      Abstract: Affective/cognitive engineering investigations typically require the quantitative assessment of object perception. Recent research has suggested that certain perceptions of object categorization can be derived from human eye fixation and that color images and line drawings induce similar neural activities. Line drawings contain less information than color images; therefore, line drawings are expected to simplify the investigations of object perception. The psychological potential field (PPF), which is a psychological feature, is an image feature of line drawings. On the basis of the PPF, the possibility that the general human perception of object categorization can be assessed from the similarity to fixation maps (FMs) generated from human eye fixations has been reported. However, this may be due to chance because image features other than the PPF have not been compared with FMs. This study examines the potential and effectiveness of the PPF by comparing its performance with that of other image features in terms of the similarity to FMs. The results show that the PPF shows the ideal performance for assessing the perception of object categorization. In particular, the PPF effectively distinguishes between animal and nonanimal targets; however, real-time assessment is difficult.
      Citation: Journal of Imaging
      PubDate: 2022-03-26
      DOI: 10.3390/jimaging8040090
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 91: Augmented Reality Games and Presence: A
           Systematic Review

    • Authors: Anabela Marto, Alexandrino Gonçalves
      First page: 91
      Abstract: The sense of presence in augmented reality (AR) has been studied by multiple researchers through diverse applications and strategies. In addition to the valuable information provided to the scientific community, new questions keep being raised. These approaches vary from following the standards from virtual reality to ascertaining the presence of users’ experiences and new proposals for evaluating presence that specifically target AR environments. It is undeniable that the idea of evaluating presence across AR may be overwhelming due to the different scenarios that may be possible, whether this regards technological devices—from immersive AR headsets to the small screens of smartphones—or the amount of virtual information that is being added to the real scenario. Taking into account the recent literature that has addressed the sense of presence in AR as a true challenge given the diversity of ways that AR can be experienced, this study proposes a specific scope to address presence and other related forms of dimensions such as immersion, engagement, embodiment, or telepresence, when AR is used in games. This systematic review was conducted following the PRISMA methodology, carefully analysing all studies that reported visual games that include AR activities and somehow included presence data—or related dimensions that may be referred to as immersion-related feelings, analysis or results. This study clarifies what dimensions of presence are being considered and evaluated in AR games, how presence-related variables have been evaluated, and what the major research findings are. For a better understanding of these approaches, this study takes note of what devices are being used for the AR experience when immersion-related feelings are one of the behaviours that are considered in their evaluations, and discusses to what extent these feelings in AR games affect the player’s other behaviours.
      Citation: Journal of Imaging
      PubDate: 2022-03-29
      DOI: 10.3390/jimaging8040091
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 92: A New Preclinical Decision Support System
           Based on PET Radiomics: A Preliminary Study on the Evaluation of an
           Innovative 64Cu-Labeled Chelator in Mouse Models

    • Authors: Viviana Benfante, Alessandro Stefano, Albert Comelli, Paolo Giaccone, Francesco Paolo Cammarata, Selene Richiusa, Fabrizio Scopelliti, Marco Pometti, Milene Ficarra, Sebastiano Cosentino, Marcello Lunardon, Francesca Mastrotto, Alberto Andrighetto, Antonino Tuttolomondo, Rosalba Parenti, Massimo Ippolito, Giorgio Russo
      First page: 92
      Abstract: The 64Cu-labeled chelator was analyzed in vivo by positron emission tomography (PET) imaging to evaluate its biodistribution in a murine model at different acquisition times. For this purpose, nine 6-week-old female Balb/C nude strain mice underwent micro-PET imaging at three different time points after 64Cu-labeled chelator injection. Specifically, the mice were divided into group 1 (acquisition 1 h after [64Cu] chelator administration, n = 3 mice), group 2 (acquisition 4 h after [64Cu]chelator administration, n = 3 mice), and group 3 (acquisition 24 h after [64Cu] chelator administration, n = 3 mice). Successively, all PET studies were segmented by means of registration with a standard template space (3D whole-body Digimouse atlas), and 108 radiomics features were extracted from seven organs (namely, heart, bladder, stomach, liver, spleen, kidney, and lung) to investigate possible changes over time in [64Cu]chelator biodistribution. The one-way analysis of variance and post hoc Tukey Honestly Significant Difference test revealed that, while heart, stomach, spleen, kidney, and lung districts showed a very low percentage of radiomics features with significant variations (p-value < 0.05) among the three groups of mice, a large number of features (greater than 60% and 50%, respectively) that varied significantly between groups were observed in bladder and liver, indicating a different in vivo uptake of the 64Cu-labeled chelator over time. The proposed methodology may improve the method of calculating the [64Cu]chelator biodistribution and open the way towards a decision support system in the field of new radiopharmaceuticals used in preclinical imaging trials.
      Citation: Journal of Imaging
      PubDate: 2022-03-30
      DOI: 10.3390/jimaging8040092
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 93: Unified Probabilistic Deep Continual
           Learning through Generative Replay and Open Set Recognition

    • Authors: Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh
      First page: 93
      Abstract: Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge. Although it is inevitable for continual-learning systems to encounter such unseen concepts, the corresponding literature appears to nonetheless focus primarily on alleviating catastrophic interference with learned representations. In this work, we introduce a probabilistic approach that connects these perspectives based on variational inference in a single deep autoencoder model. Specifically, we propose to bound the approximate posterior by fitting regions of high density on the basis of correctly classified data points. These bounds are shown to serve a dual purpose: unseen unknown out-of-distribution data can be distinguished from already trained known tasks towards robust application. Simultaneously, to retain already acquired knowledge, a generative replay process can be narrowed to strictly in-distribution samples, in order to significantly alleviate catastrophic interference.
      Citation: Journal of Imaging
      PubDate: 2022-03-31
      DOI: 10.3390/jimaging8040093
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 94: Imaging PPG for In Vivo Human Tissue
           Perfusion Assessment during Surgery

    • Authors: Marco Lai, Stefan D. van der Stel, Harald C. Groen, Mark van Gastel, Koert F. D. Kuhlmann, Theo J. M. Ruers, Benno H. W. Hendriks
      First page: 94
      Abstract: Surgical excision is the gold standard for the treatment of intestinal tumors. In this surgical procedure, inadequate perfusion of the anastomosis can lead to postoperative complications, such as anastomotic leakage. Imaging photoplethysmography (iPPG) can potentially provide objective and real-time feedback on the perfusion status of tissues. This feasibility study aims to evaluate an iPPG acquisition system during intestinal surgeries to detect the perfusion levels of the microvasculature tissue bed in different perfusion conditions. The study assesses three patients who underwent resection of a portion of the small intestine. Data were acquired from fully perfused, non-perfused and anastomosis parts of the intestine during different phases of the surgical procedure. Strategies for limiting motion and noise during acquisition were implemented. iPPG perfusion maps were successfully extracted from the intestinal microvasculature, demonstrating that iPPG can be used for detecting perturbations and perfusion changes in intestinal tissues during surgery. This study provides proof of concept for iPPG to detect changes in organ perfusion levels.
      Citation: Journal of Imaging
      PubDate: 2022-03-31
      DOI: 10.3390/jimaging8040094
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 95: Spiky: An ImageJ Plugin for Data Analysis of
           Functional Cardiac and Cardiomyocyte Studies

    • Authors: Côme Pasqualin, François Gannier, Angèle Yu, David Benoist, Ian Findlay, Romain Bordy, Pierre Bredeloux, Véronique Maupoil
      First page: 95
      Abstract: Introduction and objective: Nowadays, investigations of heart physiology and pathophysiology rely more and more upon image analysis, whether for the detection and characterization of events in single cells or for the mapping of events and their characteristics across an entire tissue. These investigations require extensive skills in image analysis and/or expensive software, and their reproducibility may be a concern. Our objective was to build a robust, reliable and open-source software tool to quantify excitation–contraction related experimental data at multiple scales, from single isolated cells to the whole heart. Methods and results: A free and open-source ImageJ plugin, Spiky, was developed to detect and analyze peaks in experimental data streams. It allows rapid and easy analysis of action potentials, intracellular calcium transient and contraction data from cardiac research experiments. As shown in the provided examples, both classical bi-dimensional data (XT signals) and video data obtained from confocal microscopy and optical mapping experiments (XYT signals) can be analyzed. Spiky was written in ImageJ Macro Language and JAVA, and works under Windows, Mac and Linux operating systems. Conclusion: Spiky provides a complete working interface to process and analyze cardiac physiology research data.
      Citation: Journal of Imaging
      PubDate: 2022-04-01
      DOI: 10.3390/jimaging8040095
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 96: Convolutional Neural Networks for the
           Identification of African Lions from Individual Vocalizations

    • Authors: Martino Trapanotto, Loris Nanni, Sheryl Brahnam, Xiang Guo
      First page: 96
      Abstract: The classification of vocal individuality for passive acoustic monitoring (PAM) and census of animals is becoming an increasingly popular area of research. Nearly all studies in this field of inquiry have relied on classic audio representations and classifiers, such as Support Vector Machines (SVMs) trained on spectrograms or Mel-Frequency Cepstral Coefficients (MFCCs). In contrast, most current bioacoustic species classification exploits the power of deep learners and more cutting-edge audio representations. A significant reason for avoiding deep learning in vocal identity classification is the tiny sample size in the collections of labeled individual vocalizations. As is well known, deep learners require large datasets to avoid overfitting. One way to handle small datasets with deep learning methods is to use transfer learning. In this work, we evaluate the performance of three pretrained CNNs (VGG16, ResNet50, and AlexNet) on a small, publicly available lion roar dataset containing approximately 150 samples taken from five male lions. Each of these networks is retrained on eight representations of the samples: MFCCs, spectrogram, and Mel spectrogram, along with several new ones, such as VGGish and Stockwell, and those based on the recently proposed LM spectrogram. The performance of these networks, both individually and in ensembles, is analyzed and corroborated using the Equal Error Rate and shown to surpass previous classification attempts on this dataset; the best single network achieved over 95% accuracy and the best ensembles over 98% accuracy. The contributions this study makes to the field of individual vocal classification include demonstrating that it is valuable and possible, with caution, to use transfer learning with single pretrained CNNs on the small datasets available for this problem domain. We also make a contribution to bioacoustics generally by offering a comparison of the performance of many state-of-the-art audio representations, including for the first time the LM spectrogram and Stockwell representations. All source code for this study is available on GitHub. A minimal transfer-learning sketch in this spirit is given after this entry.
      Citation: Journal of Imaging
      PubDate: 2022-04-01
      DOI: 10.3390/jimaging8040096
      Issue No: Vol. 8, No. 4 (2022)
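
      In the same spirit, a minimal transfer-learning sketch with PyTorch/torchvision (version 0.13 or newer is assumed for the weights API): a pretrained ResNet50 has its final layer replaced to match the number of individuals and is fine-tuned on spectrogram images stored in a hypothetical folder-per-lion layout ("roars/<lion_id>/*.png"). This is an illustrative setup, not the authors' exact training protocol.

      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader
      from torchvision import datasets, models, transforms

      tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
      data = datasets.ImageFolder("roars", transform=tfm)       # one folder per lion
      loader = DataLoader(data, batch_size=16, shuffle=True)

      model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
      model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # e.g. 5 lions

      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = nn.CrossEntropyLoss()
      model.train()
      for epoch in range(5):                                    # small dataset: few epochs
          for x, y in loader:
              optimizer.zero_grad()
              loss = loss_fn(model(x), y)
              loss.backward()
              optimizer.step()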
       
  • J. Imaging, Vol. 8, Pages 97: Machine Learning for Early Parkinson’s
           Disease Identification within SWEDD Group Using Clinical and DaTSCAN SPECT
           Imaging Features

    • Authors: Hajer Khachnaoui, Nawres Khlifa, Rostom Mabrouk
      First page: 97
      Abstract: Early Parkinson’s Disease (PD) diagnosis is a critical challenge in the treatment process. Meeting this challenge allows appropriate planning for patients. However, the Scan Without Evidence of Dopaminergic Deficit (SWEDD) group is heterogeneous in clinical and imaging features, comprising both PD patients and Healthy Controls (HC). Diagnostic tools based on Machine Learning (ML) come into play here, as they are capable of distinguishing between HC subjects and PD patients within an SWEDD group. In the present study, three ML algorithms were used to separate PD patients from HC within an SWEDD group. Data from 548 subjects were first analyzed by Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) techniques. Using the best-performing reduction technique, we built the following clustering models: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), K-means and Hierarchical Clustering. According to our findings, LDA performs better than PCA; therefore, LDA was used as input for the clustering models. The different models’ performances were assessed by comparing the clustering algorithms’ outcomes with the ground truth after a follow-up. Hierarchical Clustering surpassed the DBSCAN and K-means algorithms by 64%, 78.13% and 38.89% in terms of accuracy, sensitivity and specificity. The proposed method demonstrated the suitability of ML models to distinguish PD patients from HC subjects within an SWEDD group. A compact sketch of this reduce-then-cluster pipeline is given after this entry.
      Citation: Journal of Imaging
      PubDate: 2022-04-02
      DOI: 10.3390/jimaging8040097
      Issue No: Vol. 8, No. 4 (2022)
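
      A compact sketch of the reduce-then-cluster pipeline described above, written with scikit-learn on synthetic data: features are projected with LDA, clustered with three algorithms, and scored against ground-truth labels. The dataset, feature count, and clustering hyperparameters are placeholders, not the study's data.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
      from sklearn.metrics import accuracy_score

      X, y = make_classification(n_samples=548, n_features=20, n_informative=5,
                                 n_classes=2, random_state=0)
      X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)

      models = [("K-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
                ("Hierarchical", AgglomerativeClustering(n_clusters=2)),
                ("DBSCAN", DBSCAN(eps=0.3, min_samples=5))]
      for name, algo in models:
          labels = algo.fit_predict(X_lda)
          # cluster ids are arbitrary: score the better of the two assignments;
          # DBSCAN noise points (label -1) simply count as errors here
          acc = max(accuracy_score(y, labels), accuracy_score(y, 1 - labels))
          print(name, round(acc, 3))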
       
  • J. Imaging, Vol. 8, Pages 98: A Comparative Review on Applications of
           Different Sensors for Sign Language Recognition

    • Authors: Muhammad Saad Amin, Syed Tahir Hussain Rizvi, Md. Murad Hossain
      First page: 98
      Abstract: Sign language recognition is challenging due to the communication gap between hearing people and people with speech or hearing impairments. Speech or hearing disabilities carry many social and physiological impacts. A variety of techniques have previously been proposed to bridge this gap. Sensor-based smart gloves for sign language recognition (SLR) have proved helpful for generating data on the hand movements associated with specific signs. This article presents a detailed comparative review of the available techniques and sensors used for sign language recognition. The focus of this paper is to explore emerging trends and strategies for sign language recognition and to point out deficiencies in existing systems. It is intended as a guide for other researchers to the materials and techniques used for sign language recognition to date, including flex-resistive sensor-based, vision sensor-based, and hybrid technologies.
      Citation: Journal of Imaging
      PubDate: 2022-04-02
      DOI: 10.3390/jimaging8040098
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 99: Cardiac Magnetic Resonance Imaging in Immune
           Check-Point Inhibitor Myocarditis: A Systematic Review

    • Authors: Luca Arcari, Giacomo Tini, Giovanni Camastra, Federica Ciolina, Domenico De Santis, Domitilla Russo, Damiano Caruso, Massimiliano Danti, Luca Cacciotti
      First page: 99
      Abstract: Immune checkpoint inhibitors (ICIs) are a family of anticancer drugs in which the immune response elicited against the tumor may involve other organs, including the heart. Cardiac magnetic resonance (CMR) imaging is increasingly used in the diagnostic work-up of myocardial inflammation; recently, several studies investigated the use of CMR in patients with ICI-myocarditis (ICI-M). The aim of the present systematic review is to summarize the available evidence on CMR findings in ICI-M. We searched electronic databases for relevant publications; after screening, six studies were selected, including 166 patients from five cohorts and a further 86 patients from a sub-analysis targeted at tissue mapping assessment. CMR revealed mostly preserved left ventricular ejection fraction (LVEF); edema prevalence ranged from 9% to 60%; late gadolinium enhancement (LGE) prevalence ranged from 23% to 83%. T1 and T2 mapping assessments were performed in 108 and 104 patients, respectively. When available, the comparison of CMR with endomyocardial biopsy (EMB) revealed partial agreement between the techniques, which was highest for native T1 mapping amongst the imaging biomarkers. Prognosis was assessed inconsistently; CMR variables independently associated with the outcome included decreasing LVEF and increasing native T1. In conclusion, CMR findings in ICI-M include myocardial dysfunction, edema and fibrosis, though less evident than in more classic forms of myocarditis; native T1 mapping retained the highest concordance with EMB and significant prognostic value.
      Citation: Journal of Imaging
      PubDate: 2022-04-05
      DOI: 10.3390/jimaging8040099
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 100: Transverse Analysis of Maxilla and Mandible
           in Adults with Normal Occlusion: A Cone Beam Computed Tomography Study

    • Authors: Kyung Jin Lee, Hyeran Helen Jeon, Normand Boucher, Chun-Hsi Chung
      First page: 100
      Abstract: Objectives: To study the transverse widths of maxilla and mandible and their relationship with the inclination of first molars. Materials and Methods: Fifty-six untreated adults (12 males, 44 females) with normal occlusion were included. On each Cone Beam Computed Tomography (CBCT) image of the subject, inter-buccal and inter-lingual bone widths were measured at the levels of hard palate, alveolar crest and furcation of the first molars, and maxillomandibular width differentials were calculated. In addition, the buccolingual inclination of each first molar was measured and its correlation with the maxillomandibular width differential was tested. Results: At the furcation level of the first molar, the maxillary inter-buccal bone width was more than the mandibular inter-buccal bone width by 1.1 ± 4.5 mm for males and 1.6 ± 2.9 mm for females; the mandibular inter-lingual bone width was more than the maxillary inter-lingual bone width by 1.3 ± 3.6 mm for males and 0.3 ± 3.2 mm for females. For females, there was a negative correlation between the maxillomandibular inter-lingual bone differential and maxillary first molar buccal inclination (p < 0.05), and a positive correlation between the maxillomandibular inter-lingual bone differential and mandibular first molar lingual inclination (p < 0.05). Conclusions: This is a randomized clinical study on transverse analysis of maxilla and mandible in adults with normal occlusion using CBCTs. On average: (1) At the furcation level of the first molars, the maxillary inter-buccal bone width was slightly wider than mandibular inter-buccal bone width; whereas the mandibular inter-lingual bone width was slightly wider than maxillary inter-lingual bone width; (2) A statistically significant correlation existed between the maxillomandibular transverse skeletal differentials and molar inclinations.
      Citation: Journal of Imaging
      PubDate: 2022-04-05
      DOI: 10.3390/jimaging8040100
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 101: Machine-Learning-Based Real-Time
           Multi-Camera Vehicle Tracking and Travel-Time Estimation

    • Authors: Xiaohui Huang, Pan He, Anand Rangarajan, Sanjay Ranka
      First page: 101
      Abstract: Travel-time estimation of traffic flow is an important problem with critical implications for traffic congestion analysis. We developed techniques for using intersection videos to identify vehicle trajectories across multiple cameras and analyze corridor travel time. Our approach consists of (1) multi-object single-camera tracking, (2) vehicle re-identification among different cameras, (3) multi-object multi-camera tracking, and (4) travel-time estimation. We evaluated the proposed framework on real intersections in Florida with pan and fisheye cameras. The experimental results demonstrate the viability and effectiveness of our method.
      Citation: Journal of Imaging
      PubDate: 2022-04-06
      DOI: 10.3390/jimaging8040101
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 102: Novel Hypertrophic Cardiomyopathy Diagnosis
           Index Using Deep Features and Local Directional Pattern Techniques

    • Authors: Anjan Gudigar, U. Raghavendra, Jyothi Samanth, Chinmay Dharmik, Mokshagna Rohit Gangavarapu, Krishnananda Nayak, Edward J. Ciaccio, Ru-San Tan, Filippo Molinari, U. Rajendra Acharya
      First page: 102
      Abstract: Hypertrophic cardiomyopathy (HCM) is a genetic disorder that exhibits a wide spectrum of clinical presentations, including sudden death. Early diagnosis and intervention may avert the latter. Left ventricular hypertrophy on heart imaging is an important diagnostic criterion for HCM, and the most common imaging modality is heart ultrasound (US). US is operator-dependent, and its interpretation is subject to human error and variability. We proposed an automated computer-aided diagnostic tool to discriminate HCM from healthy subjects on US images. We used a local directional pattern and the ResNet-50 pretrained network to classify heart US images acquired from 62 known HCM patients and 101 healthy subjects. Deep features were ranked using Student’s t-test, and the most significant feature (SigFea) was identified. An integrated index derived from the simulation was defined as 100·log10(SigFea/2) in each subject, and a diagnostic threshold value was empirically calculated as the mean of the minimum and maximum integrated indices among HCM and healthy subjects, respectively. An integrated index above a threshold of 0.5 separated HCM from healthy subjects with 100% accuracy in our test dataset. A worked numerical sketch of this index and threshold rule is given after this entry.
      Citation: Journal of Imaging
      PubDate: 2022-04-06
      DOI: 10.3390/jimaging8040102
      Issue No: Vol. 8, No. 4 (2022)
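
      A worked numerical sketch of the index and threshold rule stated above. The SigFea values are hypothetical; only the arithmetic (index = 100·log10(SigFea/2), threshold = mean of the minimum HCM index and the maximum healthy index) follows the abstract.

      import numpy as np

      def integrated_index(sig_fea):
          return 100.0 * np.log10(sig_fea / 2.0)

      hcm_sigfea     = np.array([2.12, 2.30, 2.09])    # hypothetical feature values
      healthy_sigfea = np.array([1.95, 1.88, 1.99])

      idx_hcm = integrated_index(hcm_sigfea)            # roughly  2.5, 6.1, 1.9
      idx_healthy = integrated_index(healthy_sigfea)    # roughly -1.1, -2.7, -0.2

      # threshold: mean of the minimum index among HCM and the maximum among healthy
      threshold = 0.5 * (idx_hcm.min() + idx_healthy.max())
      labels = np.where(np.concatenate([idx_hcm, idx_healthy]) > threshold,
                        "HCM", "healthy")
      print(threshold, labels)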
       
  • J. Imaging, Vol. 8, Pages 103: A Hybrid Method for 3D Reconstruction of MR
           Images

    • Authors: Loubna Lechelek, Sebastien Horna, Rita Zrour, Mathieu Naudin, Carole Guillevin
      First page: 103
      Abstract: Three-dimensional surface reconstruction is a well-known task in medical imaging. In procedures for intervention or radiation treatment planning, the generated models should be accurate and reflect the natural appearance. Traditional methods for this task, such as Marching Cubes, use smoothing post processing to reduce staircase artifacts from mesh generation and exhibit the natural look. However, smoothing algorithms often reduce the quality and degrade the accuracy. Other methods, such as MPU implicits, based on adaptive implicit functions, inherently produce smooth 3D models. However, the integration in the implicit functions of both smoothness and accuracy of the shape approximation may impact the precision of the reconstruction. Having these limitations in mind, we propose a hybrid method for 3D reconstruction of MR images. This method is based on a parallel Marching Cubes algorithm called Flying Edges (FE) and Multi-level Partition of Unity (MPU) implicits. We aim to combine the robustness of the Marching Cubes algorithm with the smooth implicit curve tracking enabled by the use of implicit models in order to provide higher geometry precision. Towards this end, the regions that closely fit to the segmentation data, and thus regions that are not impacted by reconstruction issues, are first extracted from both methods. These regions are then merged and used to reconstruct the final model. Experimental studies were performed on a number of MRI datasets, providing images and error statistics generated from our results. The results obtained show that our method reduces the geometric errors of the reconstructed surfaces when compared to the MPU and FE approaches, producing a more accurate 3D reconstruction.
      Citation: Journal of Imaging
      PubDate: 2022-04-07
      DOI: 10.3390/jimaging8040103
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 104: Explainable Multimedia Feature Fusion for
           Medical Applications

    • Authors: Stefan Wagenpfeil, Paul Mc Kevitt, Abbas Cheddad, Matthias Hemmje
      First page: 104
      Abstract: Due to the exponential growth of medical information in the form of, e.g., text, images, Electrocardiograms (ECGs), X-ray, multimedia, etc., the management of a patient’s data has become a huge challenge. Particularly, the extraction of features from various different formats and their representation in a homogeneous way are areas of particular interest in medical applications. Multimedia Information Retrieval (MMIR) frameworks, like the Generic Multimedia Analysis Framework (GMAF), can contribute to solving this problem, when adapted to special requirements and modalities of medical applications. In this paper, we demonstrate how typical multimedia processing techniques can be extended and adapted to medical applications and how these applications benefit from employing a Multimedia Feature Graph (MMFG) and specialized, highly efficient indexing structures in the form of Graph Codes. These Graph Codes are transformed to feature relevant Graph Codes by employing a modified Term Frequency Inverse Document Frequency (TFIDF) algorithm, which further supports value ranges and Boolean operations required in the medical context. On this basis, various metrics for the calculation of similarity, recommendations, and automated inferencing and reasoning can be applied supporting the field of diagnostics. Finally, the presentation of these new facilities in the form of explainability is introduced and demonstrated. Thus, in this paper, we show how Graph Codes contribute new querying options for diagnosis and how Explainable Graph Codes can help to quickly understand medical multimedia formats.
      Citation: Journal of Imaging
      PubDate: 2022-04-08
      DOI: 10.3390/jimaging8040104
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 105: Face Attribute Estimation Using Multi-Task
           Convolutional Neural Network

    • Authors: Hiroya Kawai, Koichi Ito, Takafumi Aoki
      First page: 105
      Abstract: Face attribute estimation can be used for improving the accuracy of face recognition, customer analysis in marketing, image retrieval, video surveillance, and criminal investigation. The major methods for face attribute estimation are based on Convolutional Neural Networks (CNNs) that solve face attribute estimation as multiple two-class classification problems. Although one feature extractor should ideally be used for each attribute to maximize the accuracy of attribute estimation, in most cases one feature extractor is shared across all face attributes for parameter efficiency. This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN), which automatically optimizes CNN structures for solving multiple binary classification problems, improving both parameter efficiency and accuracy in face attribute estimation. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNNs. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR exhibits higher efficiency of face attribute estimation, in terms of estimation accuracy and the number of weight parameters, than conventional methods.
      Citation: Journal of Imaging
      PubDate: 2022-04-10
      DOI: 10.3390/jimaging8040105
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 106: Adaptive Real-Time Object Detection for
           Autonomous Driving Systems

    • Authors: Maryam Hemmati, Morteza Biglari-Abhari, Smail Niar
      First page: 106
      Abstract: Accurate and reliable detection is one of the main tasks of Autonomous Driving Systems (ADS). While detecting obstacles on the road under various environmental circumstances adds to the reliability of an ADS, it results in more intensive computation and more complicated systems. The stringent real-time requirements of ADS, resource constraints, and energy efficiency considerations add to the design complications. This work presents an adaptive system that detects pedestrians and vehicles in different lighting conditions on the road. We take a hardware-software co-design approach on the Zynq UltraScale+ MPSoC and develop a dynamically reconfigurable ADS that employs hardware accelerators for pedestrian and vehicle detection and adapts its detection method to the environmental lighting conditions. The results show that the system maintains real-time performance and achieves adaptability with minimal resource overhead.
      Citation: Journal of Imaging
      PubDate: 2022-04-11
      DOI: 10.3390/jimaging8040106
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 107: Example-Based Multispectral Photometric
           Stereo for Multi-Colored Surfaces

    • Authors: Daisuke Miyazaki, Kazuya Uegomori
      First page: 107
      Abstract: Photometric stereo requires three images taken under three different light directions, lit one by one, while color photometric stereo needs only one image taken under three lights of different directions and different colors lit at the same time. As a result, color photometric stereo can obtain the surface normal of a dynamically moving object from a single image. However, conventional color photometric stereo cannot estimate a multicolored object due to the colored illumination. This paper uses an example-based photometric stereo to solve this problem of color photometric stereo. The example-based photometric stereo searches for the surface normal in a database of images of known shapes. Color photometric stereo suffers from mathematical difficulties and requires many assumptions and constraints; the example-based photometric stereo is free from such mathematical problems. The process of our method is pixelwise; thus, the estimated surface normal is not oversmoothed, unlike in existing methods that use smoothness constraints. To demonstrate the effectiveness of this study, a measurement device that realizes the multispectral photometric stereo method with sixteen colors is employed instead of the classic color photometric stereo method with three colors.
      Citation: Journal of Imaging
      PubDate: 2022-04-11
      DOI: 10.3390/jimaging8040107
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 108: Multi-Stage Platform for (Semi-)Automatic
           Planning in Reconstructive Orthopedic Surgery

    • Authors: Florian Kordon, Andreas Maier, Benedict Swartman, Maxim Privalov, Jan Siad El-Barbari, Holger Kunze
      First page: 108
      Abstract: Intricate lesions of the musculoskeletal system require reconstructive orthopedic surgery to restore the correct biomechanics. Careful pre-operative planning of the surgical steps on 2D image data is an essential tool to increase the precision and safety of these operations. However, the plan’s effectiveness in the intra-operative workflow is challenged by unpredictable patient and device positioning and complex registration protocols. Here, we develop and analyze a multi-stage algorithm that combines deep learning-based anatomical feature detection and geometric post-processing to enable accurate pre- and intra-operative surgery planning on 2D X-ray images. The algorithm allows granular control over each element of the planning geometry, enabling real-time adjustments directly in the operating room (OR). In the evaluation of the method on three ligament reconstruction tasks on the knee joint, we found high spatial precision in drilling point localization (ε < 2.9 mm) and low angulation errors for k-wire instrumentation (ε < 0.75°) on 38 diagnostic radiographs. Comparable precision was demonstrated in 15 complex intra-operative trauma cases suffering from strong implant overlap and multi-anatomy exposure. Furthermore, we found that the diverse feature detection tasks can be efficiently solved with a multi-task network topology, improving precision over the single-task case. Our platform will help overcome the limitations of current clinical practice and foster surgical plan generation and adjustment directly in the OR, ultimately motivating the development of novel 2D planning guidelines.
      Citation: Journal of Imaging
      PubDate: 2022-04-12
      DOI: 10.3390/jimaging8040108
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 109: Tracking Highly Similar Rat Instances under
           Heavy Occlusions: An Unsupervised Deep Generative Pipeline

    • Authors: Anna Gelencsér-Horváth, László Kopácsi, Viktor Varga, Dávid Keller, Árpád Dobolyi, Kristóf Karacs, András Lőrincz
      First page: 109
      Abstract: Identity tracking and instance segmentation are crucial in several areas of biological research. Behavior analysis of individuals in groups of similar animals is a task that emerges frequently in agriculture or pharmaceutical studies, among others. Automated annotation of many hours of surveillance videos can facilitate a large number of biological studies/experiments, which otherwise would not be feasible. Solutions based on machine learning generally perform well in tracking and instance segmentation; however, in the case of identical, unmarked instances (e.g., white rats or mice), even state-of-the-art approaches can frequently fail. We propose a pipeline of deep generative models for identity tracking and instance segmentation of highly similar instances, which, in contrast to most region-based approaches, exploits edge information and consequently helps to resolve ambiguity in heavily occluded cases. Our method is trained by synthetic data generation techniques, not requiring prior human annotation. We show that our approach greatly outperforms other state-of-the-art unsupervised methods in identity tracking and instance segmentation of unmarked rats in real-world laboratory video recordings.
      Citation: Journal of Imaging
      PubDate: 2022-04-13
      DOI: 10.3390/jimaging8040109
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 110: Salient Object Detection by LTP Texture
           Characterization on Opposing Color Pairs under SLICO Superpixel Constraint
           

    • Authors: Didier Ndayikengurukiye, Max Mignotte
      First page: 110
      Abstract: The effortless detection of salient objects by humans has been the subject of research in several fields, including computer vision, as it has many applications. However, salient object detection remains a challenge for many computer models dealing with color and textured images. Most of them process color and texture separately and therefore implicitly consider them as independent features which is not the case in reality. Herein, we propose a novel and efficient strategy, through a simple model, almost without internal parameters, which generates a robust saliency map for a natural image. This strategy consists of integrating color information into local textural patterns to characterize a color micro-texture. It is the simple, yet powerful LTP (Local Ternary Patterns) texture descriptor applied to opposing color pairs of a color space that allows us to achieve this end. Each color micro-texture is represented by a vector whose components are from a superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero parameter) algorithm, which is simple, fast and exhibits state-of-the-art boundary adherence. The degree of dissimilarity between each pair of color micro-textures is computed by the FastMap method, a fast version of MDS (Multi-dimensional Scaling) that considers the color micro-textures’ non-linearity while preserving their distances. These degrees of dissimilarity give us an intermediate saliency map for each RGB (Red–Green–Blue), HSL (Hue–Saturation–Luminance), LUV (L for luminance, U and V represent chromaticity values) and CMY (Cyan–Magenta–Yellow) color space. The final saliency map is their combination to take advantage of the strength of each of them. The MAE (Mean Absolute Error), MSE (Mean Squared Error) and Fβ measures of our saliency maps, on the five most used datasets show that our model outperformed several state-of-the-art models. Being simple and efficient, our model could be combined with classic models using color contrast for a better performance.
      Citation: Journal of Imaging
      PubDate: 2022-04-13
      DOI: 10.3390/jimaging8040110
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 111: Reliability of OMERACT Scoring System in
           Ultra-High Frequency Ultrasonography of Minor Salivary Glands: Inter-Rater
           Agreement Study

    • Authors: Rossana Izzetti, Giovanni Fulvio, Marco Nisi, Stefano Gennai, Filippo Graziani
      First page: 111
      Abstract: Minor salivary gland ultra-high frequency ultrasonography (UHFUS) has recently been introduced for the evaluation of patients with suspected primary Sjögren’s Syndrome (pSS). At present, ultrasonographic assessment of major salivary glands is performed using the Outcome Measures in Rheumatology (OMERACT) scoring system. Previous reports have explored the possibility of applying the OMERACT scoring system to minor salivary glands UHFUS, with promising results. The aim of this study was to test the inter-reader concordance in the assignment of the OMERACT score to minor salivary gland UHFUS. The study was conducted on 170 minor salivary glands UHFUS scans of patients with suspected pSS. Three independent readers performed UHFUS image evaluation. Intraclass correlation coefficient (ICC) was employed to assess inter-reader reliability. Bland and Altman analysis was employed to test the agreement with a gold standard examiner. ICC values > 0.9 were found for scores 0 and 1, while score 2 and score 3 presented ICCs of 0.873 and 0.785, respectively. The measurements performed by the three examiners were in agreement with the gold standard examiner. According to these results, UHFUS interpretation showed good inter-observer reliability, suggesting that OMERACT score can be effectively used for the evaluation of glandular alterations, even for minor salivary glands.
      Citation: Journal of Imaging
      PubDate: 2022-04-15
      DOI: 10.3390/jimaging8040111
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 112: Spectral Photon-Counting Computed
           Tomography: A Review on Technical Principles and Clinical Applications

    • Authors: Mario Tortora, Laura Gemini, Imma D’Iglio, Lorenzo Ugga, Gaia Spadarella, Renato Cuocolo
      First page: 112
      Abstract: Photon-counting computed tomography (CT) is a technology that has attracted increasing interest in recent years since, thanks to new-generation detectors, it holds the promise to radically change the clinical use of CT imaging. Photon-counting detectors overcome the major limitations of conventional CT detectors by providing very high spatial resolution without electronic noise, providing a higher contrast-to-noise ratio, and optimizing spectral images. Additionally, photon-counting CT can lead to reduced radiation exposure, reconstruction of higher spatial resolution images, reduction of image artifacts, optimization of the use of contrast agents, and create new opportunities for quantitative imaging. The aim of this review is to briefly explain the technical principles of photon-counting CT and, more extensively, the potential clinical applications of this technology.
      Citation: Journal of Imaging
      PubDate: 2022-04-15
      DOI: 10.3390/jimaging8040112
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 113: Fuzzy Information Discrimination Measures
           and Their Application to Low Dimensional Embedding Construction in the
           UMAP Algorithm

    • Authors: Liliya A. Demidova, Artyom V. Gorchakov
      First page: 113
      Abstract: Dimensionality reduction techniques are often used by researchers in order to make high dimensional data easier to interpret visually, as data visualization is only possible in low dimensional spaces. Recent research in nonlinear dimensionality reduction introduced many effective algorithms, including t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection (UMAP), dimensionality reduction technique based on triplet constraints (TriMAP), and pairwise controlled manifold approximation (PaCMAP), which aim to preserve both the local and global structure of high dimensional data while reducing the dimensionality. The UMAP algorithm has found applications in bioinformatics, genetics, and genomics, and has been widely used to improve the accuracy of other machine learning algorithms. In this research, we compare the performance of different fuzzy information discrimination measures used as loss functions in the UMAP algorithm while constructing low dimensional embeddings. In order to achieve this, we derive the gradients of the considered losses analytically and employ the Adam algorithm during the loss function optimization process. From the conducted experimental studies we conclude that the use of either the logarithmic fuzzy cross entropy loss without reduced repulsion or the symmetric logarithmic fuzzy cross entropy loss with a sufficiently large neighbor count leads to better global structure preservation of the original multidimensional data when compared to the loss function used in the original UMAP algorithm implementation. A minimal numerical sketch of these fuzzy cross-entropy losses is given after this entry.
      Citation: Journal of Imaging
      PubDate: 2022-04-15
      DOI: 10.3390/jimaging8040113
      Issue No: Vol. 8, No. 4 (2022)
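
      A minimal numerical sketch of the fuzzy cross-entropy losses discussed above: p holds high-dimensional fuzzy memberships, q is computed from embedding distances with the usual UMAP low-dimensional kernel q = 1/(1 + a*d^(2b)), and the symmetric variant simply adds the two directions. The membership values and the shape parameters a and b are placeholders; this is not the authors' implementation.

      import numpy as np

      def low_dim_membership(d, a=1.0, b=1.0):
          # placeholder shape parameters; UMAP fits a and b from min_dist/spread
          return 1.0 / (1.0 + a * d ** (2.0 * b))

      def fuzzy_cross_entropy(p, q, eps=1e-12):
          p = np.clip(p, eps, 1.0 - eps)
          q = np.clip(q, eps, 1.0 - eps)
          return np.sum(p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q)))

      def symmetric_fuzzy_cross_entropy(p, q):
          return fuzzy_cross_entropy(p, q) + fuzzy_cross_entropy(q, p)

      p = np.array([0.90, 0.70, 0.10, 0.05])   # memberships in the original space
      d = np.array([0.20, 0.50, 2.00, 3.50])   # pairwise distances in the embedding
      q = low_dim_membership(d)
      print(fuzzy_cross_entropy(p, q), symmetric_fuzzy_cross_entropy(p, q))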
       
  • J. Imaging, Vol. 8, Pages 114: Resources and Power Efficient FPGA
           Accelerators for Real-Time Image Classification

    • Authors: Angelos Kyriakos, Elissaios-Alexios Papatheofanous, Charalampos Bezaitis, Dionysios Reisis
      First page: 114
      Abstract: A plethora of image- and video-related applications involve complex processes that impose the need for hardware accelerators to achieve real-time performance. Among these, notable applications include the Machine Learning (ML) tasks using Convolutional Neural Networks (CNNs) that detect objects in image frames. Aiming at contributing to the CNN accelerator solutions, the current paper focuses on the design of Field-Programmable Gate Arrays (FPGAs) for CNNs of limited feature space to improve performance, power consumption and resource utilization. The proposed design approach targets designs that can utilize the logic and memory resources of a single FPGA device and benefit mainly edge, mobile and on-board satellite (OBC) computing, especially their image-processing-related applications. This work exploits the proposed approach to develop an FPGA accelerator for vessel detection on a Xilinx Virtex 7 XC7VX485T FPGA device (Advanced Micro Devices, Inc, Santa Clara, CA, USA). The resulting architecture operates on RGB images of size 80×80 or on sliding windows; it is trained on the “Ships in Satellite Imagery” dataset and, by achieving a frequency of 270 MHz, completing inference in 0.687 ms and consuming 5 watts, it validates the approach.
      Citation: Journal of Imaging
      PubDate: 2022-04-15
      DOI: 10.3390/jimaging8040114
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 115: Human Tracking in Top-View Fisheye Images:
           Analysis of Familiar Similarity Measures via HOG and against Various Color
           Spaces

    • Authors: Hicham Talaoubrid, Marina Vert, Khizar Hayat, Baptiste Magnier
      First page: 115
      Abstract: The purpose of this paper is to find the best way to track human subjects in fisheye images by considering the most common similarity measures as a function of various color spaces as well as the HOG (Histogram of Oriented Gradients). To this end, we have relied on videos taken by a fisheye camera wherein multiple human subjects were recorded walking simultaneously, in random directions. Using an existing deep-learning method for the detection of persons in fisheye images, bounding boxes are extracted, each containing information related to a single person. Consequently, each bounding box can be described by color features, usually color histograms, or by the HOG, which relies on object shapes and contours. These descriptors do not capture the same information, and they need to be evaluated in the context of tracking in top-view fisheye images. With this in perspective, a distance is computed to compare similarities between the detected bounding boxes of two consecutive frames. To do so, we propose a rate function (S) to jointly compare and evaluate the six different color spaces and six distances, as well as the HOG. This function links inter-distance (i.e., the distance between images of the same person throughout the frames of the video) with intra-distance (i.e., the distance between images of different people throughout the frames). It enables associating a given feature descriptor (color or HOG) with a corresponding similarity function and hence deciding which is the most reliable for computing the similarity or the difference between two segmented persons. All these comparisons lead to some interesting results, as explained in the later part of the article. An OpenCV-based sketch of this kind of descriptor comparison is given after this entry.
      Citation: Journal of Imaging
      PubDate: 2022-04-16
      DOI: 10.3390/jimaging8040115
      Issue No: Vol. 8, No. 4 (2022)
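
      An OpenCV-based sketch of the kind of descriptor comparison described above: colour histograms are computed for person crops in one colour space and compared with the Bhattacharyya distance, contrasting a same-person pair with a different-person pair. The random crops and the simple comparison are illustrative stand-ins; the paper's rate function S is not reproduced here.

      import cv2
      import numpy as np

      def colour_histogram(bgr_crop, conversion=cv2.COLOR_BGR2HSV, bins=16):
          img = cv2.cvtColor(bgr_crop, conversion)
          # OpenCV 8-bit HSV stores H in [0, 180) and S, V in [0, 256)
          hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                              [0, 180, 0, 256, 0, 256])
          return cv2.normalize(hist, hist).flatten()

      def bhattacharyya(h1, h2):
          return cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)

      rng = np.random.default_rng(0)
      # stand-ins for detected crops: the same person in frames t and t+1, and another person
      same_a, same_b, other = (rng.integers(0, 255, (64, 32, 3), dtype=np.uint8)
                               for _ in range(3))
      d_same = bhattacharyya(colour_histogram(same_a), colour_histogram(same_b))
      d_diff = bhattacharyya(colour_histogram(same_a), colour_histogram(other))
      print("same-person distance:", d_same, "different-person distance:", d_diff)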
       
  • J. Imaging, Vol. 8, Pages 116: A Comparison of Dense and Sparse Optical
           Flow Techniques for Low-Resolution Aerial Thermal Imagery

    • Authors: Tran Xuan Bach Nguyen, Kent Rosser, Javaan Chahl
      First page: 116
      Abstract: It is necessary to establish the relative performance of established optical flow approaches in airborne scenarios with thermal cameras. This study investigated the performance of a dense optical flow algorithm on 14-bit radiometric images of the ground. While sparse techniques that rely on feature matching perform very well with airborne thermal data in high-contrast thermal conditions, these techniques suffer in low-contrast scenes, where there are fewer detectable and distinct features in the image. On the other hand, some dense optical flow algorithms are highly amenable to parallel processing approaches compared to those that rely on tracking and feature detection. A Long-Wave Infrared (LWIR) micro-sensor and a PX4Flow optical sensor were mounted looking downwards on a drone. We compared the optical flow signals of a representative dense optical flow technique, the Image Interpolation Algorithm (I2A), to the Lucas–Kanade (LK) algorithm in OpenCV and to the visible-light optical flow results from the PX4Flow in both X and Y displacements. The I2A was found to be generally comparable in performance to the LK and better in cold-soaked environments, while suffering from the aperture problem in some scenes.
      Citation: Journal of Imaging
      PubDate: 2022-04-16
      DOI: 10.3390/jimaging8040116
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 117: HISFCOS: Half-Inverted Stage Block for
           Efficient Object Detection Based on Deep Learning

    • Authors: Beomyeon Hwang, Sanghun Lee, Seunghyun Lee
      First page: 117
      Abstract: Recent advances in object detection play a key role in various industrial applications. However, a fully convolutional one-stage detector (FCOS), a conventional object detection method, has low detection accuracy given the calculation cost. Thus, in this study, we propose a half-inverted stage FCOS (HISFCOS) with improved detection accuracy at a computational cost comparable to that of FCOS, based on the proposed half-inverted stage (HIS) block. First, FCOS has low detection accuracy owing to low-level information loss. Therefore, an HIS block that minimizes feature loss by extracting spatial and channel information in parallel is proposed. Second, detection accuracy was improved by reconstructing the feature pyramid on the basis of the proposed block and improving the low-level information. Lastly, the improved detection head structure reduced the computational cost and the amount of computation compared to the conventional method. Through experiments, we determined the optimal HISFCOS parameters and evaluated the method on several datasets for a fair comparison. The HISFCOS was trained and evaluated using the PASCAL VOC and MSCOCO2017 datasets. Additionally, the average precision (AP) was used as an evaluation index to quantitatively evaluate detection performance. As a result of the experiment, the parameters were increased by 0.5 M compared to the conventional method, but the detection accuracy was improved by 3.0 AP and 1.5 AP on the PASCAL VOC and MSCOCO datasets, respectively. In addition, an ablation study was conducted, and the results for the proposed block and detection head were analyzed.
      Citation: Journal of Imaging
      PubDate: 2022-04-17
      DOI: 10.3390/jimaging8040117
      Issue No: Vol. 8, No. 4 (2022)
       
  • J. Imaging, Vol. 8, Pages 50: Considerations on Baseline Generation for
           Imaging AI Studies Illustrated on the CT-Based Prediction of Empyema and
           Outcome Assessment

    • Authors: Raphael Sexauer, Bram Stieltjes, Jens Bremerich, Tugba Akinci D’Antonoli, Noemi Schmidt
      First page: 50
      Abstract: For AI-based classification tasks in computed tomography (CT), a reference standard for evaluating the clinical diagnostic accuracy of individual classes is essential. To enable the implementation of an AI tool in clinical practice, the raw data should be drawn from clinical routine data using state-of-the-art scanners, evaluated in a blinded manner and verified with a reference test. Three hundred and thirty-five consecutive CTs, performed between 1 January 2016 and 1 January 2021 with reported pleural effusion and pathology reports from thoracocentesis or biopsy within 7 days of the CT, were retrospectively included. Two radiologists (4 and 10 PGY) blindly assessed the chest CTs for pleural CT features. If needed, consensus was achieved using an experienced radiologist’s opinion (29 PGY). In addition, diagnoses were extracted from written radiological reports. We analyzed these findings for a possible correlation with the following patient outcomes: mortality and median hospital stay. For AI prediction, we used an approach consisting of nnU-Net segmentation, PyRadiomics features and a random forest model. Specificity and sensitivity for CT-based detection of empyema (n = 81 of n = 335 patients) were 90.94% (95% CI: 86.55–94.05%) and 72.84% (95% CI: 61.63–81.85%) in all effusions, with moderate to almost perfect interrater agreement for all pleural findings associated with empyema (Cohen’s kappa = 0.41–0.82). The highest accuracies were found for pleural enhancement or thickening, with 87.02% and 81.49%, respectively. For empyema prediction, AI achieved a specificity and sensitivity of 74.41% (95% CI: 68.50–79.57%) and 77.78% (95% CI: 66.91–85.96%), respectively. Empyema was associated with a longer hospital stay (median = 20 versus 14 days), and findings consistent with pleural carcinomatosis impacted mortality.
      Citation: Journal of Imaging
      PubDate: 2022-02-22
      DOI: 10.3390/jimaging8030050
      Issue No: Vol. 8, No. 3 (2022)
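
The AI pipeline described above chains nnU-Net segmentation, PyRadiomics feature extraction, and a random forest. The sketch below illustrates only the final classification stage on an already-extracted feature table (the CSV file name and "empyema" label column are assumptions), reporting cross-validated sensitivity and specificity with scikit-learn.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical table: one row per patient, radiomics features plus a binary label.
df = pd.read_csv("pleural_radiomics_features.csv")
X = df.drop(columns=["empyema"]).values
y = df["empyema"].values

clf = RandomForestClassifier(n_estimators=500, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)   # 5-fold cross-validated predictions

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
```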
       
  • J. Imaging, Vol. 8, Pages 51: Diagnosis of Vertical Root Fractures in
           Endodontically Treated Teeth by Cone-Beam Computed Tomography

    • Authors: Fumi Mizuhashi, Yuko Watarai, Ichiro Ogura
      First page: 51
      Abstract: The purpose of this study was to investigate the characteristics and the detection ability of vertical root fractures in endodontically treated teeth by intraoral radiography and cone-beam computed tomography (CBCT). CBCT images of 50 patients with root fractures in endodontically treated teeth were reviewed, and 36 vertical root fractures were included in this study. The cause of fracture, core construction, kind of teeth, and fracture direction (bucco-lingual and mesio-distal fractures) were investigated. The detection ability of vertical root fractures by intraoral radiography and CBCT was also examined. Statistical analyses concerning the characteristics were performed with the χ2 test, and the detection ability was analyzed by cross-tabulation. All of the fractured teeth were nontraumatized teeth. The occurrence of vertical root fractures did not differ by core construction. The number of vertical root fractures was largest in premolar teeth (p = 0.005), and the number of bucco-lingual fractures was larger than that of mesio-distal fractures (p = 0.046). Vertical root fractures were detectable using CBCT, while they were undetectable by intraoral radiography (p < 0.001). Vertical root fractures occurred most readily in premolar teeth in the bucco-lingual direction, and CBCT is an adequate radiographic method to diagnose vertical root fractures.
      Citation: Journal of Imaging
      PubDate: 2022-02-23
      DOI: 10.3390/jimaging8030051
      Issue No: Vol. 8, No. 3 (2022)
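
The characteristics in this study are compared with a χ² test. A minimal sketch with an illustrative contingency table (the counts below are made up, not the study's data) using SciPy:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x3 contingency table:
# rows    = fracture direction (bucco-lingual, mesio-distal)
# columns = tooth type (anterior, premolar, molar)
table = [[4, 14, 6],
         [3,  5, 4]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```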
       
  • J. Imaging, Vol. 8, Pages 52: Qualitative Comparison of Image Stitching
           Algorithms for Multi-Camera Systems in Laparoscopy

    • Authors: Sylvain Guy, Jean-Loup Haberbusch, Emmanuel Promayon, Stéphane Mancini, Sandrine Voros
      First page: 52
      Abstract: Multi-camera systems were recently introduced into laparoscopy to increase the narrow field of view of the surgeon. The video streams are stitched together to create a panorama that is easier for the surgeon to comprehend. Multi-camera prototypes for laparoscopy use quite basic algorithms and have only been evaluated on simple laparoscopic scenarios. The more recent state-of-the-art algorithms, mainly designed for the smartphone industry, have not yet been evaluated in laparoscopic conditions. We developed a simulated environment to generate a dataset of multi-view images displaying a wide range of laparoscopic situations, which is adaptable to any multi-camera system. We evaluated classical and state-of-the-art image stitching techniques used in non-medical applications on this dataset, including one unsupervised deep learning approach. We show that classical techniques that use global homography fail to provide a clinically satisfactory rendering and that even the most recent techniques, despite providing high quality panorama images in non-medical situations, may suffer from poor alignment or severe distortions in simulated laparoscopic scenarios. We highlight the main advantages and flaws of each algorithm within a laparoscopic context, identify the main remaining challenges that are specific to laparoscopy, and propose methods to improve these approaches. We provide public access to the simulated environment and dataset.
      Citation: Journal of Imaging
      PubDate: 2022-02-23
      DOI: 10.3390/jimaging8030052
      Issue No: Vol. 8, No. 3 (2022)
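
A classical global-homography stitch, of the kind the study above finds clinically unsatisfactory, can be reproduced in a few lines of OpenCV. The sketch below is a generic ORB + RANSAC pipeline on two placeholder camera views, not the authors' evaluated implementations.

```python
import cv2
import numpy as np

left = cv2.imread("cam_left.png")     # placeholder multi-camera views
right = cv2.imread("cam_right.png")
g1 = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# Detect and match ORB keypoints between the two views.
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(g1, None)
k2, d2 = orb.detectAndCompute(g2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

# Estimate one global homography with RANSAC and warp the right view onto the left.
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = left.shape[:2]
panorama = cv2.warpPerspective(right, H, (w * 2, h))
panorama[:, :w] = left
cv2.imwrite("panorama.png", panorama)
```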
       
  • J. Imaging, Vol. 8, Pages 53: A Survey of 6D Object Detection Based on 3D
           Models for Industrial Applications

    • Authors: Felix Gorschlüter, Pavel Rojtberg, Thomas Pöllabauer
      First page: 53
      Abstract: Six-dimensional object detection of rigid objects is a problem especially relevant for quality control and robotic manipulation in industrial contexts. This work is a survey of the state of the art of 6D object detection with these use cases in mind, specifically focusing on algorithms trained only with 3D models or renderings thereof. Our first contribution is a listing of requirements typically encountered in industrial applications. The second contribution is a collection of quantitative evaluation results for several different 6D object detection methods trained with synthetic data and the comparison and analysis thereof. We identify the top methods for individual requirements that industrial applications have for object detectors, but find that a lack of comparable data prevents large-scale comparison over multiple aspects.
      Citation: Journal of Imaging
      PubDate: 2022-02-24
      DOI: 10.3390/jimaging8030053
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 54: Monochrome Camera Conversion: Effect on
           Sensitivity for Multispectral Imaging (Ultraviolet, Visible, and Infrared)
           

    • Authors: Jonathan Crowther
      First page: 54
      Abstract: Conversion of standard cameras to enable them to capture images in the ultraviolet (UV) and infrared (IR) spectral regions has applications ranging from purely artistic to science and research. Taking the modification of the camera a step further and removing the color filter array (CFA) results in the formation of a monochrome camera. The spectral sensitivities of a range of cameras with different sensors which were converted to monochrome were measured and compared with standard multispectral camera conversions, with an emphasis on their behavior from the UV through to the IR regions.
      Citation: Journal of Imaging
      PubDate: 2022-02-25
      DOI: 10.3390/jimaging8030054
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 55: Kidney Tumor Semantic Segmentation Using
           Deep Learning: A Survey of State-of-the-Art

    • Authors: Abubaker Abdelrahman, Serestina Viriri
      First page: 55
      Abstract: Cure rates for kidney cancer vary according to stage and grade; hence, accurate diagnostic procedures for early detection and diagnosis are crucial. Some difficulties with manual segmentation have necessitated the use of deep learning models to assist clinicians in effectively recognizing and segmenting tumors. Deep learning (DL), particularly convolutional neural networks, has produced outstanding success in classifying and segmenting images. Simultaneously, researchers in the field of medical image segmentation employ DL approaches to solve problems such as tumor segmentation, cell segmentation, and organ segmentation. Semantic segmentation of tumors is critical in radiation and therapeutic practice. This article discusses current advances in kidney tumor segmentation systems based on DL. We discuss the various types of medical images and segmentation techniques and the assessment criteria for segmentation outcomes in kidney tumor segmentation, highlighting their building blocks and various strategies.
      Citation: Journal of Imaging
      PubDate: 2022-02-25
      DOI: 10.3390/jimaging8030055
      Issue No: Vol. 8, No. 3 (2022)
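
Segmentation surveys of this kind typically report overlap metrics such as the Dice coefficient and IoU; a minimal, self-contained sketch of both on binary masks (toy masks only):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

# Toy example: two overlapping square masks.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[20:50, 20:50] = True
print(f"Dice = {dice(a, b):.3f}, IoU = {iou(a, b):.3f}")
```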
       
  • J. Imaging, Vol. 8, Pages 56: Principal Component Analysis versus
           Subject’s Residual Profile Analysis for Neuroinflammation
           Investigation in Parkinson Patients: A PET Brain Imaging Study

    • Authors: Rostom Mabrouk
      First page: 56
      Abstract: Dysfunction of neurons in the central nervous system is the primary pathological feature of Parkinson’s disease (PD). Despite different triggering, emerging evidence indicates that neuroinflammation revealed through microglia activation is critical for PD. Moreover, recent investigations sought a potential relationship between Lrrk2 genetic mutation and microglia activation. In this paper, neuroinflammation in sporadic PD, Lrrk2-PD and unaffected Lrrk2 mutation carriers was investigated. The principal component analysis (PCA) and the subject’s residual profile (SRP) techniques were performed on multiple groups and regions of interest in 22 brain regions. The 11C-PBR28 binding profiles were compared across four genotypes depending on group, i.e., HC, sPD, Lrrk2-PD and UC, using the PCA and SRP scores. The genotype effect was found to be a principal feature of group-dependent 11C-PBR28 binding, and preliminary evidence of a MAB-Lrrk2 mutation interaction in manifest Parkinson’s and subjects at risk was found.
      Citation: Journal of Imaging
      PubDate: 2022-02-25
      DOI: 10.3390/jimaging8030056
      Issue No: Vol. 8, No. 3 (2022)
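
The study compares PCA component scores across genotype groups computed from regional 11C-PBR28 binding in 22 regions of interest. The sketch below shows only the generic PCA step on a hypothetical subjects-by-regions matrix (the data and group sizes are synthetic), not the SRP method itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical binding matrix: 40 subjects x 22 regions of interest.
binding = rng.normal(size=(40, 22))
groups = np.repeat(["HC", "sPD", "Lrrk2-PD", "UC"], 10)

# Standardize regions, project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(binding))
for g in np.unique(groups):
    print(g, scores[groups == g].mean(axis=0).round(2))   # mean PC scores per group
```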
       
  • J. Imaging, Vol. 8, Pages 57: PRNU-Based Video Source Attribution: Which
           Frames Are You Using?

    • Authors: Pasquale Ferrara, Massimo Iuliani, Alessandro Piva
      First page: 57
      Abstract: Photo Response Non-Uniformity (PRNU) is reputed to be the most successful trace for identifying the source of a digital video. However, its effectiveness is mainly limited by compression and by the electronic image stabilization recently introduced on several devices. In the last decade, several approaches were proposed to overcome both these issues, mainly by selecting those video frames which are considered more informative. However, the two problems were always treated separately, and the combined effect of compression and digital stabilization was never considered. This separated analysis makes it hard to understand whether the conclusions reached still stand for digitally stabilized videos and whether those choices represent a generally optimal strategy to perform video source attribution. In this paper, we explore whether an optimal strategy exists in selecting frames based on their type and their positions within the groups of pictures. We therefore systematically analyze the PRNU contribution provided by all frames belonging to either digitally stabilized or non-stabilized videos. Results on the VISION dataset provide some insights into optimizing video source attribution in different use cases.
      Citation: Journal of Imaging
      PubDate: 2022-02-25
      DOI: 10.3390/jimaging8030057
      Issue No: Vol. 8, No. 3 (2022)
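
PRNU-based attribution rests on averaging noise residuals over many frames to estimate a camera fingerprint and then correlating a test residual against it. The sketch below uses a simple Gaussian denoiser as a crude stand-in for the wavelet-based filters used in practice, and synthetic frames instead of decoded video; it is only a structural illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Residual = frame minus its denoised version (crude PRNU proxy)."""
    frame = frame.astype(np.float64)
    return frame - gaussian_filter(frame, sigma=2)

def fingerprint(frames) -> np.ndarray:
    """Average residuals over many frames to estimate the sensor fingerprint."""
    return np.mean([noise_residual(f) for f in frames], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage with synthetic frames sharing one additive PRNU pattern.
rng = np.random.default_rng(1)
prnu = rng.normal(scale=0.5, size=(64, 64))
frames = [rng.normal(loc=128, scale=5, size=(64, 64)) + prnu for _ in range(30)]
test = rng.normal(loc=128, scale=5, size=(64, 64)) + prnu
print(f"correlation with fingerprint: {ncc(fingerprint(frames), noise_residual(test)):.3f}")
```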
       
  • J. Imaging, Vol. 8, Pages 58: Scanning Hyperspectral Imaging for In Situ
           Biogeochemical Analysis of Lake Sediment Cores: Review of Recent
           Developments

    • Authors: Paul D. Zander, Giulia Wienhues, Martin Grosjean
      First page: 58
      Abstract: Hyperspectral imaging (HSI) in situ core scanning has emerged as a valuable and novel tool for rapid and non-destructive biogeochemical analysis of lake sediment cores. Variations in sediment composition can be assessed directly from fresh sediment surfaces at ultra-high-resolution (40–300 μm measurement resolution) based on spectral profiles of light reflected from sediments in visible, near infrared, and short-wave infrared wavelengths (400–2500 nm). Here, we review recent methodological developments in this new and growing field of research, as well as applications of this technique for paleoclimate and paleoenvironmental studies. Hyperspectral imaging of sediment cores has been demonstrated to effectively track variations in sedimentary pigments, organic matter, grain size, minerogenic components, and other sedimentary features. These biogeochemical variables record information about past climatic conditions, paleoproductivity, past hypolimnetic anoxia, aeolian input, volcanic eruptions, earthquake and flood frequencies, and other variables of environmental relevance. HSI has been applied to study seasonal and inter-annual environmental variability as recorded in individual varves (annually laminated sediments) or to study sedimentary records covering long glacial–interglacial time-scales (>10,000 years).
      Citation: Journal of Imaging
      PubDate: 2022-02-25
      DOI: 10.3390/jimaging8030058
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 59: Glossiness Index of Objects in Halftone
           Color Images Based on Structure and Appearance Distortion

    • Authors: Donghui Li, Midori Tanaka, Takahiko Horiuchi
      First page: 59
      Abstract: This paper proposes an objective glossiness index for objects in halftone color images. In the proposed index, we consider the characteristics of the human visual system (HVS) and associate the image’s structure distortion and statistical information. According to the difference in the number of strategies adopted by the HVS in judging the difference between images, it is divided into single and multi-strategy modeling. In this study, we advocate multiple strategies to determine glossy or non-glossy quality. We assumed that HVS used different visual mechanisms to evaluate glossy and non-glossy objects. For non-glossy images, the image structure dominated, so the HVS tried to use structural information to judge distortion (a strategy based on structural distortion detection). For glossy images, the glossy appearance dominated; thus, the HVS tried to search for the glossiness difference (an appearance-based strategy). Herein, we present an index for glossiness assessment that attempts to explicitly model the structural dissimilarity and appearance distortion. We used the contrast sensitivity function to account for the mechanism of halftone images when viewed by the human eye. We estimated the structure distortion for the first strategy by using local luminance and contrast masking; meanwhile, local statistics changing in the spatial frequency components for skewness and standard deviation were used to estimate the appearance distortion for the second strategy. Experimental results showed that these two mixed-distortion measurement strategies performed well in consistency with the subjective ratings of glossiness in halftone color images.
      Citation: Journal of Imaging
      PubDate: 2022-02-27
      DOI: 10.3390/jimaging8030059
      Issue No: Vol. 8, No. 3 (2022)
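
The appearance-based strategy described above relies on local statistics (standard deviation and skewness) of spatial-frequency content. The sketch below computes those two statistics over non-overlapping tiles of a grayscale image; the random image is only a stand-in for a halftone luminance channel.

```python
import numpy as np
from scipy.stats import skew

def local_stats(img: np.ndarray, win: int = 16):
    """Standard deviation and skewness over non-overlapping win x win tiles."""
    h, w = img.shape
    tiles = img[:h - h % win, :w - w % win].reshape(h // win, win, w // win, win)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, win * win)
    return tiles.std(axis=1), skew(tiles, axis=1)

rng = np.random.default_rng(0)
img = rng.random((128, 128))              # stand-in for a halftone luminance channel
std_map, skew_map = local_stats(img)
print(std_map.shape, skew_map.shape)      # (64,) (64,)
```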
       
  • J. Imaging, Vol. 8, Pages 60: Hierarchical Fusion Using Subsets of
           Multi-Features for Historical Arabic Manuscript Dating

    • Authors: Kalthoum Adam, Somaya Al-Maadeed, Younes Akbari
      First page: 60
      Abstract: Automatic dating tools for historical documents can greatly assist paleographers and save them time and effort. This paper describes a novel method for estimating the date of historical Arabic documents that employs hierarchical fusion of multiple features. A set of traditional features and features extracted by a residual network (ResNet) are fused in a hierarchical approach using joint sparse representation. To address noise during the fusion process, a new approach based on subsets of multiple features is considered. Following that, supervised and unsupervised classifiers are used for classification. We show that hierarchical fusion based on subsets of multiple features produces promising results on the KERTAS dataset and significantly improves performance.
      Citation: Journal of Imaging
      PubDate: 2022-03-01
      DOI: 10.3390/jimaging8030060
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 61: Iterative Multiple Bounding-Box Refinements
           for Visual Tracking

    • Authors: Giorgio Cruciata, Liliana Lo Presti, Marco La Cascia
      First page: 61
      Abstract: Single-object visual tracking aims at locating a target in each video frame by predicting the bounding box of the object. Recent approaches have adopted iterative procedures to gradually refine the bounding box and locate the target in the image. In such approaches, the deep model takes as input the image patch corresponding to the currently estimated target bounding box, and provides as output the probability associated with each of the possible bounding box refinements, generally defined as a discrete set of linear transformations of the bounding box center and size. At each iteration, only one transformation is applied, and supervised training of the model may introduce an inherent ambiguity by giving priority to some transformations over others. This paper proposes a novel formulation of the problem of selecting the bounding box refinement. It introduces the concept of non-conflicting transformations and allows applying multiple refinements to the target bounding box at each iteration without introducing ambiguities during learning of the model parameters. Empirical results demonstrate that the proposed approach improves upon iterative single refinement in terms of accuracy and precision of the tracking results.
      Citation: Journal of Imaging
      PubDate: 2022-03-03
      DOI: 10.3390/jimaging8030061
      Issue No: Vol. 8, No. 3 (2022)
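
The refinement loop described above repeatedly applies discrete transformations (shifts and scalings of the box center and size). The sketch below applies one refinement per non-conflicting group at each iteration; the transformation set, the grouping, and the toy scoring function are all illustrative assumptions standing in for the deep model, not the authors' formulation.

```python
# A box is (cx, cy, w, h). Each refinement is a small linear transformation.
REFINEMENTS = {
    "left":   lambda b: (b[0] - 0.05 * b[2], b[1], b[2], b[3]),
    "right":  lambda b: (b[0] + 0.05 * b[2], b[1], b[2], b[3]),
    "up":     lambda b: (b[0], b[1] - 0.05 * b[3], b[2], b[3]),
    "down":   lambda b: (b[0], b[1] + 0.05 * b[3], b[2], b[3]),
    "grow":   lambda b: (b[0], b[1], 1.1 * b[2], 1.1 * b[3]),
    "shrink": lambda b: (b[0], b[1], 0.9 * b[2], 0.9 * b[3]),
}
# Horizontal, vertical, and scale refinements do not conflict with one another.
NON_CONFLICTING = [("left", "right"), ("up", "down"), ("grow", "shrink")]

TARGET = (60.0, 55.0, 24.0, 28.0)   # hypothetical ground-truth box, for the toy score

def score(box):
    """Placeholder for the deep model's confidence: closeness to the toy target."""
    return -sum(abs(a - b) for a, b in zip(box, TARGET))

def refine(box, iterations=10):
    for _ in range(iterations):
        # In each group, keep the best refinement (if it helps) and apply them jointly.
        for group in NON_CONFLICTING:
            candidates = {name: REFINEMENTS[name](box) for name in group}
            best = max(candidates, key=lambda n: score(candidates[n]))
            if score(candidates[best]) > score(box):
                box = candidates[best]
    return box

print(refine((50.0, 50.0, 20.0, 30.0)))
```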
       
  • J. Imaging, Vol. 8, Pages 62: A Real-Time Method for Time-to-Collision
           Estimation from Aerial Images

    • Authors: Daniel Tøttrup, Stinus Lykke Skovgaard, Jonas le Fevre Sejersen, Rui Pimentel de Figueiredo
      First page: 62
      Abstract: When large vessels such as container ships are approaching their destination port, they are required by law to have a maritime pilot on board responsible for safely navigating the vessel to its desired location. The maritime pilot has extensive knowledge of the local area and how currents and tides affect the vessel’s navigation. In this work, we present a novel end-to-end solution for estimating the time-to-collision (TTC) between moving objects (i.e., vessels), using real-time image streams from aerial drones in dynamic maritime environments. Our method relies on deep features, which are learned using realistic simulation data, for reliable and robust object detection, segmentation, and tracking. Furthermore, our method uses rotated bounding box representations, which are computed by taking advantage of pixel-level object segmentation, for enhanced TTC estimation accuracy. We present collision estimates in an intuitive manner, as collision arrows that gradually change their color to red to indicate an imminent collision. A set of experiments in a realistic shipyard simulation environment demonstrate that our method can accurately, robustly, and quickly predict the TTC between dynamic objects seen from a top view, with a mean error and a standard deviation of 0.358 and 0.114 s, respectively, in a worst-case scenario.
      Citation: Journal of Imaging
      PubDate: 2022-03-03
      DOI: 10.3390/jimaging8030062
      Issue No: Vol. 8, No. 3 (2022)
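
Once objects are tracked, TTC can be approximated as the current separation divided by the closing speed along the line between the objects. A minimal sketch with illustrative positions and velocities (values are made up, e.g. derived from tracked centroids):

```python
import numpy as np

def time_to_collision(p_a, p_b, v_a, v_b):
    """TTC ~= separation / closing speed along the line joining the two objects."""
    rel_pos = np.asarray(p_b, float) - np.asarray(p_a, float)
    rel_vel = np.asarray(v_b, float) - np.asarray(v_a, float)
    dist = np.linalg.norm(rel_pos)
    closing_speed = -np.dot(rel_vel, rel_pos) / (dist + 1e-9)
    return np.inf if closing_speed <= 0 else dist / closing_speed

# Illustrative values in metres and metres/second.
print(f"TTC = {time_to_collision((0, 0), (40, 10), (5, 0), (-3, 0)):.2f} s")
```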
       
  • J. Imaging, Vol. 8, Pages 63: An Exploration of Pathologies of Multilevel
           Principal Components Analysis in Statistical Models of Shape

    • Authors: Damian J. J. Farnell
      First page: 63
      Abstract: 3D facial surface imaging is a useful tool in dentistry for diagnostics and treatment planning. Between-group PCA (bgPCA) is a method that has been used to analyse shapes in biological morphometrics, although various “pathologies” of bgPCA have recently been proposed. Monte Carlo (MC) simulated datasets were created here in order to explore “pathologies” of multilevel PCA (mPCA), where mPCA with two levels is equivalent to bgPCA. The first set of MC experiments involved 300 uncorrelated normally distributed variables, whereas the second set of MC experiments used correlated multivariate MC data describing 3D facial shape. We confirmed results of numerical experiments from other researchers indicating that bgPCA (and so also mPCA) can give a false impression of strong differences in component scores between groups when none exist in reality. These spurious differences in component scores via mPCA decreased significantly as the sample sizes per group were increased. Eigenvalues via mPCA were also found to be strongly affected by imbalances in sample sizes per group, although this problem was removed by using weighted forms of covariance matrices suggested by the maximum likelihood solution of the two-level model. However, this did not solve the problem of spurious differences between groups in these simulations, which were driven by very small sample sizes in one group. As a “rule of thumb” only, all of our experiments indicate that reasonable results are obtained when sample sizes per group in all groups are at least equal to the number of variables. Interestingly, the sum of all eigenvalues over both levels via mPCA scaled approximately linearly with the inverse of the sample size per group in all experiments. Finally, between-group variation was added explicitly to the MC data generation model in two experiments considered here. Results for the sum of all eigenvalues via mPCA predicted the asymptotic amount of the total variance correctly in this case, whereas standard “single-level” PCA underestimated this quantity.
      Citation: Journal of Imaging
      PubDate: 2022-03-04
      DOI: 10.3390/jimaging8030063
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 64: Rethinking Weight Decay for Efficient Neural
           Network Pruning

    • Authors: Hugo Tessier, Vincent Gripon, Mathieu Léonardon, Matthieu Arzel, Thomas Hannagan, David Bertrand
      First page: 64
      Abstract: Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
      Citation: Journal of Imaging
      PubDate: 2022-03-04
      DOI: 10.3390/jimaging8030064
      Issue No: Vol. 8, No. 3 (2022)
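
SWD adds, at every training step, an extra decay term that acts only on the weights the pruning criterion would remove. The PyTorch sketch below penalizes the smallest-magnitude fraction of each weight tensor; it is a simplified reading of the idea (fixed magnitude criterion, constant strength), not the authors' code, and the toy model and data are placeholders.

```python
import torch

def selective_weight_decay(model: torch.nn.Module, prune_ratio: float = 0.5,
                           strength: float = 1e-2) -> torch.Tensor:
    """Extra L2 penalty applied only to the weights targeted for pruning."""
    penalty = torch.tensor(0.0)
    for p in model.parameters():
        if p.dim() < 2:                      # skip biases / norm parameters
            continue
        flat = p.abs().flatten()
        k = int(prune_ratio * flat.numel())
        if k == 0:
            continue
        threshold = torch.kthvalue(flat, k).values
        mask = p.abs() <= threshold          # weights that pruning would remove
        penalty = penalty + strength * (p[mask] ** 2).sum()
    return penalty

# Usage inside a training step (toy model and batch; loss = task loss + SWD term).
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y) + selective_weight_decay(model)
loss.backward()
```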
       
  • J. Imaging, Vol. 8, Pages 65: Review of Machine Learning in Lung
           Ultrasound in COVID-19 Pandemic

    • Authors: Jing Wang, Xiaofeng Yang, Boran Zhou, James J. Sohn, Jun Zhou, Jesse T. Jacob, Kristin A. Higgins, Jeffrey D. Bradley, Tian Liu
      First page: 65
      Abstract: Ultrasound imaging of the lung has played an important role in managing patients with COVID-19–associated pneumonia and acute respiratory distress syndrome (ARDS). During the COVID-19 pandemic, lung ultrasound (LUS) or point-of-care ultrasound (POCUS) has been a popular diagnostic tool due to its unique imaging capability and logistical advantages over chest X-ray and CT. Pneumonia/ARDS is associated with the sonographic appearances of pleural line irregularities and B-line artefacts, which are caused by interstitial thickening and inflammation, and increase in number with severity. Artificial intelligence (AI), particularly machine learning, is increasingly used as a critical tool that assists clinicians in LUS image reading and COVID-19 decision making. We conducted a systematic review from academic databases (PubMed and Google Scholar) and preprints on arXiv or TechRxiv of the state-of-the-art machine learning technologies for LUS images in COVID-19 diagnosis. Openly accessible LUS datasets are listed. Various machine learning architectures have been employed to evaluate LUS and showed high performance. This paper will summarize the current development of AI for COVID-19 management and the outlook for emerging trends of combining AI-based LUS with robotics, telehealth, and other techniques.
      Citation: Journal of Imaging
      PubDate: 2022-03-05
      DOI: 10.3390/jimaging8030065
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 66: An Empirical Evaluation of Convolutional
           Networks for Malaria Diagnosis

    • Authors: Andrea Loddo, Corrado Fadda, Cecilia Di Ruberto
      First page: 66
      Abstract: Malaria is a globally widespread disease caused by parasitic protozoa transmitted to humans by infected female mosquitoes of Anopheles. It is caused in humans only by the parasite Plasmodium, further classified into four different species. Identifying malaria parasites is possible by analysing digital microscopic blood smears, which is tedious, time-consuming and error-prone. Thus, automation of the process has assumed great importance, as it eases the laborious manual process of review and diagnosis. This work focuses on deep learning-based models, by comparing off-the-shelf architectures for classifying healthy and parasite-affected cells, by investigating the four-class classification on the Plasmodium falciparum stages of life and, finally, by evaluating the robustness of the models with cross-dataset experiments on two different datasets. The main contributions to the research in this field can be summarized as follows: (i) comparing off-the-shelf architectures in the task of classifying healthy and parasite-affected cells, (ii) investigating the four-class classification on the P. falciparum stages of life and (iii) evaluating the robustness of the models with cross-dataset experiments. Eleven well-known convolutional neural networks were evaluated on two public datasets. The results show that the networks achieve high accuracy in binary classification, even when few samples per class are available. Moreover, the cross-dataset experiments show the need for further adjustments. In particular, ResNet-18 achieved up to 97.68% accuracy in the binary classification, while DenseNet-201 reached 99.40% accuracy on the multiclass classification. The cross-dataset experiments exhibit the limitations of deep learning approaches in such a scenario, even though combining the two datasets permitted DenseNet-201 to reach 97.45% accuracy. Naturally, this needs further investigation to improve the robustness. In general, DenseNet-201 seems to offer the most stable and robust performance, making it a crucial candidate for further developments and modifications. Moreover, the mobile-oriented architectures showed promising and satisfactory performance in the classification of malaria parasites. The obtained results enable extensive improvements, specifically oriented to the application of object detectors for type and stage-of-life recognition, even in mobile environments.
      Citation: Journal of Imaging
      PubDate: 2022-03-07
      DOI: 10.3390/jimaging8030066
      Issue No: Vol. 8, No. 3 (2022)
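
The evaluated networks are off-the-shelf CNNs fine-tuned for the parasitized-versus-healthy task. A minimal transfer-learning sketch with torchvision's ResNet-18; the directory layout, batch size, and single training epoch are assumptions for illustration only.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: cells/{parasitized,uninfected}/*.png
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("cells", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)          # binary classification head
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:                          # one illustrative epoch
    optim.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optim.step()
```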
       
  • J. Imaging, Vol. 8, Pages 67: Perceptually Optimal Color Representation of
           Fully Polarimetric SAR Imagery

    • Authors: Georgia Koukiou
      First page: 67
      Abstract: The four bands of fully polarimetric SAR data convey scattering characteristics of the Earth’s background, but perceptually are not very easy for an observer to use. In this work, the four different channels of fully polarimetric SAR images, namely HH, HV, VH, and VV, are combined so that a color image of the Earth’s background is derived that is perceptually excellent for the human eye and at the same time provides accurate information regarding the scattering mechanisms in each pixel. Most of the elementary scattering mechanisms are related to specific color and land cover types. The innovative nature of the proposed approach is due to the two different consecutive coloring procedures. The first one is a fusion procedure that moves all the information contained in the four polarimetric channels into three derived RGB bands. This is achieved by means of Cholesky decomposition and brings to the RGB output the correlation properties of a natural color image. The second procedure moves the color information of the RGB image to the CIELab color space, which is perceptually uniform. The color information is then evenly distributed by means of color equalization in the CIELab color space. After that, the inverse procedure to obtain the final RGB image is performed. These two procedures bring the PolSAR information regarding the scattering mechanisms on the Earth’s surface onto a meaningful color image, the appearance of which is close to Google Earth maps. Simultaneously, they give better color correspondence to various land cover types compared with existing SAR color representation methods.
      Citation: Journal of Imaging
      PubDate: 2022-03-07
      DOI: 10.3390/jimaging8030067
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 68: Photo2Video: Semantic-Aware Deep
           Learning-Based Video Generation from Still Content

    • Authors: Paula Viana, Maria Teresa Andrade, Pedro Carvalho, Luis Vilaça, Inês N. Teixeira, Tiago Costa, Pieter Jonker
      First page: 68
      Abstract: Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities are yet to be explored by bringing the use of ML into the multimedia creative process, allowing the knowledge inferred by the former to influence automatically how new multimedia content is created. The work presented in this article provides contributions in three distinct ways towards this goal: firstly, it proposes a methodology to re-train popular neural network models in identifying new thematic concepts in static visual content and attaching meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects in a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow by offering to the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches of creating random movements, by implementing an intelligent content- and context-aware video.
      Citation: Journal of Imaging
      PubDate: 2022-03-10
      DOI: 10.3390/jimaging8030068
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 69: Seamless Copy–Move Replication in
           Digital Images

    • Authors: Tanzeela Qazi, Mushtaq Ali, Khizar Hayat, Baptiste Magnier
      First page: 69
      Abstract: The importance and relevance of digital-image forensics has attracted researchers to establish different techniques for creating and detecting forgeries. The core category in passive image forgery is copy–move image forgery, which affects the originality of an image by applying different transformations. In this paper, a frequency-domain image-manipulation method is presented. The method exploits the localized nature of the discrete wavelet transform (DWT) to attain the region of the host image to be manipulated. Both the patch and host image are subjected to DWT at the same level l to obtain 3l+1 sub-bands, and each sub-band of the patch is pasted to the identified region in the corresponding sub-band of the host image. The resulting manipulated host sub-bands are then subjected to inverse DWT to obtain the final manipulated host image. The proposed method shows good resistance against detection by two frequency-domain forgery detection methods from the literature. The purpose of this research work is to create a forgery and highlight the need to produce forgery detection methods that are robust against malicious copy–move forgery.
      Citation: Journal of Imaging
      PubDate: 2022-03-10
      DOI: 10.3390/jimaging8030069
      Issue No: Vol. 8, No. 3 (2022)
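
The manipulation pastes each wavelet sub-band of the patch into the corresponding sub-band of the host before inverting the transform. A single-level grayscale sketch with PyWavelets; the file names, wavelet, and paste location are placeholders, and this is a simplified illustration rather than the paper's exact procedure.

```python
import numpy as np
import pywt
from imageio.v3 import imread, imwrite

host = imread("host.png").astype(float)     # placeholder images
patch = imread("patch.png").astype(float)
if host.ndim == 3:
    host = host.mean(axis=2)                # work on a single channel
if patch.ndim == 3:
    patch = patch.mean(axis=2)

# Level-1 DWT of both images: approximation + (horizontal, vertical, diagonal).
hA, (hH, hV, hD) = pywt.dwt2(host, "haar")
pA, (pH, pV, pD) = pywt.dwt2(patch, "haar")

# Paste every patch sub-band at the same offset in the host sub-bands.
r, c = 10, 10                               # placeholder target location (sub-band coords)
for hb, pb in ((hA, pA), (hH, pH), (hV, pV), (hD, pD)):
    hb[r:r + pb.shape[0], c:c + pb.shape[1]] = pb

forged = pywt.idwt2((hA, (hH, hV, hD)), "haar")
imwrite("forged.png", np.clip(forged, 0, 255).astype(np.uint8))
```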
       
  • J. Imaging, Vol. 8, Pages 70: A Novel Deep-Learning-Based Framework for
           the Classification of Cardiac Arrhythmia

    • Authors: Sonain Jamil, MuhibUr Rahman
      First page: 70
      Abstract: Cardiovascular diseases (CVDs) are the primary cause of death. Every year, many people die due to heart attacks. The electrocardiogram (ECG) signal plays a vital role in diagnosing CVDs. ECG signals provide us with information about the heartbeat. ECGs can detect cardiac arrhythmia. In this article, a novel deep-learning-based approach is proposed to classify ECG signals as normal or into sixteen arrhythmia classes. The ECG signal is preprocessed and converted into a 2D signal using the continuous wavelet transform (CWT). The time–frequency domain representation of the CWT is given to a deep convolutional neural network (D-CNN) with an attention block to extract the spatial features vector (SFV). The attention block is proposed to capture global features. For dimensionality reduction of the SFV, a novel clump of features (CoF) framework is proposed. K-fold cross-validation is applied to obtain the reduced feature vector (RFV), and the RFV is given to the classifier to classify the arrhythmia class. The proposed framework achieves 99.84% accuracy with 100% sensitivity and 99.6% specificity. The proposed algorithm outperforms state-of-the-art techniques in accuracy, F1-score, and sensitivity.
      Citation: Journal of Imaging
      PubDate: 2022-03-10
      DOI: 10.3390/jimaging8030070
      Issue No: Vol. 8, No. 3 (2022)
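
The preprocessing step converts the 1-D ECG signal into a 2-D time-frequency image via the continuous wavelet transform. A sketch with PyWavelets producing a scalogram from a synthetic signal; the sampling rate, scales, and Morlet wavelet are illustrative assumptions.

```python
import numpy as np
import pywt

fs = 360                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)  # synthetic stand-in

scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                 # 2-D image (scales x time) fed to the CNN
print(scalogram.shape)                     # (64, 360)
```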
       
  • J. Imaging, Vol. 8, Pages 71: Multi-Modality Microscopy Image Style
           Augmentation for Nuclei Segmentation

    • Authors: Ye Liu, Sophia J. Wagner, Tingying Peng
      First page: 71
      Abstract: Annotating microscopy images for nuclei segmentation by medical experts is laborious and time-consuming. To leverage the few existing annotations, also across multiple modalities, we propose a novel microscopy-style augmentation technique based on a generative adversarial network (GAN). Unlike other style transfer methods, it can not only deal with different cell assay types and lighting conditions, but also with different imaging modalities, such as bright-field and fluorescence microscopy. Using disentangled representations for content and style, we can preserve the structure of the original image while altering its style during augmentation. We evaluate our data augmentation on the 2018 Data Science Bowl dataset consisting of various cell assays, lighting conditions, and imaging modalities. With our style augmentation, the segmentation accuracy of the two top-ranked Mask R-CNN-based nuclei segmentation algorithms in the competition increases significantly. Thus, our augmentation technique renders the downstream task more robust to the test data heterogeneity and helps counteract class imbalance without resampling of minority classes.
      Citation: Journal of Imaging
      PubDate: 2022-03-11
      DOI: 10.3390/jimaging8030071
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 72: Comparison of 2D Optical Imaging and 3D
           Microtomography Shape Measurements of a Coastal Bioclastic Calcareous Sand
           

    • Authors: Ryan D. Beemer, Linzhu Li, Antonio Leonti, Jeremy Shaw, Joana Fonseca, Iren Valova, Magued Iskander, Cynthia H. Pilskaln
      First page: 72
      Abstract: This article compares measurements of particle shape parameters from three-dimensional (3D) X-ray micro-computed tomography (μCT) and two-dimensional (2D) dynamic image analysis (DIA) from the optical microscopy of a coastal bioclastic calcareous sand from Western Australia. This biogenic sand from a high energy environment consists largely of the shells and tests of marine organisms and their clasts. A significant difference was observed between the two imaging techniques for measurements of aspect ratio, convexity, and sphericity. Measured values of aspect ratio, sphericity, and convexity are larger in 2D than in 3D. Correlation analysis indicates that sphericity is correlated with convexity in both 2D and 3D. These results are attributed to inherent limitations of DIA when applied to platy sand grains and to the shape being, in part, dependent on the biology of the grain rather than a purely random clastic process, like typical siliceous sands. The statistical data has also been fitted to Johnson Bounded Distribution for the ease of future use. Overall, this research demonstrates the need for high-quality 3D microscopy when conducting a micromechanical analysis of biogenic calcareous sands.
      Citation: Journal of Imaging
      PubDate: 2022-03-14
      DOI: 10.3390/jimaging8030072
      Issue No: Vol. 8, No. 3 (2022)
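
The 2-D descriptors compared above (aspect ratio, convexity, and a sphericity-like circularity) can be computed from a binarized particle image with scikit-image. The toy binary mask and the exact descriptor definitions below are simplified stand-ins for the DIA software's own.

```python
import numpy as np
from skimage import measure

# Toy binary image of one elongated particle.
img = np.zeros((100, 100), dtype=np.uint8)
img[30:70, 20:85] = 1

props = measure.regionprops(measure.label(img))[0]
aspect_ratio = props.minor_axis_length / props.major_axis_length
convexity = props.area / props.convex_area                    # solidity-style convexity
circularity = 4 * np.pi * props.area / props.perimeter ** 2   # 2-D sphericity proxy
print(f"AR={aspect_ratio:.2f}  convexity={convexity:.2f}  circularity={circularity:.2f}")
```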
       
  • J. Imaging, Vol. 8, Pages 73: Fabrication of a Human Skin Mockup with a
           Multilayered Concentration Map of Pigment Components Using a UV Printer

    • Authors: Kazuki Nagasawa, Shoji Yamamoto, Wataru Arai, Kunio Hakkaku, Chawan Koopipat, Keita Hirai, Norimichi Tsumura
      First page: 73
      Abstract: In this paper, we propose a pipeline that reproduces human skin mockups with a UV printer by obtaining the spatial concentration map of pigments from an RGB image of human skin. The pigment concentration distributions were obtained by separating the skin pigment components from the skin image with independent component analysis. This method can extract the concentrations of the melanin and hemoglobin components, which are the main pigments that make up skin tone. Based on these concentrations, we developed a procedure to reproduce a skin mockup with a multi-layered structure that is determined by mapping the absorbance of melanin and hemoglobin to CMYK (Cyan, Magenta, Yellow, Black) subtractive color mixing. In our proposed method, the multi-layered structure with different pigments in each layer contributes greatly to the accurate reproduction of skin tones. We use a UV printer because it is capable of layered fabrication using UV-curable inks. As a result, subjective evaluation showed that the artificial skin reproduced by our method has a more skin-like appearance than that produced using conventional printing.
      Citation: Journal of Imaging
      PubDate: 2022-03-15
      DOI: 10.3390/jimaging8030073
      Issue No: Vol. 8, No. 3 (2022)
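
The pigment maps are obtained by independent component analysis of the skin image. As a simplified illustration (not the authors' implementation), the sketch below runs FastICA on log-transformed RGB values to obtain two component maps of melanin-like and hemoglobin-like concentration; the file name is a placeholder.

```python
import numpy as np
from sklearn.decomposition import FastICA
from imageio.v3 import imread

rgb = imread("skin.png")[..., :3].astype(float) / 255.0   # placeholder skin image
od = -np.log(rgb.reshape(-1, 3) + 1e-4)                   # optical density per pixel

# Two independent components ~ melanin-like and hemoglobin-like concentrations.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(od)                         # shape: (num_pixels, 2)
maps = components.reshape(rgb.shape[0], rgb.shape[1], 2)
print(maps.shape)
```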
       
  • J. Imaging, Vol. 8, Pages 74: Investigation of Nonlinear Optical
           Properties of Quantum Dots Deposited onto a Sample Glass Using
           Time-Resolved Inline Digital Holography

    • Authors: Andrey Belashov, Igor Shevkunov, Ekaterina Kolesova, Anna Orlova, Sergei Putilin, Andrei Veniaminov, Chau-Jern Cheng, Nikolay Petrov
      First page: 74
      Abstract: We report on the application of time-resolved inline digital holography in the study of the nonlinear optical properties of quantum dots deposited onto sample glass. The Fresnel diffraction patterns of the probe pulse due to noncollinear degenerate phase modulation induced by a femtosecond pump pulse were extracted from the set of inline digital holograms and analyzed. The absolute values of the nonlinear refractive index of both the sample glass substrate and the deposited layer of quantum dots were evaluated using the proposed technique. To characterize the inhomogeneous distribution of the samples’ nonlinear optical properties, we proposed plotting an optical nonlinearity map calculated as a local standard deviation of the diffraction pattern intensities induced by noncollinear degenerate phase modulation.
      Citation: Journal of Imaging
      PubDate: 2022-03-16
      DOI: 10.3390/jimaging8030074
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 75: Visualization of Inferior Alveolar and
           Lingual Nerve Pathology by 3D Double-Echo Steady-State MRI: Two Case
           Reports with Literature Review

    • Authors: Adib Al-Haj Husain, Daphne Schönegg, Silvio Valdec, Bernd Stadlinger, Thomas Gander, Harald Essig, Marco Piccirelli, Sebastian Winklhofer
      First page: 75
      Abstract: Injury to the peripheral branches of the trigeminal nerve, particularly the lingual nerve (LN) and the inferior alveolar nerve (IAN), is a rare but serious complication that can occur during oral and maxillofacial surgery. Mandibular third molar surgery, one of the most common surgical procedures in dentistry, is most often associated with such a nerve injury. Proper preoperative radiologic assessment is hence key to avoiding neurosensory dysfunction. In addition to the well-established conventional X-ray-based imaging modalities, such as panoramic radiography and cone-beam computed tomography, radiation-free magnetic resonance imaging (MRI) with the recently introduced black-bone MRI sequences offers the possibility to simultaneously visualize osseous structures and neural tissue in the oral cavity with high spatial resolution and excellent soft-tissue contrast. Fortunately, most LN and IAN injuries recover spontaneously within six months. However, permanent damage may cause significant loss of quality of life for affected patients. Therefore, therapy should be initiated early in indicated cases, despite the inconsistency in the literature regarding the therapeutic time window. In this report, we present the visualization of two cases of nerve pathology using 3D double-echo steady-state MRI and evaluate evidence-based decision-making for iatrogenic nerve injury regarding a wait-and-see strategy, conservative drug treatment, or surgical re-intervention.
      Citation: Journal of Imaging
      PubDate: 2022-03-17
      DOI: 10.3390/jimaging8030075
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 76: Microsaccades, Drifts, Hopf Bundle and
           Neurogeometry

    • Authors: Dmitri Alekseevsky
      First page: 76
      Abstract: The first part of the paper contains a short review of image processing in early vision, both in statics, when the eyes and the stimulus are stable, and in dynamics, when the eyes participate in fixation eye movements. In the second part, we give an interpretation of Donders’ and Listing’s laws in terms of the Hopf fibration of the 3-sphere over the 2-sphere. In particular, it is shown that the configuration space of the eyeball (when the head is fixed) is the 2-dimensional hemisphere SL+, called the Listing hemisphere, and saccades are described as geodesic segments of SL+ with respect to the standard round metric. We study fixation eye movements (drift and microsaccades) in terms of this model and discuss the role of fixation eye movements in vision. A model of fixation eye movements is proposed that gives an explanation of the presaccadic shift of receptive fields.
      Citation: Journal of Imaging
      PubDate: 2022-03-17
      DOI: 10.3390/jimaging8030076
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 77: Metal Artifact Reduction in Spectral X-ray
           CT Using Spectral Deep Learning

    • Authors: Matteo Busi, Christian Kehl, Jeppe R. Frisvad, Ulrik L. Olsen
      First page: 77
      Abstract: Spectral X-ray computed tomography (SCT) is an emerging method for non-destructive imaging of the inner structure of materials. Compared with conventional X-ray CT, this technique provides spectral photon energy resolution in a finite number of energy channels, adding a new dimension to the reconstructed volumes and images. While this mitigates energy-dependent distortions such as beam hardening, metal artifacts due to photon starvation effects are still present, especially for low-energy channels where the attenuation coefficients are higher. We present a correction method for metal artifact reduction in SCT that is based on spectral deep learning. The correction efficiently reduces streaking artifacts in all the measured energy channels. We show that the additional information in the energy domain is valuable for restoring the quality of low-energy reconstructions affected by metal artifacts. The correction method is parameter-free and takes only around 15 ms per energy channel, satisfying the near-real-time requirements of industrial scanners.
      Citation: Journal of Imaging
      PubDate: 2022-03-17
      DOI: 10.3390/jimaging8030077
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 78: The Capabilities of Dedicated Small
           Satellite Infrared Missions for the Quantitative Characterization of
           Wildfires

    • Authors: Winfried Halle, Christian Fischer, Dieter Oertel, Boris Zhukov
      First page: 78
      Abstract: The main objective of this paper was to demonstrate the capability of dedicated small satellite infrared sensors with cooled quantum detectors, such as those successfully utilized three times in Germany’s pioneering BIRD and FireBIRD small satellite infrared missions, in the quantitative characterization of high-temperature events such as wildfires. The Bi-spectral Infrared Detection (BIRD) mission was launched in October 2001. The space segment of FireBIRD consists of the small satellites Technologie Erprobungs-Träger (TET-1), launched in July 2012, and Bi-spectral InfraRed Optical System (BIROS), launched in June 2016. These missions also significantly improved the scientific understanding of space-borne fire monitoring with regard to climate change. The selected examples compare the evaluation of quantitative characteristics using data from BIRD or FireBIRD and from the operational polar orbiting IR sensor systems MODIS, SLSTR and VIIRS. Data from the geostationary satellite “Himawari-8” were compared with FireBIRD data obtained simultaneously. The geostationary Meteosat Third Generation-Imager (MTG-I) is foreseen to be launched at the end of 2022. In its application to fire, the MTG-I’s Flexible Combined Imager (FCI) will provide fire-related spectral bands at 3.8 µm and 10.5 µm with a ground sampling distance (GSD) at the sub-satellite point (SSP) of 1 km or 2 km, depending on the FCI imaging mode used. BIRD wildfire data, obtained over Africa and Portugal, were used to simulate the fire detection and monitoring capability of MTG-I/FCI. A new quality of fire monitoring is predicted if the 1 km resolution wildfire data from MTG-I/FCI are used together with the co-located fire data acquired by the polar orbiting Visible Infrared Imaging Radiometer Suite (VIIRS), and possibly prospective FireBIRD-type compact IR sensors flying on several small satellites in various low Earth orbits (LEOs).
      Citation: Journal of Imaging
      PubDate: 2022-03-18
      DOI: 10.3390/jimaging8030078
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 79: Comparing Desktop vs. Mobile Interaction for
           the Creation of Pervasive Augmented Reality Experiences

    • Authors: Tiago Madeira, Bernardo Marques, Pedro Neves, Paulo Dias, Beatriz Sousa Santos
      First page: 79
      Abstract: This paper presents an evaluation and comparison of interaction methods for the configuration and visualization of pervasive Augmented Reality (AR) experiences using two different platforms: desktop and mobile. AR experiences consist of the enhancement of real-world environments by superimposing additional layers of information, real-time interaction, and accurate 3D registration of virtual and real objects. Pervasive AR extends this concept through experiences that are continuous in space, being aware of and responsive to the user’s context and pose. Currently, the time and technical expertise required to create such applications are the main reasons preventing its widespread use. As such, authoring tools which facilitate the development and configuration of pervasive AR experiences have become progressively more relevant. Their operation often involves the navigation of the real-world scene and the use of the AR equipment itself to add the augmented information within the environment. The proposed experimental tool makes use of 3D scans from physical environments to provide a reconstructed digital replica of such spaces for a desktop-based method, and to enable positional tracking for a mobile-based one. While the desktop platform represents a non-immersive setting, the mobile one provides continuous AR in the physical environment. Both versions can be used to place virtual content and ultimately configure an AR experience. The authoring capabilities of the different platforms were compared by conducting a user study focused on evaluating their usability. Although the AR interface was generally considered more intuitive, the desktop platform shows promise in several aspects, such as remote configuration, lower required effort, and overall better scalability.
      Citation: Journal of Imaging
      PubDate: 2022-03-18
      DOI: 10.3390/jimaging8030079
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 80: Neutron Tomography Studies of Two
           Lamprophyre Dike Samples: 3D Data Analysis for the Characterization of
           Rock Fabric

    • Authors: Ivan Zel, Bekhzodjon Abdurakhimov, Sergey Kichanov, Olga Lis, Elmira Myrzabekova, Denis Kozlenko, Mannab Tashmetov, Khalbay Ishbaev, Kuatbay Kosbergenov
      First page: 80
      Abstract: The rock fabric of two lamprophyre dike samples from the Koy-Tash granitoid intrusion (Koy-Tash, Jizzakh region, Uzbekistan) has been studied using the neutron tomography method. We have performed virtual segmentation of the reconstructed 3D model of the tabular igneous intrusion and the corresponding determination of dike margin orientations. Spatial distributions of inclusions in the dike volume, as well as further analysis of size distributions and shape orientations of inclusions, have been obtained. The observed shape-preferred orientations of inclusions serve as evidence of a magma flow-related fabric. The obtained structural data have been discussed in the framework of models of rigid particle motion and the straining of vesicles in a moving viscous fluid.
      Citation: Journal of Imaging
      PubDate: 2022-03-19
      DOI: 10.3390/jimaging8030080
      Issue No: Vol. 8, No. 3 (2022)
       
  • J. Imaging, Vol. 8, Pages 81: A New Approach in Detectability of
           Microcalcifications in the Placenta during Pregnancy Using Textural
           Features and K-Nearest Neighbors Algorithm

    • Authors: Mihaela Miron, Simona Moldovanu, Bogdan Ioan Ștefănescu, Mihai Culea, Sorin Marius Pavel, Anisia Luiza Culea-Florescu
      First page: 81
      Abstract: (1) Background: Ultrasonography is the main method used during pregnancy to assess the fetal growth, amniotic fluid, umbilical cord and placenta. The placenta’s structure suffers dynamic modifications throughout the whole pregnancy and many of these changes, in which placental microcalcifications are by far the most prominent, are related to the process of aging and maturation and have no effect on fetal wellbeing. However, when placental microcalcifications are noticed earlier during pregnancy, they could suggest a major placental dysfunction with serious consequences for the fetus and mother. For better detectability of microcalcifications, we propose a new approach based on improving the clarity of details and the analysis of the placental structure using first and second order statistics, and fractal dimension. (2) Methods: The methodology is based on four stages: (i) cropping the region of interest and preprocessing steps; (ii) feature extraction, first order—standard deviation (SD), skewness (SK) and kurtosis (KR)—and second order—contrast (C), homogeneity (H), correlation (CR), energy (E) and entropy (EN)—are computed from a gray level co-occurrence matrix (GLCM) and fractal dimension (FD); (iii) statistical analysis (t-test); (iv) classification with the K-Nearest Neighbors algorithm (K-NN algorithm) and performance comparison with results from the support vector machine algorithm (SVM algorithm). (3) Results: Experimental results obtained from real clinical data show an improvement in the detectability and visibility of placental microcalcifications.
      Citation: Journal of Imaging
      PubDate: 2022-03-19
      DOI: 10.3390/jimaging8030081
      Issue No: Vol. 8, No. 3 (2022)
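
The second-order texture features above come from a grey-level co-occurrence matrix (GLCM) and are then classified with K-NN. The sketch below extracts the GLCM descriptors named in the abstract for a region of interest with scikit-image and runs the K-NN step; the random ROIs and labels are hypothetical stand-ins for cropped placental regions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi: np.ndarray) -> list:
    """Contrast, homogeneity, correlation, and energy of an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "correlation", "energy")]

# Hypothetical data: random ROIs and labels standing in for real ultrasound crops.
rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
X = np.array([glcm_features(r) for r in rois])
y = rng.integers(0, 2, 20)                 # 0 = normal, 1 = microcalcification (made up)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:3]))
```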
       
  • J. Imaging, Vol. 8, Pages 38: Natural Images Allow Universal Adversarial
           Attacks on Medical Image Classification Using Deep Neural Networks with
           Transfer Learning

    • Authors: Akinori Minagi, Hokuto Hirano, Kauzhiro Takemoto
      First page: 38
      Abstract: Transfer learning from natural images is used in deep neural networks (DNNs) for medical image classification to achieve a computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because training datasets (medical images), which are often required for adversarial attacks, are generally unavailable owing to security and privacy preservation. Nevertheless, in this study, we demonstrated that adversarial attacks are also possible using natural images for medical DNN models with transfer learning, even if such medical images are unavailable; in particular, we showed that universal adversarial perturbations (UAPs) can also be generated from natural images. UAPs from natural images are useful for both non-targeted and targeted attacks. The performance of UAPs from natural images was significantly higher than that of random controls. The use of transfer learning causes a security hole, which decreases the reliability and safety of computer-based disease diagnosis. Model training from random initialization reduced the performance of UAPs from natural images; however, it did not completely avoid vulnerability to UAPs. This vulnerability to UAPs generated from natural images is expected to become a significant security threat.
      Citation: Journal of Imaging
      PubDate: 2022-02-04
      DOI: 10.3390/jimaging8020038
      Issue No: Vol. 8, No. 2 (2022)
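      A heavily simplified, hedged sketch of how a universal adversarial perturbation (UAP) can be accumulated from natural (non-medical) images against a frozen classifier. This is a generic sign-gradient variant written for illustration; it is not the authors' algorithm, and `model` and `natural_loader` are hypothetical placeholders.

# Hedged sketch: non-targeted UAP accumulated over natural images.
# `model` (a frozen classifier) and `natural_loader` (a DataLoader of natural
# images) are hypothetical placeholders; inputs are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F

def universal_perturbation(model, natural_loader, eps=0.03, step=0.005, epochs=5):
    model.eval()
    delta = torch.zeros(1, 3, 224, 224)            # shared perturbation
    for _ in range(epochs):
        for images, _ in natural_loader:           # labels are not needed
            d = delta.clone().requires_grad_(True)
            logits = model((images + d).clamp(0, 1))
            # push predictions away from the model's current decisions
            loss = F.cross_entropy(logits, logits.argmax(dim=1))
            loss.backward()
            with torch.no_grad():
                delta = (delta + step * d.grad.sign()).clamp(-eps, eps)
    return delta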
       
  • J. Imaging, Vol. 8, Pages 39: X-ray Tomography Unveils the Construction
           Technique of Un-Montu’s Egyptian Coffin (Early 26th Dynasty)

    • Authors: Fauzia Albertin, Maria Pia Morigi, Matteo Bettuzzi, Rosa Brancaccio, Nicola Macchioni, Roberto Saccuman, Gianluca Quarta, Lucio Calcagnile, Daniela Picchi
      First page: 39
      Abstract: The Bologna Archaeological Museum, in cooperation with prestigious Italian universities, institutions, and independent scholars, recently began a vast investigation programme on a group of Egyptian coffins of Theban provenance dating to the first millennium BC, primarily the 25th–26th Dynasty (c. 746–525 BC). Herein, we present the results of the multidisciplinary investigation carried out on one of these coffins before its restoration: the anthropoid wooden coffin of Un-Montu (Inv. MCABo EG1960). The integration of radiocarbon dating, wood species identification, and CT imaging enabled a deep understanding of the coffin’s wooden structure. In particular, we discuss the results of the tomographic investigation performed in situ. The use of a transportable X-ray facility greatly reduced the risks associated with moving the large object (1.80 m tall) out of the museum without compromising image quality. Thanks to the 3D tomographic imaging, the coffin revealed the secrets of its construction technique, from the rational use of wood to the employment of canvas (incamottatura), and from the use of dowels to the assembly procedure.
      Citation: Journal of Imaging
      PubDate: 2022-02-07
      DOI: 10.3390/jimaging8020039
      Issue No: Vol. 8, No. 2 (2022)
       
  • J. Imaging, Vol. 8, Pages 40: DRM-Based Colour Photometric Stereo Using
           Diffuse-Specular Separation for Non-Lambertian Surfaces

    • Authors: Boren Li, Tomonari Furukawa
      First page: 40
      Abstract: This paper presents a photometric stereo (PS) method based on the dichromatic reflectance model (DRM) using colour images. The proposed method estimates surface orientations for surfaces with non-Lambertian reflectance using diffuse-specular separation and comprises two steps. The first step, referred to as diffuse-specular separation, initialises surface orientations in a specular-invariant colour subspace and further separates the diffuse and specular components in the RGB space. In the second step, the surface orientations are refined by first initialising specular parameters via a log-linear regression problem made possible by the separation, and then fitting the DRM using the Levenberg–Marquardt algorithm. Since reliable information from diffuse reflection free from specularities is adopted in the initialisations, the proposed method is robust and feasible with fewer observations. At pixels where dense non-Lambertian reflectances appear, signals from specularities are exploited to refine the surface orientations, and the additionally acquired specular parameters are potentially valuable for further applications, such as digital relighting. The effectiveness of the newly proposed surface normal refinement step was evaluated: including it improved the accuracy of estimated surface orientations by around 30% on average. The proposed method was also proven effective in an experiment using synthetic input images comprising twenty-four different reflectances of dielectric materials. A comparison with nine other PS methods on five representative datasets further proves the validity of the proposed method. (A sketch of the classic diffuse-only initialisation follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-08
      DOI: 10.3390/jimaging8020040
      Issue No: Vol. 8, No. 2 (2022)
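      For orientation, the diffuse-only initialisation that underlies most photometric stereo pipelines is a linear least-squares problem; the hedged sketch below shows that classic Lambertian step for a single pixel (normal and albedo from the pseudo-inverse of the light matrix), not the paper's DRM fitting or diffuse-specular separation.

# Hedged sketch: classic Lambertian photometric stereo for one pixel.
# Model: I = L @ (rho * n), with per-light intensities I, light directions L,
# diffuse albedo rho and unit surface normal n. Numbers are illustrative.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],          # rows = unit light directions
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714],
              [-0.5, -0.5, 0.707]])
n_true = np.array([0.2, 0.1, 0.97468])  # ground-truth unit normal
rho = 0.8                               # diffuse albedo
I = rho * L @ n_true                    # noise-free observations

g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = rho * n
albedo = np.linalg.norm(g)
normal = g / albedo
print(albedo, normal)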
       
  • J. Imaging, Vol. 8, Pages 41: Towards a Connected Mobile Cataract
           Screening System: A Future Approach

    • Authors: Wan Mimi Diyana Wan Zaki, Haliza Abdul Mutalib, Laily Azyan Ramlan, Aini Hussain, Aouache Mustapha
      First page: 41
      Abstract: Advances in computing and AI technology have promoted the development of connected health systems, indirectly influencing approaches to cataract treatment. In addition, thanks to the development of methods for cataract detection and grading using different imaging modalities, ophthalmologists can make diagnoses with greater objectivity. This paper reviews the development and limitations of published methods for cataract detection and grading using different imaging modalities. Over the years, the proposed methods have shown significant improvement and reasonable effort towards automated cataract detection and grading systems that utilise various imaging modalities, such as optical coherence tomography (OCT), fundus, and slit-lamp images. However, more robust and fully automated cataract detection and grading systems are still needed. In addition, fundus, slit-lamp, and OCT imaging require medical equipment that is expensive and not portable. Therefore, the use of digital images from a smartphone as a future cataract screening tool could be a practical and helpful solution for ophthalmologists, especially in rural areas with limited healthcare facilities.
      Citation: Journal of Imaging
      PubDate: 2022-02-10
      DOI: 10.3390/jimaging8020041
      Issue No: Vol. 8, No. 2 (2022)
       
  • J. Imaging, Vol. 8, Pages 42: A Soft Coprocessor Approach for Developing
           Image and Video Processing Applications on FPGAs

    • Authors: Tiantai Deng, Danny Crookes, Roger Woods, Fahad Siddiqui
      First page: 42
      Abstract: Developing Field Programmable Gate Array (FPGA)-based applications is typically a slow and multi-skilled task. Research into development tools has gradually raised the level of abstraction at which application developers can work. This paper describes an approach which aims to raise that level further for FPGA-based implementations of image and video processing applications. The starting concept is a system of streamed soft coprocessors. We present a set of soft coprocessors which implement some of the key abstractions of Image Algebra. Our soft coprocessors are designed for easy chaining, and allow users to describe their application as a dataflow graph. A prototype implementation of a development environment, called SCoPeS, is presented. An application can be modified even during execution without requiring re-synthesis. The paper concludes with performance and resource utilization results for different implementations of a sample algorithm. We conclude that the soft coprocessor approach has the potential to deliver better performance than the soft processor approach, and can improve programmability over dedicated HDL cores for domain-specific applications while achieving competitive real-time performance and utilization. (A software analogy of such stage chaining follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-11
      DOI: 10.3390/jimaging8020042
      Issue No: Vol. 8, No. 2 (2022)
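      Purely as a software analogy of the dataflow style described above: chaining streamed processing stages can be sketched with Python generators, where each stage consumes and produces a stream of frames. This is illustrative only; it is not SCoPeS, Image Algebra, or FPGA code.

# Hedged analogy: streamed stages chained like soft coprocessors in a dataflow
# graph. Each stage is a generator over frames; none of this is FPGA code.
import numpy as np

def threshold(stream, t=128):
    for frame in stream:
        yield (frame > t).astype(np.uint8)

def erode3x3(stream):
    for frame in stream:
        out = frame.copy()
        out[1:-1, 1:-1] = (frame[:-2, 1:-1] & frame[2:, 1:-1] &
                           frame[1:-1, :-2] & frame[1:-1, 2:] &
                           frame[1:-1, 1:-1])
        yield out

frames = (np.random.default_rng(i).integers(0, 256, (8, 8), dtype=np.uint8)
          for i in range(3))
pipeline = erode3x3(threshold(frames))     # stages chained like a dataflow graph
for result in pipeline:
    print(result.sum())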
       
  • J. Imaging, Vol. 8, Pages 43: A Boosted Minimum Cross Entropy Thresholding
           for Medical Images Segmentation Based on Heterogeneous Mean Filters
           Approaches

    • Authors: Walaa Ali H. Jumiawi, Ali El-Zaart
      First page: 43
      Abstract: Computer vision plays an important role in the accurate foreground detection of medical images. Diagnosing diseases in their early stages has effective life-saving potential, and this is every physician’s goal. There is a positive relationship between improving image segmentation methods and precise diagnosis in medical images: it provides a profound indication for feature extraction in a segmented image, such that an accurate separation occurs between the foreground and the background. Many thresholding-based segmentation methods fall under the pure image-processing approach, and minimum cross entropy thresholding (MCET) is one of the most frequently used mean-based thresholding methods for medical image segmentation. In this paper, the aim was to boost the efficiency of MCET based on heterogeneous mean filter approaches. The proposed model estimates an optimized mean by excluding the negative influence of noise, local outliers, and extreme gray intensity levels, thus obtaining new mean values for the MCET’s objective function. The proposed model was compared with the original and related methods using three types of medical image datasets. It showed accurate results on the performance measures, using both unsupervised and supervised evaluation benchmarks. (A sketch of the textbook MCET criterion follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-11
      DOI: 10.3390/jimaging8020043
      Issue No: Vol. 8, No. 2 (2022)
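      For reference, the textbook minimum cross entropy thresholding criterion that the paper builds on can be computed directly from the grayscale histogram, as in the hedged sketch below; the boosted, filter-based mean estimation proposed in the paper is not reproduced here.

# Hedged sketch: textbook MCET (Li-Lee style) - choose the threshold t that
# minimises the cross entropy between the image and its two-class reconstruction.
import numpy as np

def mcet_threshold(image_u8):
    hist, _ = np.histogram(image_u8, bins=256, range=(0, 256))
    hist = hist.astype(float)
    g = np.arange(256, dtype=float)
    best_t, best_cost = 1, np.inf
    for t in range(1, 256):
        w1, w2 = hist[:t], hist[t:]
        if w1.sum() == 0 or w2.sum() == 0:
            continue
        mu1 = (g[:t] * w1).sum() / w1.sum()     # below-threshold class mean
        mu2 = (g[t:] * w2).sum() / w2.sum()     # above-threshold class mean
        m1 = (g[:t] > 0) & (w1 > 0)
        m2 = (g[t:] > 0) & (w2 > 0)
        cost = ((g[:t][m1] * w1[m1] * np.log(g[:t][m1] / mu1)).sum() +
                (g[t:][m2] * w2[m2] * np.log(g[t:][m2] / mu2)).sum())
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t

img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(mcet_threshold(img))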
       
  • J. Imaging, Vol. 8, Pages 44: Hybrid FPGA–CPU-Based Architecture for
           Object Recognition in Visual Servoing of Arm Prosthesis

    • Authors: Attila Fejér, Zoltán Nagy, Jenny Benois-Pineau, Péter Szolgay, Aymar de Rugy, Jean-Philippe Domenger
      First page: 44
      Abstract: This paper proposes an implementation of a hybrid hardware–software system for the visual servoing of prosthetic arms, focusing on the most critical part of the system, the vision analysis. The prosthetic system comprises a glasses-worn eye tracker and a video camera, and the task is to recognize the object to grasp. The lightweight architecture for gaze-driven object recognition has to be implemented as a wearable device with low power consumption (less than 5.6 W). The algorithmic chain comprises gaze-fixation estimation and filtering, candidate generation, and recognition with two backbone convolutional neural networks (CNNs). The time-consuming parts of the system, such as the SIFT (Scale-Invariant Feature Transform) detector and the backbone CNN feature extractor, are implemented in the FPGA, and a new reduction layer is introduced in the object-recognition CNN to reduce the computational burden. The proposed implementation is compatible with the real-time control of the prosthetic arm. (A generic channel-reduction example follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-12
      DOI: 10.3390/jimaging8020044
      Issue No: Vol. 8, No. 2 (2022)
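      The abstract mentions a reduction layer introduced into the recognition CNN to cut the computational burden. The authors' exact layer is not detailed here; the hedged snippet below only illustrates the generic idea of shrinking the channel dimension with a 1x1 convolution before the expensive layers.

# Hedged illustration: a generic 1x1-convolution channel-reduction layer.
# This is the standard idea only, not the paper's specific reduction layer.
import torch
import torch.nn as nn

reduce_channels = nn.Conv2d(in_channels=512, out_channels=64, kernel_size=1)
feature_map = torch.randn(1, 512, 14, 14)   # hypothetical backbone output
print(reduce_channels(feature_map).shape)   # torch.Size([1, 64, 14, 14])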
       
  • J. Imaging, Vol. 8, Pages 45: Radiomics of Musculoskeletal Sarcomas: A
           Narrative Review

    • Authors: Cristiana Fanciullo, Salvatore Gitto, Eleonora Carlicchi, Domenico Albano, Carmelo Messina, Luca Maria Sconfienza
      First page: 45
      Abstract: Bone and soft-tissue primary malignant tumors, or sarcomas, are a large, diverse group of mesenchymal-derived malignancies. They represent a model for intra- and intertumoral heterogeneity, making them particularly suitable for radiomics analyses. Radiomic features offer information on the cancer phenotype as well as the tumor microenvironment which, combined with other pertinent data such as genomics and proteomics and correlated with outcome data, can produce accurate, robust, evidence-based clinical-decision support systems. Our purpose in this narrative review is to offer an overview of Magnetic Resonance Imaging (MRI)-based radiomics models of bone and soft-tissue sarcomas that could help distinguish different histotypes and low-grade from high-grade sarcomas, predict response to multimodality therapy, and thus better tailor patients’ treatments and ultimately improve their survival. Although the results are promising, interobserver segmentation variability, feature reproducibility, and model validation are three main challenges of radiomics that need to be addressed before radiomics studies can be translated to clinical applications. These efforts, together with better knowledge and application of the Radiomics Quality Score and the Image Biomarker Standardization Initiative reporting guidelines, could improve the quality of sarcoma radiomics studies and move radiomics closer to clinical translation.
      Citation: Journal of Imaging
      PubDate: 2022-02-13
      DOI: 10.3390/jimaging8020045
      Issue No: Vol. 8, No. 2 (2022)
       
  • J. Imaging, Vol. 8, Pages 46: Refining Tumor Treatment in Sinonasal Cancer
           Using Delta Radiomics of Multi-Parametric MRI after the First Cycle of
           Induction Chemotherapy

    • Authors: Valentina D. A. Corino, Marco Bologna, Giuseppina Calareso, Carlo Resteghini, Silvana Sdao, Ester Orlandi, Lisa Licitra, Luca Mainardi, Paolo Bossi
      First page: 46
      Abstract: Background: Response to induction chemotherapy (IC) has been predicted in patients with sinonasal cancer using early delta radiomics obtained from T1- and T2-weighted images and apparent diffusion coefficient (ADC) maps, comparing the results with early radiological evaluation by RECIST. Methods: Fifty patients were included in the study. For each image (at baseline and after the first IC cycle), 536 radiomic features were extracted. Semi-supervised principal component analysis components, explaining 97% of the variance, were used together with a support vector machine (SVM) to develop a radiomic signature; one signature was developed for each sequence (T1-weighted, T2-weighted and ADC). A multi-agent decision-making algorithm was used to merge the multiple signatures into one score. Results: The area under the curve (AUC) for the mono-modality signatures was 0.79 (CI: 0.65–0.88), 0.76 (CI: 0.62–0.87) and 0.93 (CI: 0.75–1) using T1-weighted, T2-weighted and ADC images, respectively. The fused signature improved the AUC when the ADC-based signature was added. Radiological prediction using RECIST criteria reached an accuracy of 0.78. Conclusions: These results suggest the importance of early delta radiomics and of ADC maps for predicting the response to IC in sinonasal cancers. (A sketch of the PCA-plus-SVM signature step follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-15
      DOI: 10.3390/jimaging8020046
      Issue No: Vol. 8, No. 2 (2022)
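      A minimal, hedged sketch of the signature-building step described above: PCA retaining 97% of the variance followed by an SVM, applied to hypothetical delta features (the change of each radiomic feature between baseline and the first IC cycle). The semi-supervised PCA variant and the multi-agent fusion across sequences are not reproduced.

# Hedged sketch: one "delta radiomics" signature (PCA keeping 97% of the
# variance + SVM) on hypothetical feature matrices and labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
baseline = rng.normal(size=(50, 536))     # 536 radiomic features per patient
after_ic1 = rng.normal(size=(50, 536))    # same features after the 1st IC cycle
delta = after_ic1 - baseline              # early delta features
response = rng.integers(0, 2, size=50)    # 1 = responder (hypothetical labels)

signature = make_pipeline(StandardScaler(),
                          PCA(n_components=0.97),   # keep 97% of the variance
                          SVC(probability=True))
signature.fit(delta, response)
print(signature.predict_proba(delta[:3]))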
       
  • J. Imaging, Vol. 8, Pages 47: Mixed-Reality-Assisted Puncture of the
           Common Femoral Artery in a Phantom Model

    • Authors: Christian Uhl, Johannes Hatzl, Katrin Meisenbacher, Lea Zimmer, Niklas Hartmann, Dittmar Böckler
      First page: 47
      Abstract: Percutaneous femoral arterial access is daily practice in a variety of medical specialties and enables physicians worldwide to perform endovascular interventions. The reported incidence of percutaneous femoral arterial access complications is 3–18% and often results from suboptimal puncture location due to insufficient visualization of the target vessel. The purpose of this proof-of-concept study was to evaluate the feasibility and the positional error of a mixed-reality (MR)-assisted puncture of the common femoral artery in a phantom model using a commercially available navigation system. In total, 15 MR-assisted punctures were performed. Cone-beam computed tomography angiography (CTA) was used following each puncture to allow quantification of positional error of needle placements in the axial and sagittal planes. Technical success was achieved in 14/15 cases (93.3%) with a median axial positional error of 1.0 mm (IQR 1.3) and a median sagittal positional error of 1.1 mm (IQR 1.6). The median duration of the registration process and needle insertion was 2 min (IQR 1.0). MR-assisted puncture of the common femoral artery is feasible with acceptable positional errors in a phantom model. Future studies should aim to measure and reduce the positional error resulting from MR registration.
      Citation: Journal of Imaging
      PubDate: 2022-02-16
      DOI: 10.3390/jimaging8020047
      Issue No: Vol. 8, No. 2 (2022)
       
  • J. Imaging, Vol. 8, Pages 48: New and Specialized Methods of Image
           Compression

    • Authors: Roman Starosolski
      First page: 48
      Abstract: Due to the enormous amounts of images produced today, compression is crucial for consumer and professional (for instance, medical) picture archiving and communication systems [...]
      Citation: Journal of Imaging
      PubDate: 2022-02-16
      DOI: 10.3390/jimaging8020048
      Issue No: Vol. 8, No. 2 (2022)
       
  • J. Imaging, Vol. 8, Pages 49: Small Satellite Tools for High-Resolution
           Infrared Fire Monitoring

    • Authors: Christian Fischer, Winfried Halle, Thomas Säuberlich, Olaf Frauenberger, Maik Hartmann, Dieter Oertel, Thomas Terzibaschian
      First page: 49
      Abstract: Space-borne infrared remote sensing aimed specifically at the detection and characterization of fires has a long history in the DLR Institute of Optical Sensor Systems. In 2001, the first DLR experimental satellite, Bi-spectral Infrared Detection (BIRD), was launched after an intensive test period with cooled IR sensor systems on airborne platforms. The main basis for the development of the FireBIRD mission, with its two satellites Technologie-Erprobungsträger 1 (TET-1) and Bi-spectral Infrared Optical System (BIROS), was this already space-proven sensor and satellite technology with successfully tested algorithms for fire detection and quantification in the form of the so-called fire radiative power (FRP). This paper summarizes the design principles of the FireBIRD IR sensor system and the most critical design elements of the TET-1 and BIROS satellites, especially concerning the attitude control system—all essential tools for high-resolution infrared fire monitoring. Key innovative tools necessary to increase the agility of small IR satellites are discussed. (A sketch of the general MIR-radiance FRP formula follows this entry.)
      Citation: Journal of Imaging
      PubDate: 2022-02-16
      DOI: 10.3390/jimaging8020049
      Issue No: Vol. 8, No. 2 (2022)
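      For context, fire radiative power is commonly retrieved from mid-infrared (MIR) radiances with the MIR radiance method (Wooster et al.), FRP ≈ A_pix · (σ/a) · (L_MIR − L_bg). The hedged sketch below uses illustrative numbers and is not necessarily FireBIRD's operational processing.

# Hedged sketch: FRP via the widely used MIR radiance method.
# FRP [W] = pixel_area * (sigma / a) * (L_mir - L_background), with spectral
# radiances in W m^-2 sr^-1 um^-1. Values below are illustrative only.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
A_MIR = 3.0e-9     # approximate empirical MIR constant, W m^-2 sr^-1 um^-1 K^-4

def frp_mir(pixel_area_m2, L_mir, L_background):
    return pixel_area_m2 * (SIGMA / A_MIR) * (L_mir - L_background)

# e.g. a ~180 m ground pixel with a moderately strong fire signal (~1.16 MW)
print(frp_mir(pixel_area_m2=180.0 * 180.0, L_mir=2.5, L_background=0.6) / 1e6, "MW")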
       
 
 

