  Subjects -> BIOLOGY (Total: 3071 journals)
    - BIOCHEMISTRY (242 journals)
    - BIOENGINEERING (113 journals)
    - BIOLOGY (1453 journals)
    - BIOPHYSICS (46 journals)
    - BIOTECHNOLOGY (227 journals)
    - BOTANY (220 journals)
    - CYTOLOGY AND HISTOLOGY (28 journals)
    - ENTOMOLOGY (67 journals)
    - GENETICS (166 journals)
    - MICROBIOLOGY (261 journals)
    - MICROSCOPY (11 journals)
    - ORNITHOLOGY (26 journals)
    - PHYSIOLOGY (73 journals)
    - ZOOLOGY (138 journals)

BIOTECHNOLOGY (227 journals)

Showing 1 - 200 of 227 Journals sorted alphabetically
3 Biotech     Open Access   (Followers: 7)
Advances in Bioscience and Biotechnology     Open Access   (Followers: 14)
Advances in Genetic Engineering & Biotechnology     Hybrid Journal   (Followers: 7)
African Journal of Biotechnology     Open Access   (Followers: 6)
Algal Research     Partially Free   (Followers: 9)
American Journal of Biochemistry and Biotechnology     Open Access   (Followers: 69)
American Journal of Bioinformatics Research     Open Access   (Followers: 8)
American Journal of Polymer Science     Open Access   (Followers: 29)
Animal Biotechnology     Hybrid Journal   (Followers: 9)
Annales des Sciences Agronomiques     Full-text available via subscription  
Applied Biochemistry and Biotechnology     Hybrid Journal   (Followers: 42)
Applied Bioenergy     Open Access  
Applied Biosafety     Hybrid Journal  
Applied Microbiology and Biotechnology     Hybrid Journal   (Followers: 62)
Applied Mycology and Biotechnology     Full-text available via subscription   (Followers: 5)
Arthroplasty Today     Open Access   (Followers: 1)
Artificial Cells, Nanomedicine and Biotechnology     Hybrid Journal   (Followers: 2)
Asia Pacific Biotech News     Hybrid Journal   (Followers: 2)
Asian Journal of Biotechnology     Open Access   (Followers: 8)
Asian Pacific Journal of Tropical Biomedicine     Open Access   (Followers: 2)
Australasian Biotechnology     Full-text available via subscription   (Followers: 1)
Banat's Journal of Biotechnology     Open Access  
BBR : Biochemistry and Biotechnology Reports     Open Access   (Followers: 4)
Bio-Algorithms and Med-Systems     Hybrid Journal   (Followers: 1)
Bio-Research     Full-text available via subscription   (Followers: 2)
Bioactive Materials     Open Access   (Followers: 1)
Biocatalysis and Agricultural Biotechnology     Hybrid Journal   (Followers: 4)
Biocybernetics and Biological Engineering     Full-text available via subscription   (Followers: 5)
Bioethics UPdate     Hybrid Journal  
Biofuels     Hybrid Journal   (Followers: 11)
Biofuels Engineering     Open Access   (Followers: 1)
Biological & Pharmaceutical Bulletin     Full-text available via subscription   (Followers: 5)
Biological Cybernetics     Hybrid Journal   (Followers: 10)
Biomarkers and Genomic Medicine     Open Access   (Followers: 5)
Biomarkers in Drug Development     Partially Free   (Followers: 1)
Biomaterials Research     Open Access   (Followers: 4)
BioMed Research International     Open Access   (Followers: 6)
Biomédica     Open Access  
Biomedical Engineering Research     Open Access   (Followers: 7)
Biomedical glasses     Open Access  
Biomedical Reports     Full-text available via subscription  
BioMedicine     Open Access  
Bioprinting     Hybrid Journal  
Bioresource Technology Reports     Hybrid Journal  
Bioscience, Biotechnology, and Biochemistry     Hybrid Journal   (Followers: 22)
Biosimilars     Open Access   (Followers: 1)
Biosurface and Biotribology     Open Access  
Biotechnic and Histochemistry     Hybrid Journal   (Followers: 2)
BioTechniques : The International Journal of Life Science Methods     Full-text available via subscription   (Followers: 28)
Biotechnologia Acta     Open Access   (Followers: 1)
Biotechnologie, Agronomie, Société et Environnement     Open Access   (Followers: 2)
Biotechnology     Open Access   (Followers: 6)
Biotechnology & Biotechnological Equipment     Open Access   (Followers: 5)
Biotechnology Advances     Hybrid Journal   (Followers: 33)
Biotechnology and Applied Biochemistry     Hybrid Journal   (Followers: 44)
Biotechnology and Bioengineering     Hybrid Journal   (Followers: 160)
Biotechnology and Bioprocess Engineering     Hybrid Journal   (Followers: 6)
Biotechnology and Genetic Engineering Reviews     Hybrid Journal   (Followers: 14)
Biotechnology and Health Sciences     Open Access   (Followers: 1)
Biotechnology and Molecular Biology Reviews     Open Access   (Followers: 1)
Biotechnology Annual Review     Full-text available via subscription   (Followers: 7)
Biotechnology for Biofuels     Open Access   (Followers: 10)
Biotechnology Frontier     Open Access   (Followers: 2)
Biotechnology Journal     Hybrid Journal   (Followers: 15)
Biotechnology Law Report     Hybrid Journal   (Followers: 4)
Biotechnology Letters     Hybrid Journal   (Followers: 33)
Biotechnology Progress     Hybrid Journal   (Followers: 39)
Biotechnology Reports     Open Access  
Biotechnology Research International     Open Access   (Followers: 2)
Biotechnology Techniques     Hybrid Journal   (Followers: 10)
Biotecnología Aplicada     Open Access  
Biotribology     Hybrid Journal  
BMC Biotechnology     Open Access   (Followers: 15)
Chinese Journal of Agricultural Biotechnology     Full-text available via subscription   (Followers: 3)
Communications in Mathematical Biology and Neuroscience     Open Access  
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computer Methods and Programs in Biomedicine     Hybrid Journal   (Followers: 8)
Contributions to Tobacco Research     Open Access   (Followers: 3)
Copernican Letters     Open Access   (Followers: 1)
Critical Reviews in Biotechnology     Hybrid Journal   (Followers: 20)
Crop Breeding and Applied Biotechnology     Open Access   (Followers: 4)
Current Bionanotechnology     Hybrid Journal  
Current Biotechnology     Hybrid Journal   (Followers: 3)
Current Opinion in Biomedical Engineering     Hybrid Journal   (Followers: 1)
Current Opinion in Biotechnology     Hybrid Journal   (Followers: 55)
Current Pharmaceutical Biotechnology     Hybrid Journal   (Followers: 9)
Current Research in Bioinformatics     Open Access   (Followers: 14)
Current trends in Biotechnology and Pharmacy     Open Access   (Followers: 9)
EBioMedicine     Open Access  
Electronic Journal of Biotechnology     Open Access   (Followers: 1)
Entomologia Generalis     Full-text available via subscription  
Environmental Science : Processes & Impacts     Full-text available via subscription   (Followers: 4)
Experimental Biology and Medicine     Hybrid Journal   (Followers: 3)
Folia Medica Indonesiana     Open Access  
Food Bioscience     Hybrid Journal  
Food Biotechnology     Hybrid Journal   (Followers: 12)
Food Science and Biotechnology     Hybrid Journal   (Followers: 9)
Frontiers in Bioengineering and Biotechnology     Open Access   (Followers: 6)
Frontiers in Systems Biology     Open Access   (Followers: 2)
Fungal Biology and Biotechnology     Open Access   (Followers: 1)
GM Crops and Food: Biotechnology in Agriculture and the Food Chain     Full-text available via subscription   (Followers: 1)
GSTF Journal of BioSciences     Open Access  
HAYATI Journal of Biosciences     Open Access  
Horticulture, Environment, and Biotechnology     Hybrid Journal   (Followers: 11)
IEEE Transactions on Molecular, Biological and Multi-Scale Communications     Hybrid Journal   (Followers: 1)
IET Nanobiotechnology     Hybrid Journal   (Followers: 2)
IIOAB Letters     Open Access  
IN VIVO     Full-text available via subscription   (Followers: 4)
Indian Journal of Biotechnology (IJBT)     Open Access   (Followers: 2)
Indonesia Journal of Biomedical Science     Open Access   (Followers: 1)
Indonesian Journal of Biotechnology     Open Access   (Followers: 1)
Industrial Biotechnology     Hybrid Journal   (Followers: 18)
International Biomechanics     Open Access  
International Journal of Bioinformatics Research and Applications     Hybrid Journal   (Followers: 15)
International Journal of Biomechatronics and Biomedical Robotics     Hybrid Journal   (Followers: 4)
International Journal of Biomedical Research     Open Access   (Followers: 2)
International Journal of Biotechnology     Hybrid Journal   (Followers: 5)
International Journal of Biotechnology and Molecular Biology Research     Open Access   (Followers: 2)
International Journal of Biotechnology for Wellness Industries     Partially Free   (Followers: 1)
International Journal of Environment, Agriculture and Biotechnology     Open Access   (Followers: 5)
International Journal of Functional Informatics and Personalised Medicine     Hybrid Journal   (Followers: 4)
International Journal of Medicine and Biomedical Research     Open Access   (Followers: 1)
International Journal of Nanotechnology and Molecular Computation     Full-text available via subscription   (Followers: 3)
International Journal of Radiation Biology     Hybrid Journal   (Followers: 4)
Iranian Journal of Biotechnology     Open Access  
ISABB Journal of Biotechnology and Bioinformatics     Open Access  
Italian Journal of Food Science     Open Access   (Followers: 1)
Journal of Biometrics & Biostatistics     Open Access   (Followers: 3)
Journal of Bioterrorism & Biodefense     Open Access   (Followers: 6)
Journal of Petroleum & Environmental Biotechnology     Open Access   (Followers: 2)
Journal of Advanced Therapies and Medical Innovation Sciences     Open Access  
Journal of Advances in Biotechnology     Open Access   (Followers: 5)
Journal Of Agrobiotechnology     Open Access  
Journal of Analytical & Bioanalytical Techniques     Open Access   (Followers: 7)
Journal of Animal Science and Biotechnology     Open Access   (Followers: 6)
Journal of Applied Biomedicine     Open Access   (Followers: 3)
Journal of Applied Biotechnology     Open Access   (Followers: 2)
Journal of Applied Biotechnology Reports     Open Access   (Followers: 2)
Journal of Applied Mathematics & Bioinformatics     Open Access   (Followers: 5)
Journal of Biologically Active Products from Nature     Hybrid Journal   (Followers: 1)
Journal of Biomaterials and Nanobiotechnology     Open Access   (Followers: 6)
Journal of Biomedical Photonics & Engineering     Open Access  
Journal of Biomedical Practitioners     Open Access  
Journal of Bioprocess Engineering and Biorefinery     Full-text available via subscription  
Journal of Bioprocessing & Biotechniques     Open Access  
Journal of Biosecurity, Biosafety and Biodefense Law     Hybrid Journal   (Followers: 3)
Journal of Biotechnology     Hybrid Journal   (Followers: 68)
Journal of Chemical and Biological Interfaces     Full-text available via subscription   (Followers: 1)
Journal of Chemical Technology & Biotechnology     Hybrid Journal   (Followers: 10)
Journal of Chitin and Chitosan Science     Full-text available via subscription  
Journal of Colloid Science and Biotechnology     Full-text available via subscription  
Journal of Commercial Biotechnology     Full-text available via subscription   (Followers: 6)
Journal of Crop Science and Biotechnology     Hybrid Journal   (Followers: 7)
Journal of Essential Oil Research     Hybrid Journal   (Followers: 3)
Journal of Experimental Biology     Full-text available via subscription   (Followers: 25)
Journal of Genetic Engineering and Biotechnology     Open Access   (Followers: 5)
Journal of Ginseng Research     Open Access  
Journal of Industrial Microbiology and Biotechnology     Hybrid Journal   (Followers: 16)
Journal of Integrative Bioinformatics     Open Access  
Journal of International Biotechnology Law     Hybrid Journal   (Followers: 3)
Journal of Medical Imaging and Health Informatics     Full-text available via subscription  
Journal of Molecular Microbiology and Biotechnology     Full-text available via subscription   (Followers: 14)
Journal of Nano Education     Full-text available via subscription  
Journal of Nanobiotechnology     Open Access   (Followers: 4)
Journal of Nanofluids     Full-text available via subscription   (Followers: 2)
Journal of Organic and Biomolecular Simulations     Open Access  
Journal of Plant Biochemistry and Biotechnology     Hybrid Journal   (Followers: 6)
Journal of Science and Applications : Biomedicine     Open Access  
Journal of the Mechanical Behavior of Biomedical Materials     Hybrid Journal   (Followers: 11)
Journal of Trace Elements in Medicine and Biology     Hybrid Journal   (Followers: 1)
Journal of Tropical Microbiology and Biotechnology     Full-text available via subscription  
Journal of Yeast and Fungal Research     Open Access   (Followers: 1)
Marine Biotechnology     Hybrid Journal   (Followers: 5)
Messenger     Full-text available via subscription  
Metabolic Engineering Communications     Open Access   (Followers: 4)
Metalloproteinases In Medicine     Open Access  
Microalgae Biotechnology     Open Access   (Followers: 2)
Microbial Biotechnology     Open Access   (Followers: 9)
MicroMedicine     Open Access   (Followers: 3)
Molecular and Cellular Biomedical Sciences     Open Access  
Molecular Biotechnology     Hybrid Journal   (Followers: 16)
Molecular Genetics and Metabolism Reports     Open Access   (Followers: 3)
Nanobiomedicine     Open Access  
Nanobiotechnology     Hybrid Journal   (Followers: 3)
Nanomaterials and Nanotechnology     Open Access  
Nanomaterials and Tissue Regeneration     Open Access  
Nanomedicine and Nanobiology     Full-text available via subscription  
Nanomedicine Research Journal     Open Access  
Nanotechnology Reviews     Hybrid Journal   (Followers: 5)
Nature Biotechnology     Full-text available via subscription   (Followers: 519)
Network Modeling and Analysis in Health Informatics and Bioinformatics     Hybrid Journal   (Followers: 3)
New Biotechnology     Hybrid Journal   (Followers: 4)
Nigerian Journal of Biotechnology     Open Access  
Nova Biotechnologica et Chimica     Open Access  
NPG Asia Materials     Open Access  
npj Biofilms and Microbiomes     Open Access  
OA Biotechnology     Open Access  
Plant Biotechnology Journal     Open Access   (Followers: 10)
Plant Biotechnology Reports     Hybrid Journal   (Followers: 4)
Preparative Biochemistry and Biotechnology     Hybrid Journal   (Followers: 4)


Computer Methods and Programs in Biomedicine
  [SJR: 0.985]   [H-I: 63]   [8 followers]
   Hybrid Journal (It can contain Open Access articles)
   ISSN (Print): 0169-2607
   Published by Elsevier  [3177 journals]
  • Generating region proposals for histopathological whole slide image
    • Abstract: Publication date: June 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 159
      Author(s): Yibing Ma, Zhiguo Jiang, Haopeng Zhang, Fengying Xie, Yushan Zheng, Huaqiang Shi, Yu Zhao, Jun Shi
      Background and objective: Content-based image retrieval is an effective method for histopathological image analysis. However, given a huge database of whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is important and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. Methods: This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. Results: The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. Conclusions: The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems.

      PubDate: 2018-03-08T16:33:45Z
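
      The entry above retrieves histopathological images by comparing binary hashing codes. As a minimal, generic illustration of only the final retrieval step, the Python sketch below ranks a database of binary codes by Hamming distance to a query code; the 8-bit code length, the random data and the function name hamming_ranking are placeholder assumptions, not the authors' LDA/Supervised Hashing pipeline.

        import numpy as np

        def hamming_ranking(query_code, db_codes):
            """Rank database items by Hamming distance to a binary query code."""
            # count differing bits per database row
            dists = np.count_nonzero(db_codes != query_code, axis=1)
            return np.argsort(dists), dists

        # toy usage: 8-bit codes for a query and a small "database"
        rng = np.random.default_rng(0)
        db = rng.integers(0, 2, size=(1000, 8), dtype=np.uint8)
        q = rng.integers(0, 2, size=8, dtype=np.uint8)
        order, d = hamming_ranking(q, db)
        print(order[:5], d[order[:5]])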
  • Encryption and watermark-treated medical image against hacking
           disease—An immune convention in spatial and frequency domains
    • Abstract: Publication date: June 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 159
      Author(s): C. Lakshmi, K. Thenmozhi, John Bosco Balaguru Rayappan, Rengarajan Amirtharajan
      Digital Imaging and Communications in Medicine (DICOM) is one among the significant formats used worldwide for the representation of medical images. Undoubtedly, medical-image security plays a crucial role in telemedicine applications. Merging encryption and watermarking in medical-image protection paves the way for enhancing the authentication and safer transmission over open channels. In this context, the present work on DICOM image encryption has employed a fuzzy chaotic map for encryption and the Discrete Wavelet Transform (DWT) for watermarking. The proposed approach overcomes the limitation of the Arnold transform—one of the most utilised confusion mechanisms in image ciphering. Various metrics have substantiated the effectiveness of the proposed medical-image encryption algorithm.

      PubDate: 2018-03-08T16:33:45Z
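
      The abstract above pairs chaotic-map encryption with DWT watermarking. As a rough, textbook-style illustration of the ciphering idea only, the sketch below XORs an 8-bit image with a keystream generated from a plain logistic map, with x0 and r acting as the key; this is not the fuzzy chaotic map of the paper, the watermarking stage is omitted, and the random array merely stands in for a DICOM slice.

        import numpy as np

        def logistic_keystream(length, x0=0.631, r=3.99):
            """Byte keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
            x, out = x0, np.empty(length, dtype=np.uint8)
            for i in range(length):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) % 256
            return out

        def xor_cipher(image, key):
            """XOR an 8-bit image with the keystream; applying it twice restores the image."""
            flat = image.ravel()
            ks = logistic_keystream(flat.size, *key)
            return (flat ^ ks).reshape(image.shape)

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a DICOM slice
        enc = xor_cipher(img, (0.631, 3.99))
        assert np.array_equal(img, xor_cipher(enc, (0.631, 3.99)))  # decryption recovers the image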
  • Radiomics-based features for pattern recognition of lung cancer
           histopathology and metastases
    • Abstract: Publication date: June 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 159
      Author(s): José Raniery Ferreira Junior, Marcel Koenigkam-Santos, Federico Enrique Garcia Cipriano, Alexandre Todorovic Fabro, Paulo Mazzoncini de Azevedo-Marques
      Background and Objectives: lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Computed tomography (CT) is the imaging modality of choice for lung cancer evaluation, being used for diagnosis and clinical staging. Besides tumor stage, other features, like histopathological subtype, can also add prognostic information. In this work, radiomics-based CT features were used to predict lung cancer histopathology and metastases using machine learning models. Methods: local image datasets of confirmed primary malignant pulmonary tumors were retrospectively evaluated for testing and validation. CT images acquired with same protocol were semiautomatically segmented. Tumors were characterized by clinical features and computer attributes of intensity, histogram, texture, shape, and volume. Three machine learning classifiers used up to 100 selected features to perform the analysis. Results: radiomics-based features yielded areas under the receiver operating characteristic curve of 0.89, 0.97, and 0.92 at testing and 0.75, 0.71, and 0.81 at validation for lymph nodal metastasis, distant metastasis, and histopathology pattern recognition, respectively. Conclusions: the radiomics characterization approach presented great potential to be used in a computational model to aid lung cancer histopathological subtype diagnosis as a “virtual biopsy” and metastatic prediction for therapy decision support without the necessity of a whole-body imaging scanning.

      PubDate: 2018-03-08T16:33:45Z
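
      The radiomics study above reports areas under the ROC curve for machine-learning classifiers trained on image features. The sketch below shows how such an AUC is commonly computed with scikit-learn on a synthetic feature matrix standing in for radiomics descriptors; the random-forest classifier, the feature count and all data values are illustrative assumptions, not the models or results of the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # synthetic stand-in for radiomics feature vectors (intensity/texture/shape columns)
        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 30))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("AUC =", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))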
  • SeeSway – A free web-based system for analysing and exploring
           standing balance data
    • Abstract: Publication date: June 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 159
      Author(s): Ross A. Clark, Yong-Hao Pua
      Background and objectives: Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway, which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use, platform-independent website. Methods: SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as center of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings. Results: An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate between someone with neurological impairment and a healthy control. Conclusions: The goal of SeeSway is to provide a simple yet powerful educational and research tool to explore how standing balance is affected in aging and clinical populations.

      PubDate: 2018-03-08T16:33:45Z
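
      SeeSway, as described above, filters centre-of-pressure traces and derives sway metrics such as path length and root mean square amplitude. The snippet below is a small stand-alone sketch of that kind of processing (Butterworth low-pass filtering followed by path length and RMS); the 10 Hz cutoff, 100 Hz sampling rate and simulated trace are assumptions, and SeeSway itself is a web platform rather than this Python script.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def sway_metrics(ap, ml, fs=100.0, cutoff=10.0):
            """Low-pass filter an AP/ML centre-of-pressure trace and compute basic sway metrics."""
            b, a = butter(4, cutoff / (fs / 2), btype="low")
            ap_f, ml_f = filtfilt(b, a, ap), filtfilt(b, a, ml)
            step = np.hypot(np.diff(ap_f), np.diff(ml_f))       # per-sample planar displacement
            return {
                "path_length": float(step.sum()),
                "rms_ap": float(np.sqrt(np.mean((ap_f - ap_f.mean()) ** 2))),
                "rms_ml": float(np.sqrt(np.mean((ml_f - ml_f.mean()) ** 2))),
            }

        t = np.arange(0, 30, 1 / 100.0)                          # 30 s trial at 100 Hz
        ap = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
        ml = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * np.random.randn(t.size)
        print(sway_metrics(ap, ml))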
  • Monte-Carlo based assessment of MAGIC, MAGICAUG, PAGATUG and PAGATAUG
           polymer gel dosimeters for ovaries and uterus organ dosimetry in
           brachytherapy, nuclear medicine and Tele-therapy
    • Abstract: Publication date: June 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 159
      Author(s): Karim Adinehvand, Fereidoun Nowshiravan Rahatabad
      Background and objectives: Calculation of the 3D dose distribution during radiotherapy and nuclear medicine helps us achieve better treatment of sensitive organs such as the ovaries and uterus. In this research, we investigate two groups of normoxic dosimeters based on methacrylic acid (MAGIC and MAGICAUG) and polyacrylamide (PAGATUG and PAGATAUG) for brachytherapy, nuclear medicine and Tele-therapy in their sensitive and critical role as organ dosimeters. Methods: These polymer gel dosimeters are compared with soft tissue while irradiated by photons of different energies in therapeutic applications. The comparison has been simulated with the Monte-Carlo-based MCNPX code. The ORNL female phantom has been used to model the critical organs of the kidneys, ovaries and uterus. The right kidney is taken as the source of irradiation and the other two organs are exposed to this irradiation. Results: The effective atomic numbers of soft tissue, MAGIC, MAGICAUG, PAGATUG and PAGATAUG are 6.86, 7.07, 6.95, 7.28, and 7.07 respectively. The results show that the polymer gel dosimeters are comparable to soft tissue for use in nuclear medicine and Tele-therapy. Differences between the gel dosimeters and soft tissue are defined as the dose responses; this difference is less than 4.1%, 22.6% and 71.9% for Tele-therapy, nuclear medicine and brachytherapy respectively. The results confirm that gel dosimeters are the best choice for the ovaries and uterus in nuclear medicine and Tele-therapy, respectively. Conclusions: Because of the slight difference between the effective atomic numbers of these polymer gel dosimeters and soft tissue, these polymer gels are not suitable for brachytherapy: at the low photon energies used in brachytherapy, photon interactions depend strongly on atomic number, so even a slight difference matters. This dependence on atomic number decreases for the photoelectric effect and increases for Compton interactions. Therefore, polymer gel dosimeters are not a good alternative to soft tissue in brachytherapy.

      PubDate: 2018-03-08T16:33:45Z
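
      The dosimetry study above compares gels and soft tissue through their effective atomic numbers. One standard way to obtain such numbers is a power-law (Mayneord-type) formula over the fractional electron content of the constituent elements; the sketch below implements that generic formula, using water as a sanity check. The exponent 2.94 and the water composition are common textbook values, not parameters taken from the paper.

        def z_eff(composition, m=2.94):
            """Mayneord-style effective atomic number.

            composition: list of (Z, weight_fraction, atomic_mass) tuples.
            Each element's fractional electron content weights Z**m.
            """
            electrons = [w * Z / A for Z, w, A in composition]
            total = sum(electrons)
            fractions = [e / total for e in electrons]
            return sum(f * Z ** m for f, (Z, _, _) in zip(fractions, composition)) ** (1.0 / m)

        # water: roughly 11.2% H and 88.8% O by weight gives Z_eff of about 7.4
        water = [(1, 0.112, 1.008), (8, 0.888, 15.999)]
        print(round(z_eff(water), 2))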
  • Hard exudates segmentation based on learned initial seeds and iterative
           graph cut
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Worapan Kusakunniran, Qiang Wu, Panrasee Ritthipravat, Jian Zhang
      (Background and Objective): The occurrence of hard exudates is one of the early signs of diabetic retinopathy which is one of the leading causes of the blindness. Many patients with diabetic retinopathy lose their vision because of the late detection of the disease. Thus, this paper is to propose a novel method of hard exudates segmentation in retinal images in an automatic way. (Methods): The existing methods are based on either supervised or unsupervised learning techniques. In addition, the learned segmentation models may often cause miss-detection and/or fault-detection of hard exudates, due to the lack of rich characteristics, the intra-variations, and the similarity with other components in the retinal image. Thus, in this paper, the supervised learning based on the multilayer perceptron (MLP) is only used to identify initial seeds with high confidences to be hard exudates. Then, the segmentation is finalized by unsupervised learning based on the iterative graph cut (GC) using clusters of initial seeds. Also, in order to reduce color intra-variations of hard exudates in different retinal images, the color transfer (CT) is applied to normalize their color information, in the pre-processing step. (Results): The experiments and comparisons with the other existing methods are based on the two well-known datasets, e_ophtha EX and DIARETDB1. It can be seen that the proposed method outperforms the other existing methods in the literature, with the sensitivity in the pixel-level of 0.891 for the DIARETDB1 dataset and 0.564 for the e_ophtha EX dataset. The cross datasets validation where the training process is performed on one dataset and the testing process is performed on another dataset is also evaluated in this paper, in order to illustrate the robustness of the proposed method. (Conclusions): This newly proposed method integrates the supervised learning and unsupervised learning based techniques. It achieves the improved performance, when compared with the existing methods in the literature. The robustness of the proposed method for the scenario of cross datasets could enhance its practical usage. That is, the trained model could be more practical for unseen data in the real-world situation, especially when the capturing environments of training and testing images are not the same.

      PubDate: 2018-03-08T16:33:45Z
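
      The method above keeps only high-confidence multilayer perceptron outputs as seeds before the iterative graph cut refines the segmentation. The sketch below shows just that seed-selection idea with scikit-learn's MLPClassifier on synthetic per-pixel features; the six-dimensional features, the 0.95 probability threshold and the data are placeholders, and the colour-transfer and graph-cut stages are not shown.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # synthetic stand-in for per-pixel colour/texture features of a retinal image
        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(5000, 6))
        y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)   # "exudate" vs background

        mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
        mlp.fit(X_train, y_train)

        X_test = rng.normal(size=(2000, 6))
        p_exudate = mlp.predict_proba(X_test)[:, 1]
        seeds = np.flatnonzero(p_exudate > 0.95)   # keep only high-confidence pixels as seeds
        print(seeds.size, "candidate seed pixels")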
  • Microaneurysm detection using fully convolutional neural networks
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Piotr Chudzik, Somshubra Majumdar, Francesco Calivá, Bashir Al-Diri, Andrew Hunter
      Background and Objectives: Diabetic retinopathy is a microvascular complication of diabetes that can lead to sight loss if not treated early enough. Microaneurysms are the earliest clinical signs of diabetic retinopathy. This paper presents an automatic method for detecting microaneurysms in fundus photographs. Methods: A novel patch-based fully convolutional neural network with batch normalization layers and a Dice loss function is proposed. Compared to other methods that require up to five processing stages, it requires only three. Furthermore, to the best of the authors' knowledge, this is the first paper that shows how to successfully transfer knowledge between datasets in the microaneurysm detection domain. Results: The proposed method was evaluated using three publicly available and widely used datasets: E-Ophtha, DIARETDB1, and ROC. It achieved better results than state-of-the-art methods using the FROC metric. The proposed algorithm achieved the highest sensitivities at low false positive rates, which is particularly important for screening purposes. Conclusions: The performance, simplicity, and robustness of the proposed method demonstrate its suitability for diabetic retinopathy screening applications.

      PubDate: 2018-03-08T16:33:45Z
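
      The network above is trained with a Dice loss. The snippet below gives a plain-NumPy soft Dice score, whose complement is the loss commonly minimised when training segmentation networks; the epsilon smoothing term and the random probability/mask arrays are illustrative assumptions rather than the authors' exact formulation.

        import numpy as np

        def soft_dice(pred, target, eps=1e-7):
            """Soft Dice score between a probability map and a binary mask; 1 - score is the Dice loss."""
            pred, target = pred.ravel(), target.ravel()
            inter = np.sum(pred * target)
            return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

        pred = np.random.rand(64, 64)                           # network output probabilities
        mask = (np.random.rand(64, 64) > 0.9).astype(float)     # sparse lesion mask
        score = soft_dice(pred, mask)
        print("Dice score:", round(score, 3), " Dice loss:", round(1 - score, 3))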
  • Kidney segmentation in ultrasound, magnetic resonance and computed
           tomography images: A systematic review
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Helena R. Torres, Sandro Queirós, Pedro Morais, Bruno Oliveira, Jaime C. Fonseca, João L. Vilaça
      Background and objective Segmentation is an essential step in computer-aided diagnosis and treatment planning of kidney diseases. In recent years, several researchers proposed multiple techniques to segment the kidney in medical images from distinct imaging acquisition systems, namely ultrasound, magnetic resonance, and computed tomography. This article aims to present a systematic review of the different methodologies developed for kidney segmentation. Methods With this work, it is intended to analyze and categorize the different kidney segmentation algorithms, establishing a comparison between them and discussing the most appropriate methods for each modality. For that, articles published between 2010 and 2016 were analyzed. The search was performed in Scopus and Web of Science using the expressions “kidney segmentation” and “renal segmentation”. Results A total of 1528 articles were retrieved from the databases, and 95 articles were selected for this review. After analysis of the selected articles, the reviewed segmentation techniques were categorized according to their theoretical approach. Conclusions Based on the performed analysis, it was possible to identify segmentation approaches based on distinct image processing classes that can be used to accurately segment the kidney in images of different imaging modalities. Nevertheless, further research on kidney segmentation must be conducted to overcome the current drawbacks of the state-of-the-art methods. Moreover, a standardization of the evaluation database and metrics is needed to allow a direct comparison between methods.

      PubDate: 2018-03-08T16:33:45Z
  • Supervised learning based multimodal MRI brain tumour segmentation using
           texture features from supervoxels
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Mohammadreza Soltaninejad, Guang Yang, Tryphon Lambrou, Nigel Allinson, Timothy L Jones, Thomas R Barrick, Franklyn A Howe, Xujiong Ye
      Background Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to various tumour types. Using information and features from multimodal MRI including structural MRI and isotropic (p) and anisotropic (q) components derived from the diffusion tensor imaging (DTI) may result in a more accurate analysis of brain images. Methods We propose a novel 3D supervoxel based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features including histograms of texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first order intensity statistical features are extracted. Those features are fed into a random forests (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. Results The method is evaluated on two datasets: 1) Our clinical dataset: 11 multimodal images of patients and 2) BRATS 2013 clinical dataset: 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with balanced error rate (BER) 7%; while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results of the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. Conclusion The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can largely increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.

      PubDate: 2018-03-08T16:33:45Z
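
      The segmentation method above extracts Gabor/texton texture features per supervoxel before classifying them with a random forest. The sketch below computes a simple Gabor-bank feature vector for one image patch with scikit-image; the three frequencies, four orientations and mean/standard-deviation statistics are assumptions, not the paper's exact filter bank or texton pipeline.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
            """Mean/std of Gabor filter responses over a grid of frequencies and orientations."""
            feats = []
            for f in frequencies:
                for k in range(n_orientations):
                    theta = k * np.pi / n_orientations
                    real, _ = gabor(patch, frequency=f, theta=theta)
                    feats += [real.mean(), real.std()]
            return np.array(feats)

        patch = np.random.rand(32, 32)       # stand-in for one supervoxel's image patch
        print(gabor_features(patch).shape)   # (24,): 3 frequencies x 4 orientations x 2 statistics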
  • A Novel Pipeline for Adrenal Tumour Segmentation
    • Abstract: Publication date: Available online 7 March 2018
      Source:Computer Methods and Programs in Biomedicine
      Author(s): Hasan Koyuncu, Rahime Ceylan, Hasan Erdogan, Mesut Sivri
      Background and objective: Adrenal tumours occur on the adrenal glands, which are surrounded by organs and osteoid. These tumours can be categorized as functional, non-functional, malignant, or benign. Depending on their appearance in the abdomen, adrenal tumours can arise from one adrenal gland (unilateral) or from both adrenal glands (bilateral) and can connect with other organs, including the liver, spleen, pancreas, etc. This connection phenomenon constitutes the most important handicap for adrenal tumour segmentation. Size change, variety of shape, diverse location, and low contrast (similar grey values between the various tissues) are other disadvantages compounding segmentation difficulty. Few studies have considered adrenal tumour segmentation, and no significant improvement has been achieved for unilateral, bilateral, adherent, or noncohesive tumour segmentation. There is also no recognised segmentation pipeline or method for adrenal tumours covering different shape, size, or location information. Methods: This study proposes an adrenal tumour segmentation (ATUS) pipeline designed to eliminate the above disadvantages of adrenal tumour segmentation. ATUS incorporates a number of image-processing methods, including contrast limited adaptive histogram equalization, split and merge based on quadtree decomposition, mean shift segmentation, a large grey level eliminator, and region growing. Results: Performance assessment of ATUS was carried out on 32 arterial and portal phase computed tomography images using six metrics: Dice, Jaccard, sensitivity, specificity, accuracy, and structural similarity index. ATUS achieved remarkable segmentation performance, and was not affected by the discussed handicaps, particularly adherence to other organs, with success rates of 83.06%, 71.44%, 86.44%, 99.66%, 99.43%, and 98.51% for the respective metrics on images including sufficient contrast uptake. Conclusions: The proposed ATUS system realises detailed adrenal tumour segmentation and avoids known disadvantages that prevent accurate segmentation.

      PubDate: 2018-03-08T16:33:45Z
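
      The ATUS pipeline above begins with contrast limited adaptive histogram equalization before quadtree split-and-merge, mean shift, grey-level elimination and region growing. The snippet below shows only that first CLAHE stage using scikit-image on a synthetic low-contrast slice; the clip limit and the fake data are assumptions, and the later pipeline stages are omitted.

        import numpy as np
        from skimage import exposure

        # synthetic stand-in for a low-contrast abdominal CT slice scaled to [0, 1]
        ct = np.clip(np.random.normal(0.45, 0.05, (256, 256)), 0.0, 1.0)

        # Contrast Limited Adaptive Histogram Equalization, the first stage of the pipeline
        ct_clahe = exposure.equalize_adapthist(ct, clip_limit=0.02)
        print("std before/after:", round(float(ct.std()), 3), round(float(ct_clahe.std()), 3))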
  • Assessment of auditory threshold using Multiple Magnitude-Squared
           Coherence and amplitude modulated tones monaural stimulation around 40Hz
    • Abstract: Publication date: Available online 6 March 2018
      Source:Computer Methods and Programs in Biomedicine
      Author(s): Glaucia de Morais Silva, Felipe Antunes, Catherine Salvador Henrique, Leonardo Bonato Felix
      Background and Objective: The use of objective detection techniques applied to auditory steady-state responses (ASSRs) for the assessment of auditory thresholds has been investigated over the years. The idea consists in setting up the audiometric profile without subjective inference from patients and evaluators. The challenge is to reduce the detection time of auditory thresholds while reaching high correlation coefficients between the objective and the conventional thresholds, as well as reducing the difference between thresholds. Methods: This paper evaluated the use of Multiple Magnitude-Squared Coherence (MMSC) in ASSRs evoked by amplitude modulated tones around 40 Hz, attaining objective audiograms which were later compared to conventional audiograms. The electroencephalogram signals of ten monaurally stimulated subjects were analysed at intensities of 15, 20, 25, 30, 40 and 50 dB SPL, for carrier frequencies of 0.5, 1, 2 and 4 kHz. After varying the detection protocol parameters, two detectors were selected according to behavioral thresholds. Results: The method of this study resulted in a Maximum detector with a correlation coefficient r = 0.9262, a mean difference between the objective and behavioral thresholds of 6.44 dB SPL, and an average detection time of 49.96 min per ear and 2.08 min per stimulus. Meanwhile, the Fast detector presented a coefficient r = 0.8401, a mean difference of 6.81 dB SPL, and an average detection time of 28.20 min per ear and 1.18 min per stimulus. Conclusions: The results of this study indicate that the use of MMSC in auditory response detection might provide a reliable and efficient estimation of auditory thresholds.

      PubDate: 2018-03-08T16:33:45Z
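
      The detection approach above hinges on coherence between the EEG and the amplitude-modulation frequency near 40 Hz. The sketch below computes an ordinary single-channel magnitude-squared coherence at 40 Hz with SciPy on simulated signals; the sampling rate, segment length and signals are assumptions, and the multiple-channel MMSC statistic used in the paper is not implemented here.

        import numpy as np
        from scipy.signal import coherence

        fs = 1000.0
        t = np.arange(0, 10, 1 / fs)
        reference = np.sin(2 * np.pi * 40 * t)                                    # 40 Hz modulation reference
        eeg = 0.2 * np.sin(2 * np.pi * 40 * t + 0.8) + np.random.randn(t.size)    # noisy simulated response

        f, cxy = coherence(reference, eeg, fs=fs, nperseg=1024)
        idx = np.argmin(np.abs(f - 40.0))
        print(f"magnitude-squared coherence at {f[idx]:.1f} Hz = {cxy[idx]:.2f}")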
  • Tissue Classification and Segmentation of Pressure Injuries Using
           Convolutional Neural Networks
    • Abstract: Publication date: Available online 3 March 2018
      Source:Computer Methods and Programs in Biomedicine
      Author(s): Sofia Zahia, Daniel Sierra-Sosa, Begonya Garcia-Zapirain, Adel Elmaghraby
      Background and Objectives: This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage which needs frequent diagnosis and treatment. Therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Methods: Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissues). A preprocessing step removes the flash light and creates a set of 5 × 5 sub-images which are used as input for the CNN. The network output classifies every sub-image of the validation set into one of the three classes studied. Results: The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Conclusions: Our system has been proven to make recognition of complicated structures in biomedical images feasible.

      PubDate: 2018-03-08T16:33:45Z
  • SLAM-based dense surface reconstruction in monocular Minimally Invasive
           Surgery and its application to Augmented Reality
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Long Chen, Wen Tang, Nigel W. John, Tao Ruan Wan, Jian Jun Zhang
      Background and objective: While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon’s performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts of using AR technology in monocular MIS surgical scenes have been mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. Methods: A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real time processing of the point clouds data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement AR augmentations based on a robust 3D calibration. Results: We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluations of the proposed framework consist of two steps. First, we have created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of surface vertices of the reconstructed mesh with that of the ground truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained and the RMSD for surface reconstruction is 2.54 mm, which compare favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR based annotation and measurement, and the creation of depth cues. These results show the potential promise of our geometry-aware AR technology to be used in MIS surgical scenes. Conclusions: The results show that the new framework is robust and accurate in dealing with challenging situations such as the rapid endoscopy camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operated in real-time. 
This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.

      PubDate: 2018-02-25T15:53:53Z
  • Using modified information delivery to enhance the traditional pharmacy
           OSCE program at TMU – a pilot study
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Che-Wei Lin, Elizabeth H. Chang, Daniel L. Clinciu, Yun-Ting Peng, Wen-Chen Huang, Chien-Chih Wu, Jen-Chieh Wu, Yu-Chuan Li
      Background and Objective: The Objective Structured Clinical Examination (OSCE) has been used in many areas of healthcare training over the years. However, it constantly needs to be upgraded and enhanced due to technological and teaching changes. We aim to implement an integrative OSCE method which employs informatics via the virtual patient within the pharmacy education curriculum at Taipei Medical University, to enhance pharmacy students' competence in using and disseminating information and to improve critical thinking and clinical reasoning. Methods: We propose an integrated pharmacy OSCE which uses standardized patients and virtual patients (DxR Clinician). To evaluate this method, we designed four simulated stations and pilot tested them with 19 students in the first year of the Master in Clinical Pharmacy program. Three stations simulated the inpatient pharmacy: 1) history and lab data collection; 2) prescription review; 3) calling the physician to discuss potential prescription problems. The fourth simulated the patient ward, to provide patient education. A satisfaction questionnaire was administered at the end of the study. Results: Students rated their ability at 2.84, 2.37, 2.37, and 3.63 out of 5 for the four stations respectively, with the second and third being the most difficult stations. The method obtained an average rating of 4.32 out of 5 for relevance, 4.16 for improving clinical ability, 4.32 for practicality in future healthcare work, and 4.28 for willingness to have another similar learning experience. Conclusion: The integration of the virtual patient in this study reveals that this assessment method is efficient and practical in many aspects. Most importantly, it provides the test taker with a much closer approximation of a real-life clinical encounter. Although it is in many ways more difficult, it also provides better "learning from mistakes" opportunities for test-takers.

      PubDate: 2018-02-25T15:53:53Z
  • Automated choroid segmentation of three-dimensional SD-OCT images by
           incorporating EDI-OCT images
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Qiang Chen, Sijie Niu, Wangyi Fang, Yuanlu Shuai, Wen Fan, Songtao Yuan, Qinghuai Liu
      Background and Objective: The measurement of choroidal volume is more closely related to eye diseases than choroidal thickness, because choroidal volume can reflect the diseases more comprehensively. The purpose is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. Methods: We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while it is visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Results: Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were −1.96 µm³ and 88.56%, respectively. Conclusions: Our method is effective for 3D choroid segmentation of SD-OCT images, with segmentation accuracy and stability comparable to manual segmentation.

      PubDate: 2018-02-25T15:53:53Z
  • Longitudinal health-related quality of life analysis in oncology with time
           to event approaches, the STATA command qlqc30_TTD
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): C. Bascoul-Mollevi, Marion Savina, Amélie Anota, Antoine Barbieri, David Azria, Franck Bonnetain, Sophie Gourgou
      Background and objective: Health-related quality of life (HRQoL) has become a relevant and available alternative endpoint in cancer clinical trials to evaluate the efficiency of care both for the patient and for the health system. HRQoL in oncology is mainly assessed using the 30-item European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30). The EORTC QLQ-C30 questionnaire is usually administered at different times along a clinical trial in order to analyze the kinetics of HRQoL evolution and to fully assess the impact of the treatment on the patient's HRQoL level. In this perspective, a longitudinal HRQoL analysis is essential, and the time to HRQoL score deterioration approach is a method that is increasingly used in clinical trials. Method: Using the Stata software, we developed a QLQ-C30-specific command, qlqc30_TTD, which implements longitudinal strategies based on time-to-event methods by considering the time to HRQoL score deterioration. This user-written command provides automatic execution of the Time To Deterioration (TTD) and Time Until Definitive Deterioration (TUDD) methods. Results: The program implements all published definitions and provides the Kaplan–Meier curves for each dimension (by group) and a table with the hazard ratio and log-rank test. Conclusion: The longitudinal analysis of HRQoL data in cancer clinical trials remains complex, with only a few programs such as ours available. This program will be of great help and will allow a more systematic and quicker analysis of HRQoL data in clinical trials in oncology.

      PubDate: 2018-02-25T15:53:53Z
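
      The Stata command above automates time-to-deterioration analyses of longitudinal QLQ-C30 scores. The Python sketch below applies one simplified reading of the time-until-definitive-deterioration idea (a drop of at least 5 points from baseline with no later recovery above that threshold) and fits a Kaplan-Meier curve with the lifelines package; the 5-point threshold, the toy data and the helper function are assumptions and do not reproduce the definitions implemented by qlqc30_TTD.

        import pandas as pd
        from lifelines import KaplanMeierFitter

        def time_until_definitive_deterioration(times, scores, baseline, mde=5.0):
            """Return (duration, event) for one HRQoL dimension under a simplified TUDD rule."""
            for i, (t, s) in enumerate(zip(times, scores)):
                if baseline - s >= mde and all(baseline - later >= mde for later in scores[i:]):
                    return t, 1            # definitive deterioration observed at time t
            return times[-1], 0            # censored at the last assessment

        # toy longitudinal data: assessment times (months), scores, and baseline score per patient
        patients = [
            ([0, 3, 6, 9], [80, 78, 70, 68], 80),
            ([0, 3, 6], [65, 66, 64], 65),
        ]
        rows = [time_until_definitive_deterioration(t, s, b) for t, s, b in patients]
        df = pd.DataFrame(rows, columns=["duration", "event"])
        kmf = KaplanMeierFitter().fit(df["duration"], event_observed=df["event"])
        print(kmf.survival_function_)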
  • Communication and diagnosis: Cornerstones for achieving precision medicine
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Chih Yuan Wu, Usman Iqbal, Yu-Chuan (Jack) Li

      PubDate: 2018-02-25T15:53:53Z
  • Tuberculosis diagnosis support analysis for precarious health information
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Alvaro David Orjuela-Cañón, Jorge Eliécer Camargo Mendoza, Carlos Enrique Awad García, Erika Paola Vergara Vela
      Background and objective: Pulmonary tuberculosis is a world emergency for the World Health Organization. Techniques and new diagnostic tools are important to battle this bacterial infection. There have been many advances in all those fields, but in developing countries such as Colombia, where resources and infrastructure are limited, new, fast and less expensive strategies are increasingly needed. Artificial neural networks are computational intelligence techniques that can be used in this kind of problem and offer additional support in the tuberculosis diagnosis process, providing medical staff with a tool for making decisions about the management of subjects under suspicion of tuberculosis. Materials and methods: A database extracted from 105 subjects, with precarious information on people under suspicion of pulmonary tuberculosis, was used in this study. Data on sex, age, diabetes, homelessness, AIDS status and a variable encoding clinical knowledge from the medical personnel were used. Models based on artificial neural networks were used, exploring supervised learning to detect the disease. Unsupervised learning was used to create three risk groups based on the available information. Results: The obtained results are comparable with traditional techniques for the detection of tuberculosis, with advantages such as speed and low implementation costs. A sensitivity of 97% and a specificity of 71% were achieved. Conclusions: The techniques used made it possible to obtain valuable information that can be useful for physicians who treat the disease in decision-making processes, especially under limited infrastructure and data.

      PubDate: 2018-02-25T15:53:53Z
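
      The study above feeds a handful of clinical variables into artificial neural networks and reports sensitivity and specificity. The sketch below mirrors that setup generically with scikit-learn's MLPClassifier on synthetic placeholder columns for the variables named in the abstract; the data, labels and hidden-layer size are fabricated purely for illustration, so the printed numbers carry no clinical meaning.

        import numpy as np
        from sklearn.metrics import confusion_matrix
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(7)
        n = 105
        X = np.column_stack([
            rng.integers(0, 2, n),     # sex
            rng.integers(15, 80, n),   # age
            rng.integers(0, 2, n),     # diabetes
            rng.integers(0, 2, n),     # homelessness
            rng.integers(0, 2, n),     # AIDS status
            rng.integers(0, 4, n),     # clinician suspicion score
        ])
        y = rng.integers(0, 2, n)      # placeholder tuberculosis label

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))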
  • Fully automatic cervical vertebrae segmentation framework for X-ray images
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): S. M. Masudur Rahman Al Arif, Karen Knapp, Greg Slabaugh
      The cervical spine is a highly flexible anatomy and therefore vulnerable to injuries. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human errors. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper, we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved.

      PubDate: 2018-02-25T15:53:53Z
  • ECG fiducial point extraction using switching Kalman filter
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Mahsa Akhbari, Nasim Montazeri Ghahjaverestan, Mohammad B. Shamsollahi, Christian Jutten
      In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and the ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote a mode as a specific observation equation; the switch changes between 7 modes, which correspond to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using other methods. The proposed method achieves a lower RMSE and smaller variability than the others.

      PubDate: 2018-02-25T15:53:53Z
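
      In the switching Kalman filter described above, a discrete "switch" selects among observation models and the probability of each mode is updated at every sample. The sketch below shows only that discrete reweighting step, multiplying prior mode probabilities by Gaussian observation likelihoods and renormalising; the four modes, their predicted values and standard deviations are invented for illustration, and the paper's 7-mode model with full Kalman state estimation is not reproduced.

        import numpy as np
        from scipy.stats import norm

        def update_mode_probabilities(prior, obs, predictions, stds):
            """One switching step: reweight discrete mode probabilities by Gaussian likelihoods."""
            likelihood = norm.pdf(obs, loc=predictions, scale=stds)
            posterior = prior * likelihood
            return posterior / posterior.sum()

        modes = ["baseline", "P", "QRS", "T"]
        prior = np.array([0.4, 0.2, 0.2, 0.2])
        post = update_mode_probabilities(
            prior, obs=0.8,
            predictions=np.array([0.0, 0.1, 0.9, 0.2]),   # each mode's predicted sample value
            stds=np.array([0.05, 0.1, 0.2, 0.1]))
        print(dict(zip(modes, np.round(post, 3))))        # the QRS mode dominates for this sample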
  • Visuospatial working memory assessment using a digital tablet in
           adolescents with attention deficit hyperactivity disorder
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Gi Jung Hyun, Jin Wan Park, Jin Hee Kim, Kyoung Joon Min, Young Sik Lee, Sun Mi Kim, Doug Hyun Han
      Background and objective Attention-deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder hypothesized to involve impaired visuospatial working memory (VSWM). However, there are few studies utilizing neuropsychological tests to measure VSWM in ADHD adolescents. The Rey–Osterrieth complex figure test (ROCF) is commonly used as a neuropsychological test to assess visuospatial working memory for individuals with ADHD. We assessed working memory using the ROCF test on a digital Galaxy tablet with the technically new Gaussian filter method. Methods Thirty adolescents with ADHD and 30 healthy control adolescents were recruited for participation in the current study. All adolescents were assessed with K-WISC-IV, Children's depression inventory, and the Korean ADHD rating scale. All adolescents were asked to copy the ROCF from paper onto a Galaxy tablet screen using a wireless pen. Results There was a significant difference in representative value of the deviation of the original images from template images (R-value) in copy and delayed recall between ADHD adolescents and healthy adolescents. There was no significant difference in R-value of immediate recall between ADHD adolescents and healthy adolescents. In all adolescents (ADHD and healthy) and ADHD adolescents, the R-value of copy was negatively correlated with visuospatial index and working memory index, and the R-value of delayed recall was negatively correlated with WMI. The R-value of copy and delayed recall was positively correlated with K-ARS in all adolescents and ADHD adolescents. Conclusions ADHD adolescents showed differences in the R-values of copy and delayed recall in the digital ROCF version compared to healthy adolescents. The digital ROCF assessment tool can represent different patterns of visuospatial working memory abilities in ADHD adolescents compared to healthy adolescents.

      PubDate: 2018-02-25T15:53:53Z
  • Automated coronary artery tree segmentation in X-ray angiography using
           improved Hessian based enhancement and statistical region merging
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Tao Wan, Xiaoqing Shang, Weilin Yang, Jianhui Chen, Deyu Li, Zengchang Qin
      Background and Objective Coronary artery segmentation is a fundamental step in developing a computer-aided diagnosis system to assist cardiothoracic radiologists in detecting coronary artery disease. Manual delineation of the vasculature becomes tedious or even impossible given the large number of images acquired in daily clinical practice. A new computerized image-based segmentation method is presented for automatically extracting coronary arteries from angiography images. Methods A combination of a multiscale adaptive Hessian-based enhancement method and a statistical region merging technique provides a simple and effective way to enhance complex vessel structures as well as thin vessels, which are often missed by other segmentation methods. The methodology was validated on 100 patients who underwent diagnostic coronary angiography. The segmentation performance was assessed via both qualitative and quantitative evaluations. Results Quantitative evaluation shows that our method is able to identify coronary artery trees with an accuracy of 93% and outperforms other segmentation methods in terms of two widely used segmentation metrics, mean absolute difference and Dice similarity coefficient. Conclusions The comparison with manual segmentations from three human observers suggests that the presented automated segmentation method has the potential to be used in an image-based computerized analysis system for early detection of coronary artery disease.
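
      The enhancement step can be approximated with off-the-shelf tools. The sketch below is a hypothetical stand-in, not the authors' pipeline: it uses scikit-image's multiscale Frangi (Hessian-based) vesselness filter and replaces statistical region merging with a simple Otsu threshold, and it runs on a sample image so the snippet is self-contained.

        import numpy as np
        from skimage import data, filters

        # Any grayscale angiography frame in [0, 1] would do; a sample image is
        # used here purely so the sketch runs end to end.
        image = data.camera() / 255.0

        # Multiscale Hessian-based (Frangi) vesselness enhancement.
        vesselness = filters.frangi(image, sigmas=range(1, 6), black_ridges=True)

        # Stand-in for statistical region merging: a simple global threshold.
        mask = vesselness > filters.threshold_otsu(vesselness)
        print("vessel pixels:", int(mask.sum()))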

      PubDate: 2018-02-25T15:53:53Z
  • Cloud-assisted mutual authentication and privacy preservation protocol for
           telecare medical information systems
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Chun-Ta Li, Dong-Her Shih, Chun-Cheng Wang
      Background and Objective: With the rapid development of wireless communication technologies and the growing prevalence of smart devices, telecare medical information systems (TMIS) allow patients to receive medical treatment from doctors via Internet technology without visiting hospitals in person. By adopting a mobile device, a cloud-assisted platform and a wireless body area network, patients can collect their physiological data and upload them to the medical cloud via their mobile devices, enabling caregivers or doctors to provide appropriate treatment at any time and anywhere. In order to protect the medical privacy of the patient and guarantee the reliability of the system, all system participants must be authenticated before accessing the TMIS. Methods: Mohit et al. recently suggested a lightweight authentication protocol for a cloud-based health care system. They claimed that their protocol resists all well-known security attacks and provides several important features such as mutual authentication and patient anonymity. In this paper, we demonstrate that Mohit et al.'s authentication protocol has various security flaws, and we introduce an enhanced version of their protocol for cloud-assisted TMIS that ensures patient anonymity and patient unlinkability and prevents report revelation and report forgery attacks. Results: The security analysis shows that our enhanced protocol is secure against various known attacks, including those found in Mohit et al.'s protocol. Compared with existing related protocols, our enhanced protocol retains all desirable security properties while maintaining efficiency in terms of computation cost for cloud-assisted TMIS. Conclusions: We propose a more secure mutual authentication and privacy preservation protocol for cloud-assisted TMIS, which fixes the security weaknesses found in Mohit et al.'s protocol. According to our analysis, the proposed authentication protocol satisfies most functionality features for privacy preservation and copes effectively with cloud-assisted TMIS with better efficiency.
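
      For readers unfamiliar with the building blocks of such protocols, the toy Python sketch below shows a generic HMAC-based challenge-response exchange. It is neither Mohit et al.'s protocol nor the authors' enhanced scheme; the key handling and message formats are purely illustrative.

        import hmac, hashlib, os

        SECRET = os.urandom(32)   # long-term key shared by patient device and medical cloud

        def respond(challenge: bytes, key: bytes = SECRET) -> bytes:
            """Keyed response proving knowledge of the shared secret."""
            return hmac.new(key, challenge, hashlib.sha256).digest()

        # Cloud -> device: fresh nonce; device answers; cloud verifies (and the
        # roles are reversed for the second half of the mutual authentication).
        nonce_cloud = os.urandom(16)
        device_answer = respond(nonce_cloud)
        assert hmac.compare_digest(device_answer, respond(nonce_cloud))

        nonce_device = os.urandom(16)
        cloud_answer = respond(nonce_device + b"server")
        assert hmac.compare_digest(cloud_answer, respond(nonce_device + b"server"))
        print("mutual authentication succeeded (toy run)")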

      PubDate: 2018-02-25T15:53:53Z
  • Efficient computational model for classification of protein localization
           images using Extended Threshold Adjacency Statistics and Support Vector Machines
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Muhammad Tahir, Bismillah Jan, Maqsood Hayat, Shakir Ullah Shah, Muhammad Amin
      Background and objective Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images, so that drug development can be more effective. The objective of this paper is to propose a novel modification of the Threshold Adjacency Statistics technique and enhance its discriminative power. Methods In this work, we utilized Threshold Adjacency Statistics (TAS) from a novel perspective to enhance its discriminative power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using the 5-fold cross-validation technique. Results We observed that our novel utilization of the TAS technique improved the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity were achieved for the Transfected dataset. Conclusions Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community towards better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at:
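
      A minimal sketch of the idea, assuming the usual definition of Threshold Adjacency Statistics (a 9-bin histogram of above-threshold neighbour counts) and using synthetic data and illustrative threshold ranges, could look like the following; it is not the authors' ETAS-SubLoc code.

        import numpy as np
        from scipy.ndimage import convolve
        from sklearn.svm import SVC

        def tas_features(img, lo, hi):
            """Threshold Adjacency Statistics: histogram of the number of
            above-threshold neighbours (0..8) over all above-threshold pixels."""
            binary = ((img >= lo) & (img <= hi)).astype(int)
            kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
            neighbours = convolve(binary, kernel, mode="constant")
            counts = np.bincount(neighbours[binary == 1], minlength=9)[:9]
            total = counts.sum()
            return counts / total if total else counts.astype(float)

        rng = np.random.default_rng(0)
        images = rng.random((60, 32, 32))       # toy "protein localization" images
        labels = rng.integers(0, 2, 60)         # toy class labels

        # Seven illustrative threshold ranges -> seven feature spaces -> seven SVMs.
        ranges = [(0.3 + 0.05 * i, 0.9) for i in range(7)]
        svms = []
        for lo, hi in ranges:
            X = np.array([tas_features(im, lo, hi) for im in images])
            svms.append(SVC(kernel="rbf").fit(X, labels))

        # Majority vote over the seven per-range predictions for a new image.
        query = rng.random((32, 32))
        votes = [clf.predict(tas_features(query, lo, hi).reshape(1, -1))[0]
                 for clf, (lo, hi) in zip(svms, ranges)]
        print("predicted class:", np.bincount(votes).argmax())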

      PubDate: 2018-02-25T15:53:53Z
  • Assessing mechanical ventilation asynchrony through iterative airway
           pressure reconstruction
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Yeong Shiong Chiew, Chee Pin Tan, J. Geoffrey Chase, Yeong Woei Chiew, Thomas Desaive, Azrina Md Ralib, Mohd Basri Mat Nor
      Background and objective Respiratory mechanics estimation can be used to guide mechanical ventilation (MV) but is severely compromised when asynchronous breathing occurs. In addition, asynchrony during MV is often not monitored, and little is known about the impact or magnitude of asynchronous breathing on recovery. It is therefore important to monitor and quantify asynchronous breathing for every breath in an automated fashion, making it possible to overcome the limitations of model-based respiratory mechanics estimation during asynchronous breathing. Methods An iterative airway pressure reconstruction (IPR) method is used to reconstruct asynchronous airway pressure waveforms to better match passive breathing airway waveforms using a single-compartment model. The reconstructed pressure enables estimation of respiratory mechanics from an airway pressure waveform that is essentially free of asynchrony. Reconstruction enables real-time breath-to-breath monitoring and quantification of the magnitude of the asynchrony (MAsyn). Results and discussion Over 100,000 breathing cycles from MV patients with known asynchronous breathing were analyzed. The IPR was able to reconstruct different types of asynchronous breathing. The respiratory mechanics estimated using pressure reconstruction were more consistent, with a smaller interquartile range (IQR), than those estimated using the asynchronous pressure. Comparing the reconstructed pressure with the asynchronous pressure waveform quantifies the magnitude of asynchronous breathing, with a median MAsyn of 3.8% over the entire dataset. Conclusion The iterative pressure reconstruction method is capable of identifying asynchronous breaths and improving the consistency of respiratory mechanics estimation compared to conventional model-based methods. It provides an opportunity to automate real-time quantification of the frequency and magnitude of asynchronous breathing, which was previously limited to invasive methods.
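
      The single-compartment model referred to above is commonly written as Paw(t) = E*V(t) + R*Q(t) + P0, which can be identified by least squares once the waveform is free of asynchrony. The sketch below illustrates only that identification step on simulated data with illustrative mechanics values; the iterative reconstruction itself is not shown.

        import numpy as np

        # Simulated passive breath: flow Q (L/s), volume V (L) and airway pressure
        # Paw = E*V + R*Q + P0 with illustrative mechanics (E=25 cmH2O/L, R=10 cmH2O.s/L).
        t = np.linspace(0, 3, 300)
        Q = np.where(t < 1.0, 0.5, -0.25 * np.exp(-(t - 1.0)))   # inspiration then expiration
        V = np.cumsum(Q) * (t[1] - t[0])
        Paw = 25.0 * V + 10.0 * Q + 5.0 + np.random.default_rng(1).normal(0, 0.1, t.size)

        # Least-squares identification of E, R and the offset P0.
        A = np.column_stack([V, Q, np.ones_like(t)])
        (E, R, P0), *_ = np.linalg.lstsq(A, Paw, rcond=None)
        print(f"E={E:.1f} cmH2O/L, R={R:.1f} cmH2O.s/L, P0={P0:.1f} cmH2O")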

      PubDate: 2018-02-25T15:53:53Z
  • Data mart construction based on semantic annotation of scientific
           articles: A case study for the prioritization of drug targets
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Marlon Amaro Coelho Teixeira, Kele Teixeira Belloze, Maria Cláudia Cavalcanti, Floriano P. Silva-Junior
      Background and objectives Semantic text annotation enables the association of semantic information (ontology concepts) with text expressions (terms), making them readable by software agents. In the scientific scenario, this is particularly useful because it reveals many scientific findings that are hidden within academic articles. The biomedical area has more than 300 ontologies, most of them comprising over 500 concepts. These ontologies can be used to annotate scientific papers and thus facilitate data extraction. However, in the context of a scientific investigation, a simple keyword-based query through the interface of a digital scientific text library can return more than a thousand hits. The analysis of such a large set of texts, annotated with such numerous and large ontologies, is not an easy task. Therefore, the main objective of this work is to provide a method that facilitates this task. Methods This work describes a method called Text and Ontology ETL (TOETL) to build an analytical view over such texts. First, a corpus of selected papers is semantically annotated using distinct ontologies. Then, the annotation data are extracted, organized and aggregated into the dimensional schema of a data mart. Results Besides the TOETL method, this work illustrates its application through the development of the TaP DM (Target Prioritization data mart). This data mart focuses on the research of gene essentiality, a key concept to be considered when searching for genes showing potential as anti-infective drug targets. Conclusions This work shows that the proposed approach is a relevant tool to support decision making in the prioritization of new drug targets, being more efficient than traditional keyword-based tools.

      PubDate: 2018-02-25T15:53:53Z
  • Computer assisted gastric abnormalities detection using hybrid texture
           descriptors for chromoendoscopy images
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Hussam Ali, Mussarat Yasmin, Muhammad Sharif, Mubashir Husain Rehmani
      Background and Objective The early diagnosis of stomach cancer can be performed by using a proper screening procedure. Chromoendoscopy (CH) is an image-enhanced video endoscopy technique used for inspection of the gastrointestinal tract by spraying dyes to highlight the gastric mucosal structures. An endoscopy session can end up generating a large number of video frames, so inspection of every individual endoscopic frame is an exhaustive task for medical experts. In contrast with manual inspection, automated analysis of gastroenterology images using computer vision techniques can assist the endoscopist by finding abnormal frames in the whole endoscopic sequence. Methods In this paper, we present a new feature extraction method named Gabor-based gray-level co-occurrence matrix (G2LCM) for computer-aided detection of abnormal CH frames. It is a hybrid texture extraction approach that combines both local and global texture descriptors. The texture information of a CH image is represented by computing the gray-level co-occurrence matrix of Gabor filter responses, and the second-order statistics of these co-occurrence matrices are computed to represent the image texture. Results The obtained results show that abnormal frames can be correctly distinguished from normal frames, with sensitivity, specificity, accuracy, and area under the curve of 91%, 82%, 87% and 0.91, respectively, using a support vector machine classifier and G2LCM texture features. Conclusion It is apparent from the results that the proposed system can be used to aid the gastroenterologist in screening of the gastric tract. Ultimately, the time taken by an endoscopic procedure will be substantially reduced.
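
      A rough sketch of the G2LCM idea, assuming scikit-image (0.19+ function names) and using a single Gabor frequency/orientation and a sample image rather than a chromoendoscopy frame, might look like this:

        import numpy as np
        from skimage import data
        from skimage.filters import gabor
        from skimage.feature import graycomatrix, graycoprops

        image = data.camera()                 # stand-in for a chromoendoscopy frame

        # 1) Gabor filter response (one orientation/frequency; the paper uses a bank).
        response, _ = gabor(image, frequency=0.2, theta=0)

        # 2) Quantize the response to a small number of gray levels.
        levels = 8
        quant = np.digitize(response, np.linspace(response.min(), response.max(), levels))
        quant = np.clip(quant - 1, 0, levels - 1).astype(np.uint8)

        # 3) Gray-level co-occurrence matrix of the Gabor response and its
        #    second-order statistics as texture descriptors.
        glcm = graycomatrix(quant, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        features = np.hstack([graycoprops(glcm, p).ravel()
                              for p in ("contrast", "correlation", "energy", "homogeneity")])
        print(features)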

      PubDate: 2018-02-25T15:53:53Z
  • Simultaneous detection and classification of breast masses in digital
           mammograms via a deep learning YOLO-based CAD system
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Mohammed A. Al-masni, Mugahed A. Al-antari, Jeong-Min Park, Geon Gi, Tae-Yeon Kim, Patricio Rivera, Edwin Valarezo, Mun-Taek Choi, Seung-Moo Han, Tae-Seong Kim
      Background and objective Automatic detection and classification of masses in mammograms remain a major challenge and play a crucial role in assisting radiologists towards an accurate diagnosis. In this paper, we propose a novel Computer-Aided Diagnosis (CAD) system based on one of the regional deep learning techniques: a ROI-based Convolutional Neural Network (CNN) called You Only Look Once (YOLO). Although most previous studies only deal with the classification of masses, our proposed YOLO-based CAD system handles detection and classification simultaneously in one framework. Methods The proposed CAD system contains four main stages: preprocessing of mammograms, feature extraction utilizing deep convolutional networks, mass detection with confidence scores, and finally mass classification using Fully Connected Neural Networks (FC-NNs). In this study, we used 600 original mammograms from the Digital Database for Screening Mammography (DDSM) and 2,400 augmented mammograms, with the information of the masses and their types, to train and test our CAD. The trained YOLO-based CAD system detects the masses and then classifies them as benign or malignant. Results Our results with five-fold cross-validation tests show that the proposed CAD system detects the mass location with an overall accuracy of 99.7%. The system also distinguishes between benign and malignant lesions with an overall accuracy of 97%. Conclusions Our proposed system even works on some challenging breast cancer cases where the masses lie over the pectoral muscles or in dense regions.

      PubDate: 2018-02-25T15:53:53Z
  • Using computational support in motor ability analysis of individuals with
           Down syndrome: Literature review
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Clauirton A. Siebra, Helio A. Siebra
      Background The lack of motor ability is one of the main effects of Down syndrome (DS). However, there are several types of motor disorders that can be attenuated or corrected if they are identified early and properly analyzed. Objectives The aim of our study is to support the local Physical Activity research group, which works with about 25 DS children, by means of computational resources for motor analysis. To that end, we first needed to identify the main computational approaches that support the motor analysis of DS individuals, whether they are already connected to intervention programs, and potential opportunities to extend the current state of the art. Method We carried out a systematic review that identified 28 papers from the current literature. These papers were then analyzed to answer the research questions defined in our study. Results Our main findings were: (1) the temporal distribution of papers shows this area is new and is starting to create a body of knowledge that in fact supports motor treatments of DS individuals; (2) there is a diversity of studies that consider different research directions, such as comparisons of motor features of DS and non-DS individuals, characterization of DS motor features, and approaches for intervention programs to improve DS motor abilities; (3) there are several types of sensing hardware that enable the development of studies from different perspectives; (4) spatial monitoring is performed, but only in laboratory conditions; (5) mathematical tools are largely used, while strategies based on artificial intelligence for automated analysis are ignored; and (6) proposals for DS post-intervention monitoring are not found in the literature. Conclusion DS motor analysis is still a new research area and is not yet mature. Thus, the use of computational resources is very pragmatic and focused only on mathematical tools that support numerical analysis of the acquired data. The main proposals for motor analysis are performed in the laboratory, so there are several opportunities to create computational resources to obtain real-time data on the move. The integration of these data with intervention strategies is also a potential area for future research.

      PubDate: 2018-02-25T15:53:53Z
  • Analysis of methods commonly used in biomedicine for treatment versus
           control comparison of very small samples
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Jasna L. Ristić-Djurović, Saša Ćirković, Pavle Mladenović, Nebojša Romčević, Alexander M. Trbovich
      Background and objective A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Methods Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. Results The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6–8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3–5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. Conclusions The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment.
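
      The kind of computer-simulated experiment described above can be reproduced in a few lines. The sketch below is a simplified, hypothetical version (normal populations, a single effect size, two-sample t-test only), not the authors' simulation code.

        import numpy as np
        from scipy.stats import ttest_ind

        def error_rates(n, effect, trials=5000, alpha=0.05, seed=0):
            """Monte Carlo Type I and Type II error rates of the two-sample t-test
            for sample size n and a given (weak or strong) effect size."""
            rng = np.random.default_rng(seed)
            type1 = type2 = 0
            for _ in range(trials):
                control = rng.normal(0.0, 1.0, n)
                same = rng.normal(0.0, 1.0, n)        # no effect -> Type I error check
                shifted = rng.normal(effect, 1.0, n)  # true effect -> Type II error check
                type1 += ttest_ind(control, same).pvalue < alpha
                type2 += ttest_ind(control, shifted).pvalue >= alpha
            return type1 / trials, type2 / trials

        for n in (3, 6, 9):
            t1, t2 = error_rates(n, effect=1.0)
            print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")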

      PubDate: 2018-02-25T15:53:53Z
  • Fast modified Self-organizing Deformable Model: Geometrical
           feature-preserving mapping of organ models onto target surfaces with
           various shapes and topologies
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Shoko Miyauchi, Ken’ichi Morooka, Tokuo Tsuji, Yasushi Miyagi, Takaichi Fukuda, Ryo Kurazume
      Background and Objective This paper proposes a new method for mapping surface models of human organs onto target surfaces with the same genus as the organs. Methods In the proposed method, called the modified Self-organizing Deformable Model (mSDM), the mapping problem is formulated as the minimization of an objective function defined as the weighted linear combination of four energy functions: model fitness, foldover-freeness, landmark mapping accuracy, and geometrical feature preservation. Further, we extend mSDM to speed up its processing, and call the result Fast mSDM. Results From the mapping results of various organ models with different numbers of holes, it is observed that Fast mSDM can map the organ models onto their target surfaces efficiently and stably, without foldovers, while preserving geometrical features. Conclusions Fast mSDM maps organ models onto target surfaces efficiently and stably, and is applicable to medical applications including statistical shape modeling.

      PubDate: 2018-02-25T15:53:53Z
  • A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Ying Han, Zhenyu Lu, Zhenguang Du, Qi Luo, Sheng Chen
      Background and Objective: Bipolar disorder is often misdiagnosed as unipolar depression in clinical diagnosis. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified inference mathematical modeling method for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. Thus, in this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is developed. Methods: Compared with the existing method, the new one is more comprehensive. The merits of the new method are as follows. First, the multi-criteria group decision making method is introduced into bipolar disorder diagnosis to account for different symptoms and multiple doctors' opinions. Secondly, the discreet diagnosis principle is adopted by the revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. Results: The illustrated case demonstrates the feasibility, validity, and necessity of the theoretical results obtained. Moreover, the comparison analysis demonstrates that the diagnosis result is more accurate when the interrelations among symptoms are considered in the proposed method. Conclusions: In conclusion, the main contribution of this paper is to provide a comprehensive mathematical approach to improve the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered.
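
      For orientation, the classical (non-fuzzy) TOPSIS ranking underlying the revised method can be sketched as follows; the YinYang bipolar fuzzy extension and the cognitive map are not reproduced, and the decision matrix and weights are purely illustrative.

        import numpy as np

        def topsis(scores, weights, benefit):
            """Plain TOPSIS ranking: rows = alternatives, columns = criteria."""
            norm = scores / np.linalg.norm(scores, axis=0)       # vector normalization
            weighted = norm * weights
            ideal = np.where(benefit, weighted.max(0), weighted.min(0))
            anti = np.where(benefit, weighted.min(0), weighted.max(0))
            d_best = np.linalg.norm(weighted - ideal, axis=1)
            d_worst = np.linalg.norm(weighted - anti, axis=1)
            return d_worst / (d_best + d_worst)                  # closeness coefficient

        # Toy decision matrix: 3 candidate diagnoses scored on 4 symptom criteria.
        scores = np.array([[7.0, 5.0, 8.0, 3.0],
                           [6.0, 8.0, 4.0, 5.0],
                           [9.0, 6.0, 7.0, 2.0]])
        weights = np.array([0.4, 0.3, 0.2, 0.1])
        closeness = topsis(scores, weights, benefit=np.array([True, True, True, False]))
        print("ranking (best first):", np.argsort(-closeness))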

      PubDate: 2018-02-16T15:24:43Z
  • Segmentation of liver and vessels from CT images and classification of
           liver segments for preoperative liver surgical planning in living donor
           liver transplantation
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Xiaopeng Yang, Jae Do Yang, Hong Pil Hwang, Hee Chul Yu, Sungwoo Ahn, Bong-Wan Kim, Heecheon You
      Background and objective The present study developed an effective surgical planning method consisting of a liver extraction stage, a vessel extraction stage, and a liver segment classification stage based on abdominal computerized tomography (CT) images. Methods An automatic seed point identification method, customized level set methods, and an automated thresholding method were applied in this study to extraction of the liver, portal vein (PV), and hepatic vein (HV) from CT images. Then, a semi-automatic method was developed to separate PV and HV. Lastly, a local searching method was proposed for identification of PV branches and the nearest neighbor approximation method was applied to classifying liver segments. Results Onsite evaluation of liver segmentation provided by the SLIVER07 website showed that the liver segmentation method achieved an average volumetric overlap accuracy of 95.2%. An expert radiologist evaluation of vessel segmentation showed no false positive errors or misconnections between PV and HV in the extracted vessel trees. Clinical evaluation of liver segment classification using 43 CT datasets from two medical centers showed that the proposed method achieved high accuracy in liver graft volumetry (absolute error, AE = 45.2 ± 20.9 ml; percentage of AE, %AE = 6.8% ± 3.2%; percentage of %AE > 10% = 16.3%; percentage of %AE > 20% = none) and the classified segment boundaries agreed with the intraoperative surgical cutting boundaries by visual inspection. Conclusions The method in this study is effective in segmentation of liver and vessels and classification of liver segments and can be applied to preoperative liver surgical planning in living donor liver transplantation.

      PubDate: 2018-02-16T15:24:43Z
  • NiftyNet: a deep-learning platform for medical imaging
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Eli Gibson, Wenqi Li, Carole Sudre, Lucas Fidon, Dzhoshkun I. Shakir, Guotai Wang, Zach Eaton-Rosen, Robert Gray, Tom Doel, Yipeng Hu, Tom Whyntie, Parashkev Nachev, Marc Modat, Dean C. Barratt, Sébastien Ourselin, M. Jorge Cardoso, Tom Vercauteren
      Background and objectives Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. Methods The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. Results We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. Conclusions The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.

      PubDate: 2018-02-16T15:24:43Z
  • Low-complexity hardware design methodology for reliable and automated
           removal of ocular and muscular artifact from EEG
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Amit Acharyya, Pranit N Jadhav, Valentina Bono, Koushik Maharatna, Ganesh R. Naik
      Background and objective EEG is a non-invasive tool for neuro-developmental disorder diagnosis and treatment. However, the EEG signal is mixed with other biological signals, including ocular and muscular artifacts, making it difficult to extract the diagnostic features. Therefore, contaminated EEG channels are often discarded by medical practitioners, which may result in a less accurate diagnosis. Many existing methods require reference electrodes, which create discomfort for the patient/children and hinder the diagnosis of neuro-developmental disorders and Brain Computer Interface use in a pervasive environment. It would therefore be ideal if these artifacts could be removed in real time on a hardware platform in an automated fashion, so that the denoised EEG could be used for online diagnosis in a pervasive personalized healthcare environment without the need for any reference electrode. Methods In this paper we propose a reliable, robust and automated methodology to solve the aforementioned problem. The proposed methodology is based on Haar-function-based wavelet decompositions with simple threshold-based wavelet-domain denoising and artifact removal schemes. Subsequently, hardware implementation results are also presented. 100 EEG recordings from the Physionet, Klinik für Epileptologie (Universität Bonn, Germany) and Caltech EEG databases, and 7 EEG recordings from 3 subjects from the University of Southampton, UK, were studied, and nine exhaustive case studies comprising real and simulated data were formulated and tested. The proposed methodology was prototyped and validated on an FPGA platform. Results As in the existing literature, the performance of the proposed methodology is measured in terms of correlation, regression and R-square statistics; the respective values lie above 80%, 79% and 65%, with a gain in hardware complexity of 64.28% and an improvement in hardware delay of 53.58% compared to state-of-the-art approaches. The hardware design based on the proposed methodology consumes 75 µW of power. Conclusions The automated methodology proposed in this paper, unlike state-of-the-art methods, can remove blink and muscular artifacts in real time without the need for any extra electrode. Its reliability and robustness are also established through an exhaustive simulation study and analysis of both simulated and real data. We believe the proposed methodology will be useful in next-generation personalized pervasive healthcare for Brain Computer Interfaces and neuro-developmental disorder diagnosis and treatment.
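
      A minimal sketch of Haar-wavelet, threshold-based artifact suppression, assuming the PyWavelets package and a crude simulated blink artifact (this is not the authors' thresholding scheme or hardware design), is shown below.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        fs = 256
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
        eeg[200:260] += 40 * np.hanning(60)      # crude stand-in for an eye-blink artifact

        # Haar wavelet decomposition, soft-thresholding of the coarse bands that
        # carry the slow, high-amplitude ocular component, and reconstruction.
        coeffs = pywt.wavedec(eeg, "haar", level=6)
        coeffs[0] = pywt.threshold(coeffs[0], value=3 * np.std(coeffs[0]), mode="soft")
        for i in (1, 2):                         # lowest-frequency detail bands
            coeffs[i] = pywt.threshold(coeffs[i], value=3 * np.std(coeffs[i]), mode="soft")
        clean = pywt.waverec(coeffs, "haar")[: eeg.size]
        print("peak before/after:", round(float(eeg.max()), 1), round(float(clean.max()), 1))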

      PubDate: 2018-02-16T15:24:43Z
  • Hemodynamic effect of bypass geometry on intracranial aneurysm: A
           numerical investigation
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Burak Kurşun, Levent Uğur, Gökhan Keskin
      Background and objective Hemodynamic analyses are used in the clinical investigation and treatment of cardiovascular diseases. In the present study, the effect of bypass geometry on intracranial aneurysm hemodynamics was investigated numerically. The pressure, wall shear stress (WSS) and velocity distributions that cause the aneurysm to grow and rupture were investigated, and the most favourable conditions for a bypass between the basilar artery (BA) and the left/right posterior arteries (LPCA/RPCA) were sought for different parameter values. Methods The finite volume method was used for the numerical solutions, and calculations were performed with the ANSYS-Fluent software. The SIMPLE algorithm was used to solve the discretized conservation equations, and the Second Order Upwind method was preferred for finding intermediate point values in the computational domain. As the blood flow velocity changes with time, the blood viscosity value also changes; for this reason, the Carreau model was used to determine the viscosity as a function of the shear rate. Results The numerical results showed that, when bypassed, pressure and wall shear stresses in the aneurysm were reduced by 40–70%. The numerical results obtained are presented in graphs showing the variation of pressure, wall shear stress and velocity streamlines in the aneurysm. Conclusion Considering the numerical results for all parameter values, the most important factors affecting the pressure and WSS values in bypassing are the bypass position on the basilar artery (Lb) and the diameter of the bypass vessel (d). Pressure and wall shear stress in the aneurysm were reduced by 40–70% in the bypass case for all parameters. This demonstrates that pressure and WSS values can be greatly reduced in aneurysm treatment by bypassing in cases where clipping or coil embolization methods cannot be applied.
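
      The Carreau model mentioned above expresses viscosity as mu = mu_inf + (mu_0 - mu_inf) * [1 + (lambda * shear_rate)^2]^((n-1)/2). A small sketch with commonly cited blood parameters follows; the paper's exact constants are not given in the abstract, so the defaults here are illustrative.

        import numpy as np

        def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
            """Carreau model: mu = mu_inf + (mu0 - mu_inf) * [1 + (lam*g)^2]^((n-1)/2).
            Default constants are commonly cited values for blood (Pa.s, s)."""
            return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

        shear = np.logspace(-2, 3, 6)        # shear rates from 0.01 to 1000 1/s
        print(np.round(carreau_viscosity(shear), 5))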

      PubDate: 2018-02-16T15:24:43Z
  • A novel biomedical image indexing and retrieval system via deep preference
           learning
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Shuchao Pang, Mehmet A. Orgun, Zhezhou Yu
      Background and Objectives Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either rely only on pixel-level and other low-level features to describe an image or use deep features, but still leave considerable room for improving both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. Methods We exploit popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, in order to index all the images for finding similarly referenced images, we also introduce preference learning technology to train a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology into biomedical image retrieval for the first time. Results We evaluate the performance of two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, the experimental results demonstrate that our proposed algorithms outperform state-of-the-art techniques in indexing biomedical images. Conclusions We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer-aided diagnosis (CAD) systems in healthcare. The proposed system shows outstanding indexing ability and high efficiency for biomedical image retrieval applications, and it can be used to collect and annotate high-resolution images in a biomedical database for further biomedical image research and applications.
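
      As a simplified illustration of ranking by deep features, the sketch below scores a database of pre-extracted feature vectors against a query by cosine similarity; the preference-learning ranker and the SDAE/CNN feature extractors are not reproduced, and all data here are synthetic.

        import numpy as np

        def rank_by_similarity(query_feat, db_feats):
            """Return database indices sorted from most to least similar,
            using cosine similarity of deep feature vectors."""
            q = query_feat / np.linalg.norm(query_feat)
            db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
            return np.argsort(-(db @ q))

        rng = np.random.default_rng(0)
        database = rng.standard_normal((1000, 2048))    # e.g. one CNN feature vector per image
        query = database[42] + 0.05 * rng.standard_normal(2048)
        print(rank_by_similarity(query, database)[:5])  # the true match should rank first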

      PubDate: 2018-02-16T15:24:43Z
  • Blood vessel segmentation algorithms — Review of methods, datasets
           and evaluation metrics
    • Abstract: Publication date: May 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 158
      Author(s): Sara Moccia, Elena De Momi, Sara El Hadji, Leonardo S. Mattos
      Background Blood vessel segmentation is a topic of high interest in medical image analysis, since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice, and an appropriate choice of segmentation algorithm is mandatory to deal with the characteristics of the adopted imaging technique (e.g. resolution, noise and vessel contrast). Objective This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we investigated in depth the most novel blood vessel segmentation approaches, including machine learning, deformable model, and tracking-based approaches. Methods This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting the imaging technique used, the anatomical region and the performance measures employed. Benefits and disadvantages of each method are highlighted. Discussion Despite the constant progress and effort in the field, several issues still need to be overcome. A relevant limitation is the segmentation of pathological vessels. Unfortunately, no consistent research effort has been devoted to this issue yet. Research is needed because some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which therefore require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle to achieving high-quality enhancement. This is particularly true for optical imaging, where the image quality is usually lower in terms of noise and contrast than in magnetic resonance and computed tomography angiography. Conclusion No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus the primary goal of this review is to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable methods can be chosen according to the specific task.

      PubDate: 2018-02-16T15:24:43Z
  • Machine learning based cancer detection using various image modalities
    • Abstract: Publication date: March 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 156
      Author(s): Chung-Ming Lo, Yu-Chuan (Jack) Li

      PubDate: 2018-02-16T15:24:43Z
  • Deep Convolutional Neural Networks for breast cancer screening
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Hiba Chougrad, Hamid Zouaki, Omar Alheyane
      Background and objective Radiologists often have a hard time classifying mammography mass lesions, which leads to unnecessary breast biopsies to rule out suspicion and adds exorbitant costs to an already burdened patient and healthcare system. Methods In this paper we developed a Computer-aided Diagnosis (CAD) system based on deep Convolutional Neural Networks (CNN) that aims to help the radiologist classify mammography mass lesions. Deep learning usually requires large datasets to train networks of a certain depth from scratch. Transfer learning is an effective method for dealing with relatively small datasets, as in the case of medical images, although it can be tricky, as it is easy to start overfitting. Results In this work, we explore the importance of transfer learning and experimentally determine the best fine-tuning strategy to adopt when training a CNN model. We were able to successfully fine-tune some of the most recent, most powerful CNNs and achieved better results than other state-of-the-art methods on the same public datasets. For instance, we achieved 97.35% accuracy and 0.98 AUC on the DDSM database, 95.50% accuracy and 0.97 AUC on the INbreast database and 96.67% accuracy and 0.96 AUC on the BCDR database. Furthermore, after pre-processing and normalizing all the extracted Regions of Interest (ROIs) from the full mammograms, we merged all the datasets to build one large set of images and used it to fine-tune our CNNs. The CNN model that achieved the best result, 98.94% accuracy, was used as the baseline to build the Breast Cancer Screening Framework. To evaluate the proposed CAD system and its ability to classify new images, we tested it on an independent database (MIAS) and obtained 98.23% accuracy and 0.99 AUC. Conclusion The results obtained demonstrate that the proposed framework performs well and can indeed be used to predict whether mass lesions are benign or malignant.
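
      The fine-tuning pattern described above can be sketched with tf.keras as follows; the backbone, layer counts and learning rates are illustrative choices, not the authors' exact configuration.

        import tensorflow as tf

        # Pre-trained backbone without its classification head.
        base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                                 input_shape=(299, 299, 3))
        base.trainable = False                   # stage 1: train only the new head

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(1, activation="sigmoid"),   # benign vs malignant
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="binary_crossentropy",
                      metrics=["accuracy", tf.keras.metrics.AUC()])

        # Stage 2 (fine-tuning): unfreeze the top of the backbone with a small LR.
        base.trainable = True
        for layer in base.layers[:-30]:
            layer.trainable = False
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                      loss="binary_crossentropy",
                      metrics=["accuracy", tf.keras.metrics.AUC()])
        model.summary()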

      PubDate: 2018-02-05T08:47:49Z
  • Machine learning techniques for medical diagnosis of diabetes using iris
           images
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Piyush Samant, Ravinder Agarwal
      Background and Objective Complementary and alternative medicine techniques have shown their potential for the treatment and diagnosis of chronic diseases such as diabetes, arthritis, etc. At the same time, digital image processing for disease diagnosis is a reliable and rapidly growing field in biomedicine. The proposed model is an attempt to evaluate the diagnostic validity of an old complementary and alternative medicine technique, iridology, for the diagnosis of type-2 diabetes using soft computing methods. Methods The investigation was performed on a closed group of 338 subjects in total (180 diabetic and 158 non-diabetic). Infra-red images of both eyes were captured simultaneously. The region of interest was cropped from the iris image as the zone corresponding to the position of the pancreas according to the iridology chart. Statistical, texture and discrete wavelet transform features were extracted from the region of interest. Results The results show a best classification accuracy of 89.63%, obtained with the RF classifier. The maximum specificity and sensitivity observed were 0.9687 and 0.988, respectively. Conclusion The results reveal the effectiveness and diagnostic significance of the proposed model for non-invasive and automatic diabetes diagnosis.

      PubDate: 2018-02-05T08:47:49Z
  • Intradialytic hypotension related episodes identification based on the
           most effective features of photoplethysmography signal
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Vahid Reza Nafisi, Mina Shahabi
      Background and objective One of the most adverse conditions facing hemodialysis patients is repetitive hypotension during their dialysis sessions. Different factors can be used to monitor patient condition and prevent Intradialytic Hypotension (IDH) during hemodialysis; these include blood pressure, blood volume, and electrical impedance. In this paper, pre-IDH and IDH episodes were recognized and classified using features of the finger photoplethysmography (PPG) signal. In other words, the goal of the present study is to use PPG signal features to predict the risk of acute hypotension. Methods Since the PPG signal is non-stationary in nature, the main signal was divided into five-minute intervals with no overlap, each interval was analyzed separately, and fifteen time-domain and seven frequency-domain PPG features were extracted. Different feature selection and classification methods were then applied to the normalized feature matrix to select the best features and detect IDH and pre-IDH episodes in dialysis sessions. Results The best results were achieved with a genetic algorithm and AdaBoost. The results obtained on our developed database indicate that the mean and maximum accuracy of the proposed algorithm were 94.5 ± 1.0 and 96.6, respectively. Conclusion Some PPG signal features can be useful during hemodialysis sessions for hypotension management.
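
      A toy sketch of the windowing-plus-classification idea, with a synthetic PPG signal, a handful of hand-picked time-domain features and scikit-learn's AdaBoost (the genetic-algorithm feature selection step is omitted), is given below.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        def window_features(segment):
            """A few simple time-domain PPG features for one 5-minute window."""
            return np.array([segment.mean(), segment.std(),
                             np.ptp(segment), np.percentile(segment, 90)])

        rng = np.random.default_rng(0)
        fs = 25                                     # Hz
        win = 5 * 60 * fs                           # 5-minute windows, no overlap
        signal = rng.standard_normal(60 * 60 * fs)  # toy one-hour PPG recording
        segments = signal[: signal.size // win * win].reshape(-1, win)

        X = np.vstack([window_features(s) for s in segments])
        y = rng.integers(0, 2, X.shape[0])          # toy labels: IDH / non-IDH window

        clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))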

      PubDate: 2018-02-05T08:47:49Z
  • An expert system design to diagnose cancer by using a new method reduced
           rule base
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Fatih Başçiftçi, Emre Avuçlu
      Background and objectives A Medical Expert System (MES) was developed which uses a Reduced Rule Base to diagnose cancer risk according to the symptoms in an individual. A total of 13 symptoms were used. With the new MES, the reduced rules are checked instead of all possibilities (2^13 = 8192 different combinations). By checking reduced rules, results are found more quickly. The method of two-level simplification of Boolean functions was used to obtain the Reduced Rule Base. Thanks to the application developed with a dynamic number of inputs and outputs on different platforms, anyone can easily test their own cancer risk. Methods More accurate results were obtained by considering all the possibilities related to cancer. Thirteen different risk factors were selected to determine the type of cancer. The truth table produced in our study has 13 inputs and 4 outputs. The Boolean function minimization method is used to obtain fewer cases by simplifying the logical functions, so that cancer can be diagnosed quickly by checking the simplified 4 output functions. Results Diagnosis made with the 4 output values obtained using the Reduced Rule Base was found to be quicker than diagnosis made by screening all 2^13 = 8192 possibilities. With the improved MES, more probabilities were added to the process and more accurate diagnostic results were obtained. As a result of the simplification process, a 100% gain in diagnosis speed was obtained for breast and renal cancer diagnosis, and a 99% gain for cervical and lung cancer diagnosis. Conclusions With Boolean function minimization, a smaller number of rules is evaluated instead of a large number of rules. Reducing the number of rules allows the designed system to work more efficiently, saves time, and makes it easier to transfer the rules to the designed expert system. Interfaces were developed on different software platforms to enable users to test the accuracy of the application. Anyone is able to assess their own cancer risk using the determinative risk factors, and is thereby more likely to beat the cancer through early diagnosis.
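
      The two-level simplification step can be illustrated with SymPy's SOPform (a Quine-McCluskey-based minimizer) on a toy truth table with 3 symptoms instead of 13, i.e. 2^3 = 8 rows instead of 8192; the symptom names and minterms below are hypothetical.

        from sympy import symbols, SOPform

        # Toy example: minterms are the symptom combinations for which the
        # (hypothetical) output "high cancer risk" is 1.
        s1, s2, s3 = symbols("s1 s2 s3")
        high_risk_rows = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]

        reduced_rule = SOPform([s1, s2, s3], high_risk_rows)
        print(reduced_rule)       # e.g. (s1 & s2) | (s1 & s3): far fewer checks than 8 rows

        # Evaluating the reduced rule for a new patient:
        print(reduced_rule.subs({s1: True, s2: False, s3: True}))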

      PubDate: 2018-02-05T08:47:49Z
  • Automatic energy expenditure measurement for health science
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): Cagatay Catal, Akhan Akbulut
      Background and objective It is crucial to predict human energy expenditure accurately in any sports or health science application in order to investigate the impact of the activity. However, measurement of the real energy expenditure is not a trivial task and involves complex steps. The objective of this work is to improve the performance of existing energy expenditure estimation models by using machine learning algorithms and data from several different sensors, and to provide this estimation service on a cloud-based platform. Methods In this study, we used input data such as breathing rate and heart rate from three sensors. Inputs are received from a web form and sent to a web service that applies a regression model on the Azure cloud platform. During the experiments, we assessed several machine learning models based on regression methods. Results Our experimental results showed that our novel model, which applies Boosted Decision Tree Regression in conjunction with a median aggregation technique, outperforms the other five regression algorithms. Conclusions This cloud-based energy expenditure system, which uses a web service, showed that cloud computing technology is a great opportunity for developing estimation systems, and the new model that applies Boosted Decision Tree Regression with median aggregation provides remarkable results.
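
      A minimal stand-in for the described pipeline, using scikit-learn's GradientBoostingRegressor in place of Azure ML's Boosted Decision Tree Regression and synthetic sensor data aggregated with the median, might look like this:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 500
        # Three sensors each report breathing rate and heart rate; aggregate the
        # readings per subject/time-point with the median before regression.
        breath = rng.normal(16, 3, (n, 3))
        heart = rng.normal(80, 12, (n, 3))
        X = np.column_stack([np.median(breath, axis=1), np.median(heart, axis=1)])

        # Toy ground-truth energy expenditure (kcal/min) with noise.
        y = 0.1 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.3, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))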

      PubDate: 2018-02-05T08:47:49Z
  • dfpk: An R-package for Bayesian dose-finding designs using
           pharmacokinetics (PK) for phase I clinical trials
    • Abstract: Publication date: April 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 157
      Author(s): A. Toumazi, E. Comets, C. Alberti, T. Friede, F. Lentz, N. Stallard, S. Zohar, M. Ursino
      Background and objective Dose-finding studies, which aim at finding the maximum tolerated dose, and pharmacokinetics studies are the first-in-human studies in the development of a new pharmacological treatment. To date, only a few attempts have been made in the literature to combine pharmacokinetics and dose-finding, and to our knowledge no software implementation is generally available. In previous papers, we proposed several Bayesian adaptive pharmacokinetics-based dose-finding designs for small populations. The objective of this work is to implement these dose-finding methods in an R package called dfpk. Methods All methods were developed in a sequential Bayesian setting, and Bayesian parameter estimation is carried out using the rstan package. All available pharmacokinetics and toxicity data are used to suggest the dose for the next cohort, with a constraint on the probability of toxicity. Stopping rules are also considered for each method. The ggplot2 package is used to create summary plots of toxicities or concentration curves. Results For all implemented methods, dfpk provides a function (nextDose) to estimate the probability of efficacy and suggest the dose to give to the next cohort, and a function (nsim) to run trial simulations when designing a trial. At each dose, the simulation generates the toxicity value related to a pharmacokinetic measure of exposure, the AUC, using an underlying one-compartment pharmacokinetic model with linear absorption; this is included as an example, since similar data frames can be generated directly by the user and passed to nsim. Conclusion The developed user-friendly R package dfpk, available on the CRAN repository, supports the design of innovative dose-finding studies using PK information.

      PubDate: 2018-02-05T08:47:49Z
  • A Review of the Automated Detection and Classification of Acute Leukaemia:
           Coherent Taxonomy, Datasets, Validation and Performance Measurements,
           Motivation, Open Challenges and Recommendations
    • Abstract: Publication date: Available online 3 February 2018
      Source:Computer Methods and Programs in Biomedicine
      Author(s): M.A. Alsalem, A.A. Zaidan, B.B. Zaidan, M. Hashim, H.T. Madhloom, N.D. Azeez, S. Alsyisuf
      Context: Acute leukaemia diagnosis is a field requiring automated solutions, tools and methods, and the ability to facilitate early detection and even prediction. Many studies have focused on the automatic detection and classification of acute leukaemia and its subtypes to enable highly accurate diagnosis. Objective: This study aimed to review and analyse the literature on the detection and classification of acute leukaemia. The factors considered, to improve understanding of the field's various contextual aspects and the characteristics of published studies, were motivation, the open challenges confronting researchers, and the recommendations presented to researchers to enhance this vital research area. Methods: We systematically searched for all articles about the classification and detection of acute leukaemia, as well as their evaluation and benchmarking, in three main databases: ScienceDirect, Web of Science and IEEE Xplore, from 2007 to 2017. These indices were considered sufficiently extensive to encompass our field of literature. Results: Based on our inclusion and exclusion criteria, 89 articles were selected. Most studies (58/89) focused on the methods or algorithms of acute leukaemia classification, a number of papers (22/89) covered systems developed for the detection or diagnosis of acute leukaemia, and a few papers (5/89) presented evaluation and comparative studies. The smallest portion (4/89) of articles comprised reviews and surveys. Discussion: Acute leukaemia diagnosis, a field requiring automated solutions, tools and methods, entails the ability to facilitate early detection or even prediction. Many studies have been performed on the automatic detection and classification of acute leukaemia and its subtypes to promote accurate diagnosis. Conclusions: Research areas in medical-image classification vary, but they are all equally vital. We expect this systematic review to help emphasise current research opportunities and thus extend and create additional research fields.

      PubDate: 2018-02-05T08:47:49Z
  • An automated blastomere identification method for the evaluation of day 2
           embryos during IVF/ICSI treatments
    • Abstract: Publication date: March 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 156
      Author(s): Charalambos Strouthopoulos, George Anifandis
      Purpose The evaluation of human embryos is one of the most important challenges in in vitro fertilization (IVF) programs. The morphology and the morphokinetic parameters of the early cleaving embryo are of critical clinical importance. This stage spans the first 48 h post-fertilization, during which the embryo divides into smaller blastomeres at specific time points. The morphology, in combination with the symmetry of the blastomeres, appears to provide powerful features with strong prognostic value for embryo evaluation. To date, the identification of these features has been based on human inspection at timed intervals, at best using camera systems that simply work as surveillance systems without any precise alerting and decision-support mechanisms. The purpose of the study presented in this paper was to develop a computer vision technique to automatically detect and identify the most suitable cleaving embryos (preferably at day 2 post-fertilization) for embryo transfer (ET) during IVF/ICSI treatments. Methods and results To this end, texture and geometrical features were used to localize and analyze the whole cleaving embryo in 2D grayscale images captured during in vitro embryo formation. Because of the ellipsoidal nature of blastomeres, the contour of each blastomere was modeled with an optimal fitting ellipse and the mean eccentricity of all ellipses was computed. The mean eccentricity, in combination with the number of blastomeres, forms the feature space on which the final criterion for embryo evaluation was based. Conclusions Experimental results with low-quality 2D grayscale images demonstrated the effectiveness of the proposed technique and provided evidence for a novel automated approach to predicting embryo quality.
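
      The ellipse-eccentricity feature can be sketched with scikit-image on a toy binary mask; a real pipeline would first segment the blastomeres from the grayscale frame, so this is only an illustration, not the authors' code.

        import numpy as np
        from skimage.draw import ellipse
        from skimage.measure import label, regionprops

        # Toy binary mask with two elliptical "blastomeres".
        mask = np.zeros((200, 200), dtype=bool)
        rr, cc = ellipse(60, 70, 30, 20)
        mask[rr, cc] = True
        rr, cc = ellipse(140, 120, 25, 25)       # nearly circular -> low eccentricity
        mask[rr, cc] = True

        regions = regionprops(label(mask))
        eccentricities = [r.eccentricity for r in regions]
        n_blastomeres = len(regions)
        mean_ecc = float(np.mean(eccentricities))
        print(n_blastomeres, round(mean_ecc, 3))  # the (count, mean eccentricity) feature pair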

      PubDate: 2018-01-06T04:21:28Z
  • Automatic hemolysis identification on aligned dual-lighting images of
           cultured blood agar plates
    • Abstract: Publication date: March 2018
      Source:Computer Methods and Programs in Biomedicine, Volume 156
      Author(s): Mattia Savardi, Alessandro Ferrari, Alberto Signoroni
      Background and Objective: The recent introduction of Full Laboratory Automation systems in clinical microbiology provides streams of high-definition images representing bacteria culturing plates. This creates new opportunities to support diagnostic decisions through image analysis and interpretation solutions, with an expected high impact on the efficiency of the laboratory workflow and related quality implications. Starting from images acquired under different illumination settings (top-light and back-light), the objective of this work is to design and evaluate a method for the detection and classification of diagnostically relevant hemolysis effects associated with specific bacteria growing on blood agar plates. The presence of hemolysis is an important factor in assessing the virulence of pathogens and is a fundamental sign of the presence of certain types of bacteria. Methods: We introduce a two-stage approach. Firstly, a highly accurate alignment of same-plate image scans, acquired using top-light and back-light illumination, enables the joint, spatially coherent exploitation of the available data. Secondly, from each segmented portion of the image containing at least one bacterial colony, specifically designed image features are extracted to feed an SVM classification system, allowing detection and discrimination among different types of hemolysis. Results: The fine alignment solution aligns more than 98.1% of images with a residual error of less than 0.13 mm. The hemolysis classification block achieves 88.3% precision with a recall of 98.6%. Conclusions: The results collected from different clinical scenarios (urinary infections and throat swab screening), together with an accurate error analysis, demonstrate the suitability of our system for robust hemolysis detection and classification, which remains feasible even in challenging conditions (low contrast or illumination changes).

      PubDate: 2017-12-27T12:52:07Z