  Subjects -> BIOLOGY (Total: 3126 journals)
    - BIOCHEMISTRY (240 journals)
    - BIOENGINEERING (119 journals)
    - BIOLOGY (1490 journals)
    - BIOPHYSICS (47 journals)
    - BIOTECHNOLOGY (236 journals)
    - BOTANY (228 journals)
    - CYTOLOGY AND HISTOLOGY (30 journals)
    - ENTOMOLOGY (69 journals)
    - GENETICS (163 journals)
    - MICROBIOLOGY (258 journals)
    - MICROSCOPY (10 journals)
    - ORNITHOLOGY (26 journals)
    - PHYSIOLOGY (73 journals)
    - ZOOLOGY (137 journals)

BIOTECHNOLOGY (236 journals)                  1 2 | Last

Showing 1 - 200 of 236 Journals sorted alphabetically
3 Biotech     Open Access   (Followers: 8)
Advanced Biomedical Research     Open Access  
Advances in Bioscience and Biotechnology     Open Access   (Followers: 16)
Advances in Genetic Engineering & Biotechnology     Hybrid Journal   (Followers: 7)
Advances in Regenerative Medicine     Open Access   (Followers: 2)
African Journal of Biotechnology     Open Access   (Followers: 6)
Algal Research     Partially Free   (Followers: 11)
American Journal of Biochemistry and Biotechnology     Open Access   (Followers: 67)
American Journal of Bioinformatics Research     Open Access   (Followers: 7)
American Journal of Polymer Science     Open Access   (Followers: 32)
Anadolu University Journal of Science and Technology : C Life Sciences and Biotechnology     Open Access  
Animal Biotechnology     Hybrid Journal   (Followers: 8)
Annales des Sciences Agronomiques     Full-text available via subscription  
Applied Biochemistry and Biotechnology     Hybrid Journal   (Followers: 43)
Applied Biosafety     Hybrid Journal  
Applied Food Biotechnology     Open Access   (Followers: 3)
Applied Microbiology and Biotechnology     Hybrid Journal   (Followers: 64)
Applied Mycology and Biotechnology     Full-text available via subscription   (Followers: 4)
Arthroplasty Today     Open Access   (Followers: 1)
Artificial Cells, Nanomedicine and Biotechnology     Hybrid Journal   (Followers: 1)
Asia Pacific Biotech News     Hybrid Journal   (Followers: 2)
Asian Journal of Biotechnology     Open Access   (Followers: 9)
Asian Pacific Journal of Tropical Biomedicine     Open Access   (Followers: 2)
Australasian Biotechnology     Full-text available via subscription   (Followers: 1)
Banat's Journal of Biotechnology     Open Access  
BBR : Biochemistry and Biotechnology Reports     Open Access   (Followers: 5)
Beiträge zur Tabakforschung International/Contributions to Tobacco Research     Open Access   (Followers: 3)
Bio-Algorithms and Med-Systems     Hybrid Journal   (Followers: 2)
Bio-Research     Full-text available via subscription   (Followers: 3)
Bioactive Materials     Open Access   (Followers: 1)
Biocatalysis and Agricultural Biotechnology     Hybrid Journal   (Followers: 4)
Biocybernetics and Biological Engineering     Full-text available via subscription   (Followers: 5)
Bioethics UPdate     Hybrid Journal   (Followers: 1)
Biofuels     Hybrid Journal   (Followers: 11)
Biofuels Engineering     Open Access   (Followers: 1)
Biological & Pharmaceutical Bulletin     Full-text available via subscription   (Followers: 4)
Biological Cybernetics     Hybrid Journal   (Followers: 10)
Biomarkers and Genomic Medicine     Open Access   (Followers: 3)
Biomarkers in Drug Development     Partially Free   (Followers: 1)
Biomaterials Research     Open Access   (Followers: 4)
BioMed Research International     Open Access   (Followers: 4)
Biomédica     Open Access  
Biomedical and Biotechnology Research Journal     Open Access  
Biomedical Engineering Research     Open Access   (Followers: 6)
Biomedical Glasses     Open Access  
Biomedical Reports     Full-text available via subscription  
BioMedicine     Open Access  
Biomedika     Open Access  
Bioprinting     Hybrid Journal   (Followers: 1)
Bioresource Technology Reports     Hybrid Journal   (Followers: 1)
Bioscience, Biotechnology, and Biochemistry     Hybrid Journal   (Followers: 21)
Biosensors Journal     Open Access  
Biosimilars     Open Access   (Followers: 1)
Biosurface and Biotribology     Open Access  
Biotechnic and Histochemistry     Hybrid Journal   (Followers: 1)
BioTechniques : The International Journal of Life Science Methods     Full-text available via subscription   (Followers: 28)
Biotechnologia Acta     Open Access   (Followers: 1)
Biotechnologie, Agronomie, Société et Environnement     Open Access   (Followers: 2)
Biotechnology     Open Access   (Followers: 6)
Biotechnology & Biotechnological Equipment     Open Access   (Followers: 4)
Biotechnology Advances     Hybrid Journal   (Followers: 33)
Biotechnology and Applied Biochemistry     Hybrid Journal   (Followers: 44)
Biotechnology and Bioengineering     Hybrid Journal   (Followers: 153)
Biotechnology and Bioprocess Engineering     Hybrid Journal   (Followers: 5)
Biotechnology and Genetic Engineering Reviews     Hybrid Journal   (Followers: 13)
Biotechnology and Health Sciences     Open Access   (Followers: 1)
Biotechnology and Molecular Biology Reviews     Open Access   (Followers: 2)
Biotechnology Annual Review     Full-text available via subscription   (Followers: 5)
Biotechnology for Biofuels     Open Access   (Followers: 10)
Biotechnology Frontier     Open Access   (Followers: 2)
Biotechnology Journal     Hybrid Journal   (Followers: 16)
Biotechnology Law Report     Hybrid Journal   (Followers: 4)
Biotechnology Letters     Hybrid Journal   (Followers: 34)
Biotechnology Progress     Hybrid Journal   (Followers: 40)
Biotechnology Reports     Open Access  
Biotechnology Research International     Open Access   (Followers: 1)
Biotechnology Techniques     Hybrid Journal   (Followers: 10)
Biotecnología Aplicada     Open Access  
Bioteknologi (Biotechnological Studies)     Open Access  
BIOTIK : Jurnal Ilmiah Biologi Teknologi dan Kependidikan     Open Access  
Biotribology     Hybrid Journal   (Followers: 1)
BMC Biotechnology     Open Access   (Followers: 16)
Cell Biology and Development     Open Access  
Chinese Journal of Agricultural Biotechnology     Full-text available via subscription   (Followers: 4)
Communications in Mathematical Biology and Neuroscience     Open Access  
Computational and Structural Biotechnology Journal     Open Access   (Followers: 2)
Computer Methods and Programs in Biomedicine     Hybrid Journal   (Followers: 8)
Copernican Letters     Open Access   (Followers: 1)
Critical Reviews in Biotechnology     Hybrid Journal   (Followers: 20)
Crop Breeding and Applied Biotechnology     Open Access   (Followers: 3)
Current Bionanotechnology     Hybrid Journal  
Current Biotechnology     Hybrid Journal   (Followers: 4)
Current Opinion in Biomedical Engineering     Hybrid Journal   (Followers: 1)
Current Opinion in Biotechnology     Hybrid Journal   (Followers: 56)
Current Pharmaceutical Biotechnology     Hybrid Journal   (Followers: 9)
Current Research in Bioinformatics     Open Access   (Followers: 12)
Current Trends in Biotechnology and Chemical Research     Open Access   (Followers: 3)
Current Trends in Biotechnology and Pharmacy     Open Access   (Followers: 8)
EBioMedicine     Open Access  
Electronic Journal of Biotechnology     Open Access  
Entomologia Generalis     Full-text available via subscription  
Environmental Science : Processes & Impacts     Full-text available via subscription   (Followers: 4)
Experimental Biology and Medicine     Hybrid Journal   (Followers: 3)
Folia Medica Indonesiana     Open Access  
Food Bioscience     Hybrid Journal  
Food Biotechnology     Hybrid Journal   (Followers: 9)
Food Science and Biotechnology     Hybrid Journal   (Followers: 8)
Frontiers in Bioengineering and Biotechnology     Open Access   (Followers: 6)
Frontiers in Systems Biology     Open Access   (Followers: 2)
Fungal Biology and Biotechnology     Open Access   (Followers: 2)
GM Crops and Food: Biotechnology in Agriculture and the Food Chain     Full-text available via subscription   (Followers: 1)
GSTF Journal of BioSciences     Open Access  
HAYATI Journal of Biosciences     Open Access  
Horticulture, Environment, and Biotechnology     Hybrid Journal   (Followers: 11)
IEEE Transactions on Molecular, Biological and Multi-Scale Communications     Hybrid Journal   (Followers: 1)
IET Nanobiotechnology     Hybrid Journal   (Followers: 2)
IIOAB Letters     Open Access  
IN VIVO     Full-text available via subscription   (Followers: 4)
Indian Journal of Biotechnology (IJBT)     Open Access   (Followers: 2)
Indonesia Journal of Biomedical Science     Open Access   (Followers: 2)
Indonesian Journal of Biotechnology     Open Access   (Followers: 1)
Indonesian Journal of Medicine     Open Access  
Industrial Biotechnology     Hybrid Journal   (Followers: 17)
International Biomechanics     Open Access  
International Journal of Bioinformatics Research and Applications     Hybrid Journal   (Followers: 13)
International Journal of Biomechatronics and Biomedical Robotics     Hybrid Journal   (Followers: 4)
International Journal of Biomedical Research     Open Access   (Followers: 2)
International Journal of Biotechnology     Hybrid Journal   (Followers: 5)
International Journal of Biotechnology and Molecular Biology Research     Open Access   (Followers: 3)
International Journal of Biotechnology for Wellness Industries     Partially Free   (Followers: 1)
International Journal of Environment, Agriculture and Biotechnology     Open Access   (Followers: 5)
International Journal of Functional Informatics and Personalised Medicine     Hybrid Journal   (Followers: 4)
International Journal of Medicine and Biomedical Research     Open Access   (Followers: 1)
International Journal of Nanotechnology and Molecular Computation     Full-text available via subscription   (Followers: 3)
International Journal of Radiation Biology     Hybrid Journal   (Followers: 4)
Iranian Journal of Biotechnology     Open Access  
ISABB Journal of Biotechnology and Bioinformatics     Open Access  
Italian Journal of Food Science     Open Access   (Followers: 1)
JMIR Biomedical Engineering     Open Access  
Journal of Biometrics & Biostatistics     Open Access   (Followers: 3)
Journal of Bioterrorism & Biodefense     Open Access   (Followers: 6)
Journal of Petroleum & Environmental Biotechnology     Open Access   (Followers: 1)
Journal of Advanced Therapies and Medical Innovation Sciences     Open Access  
Journal of Advances in Biotechnology     Open Access   (Followers: 5)
Journal Of Agrobiotechnology     Open Access  
Journal of Analytical & Bioanalytical Techniques     Open Access   (Followers: 7)
Journal of Animal Science and Biotechnology     Open Access   (Followers: 4)
Journal of Applied Biomedicine     Open Access   (Followers: 2)
Journal of Applied Biotechnology     Open Access   (Followers: 2)
Journal of Applied Biotechnology Reports     Open Access   (Followers: 2)
Journal of Applied Mathematics & Bioinformatics     Open Access   (Followers: 5)
Journal of Biologically Active Products from Nature     Hybrid Journal   (Followers: 1)
Journal of Biomaterials and Nanobiotechnology     Open Access   (Followers: 6)
Journal of Biomedical Photonics & Engineering     Open Access  
Journal of Biomedical Practitioners     Open Access  
Journal of Bioprocess Engineering and Biorefinery     Full-text available via subscription  
Journal of Bioprocessing & Biotechniques     Open Access  
Journal of Biosecurity Biosafety and Biodefense Law     Hybrid Journal   (Followers: 3)
Journal of Biotechnology     Hybrid Journal   (Followers: 64)
Journal of Biotechnology and Strategic Health Research     Open Access  
Journal of Chemical and Biological Interfaces     Full-text available via subscription   (Followers: 1)
Journal of Chemical Technology & Biotechnology     Hybrid Journal   (Followers: 9)
Journal of Chitin and Chitosan Science     Full-text available via subscription   (Followers: 1)
Journal of Colloid Science and Biotechnology     Full-text available via subscription  
Journal of Commercial Biotechnology     Full-text available via subscription   (Followers: 6)
Journal of Crop Science and Biotechnology     Hybrid Journal   (Followers: 3)
Journal of Essential Oil Research     Hybrid Journal   (Followers: 2)
Journal of Experimental Biology     Full-text available via subscription   (Followers: 25)
Journal of Genetic Engineering and Biotechnology     Open Access   (Followers: 5)
Journal of Ginseng Research     Open Access  
Journal of Industrial Microbiology and Biotechnology     Hybrid Journal   (Followers: 17)
Journal of Integrative Bioinformatics     Open Access  
Journal of Medical Imaging and Health Informatics     Full-text available via subscription  
Journal of Molecular Biology and Biotechnology     Open Access  
Journal of Molecular Microbiology and Biotechnology     Full-text available via subscription   (Followers: 11)
Journal of Nano Education     Full-text available via subscription  
Journal of Nanobiotechnology     Open Access   (Followers: 4)
Journal of Nanofluids     Full-text available via subscription   (Followers: 1)
Journal of Organic and Biomolecular Simulations     Open Access  
Journal of Plant Biochemistry and Biotechnology     Hybrid Journal   (Followers: 4)
Journal of Science and Applications : Biomedicine     Open Access  
Journal of the Mechanical Behavior of Biomedical Materials     Hybrid Journal   (Followers: 12)
Journal of Trace Elements in Medicine and Biology     Hybrid Journal   (Followers: 1)
Journal of Tropical Microbiology and Biotechnology     Full-text available via subscription  
Journal of Yeast and Fungal Research     Open Access   (Followers: 1)
Marine Biotechnology     Hybrid Journal   (Followers: 4)
Meat Technology     Open Access  
Messenger     Full-text available via subscription  
Metabolic Engineering Communications     Open Access   (Followers: 4)
Metalloproteinases In Medicine     Open Access  
Microbial Biotechnology     Open Access   (Followers: 9)
MicroMedicine     Open Access   (Followers: 3)
Molecular and Cellular Biomedical Sciences     Open Access   (Followers: 1)
Molecular Biotechnology     Hybrid Journal   (Followers: 13)
Molecular Genetics and Metabolism Reports     Open Access   (Followers: 3)
Nanobiomedicine     Open Access  
Nanobiotechnology     Hybrid Journal   (Followers: 2)
Nanomaterials and Nanotechnology     Open Access  
Nanomedicine and Nanobiology     Full-text available via subscription  
Nanomedicine Research Journal     Open Access  

Computer Methods and Programs in Biomedicine
Journal Prestige (SJR): 0.786
Citation Impact (citeScore): 3
Number of Followers: 8  
  Hybrid Journal (it can contain Open Access articles)
ISSN (Print) 0169-2607
Published by Elsevier  [3161 journals]
  • Fusion Based Glioma Brain Tumor Detection and Segmentation using ANFIS
    • Abstract: Publication date: Available online 12 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): A. Selvapandian, K. Manivannan.
      The detection of tumor regions in Glioma brain images is a challenging task due to their low-sensitivity boundary pixels. In this paper, the Non-Subsampled Contourlet Transform (NSCT) is used to enhance the brain image, and texture features are then extracted from the enhanced image. The extracted features are trained and classified using the Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to label each brain image as normal or Glioma. The tumor regions in Glioma images are then segmented using morphological functions. The proposed Glioma brain tumor detection methodology is evaluated on the open-access Brain Tumor Image Segmentation challenge (BRATS) dataset.
  • Nutrition delivery, workload and performance in a model-based ICU
           glycaemic control system
    • Abstract: Publication date: Available online 11 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Kent W. Stewart, J. Geoffrey Chase, Christopher G. Pretty, Geoffrey M. Shaw.
      Background and Objective: Hyperglycaemia is commonplace in the adult intensive care unit (ICU) and has been associated with increased morbidity and mortality. Effective glycaemic control (GC) can reduce morbidity and mortality, but has proven difficult. STAR is a model-based GC protocol that uniquely maintains normoglycaemia by changing both insulin and nutrition interventions, and it has been proven effective in controlling blood glucose (BG) in the ICU. However, most ICU GC protocols change only insulin interventions, making the variable-nutrition aspect of STAR less clinically desirable. This paper compares the performance of STAR modulating only insulin with three simpler alternative nutrition protocols in clinically evaluated virtual trials.
      Methods: The alternative nutrition protocols are a fixed nutrition rate (100% of caloric goal), the CB (Cahill et al. best) stepped rate (60%, 80% and 100% of caloric goal for the first 3 days of GC, and 100% thereafter), and the SLQ (STAR lower quartile) stepped rate (65%, 75% and 85% of caloric goal for the first 3 days of GC, and 85% thereafter). Each nutrition protocol is simulated with the STAR insulin protocol on a 221-patient virtual cohort, and GC performance, safety and total intervention workload are assessed.
      Results: All alternative nutrition protocols considerably reduced total intervention workload (14.6-19.8%) due to fewer nutrition changes. However, only the stepped nutrition protocols achieved GC performance similar to the current variable nutrition protocol. Of the two stepped protocols, the SLQ protocol also improved GC safety, almost halving the number of severe hypoglycaemic cases (5 vs. 9, P = 0.42).
      Conclusions: Overall, the SLQ nutrition protocol was the best alternative to the current variable nutrition protocol, but either stepped protocol could be adapted by STAR to reduce workload and make it more clinically acceptable, while maintaining its proven performance and safety.
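The two stepped nutrition schedules in the Methods above are simple day-indexed lookup rules. A minimal sketch of that lookup, using only the percentages stated in the abstract (the function name and dictionary layout are illustrative, not from the paper):

```python
def caloric_goal_fraction(day, protocol="SLQ"):
    """Fraction of the caloric goal prescribed on a given day of
    glycaemic control (day 1 = first day of GC), for the two stepped
    nutrition protocols described in the abstract."""
    steps = {
        "CB":  (0.60, 0.80, 1.00),   # Cahill et al. best; 100% thereafter
        "SLQ": (0.65, 0.75, 0.85),   # STAR lower quartile; 85% thereafter
    }
    s = steps[protocol]
    # Days 1-3 step up; from day 4 onward the last step is held.
    return s[day - 1] if day <= 3 else s[-1]
```

The fixed-rate alternative is the degenerate case of a single 100% step.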
  • AI in Medicine: Big Data Remains a Challenge
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Ming-Chin Lin, Usman Iqbal, Yu-Chuan Li.
  • Generating amorphous target margins in radiation therapy to promote
           maximal target coverage with minimal target size
    • Abstract: Publication date: Available online 5 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Adam D. Yock.
      Background and Significance: This work provides proof of principle for two versions of a heuristic approach that automatically creates amorphous radiation therapy planning target volume (PTV) margins, considering local effects of tumor shape and motion to ensure adequate voxel coverage while striving to minimize PTV size. The resulting target thereby promotes disease control while minimizing the risk of normal tissue toxicity.
      Methods: This work describes the mixed-PDF and independent-PDF algorithms, which generate amorphous margins around a radiation therapy target by incorporating user-defined models of target motion. Both algorithms were applied to example targets, one circular and one "cashew-shaped." Target motion was modeled by four probability density functions applied to the target quadrants. The spatially variant motion model illustrates the application of the algorithms even with tissue deformation. Performance of the margins was evaluated in silico with respect to voxelized target coverage and PTV size, and was compared to conventional techniques: a threshold-based probabilistic technique and an (an)isotropic expansion technique. To demonstrate the algorithm's clinical utility, a lung cancer patient was analyzed retrospectively; for this case, 4D CT measurements were combined with setup uncertainty to compare the PTV from the mixed-PDF algorithm with a PTV equivalent to the one used clinically.
      Results: For both targets, the mixed-PDF algorithm performed best, followed by the independent-PDF algorithm, the threshold algorithm and, lastly, the (an)isotropic algorithm. For a given PTV size, the amorphous margin algorithms always achieved superior coverage. Conversely, the margin required for a particular level of coverage was always smaller (8–15%) when created with the amorphous algorithms. For the lung cancer patient, the mixed-PDF algorithm produced a PTV that was 13% smaller than the clinical PTV while still achieving ≥ 99.9% coverage.
      Conclusions: The amorphous margin algorithms are better suited to the local effects of target shape and positional uncertainties than conventional margins. As a result, they provide superior target coverage with smaller PTVs, ensuring dose delivered to the target while decreasing the risk of normal tissue toxicity.
  • A Supervised Joint Multi-layer Segmentation Framework for Retinal Optical
           Coherence Tomography Images using Conditional Random Field
    • Abstract: Publication date: Available online 5 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Arunava Chakravarty, Jayanthi Sivaswamy.
      Background and Objective: Accurate segmentation of the intra-retinal tissue layers in Optical Coherence Tomography (OCT) images plays an important role in the diagnosis and treatment of ocular diseases such as Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). Existing energy-minimization methods employ multiple, manually handcrafted cost terms and often fail in the presence of pathologies. In this work, we eliminate the need to handcraft the energy by learning it from training images in an end-to-end manner. Our method can easily be adapted to pathologies by re-training on an appropriate dataset.
      Methods: We propose a Conditional Random Field (CRF) framework for the joint multi-layer segmentation of OCT B-scans. The appearance of each retinal layer and boundary is modeled by two convolutional filter banks, and the shape priors are modeled using Gaussian distributions. The total CRF energy is linearly parameterized to allow joint, end-to-end training via the Structured Support Vector Machine formulation.
      Results: The proposed method outperformed three benchmark algorithms on four public datasets. The NORMAL-1 and NORMAL-2 datasets contain healthy OCT B-scans, while the AMD-1 and DME-1 datasets contain B-scans of AMD and DME cases, respectively. The proposed method achieved an average unsigned boundary localization error (U-BLE) of 1.52 pixels on NORMAL-1, 1.11 pixels on NORMAL-2 and 2.04 pixels on the combined NORMAL-1 and DME-1 dataset across the eight layer boundaries, outperforming the three benchmark methods in each case. The Dice coefficient was 0.87 on NORMAL-1, 0.89 on NORMAL-2 and 0.84 on the combined NORMAL-1 and DME-1 dataset across the seven retinal layers. On the combined NORMAL-1 and AMD-1 dataset, we achieved an average U-BLE of 1.86 pixels on the ILM, inner and outer RPE boundaries, and a Dice of 0.98 for the ILM-RPEin region and 0.81 for the RPE layer.
      Conclusion: We have proposed a supervised CRF-based method to jointly segment multiple tissue layers in OCT images. It can aid ophthalmologists in the quantitative analysis of structural changes in the retinal tissue layers, both in clinical practice and in large-scale clinical studies.
  • Surgery of complex craniofacial defects: a single-step AM-based
    • Abstract: Publication date: Available online 5 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Yary Volpe, Rocco Furferi, Lapo Governi, Francesca Uccheddu, Monica Carfagni, Federico Mussa, Mirko Scagnet, Lorenzo Genitori.
      Background and Objective: The purpose of the present paper is to pave the road to the systematic optimization of complex craniofacial surgical interventions and to validate a design methodology for virtual surgery and the fabrication of custom cranium vault plates. Recent advances in medical imaging, image processing and additive manufacturing (AM) have led to new insights in several medical applications. The engineered combination of medical actions and 3D processing steps fosters the optimization of the intervention in terms of operative time and number of sessions needed. Complex craniofacial surgical interventions, such as severe hypertelorism accompanied by skull holes, traditionally require a first surgery to correctly "resize" the patient's cranium and a second surgical session to implant a customized 3D-printed prosthesis. Between the two interventions, medical imaging must be carried out to aid the design of the skull plate. Instead, this paper proposes a CAD/AM-based one-in-all design methodology allowing surgeons to perform both skull correction and implantation in a single surgical intervention.
      Methods: A strategy envisaging a virtual/mock surgery on a CAD/AM model of the patient's cranium is proposed, so as to plan the surgery and design the final shape of the cranium plaque. The procedure relies on patient imaging, 3D geometry reconstruction of the defective skull, virtual planning and mock surgery to determine the hypothetical anatomic 3D model, and finally on skull plate design and 3D printing.
      Results: The methodology has been tested on a complex case study. Results demonstrate the feasibility of the proposed approach and a consistent reduction of the time and overall cost of the surgery, not to mention the substantial benefit to the patient, who undergoes a single surgical operation.
      Conclusions: Although a number of AM-based methodologies have been proposed for designing cranial implants or correcting orbital hypertelorism, to the best of the authors' knowledge the present work is the first to treat osteotomy and a titanium cranium plaque simultaneously.
  • An effective computer aided diagnosis model for pancreas cancer on PET/CT
    • Abstract: Publication date: Available online 4 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Siqi Li, Huiyan Jiang, Zhiguo Wang, Guoxu Zhang, Yu-dong Yao.
      Background and Objective: Pancreas cancer is a digestive-tract tumor of high malignancy that is difficult to diagnose and treat at an early stage. To this end, this paper proposes a computer-aided diagnosis (CAD) model for pancreas cancer on Positron Emission Tomography/Computed Tomography (PET/CT) images.
      Methods: The proposed CAD model comprises three essential steps: (1) pancreas segmentation, (2) feature extraction and selection, and (3) classifier design. First, pancreas segmentation is performed using simple linear iterative clustering (SLIC) on CT pseudo-color images generated by the gray interval mapping (GIP) method. Second, dual-threshold principal component analysis (DT-PCA) is developed to select the most beneficial feature combination; it not only considers principal features but also integrates some non-principal features into a new polar-angle representation. Finally, a hybrid feedback-support vector machine-random forest (HFB-SVM-RF) model is designed to identify normal pancreas or pancreas cancer; the key is to use 8 types of SVMs to establish the decision trees of the RF.
      Results: The proposed CAD model is tested on 80 cases of PET/CT data (from the General Hospital of Shenyang Military Area Command) and achieves an average pancreas cancer identification accuracy of 96.47%, sensitivity of 95.23% and specificity of 97.51%. In addition, the proposed pancreas segmentation method is evaluated on a public dataset of 82 3D CT scans from the National Institutes of Health (NIH) Clinical Center, where its performance surpasses other methods, with a mean Dice coefficient of 78.9% and a Jaccard index of 65.4%.
      Conclusions: Collectively, contrast experiments in 10-fold cross-validation demonstrate the efficiency and accuracy of the proposed CAD model as well as its performance advantages compared with related methods.
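The Dice coefficient and Jaccard index quoted in the abstract above are standard overlap measures between a predicted and a ground-truth segmentation mask. A minimal sketch for binary masks (the function name is illustrative; the paper's evaluation pipeline is not described in the abstract):

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for two binary masks given
    as flat sequences of 0/1 values of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))  # |A ∩ B|
    s = sum(pred) + sum(truth)                         # |A| + |B|
    union = s - inter                                  # |A ∪ B|
    dice = 2 * inter / s if s else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The two measures are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both from the same segmentations.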
  • Classification of glucose records from patients at diabetes risk using a
           combined Permutation Entropy algorithm
    • Abstract: Publication date: Available online 1 September 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): D. Cuesta-Frau, P. Miró-Martínez, S. Oltra-Crespo, J. Jordán-Núñez, B. Vargas, L. Vigil.
      Background and Objectives: The adoption in clinical practice of electronic portable blood or interstitial glucose monitors has enabled the collection, storage and sharing of massive amounts of glucose-level readings. This availability of data has opened the door to the application of a multitude of mathematical methods to extract clinical information not discernible by conventional visual inspection. The objective of this study is to assess the capability of Permutation Entropy (PE) to find differences between the glucose records of healthy and potentially diabetic subjects.
      Methods: PE is a mathematical method based on the relative frequency of ordinal patterns in time series, and it has gained considerable attention in recent years for its simplicity, robustness and performance. We study the applicability of this method to glucose records of subjects at risk of diabetes in order to assess the predictive value of this metric in this context.
      Results: PE, along with some of its derivatives, was able to find significant differences between diabetic and non-diabetic patients from records acquired up to 3 years before diagnosis. The quantitative results for PE were 3.5878 ± 0.3916 for the non-diabetic class and 3.1564 ± 0.4166 for the diabetic class. With a classification accuracy higher than 70%, and by means of a Cox regression model, PE demonstrated that it is a very promising candidate as a risk stratification tool for continuous glucose monitoring.
      Conclusion: PE can be considered a prospective tool for the early diagnosis of disorders of the glucoregulatory system.
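Permutation entropy, as defined by Bandt and Pompe, is straightforward to compute: slide a window of length m over the series, record the rank ordering (ordinal pattern) of each window, and take the Shannon entropy of the pattern frequencies. A minimal sketch (the function name and default parameters are illustrative; the abstract does not state which embedding dimension the study used):

```python
import math

def permutation_entropy(series, m=3, delay=1):
    """Shannon entropy (in nats) of the ordinal-pattern distribution
    of a time series, following the Bandt-Pompe definition."""
    counts = {}
    n_windows = len(series) - (m - 1) * delay
    for i in range(n_windows):
        window = series[i:i + m * delay:delay]
        # Ordinal pattern: the argsort of the window's values.
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

A perfectly monotone series yields zero entropy (one pattern dominates), while an unpredictable series approaches the maximum of log(m!), which is the sense in which lower PE in the diabetic class suggests more regular, less adaptive glucose dynamics.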
  • A hybrid data mining model for diagnosis of patients with clinical
           suspicion of dementia
    • Abstract: Publication date: Available online 24 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Leonard Barreto Moreira, Anderson Amendoeira Namen.
      Background and Objective: Given the phenomenon of population aging, dementias have become a complex health problem throughout the world. Several machine learning methods have been applied to the task of predicting dementias. Given their diagnostic complexity, the great challenge lies in distinguishing patients with some type of dementia from healthy people. Particularly in the early stages, the diagnosis positively impacts the quality of life of both the patient and the family. This work presents a hybrid data mining model that integrates text mining with the mining of structured data, with the aim of assisting specialists in the diagnosis of patients with clinical suspicion of dementia.
      Methods: The experiments were conducted on a set of 605 medical records with 19 different attributes about patients with reports of cognitive decline. First, a new structured attribute was created through a text mining process: the result of clustering the patients' pathological-history information stored in an unstructured textual attribute. Classification algorithms (naïve Bayes, Bayesian belief networks and decision trees) were applied to obtain predictive models for Alzheimer's disease and mild cognitive impairment. Ensemble methods (bagging, boosting and random forests) were used to improve the accuracy of the generated models. These methods were applied to two datasets: one containing only the original structured data, and the other containing the original structured data plus the new attribute resulting from the text mining (the hybrid model).
      Results: The accuracy metrics of the models obtained from the two datasets were compared. The results evidenced the greater effectiveness of the hybrid model in the diagnostic prediction of the pathologies of interest.
      Conclusions: Across the different classification and clustering methods used, the best precision and sensitivity rates for the pathologies under study were obtained with hybrid models supported by ensemble methods.
  • A Novel Cumulative Level Difference Mean based GLDM and Modified ABCD
           Features Ranked using Eigenvector Centrality Approach for Four Skin Lesion
           Types Classification
    • Abstract: Publication date: Available online 24 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Maram A. Wahba, Amira S. Ashour, Yanhui Guo, Sameh A. Napoleon, Mustafa M. Abd Elnaby. Background and Objective: Melanoma is one of the major causes of death, while basal cell carcinoma (BCC) is the most frequently occurring skin lesion type. At their early stages, medical experts may confuse both types with benign nevi and pigmented benign keratoses (BKL). This inspired the current study to develop an accurate, automated, user-friendly skin lesion identification system. Methods: The current work targets a novel technique for discriminating the four pre-mentioned skin lesion classes. A novel texture feature, named the cumulative level-difference mean (CLDM) and based on the gray-level difference method (GLDM), is extracted. Asymmetry, border irregularity, color variation, and diameter make up the ABCD rule feature vector, which was originally used to distinguish melanoma from benign lesions. The proposed method improves the ABCD rule so that it can also classify BCC and BKL by using a modified ABCD feature vector, in which each border feature, such as the compact index, fractal dimension, and edge abruptness, is considered a separate feature. The composite feature vector containing the pre-mentioned features is then ranked using the Eigenvector Centrality (ECFS) feature ranking method, and the ranked features are classified by a cubic support vector machine for different numbers of selected features. Results: The proposed CLDM texture features combined with the ranked ABCD features achieved outstanding performance in classifying the four targeted classes (melanoma, BCC, nevi, and BKL). The results report 100% sensitivity, accuracy, and specificity for each class, better than other features, when using the seven highest-ranked features. Conclusions: The proposed system established that melanoma, BCC, nevi, and BKL are efficiently classified using a cubic SVM with the new feature set. In addition, comparative studies demonstrated the superiority of the cubic SVM in classifying the four classes.
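The GLDM underlying the proposed CLDM feature is a standard texture statistic: a normalised histogram of gray-level differences at a fixed displacement. The sketch below implements that histogram and one plausible cumulative-mean reading of it; the exact CLDM formula is the paper's novelty and is an assumption here:

```python
import numpy as np

def gldm_hist(img, dx=1, dy=0, levels=8):
    # Gray-level difference method: normalised histogram of absolute
    # differences between pixels separated by displacement (dx, dy).
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx].astype(int)
    b = img[dy:h, dx:w].astype(int)
    diff = np.abs(a - b)
    hist = np.bincount(diff.ravel(), minlength=levels).astype(float)
    return hist / hist.sum()

def cldm(img, dx=1, dy=0, levels=8):
    # One plausible reading of a "cumulative level-difference mean":
    # the mean of the cumulative difference distribution (an assumption,
    # not the paper's exact formula).
    return np.cumsum(gldm_hist(img, dx, dy, levels)).mean()

flat = np.full((4, 4), 3)   # a perfectly smooth patch
p = gldm_hist(flat)
c = cldm(flat)
```

For a smooth patch all mass sits at difference level 0, so the cumulative distribution is 1 everywhere and the statistic is maximal; textured patches spread mass to higher levels and lower the value.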
  • BE-DTI: Ensemble Framework for Drug Target Interaction Prediction using
           Dimensionality Reduction and Active Learning
    • Abstract: Publication date: Available online 22 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Aman Sharma, Rinkle Rani. Background and Objective: Drug-target interaction (DTI) prediction plays an intrinsic role in the drug discovery process. Prediction of novel drugs and targets helps in identifying optimal drug therapies for various stringent diseases. Computational prediction of drug-target interactions can help to identify potential drug-target pairs and speed up the process of drug repositioning. In the present work, we focus on machine learning algorithms for predicting drug-target interactions from the pool of existing drug-target data. The key idea is to train a classifier on existing DTIs so as to predict new or unknown DTIs. However, there are various challenges, such as class imbalance and the high-dimensional nature of the data, that need to be addressed before developing an optimal drug-target interaction model. Methods: In this paper, we propose a bagging-based ensemble framework named BE-DTI for drug-target interaction prediction using dimensionality reduction and active learning to deal with class-imbalanced data. Active learning helps to improve under-sampling bagging-based ensembles, while dimensionality reduction is used to deal with high-dimensional data. Results: Results show that the proposed technique outperforms the other five competing methods in 10-fold cross-validation experiments, with AUC = 0.927, sensitivity = 0.886, specificity = 0.864, and G-mean = 0.874. Conclusion: Missing interactions and new interactions are predicted using the proposed framework. Some of the known interactions were removed from the original dataset and their interactions recalculated to check the accuracy of the proposed framework. Moreover, the proposed approach was validated on an external dataset. All these results show that structurally similar drugs tend to interact with similar targets.
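The under-sampling bagging idea named in this abstract can be illustrated with a toy sketch: each base learner is fit on a class-balanced resample and predictions are combined by majority vote. The nearest-centroid base learner and synthetic data are stand-ins, not the paper's BE-DTI classifier:

```python
import numpy as np

def undersample_bag_predict(X, y, Xq, n_estimators=5, seed=0):
    # Under-sampling bagging: each base learner is fit on a class-balanced
    # resample drawn from the majority and minority classes, then the
    # per-learner predictions are majority-voted. The base learner here is
    # a simple nearest-centroid rule (an illustrative stand-in).
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    m = min(len(pos), len(neg))
    votes = np.zeros(len(Xq))
    for _ in range(n_estimators):
        idx = np.concatenate([rng.choice(pos, m), rng.choice(neg, m)])
        c1 = X[idx][y[idx] == 1].mean(axis=0)
        c0 = X[idx][y[idx] == 0].mean(axis=0)
        votes += (((Xq - c1) ** 2).sum(axis=1) <
                  ((Xq - c0) ** 2).sum(axis=1))
    return (2 * votes > n_estimators).astype(int)

# Toy imbalanced data: 20 negatives near (0, 0), 3 positives near (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(5.0, 0.5, (3, 2))])
y = np.array([0] * 20 + [1] * 3)
pred = undersample_bag_predict(X, y, np.array([[0.0, 0.0], [5.0, 5.0]]))
```

Balancing each resample keeps the minority (interacting) class from being drowned out, which is the point of this strategy for imbalanced DTI data.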
  • Automated recognition of cardiac arrhythmias using sparse decomposition
           over composite dictionary
    • Abstract: Publication date: Available online 22 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Sandeep Raj, Kailash Chandra Ray. Background and Objective: Cardiovascular diseases (CVDs) are the leading cause of death worldwide. Due to the increase in global mortality rates, biopathological signal processing and evaluation are widely used in ambulatory situations for healthcare applications. For decades, the processing of pathological electrocardiogram (ECG) signals for arrhythmia detection has been thoroughly studied for the diagnosis of various cardiovascular diseases. Despite these studies, efficient diagnosis of ECG signals remains a challenge in the clinical cardiovascular domain due to their non-stationary nature. Classical signal processing methods are widely employed to analyze ECG signals, but they exhibit certain limitations and hence are insufficient to achieve higher accuracy. Methods: This study presents a novel technique for an efficient representation of electrocardiogram (ECG) signals using sparse decomposition over a composite dictionary (CD). The dictionary consists of Stockwell, sine, and cosine analytical functions. The technique decomposes an input ECG signal into stationary and non-stationary components, or atoms. For each of these atoms, five features, i.e., permutation entropy, energy, RR interval, standard deviation, and kurtosis, are extracted to form the feature sets representing the heartbeats, which are classified into different categories using multi-class least-squares twin support vector machines. The artificial bee colony (ABC) technique is used to determine the optimal classifier parameters. The proposed method is evaluated under category-based and personalized schemes, and its validation is performed on MIT-BIH data. Results: The experimental results report overall accuracies of 99.21% and 90.08% in the category-based and personalized schemes, respectively, higher than the existing techniques reported in the literature. Further, the method achieves a sensitivity, positive predictivity, and F-score of 99.21% each in the category-based scheme and 90.08% each in the personalized scheme. Conclusions: The proposed methodology can be utilized in computerized decision support systems to monitor different classes of cardiac arrhythmias with higher accuracy for early detection and treatment of cardiovascular diseases.
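Of the five per-atom features listed, permutation entropy is the least standard to compute. A minimal Bandt-Pompe implementation, normalised to [0, 1], might look like this; the order and delay values are illustrative defaults, not the paper's settings:

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    # Bandt-Pompe permutation entropy, normalised by log(order!).
    # Counts the ordinal (argsort) pattern of each embedded window.
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

pe_trend = permutation_entropy(np.arange(50))                      # one pattern only
pe_noise = permutation_entropy(np.random.default_rng(0).normal(size=2000))
```

A monotone ramp yields entropy 0 (a single ordinal pattern), while white noise approaches 1, which is what makes the feature useful for separating regular and irregular heartbeat morphology.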
  • Tracking Tumor Boundary Using Point Correspondence for Adaptive Radio
    • Abstract: Publication date: Available online 22 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Nazanin Tahmasebi, Pierre Boulanger, Jihyun Yun, B. Gino Fallone, Kumaradevan Punithakumar. Background and Objective: Tracking mobile tumor regions during treatment is a crucial part of image-guided radiation therapy for two main reasons, each of which negatively affects the treatment process: 1) a tiny error will lead to some healthy tissue being irradiated; and 2) some cancerous cells may survive if the beam is not accurately positioned, as it may not cover the entire cancerous region. However, tracking or delineating such a tumor region from magnetic resonance imaging (MRI) is challenging due to the photometric similarity of the region of interest and the surrounding area, as well as the influence of organ motion. The purpose of this work is to develop an approach to track the center and boundary of the tumor region by auto-contouring the region of interest in moving organs for radiotherapy. Methods: We utilize a nonrigid registration method as well as the publicly available RealTITracker algorithm for MRI to delineate and track tumor regions from a sequence of MRI images. The location and shape of the tumor region in the MRI image sequence vary over time due to breathing. We investigate two approaches: the first uses manual segmentation of the first frame during the pretreatment stage; the second utilizes manual segmentation of all the frames during the pretreatment stage. Results: We evaluated the proposed approaches on a sequence of 600 images acquired from 6 patients. The method that utilizes all the frames in the pretreatment stage with moving-mesh-based registration yielded the best performance, with an average Dice score of 0.89 ± 0.04 and a Hausdorff distance of 3.38 ± 0.10 mm. Conclusions: This study demonstrates a promising boundary tracking tool for delineating the tumor region that can deal with respiratory movement and the constraints of adaptive radiation therapy.
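The two evaluation metrics reported above can be computed from binary masks as follows. This is a generic sketch (brute-force Hausdorff over foreground pixels), not the authors' evaluation code:

```python
import numpy as np

def dice(a, b):
    # Dice overlap: 2|A ∩ B| / (|A| + |B|) for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    # Symmetric Hausdorff distance between foreground pixel sets
    # (brute force; fine for small masks).
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # reference contour
b = np.zeros((4, 4)); b[1:3, 2:4] = 1   # contour shifted one column
```

Dice rewards overlap area, while the Hausdorff distance penalises the single worst boundary disagreement, so the two metrics are complementary for contour tracking.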
  • A novel, data-driven conceptualization for critical left heart obstruction
    • Abstract: Publication date: Available online 20 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): James M. Meza, Martijn Slieker, Eugene H. Blackstone, Luc Mertens, William M. DeCampli, James K. Kirklin, Mohsen Karimi, Pirooz Eghtesady, Kamal Pourmoghadam, Richard W. Kim, Phillip T. Burch, Marshall L. Jacobs, Tara Karamlou, Brian W. McCrindle, Congenital Heart Surgeons’ Society. Background: Qualitative features of aortic and mitral valvar pathology have traditionally been used to classify congenital cardiac anomalies in which the left heart structures are unable to sustain adequate systemic cardiac output. We aimed to determine whether novel groups of patients with greater clinical relevance could be defined within this population of patients with critical left heart obstruction (CLHO) using a data-driven approach based on both qualitative and quantitative echocardiographic measures. Methods: An independent standardized review of recordings from pre-intervention transthoracic echocardiograms of 651 neonates with CLHO was performed. An unsupervised cluster analysis, incorporating 136 echocardiographic measures, was used to group patients with similar characteristics. Key measures differentiating the groups were then identified. Results: Based on all measures, the cluster analysis linked the 651 neonates into groups of 215 (Group 1), 338 (Group 2), and 98 (Group 3) patients. Aortic valve atresia and left ventricular (LV) end diastolic volume were identified as significant variables differentiating the groups. The median LV end diastolic area was 1.35, 0.69, and 2.47 cm2 in Groups 1, 2, and 3, respectively (p
  • A Facial Expression Controlled Wheelchair for People with Disabilities
    • Abstract: Publication date: Available online 18 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Yassine Rabhi, Makrem Mrabet, Farhat Fnaiech. Background and Objectives: In order to improve assistive technologies for people with reduced mobility, this paper develops a new intelligent real-time emotion detection system to control equipment such as electric wheelchairs (EWC) or robotic assistance vehicles. Every year, degenerative diseases and traumas prevent thousands of people from easily controlling the joystick of their wheelchairs with their hands. Most current technologies are considered invasive and uncomfortable, such as those requiring the user to wear body sensors to control the wheelchair. Methods: In this work, the proposed Human Machine Interface (HMI) provides an efficient hands-free option that does not require sensors or objects attached to the user's body. It allows users to drive the wheelchair using their facial expressions, which can be flexibly updated. This intelligent solution is based on a combination of neural networks (NN) and specific image preprocessing steps. First, the Viola-Jones method is used to detect the user's face in a video stream. Subsequently, a neural network is used to classify the emotions displayed on the face. This solution, called "The Mathematics Behind Emotion", is capable of classifying many facial expressions in real time, such as smiles and raised eyebrows, which are translated into signals for wheelchair control. On the hardware side, the solution only requires a smartphone and a Raspberry Pi board that can be easily mounted on the wheelchair. Results: Many experiments have been conducted to evaluate the efficiency of the control acquisition process and the user experience of driving a wheelchair through facial expressions. The classification accuracy reaches 98.6%, with an average recall rate of 97.1%. These experiments have shown that the proposed system is able to accurately recognize user commands in real time. Indeed, the results indicate that the suggested system is more comfortable and better adapted to severely disabled people in their daily lives than conventional methods. Among the advantages of this system is its ability to identify facial expressions in real time from different angles. Conclusions: The proposed system takes into account the patient's pathology. It is intuitive and modern, does not require physical effort, and can be integrated into a smartphone or tablet. The results obtained highlight the efficiency and reliability of this system, which ensures safe navigation for the disabled patient.
  • Secure Large-Scale Genome Data Storage and Query
    • Abstract: Publication date: Available online 16 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Luyao Chen, Md Momin Al Aziz, Noman Mohammed, Xiaoqian Jiang. Background and Objective: Cloud computing plays a vital role in big data science with its scalable and cost-efficient architecture. Large-scale genome data storage and computation would benefit from these latest cloud computing infrastructures to save cost and speed up discoveries. However, due to privacy and security concerns, data owners are often disinclined to put sensitive data in a public cloud environment without enforcing some protective measures. An ideal solution is to develop a secure genome database that supports encrypted data deposition and query. Methods: It is nevertheless a challenging task to make such a system fast and scalable enough to handle real-world demands while also providing data security. In this paper, we propose a novel, secure mechanism to support secure count queries on an open source graph database (Neo4j) and evaluate its performance on a real-world dataset of 735,317 Single Nucleotide Polymorphisms (SNPs). In particular, we propose a new tree indexing method whose query time is proportional only to the tree depth, addressing the bottleneck of existing approaches. Results: The proposed method significantly improves the runtime of query execution compared to existing techniques. It takes less than one minute to execute an arbitrary count query on a dataset of 212 GB, while the best-known algorithm takes around 7 minutes. Conclusions: The outlined framework and experimental results show the applicability of a graph database for securely storing large-scale genome data in an untrusted environment. Furthermore, the underlying cryptosystem and security assumptions are well suited to such use cases and can be generalized in future work.
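The idea of an index whose count-query cost grows only with tree depth can be illustrated, in plaintext and ignoring the encryption layer, with a counting prefix tree over genotype strings; the genotype encoding below is an assumption for illustration:

```python
class CountTrie:
    # Prefix tree over genotype strings. Every node caches how many records
    # pass through it, so a count query walks a single root-to-node path:
    # cost proportional to the query (tree) depth, independent of data size.
    def __init__(self):
        self.count = 0
        self.children = {}

    def insert(self, seq):
        node = self
        node.count += 1
        for sym in seq:
            node = node.children.setdefault(sym, CountTrie())
            node.count += 1

    def count_prefix(self, prefix):
        node = self
        for sym in prefix:
            if sym not in node.children:
                return 0
            node = node.children[sym]
        return node.count

index = CountTrie()
for record in ["AAG", "AAT", "AGG"]:   # toy SNP genotype strings
    index.insert(record)
```

In a secure deployment the node labels and counts would be encrypted, but the depth-bounded traversal, which is the source of the speed-up reported above, is the same.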
  • Transfer Learning for Classification of Cardiovascular Tissues in
           Histological Images
    • Abstract: Publication date: Available online 16 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Claudia Mazo, Jose Bernal, Maria Trujillo, Enrique Alegre. Background and Objective: Automatic classification of healthy tissues and organs based on histology images is an open problem, mainly due to the lack of automated tools. Solutions in this regard have potential in educational medicine and medical practice. Some preliminary advances have been made using image processing techniques and classical supervised learning. Motivated by the breakthrough performance of deep learning in various areas, we present an approach to automatically recognise and classify fundamental tissues and organs using Convolutional Neural Networks (CNN). Methods: We adapt four popular CNN architectures – ResNet, VGG19, VGG16 and Inception – to this problem through transfer learning. The resulting models are evaluated in three stages. Firstly, all the transferred networks are compared to each other. Secondly, the best fine-tuned model is compared to an ad-hoc 2D multi-path model to outline the importance of transfer learning. Thirdly, the same model is evaluated against the state-of-the-art method, a cascade SVM using LBP-based descriptors, to contrast a traditional machine learning approach with a representation learning one. The evaluation task consists of accurately separating six classes: smooth muscle of the elastic artery, smooth muscle of the large vein, smooth muscle of the muscular artery, cardiac muscle, loose connective tissue, and light regions. The different networks are tuned on 6000 blocks of 100 × 100 pixels and tested on 7500. Results: Our proposal yields F-score values between 0.717 and 0.928. The highest and lowest performances are for cardiac muscle and smooth muscle of the large vein, respectively. The main issue limiting classification scores for the latter class is its similarity to the elastic artery; however, this confusion arises during manual annotation as well. Our algorithm achieved F-score improvements between 0.080 and 0.220 compared to the state-of-the-art machine learning approach. Conclusions: We conclude that it is possible to classify healthy cardiovascular tissues and organs automatically using CNNs and that deep learning holds great promise for improving tissue and organ classification. We have made our training and test sets, models, and source code publicly available to the research community.
  • Automated Ontology Generation Framework Powered by Linked Biomedical
           Ontologies for Disease-Drug Domain
    • Abstract: Publication date: Available online 16 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Mazen Alobaidi, Khalid Mahmood Malik, Maqbool Hussain. Objective and Background: The exponential growth of the unstructured data available in biomedical literature and Electronic Health Records (EHR) requires powerful novel technologies and architectures to unlock the information hidden in that data. The success of smart healthcare applications such as clinical decision support systems, disease diagnosis systems, and healthcare management systems depends on knowledge that machines can understand, interpret, and use to infer new knowledge. In this regard, ontological data models are expected to play a vital role in organizing and integrating the knowledge implicit in unstructured data, making informative inferences with it, and representing the resultant knowledge in a form that machines can understand. However, constructing such models is challenging because it demands intensive labor, domain experts, and ontology engineers. These requirements impose a limit on the scale or scope of ontological data models. We present a framework that mitigates the time-intensity of building ontologies and achieves machine interoperability. Methods: Empowered by linked biomedical ontologies, our proposed Automated Ontology Generation Framework consists of five major modules: (a) text processing using a compute-on-demand approach; (b) medical semantic annotation using N-grams, ontology linking, and classification algorithms; (c) relation extraction using graph methods and syntactic patterns; (d) semantic enrichment using RDF mining; and (e) a domain inference engine to build the formal ontology. Results: Quantitative evaluations show 84.78% recall, 53.35% precision, and 67.70% F-measure for disease-drug concept identification; 85.51% recall, 69.61% precision, and 76.74% F-measure for taxonomic relation extraction; and 77.20% recall, 40.10% precision, and 52.78% F-measure for biomedical non-taxonomic relation extraction. Conclusion: We present an automated ontology generation framework that is empowered by linked biomedical ontologies. This framework integrates various natural language processing, semantic enrichment, syntactic pattern, and graph-algorithm-based techniques. Moreover, it shows that using linked biomedical ontologies offers a promising solution to the problem of automating disease-drug ontology generation.
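The medical semantic annotation module can be sketched as n-gram lookup against a concept lexicon; the tiny lexicon below is a stand-in for linking against real biomedical ontologies:

```python
def ngrams(tokens, n):
    # All contiguous word n-grams of a token list.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def annotate(text, lexicon, max_n=3):
    # Longest-match-first concept spotting: try long phrases before short
    # ones and skip terms already covered by a longer match.
    tokens = text.lower().split()
    found = []
    for n in range(max_n, 0, -1):
        for gram in ngrams(tokens, n):
            if gram in lexicon and not any(gram in f for f in found):
                found.append(gram)
    return found

lexicon = {"type 2 diabetes", "diabetes", "metformin"}   # hypothetical concepts
hits = annotate("Metformin is first-line therapy for type 2 diabetes", lexicon)
```

Matching longer phrases first keeps "type 2 diabetes" from degrading into the less specific concept "diabetes", which matters for building precise disease-drug relations downstream.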
  • Global optimal constrained ICA and its application in extraction of
           movement related cortical potentials from single-trial EEG signals
    • Abstract: Publication date: Available online 11 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Elnaz Eilbeigi, Seyed Kamaledin Setarehdan. Background and Objective: Constrained ICA (cICA) is a recent approach that can extract a desired source signal by using prior information. cICA employs gradient-based algorithms to optimize non-convex objective functions, and therefore a globally optimal solution is not guaranteed. In this study, we propose the globally optimal constrained ICA (GocICA) algorithm to address this problem of conventional cICA. Given the importance of movement related cortical potentials (MRCPs) for neurorehabilitation and for developing a suitable mechanism for the detection of movement intention, single-trial MRCP extraction is presented as an application of GocICA. Methods: To evaluate the performance of the proposed technique, two kinds of datasets, simulated and real EEG data, were utilized. GocICA was implemented with the most popular meta-heuristic optimization algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Charged System Search (CSS), and the results were compared with those of conventional cICA and two ICA-based methods (JADE and Infomax). Results: GocICA enhanced the MRCP extracted from multi-channel EEG better than both conventional cICA and the ICA-based methods, and also outperformed them in single-trial MRCP detection, with higher true positive rates (TPRs) and lower false positive rates (FPRs). Moreover, CSS-cICA yielded the greatest TPR (91.2232 ± 3.4708) and the lowest FPR (8.7465 ± 3.7705) for single-trial MRCP detection from real EEG data, as well as the greatest signal-to-noise ratio (SNR) (39.2818) and the lowest mean square error (MSE) and individual performance index (IPI) (41.8230 and 0.0012, respectively) for single-trial MRCP extraction from simulated EEG data. Conclusions: These results confirm the superiority of GocICA over conventional cICA, owing to the ability of meta-heuristic optimization algorithms to escape from local optima. As such, GocICA is a promising new algorithm for single-trial MRCP detection that can also be used for detecting other types of event-related cortical potentials (ERPs), such as the P300, and for EEG artifact removal.
  • Prediction of Paroxysmal Atrial Fibrillation: A Machine Learning Based
           Approach Using Combined Feature Vector and Mixture of Expert
           Classification on HRV Signal
    • Abstract: Publication date: Available online 10 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Elias Ebrahimzadeh, Maede Kalantari, Mohammadamin Joulani, Reza Shahrokhi Shahraki, Farahnaz Fayaz, Fereshteh Ahmadi. Background and Objective: Paroxysmal Atrial Fibrillation (PAF) is one of the most common major cardiac arrhythmias. Unless treated in time, PAF might transform into permanent Atrial Fibrillation, leading to a high rate of morbidity and mortality. Therefore, increasing attention has been directed towards the prediction of PAF, to enable early detection and prevent further progression of the disease. Notwithstanding the pharmacological and electrical treatments, a validated method to predict the onset of PAF is yet to be developed. We aim to address this issue by integrating classical and modern methods. Methods: To increase predictivity, we use a combination of features extracted through linear, time-frequency, and nonlinear analyses performed on heart rate variability. We then apply a local feature-selection approach, developed in our previous works, to reduce the dimensionality of the feature space. Subsequently, Mixture of Experts classification is employed to ensure precise decision-making on the output of the different processes. In the current study, we analyzed 106 signals from 53 pairs of ECG recordings obtained from the standard Atrial Fibrillation Prediction Database (AFPDB). Each pair of data contains one 30-min ECG segment that ends just before the onset of a PAF event and another 30-min ECG segment at least 45 minutes distant from the onset. Results: Combining the features extracted using both classical and modern analyses was found to be significantly more effective in predicting the onset of PAF than using either analysis independently. Also, the Mixture of Experts classification yielded more precise class discrimination than other well-known classifiers. The performance of the proposed method, evaluated on the AFPDB, showed a sensitivity, specificity, and accuracy of 100%, 95.55%, and 98.21%, respectively. Conclusion: Prediction of PAF is a matter of clinical and theoretical importance. We demonstrated that utilising an optimized combination of linear, time-frequency, and nonlinear features, rather than being restricted to any one of them, along with applying the Mixture of Experts, contributes greatly to an early detection of PAF; thus, the proposed method is shown to be superior to those mentioned in similar studies in the literature.
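Of the HRV analyses mentioned, the linear time-domain features are the simplest to reproduce. Below is a generic sketch over RR intervals in milliseconds (these are standard HRV statistics, not the authors' full feature set):

```python
import numpy as np

def hrv_time_features(rr_ms):
    # Standard time-domain HRV statistics from an RR-interval series (ms).
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                         # overall variability
        "rmssd": float(np.sqrt((d ** 2).mean())),       # beat-to-beat variability
        "pnn50": float((np.abs(d) > 50).mean() * 100),  # % successive diffs > 50 ms
    }

feats = hrv_time_features([800, 810, 790, 800])   # toy RR series
```

In a PAF-prediction pipeline these values would be computed per 30-min segment and concatenated with the time-frequency and nonlinear features before classification.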
  • A Virtual Patient Model for Mechanical Ventilation
    • Abstract: Publication date: Available online 10 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): S.E. Morton, J. Dickson, J.G. Chase, P. Docherty, T. Desaive, S.L. Howe, G.M. Shaw, M. Tawhai. Background and Objectives: Mechanical ventilation (MV) is a primary therapy for patients with acute respiratory failure. However, poorly selected ventilator settings can cause further lung damage due to the heterogeneity of healthy and damaged alveoli. Varying positive end-expiratory pressure (PEEP) to a point of minimum elastance is a lung-protective ventilation strategy. However, even low levels of PEEP can lead to ventilator-induced lung injury in individuals with highly inflamed pulmonary tissue. Hence, models that can accurately predict peak inspiratory pressures after changes to PEEP could improve clinician confidence in attempting potentially beneficial treatment strategies. Methods: This study develops and validates a physiologically relevant respiratory model that captures elastance and resistance via basis functions within a well-validated single-compartment lung model. The model can be personalised using information available at a low PEEP to predict lung mechanics at a higher PEEP. Proof-of-concept validation is undertaken with data from four patients and eight recruitment manoeuvre arms. Results: Results show low error when predicting upwards over the clinically relevant pressure range, with the model able to predict peak inspiratory pressure with less than 10% error over 90% of the range of PEEP changes up to 12 cmH2O. Conclusions: The results provide an in-silico, model-based means of predicting clinically relevant responses to changes in MV therapy, which is the foundation of a first virtual patient for MV.
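The single-compartment model underlying this work is the standard equation of motion, Paw = E·V + R·Q + PEEP. A minimal sketch of identifying E and R at a low PEEP and predicting pressure at a higher PEEP is below; the paper's basis-function refinement of E and R is not reproduced, and the numbers are illustrative:

```python
import numpy as np

def airway_pressure(E, R, V, Q, peep):
    # Single-compartment equation of motion: Paw = E*V + R*Q + PEEP.
    return E * V + R * Q + peep

def identify_E_R(paw, volume, flow, peep):
    # Least-squares fit of elastance E and resistance R from one breath.
    A = np.column_stack([volume, flow])
    E, R = np.linalg.lstsq(A, np.asarray(paw) - peep, rcond=None)[0]
    return E, R

# Synthetic breath at PEEP 5 cmH2O with "true" E = 25, R = 10 (illustrative).
V = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # volume, L
Q = np.array([0.6, 0.5, 0.4, 0.3, 0.2])   # flow, L/s
paw = airway_pressure(25.0, 10.0, V, Q, 5.0)
E_hat, R_hat = identify_E_R(paw, V, Q, 5.0)
peak_at_peep12 = airway_pressure(E_hat, R_hat, V[-1], Q[-1], 12.0)
```

This is the sense in which the model is "personalised at a low PEEP": E and R are identified from measured breaths, then reused to forecast pressures under a proposed PEEP change.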
  • Fast Unsupervised Nuclear Segmentation and Classification Scheme for
           Automatic Allred Cancer Scoring in Immunohistochemical Breast Tissue
    • Abstract: Publication date: Available online 10 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Aymen Mouelhi, Hana Rmili, Jaouher Ben Ali, Mounir Sayadi, Raoudha Doghri, Karima Mrad. Background and Objective: This paper presents an improved scheme able to perform accurate segmentation and classification of cancer nuclei in immunohistochemical (IHC) breast tissue images in order to provide a quantitative evaluation of estrogen or progesterone (ER/PR) receptor status that will assist pathologists in the cancer diagnostic process. Methods: The proposed segmentation method is based on adaptive local thresholding and an enhanced morphological procedure, which are applied to extract all stained nuclei regions and to split overlapping nuclei. A new segmentation approach is presented for cell nuclei detection in the IHC image, using a modified Laplacian filter and an improved watershed algorithm. Stromal cells are then removed from the segmented image using an adaptive criterion in order to achieve fast tumor nuclei recognition. Finally, unsupervised classification of cancer nuclei is obtained by combining four common color separation techniques for subsequent Allred cancer scoring. Results: Experimental results on various IHC tissue images from different cancer-affected patients demonstrate the effectiveness of the proposed scheme when compared to the manual scoring of pathology experts. A statistical analysis was performed over the whole image database between the immuno-scores of the manual and automatic methods, and compared with the scores reached using other state-of-the-art segmentation and classification strategies. According to the performance evaluation, we recorded more than 98% accuracy for both nucleus detection and image cancer scoring against the ground truths provided by experienced pathologists, with the best correlation with the experts' scores (Pearson's correlation coefficient = 0.993, p-value < 0.005) and the lowest total computation time of 72.3 s/image (±1.9) compared to recently studied methods. Conclusions: The proposed scheme can easily be applied in any histopathological diagnostic process that needs stained-nucleus quantification and cancer grading. Moreover, the reduced processing time and manual interaction of our procedure can facilitate its implementation in a real-time device to construct a fully online evaluation system for IHC tissue images.
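The first stage of the pipeline, adaptive local thresholding, can be sketched with a plain mean-filter rule. Real IHC pipelines operate on colour-deconvolved stain channels, which this toy grayscale version omits:

```python
import numpy as np

def adaptive_threshold(img, block=3, offset=0.0):
    # Local-mean thresholding: a pixel is foreground when it exceeds the
    # mean of its (block x block) neighbourhood by `offset`.
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    local_mean = np.zeros((h, w))
    for dy in range(block):
        for dx in range(block):
            local_mean += padded[dy:dy + h, dx:dx + w]
    local_mean /= block * block
    return img > local_mean + offset

img = np.zeros((5, 5)); img[2, 2] = 10.0   # one bright "nucleus" pixel
mask = adaptive_threshold(img)
```

Comparing each pixel to its local neighbourhood, rather than to one global threshold, is what makes the method robust to the uneven staining intensity typical of IHC slides.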
  • The region of interest localization for glaucoma analysis from retinal
           fundus image using deep learning
    • Abstract: Publication date: Available online 8 August 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Anirban Mitra, Priya Shankar Banerjee, Sudipta Roy, Somasis Roy, Sanjit Kumar Setua. Background and objectives: Retinal fundus image analysis without manual intervention has emerged as an important analytical approach for early detection of eye diseases such as glaucoma and diabetic retinopathy. For the analysis and detection of glaucoma and other diseases from retinal images, predicting the bounding-box coordinates of the optic disc (OD), which serves as the region of interest (ROI), plays a significant role. Methods: We reframe ROI detection as a single regression problem, mapping image pixel values directly to ROI coordinates and class probabilities. A convolutional neural network (CNN) is trained on full images to predict bounding boxes along with their class probabilities and confidence scores. The publicly available MESSIDOR and Kaggle datasets were used to train the network, and various data augmentation techniques were adopted to enlarge the dataset and make the network less sensitive to noise. At a high level, every image is divided into a 13 × 13 grid, and every grid cell predicts 5 bounding boxes together with the corresponding class probability and a confidence score. Before training, the bounding-box priors (anchors) are initialized by k-means clustering over the ground-truth bounding boxes of the original dataset, using a distance metric based on Intersection over Union (IoU). During training, a sum-squared error loss function is used. Finally, non-maximum suppression is applied to reach the final prediction. Results: The proposed method achieves an accuracy of 99.05% and 98.78% for ROI detection on the Kaggle and MESSIDOR test sets, respectively. The network detects the ROI in fundus images in 0.0045 s at 25 ms of latency, considerably faster than recent approaches, and uses no handcrafted features. Conclusions: The network predicts accurate results even on low-quality images, without being biased towards any particular type of image, and learns a more generalized representation than previous work in the field. These results suggest the method can make diagnosis of eye diseases faster and more reliable.
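The anchor initialization and post-processing described above hinge on two standard ingredients, Intersection over Union and non-maximum suppression. A minimal sketch of both follows; the corner-based box format and the 0.5 threshold are illustrative assumptions, not the authors' exact implementation:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    discard boxes overlapping it beyond the threshold, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

Run on two heavily overlapping boxes and one distant box, `nms` keeps only the better of the overlapping pair plus the distant box.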
  • Complex-Valued Unsupervised Convolutional Neural Networks for Sleep Stage Classification
    • Abstract: Publication date: Available online 26 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Junming Zhang, Yan Wu. Background and objective: Despite the numerous deep learning methods developed for automatic sleep stage classification, almost all of them need labeled data. However, labeling is a subjective process, so labels will differ between two experts; it is also time-consuming, as even an experienced expert requires hours to annotate sleep stage patterns. More importantly, with the development of wearable sleep devices, it is very difficult to obtain labeled sleep data at all. An unsupervised training algorithm is therefore very important for sleep stage classification. Hence, a new sleep stage classification method named complex-valued unsupervised convolutional neural networks (CUCNN) is proposed in this study. Methods: The CUCNN operates with complex-valued inputs, outputs, and weights, and its training strategy is greedy layer-wise training. It is composed of three phases: phase encoding, unsupervised training, and complex-valued classification. The phase encoder translates real-valued inputs into complex numbers. In the unsupervised training phase, complex-valued k-means is used to learn the filters used in the convolutions. Results: The classification performance of handcrafted features is compared with that of features learned via the CUCNN. The total accuracy (TAC) and kappa coefficient of sleep stage classification on the UCD dataset are 87% and 0.8, respectively. Moreover, comparison experiments indicate that the TACs of the CUCNN on the UCD and MIT-BIH datasets outperform those of unsupervised convolutional neural networks (UCNN) by 12.9% and 13%, respectively. Additionally, the CUCNN converges much faster than the UCNN in most cases. Conclusions: The proposed method is fully automated and can learn features in an unsupervised fashion. Results show that unsupervised training and automatic feature extraction on sleep data are possible, which is very important for home sleep monitoring.
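The phase-encoding step can be illustrated with a toy mapping from normalized amplitude to a point on the unit half-circle. The exact encoding used by the CUCNN is not given in the abstract, so the form below is only an assumption for illustration:

```python
import cmath

def phase_encode(samples):
    """Encode real-valued samples as unit-magnitude complex numbers,
    carrying each sample's normalized amplitude in the phase (0 to pi)."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero for constant input
    return [cmath.exp(1j * cmath.pi * (s - lo) / span) for s in samples]
```

The minimum maps to phase 0 (the point 1 + 0j), the maximum to phase pi (the point -1 + 0j), and every encoded value has magnitude 1, which is what allows the downstream network to work purely with phases.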
  • Computer-Aided Diagnosis of Glaucoma Using Fundus Images: A Review
    • Abstract: Publication date: Available online 26 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Yuki Hagiwara, Joel En Wei Koh, Jen Hong Tan, Sulatha V Bhandary, Augustinus Laude, Edward J Ciaccio, Louis Tong, U Rajendra Acharya. Background and objectives: Glaucoma is an eye condition which leads to permanent blindness when the disease progresses to an advanced stage. It occurs due to inappropriate intraocular pressure within the eye, resulting in damage to the optic nerve. Glaucoma does not exhibit symptoms in its early stage, so early diagnosis is important to prevent blindness. Fundus photography is widely used by ophthalmologists to assist in the diagnosis of glaucoma and is cost-effective. Methods: The morphological features of the optic disc that are characteristic of glaucoma are clearly visible in fundus images. However, manual inspection of the acquired fundus images is prone to inter-observer variation. Therefore, a computer-aided detection (CAD) system is proposed to make an accurate, reliable and fast diagnosis of glaucoma based on the optic nerve features of fundus imaging. In this paper, we review existing techniques to automatically diagnose glaucoma. Results: The use of CAD is very effective in the diagnosis of glaucoma and can significantly alleviate clinicians' workload. We also discuss the advantages of employing state-of-the-art techniques, including deep learning (DL), when developing the automated system; DL methods are effective in glaucoma diagnosis. Conclusions: Novel DL algorithms with big data availability are required to develop a reliable CAD system. Such techniques can also be employed to diagnose other eye diseases accurately.
  • A lightweight rapid application development framework for biomedical image analysis
    • Abstract: Publication date: Available online 26 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Shekhar S. Chandra, Jason A. Dowling, Craig Engstrom, Ying Xia, Anthony Paproki, Aleš Neubert, David Rivest-Hénault, Olivier Salvado, Stuart Crozier, Jurgen Fripp. Biomedical imaging analysis typically comprises a variety of complex tasks requiring sophisticated algorithms and the visualisation of high-dimensional data. The successful integration and deployment of the enabling software to clinical (research) partners, for rigorous evaluation and testing, is a crucial step in facilitating the adoption of research innovations within medical settings. In this paper, we introduce the Simple Medical Imaging Library Interface (SMILI), an object-oriented open-source framework with a compact suite of objects geared for rapid cross-platform biomedical imaging application development and deployment. SMILI supports the development of both command-line (shell and Python scripting) and graphical applications utilising the same set of processing algorithms. It provides a substantial subset of the features of more complex packages, yet it is small enough to ship with clinical applications with limited overhead and has a license suitable for commercial use. After describing where SMILI fits within the existing biomedical imaging software ecosystem, by comparing it to other state-of-the-art offerings, we demonstrate its capabilities in creating a clinical application for manual measurement of cam-type lesions of the femoral head-neck region for the investigation of femoro-acetabular impingement (FAI) from three-dimensional (3D) magnetic resonance (MR) images of the hip. This application proved convenient for radiological analyses and yielded high intra-observer (ICC = 0.97) and inter-observer (ICC = 0.95) reliabilities for measurement of α-angles of the femoral head-neck region.
We believe that SMILI is particularly well suited for prototyping biomedical imaging applications requiring user interaction and/or visualisation of 3D mesh, scalar, vector or tensor data.
  • A topological approach to delineation and arrhythmic beats detection in
           unprocessed long-term ECG signals
    • Abstract: Publication date: Available online 21 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Jana Faganeli Pucer, Matjaž Kukar. Background and objective: Arrhythmias are one of the most common symptoms of cardiac failure. They are usually diagnosed from ECG recordings, particularly long ambulatory recordings (AECG). These recordings are tedious for humans to interpret due to their extent (up to 48 hours) and the relative scarcity of arrhythmia events, which makes automated systems for detecting various AECG anomalies indispensable. In this work we present a novel procedure based on topological principles (Morse theory) for detecting arrhythmic beats in AECG. It works in near real-time (delayed by a 14-second window) and can be applied to raw (unprocessed) ECG signals. Methods: The procedure is based on a subject-specific adaptation of one-dimensional discrete Morse theory (ADMT), which represents the signal as a sequence of its most important extrema. The ADMT algorithm is applied twice: first for low-amplitude, high-frequency noise removal, and then for detection of the characteristic waves of individual ECG beats. The waves are annotated using the ADMT algorithm and template matching. The annotated beats are then compared to adjacent beats with two measures of similarity: the distance between two beats, and the difference in shape between them. These two measures are used as inputs to a decision tree algorithm that classifies the beats as normal or abnormal. The classification performance is evaluated with leave-one-record-out cross-validation. Results: Our approach was tested on the MIT-BIH database, where it exhibited a classification accuracy of 92.73%, a sensitivity of 73.35%, a specificity of 96.70%, a positive predictive value of 88.01%, and a negative predictive value of 95.73%. Conclusions: Compared to related studies, our algorithm requires less preprocessing while retaining the capability to detect and classify beats in almost real-time. The algorithm exhibits a degree of accuracy in beat detection and classification that is at least comparable to state-of-the-art methods.
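The core idea of representing a signal by its most important extrema can be sketched with a simple cancellation loop over adjacent minimum/maximum pairs. This is an illustrative simplification of persistence-style extrema pruning, not the paper's subject-specific ADMT:

```python
def significant_extrema(signal, delta):
    """Reduce a 1-D signal to its alternating local extrema, then
    repeatedly cancel the adjacent extrema pair with the smallest
    amplitude difference until every remaining pair differs by at
    least `delta`. Returns the indices of the surviving extrema."""
    ext = [0] + [i for i in range(1, len(signal) - 1)
                 if (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0]
    ext.append(len(signal) - 1)
    while len(ext) > 2:
        diffs = [abs(signal[ext[k + 1]] - signal[ext[k]])
                 for k in range(len(ext) - 1)]
        k = min(range(len(diffs)), key=diffs.__getitem__)
        if diffs[k] >= delta:
            break
        # cancel the pair, but never drop the two endpoint samples
        drop = [j for j in (k, k + 1) if 0 < ext[j] < len(signal) - 1]
        if not drop:
            break
        for j in reversed(drop):
            del ext[j]
    return ext
```

On the toy signal `[0, 2, 1, 3, 0]` with `delta = 1.5`, the small 2-to-1 wiggle is cancelled and only the endpoints and the dominant peak at index 3 survive, which is exactly the noise-removal behaviour the abstract describes.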
  • Analysis of PCG Signals using Quality Assessment and Homomorphic Filters
           for Localization and Classification of Heart Sounds
    • Abstract: Publication date: Available online 21 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Qurat-ul-Ain Mubarak, Muhammad Usman Akram, Arslan Shaukat, Farhan Hussain, Sajid Gul Khawaja, Wasi Haider Butt. Background and objective: Accurate localization of heart beats in the phonocardiogram (PCG) signal is crucial for correct segmentation and classification of heart sounds into S1 and S2. This task becomes challenging due to noise introduced in the acquisition process by a number of different factors. In this paper we propose a system for heart sound localization and classification into S1 and S2. The proposed system introduces the concept of quality assessment before localization, feature extraction and classification of heart sounds. Methods: Signal quality is assessed by predefined criteria based on the number of peaks and zero crossings of the PCG signal. Once quality assessment is performed, heart beats within the PCG signal are localized by extracting an envelope with the homomorphic envelogram and finding prominent peaks. To classify localized peaks into S1 and S2, temporal and time-frequency based statistical features are used. A support vector machine with a radial basis function kernel classifies the heart beats into S1 and S2 based on the extracted features. The performance of the proposed system is evaluated using accuracy, sensitivity, specificity, F-measure and total error, on the dataset provided by the PASCAL heart sound classification challenge. Results: Performance is significantly improved by quality assessment. Results show that the proposed localization algorithm achieves accuracy of up to 97% and generates the smallest total average error among the top three challenge participants. The classification algorithm achieves accuracy of up to 91%. Conclusion: The system provides a firm foundation for the detection of normal and abnormal heart sounds for cardiovascular disease detection.
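The homomorphic envelope idea above (compress, smooth, expand) can be sketched compactly, with a moving average standing in for the low-pass filter; the window length here is an illustrative choice, not the paper's parameter:

```python
import math

def homomorphic_envelope(x, win=5):
    """Homomorphic envelope sketch: log-compress the magnitude, smooth
    with a moving average (a stand-in for the low-pass filter), then
    exponentiate back to the amplitude domain."""
    logmag = [math.log(abs(s) + 1e-12) for s in x]  # epsilon avoids log(0)
    half = win // 2
    smooth = []
    for i in range(len(logmag)):
        seg = logmag[max(0, i - half): i + half + 1]
        smooth.append(sum(seg) / len(seg))
    return [math.exp(v) for v in smooth]
```

Because smoothing happens in the log domain, an isolated spike is attenuated geometrically rather than arithmetically, which is what makes the envelope robust to the sharp transients of heart sounds.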
  • Deep neural models for extracting entities and relationships in the new
           RDD corpus relating disabilities and rare diseases
    • Abstract: Publication date: Available online 20 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Hermenegildo Fabregat, Lourdes Araujo, Juan Martinez-Romo. Background and objective: There are a great many rare diseases, many of which are associated with significant disabilities. It is paramount to know the likely evolution of a disease in advance, in order to limit and prevent the appearance of disabilities and to prepare the patient to manage future difficulties. Rare disease associations are making an effort to collect this information manually, but it is a long process. Much information about the consequences of rare diseases is published in scientific papers, and our goal is to automatically extract the disabilities associated with diseases from them. Methods: This work presents a new corpus of abstracts from scientific papers related to rare diseases, manually annotated with disabilities. The corpus allows training machine learning and deep learning systems that can automatically process other papers, extracting new information about the relations between rare diseases and disabilities. The corpus is also annotated with negation and speculation where they affect disabilities, and has been made publicly accessible. Results: We have devised several experiments using deep learning techniques to show the usefulness of the developed corpus. Specifically, we designed a long short-term memory based architecture for disability identification, as well as a convolutional neural network for detecting disabilities' relationships to diseases. The systems need no preprocessing of the data, only low-dimensional vectors representing the words. Conclusions: The developed corpus will allow training systems to identify disabilities in biomedical documents, which current annotation systems are not able to detect. Systems can also be trained to detect relationships between disabilities and diseases, as well as negation and speculation, which can change the meaning of the language. The deep learning models designed for identifying disabilities and their relationships to diseases in new documents show that the corpus allows obtaining an F-measure of around 81% for disability recognition and 75% for relation extraction.
  • Adrenal Tumor Segmentation Method for MR Images
    • Abstract: Publication date: Available online 18 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Mücahid Barstuğan, Rahime Ceylan, Semih Asoglu, Hakan Cebeci, Mustafa Koplay. Background and objective: Adrenal tumors, which occur on the adrenal glands, are usually detected incidentally. The liver, spleen, spinal cord, and kidneys surround the adrenal glands, so tumors on the adrenal glands can be adherent to other organs; this is a problem in adrenal tumor segmentation. In addition, low contrast, non-standardized shape and size, and the homogeneity or heterogeneity of the tumors complicate segmentation. Methods: This study proposes a computer-aided diagnosis (CAD) system that segments adrenal tumors while addressing the above problems. The proposed hybrid method incorporates several image processing techniques: active contours, adaptive thresholding, contrast limited adaptive histogram equalization (CLAHE), image erosion, and region growing. Results: The performance of the proposed method was assessed on 113 magnetic resonance (MR) images using seven metrics: sensitivity, specificity, accuracy, precision, Dice coefficient, Jaccard rate, and structural similarity index (SSIM), with success rates of 74.84%, 99.99%, 99.84%, 93.49%, 82.09%, 71.24%, and 99.48%, respectively. Conclusions: This study presents a new method for adrenal tumor segmentation that avoids some of the problems preventing accurate segmentation, especially for cyst-based tumors.
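Two of the overlap metrics reported above, the Dice coefficient and the Jaccard rate, are easy to state explicitly for binary segmentation masks (flattened here to 0/1 lists for brevity):

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for two binary masks given as
    flat lists of 0/1 values of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both without them carrying independent information.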
  • Keyframe extraction from laparoscopic videos based on visual saliency
    • Abstract: Publication date: Available online 18 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Constantinos Loukas, Christos Varytimidis, Konstantinos Rapantzikos, Meletios A. Kanakis. Background and objective: Laparoscopic surgery offers the potential for video recording of the operation, which is important for technique evaluation, cognitive training, patient briefing and documentation. An effective way to represent video content is to extract a limited number of keyframes with semantic information. In this paper we present a novel method for keyframe extraction from individual shots of the operational video. Methods: The laparoscopic video was first segmented into video shots using an objectness model, trained to capture significant changes in the endoscope's field of view. Each frame of a shot was then decomposed into three saliency maps in order to model the preference of human vision for regions with higher differentiation with respect to color, motion and texture. The accumulated responses from each map provided a 3D time series of saliency variation across the shot. The time series was modeled as a multivariate autoregressive process with hidden Markov states (HMMAR model). This approach allowed the temporal segmentation of the shot into a predefined number of states, and a representative keyframe was extracted from each state based on the highest state-conditional probability of the corresponding saliency vector. Results: Our method was tested on 168 video shots extracted from various laparoscopic cholecystectomy operations in the publicly available Cholec80 dataset. Four state-of-the-art methodologies were used for comparison, with two assessment metrics: Color Consistency Score (CCS), which measures the color distance between the ground truth (GT) and the closest keyframe, and Temporal Consistency Score (TCS), which considers the temporal proximity between GT and extracted keyframes. About 81% of the extracted keyframes matched the color content of the GT keyframes, compared to 77% for the second-best method; the TCS of the proposed and the second-best method was close to 1.9 and 1.4, respectively. Conclusions: Our results demonstrate that the proposed method yields superior content and temporal consistency with respect to the ground truth. The extracted keyframes provide highly semantic information that may be used for various applications related to surgical video content representation, such as workflow analysis, video summarization and retrieval.
  • A Secure Biometrics-Based Authentication Key Exchange Protocol for
           Multi-Server TMIS using ECC
    • Abstract: Publication date: Available online 18 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Mingping Qi, Jianhua Chen, Yitao Chen. Background and objectives: Telecare Medicine Information Systems (TMIS) enable physicians to efficiently and conveniently make diagnoses and provide medical treatment for patients over the insecure public Internet. To ensure patients securely access medical services, many authentication schemes have been proposed. Although numerous cryptographic authentication schemes for TMIS aim to ensure data security, user privacy and authentication, various forms of attack make these schemes impractical. Methods: To design a truly secure and practical authentication scheme for TMIS, this work presents a new biometrics-based authenticated key exchange protocol for multi-server TMIS that does not share the system private key with the distributed servers. Results: The proposed protocol provides strong security features, including mutual authentication, user anonymity and perfect forward secrecy, and resists various well-known attacks; these security features are confirmed by BAN logic and heuristic cryptanalysis, respectively. Conclusions: A secure biometrics-based authenticated key exchange protocol for multi-server TMIS is presented, which provides perfect forward secrecy, supports user anonymity, and withstands attacks such as impersonation and off-line password guessing. Since security is the most important factor for an authentication scheme, our scheme is well suited for multi-server TMIS.
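The key-exchange principle underlying such protocols can be illustrated with a toy finite-field Diffie-Hellman round. The paper's protocol works in an elliptic-curve group and adds biometric authentication, neither of which this sketch attempts; the small prime is purely illustrative and offers no real security:

```python
import secrets

def dh_keypair(p, g):
    """One party's half of a toy Diffie-Hellman exchange mod prime p
    with generator g: pick a secret exponent, publish g^secret mod p."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

def dh_shared(priv, other_pub, p):
    """Combine our secret with the other party's public value."""
    return pow(other_pub, priv, p)
```

Both parties arrive at the same shared secret because (g^a)^b = (g^b)^a mod p; an ECC protocol replaces modular exponentiation with scalar multiplication on a curve, gaining equivalent security from much shorter keys.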
  • Mammographic mass segmentation using fuzzy contours
    • Abstract: Publication date: Available online 18 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Marwa HMIDA, Kamel HAMROUNI, Basel SOLAIMAN, Sana BOUSSETTA. Background and objective: Accurate mass segmentation in mammographic images is a critical requirement for computer-aided diagnosis systems, since it allows accurate feature extraction and thus improves classification precision. Methods: In this paper, a novel automatic breast mass segmentation approach is presented. The approach consists of three main stages: contour initialization applied to a given region of interest; construction of fuzzy contours and estimation of fuzzy membership maps of the different classes in the considered image; and integration of these maps into the Chan-Vese model to obtain a fuzzy-energy based model used for the final delineation of the mass. Results: The proposed approach is evaluated using mass regions of interest extracted from the mini-MIAS database. The experimental results show that the proposed method achieves an average true positive rate of 91.12% with a precision of 88.08%. Conclusions: The achieved results show high accuracy in breast mass segmentation when compared to manually annotated ground truth and to other methods from the literature.
  • Cardiology record multi-label classification using Latent Dirichlet Allocation
    • Abstract: Publication date: Available online 17 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Jorge Pérez, Alicia Pérez, Arantza Casillas, Koldo Gojenola. Background and objective: Electronic health records (EHRs) convey vast and valuable knowledge about dynamically changing clinical practices, and clinical documentation entails the inspection of massive numbers of records across hospitals and hospital sections. The goal of this study is to provide an efficient framework that helps clinicians explore EHRs and obtain alternative views related to both patient segments and diseases, such as clustering and statistical information about the development of heart diseases (replacement of pacemakers, valve implantation, etc.) in co-occurrence with other diseases. The task is challenging, dealing with lengthy health records and a high number of classes in a multi-label setting. Methods: LDA is a statistical procedure that explains each document by a multinomial distribution over latent topics, and each topic by a distribution over related words. These distributions make it possible to represent collections of texts in a continuous space, enabling distance-based associations between documents and revealing the underlying topics. The topic models were assessed by means of four divergence metrics. In addition, we applied LDA to multi-label classification of EHRs according to the International Classification of Diseases, 10th Clinical Modification (ICD-10). Each EHR was assigned 7 codes on average, out of 970 different cardiology-related codes. Results: First, the discriminative ability of the topic models was assessed using dissimilarity metrics. Nevertheless, an open question remained regarding the interpretability of the automatically discovered topics; to address it, we explored the connection between the latent topics and ICD-10. EHRs were represented by means of LDA, and supervised classifiers were then inferred from those representations. Given the low-dimensional representation provided by LDA, the search was computationally efficient compared to symbolic approaches such as TF-IDF. The classifiers achieved an average AUC of 77.79. As a side contribution, with this work we released the software, implemented in Python and R, to both train and evaluate the models. Conclusion: Topic modeling offers a means of representing EHRs in a small-dimensional continuous space. This representation conveys relevant information as hidden topics in a comprehensive manner, and in practice this compact representation allowed extracting the ICD-10 codes associated with EHRs.
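The pipeline of "low-dimensional topic vectors feeding a supervised multi-label classifier" can be sketched with a toy nearest-centroid stand-in. The topic vectors below are hand-written placeholders for what an LDA model would produce, the ICD-10 codes are arbitrary examples, and the centroid-plus-cosine classifier is an illustrative simplification, not the paper's classifiers:

```python
import math

def fit_centroids(doc_topics, doc_labels, all_labels):
    """For each code, average the topic vectors of the documents
    that carry it (a one-vs-rest toy classifier)."""
    dim = len(doc_topics[0])
    cents = {}
    for lab in all_labels:
        vecs = [v for v, labs in zip(doc_topics, doc_labels) if lab in labs]
        cents[lab] = ([sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
                      if vecs else [0.0] * dim)
    return cents

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def predict(cents, vec, thresh=0.9):
    """Assign every code whose centroid is cosine-similar enough."""
    return sorted(lab for lab, c in cents.items() if cosine(c, vec) >= thresh)
```

Because the topic space has only a handful of dimensions, each prediction is a few dot products per code, which is the computational-efficiency point the abstract makes against TF-IDF-sized representations.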
  • Attentional bias in MDD: ERP components analysis and classification using
           a dot-probe task
    • Abstract: Publication date: Available online 17 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Xiaowei Li, Jianxiu Li, Bin Hu, Jing Zhu, Xuemin Zhang, Liuqing Wei, Ning Zhong, Mi Li, Zhijie Ding, Jing Yang, Lan Zhang. Background and objective: Several strands of evidence support the existence of negative attentional bias in patients with depression. This study aimed to assess the behavioral and electrophysiological signatures of attentional bias in major depressive disorder (MDD) and to explore whether ERP components contain valuable information for discriminating between MDD patients and healthy controls (HCs). Methods: Electroencephalography data were collected from 17 patients with MDD and 17 HCs in a dot-probe task, with emotional-neutral pairs as experimental materials. Fourteen features related to ERP waveform shape were generated. Correlated Feature Selection (CFS), ReliefF and GainRatio (GR) were then applied for feature selection, and k-nearest neighbor (KNN), C4.5, Sequential Minimal Optimization (SMO) and Logistic Regression (LR) were used to discriminate between MDD patients and HCs. Results: Behaviorally, MDD patients showed significantly shorter reaction time (RT) to valid than to invalid sad trials, with a significantly higher bias score for sad-neutral pairs. Analysis of split-half reliability in RT indices indicated strong reliability in RT, while the coefficients of RT bias scores neared zero. These behavioral effects were supported by the ERP results: MDD patients had higher P300 amplitude when the probe replaced a sad face than when it replaced a neutral face, indicating difficulty disengaging attention from negative emotional faces. Meanwhile, data mining analysis based on ERP components suggested that CFS was the best feature selection algorithm. In particular, for the P300 induced by valid sad trials, the classification accuracy of CFS combined with any classifier was above 85%, and the KNN (k=3) classifier achieved the highest accuracy (94%). Conclusions: MDD patients show difficulty disengaging attention from negative stimuli, as reflected by P300. CFS leads to good overall performance in most cases, especially when the KNN classifier is used for P300 component classification, suggesting that ERP components may serve as a tool for auxiliary diagnosis of depression.
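The best-performing classification step above, a k-nearest-neighbour vote with k = 3, is simple enough to state directly. The feature vectors below are toy values, not ERP waveform features:

```python
import math

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training points nearest to x
    (Euclidean distance), as in a KNN (k=3) classifier."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

With k = 3 and two classes, ties in the vote are impossible, which is one practical reason odd k values are preferred for binary problems like MDD vs. HC.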
  • A review of image analysis and machine learning techniques for automated
           cervical cancer screening from pap-smear images
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Wasswa William, Andrew Ware, Annabella Habinka Basaza-Ejiri, Johnes Obungoloch. Background and objective: Early diagnosis and classification of a cancer type can help facilitate the subsequent clinical management of the patient. Cervical cancer ranks as the fourth most prevalent cancer affecting women worldwide, and its early detection provides the opportunity to help save lives. To that end, automated diagnosis and classification of cervical cancer from pap-smear images has become a necessity, as it enables accurate, reliable and timely analysis of the condition's progress. This paper presents an overview of the state of the art as articulated in prominent recent publications focusing on automated detection of cervical cancer from pap-smear images. Methods: The survey reviews publications on applications of image analysis and machine learning to automated diagnosis and classification of cervical cancer from pap-smear images spanning 15 years. It covers 30 journal papers obtained electronically through four scientific databases (Google Scholar, Scopus, IEEE and Science Direct), searched using three sets of keywords: (1) segmentation, classification, cervical cancer; (2) medical imaging, machine learning, pap-smear; (3) automated system, classification, pap-smear. Results: Most of the existing algorithms achieve an accuracy of about 93.78% on an open pap-smear data set segmented using the CHAMP digital image software. K-nearest-neighbors and support vector machine algorithms have been reported to be excellent classifiers for cervical images, with accuracies of over 99.27% and 98.5% respectively when applied to a 2-class classification problem (normal or abnormal). Conclusion: The reviewed papers indicate that there are still weaknesses in the available techniques that result in low classification accuracy for some classes of cells. Moreover, most of the existing algorithms work on either single or multiple cervical smear images. Accuracy can be improved by varying parameters such as the features extracted, improving noise removal, and using hybrid segmentation and classification techniques such as multi-level classifiers. Combining the k-nearest-neighbors algorithm with other algorithms such as support vector machines, using pixel-level classification, and including statistical shape models can also improve performance. Further, most of the developed classifiers are tested on accurately segmented images produced with commercially available software such as CHAMP. There is thus a deficit of evidence that these algorithms will work in the clinical settings found in developing countries (where 85% of cervical cancer incidence occurs), which lack both sufficient trained cytologists and the funds to buy commercial segmentation software.
  • Predictive models for hospital readmission risk: A systematic review of
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Arkaitz Artetxe, Andoni Beristain, Manuel Graña. Objectives: Hospital readmission risk prediction facilitates the identification of patients potentially at high risk, so that resources can be used more efficiently in terms of cost-benefit. In this context, several models for readmission risk prediction have been proposed in recent years. The goal of this review is to give an overview of prediction models for hospital readmission, describe the data analysis methods and algorithms used for building the models, and synthesize their results. Methods: Studies that reported the predictive performance of a model for hospital readmission risk were included. We defined the scope of the review and accordingly built a search query to select the candidate papers. This query string was used as input to the chosen search engines, namely PubMed and Google Scholar. For each study, we recorded the population, feature selection method, classification algorithm, sample size, readmission threshold, readmission rate and predictive performance of the model. Results: We identified 77 studies that met the inclusion criteria, out of 265 citations. In 68% of the studies (n = 52), logistic regression or other regression techniques were used as the main method. Ten studies (13%) used survival analysis for model construction, while 14 (18%) used machine learning techniques for classification, of which decision tree-based methods and SVMs were the most used algorithms. Among these, only four studies reported the use of any technique for addressing class imbalance, of which resampling was the most frequent (75%). The performance of the models varied significantly among studies, with Area Under the ROC Curve (AUC) values ranging between 0.54 and 0.92. Conclusion: Logistic regression and survival analysis have traditionally been the most widely used techniques for model building. Machine learning techniques are nevertheless becoming increasingly popular, and recent comparative studies suggest that they can improve prediction ability over traditional statistical approaches. However, the lack of an appropriate benchmark dataset of hospital readmissions makes it difficult to compare model performance across studies.
  • Non-invasive assessment of liver quality in transplantation based on
           thermal imaging analysis
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Qing Lan, Hongyue Sun, John Robertson, Xinwei Deng, Ran Jin. Background and objective: Liver quality evaluation is one of the vital steps in predicting the success of liver transplantation. Current evaluation methods, such as biopsy and visual inspection, are either invasive or lack consistent standards, and provide limited predictive value for long-term transplant viability. Objective analytical models, based on real-time infrared images of livers during perfusion and preservation, are proposed as novel methods to precisely evaluate donated liver quality. Methods: In this study, using principal component analysis to extract infrared image features as predictors, we construct a multivariate logistic regression model for single-liver quality evaluation and a multi-task learning logistic regression model for cross-liver quality evaluation. Results: The single-liver quality predictions show testing errors of 0%. The leave-one-liver-out predictions show testing errors ranging from 9% to 36%. Conclusions: There is a strong correlation between the viability of livers and the infrared image features in both single-liver and cross-liver quality evaluations. These analytical methods also show that the selected significant infrared image features indicate regional differences in viability, and that more stringent pre-implantation evaluation may be needed to predict transplant outcomes.
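The feature-extraction step, projecting each infrared image onto leading principal components, can be sketched with a one-component power iteration. This is a minimal stand-in: the study presumably retains several components and feeds the resulting scores into the logistic regression models, which are omitted here:

```python
def first_pc(rows, iters=200):
    """First principal component of a list of feature rows via power
    iteration on the (uncentered-covariance) matrix X^T X, where X is
    the mean-centered data. Returns the unit component and the
    projection score of each row onto it."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0] * d  # fixed start vector; fine unless orthogonal to the PC
    for _ in range(iters):
        s = [sum(xi * vj for xi, vj in zip(x, v)) for x in X]      # X v
        w = [sum(s[i] * X[i][j] for i in range(n)) for j in range(d)]  # X^T X v
        norm = sum(t * t for t in w) ** 0.5
        v = [t / norm for t in w]
    scores = [sum((r[j] - mean[j]) * v[j] for j in range(d)) for r in rows]
    return v, scores
```

For points lying exactly on a line, the iteration converges in one step to the line's direction, and the scores are the signed distances along it; a real pipeline would keep the top few components as the regression predictors.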
  • Early diagnosis of mild cognitive impairment and Alzheimer’s with event-related potentials and event-related desynchronization in N-back working memory tasks
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Francisco J. Fraga, Godofredo Quispe Mamani, Erin Johns, Guilherme Tavares, Tiago H. Falk, Natalie A. Phillips. Background and Objective: In this study we investigate whether event-related potentials (ERP) and/or event-related (de)synchronization (ERD/ERS) can be used to differentiate between 27 healthy elderly (HE), 21 subjects diagnosed with mild cognitive impairment (MCI) and 15 mild Alzheimer’s disease (AD) patients. Methods: Using 32-channel EEG recordings, we measured ERP responses to a three-level (N-back, N = 0,1,2) visual working memory task. We also performed ERD analysis over the same EEG data, dividing the full-band signal into the well-known delta, theta, alpha, beta and gamma bands. Both ERP and ERD analyses were followed by cluster analysis with correction for multiple comparisons whenever significant differences were found between groups. Results: Regarding ERP (full-band analysis), our findings showed that both patient groups (MCI and AD) had reduced P450 amplitude (compared to HE controls) in the execution of the non-match 1-back task at many scalp electrodes, chiefly at parietal and centro-parietal areas. However, no significant differences were found between MCI and AD in the ERP analysis, regardless of the task. As for sub-band analyses, ERD/ERS measures revealed that HE subjects elicited consistently greater alpha ERD responses than MCI and AD patients during the 1-back task in the match condition, with all differences located at frontal, central and occipital regions. Moreover, in the non-match condition, it was possible to distinguish between MCI and AD patients when they were performing the 0-back task, with MCI presenting more desynchronization than AD on the theta band at temporal and fronto-temporal areas. 
In summary, ERD analyses proved more valuable than ERP, since they showed significant differences in all three group comparisons: HE vs. MCI, HE vs. AD, and MCI vs. AD. Conclusions: Based on these findings, we conclude that ERD responses to working memory (N-back) tasks could be useful not only for early MCI diagnosis or improved AD diagnosis, but probably also for assessing the likelihood of MCI progression to AD, once further validated by a longitudinal study.
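Event-related desynchronization is conventionally quantified as the relative band-power drop from a baseline window to a task window. A minimal NumPy sketch on a synthetic alpha-band example follows; the signal, sampling rate, and window choices are illustrative, not the authors' pipeline.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in the [lo, hi] Hz band (via rFFT)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def erd_percent(baseline, task, fs, lo=8.0, hi=12.0):
    """ERD% = (baseline power - task power) / baseline power * 100.
    Positive values indicate desynchronization (a power drop during
    the task relative to rest)."""
    pb = band_power(baseline, fs, lo, hi)
    pt = band_power(task, fs, lo, hi)
    return (pb - pt) / pb * 100.0

# Synthetic single-channel EEG: strong 10 Hz alpha at rest that
# attenuates during the task window.
fs = 250
t = np.arange(fs) / fs                      # 1-second windows
rng = np.random.default_rng(1)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=fs)
task = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=fs)

erd = erd_percent(baseline, task, fs)       # large positive alpha ERD
```

In a study like the one above, this per-band, per-channel quantity would be computed for each subject and condition before the group-level cluster statistics.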
  • Availability and use of in-patient electronic health records in low resource setting
    • Abstract: Publication date: October 2018. Source: Computer Methods and Programs in Biomedicine, Volume 164. Author(s): Umair Qazi, Mahdi Haq, Nabhan Rashad, Khalid Rashid, Shahid Ullah, Usman Raza
  • Measuring the Impact of Nonignorable Missingness Using the R Package isni
    • Abstract: Publication date: Available online 4 July 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Hui Xie, Weihua Gao, Baodong Xing, Daniel F. Heitjan, Donald Hedeker, Chengbo Yuan. Background and Objective: The popular assumption of ignorability simplifies analyses with incomplete data, but if it is not satisfied, results may be incorrect. Therefore it is necessary to assess the sensitivity of empirical findings to this assumption. We have created a user-friendly and freely available software program to conduct such analyses. Method: One can evaluate the dependence of inferences on the assumption of ignorability by measuring their sensitivity to its violation. One tool for such an analysis is the index of local sensitivity to nonignorability (ISNI), which evaluates the rate of change of parameter estimates with the assumed degree of nonignorability in the neighborhood of an ignorable model. Computation of ISNI avoids the need to estimate a nonignorable model or to posit a specific magnitude of nonignorability. Our new R package, named isni, implements ISNI analysis for some common data structures and corresponding statistical models. Result: The isni package computes ISNI in the generalized linear model for independent data, and in the marginal multivariate Gaussian model and the linear mixed model for longitudinal/clustered data. It allows for arbitrary patterns of missingness caused by dropout and/or intermittent missingness. Examples illustrate its use and features. Conclusions: The R package isni enables a systematic and efficient sensitivity analysis that informs evaluations of the reliability and validity of empirical findings from incomplete data.
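The isni package itself is in R and its API is not reproduced here. As a language-neutral illustration of the ISNI idea, the rate of change of an estimate with respect to a nonignorability parameter, evaluated at the ignorable model, here is a toy Python sketch using a hypothetical exponential-tilting selection model (for this tilt, the derivative at zero equals the sample variance of the observed outcomes, which makes the sketch checkable).

```python
import numpy as np

def tilted_mean(y, gamma):
    """Mean of observed y under an exponentially tilted (nonignorable)
    selection model; gamma = 0 recovers the ignorable estimate."""
    w = np.exp(gamma * y)
    return (w * y).sum() / w.sum()

def isni_fd(y, eps=1e-5):
    """Local sensitivity to nonignorability: central finite difference
    of the estimate with respect to gamma, evaluated at gamma = 0."""
    return (tilted_mean(y, eps) - tilted_mean(y, -eps)) / (2 * eps)

y = np.array([1.0, 2.0, 2.0, 3.0, 5.0, 7.0])   # observed outcomes
sens = isni_fd(y)   # analytically: the sample variance of y
```

A large value of `sens` relative to the estimate's standard error signals that conclusions could shift materially under mild nonignorability, which is exactly the diagnostic the ISNI approach formalizes.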
  • Hybrid L1/2 + 2 Method for Gene Selection in the Cox Proportional Hazards Model
    • Abstract: Publication date: Available online 27 June 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Hai-Hui Huang, Yong Liang. Background and Objective: An important issue in genomic research is to identify, from tens of thousands of genes, the significant genes that are related to survival. Although the Cox proportional hazards model is a conventional survival analysis method, it does not perform gene selection. Methods: In this paper, we extend the hybrid L1/2 + 2 regularization (HLR) idea to the censored survival setting, proposing a new sparse Cox model based on the HLR method. We develop two algorithms for solving the HLR-penalized Cox model: one is a coordinate descent algorithm with the HLR thresholding operator, the other a weight iteration method. Results: The proposed method was tested on six public mRNA data sets covering several kinds of cancer, including AML, breast cancer, pancreatic cancer, DLBCL and melanoma. The test results indicate that the method identified a small but essential subset of genes while giving the best or equivalent predictive performance compared with some popular methods. Conclusions: The empirical and simulation results imply that the proposed strategy is highly competitive among several state-of-the-art methods for studying high-dimensional survival data.
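The paper's HLR thresholding operator for the L1/2 + 2 penalty has its own closed form, not reproduced here. As a simpler illustration of the coordinate-descent-with-thresholding pattern the abstract describes, the sketch below swaps in the L1 (soft-thresholding) operator on a toy least-squares problem; the design and values are illustrative.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator (the L1 stand-in; the HLR operator
    for the L1/2 + L2 penalty has a different closed form)."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def cd_penalized_ls(X, y, lam, iters=100):
    """Coordinate descent for penalized least squares, assuming each
    column is scaled so that x_j'x_j / n = 1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            zj = X[:, j] @ r / n
            beta[j] = soft_threshold(zj, lam)
    return beta

# Orthogonal toy design: only the first coefficient is truly nonzero.
X = np.array([[1., 1.], [-1., 1.], [1., -1.], [-1., -1.]])
y = X @ np.array([2.0, 0.0])
beta = cd_penalized_ls(X, y, lam=0.1)   # ~[1.9, 0.0]: shrunk + sparse
```

In the penalized Cox setting, the same loop runs on a quadratic approximation of the partial log-likelihood, with the thresholding operator swapped for the penalty in use.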
  • A review of disability EEG based wheelchair control system: Coherent taxonomy, open challenges and recommendations
    • Abstract: Publication date: Available online 18 June 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Z.T. Al-qaysi, B.B. Zaidan, A.A. Zaidan, M.S. Suzani. Context: Intelligent wheelchair technology has recently been utilised to address several mobility problems. Techniques based on the brain–computer interface (BCI) are currently used to develop electric wheelchairs. Using human brain control in wheelchairs for people with disability has elicited widespread attention due to its flexibility. Objective: This study aims to determine the background of recent studies on wheelchair control based on BCI for disability and to map the literature survey into a coherent taxonomy. The study intends to identify the most important aspects in this emerging field as an impetus for using BCI for disability in electric-powered wheelchair (EPW) control, which remains a challenge. The study also attempts to provide recommendations for solving other existing limitations and challenges. Methods: We systematically searched all articles about EPW control based on BCI for disability in three popular databases: ScienceDirect, IEEE and Web of Science. These databases contain numerous articles that considerably influenced this field and cover most of the relevant theoretical and technical issues. Results: We selected 100 articles on the basis of our inclusion and exclusion criteria. A large set of articles (55) discussed developing real-time wheelchair control systems based on BCI for disability signals. Another set of articles (25) focused on analysing BCI for disability signals for wheelchair control. The third set of articles (14) considered the simulation of wheelchair control based on BCI for disability signals. Four articles designed a framework for wheelchair control based on BCI for disability signals. 
Finally, one article reviewed concerns regarding wheelchair control based on BCI for disability signals. Discussion: Since 2007, researchers have pursued the possibility of using BCI for disability in EPW control through different approaches. Regardless of type, articles have focused on addressing limitations that impede the full efficiency of BCI for disability and recommended solutions for these limitations. Conclusions: Studies on wheelchair control based on BCI for disability considerably influence society due to the large number of people with disability. Therefore, we aim to provide researchers and developers with a clear understanding of this platform and highlight the challenges and gaps in current and future studies.
  • Deep generative learning for automated EHR diagnosis of traditional Chinese medicine
    • Abstract: Publication date: Available online 4 May 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Zhaohui Liang, Jun Liu, Aihua Ou, Honglai Zhang, Ziping Li, Jimmy Xiangji Huang. Background: Computer-aided medical decision-making (CAMDM) uses massive EMR data as both empirical and evidential support for the decision procedures of healthcare activities. Well-developed information infrastructure, such as hospital information systems and disease surveillance systems, provides abundant data for CAMDM. However, the complexity of EMR data combined with abstract medical knowledge makes conventional models incompetent for the analysis. Thus a deep belief network (DBN) based model is proposed to simulate the information analysis and decision-making procedure in medical practice. The purpose of this paper is to evaluate a deep learning architecture as an effective solution for CAMDM. Methods: A two-step model is applied in our study. In the first step, an optimized seven-layer deep belief network (DBN) is applied as an unsupervised learning algorithm to perform model training and acquire feature representations. A support vector machine model is then stacked on the DBN in the second, supervised learning step. Two data sets are used in the experiments. One is a plain text data set indexed by medical experts. The other is a structured dataset on primary hypertension. The data are randomly divided to generate the training set for the unsupervised learning and the testing set for the supervised learning. Model performance is evaluated by the statistics of mean and variance and by average precision and coverage on the data sets. 
Two conventional shallow models (support vector machine / SVM and decision tree / DT) are applied as comparisons to show the superiority of our proposed approach. Results: The deep learning (DBN + SVM) model outperforms plain SVM and DT on both data sets in terms of all the evaluation measures, which confirms our motivation that the deep model is good at capturing the key features with less dependence when the index is built up by manpower. Conclusions: Our study shows the two-step deep learning model achieves high performance for medical information retrieval over the conventional shallow models. It is able to capture the features of both plain text and the highly structured database of EMR data. The performance of the deep model is superior to conventional shallow learning models such as SVM and DT, making it an appropriate knowledge-learning model for information retrieval in EMR systems. Therefore, deep learning provides a good solution to improve the performance of CAMDM systems.
  • Wide complex tachycardia discrimination using dynamic time warping of ECG
    • Abstract: Publication date: Available online 20 April 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): F. Niknejad Mazandarani, M. Mohebbi. Background and objective: Automatic processing and accurate diagnosis of wide complex tachycardia (WCT) arrhythmia groups using electrocardiogram (ECG) signals remains a challenge. WCT arrhythmia consists of two main groups: ventricular tachycardia (VT) and supraventricular tachycardia with aberrancy (SVT-A). These two groups have similar morphologies in ECG signals. VT and SVT-A arrhythmias originate from the ventricle and atrium, respectively; hence, an inaccurate diagnosis of SVT-A instead of VT can be fatal. Methods: In this paper, we present a novel algorithm using dynamic time warping (DTW) to discriminate between VT and SVT-A arrhythmias. This method includes pre-processing, best template search (BTS), and classifier modules. The first module, pre-processing, is responsible for filtering, R-wave detection, and beat detection of ECG signals. The second module, BTS, automatically extracts the minimum possible number of signals as templates from the entire training dataset using an intelligent algorithm. These template signals have the greatest morphological difference, which leads to accurate WCT discrimination. Finally, a 1-NN classifier categorizes the test data using DTW distance. Results: Our proposed method was evaluated on an ECG signal database consisting of 171 subjects. The results showed that the proposed algorithm can accurately discriminate between VT, SVT-A, and normal subjects. The obtained accuracy, sensitivity, specificity, and positive predictive values were 93.22%, 88.68%, 96.98%, and 90.27%, respectively. Conclusion: The presented diagnostic method for discriminating VT and SVT-A, using only one ECG lead, is suitable for future clinical use. 
It can reduce needless therapeutic interventions and minimize risk for patients.
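The core of the classifier module, DTW distance plus nearest-template matching, can be sketched in pure Python. The "beats" below are toy shapes, not real ECG, and the template labels are illustrative.

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    using absolute difference as the local cost and no warping window."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify_1nn(beat, templates):
    """Assign the label of the DTW-nearest template beat."""
    return min(templates, key=lambda t: dtw(beat, t[0]))[1]

# Toy 'beats': the test beat is a time-warped copy of the VT template.
templates = [([0, 0, 2, 5, 2, 0], "VT"), ([0, 3, 0, -3, 0, 0], "SVT-A")]
label = classify_1nn([0, 0, 0, 2, 2, 5, 2, 0], templates)  # -> "VT"
```

Because DTW aligns samples non-linearly, the stretched copy still scores a distance of zero against its template, which is exactly why it suits beats of varying duration.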
  • Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series
    • Abstract: Publication date: Available online 19 April 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Zhixing Jiang, David Zhang, Guangming Lu. Background and objectives: Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Being non-invasive and convenient, pulse diagnosis also holds great significance for disease analysis in modern medicine. Practitioners sense the pulse waveform at the patient's wrist and make diagnoses based on subjective personal experience. With research into pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep pace with the development of modern medicine. Methods: In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Results: Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. Conclusions: The coefficients of the optimized DFS function, which is used to fit the arterial pressure waveforms, achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states.
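Fitting a discrete Fourier series to a waveform by least squares reduces to linear regression on sine/cosine basis columns. A minimal NumPy sketch on a synthetic pulse-like signal follows; it illustrates the technique, not the authors' implementation.

```python
import numpy as np

def fourier_design(t, period, n_harmonics):
    """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..n_harmonics."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    return np.column_stack(cols)

def fit_dfs(t, x, period, n_harmonics):
    """Least-squares Fourier-series fit of one pulse period."""
    A = fourier_design(t, period, n_harmonics)
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef, A @ coef               # coefficients, fitted curve

# Toy 'average pulse waveform': one period of two harmonics plus noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200, endpoint=False)
x = 1.0 + 0.8 * np.sin(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t)
x += 0.01 * rng.normal(size=t.size)

coef, fitted = fit_dfs(t, x, period=1.0, n_harmonics=3)
rmse = np.sqrt(np.mean((fitted - x) ** 2))   # down at the noise level
```

The recovered `coef` vector, ordered (constant, cos1, sin1, cos2, sin2, ...), is precisely the kind of compact feature vector the abstract describes feeding into a classifier.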
  • Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network
    • Abstract: Publication date: Available online 19 April 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Pedro Henrique Bandeira Diniz, Thales Levi Azevedo Valente, João Otávio Bandeira Diniz, Aristófanes Corrêa Silva, Marcelo Gattass, Nina Ventura, Bernardo Carvalho Muniz, Emerson Leandro Gasparetto. Background and Objective: White matter lesions are non-static brain lesions with a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data in these images is far too large for manual analysis and interpretation, representing a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting white matter lesion regions of the brain in MRI of the FLAIR modality. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. Methods: The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. Results: The methodology was applied to 91 magnetic resonance images provided by DASA and achieved an accuracy of 98.73%, specificity of 98.77% and sensitivity of 78.79%, with 0.005 false positives, without any false positive reduction technique, in detecting white matter lesion regions. Conclusions: These results demonstrate the feasibility of analysing brain MRI with SLIC0 and convolutional neural network techniques to successfully detect white matter lesion regions.
  • Exploring the molecular mechanisms of Traditional Chinese Medicine components using gene expression signatures and connectivity map
    • Abstract: Publication date: Available online 4 April 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Minjae Yoo, Jimin Shin, Hyunmin Kim, Jihye Kim, Jaewoo Kang, Aik Choon Tan. Background and objective: Traditional Chinese Medicine (TCM) has been practiced over thousands of years in China and other Asian countries for treating various symptoms and diseases. However, the underlying molecular mechanisms of TCM are poorly understood, partly due to the “multi-component, multi-target” nature of TCM. To uncover the molecular mechanisms of TCM, we perform comprehensive gene expression analysis using connectivity map. Methods: We interrogated gene expression signatures obtained from 102 TCM components using the next-generation Connectivity Map (CMap) resource. We performed systematic data mining and analysis of the mechanisms of action (MoA) of these TCM components based on the CMap results. Results: We clustered the 102 TCM components into four groups based on their MoAs using the next-generation CMap resource. We performed gene set enrichment analysis on these components to provide additional support for explaining these molecular mechanisms. We also provided literature evidence to validate the MoAs identified through this bioinformatics analysis. Finally, we developed the Traditional Chinese Medicine Drug Repurposing Hub (TCMHub), a connectivity map resource to facilitate the elucidation of TCM MoA for drug repurposing research. TCMHub is freely available. Conclusions: The molecular mechanisms of TCM could be uncovered by using gene expression signatures and connectivity map. Through this analysis, we identified that many of the TCM components possess diverse MoAs; this may explain the applications of TCM in treating various symptoms and diseases.
  • Symptom-based network classification identifies distinct clinical subgroups of liver diseases with common molecular pathways
    • Abstract: Publication date: Available online 22 February 2018. Source: Computer Methods and Programs in Biomedicine. Author(s): Zixin Shu, Wenwen Liu, Huikun Wu, Mingzhong Xiao, Deng Wu, Ting Cao, Meng Ren, Junxiu Tao, Chuhua Zhang, Tangqing He, Xiaodong Li, Runshun Zhang, Xuezhong Zhou. Background and objective: Liver disease is a multifactorial complex disease with high global prevalence and poor long-term clinical efficacy, and liver disease patients with different comorbidities often present multiple phenotypes in the clinic. Thus, there is a pressing need to improve understanding of the complexity of the clinical liver disease population to help derive more accurate disease subtypes for personalized treatment. Methods: The individualized treatment of traditional Chinese medicine (TCM) provides a theoretical basis for the study of personalized classification of complex diseases. Utilizing the TCM clinical electronic medical records (EMRs) of 6475 liver inpatient cases, we built a liver disease comorbidity network (LDCN) to show the complicated associations between liver diseases and their comorbidities, and then constructed a patient similarity network with shared symptoms (PSN). Finally, we identified liver patient subgroups using community detection methods and performed enrichment analyses to find both distinct clinical and molecular characteristics (with the phenotype-genotype associations and interactome networks) of these patient subgroups. Results: From the comorbidity network, we found that clinical liver patients have a wide range of disease comorbidities, in which the basic liver diseases (e.g. hepatitis B, decompensated liver cirrhosis) and the common chronic diseases (e.g. hypertension, type 2 diabetes) have a high degree of disease comorbidity. 
In addition, we identified 303 patient modules (representing the liver patient subgroups) from the PSN, in which the top 6 modules with the largest numbers of cases include 51.68% of all cases, while 251 modules contain only 10 or fewer cases, which indicates the manifestation diversity of liver diseases. Finally, we found that the patient subgroups have distinct symptom phenotypes, disease comorbidity characteristics and underlying molecular pathways, which could be used for understanding novel disease subtypes of liver conditions. For example, three patient subgroups, namely Module 6 (M6, n = 638), M2 (n = 623) and M1 (n = 488), were associated with common chronic liver disease conditions (hepatitis, cirrhosis, hepatocellular carcinoma). Meanwhile, the patient subgroups M30 (n = 36) and M36 (n = 37) were mostly related to acute gastroenteritis and upper respiratory infection, respectively, reflecting the individual comorbidity characteristics of liver subgroups. Furthermore, we identified the distinct genes and pathways of the patient subgroups and of the basic liver diseases (hepatitis B and cirrhosis), respectively. The high degree of overlapping pathways between them (e.g. M36 with 93.33% shared enriched pathways) indicates the underlying molecular network mechanisms of each patient subgroup. Conclusions: Our results demonstrate the utility and comprehensiveness of disease classification based on community detection in a patient network of shared TCM symptom phenotypes, and the approach can be applied to other complex diseases.
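The shared-symptom patient similarity network and its module detection can be illustrated minimally: Jaccard similarity between symptom sets defines edges, and connected components stand in for the paper's community detection step. All patients, symptoms, and the threshold below are hypothetical.

```python
def jaccard(a, b):
    """Shared-symptom similarity between two patients' symptom sets."""
    return len(a & b) / len(a | b)

def patient_modules(patients, threshold=0.3):
    """Build a patient similarity network (edge when the Jaccard
    similarity of symptom sets >= threshold) and return its connected
    components as a minimal stand-in for community detection."""
    ids = list(patients)
    adj = {i: [] for i in ids}
    for i in ids:
        for j in ids:
            if i < j and jaccard(patients[i], patients[j]) >= threshold:
                adj[i].append(j)
                adj[j].append(i)
    modules, seen = [], set()
    for i in ids:
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:                      # depth-first walk of the PSN
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        modules.append(comp)
    return modules

# Hypothetical symptom sets: two hepatitis-like and two respiratory-like.
patients = {
    1: {"jaundice", "fatigue", "ascites"},
    2: {"jaundice", "fatigue", "nausea"},
    3: {"cough", "fever", "sore throat"},
    4: {"cough", "fever", "headache"},
}
modules = patient_modules(patients)       # two modules: {1, 2}, {3, 4}
```

Real pipelines replace connected components with modularity-based community detection, which can split a single dense component into finer subgroups, but the network construction from shared phenotypes is the same.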
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
JournalTOCs © 2009-