Publisher: Scientific Research Publishing   (Total: 230 journals)


Showing 1 - 200 of 230 Journals sorted alphabetically
Advances in Aerospace Science and Technology     Open Access   (Followers: 14)
Advances in Alzheimer's Disease     Open Access   (Followers: 8)
Advances in Anthropology     Open Access   (Followers: 18)
Advances in Applied Sociology     Open Access   (Followers: 16)
Advances in Biological Chemistry     Open Access   (Followers: 8)
Advances in Bioscience and Biotechnology     Open Access   (Followers: 20)
Advances in Breast Cancer Research     Open Access   (Followers: 18)
Advances in Chemical Engineering and Science     Open Access   (Followers: 109)
Advances in Computed Tomography     Open Access   (Followers: 2)
Advances in Entomology     Open Access   (Followers: 3)
Advances in Enzyme Research     Open Access   (Followers: 10)
Advances in Historical Studies     Open Access   (Followers: 10)
Advances in Infectious Diseases     Open Access   (Followers: 9)
Advances in Internet of Things     Open Access   (Followers: 18)
Advances in Journalism and Communication     Open Access   (Followers: 26)
Advances in Linear Algebra & Matrix Theory     Open Access   (Followers: 9)
Advances in Literary Study     Open Access   (Followers: 1)
Advances in Lung Cancer     Open Access   (Followers: 10)
Advances in Materials Physics and Chemistry     Open Access   (Followers: 33)
Advances in Microbiology     Open Access   (Followers: 24)
Advances in Molecular Imaging     Open Access   (Followers: 1)
Advances in Nanoparticles     Open Access   (Followers: 17)
Advances in Parkinson's Disease     Open Access   (Followers: 2)
Advances in Physical Education     Open Access   (Followers: 10)
Advances in Pure Mathematics     Open Access   (Followers: 8)
Advances in Remote Sensing     Open Access   (Followers: 59)
Advances in Reproductive Sciences     Open Access   (Followers: 1)
Advances in Sexual Medicine     Open Access   (Followers: 3)
Agricultural Sciences     Open Access   (Followers: 5)
American J. of Analytical Chemistry     Open Access   (Followers: 29)
American J. of Climate Change     Open Access   (Followers: 36)
American J. of Computational Mathematics     Open Access   (Followers: 6)
American J. of Industrial and Business Management     Open Access   (Followers: 24)
American J. of Molecular Biology     Open Access   (Followers: 3)
American J. of Operations Research     Open Access   (Followers: 6)
American J. of Plant Sciences     Open Access   (Followers: 18)
Applied Mathematics     Open Access   (Followers: 7)
Archaeological Discovery     Open Access   (Followers: 3)
Art and Design Review     Open Access   (Followers: 13)
Atmospheric and Climate Sciences     Open Access   (Followers: 30)
Beijing Law Review     Open Access   (Followers: 4)
Case Reports in Clinical Medicine     Open Access   (Followers: 2)
CellBio     Open Access  
Chinese Medicine     Open Access   (Followers: 3)
Chinese Studies     Open Access   (Followers: 4)
Circuits and Systems     Open Access   (Followers: 16)
Communications and Network     Open Access   (Followers: 12)
Computational Chemistry     Open Access   (Followers: 3)
Computational Molecular Bioscience     Open Access   (Followers: 1)
Computational Water, Energy, and Environmental Engineering     Open Access   (Followers: 5)
Creative Education     Open Access   (Followers: 14)
Current Urban Studies     Open Access   (Followers: 14)
Detection     Open Access   (Followers: 3)
E-Health Telecommunication Systems and Networks     Open Access   (Followers: 3)
Energy and Power Engineering     Open Access   (Followers: 23)
Food and Nutrition Sciences     Open Access   (Followers: 24)
Forensic Medicine and Anatomy Research     Open Access   (Followers: 5)
Geomaterials     Open Access   (Followers: 2)
Graphene     Open Access   (Followers: 7)
Green and Sustainable Chemistry     Open Access   (Followers: 4)
iBusiness     Open Access   (Followers: 2)
InfraMatics     Open Access  
Intelligent Control and Automation     Open Access   (Followers: 5)
Intelligent Information Management     Open Access   (Followers: 7)
Intl. J. of Analytical Mass Spectrometry and Chromatography     Open Access   (Followers: 8)
Intl. J. of Astronomy and Astrophysics     Open Access   (Followers: 36)
Intl. J. of Clean Coal and Energy     Open Access   (Followers: 2)
Intl. J. of Clinical Medicine     Open Access   (Followers: 2)
Intl. J. of Communications, Network and System Sciences     Open Access   (Followers: 9)
Intl. J. of Geosciences     Open Access   (Followers: 10)
Intl. J. of Intelligence Science     Open Access   (Followers: 3)
Intl. J. of Internet and Distributed Systems     Open Access   (Followers: 2)
Intl. J. of Medical Physics, Clinical Engineering and Radiation Oncology     Open Access   (Followers: 11)
Intl. J. of Modern Nonlinear Theory and Application     Open Access   (Followers: 1)
Intl. J. of Organic Chemistry     Open Access   (Followers: 8)
Intl. J. of Otolaryngology and Head & Neck Surgery     Open Access   (Followers: 5)
J. of Agricultural Chemistry and Environment     Open Access   (Followers: 3)
J. of Analytical Sciences, Methods and Instrumentation     Open Access   (Followers: 4)
J. of Applied Mathematics and Physics     Open Access   (Followers: 9)
J. of Behavioral and Brain Science     Open Access   (Followers: 7)
J. of Biomaterials and Nanobiotechnology     Open Access   (Followers: 6)
J. of Biomedical Science and Engineering     Open Access   (Followers: 1)
J. of Biophysical Chemistry     Open Access   (Followers: 3)
J. of Biosciences and Medicines     Open Access  
J. of Building Construction and Planning Research     Open Access   (Followers: 10)
J. of Cancer Therapy     Open Access   (Followers: 1)
J. of Computer and Communications     Open Access   (Followers: 1)
J. of Cosmetics, Dermatological Sciences and Applications     Open Access   (Followers: 2)
J. of Data Analysis and Information Processing     Open Access   (Followers: 2)
J. of Diabetes Mellitus     Open Access   (Followers: 6)
J. of Electromagnetic Analysis and Applications     Open Access   (Followers: 6)
J. of Electronics Cooling and Thermal Control     Open Access   (Followers: 9)
J. of Encapsulation and Adsorption Sciences     Open Access   (Followers: 1)
J. of Environmental Protection     Open Access   (Followers: 1)
J. of Financial Risk Management     Open Access   (Followers: 7)
J. of Flow Control, Measurement & Visualization     Open Access   (Followers: 1)
J. of Geoscience and Environment Protection     Open Access  
J. of High Energy Physics, Gravitation and Cosmology     Open Access   (Followers: 2)
J. of Human Resource and Sustainability Studies     Open Access   (Followers: 1)
J. of Immune Based Therapies, Vaccines and Antimicrobials     Open Access   (Followers: 2)
J. of Information Security     Open Access   (Followers: 11)
J. of Materials Science and Chemical Engineering     Open Access   (Followers: 1)
J. of Mathematical Finance     Open Access   (Followers: 6)
J. of Minerals and Materials Characterization and Engineering     Open Access   (Followers: 3)
J. of Power and Energy Engineering     Open Access   (Followers: 2)
J. of Quantum Information Science     Open Access   (Followers: 4)
J. of Sensor Technology     Open Access   (Followers: 3)
J. of Service Science and Management     Open Access  
J. of Software Engineering and Applications     Open Access   (Followers: 12)
J. of Sustainable Bioenergy Systems     Full-text available via subscription  
J. of Transportation Technologies     Open Access   (Followers: 13)
J. of Tuberculosis Research     Open Access   (Followers: 1)
J. of Water Resource and Protection     Open Access   (Followers: 6)
Low Carbon Economy     Open Access   (Followers: 4)
Materials Sciences and Applications     Open Access   (Followers: 2)
Microscopy Research     Open Access   (Followers: 2)
Modeling and Numerical Simulation of Material Science     Open Access   (Followers: 12)
Modern Chemotherapy     Open Access  
Modern Economy     Open Access   (Followers: 3)
Modern Instrumentation     Open Access   (Followers: 57)
Modern Mechanical Engineering     Open Access   (Followers: 66)
Modern Plastic Surgery     Open Access   (Followers: 6)
Modern Research in Catalysis     Open Access  
Modern Research in Inflammation     Open Access  
Natural Resources     Open Access  
Natural Science     Open Access   (Followers: 8)
Neuroscience & Medicine     Open Access   (Followers: 2)
New J. of Glass and Ceramics     Open Access   (Followers: 6)
Occupational Diseases and Environmental Medicine     Open Access   (Followers: 3)
Open J. of Accounting     Open Access   (Followers: 2)
Open J. of Acoustics     Open Access   (Followers: 23)
Open J. of Air Pollution     Open Access   (Followers: 4)
Open J. of Anesthesiology     Open Access   (Followers: 9)
Open J. of Animal Sciences     Open Access   (Followers: 4)
Open J. of Antennas and Propagation     Open Access   (Followers: 8)
Open J. of Apoptosis     Open Access  
Open J. of Applied Biosensor     Open Access  
Open J. of Applied Sciences     Open Access  
Open J. of Biophysics     Open Access  
Open J. of Blood Diseases     Open Access  
Open J. of Business and Management     Open Access   (Followers: 3)
Open J. of Cell Biology     Open Access   (Followers: 1)
Open J. of Civil Engineering     Open Access   (Followers: 7)
Open J. of Clinical Diagnostics     Open Access   (Followers: 1)
Open J. of Composite Materials     Open Access   (Followers: 21)
Open J. of Depression     Open Access   (Followers: 2)
Open J. of Discrete Mathematics     Open Access   (Followers: 3)
Open J. of Earthquake Research     Open Access   (Followers: 3)
Open J. of Emergency Medicine     Open Access   (Followers: 2)
Open J. of Endocrine and Metabolic Diseases     Open Access   (Followers: 1)
Open J. of Energy Efficiency     Open Access   (Followers: 1)
Open J. of Epidemiology     Open Access   (Followers: 2)
Open J. of Fluid Dynamics     Open Access   (Followers: 33)
Open J. of Forestry     Open Access   (Followers: 1)
Open J. of Gastroenterology     Open Access   (Followers: 1)
Open J. of Genetics     Open Access  
Open J. of Geology     Open Access   (Followers: 14)
Open J. of Immunology     Open Access   (Followers: 4)
Open J. of Inorganic Chemistry     Open Access   (Followers: 1)
Open J. of Inorganic Non-metallic Materials     Open Access   (Followers: 2)
Open J. of Internal Medicine     Open Access  
Open J. of Leadership     Open Access   (Followers: 18)
Open J. of Marine Science     Open Access   (Followers: 6)
Open J. of Medical Imaging     Open Access   (Followers: 2)
Open J. of Medical Microbiology     Open Access   (Followers: 4)
Open J. of Medical Psychology     Open Access  
Open J. of Medicinal Chemistry     Open Access   (Followers: 4)
Open J. of Metal     Open Access   (Followers: 1)
Open J. of Microphysics     Open Access  
Open J. of Modelling and Simulation     Open Access   (Followers: 2)
Open J. of Modern Hydrology     Open Access   (Followers: 5)
Open J. of Modern Linguistics     Open Access   (Followers: 5)
Open J. of Modern Neurosurgery     Open Access   (Followers: 2)
Open J. of Molecular and Integrative Physiology     Open Access  
Open J. of Nephrology     Open Access   (Followers: 4)
Open J. of Nursing     Open Access   (Followers: 4)
Open J. of Obstetrics and Gynecology     Open Access   (Followers: 5)
Open J. of Ophthalmology     Open Access   (Followers: 3)
Open J. of Optimization     Open Access  
Open J. of Organ Transplant Surgery     Open Access   (Followers: 1)
Open J. of Organic Polymer Materials     Open Access   (Followers: 1)
Open J. of Orthopedics     Open Access   (Followers: 3)
Open J. of Pathology     Open Access   (Followers: 2)
Open J. of Pediatrics     Open Access   (Followers: 4)
Open J. of Philosophy     Open Access   (Followers: 11)
Open J. of Physical Chemistry     Open Access  
Open J. of Political Science     Open Access   (Followers: 5)
Open J. of Polymer Chemistry     Open Access   (Followers: 12)
Open J. of Preventive Medicine     Open Access  
Open J. of Psychiatry     Open Access   (Followers: 3)
Open J. of Radiology     Open Access   (Followers: 4)
Open J. of Regenerative Medicine     Open Access  
Open J. of Respiratory Diseases     Open Access   (Followers: 2)
Open J. of Rheumatology and Autoimmune Diseases     Open Access   (Followers: 4)
Open J. of Safety Science and Technology     Open Access   (Followers: 16)
Open J. of Social Sciences     Open Access   (Followers: 3)
Open J. of Soil Science     Open Access   (Followers: 9)
Open J. of Statistics     Open Access   (Followers: 3)
Open J. of Stomatology     Open Access  
Open J. of Synthesis Theory and Applications     Open Access  


Journal of Information Security
Number of Followers: 11  

  This is an Open Access journal
ISSN (Print) 2153-1234 - ISSN (Online) 2153-1242
Published by Scientific Research Publishing   [230 journals]
  • Information, Vol. 12, Pages 488: A Private Strategy for Workload
           Forecasting on Large-Scale Wireless Networks

    • Authors: Pedro Silveira Pisa, Bernardo Costa, Jéssica Alcântara Gonçalves, Dianne Scherly Varela de Medeiros, Diogo Menezes Ferrazani Mattos
      First page: 488
      Abstract: The growing convergence of various services characterizes wireless access networks. Therefore, there is a high demand for provisioning the spectrum to serve simultaneous users demanding high throughput rates. Load prediction at each access point is mandatory to allocate resources and to assist sophisticated network designs. However, the load at each access point varies according to the number of connected devices and traffic characteristics. In this paper, we propose a load estimation strategy based on a Markov chain to predict the number of devices connected to each access point on the wireless network, and we apply an unsupervised machine learning model to identify traffic profiles. The main goals are to determine traffic patterns and overload projections in the wireless network, efficiently scale the network, and provide a knowledge base for security tools. We evaluate the proposal in a large-scale university network, with 670 access points spread over a wide area. The collected data is de-identified, and data processing occurs in the cloud. The evaluation results show that the proposal predicts the number of connected devices with 90% accuracy and discriminates five different user-traffic profiles on the load of the wireless network.
      Citation: Information
      PubDate: 2021-11-23
      DOI: 10.3390/info12120488
      Issue No: Vol. 12, No. 12 (2021)
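As a rough illustration of the entry above (not the authors' actual model), a first-order Markov chain can be estimated from an access point's observed load states and then used to predict the next state. The state labels and history below are hypothetical:

```python
# Hedged sketch: estimate a first-order Markov chain from a sequence of
# coarse load states and predict the most likely next state.
def estimate_transitions(history):
    """Estimate first-order transition probabilities from a state sequence."""
    counts = {}
    for prev, nxt in zip(history, history[1:]):
        counts.setdefault(prev, {})
        counts[prev][nxt] = counts[prev].get(nxt, 0) + 1
    probs = {}
    for state, row in counts.items():
        total = sum(row.values())
        probs[state] = {t: c / total for t, c in row.items()}
    return probs

def predict_next_state(transitions, current_state):
    """Most likely next state under the estimated transition probabilities."""
    row = transitions[current_state]
    return max(row, key=row.get)
```

The paper additionally clusters traffic into profiles with unsupervised learning; this sketch only covers the state-prediction step.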
  • Information, Vol. 12, Pages 489: An Inspection and Classification System
           for Automotive Component Remanufacturing Industry Based on Ensemble
           Learning

    • Authors: Fátima A. Saiz, Garazi Alfaro, Iñigo Barandiaran
      First page: 489
      Abstract: This paper presents an automated inspection and classification system for the automotive component remanufacturing industry, based on ensemble learning. The system comprises several stages that classify each component as good, rectifiable, or reject according to the manufacturer's criteria. The performance of two deep learning-based models is studied both individually and as an ensemble, with the ensemble improving accuracy by 7%. The results on the test set demonstrate the successful performance of the system in terms of component classification.
      Citation: Information
      PubDate: 2021-11-23
      DOI: 10.3390/info12120489
      Issue No: Vol. 12, No. 12 (2021)
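One common way to ensemble two models, sketched here under the assumption of soft voting (the abstract does not state the combination rule used), is to average their class-probability outputs over the three classes it names:

```python
# Hedged sketch: soft-voting ensemble over the three classes from the
# abstract. The probability vectors are hypothetical model outputs.
def ensemble_predict(probs_a, probs_b, classes=("good", "rectifiable", "reject")):
    """Average two class-probability vectors and return the winning label."""
    avg = [(a + b) / 2 for a, b in zip(probs_a, probs_b)]
    return classes[avg.index(max(avg))]
```

Averaging smooths disagreements between the two networks, which is one plausible source of the reported accuracy gain over either model alone.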
  • Information, Vol. 12, Pages 490: Early Stage Identification of COVID-19
           Patients in Mexico Using Machine Learning: A Case Study for the Tijuana
           General Hospital

    • Authors: Cristián Castillo-Olea, Roberto Conte-Galván, Clemente Zuñiga, Alexandra Siono, Angelica Huerta, Ornela Bardhi, Eric Ortiz
      First page: 490
      Abstract: Background: The current pandemic caused by SARS-CoV-2 is an acute illness of global concern. COVID-19 is an infectious disease caused by the recently discovered coronavirus SARS-CoV-2. Most people who get sick with COVID-19 experience mild, moderate, or severe symptoms. In order to help make quick decisions regarding treatment and isolation needs, it is useful to determine which significant variables indicate infection cases in the population served by the Tijuana General Hospital (Hospital General de Tijuana). An Artificial Intelligence (Machine Learning) mathematical model was developed in order to identify early-stage significant variables in COVID-19 patients. Methods: The individual characteristics of the study subjects included age, gender, age group, symptoms, comorbidities, diagnosis, and outcomes. A mathematical model that uses supervised learning algorithms, allowing the identification of the significant variables that predict the diagnosis of COVID-19 with high precision, was developed. Results: Automatic algorithms were used to analyze the data: for Systolic Arterial Hypertension (SAH), the Logistic Regression algorithm showed results of 91.0% in area under ROC (AUC), 80% accuracy (CA), 80% F1, 80% recall, and 80.1% precision for the selected variables, while for Diabetes Mellitus (DM) the Logistic Regression algorithm obtained 91.2% AUC, 89.2% accuracy, 88.8% F1, 89.7% precision, and 89.2% recall for the selected variables. The neural network algorithm showed better results for patients with Obesity, obtaining 83.4% AUC, 91.4% accuracy, 89.9% F1, 90.6% precision, and 91.4% recall. Conclusions: Statistical analyses revealed that the significant predictive symptoms in patients with SAH, DM, and Obesity were most pronounced for fatigue and myalgias/arthralgias. In contrast, the third dominant symptom in people with SAH and DM was odynophagia.
      Citation: Information
      PubDate: 2021-11-24
      DOI: 10.3390/info12120490
      Issue No: Vol. 12, No. 12 (2021)
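The accuracy, precision, recall, and F1 figures reported above come from standard confusion-matrix arithmetic; a minimal sketch with made-up labels (1 = positive diagnosis):

```python
# Standard binary classification metrics computed from a confusion matrix.
def binary_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall, F1) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

AUC additionally requires the model's ranking scores rather than hard labels, so it is omitted from this sketch.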
  • Information, Vol. 12, Pages 491: Multi-Keyword Classification: A Case
           Study in Finnish Social Sciences Data Archive

    • Authors: Erjon Skenderi, Jukka Huhtamäki, Kostas Stefanidis
      First page: 491
      Abstract: In this paper, we consider the task of assigning relevant labels to studies in the social science domain. Manual labelling is an expensive process and prone to human error. Various multi-label text classification machine learning approaches have been proposed to resolve this problem. We introduce a dataset obtained from the Finnish Social Science Archive, comprising the metadata of 2968 research studies. The metadata of each study includes attributes, such as the “abstract” and the “set of labels”. We used the Bag of Words (BoW), TF-IDF term weighting and pretrained word embeddings obtained from FastText and BERT models to generate the text representations for each study’s abstract field. Our selection of multi-label classification methods includes a Naive approach, Multi-label k Nearest Neighbours (ML-kNN), Multi-Label Random Forest (ML-RF), X-BERT and Parabel. The methods were combined with the text representation techniques and their performance was evaluated on our dataset. We measured the classification accuracy of the combinations using Precision, Recall and F1 metrics. In addition, we used the Normalized Discounted Cumulative Gain to measure the label ranking performance of the selected methods combined with the text representation techniques. The results showed that the ML-RF model achieved a higher classification accuracy with the TF-IDF features and, based on the ranking score, the Parabel model outperformed the other methods.
      Citation: Information
      PubDate: 2021-11-25
      DOI: 10.3390/info12120491
      Issue No: Vol. 12, No. 12 (2021)
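The TF-IDF weighting mentioned above can be sketched in a few lines; this toy version (raw term frequency, natural log, no smoothing) differs from production implementations such as scikit-learn's:

```python
import math

def tf_idf(docs):
    """TF-IDF weights for a list of tokenised documents (toy version).

    TF = term count / document length; IDF = log(N / document frequency).
    """
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights
```

Note that a term appearing in every document gets weight zero under this unsmoothed IDF, which is why libraries usually add smoothing constants.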
  • Information, Vol. 12, Pages 492: A Semantic Approach for Quality Assurance
           and Assessment of Volunteered Geographic Information

    • Authors: Gloria Bordogna
      First page: 492
      Abstract: The paper analyses the characteristics of Volunteered Geographic Information (VGI) and the need to assure and assess its quality for possible use and re-use. Ontologies and soft ontologies are presented as means to support quality assurance and assessment of VGI, and their limitations are highlighted. Finally, a possibilistic approach using a fuzzy ontology is proposed, which makes it possible to model both the imprecision and vagueness of domain knowledge and the epistemic uncertainty affecting observations. A case study example is illustrated.
      Citation: Information
      PubDate: 2021-11-25
      DOI: 10.3390/info12120492
      Issue No: Vol. 12, No. 12 (2021)
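Fuzzy ontologies attach graded membership degrees to assertions rather than crisp true/false values; the trapezoidal membership function is a standard building block. The thresholds below are invented for illustration (e.g. a graded notion of "acceptable positional accuracy" in metres):

```python
# Hedged sketch: trapezoidal fuzzy membership function, a common primitive
# in fuzzy ontologies. Parameters a <= b <= c <= d define the shape.
def trapezoidal(x, a, b, c, d):
    """0 below a, ramps up on [a, b], 1 on [b, c], ramps down on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

A quality-assessment rule can then combine such degrees (e.g. with min/max operators) instead of rejecting an observation outright.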
  • Information, Vol. 12, Pages 493: Private Car O-D Flow Estimation Based on
           Automated Vehicle Monitoring Data: Theoretical Issues and Empirical

    • Authors: Antonio Comi, Alexander Rossolov, Antonio Polimeni, Agostino Nuzzolo
      First page: 493
      Abstract: Data on the daily activity of private cars form the basis of many studies in the field of transportation engineering. In the past, in order to obtain such data, a large number of collection techniques based on travel diaries and driver interviews were used. Telematics applied to vehicles and to a broad range of economic activities has opened up new opportunities for transportation engineers, allowing a significant increase in the volume and detail level of data collected. One of the options for obtaining information on the daily activity of private cars now consists of processing data from automated vehicle monitoring (AVM). Therefore, in this context, and in order to explore the opportunity offered by telematics, this paper presents a methodology for obtaining origin–destination flows through basic information extracted from AVM/floating car data (FCD). The benefits of such a procedure are then evaluated through its implementation in a real test case, i.e., the Veneto region in northern Italy, where full-day AVM/FCD data were available with about 30,000 vehicles surveyed and more than 388,000 trips identified. The goodness of the proposed methodology for O-D flow estimation is validated through assignment to the road network and comparison with traffic count data. Taking into account aspects of vehicle-sampling observations, this paper also points out issues related to sample representativeness, both in terms of daily activities and spatial coverage. A preliminary descriptive analysis of the O-D flows was carried out, and the analysis of the revealed trip patterns is presented.
      Citation: Information
      PubDate: 2021-11-26
      DOI: 10.3390/info12120493
      Issue No: Vol. 12, No. 12 (2021)
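At its core, O-D estimation from FCD aggregates individual trip records into flow counts and expands the observed sample to the population. A minimal sketch; the expansion-by-sampling-rate rule here is a deliberate simplification of the paper's methodology:

```python
# Hedged sketch: build an O-D flow table from (origin, destination) trip
# records, then expand sampled flows by the vehicle sampling rate.
def od_matrix(trips):
    """Aggregate individual (origin, destination) trips into flow counts."""
    flows = {}
    for origin, dest in trips:
        flows[(origin, dest)] = flows.get((origin, dest), 0) + 1
    return flows

def expand(flows, sampling_rate):
    """Scale sampled flows to population level (naive uniform expansion)."""
    return {od: n / sampling_rate for od, n in flows.items()}
```

The paper validates the expanded flows by assigning them to the road network and comparing against traffic counts, a step outside this sketch.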
  • Information, Vol. 12, Pages 494: The Systems and Methods of Game Design

    • Authors: Pedro Pinto Neves, Nelson Zagalo
      First page: 494
      Abstract: Even a cursory glance at scholarly literature from over a decade ago related to games can show authors variously prefacing their contributions with explanations of the newness of games, the impressive growth of the digital games industry, and the interdisciplinary nature of games [...]
      Citation: Information
      PubDate: 2021-11-27
      DOI: 10.3390/info12120494
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 495: Using Generative Module and Pruning
           Inference for the Fast and Accurate Detection of Apple Flower in Natural

    • Authors: Yan Zhang, Shupeng He, Shiyun Wa, Zhiqi Zong, Yunling Liu
      First page: 495
      Abstract: Apple flower detection is an important project in the apple planting stage. This paper proposes an optimized detection network model based on a generative module and pruning inference. Due to the problems of instability, non-convergence, and overfitting of convolutional neural networks in the case of insufficient samples, this paper uses a generative module and various image pre-processing methods, including the Cutout, CutMix, Mixup, SnapMix, and Mosaic algorithms, for data augmentation. To counteract the slowdown in training and inference caused by the increasing complexity of detection networks, the pruning inference proposed in this paper automatically deactivates part of the network structure according to different conditions, reducing the network parameters and operations and significantly improving network speed. The proposed model achieves 90.01%, 98.79%, and 97.43% in precision, recall, and mAP, respectively, in detecting apple flowers, and the inference speed can reach 29 FPS. On the YOLO-v5 model with slightly lower performance, the inference speed can reach 71 FPS with the pruning inference. These experimental results demonstrate that the model proposed in this paper can meet the needs of agricultural production.
      Citation: Information
      PubDate: 2021-11-29
      DOI: 10.3390/info12120495
      Issue No: Vol. 12, No. 12 (2021)
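Of the augmentation methods listed, Mixup is the simplest to sketch: convexly blend two training images and their one-hot labels. This toy version uses flat pixel lists and a fixed mixing coefficient instead of a Beta-distributed draw:

```python
# Hedged sketch of Mixup augmentation on flat pixel lists.
def mixup(img_a, img_b, label_a, label_b, lam=0.5):
    """Blend two samples with coefficient lam in [0, 1].

    In practice lam is drawn from a Beta(alpha, alpha) distribution
    per training batch; it is fixed here for reproducibility.
    """
    img = [lam * p + (1 - lam) * q for p, q in zip(img_a, img_b)]
    label = [lam * p + (1 - lam) * q for p, q in zip(label_a, label_b)]
    return img, label
```

Cutout, CutMix, SnapMix, and Mosaic operate on spatial regions instead of whole-image blends, so they need 2-D indexing that this sketch omits.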
  • Information, Vol. 12, Pages 496: Optimization of the Mashaer Shuttle-Bus
           Service in Hajj: Arafat-Muzdalifah Case Study

    • Authors: Omar Hussain, Emad Felemban, Faizan Ur Rehman
      First page: 496
      Abstract: Hajj, the fifth pillar of Islam, is held annually in Dhul Al-Hijjah, the twelfth month of the Islamic calendar. Pilgrims travel to Makkah and its neighbouring areas—Mina, Muzdalifah, and Arafat. Annually, about 2.5 million pilgrims perform spatiotemporally restricted rituals in these holy places that they must execute to fulfil the pilgrimage. These restrictions make the task of transportation in Hajj a big challenge. The shuttle bus service is an essential form of transport during Hajj due to its easy availability at all stages and its ability to transport large numbers of passengers. The current shuttle service suffers from operational problems, as can be deduced from the service delays and customer dissatisfaction with the service. This study provides a system to help plan the operation of the service for one of the Hajj Establishments and improve its performance by determining the optimal number of buses and cycles required for each office in the Establishment. We also present a case study in which the proposed model was applied to the non-Arab Africa Establishment shuttle service, and describe the mechanism for extracting the information required by the model from the considerably large GPS dataset of the 20,000+ buses in Hajj 2018.
      Citation: Information
      PubDate: 2021-11-29
      DOI: 10.3390/info12120496
      Issue No: Vol. 12, No. 12 (2021)
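The core sizing arithmetic behind such a plan is simple: the minimum fleet is total demand divided by what one bus can move over all of its cycles. A sketch with invented numbers; the paper's model also handles scheduling and office-level constraints that this ignores:

```python
import math

# Hedged sketch: minimum fleet size for a shuttle service.
def buses_needed(pilgrims, bus_capacity, cycles_per_bus):
    """Smallest bus count such that capacity * cycles covers all pilgrims."""
    return math.ceil(pilgrims / (bus_capacity * cycles_per_bus))
```

The ceiling matters: even one pilgrim beyond the fleet's round-trip capacity requires an additional bus.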
  • Information, Vol. 12, Pages 497: Technology-Induced Stress,
           Sociodemographic Factors, and Association with Academic Achievement and
           Productivity in Ghanaian Higher Education during the COVID-19 Pandemic

    • Authors: Harry Barton Essel, Dimitrios Vlachopoulos, Akosua Tachie-Menson, Esi Eduafua Johnson, Alice Korkor Ebeheakey
      First page: 497
      Abstract: The COVID-19 pandemic affected many nations around the globe, including Ghana, in the first quarter of 2020. To avoid the spread of the virus, the Ghanaian government ordered universities to close, although most of them had only just begun the academic year. The adoption of Emergency Remote Teaching (ERT) had adverse effects, such as technostress, notwithstanding its advantages for both students and academic faculty. This study examined two significant antecedents: digital literacy and technology dependence. In addition, the study scrutinized the effects of technostress on two relevant student qualities: academic achievement and academic productivity. A descriptive correlational study method was used to discern the prevalence of technology-induced stress among university students in Ghana. The technostress scale was used with a sample of 525 students selected based on defined eligibility criteria. A confirmatory factor analysis (CFA) was employed to calculate the measurement models and structural models. The divergent validity and convergent validity were estimated with the average variance extracted (AVE) and the coefficients of correlation between the constructs. Analysis of the online survey of 525 university students indicated that technology dependence and digital literacy contribute significantly to technostress. Additionally, technostress has adverse effects on academic achievement and academic productivity. Practical implications, limitations, and future directions for the study were also discussed.
      Citation: Information
      PubDate: 2021-11-30
      DOI: 10.3390/info12120497
      Issue No: Vol. 12, No. 12 (2021)
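The AVE mentioned above has a compact definition: the mean of the squared standardized factor loadings of a construct's indicators; convergent validity is conventionally supported when AVE is at least 0.5. A sketch with hypothetical loadings:

```python
# Average variance extracted (AVE) for one latent construct.
def average_variance_extracted(loadings):
    """Mean of squared standardized factor loadings for a construct."""
    return sum(l * l for l in loadings) / len(loadings)
```

Divergent (discriminant) validity is then typically checked by comparing each construct's AVE against its squared correlations with the other constructs, a comparison this sketch leaves out.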
  • Information, Vol. 12, Pages 498: The Mediated Effect of Social Presence on
           Social Commerce WOM Behavior

    • Authors: Carolina Herrando, Julio Jiménez-Martínez, María José Martín-De Hoyos
      First page: 498
      Abstract: Based on expectation disconfirmation theory, this study analyzes how attitudes (satisfaction and loyalty) influence interaction intention (sWOM) and, consequently, active and passive sWOM behavior. It does so by assessing the mediating role of social presence on sWOM intention and behavior. The empirical results provide several contributions. First, knowing how to increase active sWOM contributes to bridging the gap regarding how to enhance interactions between users. Second, fostering active sWOM on social commerce websites will provide companies with more positive user-generated content, since this active sWOM comes from satisfied and loyal users, and it is assumed that they will rate the product positively and report a good experience. Third, companies can benefit more from users if users interact with other users by sharing their experiences. This study sheds light on how social presence can mediate the relationship between intention and behavior, particularly when it comes to increasing active participation and brand promotion.
      Citation: Information
      PubDate: 2021-11-30
      DOI: 10.3390/info12120498
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 499: Hardware-Based Emulator with Deep
           Learning Model for Building Energy Control and Prediction Based on
           Occupancy Sensors’ Data

    • Authors: Zhijing Ye, Zheng O’Neill, Fei Hu
      First page: 499
      Abstract: Heating, ventilation, and air conditioning (HVAC) is the largest source of residential energy consumption. Occupancy sensors’ data can be used for HVAC control since it indicates the number of people in the building. HVAC and sensors form a typical cyber-physical system (CPS). In this paper, we aim to build a hardware-based emulation platform to study the occupancy data’s features, which can be further extracted by using machine learning models. In particular, we propose two hardware-based emulators to investigate the use of wired/wireless communication interfaces for occupancy sensor-based building CPS control, and the use of deep learning to predict the building energy consumption with the sensor data. We hypothesize is that the building energy consumption may be predicted by using the occupancy data collected by the sensors, and question what type of prediction model should be used to accurately predict the energy load. Another hypothesis is that an in-lab hardware/software platform could be built to emulate the occupancy sensing process. The machine learning algorithms can then be used to analyze the energy load based on the sensing data. To test the emulator, the occupancy data from the sensors is used to predict energy consumption. The synchronization scheme between sensors and the HVAC server will be discussed. We have built two hardware/software emulation platforms to investigate the sensor/HVAC integration strategies, and used an enhanced deep learning model—which has sequence-to-sequence long short-term memory (Seq2Seq LSTM)—with an attention model to predict the building energy consumption with the preservation of the intrinsic patterns. Because the long-range temporal dependencies are captured, the Seq2Seq models may provide a higher accuracy by using LSTM architectures with encoder and decoder. Meanwhile, LSTMs can capture the temporal and spatial patterns of time series data. 
The attention model can highlight the most relevant input information in the energy prediction by allocating attention weights. The communication overhead between the sensors and the HVAC control server can also be alleviated via the attention mechanism, which can automatically ignore irrelevant information and amplify relevant information during model training. Our experiments and performance analysis show that, compared with a traditional LSTM neural network, the proposed method achieves 30% higher prediction accuracy.
      Citation: Information
      PubDate: 2021-12-01
      DOI: 10.3390/info12120499
      Issue No: Vol. 12, No. 12 (2021)
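
The attention mechanism named in the abstract can be illustrated with a minimal sketch: alignment scores are normalised with a softmax into weights, and the context vector is the weighted sum of encoder states. This is a toy, pure-Python illustration with invented numbers, not the paper's implementation.

```python
import math

def attention_weights(scores):
    """Softmax over alignment scores: more relevant inputs get larger weights."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def context_vector(weights, encoder_states):
    """Weighted sum of encoder hidden states (one scalar per time step here)."""
    return sum(w * h for w, h in zip(weights, encoder_states))

# Toy occupancy encodings: the third step scores highest, so it dominates.
scores = [0.1, 0.4, 2.0]
states = [5.0, 3.0, 8.0]
w = attention_weights(scores)
ctx = context_vector(w, states)
```

Because the weights sum to one, the context vector stays inside the range of the encoder states and is pulled toward the highest-scoring step.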
  • Information, Vol. 12, Pages 500: A Closer-to-Reality Model for Comparing
           Relevant Dimensions of Recommender Systems, with Application to Novelty

    • Authors: François Fouss, Elora Fernandes
      First page: 500
      Abstract: Providing fair and convenient comparisons between recommendation algorithms—where algorithms could focus on a traditional dimension (accuracy) and/or less traditional ones (e.g., novelty, diversity, serendipity, etc.)—is a key challenge in the recent developments of recommender systems. This paper focuses on novelty and presents a new, closer-to-reality model for evaluating the quality of a recommendation algorithm by reducing the popularity bias inherent in traditional training/test set evaluation frameworks, which are biased by the dominance of popular items and their inherent features. In the suggested model, each interaction has a probability of being included in the test set that depends, at random, on a specific feature related to the dimension in focus (novelty in this work). The goal of this paper is to reconcile, in terms of evaluation (and therefore comparison), the accuracy and novelty dimensions of recommendation algorithms, leading to a more realistic comparison of their performance. The results obtained from two well-known datasets show the evolution of the behavior of state-of-the-art ranking algorithms when novelty is progressively, and fairly, given more importance in the evaluation procedure, and could lead to potential changes in the decision processes of organizations involving recommender systems.
      Citation: Information
      PubDate: 2021-12-01
      DOI: 10.3390/info12120500
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 501: Finding Central Vertices and Community
           Structure via Extended Density Peaks-Based Clustering

    • Authors: Yuanyuan Meng, Xiyu Liu
      First page: 501
      Abstract: Community detection is a significant research field of social networks, and modularity is a common method to measure the division of communities in social networks. Many classical algorithms obtain a community partition by improving the modularity of the whole network. However, a challenge remains in community division: traditional modularity optimization has difficulty avoiding the resolution limit. To a certain extent, the simple pursuit of improving modularity causes the division to deviate from the real community structure. To overcome these defects, with the help of clustering ideas, we propose a method to filter community centers by the relative connection coefficient between vertices, and we analyze the community structure accordingly. We discuss how to define the relative connection coefficient between vertices, how to select the community centers, and how to divide the remaining vertices. Experiments on both real and synthetic networks demonstrated that our algorithm is effective compared with state-of-the-art methods.
      Citation: Information
      PubDate: 2021-12-02
      DOI: 10.3390/info12120501
      Issue No: Vol. 12, No. 12 (2021)
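
The center-selection idea, picking vertices that are both locally dense and far from any denser vertex, follows the density-peaks clustering heuristic the title refers to. A minimal sketch on 2-D points (the paper works on graphs with a relative connection coefficient, so this is only an analogy, not the proposed algorithm):

```python
def density_peaks_centers(points, d_c, k):
    """Pick k cluster centers: high local density rho and large distance
    delta to any point of higher density (density-peaks style)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    n = len(points)
    # rho: number of neighbours within the cutoff distance d_c
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < d_c)
           for i in range(n)]
    # delta: distance to the nearest point of strictly higher density
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist(points[i], p) for p in points))
    # centers maximise rho * delta
    gamma = [rho[i] * delta[i] for i in range(n)]
    return sorted(range(n), key=lambda i: -gamma[i])[:k]

# Two toy blobs; the densest point of each blob should be chosen as a center.
pts = [(0, 0), (0.4, 0), (-0.4, 0), (0, 0.4),
       (5, 5), (5.4, 5), (4.6, 5), (5, 5.4)]
centers = density_peaks_centers(pts, d_c=0.5, k=2)
```

Points 0 and 4 each have three close neighbours and no denser point nearby, so they get the largest rho · delta products.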
  • Information, Vol. 12, Pages 502: Towards Automated Semantic Explainability
           of Multimedia Feature Graphs

    • Authors: Stefan Wagenpfeil, Paul Mc Kevitt, Matthias Hemmje
      First page: 502
      Abstract: Multimedia feature graphs are employed to represent features of images, video, audio, or text. Various techniques exist to extract such features from multimedia objects. In this paper, we describe the extension of such a feature graph to represent the meaning of such multimedia features and introduce a formal context-free PS-grammar (Phrase Structure grammar) to automatically generate human-understandable natural language expressions based on such features. To achieve this, we define a semantic extension to syntactic multimedia feature graphs and introduce a set of production rules for phrases of natural language English expressions. This explainability, which is founded on a semantic model, provides the opportunity to represent any multimedia feature in a human-readable and human-understandable form, which largely closes the gap between the technical representation of such features and their semantics. We show how this explainability can be formally defined and demonstrate the corresponding implementation based on our generic multimedia analysis framework. Furthermore, we show how this semantic extension can be employed to increase effectiveness in precision and recall experiments.
      Citation: Information
      PubDate: 2021-12-02
      DOI: 10.3390/info12120502
      Issue No: Vol. 12, No. 12 (2021)
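
Generating natural language from production rules, as the abstract describes, can be illustrated with a toy context-free phrase-structure grammar. The rules and vocabulary below are invented for illustration; the paper defines its own production set over semantic feature graphs.

```python
import random

# Hypothetical PS-grammar: nonterminals expand until only words remain.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["image"], ["person"]],
    "V":  [["shows"], ["contains"]],
}

def generate(symbol, rng):
    """Recursively expand a symbol by randomly choosing a production rule."""
    if symbol not in GRAMMAR:          # terminal: an English word
        return [symbol]
    rule = rng.choice(GRAMMAR[symbol])
    words = []
    for sym in rule:
        words.extend(generate(sym, rng))
    return words

sentence = " ".join(generate("S", random.Random(0)))
```

Every derivation of this grammar is a five-word sentence of the shape "the N V the N", e.g. "the image contains the person".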
  • Information, Vol. 12, Pages 503: CIDOC2VEC: Extracting Information from
           Atomized CIDOC-CRM Humanities Knowledge Graphs

    • Authors: Hassan El-Hajj, Matteo Valleriani
      First page: 503
      Abstract: The development of the field of digital humanities in recent years has led to the increased use of knowledge graphs within the community. Many digital humanities projects tend to model their data based on CIDOC-CRM ontology, which offers a wide array of classes appropriate for storing humanities and cultural heritage data. The CIDOC-CRM ontology model leads to a knowledge graph structure in which many entities are often linked to each other through chains of relations, which means that relevant information often lies many hops away from the entities it describes. In this paper, we present a method based on graph walks and text processing to extract entity information and provide semantically relevant embeddings. In the process, we were able to generate similarity recommendations as well as explore their underlying data structure. This approach was then demonstrated on the Sphaera Dataset, which was modeled according to the CIDOC-CRM data structure.
      Citation: Information
      PubDate: 2021-12-03
      DOI: 10.3390/info12120503
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 504: Service Facilities in Heritage Tourism:
           Identification and Planning Based on Space Syntax

    • Authors: Min Wang, Jianqiang Yang, Wei-Ling Hsu, Chunmei Zhang, Hsin-Lung Liu
      First page: 504
      Abstract: Improving the development level of tourism service facilities in the historic areas of old cities and realizing sustainable tourism are important strategies for urban historical protection, economic development, and cultural rejuvenation. Districts at different tourism development stages show different characteristics of tourism service facilities. This study collects location-based service data and uses space syntax to identify the correlation between the distribution of tourism service facilities and street networks, which helps decision-makers to optimize the spatial layout of tourism facilities in the planning of historic areas. Taking the southern historic area of Nanjing, China (an area with a rich collection of cultural heritage and many historic districts) as an example, the study reveals that the areas with the strongest street agglomeration and the best accessibility, as well as the districts with the most mature tourism development, form the core of the tourism facilities. Transportation and accommodation facilities, which correlate most strongly with the street network, should be clustered at traffic nodes as much as possible. In contrast, entertainment, catering, and shopping facilities, whose relationship with the street network is insignificant, can be placed in non-traffic-node areas provided that good traffic accessibility is ensured. The research results can serve as an important reference for urban decision-makers in the planning of historic areas.
      Citation: Information
      PubDate: 2021-12-05
      DOI: 10.3390/info12120504
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 505: Selected Methods of Predicting Financial
           Health of Companies: Neural Networks versus Discriminant Analysis

    • Authors: Horváthová, Mokrišová, Petruška
      First page: 505
      Abstract: This paper focuses on the financial health prediction of businesses. The issue of predicting the financial health of companies is very important in terms of their sustainability. The aim of this paper is to determine the financial health of the analyzed sample of companies and to distinguish financially healthy companies from those which are not. The analyzed sample, from the field of heat supply in Slovakia, consisted of 444 companies. To fulfil this aim, appropriate financial indicators were used. These indicators were selected using related empirical studies, a univariate logit model and a correlation matrix. In the paper, two main models were applied: multivariate discriminant analysis (MDA) and a feed-forward neural network (NN). The classification accuracy of the constructed models was compared using the confusion matrix and type 1 and type 2 errors. The performance of the models was compared by applying the Brier score and Somers’ D. The main conclusion of the paper is that the NN is a suitable alternative in assessing financial health. We confirmed that high indebtedness is a predictor of financial distress. The benefit and originality of the paper lie in the construction of an early warning model for the Slovak heating industry. From our point of view, the heating industry works in a similar way in other countries, especially in transition economies; therefore, the model is applicable in these countries as well.
      Citation: Information
      PubDate: 2021-12-06
      DOI: 10.3390/info12120505
      Issue No: Vol. 12, No. 12 (2021)
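
Two of the evaluation measures named in the abstract are easy to state concretely. A minimal sketch of the Brier score and of type 1 / type 2 error counts, with invented toy predictions (not the paper's data):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between the predicted distress probability
    and the observed 0/1 outcome; lower means better calibration."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def confusion_errors(preds, actual):
    """Type 1 error: healthy firm flagged as distressed (false positive).
    Type 2 error: distressed firm flagged as healthy (false negative)."""
    fp = sum(1 for p, a in zip(preds, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(preds, actual) if p == 0 and a == 1)
    return fp, fn

# Toy predicted distress probabilities vs. actual outcomes for four firms.
probs = [0.9, 0.2, 0.8, 0.1]
actual = [1, 0, 0, 0]
bs = brier_score(probs, actual)
fp, fn = confusion_errors([1 if p >= 0.5 else 0 for p in probs], actual)
```

Here the third firm is a false positive, so the type 1 count is 1 and the type 2 count is 0.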
  • Information, Vol. 12, Pages 506: Context-Aware Music Recommender Systems
           for Groups: A Comparative Study

    • Authors: Adrián Valera, Álvaro Lozano Murciego, María N. Moreno-García
      First page: 506
      Abstract: Nowadays, recommender systems are present in multiple application domains, such as e-commerce, digital libraries, music streaming services, etc. In the music domain, these systems are especially useful, since users often like to listen to new songs and discover new bands. At the same time, group music consumption has proliferated in this domain, not just physically, as in the past, but virtually in rooms or messaging groups created for specific purposes, such as studying, training, or meeting friends. Single-user recommender systems are no longer valid in this situation, and group recommender systems are needed to recommend music to groups of users, taking into account their individual preferences and the context of the group (when listening to music). In this paper, a group recommender system in the music domain is proposed, and an extensive comparative study is conducted, involving different collaborative filtering algorithms and aggregation methods.
      Citation: Information
      PubDate: 2021-12-07
      DOI: 10.3390/info12120506
      Issue No: Vol. 12, No. 12 (2021)
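
Aggregation methods are the step that turns individual predicted ratings into a single group recommendation. A minimal sketch of two standard strategies, average and least misery, with invented ratings (the paper compares a broader set of aggregation methods):

```python
def aggregate(ratings, method="average"):
    """Aggregate per-user predicted ratings {user: {item: score}} into one
    group score per item. 'average' favours overall satisfaction; 'least
    misery' protects the unhappiest group member."""
    items = set().union(*(r.keys() for r in ratings.values()))
    agg = {}
    for item in items:
        scores = [r[item] for r in ratings.values() if item in r]
        agg[item] = (sum(scores) / len(scores) if method == "average"
                     else min(scores))
    return agg

group = {"ana": {"song_a": 5, "song_b": 2},
         "ben": {"song_a": 1, "song_b": 3},
         "eva": {"song_a": 5, "song_b": 4}}
avg = aggregate(group, "average")
misery = aggregate(group, "least misery")
```

The two strategies disagree here: the average picks song_a (two enthusiasts outweigh one detractor), while least misery picks song_b (nobody rates it below 2).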
  • Information, Vol. 12, Pages 507: The Efficiency Analysis of Large Banks
           Using the Bootstrap and Fuzzy DEA: A Case of an Emerging Market

    • Authors: Margareta Gardijan Kedžo, Branka Tuškan Sjauš
      First page: 507
      Abstract: In this study, banks’ business performance efficiency was analysed using data envelopment analysis (DEA), with expense categories as inputs and income categories as outputs. By incorporating a bootstrap method and a fuzzy data approach into a DEA model, additional insights and sensitivity analysis of the results were obtained. This study shows how fuzzy and bootstrap DEA can be used for investigating real market problems with uncertain data in an uncertain sample. The empirical analysis was based on the period of 2009–2018 for a sample of seven of Croatia’s largest private banks. The aim of the study was also to interpret the DEA results with regards to the specific market, legal, and macroeconomic conditions, caused by the changes introduced in the last decade. The results, and the changes in the inputs and outputs over time, revealed that the market processes occurring in the observed period had a significant impact on banks’ business performance, but led to a more efficient banking system. Two banks were found to be dominant over the others regardless of the changes in the sample and data fuzziness. DEA results were additionally compared to the most important financial indicators and accounting ratios, as an alternative or additional measure of banks’ efficiency and profitability.
      Citation: Information
      PubDate: 2021-12-07
      DOI: 10.3390/info12120507
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 508: TextQ—A User Friendly Tool for
           Exploratory Text Analysis

    • Authors: April Edwards, MaryLyn Sullivan, Ezrah Itkowsky, Dana Weinberg
      First page: 508
      Abstract: As the amount of textual data available on the Internet grows substantially each year, there is a need for tools to assist with exploratory data analysis. Furthermore, to democratize the process of text analytics, tools must be usable for those with a non-technical background and those who do not have the financial resources to outsource their data analysis needs. To that end, we developed TextQ, which provides a simple, intuitive interface for exploratory analysis of textual data. We also tested the efficacy of TextQ using two case studies performed by subject matter experts—one related to a project on the detection of cyberbullying communication and another related to the use of Twitter for influence operations. TextQ was able to efficiently process over a million social media messages and provide valuable insights that directly assisted in our research efforts on these topics. TextQ is built using an open access platform and object-oriented architecture for ease of use and installation. Additional features will continue to be added to TextQ, based on the needs and interests of the installed base.
      Citation: Information
      PubDate: 2021-12-07
      DOI: 10.3390/info12120508
      Issue No: Vol. 12, No. 12 (2021)
  • Information, Vol. 12, Pages 434: An Equilibrium Analysis of a Secondary
           Mobile Data-Share Market

    • Authors: Jordan Blocher, Frederick C. Harris
      First page: 434
      Abstract: Internet service providers are offering shared data plans where multiple users may buy and sell their overage data in a secondary market managed by the ISP. We propose a game-theoretic approach to a software-defined network for modeling this wireless data exchange market: a fully connected, non-cooperative network. We identify and define the rules for the underlying progressive second price (PSP) auction for the respective network and market structure. We allow for a single degree of statistical freedom—the reserve price—and show that the secondary data exchange market allows for greater flexibility in the acquisition decision making of mechanism design. We have designed a framework to optimize the strategy space using the elasticity of supply and demand. Wireless users are modeled as a distribution of buyers and sellers with normal incentives. Our derivation of a buyer-response strategy for wireless users based on second price market dynamics leads us to prove the existence of a balanced pricing scheme. We examine shifts in the market price function and prove that our network upholds the desired properties for optimization with respect to software-defined networks and prove the existence of a Nash equilibrium in the overlying non-cooperative game.
      Citation: Information
      PubDate: 2021-10-20
      DOI: 10.3390/info12110434
      Issue No: Vol. 12, No. 11 (2021)
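
The pricing rule underlying the market above can be sketched for the simplest case. Note that the paper uses a progressive second price (PSP) auction over divisible data allocations; the single-unit Vickrey-style sketch below, with invented bids, only illustrates the second-price idea and the role of the reserve price.

```python
def second_price_winner(bids, reserve):
    """Single-unit second-price allocation with a reserve price: the highest
    bidder at or above the reserve wins and pays the larger of the reserve
    and the second-highest bid, making truthful bidding a dominant strategy."""
    eligible = {bidder: bid for bidder, bid in bids.items() if bid >= reserve}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: -kv[1])
    winner = ranked[0][0]
    price = max(reserve, ranked[1][1]) if len(ranked) > 1 else reserve
    return winner, price

# Buyers bidding for a block of overage data; the seller sets the reserve.
winner, price = second_price_winner({"u1": 4.0, "u2": 2.5, "u3": 1.0}, reserve=2.0)
```

The winner pays the second-highest bid (2.5), not their own bid; with only one eligible bidder the reserve price would bind instead.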
  • Information, Vol. 12, Pages 435: A Domain-Adaptable Heterogeneous
           Information Integration Platform: Tourism and Biomedicine Domains

    • Authors: Rafael Muñoz Gil, Manuel de Buenaga Rodríguez, Fernando Aparicio Galisteo, Diego Gachet Páez, Esteban García-Cuesta
      First page: 435
      Abstract: In recent years, information integration systems have become very popular in mashup-type applications. Information sources are normally presented in an individual and unrelated fashion, and the development of new technologies to reduce the negative effects of information dispersion is needed. A major challenge is the integration and implementation of processing pipelines using different technologies, promoting the emergence of advanced architectures capable of processing such a wide range of diverse sources. This paper describes a semantic domain-adaptable platform to integrate those sources and provide high-level functionalities, such as recommendations, shallow and deep natural language processing, text enrichment, and ontology standardization. Our proposed intelligent domain-adaptable platform (IDAP) has been implemented and tested in the tourism and biomedicine domains to demonstrate the adaptability, flexibility, modularity, and utility of the platform. Questionnaires, performance metrics, and A/B control groups’ evaluations have shown improvements when using IDAP in learning environments.
      Citation: Information
      PubDate: 2021-10-20
      DOI: 10.3390/info12110435
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 436: Evaluating a Taxonomy of Textual
           Uncertainty for Collaborative Visualisation in the Digital Humanities

    • Authors: Alejandro Benito-Santos, Michelle Doran, Aleyda Rocha, Eveline Wandl-Vogt, Jennifer Edmond, Roberto Therón
      First page: 436
      Abstract: The capture, modelling and visualisation of uncertainty has become a hot topic in many areas of science, such as the digital humanities (DH). Fuelled by critical voices among the DH community, DH scholars are becoming more aware of the intrinsic advantages that incorporating the notion of uncertainty into their workflows may bring. Additionally, the increasing availability of ubiquitous, web-based technologies has given rise to many collaborative tools that aim to support DH scholars in performing remote work alongside distant peers from other parts of the world. In this context, this paper describes two user studies seeking to evaluate a taxonomy of textual uncertainty aimed at enabling remote collaborations on digital humanities (DH) research objects in a digital medium. Our study focuses on the task of free annotation of uncertainty in texts in two different scenarios, seeking to establish the requirements of the underlying data and uncertainty models that would be needed to implement a hypothetical collaborative annotation system (CAS) that uses information visualisation and visual analytics techniques to leverage the cognitive effort implied by these tasks. To identify user needs and other requirements, we held two user-driven design experiences with DH experts and lay users, focusing on the annotation of uncertainty in historical recipes and literary texts. The lessons learned from these experiments are gathered in a series of insights and observations on how these different user groups collaborated to adapt an uncertainty taxonomy to solve the proposed exercises. Furthermore, we extract a series of recommendations and future lines of work that we share with the community in an attempt to establish a common agenda of DH research that focuses on collaboration around the idea of uncertainty.
      Citation: Information
      PubDate: 2021-10-21
      DOI: 10.3390/info12110436
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 437: Quality Assessment Methods for Textual
           Conversational Interfaces: A Multivocal Literature Review

    • Authors: Riccardo Coppola, Luca Ardito
      First page: 437
      Abstract: The evaluation and assessment of conversational interfaces is a complex task since such software products are challenging to validate through traditional testing approaches. We conducted a systematic Multivocal Literature Review (MLR), on five different literature sources, to provide a view on quality attributes, evaluation frameworks, and evaluation datasets proposed to provide aid to the researchers and practitioners of the field. We came up with a final pool of 118 contributions, including grey (35) and white literature (83). We categorized 123 different quality attributes and metrics under ten different categories and four macro-categories: Relational, Conversational, User-Centered and Quantitative attributes. While Relational and Conversational attributes are most commonly explored by the scientific literature, we observed a predominance of User-Centered attributes in the industrial literature. We also identified five different academic frameworks/tools to automatically compute sets of metrics, and 28 datasets (subdivided into seven different categories based on the type of data contained) that can produce conversations for the evaluation of conversational interfaces. Our analysis of the literature highlights that a high number of qualitative and quantitative attributes are available in the literature to evaluate the performance of conversational interfaces. Our categorization can serve as a valid entry point for researchers and practitioners to select the proper functional and non-functional aspects to be evaluated for their products.
      Citation: Information
      PubDate: 2021-10-21
      DOI: 10.3390/info12110437
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 438: Emotion Classification in Spanish:
           Exploring the Hard Classes

    • Authors: Aiala Rosá, Luis Chiruzzo
      First page: 438
      Abstract: The study of affective language has had numerous developments in the Natural Language Processing area in recent years, but the focus has been predominantly on Sentiment Analysis, an expression usually used to refer to the classification of texts according to their polarity or valence (positive vs. negative). The study of emotions, such as joy, sadness, anger, surprise, among others, has been much less developed and has fewer resources, both for English and for other languages, such as Spanish. In this paper, we present the most relevant existing resources for the study of emotions, mainly for Spanish; we describe some heuristics for the union of two existing corpora of Spanish tweets; and based on some experiments for classification of tweets according to seven categories (anger, disgust, fear, joy, sadness, surprise, and others) we analyze the most problematic classes.
      Citation: Information
      PubDate: 2021-10-21
      DOI: 10.3390/info12110438
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 439: Discovering the Arrow of Time in Machine Learning

    • Authors: J. Kasmire, Anran Zhao
      First page: 439
      Abstract: Machine learning (ML) is increasingly useful as data grow in volume and accessibility. ML can perform tasks (e.g., categorisation, decision making, anomaly detection, etc.) through experience and without explicit instruction, even when the data are too vast, complex, highly variable, or full of errors to be analysed in other ways. Thus, ML is great for natural language, images, or other complex and messy data available in large and growing volumes. Selecting ML models for tasks depends on many factors, as they vary in the supervision needed, tolerable error levels, and ability to account for order or temporal context, among many other things. Importantly, ML methods for tasks that use explicitly ordered or time-dependent data struggle with errors or data asymmetry. Most data are (implicitly) ordered or time-dependent, potentially allowing a hidden ‘arrow of time’ to affect ML performance on non-temporal tasks. This research explores the interaction of ML and implicit order using two ML models to automatically classify (a non-temporal task) tweets (temporal data) under conditions that balance volume and complexity of data. Results show that performance was affected, suggesting that researchers should carefully consider time when matching appropriate ML models to tasks, even when time is only implicitly included.
      Citation: Information
      PubDate: 2021-10-22
      DOI: 10.3390/info12110439
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 440: RDFsim: Similarity-Based Browsing over
           DBpedia Using Embeddings

    • Authors: Manos Chatzakis, Michalis Mountantonakis, Yannis Tzitzikas
      First page: 440
      Abstract: Browsing has been the core access method for the Web from its beginning. Analogously, one good practice for publishing data on the Web is to support dereferenceable URIs, to also enable plain web browsing by users. The information about one URI is usually presented through HTML tables (such as DBpedia and Wikidata pages) and graph representations (by using tools such as LODLive and LODMilla). In most cases, for an entity, the user gets all triples that have that entity as subject or as object. However, the number of such triples is sometimes very large. To tackle this issue, and to reveal similarity (and thus facilitate browsing), in this article we introduce an interactive similarity-based browsing system, called RDFsim, that offers “Parallel Browsing”, that is, it enables the user to see and browse not only the original data of the entity in focus, but also the K most similar entities of the focal entity. The similarity of entities is founded on knowledge graph embeddings; however, the indexes that we introduce for enabling real-time interaction do not depend on the particular method for computing similarity. We detail an implementation of the approach over specific subsets of DBpedia (movies, philosophers and others) and we showcase the benefits of the approach. Finally, we report detailed performance results and we describe several use cases of RDFsim.
      Citation: Information
      PubDate: 2021-10-23
      DOI: 10.3390/info12110440
      Issue No: Vol. 12, No. 11 (2021)
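
The similarity computation at the core of such a system is typically cosine similarity over entity embeddings. A minimal sketch with invented 2-dimensional vectors (real knowledge graph embeddings have hundreds of dimensions and, as the abstract notes, precomputed indexes for real-time interaction):

```python
import math

def top_k_similar(target, embeddings, k):
    """Rank entities by cosine similarity of their embedding to the target
    entity's embedding and return the K most similar ones."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    t = embeddings[target]
    scored = [(e, cos(t, v)) for e, v in embeddings.items() if e != target]
    return [e for e, _ in sorted(scored, key=lambda p: -p[1])[:k]]

# Toy 2-d embeddings: 'matrix' points nearly the same way as 'inception'.
emb = {"inception": [1.0, 0.1], "matrix": [0.9, 0.2], "amelie": [0.1, 1.0]}
similar = top_k_similar("inception", emb, k=1)
```

"Parallel Browsing" would then display these K entities alongside the triples of the entity in focus.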
  • Information, Vol. 12, Pages 441: Technology Standardization for
           Innovation: How Google Leverages an Open Digital Platform

    • Authors: Yoshiaki Fukami, Takumi Shimizu
      First page: 441
      Abstract: The aim of this study is to investigate firms’ strategies for developing and diffusing technology standards while maintaining a consensus with competitors in their industry. We conducted a case study of information technology (IT) standardization and analysed how Google drives the development and diffusion of HTML5 standards. Accordingly, this study sheds light on two strategic initiatives and two relational practices of standard development and diffusion. Adopting the technologies developed by other firms and forming alliances with other browser vendors are key to influencing the standardization process. Additionally, by building partnerships with developer communities, Google has accelerated the development and diffusion of the HTML5 standards. The mechanisms behind Google’s standardization strategies are also discussed.
      Citation: Information
      PubDate: 2021-10-23
      DOI: 10.3390/info12110441
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 442: Analysis of Gradient Vanishing of RNNs
           and Performance Comparison

    • Authors: Seol-Hyun Noh
      First page: 442
      Abstract: A recurrent neural network (RNN) combines variable-length input data with a hidden state that depends on previous time steps to generate output data. RNNs have been widely used in time-series data analysis, and various RNN algorithms have been proposed, such as the standard RNN, long short-term memory (LSTM), and gated recurrent units (GRUs). In particular, it has been experimentally proven that LSTM and GRU achieve higher validation accuracy and prediction accuracy than the standard RNN. The learning ability is a measure of how effectively the gradient of the error information is backpropagated. This study provided a theoretical and experimental basis for the result that LSTM and GRU have more efficient gradient descent than the standard RNN by analyzing and experimenting on the gradient vanishing of the standard RNN, LSTM, and GRU. As a result, LSTM and GRU are robust to the degradation of gradient descent even when they learn long-range input data, which means that the learning ability of LSTM and GRU is greater than that of the standard RNN when learning long-range input data. Therefore, LSTM and GRU achieve higher validation accuracy and prediction accuracy than the standard RNN. In addition, it was verified whether the experimental results of river-level prediction models, solar power generation prediction models, and speech signal models using the standard RNN, LSTM, and GRUs are consistent with the analysis results on gradient vanishing.
      Citation: Information
      PubDate: 2021-10-25
      DOI: 10.3390/info12110442
      Issue No: Vol. 12, No. 11 (2021)
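
The gradient vanishing analysed in the paper can be demonstrated numerically: backpropagation through a vanilla RNN multiplies the gradient by the recurrent weight times tanh′(·) at every step, and since tanh′ ≤ 1 the product decays geometrically over long ranges. A toy scalar sketch with invented parameters (not the paper's experiments):

```python
import math

def backprop_gradient_norm(w, steps, h=0.5):
    """Magnitude of the gradient after backpropagating through `steps` time
    steps of a scalar vanilla RNN: each step contributes a factor
    w * tanh'(pre-activation), which is below 1 here, so the product shrinks."""
    grad = 1.0
    for _ in range(steps):
        pre = w * h
        grad *= w * (1.0 - math.tanh(pre) ** 2)  # chain rule through tanh
    return abs(grad)

g_short = backprop_gradient_norm(w=0.9, steps=5)
g_long = backprop_gradient_norm(w=0.9, steps=50)
```

After 50 steps the gradient is many orders of magnitude smaller than after 5, which is why long-range dependencies barely update a standard RNN's early-step weights; LSTM and GRU gating mitigates exactly this decay.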
  • Information, Vol. 12, Pages 443: Optimizing Small BERTs Trained for German

    • Authors: Jochen Zöllner, Konrad Sperfeld, Christoph Wick, Roger Labahn
      First page: 443
      Abstract: Currently, the most widespread neural network architecture for training language models is the so-called BERT, which led to improvements in various Natural Language Processing (NLP) tasks. In general, the larger the number of parameters in a BERT model, the better the results obtained in these NLP tasks. Unfortunately, the memory consumption and the training duration increase drastically with the size of these models. In this article, we investigate various training techniques of smaller BERT models: We combine different methods from other BERT variants, such as ALBERT, RoBERTa, and relative positional encoding. In addition, we propose two new fine-tuning modifications leading to better performance: Class-Start-End tagging and a modified form of Linear Chain Conditional Random Fields. Furthermore, we introduce Whole-Word Attention, which reduces BERT’s memory usage and leads to a small increase in performance compared to classical Multi-Head-Attention. We evaluate these techniques on five public German Named Entity Recognition (NER) tasks, of which two are introduced by this article.
      Citation: Information
      PubDate: 2021-10-25
      DOI: 10.3390/info12110443
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 444: Investigating Machine Learning &
           Natural Language Processing Techniques Applied for Predicting Depression
           Disorder from Online Support Forums: A Systematic Literature Review

    • Authors: Isuri Anuradha Nanomi Arachchige, Priyadharshany Sandanapitchai, Ruvan Weerasinghe
      First page: 444
      Abstract: Depression is a common mental health disorder that affects an individual’s moods, thought processes and behaviours negatively, and disrupts one’s ability to function optimally. In most cases, people with depression try to hide their symptoms and refrain from obtaining professional help due to the stigma related to mental health. The digital footprint we all leave behind, particularly in online support forums, provides a window for clinicians to observe and assess such behaviour in order to make potential mental health diagnoses. Natural language processing (NLP) and Machine learning (ML) techniques are able to bridge the existing gaps in converting language to a machine-understandable format in order to facilitate this. Our objective is to undertake a systematic review of the literature on NLP and ML approaches used for depression identification on Online Support Forums (OSF). A systematic search was performed to identify articles that examined ML and NLP techniques to identify depression disorder from OSF. Articles were selected according to the PRISMA workflow. For the purpose of the review, 29 articles were selected and analysed. From this systematic review, we further analyse which combination of features extracted from NLP and ML techniques are effective and scalable for state-of-the-art Depression Identification. We conclude by addressing some open issues that currently limit real-world implementation of such systems and point to future directions to this end.
      Citation: Information
      PubDate: 2021-10-27
      DOI: 10.3390/info12110444
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 445: Passive Fault-Tolerant Control of a 2-DOF
           Robotic Helicopter

    • Authors: Manuel A. Zuñiga, Luis A. Ramírez, Gerardo Romero, Efraín Alcorta-García, Alejandro Arceo
      First page: 445
      Abstract: The presence of faults in dynamic systems causes the potential loss of some of the control objectives. For that reason, a fault-tolerant controller is required to ensure proper operation, as well as to reduce the risk of accidents. The present work proposes a passive fault-tolerant controller based on robust techniques, which are utilized to adjust a proportional-derivative scheme through a linear matrix inequality. In addition, a nonlinear term is included to improve the accuracy of the control task. The proposed methodology is implemented in the control of a two-degrees-of-freedom robotic helicopter in a simulation environment, where abrupt faults in the actuators are considered. Finally, the proposed scheme is also tested experimentally on the Quanser® 2-DOF Helicopter, highlighting the effectiveness of the proposed controller.
      Citation: Information
      PubDate: 2021-10-27
      DOI: 10.3390/info12110445
      Issue No: Vol. 12, No. 11 (2021)
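The proportional-derivative core of such a scheme can be illustrated with a toy closed loop: a PD law driving a double-integrator plant to a setpoint. The gains, plant, and time step below are invented for illustration and are unrelated to the paper's helicopter model or its LMI-based tuning:

```python
def simulate_pd(kp, kd, setpoint, steps, dt=0.01):
    """Euler-integrate a double-integrator plant under a PD control law."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        u = kp * (setpoint - pos) - kd * vel   # PD law: proportional error + derivative damping
        vel += u * dt                          # plant: acceleration equals the control input
        pos += vel * dt
    return pos
```

With well-damped gains the position settles at the setpoint; robust tuning methods such as the LMI approach in the paper choose gains like `kp` and `kd` so that this convergence survives model uncertainty and faults.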
  • Information, Vol. 12, Pages 446: The Effective Factors on Continuity of
           Corporate Information Security Management: Based on TOE Framework

    • Authors: Yongho Kim, Boyoung Kim
      First page: 446
      Abstract: In the Fourth Industrial Revolution era, data-based business management activities, driven mainly by digital transformation, have proliferated among enterprises. In this change, the information security system and its operation are emphasized as essential business activities of enterprises. This research aims to verify the relationships among the influence factors of corporate information security management based on the TOE framework. The study analyzes the effects of technical, organizational, and environmental factors on the intention, strengthening, and continuity of information security management. To this end, a survey was conducted on professionals working in areas related to information security in organizations, and 107 questionnaires were collected and analyzed. According to the results, organizational and environmental factors influenced the intention of information security management, whereas technological and environmental factors affected the strengthening of information security management. Hence, this study points out that environmental factors are the most significant for the information security administration of an organization. In addition, it turned out that the strengthening of information security management influenced the continuity of information security management more significantly than the intention of information security management did.
      Citation: Information
      PubDate: 2021-10-27
      DOI: 10.3390/info12110446
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 447: Data-Driven Multi-Agent Vehicle Routing
           in a Congested City

    • Authors: Alex Solter, Fuhua Lin, Dunwei Wen, Xiaokang Zhou
      First page: 447
      Abstract: Navigation in a traffic-congested city can prove to be a difficult task. Often a path that appears to be the fastest option is much slower due to congestion. If we can predict the effects of congestion, it may be possible to develop a better route that allows us to reach our destination more quickly. This paper studies the possibility of using a centralized real-time traffic information system containing travel time data collected from each road user. These data are made available to all users, so that they may learn and predict the effects of congestion when building a route adaptively. This method is further enhanced by combining the traffic information system data with previous routing experiences to determine the fastest route with less exploration. We test our method using a multi-agent simulation, demonstrating that it produces a lower total route time for all vehicles than using either a centralized traffic information system or direct experience alone.
      Citation: Information
      PubDate: 2021-10-27
      DOI: 10.3390/info12110447
      Issue No: Vol. 12, No. 11 (2021)
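The routing idea in the entry above, picking paths by observed travel times rather than nominal distances, reduces at each replanning step to a shortest-path query over congestion-adjusted edge weights. A minimal sketch follows; the graph and travel times are invented for illustration, and the paper's actual multi-agent model is richer:

```python
import heapq

def shortest_route(graph, source, target):
    """Dijkstra over observed travel times.

    graph: {node: {neighbor: observed_travel_time}}; assumes target is reachable.
    Returns (path, total_time).
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]
```

When shared travel-time data shows the nominally short road congested (here, A→B), the planner diverts through the longer but faster alternative.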
  • Information, Vol. 12, Pages 448: Time-Optimal Gathering under Limited
           Visibility with One-Axis Agreement

    • Authors: Pavan Poudel, Gokarna Sharma
      First page: 448
      Abstract: We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move (LCM) cycles following the well-celebrated classic oblivious robots model. We study the fundamental problem of gathering N autonomous robots on a plane, which requires all robots to meet at a single point (or to position within a small area) that is not known beforehand. We consider limited visibility, under which robots are only able to see other robots up to a constant Euclidean distance, and focus on the time complexity of gathering by robots under limited visibility. There exists an O(DG) time algorithm for this problem in the fully synchronous setting, assuming that the robots agree on one coordinate axis (say north), where DG is the diameter of the visibility graph of the initial configuration. In this article, we provide the first O(DE) time algorithm for this problem in the asynchronous setting under the same assumption of robots’ agreement on one coordinate axis, where DE is the Euclidean distance between the farthest pair of robots in the initial configuration. The runtime of our algorithm is a significant improvement since, for any initial configuration of N≥1 robots, DE≤DG, and there exist initial configurations for which DG can be quadratic in DE, i.e., DG=Θ(DE2). Moreover, our algorithm is asymptotically time-optimal since the trivial time lower bound for this problem is Ω(DE).
      Citation: Information
      PubDate: 2021-10-27
      DOI: 10.3390/info12110448
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 449: Factors That Determine the Adoption
           Intention of Direct Mobile Purchases through Social Media Apps

    • Authors: Vaggelis Saprikis, Giorgos Avlogiaris
      First page: 449
      Abstract: In the last few years, a number of social media e-business models, including the social networking giants Facebook, Pinterest and Instagram, have offered direct purchase abilities to both their users and the involved enterprises. Hence, individuals can buy directly without having to leave the social media website. At the same time, there is a significant increase in the number of online purchases through mobile devices. To add to this, nowadays, the vast majority of internet users prefer to surf via their smartphone rather than a desktop PC. The aforementioned facts reveal the abilities and potential dynamics of Mobile Social Commerce (MSC), which is considered not only the present but also the future of e-commerce, as well as an area of prosperous academic and managerial concern. In spite of its several extant abilities and its booming future, MSC has been little examined until now. Therefore, this study aims to determine the factors that impact smartphone users’ behavioral intention to adopt direct purchases through social media apps in a country where these kinds of m-services are not yet available. Specifically, it extends the well-established Unified Theory of Acceptance and Use of Technology (UTAUT) model with the main ICT facilitators (i.e., convenience, reward and security) and inhibitors (i.e., risk and anxiety). The suggested conceptual model aims to increase the understanding of the topic and strengthen the importance of this major type of MSC. Convenience sampling was applied to gather the data, and Structural Equation Modeling (SEM) was then performed to investigate the research hypotheses of the proposed conceptual model. The results show that performance expectancy exerts a positive impact on behavioral intention. Furthermore, all of the examined ICT facilitators significantly impact smartphone users’ decision to adopt direct mobile purchases through social media apps, whereas anxiety exerts a negative effect.
      Citation: Information
      PubDate: 2021-10-28
      DOI: 10.3390/info12110449
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 450: WebPGA: An Educational Technology That
           Supports Learning by Reviewing Paper-Based Programming Assessments

    • Authors: Yancy Vance Paredes, I-Han Hsiao
      First page: 450
      Abstract: Providing feedback to students is one of the most effective ways to enhance their learning. With the advancement of technology, many tools have been developed to provide personalized feedback. However, these systems are only beneficial when interactions are done on digital platforms. As paper-based assessment is still a dominantly preferred evaluation method, particularly in large blended-instruction classes, the sole use of electronic educational systems presents a gap between how students learn the subject from the physical and digital world. This has motivated the design and the development of a new educational technology that facilitates the digitization, grading, and distribution of paper-based assessments to support blended-instruction classes. With the aid of this technology, different learning analytics can be readily captured. A retrospective analysis was conducted to understand the students’ behaviors in an Object-Oriented Programming and Data Structures class from a public university. Their behavioral differences and the associated learning impacts were analyzed by leveraging their digital footprints. Results showed that students made significant efforts in reviewing their examinations. Notably, the high-achieving and the improving students spent more time reviewing their mistakes and started doing so as soon as the assessment became available. Finally, when students were guided in the reviewing process, they were able to identify items where they had misconceptions.
      Citation: Information
      PubDate: 2021-10-29
      DOI: 10.3390/info12110450
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 451: A Text Mining Approach in the
           Classification of Free-Text Cancer Pathology Reports from the South
           African National Health Laboratory Services

    • Authors: Okechinyere J. Achilonu, Victor Olago, Elvira Singh, René M. J. C. Eijkemans, Gideon Nimako, Eustasius Musenge
      First page: 451
      Abstract: A cancer pathology report is a valuable medical document that provides information for clinical management of the patient and evaluation of health care. However, there are variations in the quality of reporting in free-text style formats, ranging from comprehensive to incomplete reporting. Moreover, the increasing incidence of cancer has generated a high throughput of pathology reports. Hence, manual extraction and classification of information from these reports can be intrinsically complex and resource-intensive. This study aimed to (i) evaluate the quality of over 80,000 breast, colorectal, and prostate cancer free-text pathology reports and (ii) assess the effectiveness of random forest (RF) and variants of support vector machine (SVM) in the classification of reports into benign and malignant classes. The study approach comprises data preprocessing, visualisation, feature selection, text classification, and evaluation of performance metrics. The performance of the classifiers was evaluated across various feature sizes, which were jointly selected by four filter feature selection methods. The feature selection methods identified established clinical terms, which are synonymous with each of the three cancers. Uni-gram tokenisation using the classifiers showed that the predictive power of the RF model was consistent across various feature sizes, with overall F-scores of 95.2%, 94.0%, and 95.3% for breast, colorectal, and prostate cancer classification, respectively. The radial SVM achieved better classification performance compared with its linear variant for most of the feature sizes. The classifiers also achieved high precision, recall, and accuracy. This study supports a nationally agreed standard in pathology reporting and the use of text mining for encoding, classifying, and production of high-quality information abstractions for cancer prognosis and research.
      Citation: Information
      PubDate: 2021-10-30
      DOI: 10.3390/info12110451
      Issue No: Vol. 12, No. 11 (2021)
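The pipeline in the entry above starts from uni-gram tokenisation followed by filter-based feature selection. A toy illustration of those first two steps is sketched here; the report snippets and labels are fabricated, and the study itself uses four dedicated filter methods rather than raw per-class counts:

```python
import re
from collections import Counter

def unigrams(report):
    """Lower-case a free-text pathology report and split it into word uni-grams."""
    return set(re.findall(r"[a-z]+", report.lower()))

def class_document_frequency(reports, labels):
    """Count, per class, in how many reports each uni-gram occurs --
    the raw ingredient that filter feature-selection scores are built from."""
    freq = {}
    for report, label in zip(reports, labels):
        per_class = freq.setdefault(label, Counter())
        per_class.update(unigrams(report))
    return freq
```

Terms whose document frequency differs sharply between the benign and malignant classes (e.g. established clinical terms such as "carcinoma") are exactly those a filter method would rank highly.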
  • Information, Vol. 12, Pages 452: A Knowledge-Based Sense Disambiguation
           Method to Semantically Enhanced NL Question for Restricted Domain

    • Authors: Ammar Arbaaeen, Asadullah Shah
      First page: 452
      Abstract: Within the space of question answering (QA) systems, the most critical module to improve overall performance is question analysis processing. Extracting the lexical semantic of a Natural Language (NL) question presents challenges at syntactic and semantic levels for most QA systems. This is due to the difference between the words posed by a user and the terms presently stored in the knowledge bases. Many studies have achieved encouraging results in lexical semantic resolution on the topic of word sense disambiguation (WSD), and several other works consider these challenges in the context of QA applications. Additionally, few scholars have examined the role of WSD in returning potential answers corresponding to particular questions. However, natural language processing (NLP) is still facing several challenges to determine the precise meaning of various ambiguities. Therefore, the motivation of this work is to propose a novel knowledge-based sense disambiguation (KSD) method for resolving the problem of lexical ambiguity associated with questions posed in QA systems. The major contribution is the proposed innovative method, which incorporates multiple knowledge sources. This includes the question’s metadata (date/GPS), context knowledge, and domain ontology into a shallow NLP. The proposed KSD method is developed into a unique tool for a mobile QA application that aims to determine the intended meaning of questions expressed by pilgrims. The experimental results reveal that our method obtained comparable and better accuracy performance than the baselines in the context of the pilgrimage domain.
      Citation: Information
      PubDate: 2021-10-31
      DOI: 10.3390/info12110452
      Issue No: Vol. 12, No. 11 (2021)
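As a generic illustration of knowledge-based sense disambiguation (not the KSD method itself, which also folds in question metadata, context knowledge, and a domain ontology), a simplified Lesk-style overlap between the question context and candidate sense glosses looks like this; the senses and glosses are invented:

```python
def simplified_lesk(context, senses):
    """Choose the sense whose gloss shares the most words with the
    question context (simplified Lesk-style overlap).

    senses: {sense_name: gloss_text}
    """
    context_words = set(context.lower().split())
    def overlap(sense):
        return len(context_words & set(senses[sense].lower().split()))
    return max(senses, key=overlap)
```

Knowledge-based methods like this need no labelled training data, which is why restricted-domain QA systems often prefer them over supervised WSD.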
  • Information, Vol. 12, Pages 453: Gamifying Computer Science Education for
           Z Generation

    • Authors: Hadeel Mohammed Jawad, Samir Tout
      First page: 453
      Abstract: Generation Z members use their smart devices as part of their everyday routine. Teaching methods may need to be updated to make learning materials more interesting for this generation. This paper suggests gamifying computer science subjects to enhance the learning experience for this generation. Additionally, many students face difficulty in understanding computer science materials and algorithms, and gamifying computer science education is one of the suggested teaching methods to simplify topics and increase students’ engagement. Moreover, the field of computer science is dominated by males, and the use of gamification could increase women’s interest in the field. This paper demonstrates different techniques that were developed by the researchers to employ gamification in teaching computer science topics. Data were collected at the end of two different courses. Results show that students enjoyed the suggested teaching method and found it useful. This paper also demonstrates two tools and their gamification elements; these tools were developed by the researchers to help people learn computer programming and information security.
      Citation: Information
      PubDate: 2021-11-01
      DOI: 10.3390/info12110453
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 454: Graph Analysis Using Fast Fourier
           Transform Applied on Grayscale Bitmap Images

    • Authors: Pawel Baszuro, Jakub Swacha
      First page: 454
      Abstract: There is a surge of interest in graph analysis, sparked mainly by social network analysis done for various purposes. With social network graphs often reaching very large sizes, there is a need for capable tools to perform such analysis. In this article, we contribute to this area by presenting an original approach to calculating various graph morphisms, designed with overall performance and scalability as the primary concern. The proposed method generates a list of candidates for further analysis by first decomposing a complex network into a set of sub-graphs, transforming the sub-graphs into intermediary structures, which are then used to generate grey-scale bitmap images, and, eventually, performing image comparison using the Fast Fourier Transform. The paper discusses the proof-of-concept implementation of the method and provides experimental results achieved on sub-graphs of different sizes randomly chosen from a reference dataset. Planned future developments and key considered areas of application are also described.
      Citation: Information
      PubDate: 2021-11-01
      DOI: 10.3390/info12110454
      Issue No: Vol. 12, No. 11 (2021)
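The last two stages of the pipeline described above can be compressed into a small sketch: render a sub-graph as a grey-scale bitmap and compare spectra with the 2-D FFT. Here the bitmap is simply the adjacency matrix; the paper's intermediary structures are more elaborate, so this is an illustration of the FFT-comparison idea only:

```python
import numpy as np

def adjacency_bitmap(edges, n):
    """Render an undirected sub-graph on n nodes as a grey-scale bitmap (0-255)."""
    img = np.zeros((n, n), dtype=float)
    for i, j in edges:
        img[i, j] = img[j, i] = 255.0
    return img

def spectral_similarity(img_a, img_b):
    """Similarity in [0, 1] from the distance between 2-D FFT magnitude spectra."""
    fa = np.abs(np.fft.fft2(img_a))
    fb = np.abs(np.fft.fft2(img_b))
    num = np.linalg.norm(fa - fb)
    den = np.linalg.norm(fa) + np.linalg.norm(fb)
    return 1.0 - num / den if den else 1.0
```

A useful property for candidate generation: the FFT magnitude is invariant under cyclic shifts of the bitmap, so certain relabellings of the same structure compare as identical, while structurally different sub-graphs do not.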
  • Information, Vol. 12, Pages 455: An Intelligent Hierarchical Security
           Framework for VANETs

    • Authors: Fábio Gonçalves, Joaquim Macedo, Alexandre Santos
      First page: 455
      Abstract: Vehicular Ad hoc Networks (VANETs) are an emerging type of network that increasingly encompass a larger number of vehicles. They are the basic support for Intelligent Transportation Systems (ITS) and for establishing frameworks which enable communication among road entities and foster the development of new applications and services aimed at enhancing driving experience and increasing road safety. However, VANETs’ demanding characteristics make it difficult to implement security mechanisms, creating vulnerabilities easily explored by attackers. The main goal of this work is to propose an Intelligent Hierarchical Security Framework for VANET making use of Machine Learning (ML) algorithms to enhance attack detection, and to define methods for secure communications among entities, assuring strong authentication, privacy, and anonymity. The ML algorithms used in this framework have been trained and tested using vehicle communications datasets, which have been made publicly available, thus providing easily reproducible and verifiable results. The obtained results show that the proposed Intrusion Detection System (IDS) framework is able to detect attacks accurately, with a low False Positive Rate (FPR). Furthermore, results show that the framework can benefit from using different types of algorithms at different hierarchical levels, selecting light and fast processing algorithms in the lower levels, at the cost of accuracy, and using more precise, accurate, and complex algorithms in nodes higher in the hierarchy.
      Citation: Information
      PubDate: 2021-11-02
      DOI: 10.3390/info12110455
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 456: FPGA-Based Voice Encryption Equipment
           under the Analog Voice Communication Channel

    • Authors: Xinyu Ge, Guiling Sun, Bowen Zheng, Ruili Nan
      First page: 456
      Abstract: This paper describes a voice encryption device that can be widely used to encrypt civil voice calls. The device uses a composite encryption method that divides the speech into frames, rearranges the frames in the time domain, and encrypts the frame contents. The experimental results show that the device can complete the encryption normally under various analog voice call conditions, while voice delay, quality, encryption effect, etc. are guaranteed. Compared with traditional time-domain encryption, it effectively removes the traces of the original voice that remain in the encrypted signal, further increasing the security of the voice.
      Citation: Information
      PubDate: 2021-11-04
      DOI: 10.3390/info12110456
      Issue No: Vol. 12, No. 11 (2021)
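The time-domain rearrangement half of the composite scheme can be illustrated with a keyed frame permutation. This is a simplified sketch only: real equipment also encrypts the frame contents, and the frame length, key handling, and divisibility assumption below are invented:

```python
import random

def scramble_frames(samples, frame_len, key):
    """Split a speech signal into fixed-length frames and rearrange them in the
    time domain with a key-derived permutation.
    Assumes len(samples) is a multiple of frame_len."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    order = list(range(len(frames)))
    random.Random(key).shuffle(order)      # keyed, reproducible permutation
    return [s for idx in order for s in frames[idx]]

def unscramble_frames(scrambled, frame_len, key):
    """Re-derive the same permutation from the shared key and invert it."""
    frames = [scrambled[i:i + frame_len] for i in range(0, len(scrambled), frame_len)]
    order = list(range(len(frames)))
    random.Random(key).shuffle(order)
    original = [None] * len(frames)
    for pos, idx in enumerate(order):
        original[idx] = frames[pos]
    return [s for frame in original for s in frame]
```

Because the permutation alone leaves each frame's samples intact, a second content-encryption stage (as in the paper's composite method) is what removes the residual voice information.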
  • Information, Vol. 12, Pages 457: Design and Implementation of Energy
           Management System Based on Spring Boot Framework

    • Authors: Fang Zhang, Guiling Sun, Bowen Zheng, Liang Dong
      First page: 457
      Abstract: This paper designs and implements an energy management system based on the Spring Boot framework. The system mainly includes three layers, which are, from bottom to top, the data collection layer, the business logic layer, and the display interface layer. The data collection layer is based on the RS-485 electrical standard and the MODBUS communication protocol. These two protocols connect all energy consumption monitoring points into a mixed-topology communication network in the enterprise. The programs in the data collection layer poll each energy consumption monitoring point in the network to collect the data and transmit them to the business logic layer. The business logic layer is developed on the basis of the Spring Boot framework and mainly includes two parts: the MySQL database and the Tomcat server. In the MySQL database, the stored data are horizontally split according to the time column and stored in different data tables. The split of data reduces the load on any single data table and improves the query performance of the database. The Tomcat server is built into the Spring Boot framework to provide a basic environment for system operation. The Spring Boot framework is the core component of the system. It is responsible for collecting, storing, and analyzing data from energy consumption monitoring points, and for receiving and processing data requests from the display interface layer. It also provides standard data interfaces to external programs. The display interface layer is developed on the basis of the Vue framework and integrated into the Spring Boot framework. The display layer combines an open-source visualization chart library called ECharts to provide users with a fully functional and friendly human–computer interaction interface. Through the calculation of hardware and software costs, and considering personnel costs in different regions, the total cost of the energy management system can be estimated; in this case, the construction cost was approximately 210,000 USD. Since the system was deployed in a manufacturing company in December 2019, it has been operating stably for more than 600 days.
      Citation: Information
      PubDate: 2021-11-04
      DOI: 10.3390/info12110457
      Issue No: Vol. 12, No. 11 (2021)
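The horizontal split by the time column described above amounts to routing each reading to a per-period table. A minimal sketch with monthly shards follows; the table naming, schema, and use of SQLite are illustrative only (the system itself uses MySQL):

```python
import sqlite3
from datetime import datetime

def shard_for(ts: datetime) -> str:
    """Name of the monthly table a reading belongs to, e.g. energy_2021_11."""
    return f"energy_{ts.year}_{ts.month:02d}"

def insert_reading(conn, point_id, ts, kwh):
    """Create the month's shard on demand and store the reading in it,
    keeping any single table small and one-month queries fast."""
    table = shard_for(ts)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                 "(point_id TEXT, ts TEXT, kwh REAL)")
    conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?)",
                 (point_id, ts.isoformat(), kwh))
```

Queries scoped to a reporting period then touch only that period's table, which is the query-performance benefit the abstract describes.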
  • Information, Vol. 12, Pages 458: Measuring Discrimination against Older
           People Applying the Fraboni Scale of Ageism

    • Authors: Ágnes Hofmeister-Tóth, Ágnes Neulinger, János Debreceni
      First page: 458
      Abstract: The progressive aging of developed societies, caused by profound demographic changes, brings with it the necessity of confronting the subject of discrimination against older people. In the last 50 years, many scales of ageism have been developed to measure beliefs and attitudes towards older adults. The purpose of our study was to adapt the full Fraboni Scale of Ageism (FSA) into Hungarian and assess its reliability, validity, and psychometric properties. The sample of the study was representative of the Hungarian population, and the data collection took place online. In our study, we compare the dimensions of the scale with other international studies and present the attitudes and biases of the Hungarian population against older people. The results of the study indicate that attitudes toward older people are more positive among women, older people, and people living in villages. We conclude that the Hungarian version of the Fraboni Scale of Ageism is a suitable instrument both for measuring the extent of ageism in the Hungarian population and for contributing to further testing of the international reliability, validity, and psychometric properties of the scale.
      Citation: Information
      PubDate: 2021-11-05
      DOI: 10.3390/info12110458
      Issue No: Vol. 12, No. 11 (2021)
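Scale-adaptation work of this kind rests on straightforward item scoring: Likert responses are summed after flipping reverse-coded items so that a higher total always means a stronger measured attitude. The item numbers, scale width, and reverse-keying below are invented for illustration and are not the FSA's actual keying:

```python
def likert_total(responses, reverse_items, scale_max=4):
    """Total score over Likert items.

    responses     -- {item_number: chosen value, 1..scale_max}
    reverse_items -- item numbers whose value must be flipped before summing
    """
    total = 0
    for item, value in responses.items():
        total += (scale_max + 1 - value) if item in reverse_items else value
    return total
```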
  • Information, Vol. 12, Pages 459: Exploring the Impact of COVID-19 on
           Social Life by Deep Learning

    • Authors: Jose Antonio Jijon-Vorbeck, Isabel Segura-Bedmar
      First page: 459
      Abstract: Due to the globalisation of the COVID-19 pandemic, and the expansion of social media as the main source of information for many people, there have been a great variety of different reactions surrounding the topic. The World Health Organization (WHO) announced in December 2020 that they were currently fighting an “infodemic” in the same way as they were fighting the pandemic. An “infodemic” relates to the spread of information that is not controlled or filtered, and can have a negative impact on society. If not managed properly, an aggressive or negative tweet can be very harmful and misleading among its recipients. Therefore, authorities at WHO have called for action and asked the academic and scientific community to develop tools for managing the infodemic by the use of digital technologies and data science. The goal of this study is to develop and apply natural language processing models using deep learning to classify a collection of tweets that refer to the COVID-19 pandemic. Several simpler and widely used models are applied first and serve as a benchmark for deep learning methods, such as Long Short-Term Memory (LSTM) and Bidirectional Encoder Representations from Transformers (BERT). The results of the experiments show that the deep learning models outperform the traditional machine learning algorithms. The best approach is the BERT-based model.
      Citation: Information
      PubDate: 2021-11-05
      DOI: 10.3390/info12110459
      Issue No: Vol. 12, No. 11 (2021)
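The "simpler and widely used" benchmark models mentioned above are typically bag-of-words classifiers. A self-contained sketch of one such baseline, multinomial Naive Bayes with add-one smoothing, is shown here; the toy tweets and labels are fabricated and the study's own benchmarks are not specified in this listing:

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    return re.findall(r"[a-z#@]+", text.lower())

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing -- the kind of simple
    benchmark deep models like LSTM and BERT are compared against."""

    def fit(self, tweets, labels):
        self.word_counts = defaultdict(Counter)   # per-class token counts
        self.class_counts = Counter(labels)       # class priors
        for text, y in zip(tweets, labels):
            self.word_counts[y].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_score(y):
            counts = self.word_counts[y]
            total = sum(counts.values())
            s = math.log(self.class_counts[y] / sum(self.class_counts.values()))
            for w in tokenize(text):
                s += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.class_counts, key=log_score)
```

Baselines like this establish the floor that deep models must beat; in the study's experiments, the BERT-based model performed best.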
  • Information, Vol. 12, Pages 460: Asset Management Method of Industrial IoT
           Systems for Cyber-Security Countermeasures

    • Authors: Noritaka Matsumoto, Junya Fujita, Hiromichi Endoh, Tsutomu Yamada, Kenji Sawada, Osamu Kaneko
      First page: 460
      Abstract: Cyber-security countermeasures are important for IIoT (industrial Internet of things) systems in which IT (information technology) and OT (operational technology) are integrated. Appropriate asset management is the key to creating strong security systems that protect against various cyber threats. However, the timely and coherent asset management methods used for conventional IT systems are difficult to implement for IIoT systems, because these systems are composed of various network protocols, various devices, and open technologies. Besides, it is necessary to guarantee reliable and real-time control and to save CPU and memory usage for legacy OT devices. In this study, therefore, (1) we model various asset configurations for IIoT systems and design a data structure based on SCAP (Security Content Automation Protocol); (2) we design functions to automatically acquire detailed information from edge devices via an “asset configuration management agent”, which ensures a low processing load; and (3) we implement the proposed asset management system on real edge devices and evaluate its functions. Our contribution is to automate an asset management method that is valid for cyber-security countermeasures in IIoT systems.
      Citation: Information
      PubDate: 2021-11-08
      DOI: 10.3390/info12110460
      Issue No: Vol. 12, No. 11 (2021)
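A minimal shape for an asset record in such a system might look as follows. The field names and device classes are guesses for illustration, loosely in the spirit of SCAP-style asset identification; the paper's actual data structure models IIoT configurations in far more detail:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AssetRecord:
    """One managed IIoT device, loosely following SCAP-style asset identification."""
    asset_id: str
    hostname: str
    ip_address: str
    device_class: str                              # e.g. "PLC", "HMI", "gateway"
    software: list = field(default_factory=list)   # (name, version) pairs

def inventory(assets):
    """Serialise the asset list, e.g. for upload to a central management server."""
    return [asdict(a) for a in assets]
```

Keeping records this small is one way an agent on a legacy OT device can report configuration data without a meaningful CPU or memory cost.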
  • Information, Vol. 12, Pages 461: Automation of Basketball Match Data

    • Authors: Łukasz Chomątek, Kinga Sierakowska
      First page: 461
      Abstract: Despite the fact that sport plays a substantial role in people’s lives, funding varies significantly from one discipline to another. For example, in Poland, women’s basketball in the lower divisions is developing primarily thanks to enthusiasts. The aim of the work was to design and implement a system for analyzing match protocols containing data about the match. Particular attention was devoted to the course of the game, i.e., the order of scoring points. This type of data is not typically stored on the official websites of basketball associations but is significant from the point of view of coaches. The obtained data can be utilized to analyze the team’s game during the season, the quality of players, etc. In terms of obtaining data from match protocols, a dedicated algorithm for identifying the table was used, while a neural network was utilized to recognize the numbers (with 70% accuracy). The conducted research has shown that the proposed system is well suited for data acquisition based on match protocols, which implies the possibility of increasing the availability of data on the games and, in turn, the development of this sport discipline. The obtained conclusions can be generalized to other disciplines where the games are recorded in paper form.
      Citation: Information
      PubDate: 2021-11-08
      DOI: 10.3390/info12110461
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 462: Profiling Attack against RSA Key
           Generation Based on a Euclidean Algorithm

    • Authors: Sadiel de la Fe, Han-Byeol Park, Bo-Yeon Sim, Dong-Guk Han, Carles Ferrer
      First page: 462
      Abstract: A profiling attack is a powerful variant among the noninvasive side-channel attacks. In this work, we target RSA key generation relying on the binary version of the extended Euclidean algorithm for modular inverse and GCD computations. To date, this algorithm has only been exploited by simple power analysis; therefore, the countermeasures described in the literature are focused on mitigating only this kind of attack. We demonstrate that one of those countermeasures is not effective in preventing profiling attacks. The feasibility of our approach relies on the extraction of several leakage vectors from a single power trace. Moreover, because there are known relationships between the secrets and the public modulus in RSA, the uncertainty in some of the guessed secrets can be reduced by simple tests. This increases the effectiveness of the proposed attack.
      Citation: Information
      PubDate: 2021-11-09
      DOI: 10.3390/info12110462
      Issue No: Vol. 12, No. 11 (2021)
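The binary extended Euclidean algorithm targeted above replaces divisions with halvings and subtractions, and it is precisely these data-dependent branches (which variable is even, which is larger) that leak through the power side channel. A textbook sketch of the modular-inverse case with an odd modulus, as when computing RSA's CRT coefficient q⁻¹ mod p, is shown here; this is the generic algorithm, not the attacked implementation:

```python
def binary_modinv(a, m):
    """Inverse of a modulo an odd m with gcd(a, m) == 1, using only
    halvings and subtractions (binary extended Euclidean algorithm)."""
    u, v = a, m
    x1, x2 = 1, 0
    while u != 1 and v != 1:
        while u % 2 == 0:                  # halve u, keeping x1*a == u (mod m)
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
        while v % 2 == 0:                  # halve v, keeping x2*a == v (mod m)
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
        if u >= v:                         # subtract the smaller from the larger
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % m if u == 1 else x2 % m
```

Every branch taken here depends on secret intermediate values, which is why a profiling attack can learn the key material from the branch-dependent power patterns of a single trace.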
  • Information, Vol. 12, Pages 463: RADAR: Resilient Application for
           Dependable Aided Reporting

    • Authors: Antonia Azzini, Nicola Cortesi, Giuseppe Psaila
      First page: 463
      Abstract: Many organizations must produce many reports for various reasons. Although this activity could appear simple to carry out, it is not: indeed, generating reports requires the collection of possibly large and heterogeneous data sets. Furthermore, different professional figures are involved in the process, possibly with different skills (database technicians, domain experts, employees): the lack of common knowledge and of a unifying framework significantly obstructs the effective and efficient definition and continuous generation of reports. This paper presents a novel framework named RADAR, which is the acronym for “Resilient Application for Dependable Aided Reporting”: the framework has been devised to be a “bridge” between data and the employees in charge of generating reports. Specifically, it builds a common knowledge base in which database administrators and domain experts describe their knowledge about the application domain and the gathered data; this knowledge can be browsed by employees to find the relevant data to aggregate and insert into reports while designing report layouts; the framework assists the overall process from data definition to report generation. The paper presents the application scenario and the vision by means of a running example, defines the data model, and presents the architecture of the framework.
      Citation: Information
      PubDate: 2021-11-09
      DOI: 10.3390/info12110463
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 464: Effect of Personality Traits on Banner
           Advertisement Recognition

    • Authors: Stefanos Balaskas, Maria Rigou
      First page: 464
      Abstract: This article investigates the effect of personality traits on the attitude of web users towards online advertising. Utilizing and analyzing personality traits, along with possible correlations between these traits and their influence on consumers’ buying behavior, is crucial not only in research studies; it also holds for commercial implementations, as it allows businesses to set up and run sophisticated, strategic campaign designs in the field of digital marketing. The article starts with a literature review on advertisement recall and personality traits, followed by the procedure and processes undertaken to conduct the experiment, construct the online store, and design and place the advertisements. Data collected from the personality questionnaire and the two experiment questionnaires (pre- and post-test) are presented using descriptive statistics, and data collected from the eye tracker are analyzed using visual behavior assessment metrics. The results show that personality traits, and especially honesty–humility, can prove to be a predictive force for some important aspects of banner advertisement recognizability.
      Citation: Information
      PubDate: 2021-11-10
      DOI: 10.3390/info12110464
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 465: Data Ownership: A Survey

    • Authors: Jad Asswad, Jorge Marx Gómez
      First page: 465
      Abstract: The importance of data is increasing alongside its proliferation in today's world. In the big data era, data is becoming a main source of innovation, knowledge and insight, as well as a competitive and financial advantage in the race for information procurement. This interest in acquiring and exploiting data, in addition to existing concerns regarding the privacy and security of information, raises the question of who should own data and how the ownership of data can be preserved. This paper discusses and analyses the concept of data ownership and provides an overview of the subject from different points of view. It also surveys the state of the art of data ownership in the health, transportation, industry, energy and smart city sectors and outlines lessons learned, with an extended definition of data ownership that may pave the way for future research and work in this area.
      Citation: Information
      PubDate: 2021-11-10
      DOI: 10.3390/info12110465
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 466: Tortuosity Index Calculations in Retinal
           Images: Some Criticalities Arising from Commonly Used Approaches

    • Authors: Francesco Martelli, Claudia Giacomozzi
      First page: 466
      Abstract: A growing body of research in retinal imaging has recently been considering vascular tortuosity measures or indexes, with definitions and methods mostly derived from cardiovascular research. However, the retinal microvasculature has its own peculiarities that must be considered in order to produce reliable measurements. This study analyzed and compared various derived metrics (e.g., TI, TI_avg, TI*CV) across four existing computational workflows. Specifically, applying the models to two critical OCT images highlighted the main pitfalls of the methods, which may fail to reliably differentiate a highly tortuous image from a normal one. A tentative, encouraging approach to mitigate the issue on the same illustrative OCT images is described in the paper, based on the suggested index TI*CV.
      Citation: Information
      PubDate: 2021-11-10
      DOI: 10.3390/info12110466
      Issue No: Vol. 12, No. 11 (2021)
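The tortuosity indexes discussed above build on the classic arc-over-chord ratio used in vascular imaging; the paper's TI_avg and TI*CV are refinements of it. A minimal sketch of the basic ratio (the function name and point-list input are illustrative, not the paper's implementation):

```python
import math

def tortuosity_index(points):
    """Basic tortuosity index: arc length of the sampled vessel
    centerline divided by the straight-line (chord) distance between
    its endpoints. Equals 1.0 for a straight segment, > 1.0 otherwise.
    Assumes the endpoints are distinct (open, non-looping segment).
    This is the generic arc/chord definition, not the paper's TI*CV."""
    arc = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord
```

A straight polyline such as `[(0, 0), (1, 0), (2, 0)]` gives 1.0, while a bent one like `[(0, 0), (1, 1), (2, 0)]` gives about 1.41; the criticalities the paper describes arise when such per-segment ratios are averaged or normalized differently across workflows.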
  • Information, Vol. 12, Pages 467: Research on Building DSM Fusion Method
           Based on Adaptive Spline and Target Characteristic Guidance

    • Authors: Jinming Liu, Hao Chen, Shuting Yang
      First page: 467
      Abstract: In order to adapt to the practical scenario of a stereo satellite observing the same area sequentially, and to improve the accuracy of target-oriented 3D reconstruction, this paper proposes a building DSM fusion and update method based on adaptive splines and target characteristic guidance. The method analyzes the characteristics of surface building targets to exploit their intrinsic geometric structure, establishes a nonlinear fusion scheme guided by those characteristics to fuse multiple DSMs effectively while preserving the targets' structural features, and supports online updating of the DSM to meet the needs of practical engineering applications. DSM fusion experiments were conducted on typical urban-area images of different scenes. The experimental results show that the proposed method effectively constrains and improves the DSM of buildings, and that the structural integrity of the reconstructed 3D target models is significantly improved, providing an effective and efficient DSM constraint method for buildings.
      Citation: Information
      PubDate: 2021-11-10
      DOI: 10.3390/info12110467
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 468: Analyzing the Behavior and Financial
           Status of Soccer Fans from a Mobile Phone Network Perspective: Euro 2016,
           a Case Study

    • Authors: Gergő Pintér, Imre Felde
      First page: 468
      Abstract: In this study, Call Detail Records (CDRs) covering Budapest for the month of June 2016 were analyzed. During this observation period, the 2016 UEFA European Football Championship took place, which significantly affected the habits of residents despite the fact that not a single match was played in the city. We evaluated the fans’ behavior in Budapest during and after the Hungarian matches and found that mobile phone network activity reflected the football fans’ behavior, demonstrating the potential of mobile phone network data in a social sensing system. The Call Detail Records were enriched with mobile phone properties and used to analyze the subscribers’ devices. Applying the device information (Type Allocation Code) obtained from the activity records, the Subscriber Identity Modules (SIMs) that do not operate in cell phones were omitted from the mobility analyses, allowing us to focus on the behavior of people. Mobile phone price was proposed and evaluated as a socioeconomic indicator, and a correlation between phone price and mobility habits was found. We also found that, besides cell phone price, subscriber age and subscription type also affected users’ mobility. On the other hand, these factors did not seem to affect their interest in football.
      Citation: Information
      PubDate: 2021-11-12
      DOI: 10.3390/info12110468
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 469: Partial Fractional Fourier Transform
           (PFrFT)-MIMO-OFDM for Known Underwater Acoustic Communication Channels

    • Authors: Yixin Chen, Carmine Clemente, John J. Soraghan
      First page: 469
      Abstract: Communication over doubly selective channels (both time and frequency selective) suffers from significant intercarrier interference (ICI). This problem is severe in underwater acoustic communications. In this paper, a novel partial fractional (PFrFT)-MIMO-OFDM system is proposed and implemented to further mitigate ICI. A new iterative band minimum mean square error (BMMSE) weight combining based on LDLH factorization is used in a scenario of perfect knowledge of channel information. The proposed method is extended from SISO-OFDM configuration to MIMO-OFDM. Simulation results demonstrate that the proposed PFrFT-LDLH outperforms the other methods in the SISO-OFDM scenario and that its performance can be improved in MIMO-OFDM scenarios.
      Citation: Information
      PubDate: 2021-11-12
      DOI: 10.3390/info12110469
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 470: Usage and Temporal Patterns of Public
           Bicycle Systems: Comparison among Points of Interest

    • Authors: Xingchen Yan, Liangpeng Gao, Jun Chen, Xiaofei Ye
      First page: 470
      Abstract: The public bicycle system is an important component of “mobility as a service” and has become increasingly popular in recent years. To provide a better understanding of station activity and the driving mechanisms of public bicycle systems, this study compares the usage and temporal characteristics of public bicycles in the vicinity of the most common commuting-related points of interest and land uses. It applies the peak hour factor, distribution fitting, and K-means clustering analysis to station-based data and compares public bicycle usage and operation among different points of interest and land uses. The following results are obtained: (1) the peak demand at universities and hospitals is return-oriented, while that at middle schools is hire-oriented; (2) both bike hire and return are frequent at metro stations and hospitals, while at malls only hire is; (3) compared to middle schools and subway stations, which have the shortest bike usage duration, malls have the longest, at 18.08 min; and (4) medical and transportation land, with the most pronounced morning return peak and the most concentrated usage over a whole day, respectively, both present a lag between bike rental and return. Commercial and office land present the highest rental–return similarity.
      Citation: Information
      PubDate: 2021-11-15
      DOI: 10.3390/info12110470
      Issue No: Vol. 12, No. 11 (2021)
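The peak hour factor applied above is a standard traffic-engineering ratio of hourly volume to the scaled-up busiest sub-interval. A sketch assuming 15-minute counts within the peak hour (the function and data are illustrative, not the study's):

```python
def peak_hour_factor(counts_15min):
    """Peak hour factor (PHF): the hourly volume divided by four times
    the busiest 15-minute count within that hour. Values near 1.0 mean
    demand is spread evenly across the hour; low values mean demand is
    concentrated in a short burst."""
    hourly_volume = sum(counts_15min)
    return hourly_volume / (4 * max(counts_15min))
```

For instance, four equal 15-minute counts give a PHF of 1.0, whereas a single dominant quarter-hour pulls the factor down, which is how station demand peakedness can be compared across points of interest.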
  • Information, Vol. 12, Pages 471: Severity Assessment and Progression
           Prediction of COVID-19 Patients Based on the LesionEncoder Framework and
           Chest CT

    • Authors: You-Zhen Feng, Sidong Liu, Zhong-Yuan Cheng, Juan C. Quiroz, Dana Rezazadegan, Ping-Kang Chen, Qi-Ting Lin, Long Qian, Xiao-Fang Liu, Shlomo Berkovsky, Enrico Coiera, Lei Song, Xiao-Ming Qiu, Xiang-Ran Cai
      First page: 471
      Abstract: Automatic severity assessment and progression prediction can facilitate admission, triage, and referral of COVID-19 patients. This study aims to explore the potential use of lung lesion features in the management of COVID-19, based on the assumption that lesion features may carry important diagnostic and prognostic information for quantifying infection severity and forecasting disease progression. A novel LesionEncoder framework is proposed to detect lesions in chest CT scans and to encode lesion features for automatic severity assessment and progression prediction. The LesionEncoder framework consists of a U-Net module for detecting lesions and extracting features from individual CT slices, and a recurrent neural network (RNN) module for learning the relationship between feature vectors and collectively classifying the sequence of feature vectors. Chest CT scans of two cohorts of COVID-19 patients from two hospitals in China were used for training and testing the proposed framework. When applied to assessing severity, this framework outperformed baseline methods achieving a sensitivity of 0.818, specificity of 0.952, accuracy of 0.940, and AUC of 0.903. It also outperformed the other tested methods in disease progression prediction with a sensitivity of 0.667, specificity of 0.838, accuracy of 0.829, and AUC of 0.736. The LesionEncoder framework demonstrates a strong potential for clinical application in current COVID-19 management, particularly in automatic severity assessment of COVID-19 patients. This framework also has a potential for other lesion-focused medical image analyses.
      Citation: Information
      PubDate: 2021-11-15
      DOI: 10.3390/info12110471
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 472: Revolutions Take Time

    • Authors: Peter Wittenburg, George Strawn
      First page: 472
      Abstract: The 2018 paper titled “Common Patterns in Revolutionary Infrastructures and Data” has been cited frequently, since we compared the current discussions about research data management with the developments of large infrastructures in the past, believing, like philosophers such as Luciano Floridi, that the creation of an interoperable data domain will also be a revolutionary step. We identified the FAIR principles and the FAIR Digital Objects as nuclei for achieving the necessary convergence without which such new infrastructures will not take off. In this follow-up paper, we elaborate on some factors indicating that it will still take much time until breakthroughs are achieved, mainly owing to sociological and political reasons. Therefore, it is important to describe visions such as FDOs as self-standing entities, the easy plug-in concept, and the built-in security more explicitly to give a long-range perspective and convince policymakers and decision-makers. We also looked at major funding programs, which all follow different approaches and do not yet define a converging core. This can be seen as an indication that these funding programs have huge potential and increase awareness of data management aspects, but that we are far from the converging agreements we will ultimately need to create a globally integrated data space. Finally, we discuss the roles of some major stakeholders who are all relevant in the process of agreement finding. Most of them are bound by short-term project cycles and funding constraints, not giving them sufficient space to work on long-term convergence concepts and take risks. The great opportunity to get funds for projects improving approaches and technology, with the inherent danger of promising too much and the need for continuous reporting and producing visible results after comparably short periods, is like a vicious cycle without a possibility to break out.
We can recall that coming to the Internet with TCP/IP as a convergence standard was dependent on years of DARPA funding. Building large revolutionary infrastructures seems to be dependent on decision-makers that dare to think strategically and test out promising concepts at a larger scale.
      Citation: Information
      PubDate: 2021-11-16
      DOI: 10.3390/info12110472
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 473: Help Me Learn! Architecture and
           Strategies to Combine Recommendations and Active Learning in Manufacturing

    • Authors: Patrik Zajec, Jože M. Rožanec, Elena Trajkova, Inna Novalija, Klemen Kenda, Blaž Fortuna, Dunja Mladenić
      First page: 473
      Abstract: This research work describes an architecture for building a system that guides a user from a forecast generated by a machine learning model through a sequence of decision-making steps. The system is demonstrated in a manufacturing demand forecasting use case and can be extended to other domains. In addition, the system provides the means for knowledge acquisition by gathering data from users. Finally, it implements an active learning component and compares multiple strategies to recommend media news to the user. We compare such strategies through a set of experiments to understand how they balance learning and provide accurate media news recommendations to the user. The media news aims to provide additional context to demand forecasts and enhance judgment on decision-making.
      Citation: Information
      PubDate: 2021-11-16
      DOI: 10.3390/info12110473
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 474: DBA_SSD: A Novel End-to-End Object
           Detection Algorithm Applied to Plant Disease Detection

    • Authors: Jun Wang, Liya Yu, Jing Yang, Hao Dong
      First page: 474
      Abstract: In response to the difficulty of plant leaf disease detection and classification, this study proposes a novel plant leaf disease detection method called deep block attention SSD (DBA_SSD) for disease identification and disease severity classification of plant leaves. We propose three plant leaf detection methods, namely, squeeze-and-excitation SSD (Se_SSD), deep block SSD (DB_SSD), and DBA_SSD. Se_SSD fuses the SSD feature extraction network with a channel attention mechanism, DB_SSD improves the VGG feature extraction network, and DBA_SSD fuses the improved VGG network with a channel attention mechanism. To reduce training time and accelerate the training process, the convolutional layers of the VGG model pretrained on the ImageNet dataset are transferred to this model, and the collected plant leaf disease image dataset is randomly divided into training, validation, and test sets in the ratio 8:1:1. We chose the PlantVillage dataset after careful consideration because it contains images relevant to the domain of interest. This dataset consists of images of 14 plants, including apples, tomatoes, strawberries, peppers, and potatoes, as well as the leaves of other plants. In addition, data augmentation methods, such as histogram equalization and horizontal flipping, were used to expand the image data. The performance of the three improved algorithms is compared and analyzed in the same environment against the classical target detection algorithms YOLOv4, YOLOv3, Faster RCNN, and YOLOv4-tiny. Experiments show that DBA_SSD outperforms the two other improved algorithms, and its performance in the comparative analysis is superior to the other target detection algorithms.
      Citation: Information
      PubDate: 2021-11-16
      DOI: 10.3390/info12110474
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 475: The Use of Information and Communication
           Technology (ICT) in the Implementation of Instructional Supervision and
           Its Effect on Teachers’ Instructional Process Quality

    • Authors: Bambang Budi Wiyono, Agus Wedi, Saida Ulfa, Arda Purnama Putra
      First page: 475
      Abstract: This study aimed to explore communication techniques based on the information and communication technology (ICT) used in the implementation of instructional supervision to determine their effect on the teacher’s learning process and find effective techniques to improve the quality of the teacher’s learning process. This research was conducted in Blitar City with a sample of 60 teachers through a random sampling technique. The data collection technique used a rating scale, checklist, and open-form questionnaire. Descriptive statistics were used to describe the data, while the Pearson product-moment correlation techniques and multiple regression were used to test the research hypotheses. The results show that the most widely used ICT-based communication techniques are WhatsApp, Google Meet, Zoom, Skype, and Google Forms. These are followed by email, video-recording, and audio-recording techniques. The use of ICT is still rare. There is a significant relationship between the use of ICT in instructional supervision and the quality of the teacher’s teaching-learning process, except when using telephones and televisions. ICT techniques are most commonly used for synchronous communication, followed by use for sharing information, and recording activities. The use of ICT in instructional supervision simultaneously affects the teacher’s instructional process.
      Citation: Information
      PubDate: 2021-11-16
      DOI: 10.3390/info12110475
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 476: Predicting Student Dropout in Self-Paced
           MOOC Course Using Random Forest Model

    • Authors: Sheran Dass, Kevin Gary, James Cunningham
      First page: 476
      Abstract: A significant problem in Massive Open Online Courses (MOOCs) is the high rate of student dropout. An effective student dropout prediction model for MOOC courses can identify the factors responsible and provide insight into how to initiate interventions that increase student success in a MOOC. Different features and various approaches are available for the prediction of student dropout in MOOC courses. In this paper, data derived from a self-paced math course, College Algebra and Problem Solving, offered on the MOOC platform Open edX in partnership with Arizona State University (ASU) from 2016 to 2020, are considered. This paper presents a model to predict student dropout from a MOOC course given a set of features engineered from students' daily learning progress. The Random Forest model from Machine Learning (ML) is used for the prediction and is evaluated using validation metrics including accuracy, precision, recall, F1-score, Area Under the Curve (AUC), and the Receiver Operating Characteristic (ROC) curve. The model can predict the dropout or continuation of students on any given day of the MOOC course with an accuracy of 87.5%, an AUC of 94.5%, a precision of 88%, a recall of 87.5%, and an F1-score of 87.5%. The contributing features and their interactions are explained using Shapley values.
      Citation: Information
      PubDate: 2021-11-17
      DOI: 10.3390/info12110476
      Issue No: Vol. 12, No. 11 (2021)
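The validation metrics quoted above (accuracy, precision, recall, F1) all derive from the confusion matrix of the binary dropout/continue decision. A minimal sketch computing them from predicted and true labels; the labels in the usage example are hypothetical, not the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary labels,
    where 1 = dropout (positive class) and 0 = continuation."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For example, `binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])` gives accuracy 0.75, precision 1.0, recall 0.5 and F1 ≈ 0.667; the AUC additionally requires the model's predicted probabilities rather than hard labels.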
  • Information, Vol. 12, Pages 477: The Impact of Social Media Activities on
           Brand Equity

    • Authors: Ra’ed Masa’deh, Shafig AL-Haddad, Dana Al Abed, Hadeel Khalil, Lina AlMomani, Taghreed Khirfan
      First page: 477
      Abstract: This study aims to investigate the impact of Social Media Activities on brand equity (brand awareness and brand image). A cross-sectional quantitative study has been conducted using a validated questionnaire distributed to a convenience sample of 362 participants who used one or more forms of an Airline’s social media. Multiple Regression analysis was performed using SPSS version 20 to test the hypotheses. Results revealed a significant impact of Social Media Activities as a whole on brand equity. It was found that entertainment, customization, interaction and EWOM significantly affected the brand image, while customization, trendiness, interaction and EWOM significantly affected brand awareness. This study is one of few to examine the impact of social media activities on brand equity towards Airlines in Middle Eastern countries. The study provided several theoretical and practical implications that can benefit airline managers in their marketing efforts using various social media activities.
      Citation: Information
      PubDate: 2021-11-18
      DOI: 10.3390/info12110477
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 478: Risk Factors When Implementing ERP
           Systems in Small Companies

    • Authors: Ann Svensson, Alexander Thoss
      First page: 478
      Abstract: Implementation of enterprise resource planning (ERP) systems often aims to improve the companies’ processes in order to gain competitive advantage on the market. Especially, small companies need to integrate systems with suppliers and customers; hence, ERP systems often become a requirement. ERP system implementation processes in small enterprises contain several risk factors. Research has concluded that ERP implementation projects fail to a relatively high degree. Small companies are found to be constrained by limited resources, limited IS (information systems) knowledge and lack of IT expertise in ERP implementation. There are relatively few empirical research studies on implementing ERP systems in small enterprises and there is a large gap in research that could guide managers of small companies. This paper is based on a case study of three small enterprises that are planning to implement ERP systems that support their business processes. The aim of the paper is to identify the risk factors that can arise when implementing ERP systems in small enterprises. The analysis shows that an ERP system is a good solution to avoid using many different, separate systems in parallel. However, the study shows that it is challenging to integrate all systems used by suppliers and customers. An ERP system can include all information in one system and all information can also easily be accessed within that system. However, the implementation could be a demanding process as it requires engagement from all involved people, especially the managers of the companies.
      Citation: Information
      PubDate: 2021-11-19
      DOI: 10.3390/info12110478
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 479: The Use of ICT for Communication between
           Teachers and Students in the Context of Higher Education Institutions

    • Authors: João Batista, Helena Santos, Rui Pedro Marques
      First page: 479
      Abstract: Recently, the communication paradigm in higher education has been changing because of the ease of access to the Internet and the high number of mobile devices. Thus, universities have increased their interest in adopting different and sophisticated communication technologies to improve student participation in the educational process. This study aimed to assess how students and teachers use communication technologies to communicate with each other and what their expectations, satisfaction, and attitudes regarding the results of this use are. An analysis model was used in a case study at the University of Aveiro to support the study. Data were obtained through an online questionnaire, which collected 570 responses from students and 172 responses from teachers. These data were processed through descriptive statistics techniques and inference tests (t-tests). The primary outcomes are that publishing and sharing technologies and electronic mail are the communication technologies most commonly used by students and teachers, suggesting that their use will not decline soon. However, other communication technologies were also revealed to be widely used and accepted, with excellent levels of confirmation of expectation.
      Citation: Information
      PubDate: 2021-11-19
      DOI: 10.3390/info12110479
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 480: Personalized Advertising Computational
           Techniques: A Systematic Literature Review, Findings, and a Design

    • Authors: Iosif Viktoratos, Athanasios Tsadiras
      First page: 480
      Abstract: This work conducts a systematic literature review about the domain of personalized advertisement, and more specifically, about the techniques that are used for this purpose. State-of-the-art publications and techniques are presented in detail, and the relationship of this domain with other related domains such as artificial intelligence (AI), semantic web, etc., is investigated. Important issues such as (a) business data utilization in personalized advertisement models, (b) the cold start problem in the domain, (c) advertisement visualization issues, (d) psychological factors in the personalization models, (e) the lack of rich datasets, and (f) user privacy are highlighted and are pinpointed to help and inspire researchers for future work. Finally, a design framework for personalized advertisement systems has been designed based on these findings.
      Citation: Information
      PubDate: 2021-11-19
      DOI: 10.3390/info12110480
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 481: Special Issue on Emerging Trends and
           Challenges in Supervised Learning Tasks

    • Authors: Barbara Pes
      First page: 481
      Abstract: With the massive growth of data-intensive applications, the machine learning field has gained widespread popularity [...]
      Citation: Information
      PubDate: 2021-11-19
      DOI: 10.3390/info12110481
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 482: Raising Awareness about Cloud Security in
           Industry through a Board Game

    • Authors: Tiange Zhao, Tiago Gasiba, Ulrike Lechner, Maria Pinto-Albuquerque
      First page: 482
      Abstract: Today, many products and solutions are provided in the cloud; however, the number of cloud security incidents and the financial losses they cause illustrate the critical need to do more to protect cloud assets adequately. A gap lies in transferring what cloud and security standards recommend and require to the industry practitioners working on the front line. It is of paramount importance to raise the cloud security awareness of these practitioners. Under the guidance of the design science paradigm, we introduce a serious game to help participants understand the inherent risks, understand the different roles, and encourage proactive defensive thinking in defending cloud assets. In our game, we designed and implemented an automated evaluator as a novel element. We invite the players to build defense plans and attack plans, for which the evaluator calculates success likelihoods. The primary target group is industry practitioners, although people with limited background knowledge of cloud security can also participate in and benefit from the game. We designed the game and organized several trial runs in an industrial setting. Observations of the trial runs and the collected feedback indicate that the game's ideas and logic are useful and help raise awareness of cloud security in industry. Our preliminary results share insight into the design of the serious game and are discussed in this paper.
      Citation: Information
      PubDate: 2021-11-19
      DOI: 10.3390/info12110482
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 483: An Analytical and Numerical Detour for
           the Riemann Hypothesis

    • Authors: Michel Riguidel
      First page: 483
      Abstract: From the functional equation F(s)=F(1−s) of Riemann’s zeta function, this article gives new insight into Hadamard’s product formula. The function S1(s)=d(lnF(s))/ds and its family of associated Sm functions, expressed as sums of rational fractions, are interpreted as meromorphic functions whose poles are the poles and zeros of the F function. This family is a mathematical and numerical tool which makes it possible to estimate the value F(s) of the function at a point s=x+iy=ẋ+½+iy in the critical strip S from a point 𝓈=½+iy on the critical line ℒ. Generating estimates Sm∗(s) of Sm(s) at a given point requires a large number of adjacent zeros, due to the slow convergence of the series. The process allows a numerical approach to the Riemann hypothesis (RH). The method can be extended to other meromorphic functions, in the neighborhood of isolated zeros, inspired by the Weierstraß canonical form. A final and brief comparison is made with the ζ and F functions over finite fields.
      Citation: Information
      PubDate: 2021-11-21
      DOI: 10.3390/info12110483
      Issue No: Vol. 12, No. 11 (2021)
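The Sm family described above can be written explicitly: S1 is the logarithmic derivative of F, and, assuming (as the abstract suggests) that Hadamard's product over the zeros ρ of F applies, each Sm becomes a sum of rational fractions over those zeros. A sketch of the relations, not the paper's exact normalization:

```latex
% S_1 is the logarithmic derivative of F; Hadamard's product over the
% zeros \rho of F gives (up to an entire additive term):
S_1(s) = \frac{d}{ds}\ln F(s) = \frac{F'(s)}{F(s)}
       = \sum_{\rho} \frac{1}{s-\rho} + \text{(entire terms)} ,
% and the higher members of the family follow by differentiation,
% since \frac{d^{m-1}}{ds^{m-1}} (s-\rho)^{-1}
%       = (-1)^{m-1}(m-1)!\,(s-\rho)^{-m}:
S_m(s) = \frac{(-1)^{m-1}}{(m-1)!}\,\frac{d^{m-1}}{ds^{m-1}} S_1(s)
       = \sum_{\rho} \frac{1}{(s-\rho)^m} .
```

The slow convergence mentioned in the abstract is visible here: the tail of the sum over distant zeros decays only polynomially in |s−ρ|, so many adjacent zeros are needed for a usable estimate Sm∗(s).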
  • Information, Vol. 12, Pages 484: Recent Advances in Dialogue Machine
           Translation

    • Authors: Siyou Liu, Yuqi Sun, Longyue Wang
      First page: 484
      Abstract: Recent years have seen a surge of interest in dialogue translation, which is a significant application task for machine translation (MT) technology. However, it has so far not been extensively explored due to its inherent characteristics, including data limitations, discourse properties and personality traits. In this article, we give the first comprehensive review of dialogue MT, including well-defined problems (e.g., 4 perspectives), collected resources (e.g., 5 language pairs and 4 sub-domains), representative approaches (e.g., architecture, discourse phenomena and personality) and useful applications (e.g., a hotel-booking chat system). After a systematic investigation, we also build a state-of-the-art dialogue NMT system by leveraging a breadth of established approaches such as novel architectures, popular pre-training and advanced techniques. Encouragingly, we push the state-of-the-art performance up to 62.7 BLEU points on a commonly used benchmark by using mBART pre-training. We hope that this survey paper can significantly promote research in dialogue MT.
      Citation: Information
      PubDate: 2021-11-22
      DOI: 10.3390/info12110484
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 485: Predictive Maintenance for Switch Machine
           Based on Digital Twins

    • Authors: Jia Yang, Yongkui Sun, Yuan Cao, Xiaoxi Hu
      First page: 485
      Abstract: As a unique device in railway networks, the normal operation of switch machines is essential to safe and efficient railway operation, making predictive maintenance for switch machines a focus of attention. Aiming at the low accuracy of state prediction and the difficulty of state visualization, this paper proposes a predictive maintenance model for switch machines based on Digital Twins (DT). It constructs a DT model for the switch machine, which contains a behavior model and a rule model. The behavior model is a high-fidelity visual model. The rule model is a high-precision prediction model combining long short-term memory (LSTM) with the autoregressive integrated moving average (ARIMA) model. Experiment results show that the model is more intuitive, with higher prediction accuracy and better applicability. The proposed DT approach is potentially practical, providing a promising idea for predictive maintenance of switch machines.
      Citation: Information
      PubDate: 2021-11-22
      DOI: 10.3390/info12110485
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 486: Analysis of Unsatisfying User Experiences
           and Unmet Psychological Needs for Virtual Reality Exergames Using Deep
           Learning Approach

    • Authors: Xiaoyan Zhang, Qiang Yan, Simin Zhou, Linye Ma, Siran Wang
      First page: 486
      Abstract: The number of consumers playing virtual reality games is booming. To speed up product iteration, user experience teams need to collect and analyze unsatisfying experiences in time. In this paper, we aim to detect the unsatisfying experiences hidden in online reviews of virtual reality exergames using a deep learning method and to identify the unmet psychological needs of users based on self-determination theory. Convolutional neural networks for sentence classification (textCNN) are used in this study to classify online reviews with unsatisfying experiences. For comparison, we set eXtreme gradient boosting (XGBoost) with lexical features as the machine learning baseline. Term frequency-inverse document frequency (TF-IDF) is used to extract keywords from every set of classified reviews. The micro-F1 score of the textCNN classifier is 90.00, which is better than the 82.69 of XGBoost. The top 10 keywords of every set of reviews reflect relevant topics of unmet psychological needs. This paper explores the potential problems causing unsatisfying experiences and unmet psychological needs in virtual reality exergames through text mining and supplements experimental studies of virtual reality exergames.
      Citation: Information
      PubDate: 2021-11-22
      DOI: 10.3390/info12110486
      Issue No: Vol. 12, No. 11 (2021)
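The TF-IDF keyword extraction step mentioned in the abstract above can be sketched in a few lines of plain Python; the function name and the toy documents are illustrative, not taken from the paper:

```python
from collections import Counter
from math import log

def tfidf_keywords(docs, top_k=3):
    """Rank the words of each document by TF-IDF against the whole corpus.

    docs: list of raw strings; returns one top_k keyword list per document.
    """
    tokenized = [d.lower().split() for d in docs]
    # Document frequency: in how many documents each word appears.
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(tokenized)
    out = []
    for toks in tokenized:
        tf = Counter(toks)
        # TF-IDF = (term frequency in doc) * log(corpus size / document frequency)
        scores = {w: (c / len(toks)) * log(n / df[w]) for w, c in tf.items()}
        out.append([w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]])
    return out
```

Words that occur in every document get an IDF of log(1) = 0 and thus never rank as keywords, which is the behavior the abstract relies on when surfacing topic-specific terms per review set.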
  • Information, Vol. 12, Pages 487: Multimatcher Model to Enhance Ontology
           Matching Using Background Knowledge

    • Authors: Sohaib Al-Yadumi, Wei-Wei Goh, Ee-Xion Tan, Noor Zaman Jhanjhi, Patrice Boursier
      First page: 487
      Abstract: Ontology matching is a rapidly emerging topic crucial for the semantic web effort, data integration, and interoperability. Semantic heterogeneity is one of the most challenging aspects of ontology matching. Consequently, background knowledge (BK) resources are utilized to bridge the semantic gap between the ontologies. Generic BK approaches use a single matcher to discover correspondences between entities from different ontologies. However, the Ontology Alignment Evaluation Initiative (OAEI) results show that not all matchers identify the same correct mappings. Moreover, none of the matchers can obtain good results across all matching tasks. This study proposes a novel BK multimatcher approach for improving ontology matching by effectively generating and combining mappings from biomedical ontologies. Aggregation strategies to create more effective mappings are discussed. Then, a matcher path confidence measure that helps select the most promising paths using the final mapping selection algorithm is proposed. The proposed model’s performance is tested using the Anatomy and Large Biomed tracks offered by the OAEI 2020. Results show that higher recall levels have been obtained. Moreover, the F-measure values achieved with our model are comparable with those obtained by state-of-the-art matchers.
      Citation: Information
      PubDate: 2021-11-22
      DOI: 10.3390/info12110487
      Issue No: Vol. 12, No. 11 (2021)
  • Information, Vol. 12, Pages 409: Combating Fake News with Transformers: A
           Comparative Analysis of Stance Detection and Subjectivity Analysis

    • Authors: Panagiotis Kasnesis, Lazaros Toumanidis, Charalampos Z. Patrikakis
      First page: 409
      Abstract: The widespread use of social networks has brought to the foreground a very important issue, the veracity of the information circulating within them. Many natural language processing methods have been proposed in the past to assess a post’s content with respect to its reliability; however, end-to-end approaches are not comparable in ability to human beings. To overcome this, in this paper, we propose the use of a more modular approach that produces indicators about a post’s subjectivity and the stance provided by the replies it has received to date, letting the user decide whether (s)he trusts or does not trust the provided information. To this end, we fine-tuned state-of-the-art transformer-based language models and compared their performance with previous related work on stance detection and subjectivity analysis. Finally, we discuss the obtained results.
      Citation: Information
      PubDate: 2021-10-03
      DOI: 10.3390/info12100409
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 410: How Many Participants Are Required for
           Validation of Automated Vehicle Interfaces in User Studies?

    • Authors: Yannick Forster, Frederik Naujoks, Andreas Keinath
      First page: 410
      Abstract: Empirical validation and verification procedures require the sophisticated development of research methodology. Therefore, researchers and practitioners in human–machine interaction and the automotive domain have developed standardized test protocols for user studies. These protocols are used to evaluate human–machine interfaces (HMI) for driver distraction or automated driving. A system or HMI is validated in regard to certain criteria that it can either pass or fail. One important aspect is the number of participants to include in the study and the respective number of potential failures concerning the pass/fail criteria of the test protocol. By applying binomial tests, the present work provides recommendations on how many participants should be included in a user study. It sheds light on the degree to which inference from a sample with specific pass/fail ratios to a population is permitted. The calculations take into account different sample sizes and different numbers of observations within a sample that fail the criterion of interest. The analyses show that required sample sizes increase to high numbers with a rising degree of controllability that is assumed for a population. The required sample sizes for a specific controllability verification (e.g., 85%) also increase if there are observed failures in regard to the safety criteria. In conclusion, the present work outlines potential sample sizes and valid inferences about populations and the number of observed failures in a user study.
      Citation: Information
      PubDate: 2021-10-06
      DOI: 10.3390/info12100410
      Issue No: Vol. 12, No. 10 (2021)
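The exact binomial reasoning summarized in the abstract above can be sketched as follows; the function, its parameters, and the stopping bound are illustrative assumptions, not the authors’ exact procedure:

```python
from math import comb

def min_sample_size(controllability, k_failures, alpha=0.05, n_max=500):
    """Smallest n such that observing at most k_failures failures lets us
    reject H0 "population pass rate <= controllability" at level alpha,
    using an exact one-sided binomial test.
    """
    q = 1.0 - controllability  # failure probability under H0
    for n in range(k_failures + 1, n_max + 1):
        # P(at most k_failures failures in n trials | failure prob = q)
        p = sum(comb(n, i) * q**i * controllability**(n - i)
                for i in range(k_failures + 1))
        if p <= alpha:
            return n
    return None  # not reachable within n_max trials
```

For an assumed 85% controllability and zero observed failures this yields 19 participants; a single observed failure already pushes the requirement to 30, which mirrors the abstract’s point that observed failures sharply inflate the required sample size.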
  • Information, Vol. 12, Pages 411: Big-Data Management: A Driver for Digital
           Transformation

    • Authors: Panagiotis Kostakis, Antonios Kargas
      First page: 411
      Abstract: The rapid evolution of technology has led to a global increase in data. Due to the large volume of data, a new term was coined to better describe the situation: big data. Living in the Era of Information, businesses are flooded with information through data processing. The digital age has pushed businesses to find a strategy to transform themselves in order to keep pace with market changes, compete successfully, and gain a competitive advantage. The aim of the current paper is to extensively analyze the existing online literature to identify the main (most valuable) components of big-data management according to researchers and the business community. Moreover, an analysis was conducted to help readers understand how these components can be used by existing businesses during the process of digital transformation.
      Citation: Information
      PubDate: 2021-10-07
      DOI: 10.3390/info12100411
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 412: GPR Investigation at the Archaeological
           Site of Le Cesine, Lecce, Italy

    • Authors: Emanuele Colica, Antonella Antonazzo, Rita Auriemma, Luigi Coluccia, Ilaria Catapano, Giovanni Ludeno, Sebastiano D’Amico, Raffaele Persico
      First page: 412
      Abstract: In this contribution, we present some results achieved in the archaeological site of Le Cesine, close to Lecce, in southern Italy. The investigations were performed in a site close to the Adriatic Sea that has only been slightly explored up to now, where the presence of an ancient Roman harbour is alleged on the basis of remains visible mostly below the current sea level. This measurement campaign was performed as part of a short-term scientific mission (STSM) within the European COST Action 17131 (acronym SAGA), and was aimed at identifying possible points where future localized excavations might, and hopefully will, be performed in the next few years. Both a traditional elaboration and an innovative data processing based on a linear inverse scattering model have been performed on the data.
      Citation: Information
      PubDate: 2021-10-08
      DOI: 10.3390/info12100412
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 413: New Approach of Measuring Human
           Personality Traits Using Ontology-Based Model from Social Media Data

    • Authors: Andry Alamsyah, Nidya Dudija, Sri Widiyanesti
      First page: 413
      Abstract: Human online activities leave digital traces that provide a perfect opportunity to understand their behavior better. Social media is an excellent place to spark conversations or state opinions. Thus, it generates large-scale textual data. In this paper, we harness those data to support the effort of personality measurement. Our first contribution is to develop the Big Five personality trait-based model to detect human personalities from their textual data in the Indonesian language. The model uses an ontology approach instead of the more famous machine learning model. The former better captures the meaning and intention of phrases and words in the domain of human personality. The legacy and more thorough ways to assess personality are interviews or questionnaires. Still, there are many real-life applications where we need an alternative method that is cheaper and faster than the legacy methodology to select individuals based on their personality. The second contribution is to support the model implementation by building a personality measurement platform. We use two distinct features for the model: an n-gram sorting algorithm to parse the textual data and a crowdsourcing mechanism that facilitates public involvement in contributing to ontology corpus addition and filtering.
      Citation: Information
      PubDate: 2021-10-08
      DOI: 10.3390/info12100413
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 414: Text Mining and Sentiment Analysis of
           Newspaper Headlines

    • Authors: Arafat Hossain, Md. Karimuzzaman, Md. Moyazzem Hossain, Azizur Rahman
      First page: 414
      Abstract: Text analytics is well known in the modern era for extracting information and patterns from text. However, no study has attempted to illustrate the pattern and priorities of newspaper headlines in Bangladesh using a combination of text analytics techniques. The purpose of this paper is to examine the pattern of words that appeared on the front page of a well-known daily English newspaper in Bangladesh, The Daily Star, in 2018 and 2019. The elucidation of that era’s possible social and political context was also attempted using word patterns. The study employs three widely used and contemporary text mining techniques: word clouds, sentiment analysis, and cluster analysis. The word cloud reveals that election, kill, cricket, and Rohingya-related terms appeared more than 60 times in 2018, whereas BNP, poll, kill, AL, and Khaleda appeared more than 80 times in 2019. These indicated the country’s passion for cricket, political turmoil, and Rohingya-related issues. Furthermore, sentiment analysis reveals that words of fear and negative emotions appeared more than 600 times, whereas anger, anticipation, sadness, trust, and positive-type emotions came up more than 400 times in both years. Finally, the clustering method demonstrates that election, politics, deaths, digital security act, Rohingya, and cricket-related words exhibit similarity and belong to a similar group in 2019, whereas rape, deaths, road, and fire-related words clustered together in 2018 in a similar-appearing group. In general, this analysis demonstrates how vividly the text mining approach depicts Bangladesh’s social, political, and law-and-order situation, particularly during election season and the country’s cricket craze, and validates the significance of the text mining approach for understanding the overall situation of a country during a particular period in an efficient manner.
      Citation: Information
      PubDate: 2021-10-09
      DOI: 10.3390/info12100414
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 415: Short Word-Length Entering Compressive
           Sensing Domain: Improved Energy Efficiency in Wireless Sensor Networks

    • Authors: Nuha A. S. Alwan, Zahir M. Hussain
      First page: 415
      Abstract: This work combines compressive sensing and short word-length techniques to achieve localization and target tracking in wireless sensor networks with energy-efficient communication between the network anchors and the fusion center. Gradient descent localization is performed using time-of-arrival (TOA) data which are indicative of the distance between anchors and the target, thereby achieving range-based localization. The short word-length techniques considered are delta modulation and sigma-delta modulation. The energy efficiency is due to the reduction of the data volume transmitted from anchors to the fusion center by employing any of the two delta modulation variants with compressive sensing techniques. Delta modulation allows the transmission of one bit per TOA sample. The communication energy efficiency is increased by a factor of RM, R ≥ 1, where R is the sample reduction ratio of compressive sensing, and M is the number of bits originally present in a TOA-sample word. It is found that the localization system involving sigma-delta modulation has a superior performance to that using delta-modulation or pure compressive sampling alone, in terms of both energy efficiency and localization error in the presence of TOA measurement noise and transmission noise, owing to the noise shaping property of sigma-delta modulation.
      Citation: Information
      PubDate: 2021-10-11
      DOI: 10.3390/info12100415
      Issue No: Vol. 12, No. 10 (2021)
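The one-bit delta modulation named in the abstract above can be sketched as follows; the function names and the fixed step size are illustrative assumptions, not the paper’s implementation:

```python
def delta_encode(samples, step=1.0, start=0.0):
    """One-bit delta modulation: emit +1 if the sample is above the running
    estimate, else -1; the estimate tracks the signal by a fixed step."""
    bits, est = [], start
    for s in samples:
        b = 1 if s > est else -1
        est += b * step
        bits.append(b)
    return bits

def delta_decode(bits, step=1.0, start=0.0):
    """Reconstruct the staircase approximation from the one-bit stream."""
    out, est = [], start
    for b in bits:
        est += b * step
        out.append(est)
    return out
```

Each TOA sample is thus reduced to a single bit, which is the source of the RM energy-efficiency factor the abstract cites: R from compressive-sensing sample reduction, M from replacing an M-bit word with one bit.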
  • Information, Vol. 12, Pages 416: An Approach to Ranking the Sources of
           Information Dissemination in Social Networks

    • Authors: Lidia Vitkova, Igor Kotenko, Andrey Chechulin
      First page: 416
      Abstract: The problem of countering the spread of destructive content in social networks is currently relevant for most countries of the world. Basically, automatic monitoring systems are used to detect the sources of the spread of malicious information, and automated systems, operators, and counteraction scenarios are used to counteract it. The paper suggests an approach to ranking the sources of the distribution of messages with destructive content. In the process of ranking objects by priority, the number of messages created by the source and the integral indicator of the involvement of its audience are considered. The approach identifies the most popular and active sources of dissemination of destructive content. It does not require the analysis of relationship graphs and increases the operator’s efficiency. The proposed solution is applicable both to brand reputation monitoring systems and to countering cyberbullying and the dissemination of destructive information in social networks.
      Citation: Information
      PubDate: 2021-10-11
      DOI: 10.3390/info12100416
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 417: Cybersecurity Awareness Framework for
           Academia

    • Authors: Mohammed Khader, Marcel Karam, Hanna Fares
      First page: 417
      Abstract: Cybersecurity is a multifaceted global phenomenon representing complex socio-technical challenges for governments and private sectors. With technology constantly evolving, the types and numbers of cyberattacks affect different users in different ways. The majority of recorded cyberattacks can be traced to human errors. Despite being both knowledge- and environment-dependent, studies show that increasing users’ cybersecurity awareness is one of the most effective protective approaches. However, the intangible nature, socio-technical dependencies, constant technological evolutions, and ambiguous impact make it challenging to offer comprehensive strategies for better communicating and combatting cyberattacks. Research in the industrial sector focused on creating institutional proprietary risk-aware cultures. In contrast, in academia, where cybersecurity awareness should be at the core of an academic institution’s mission to ensure all graduates are equipped with the skills to combat cyberattacks, most of the research focused on understanding students’ attitudes and behaviors after infusing cybersecurity awareness topics into some courses in a program. This work proposes a conceptual Cybersecurity Awareness Framework to guide the implementation of systems to improve the cybersecurity awareness of graduates in any academic institution. This framework comprises constituents designed to continuously improve the development, integration, delivery, and assessment of cybersecurity knowledge into the curriculum of a university across different disciplines and majors; this framework would thus lead to a better awareness among all university graduates, the future workforce. This framework may serve as a blueprint that, once adapted by academic institutions to accommodate their missions, guides them in developing or amending their policies and procedures for the design and assessment of cybersecurity awareness.
      Citation: Information
      PubDate: 2021-10-12
      DOI: 10.3390/info12100417
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 418: Could a Conversational AI Identify
           Offensive Language?

    • Authors: Daniela America da Silva, Henrique Duarte Borges Louro, Gildarcio Sousa Goncalves, Johnny Cardoso Marques, Luiz Alberto Vieira Dias, Adilson Marques da Cunha, Paulo Marcelo Tasinaffo
      First page: 418
      Abstract: In recent years, we have seen wide use of Artificial Intelligence (AI) applications on the Internet and everywhere. Natural Language Processing and Machine Learning are important sub-fields of AI that have made Chatbots and Conversational AI applications possible. Those algorithms are built on historical data in order to create language models; however, historical data can be intrinsically discriminatory. This article investigates whether a Conversational AI can identify offensive language and shows how large language models often produce quite a bit of unethical behavior because of bias in the historical data. Our low-level proof-of-concept presents the challenges of detecting offensive language in social media and discusses some steps to promote strong results in the detection of offensive language and unethical behavior using a Conversational AI.
      Citation: Information
      PubDate: 2021-10-12
      DOI: 10.3390/info12100418
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 419: Financial Volatility Forecasting: A
           Sparse Multi-Head Attention Neural Network

    • Authors: Hualing Lin, Qiubi Sun
      First page: 419
      Abstract: Accurately predicting the volatility of financial asset prices and exploring its laws of movement have profound theoretical and practical guiding significance for financial market risk early warning, asset pricing, and investment portfolio design. The traditional methods are plagued by the problem of substandard prediction performance or gradient optimization. This paper proposes a novel volatility prediction method based on sparse multi-head attention (SP-M-Attention). This model discards the two-dimensional modeling strategy of time and space of the classic deep learning model. Instead, the solution is to embed a sparse multi-head attention calculation module in the network. The main advantages are that (i) it uses the inherent advantages of the multi-head attention mechanism to achieve parallel computing, (ii) it reduces the computational complexity through sparse measurements and feature compression of volatility, and (iii) it avoids the gradient problems caused by long-range propagation and therefore, is more suitable than traditional methods for the task of analysis of long time series. In the end, the article conducts an empirical study on the effectiveness of the proposed method through real datasets of major financial markets. Experimental results show that the prediction performance of the proposed model on all real datasets surpasses all benchmark models. This discovery will aid financial risk management and the optimization of investment strategies.
      Citation: Information
      PubDate: 2021-10-14
      DOI: 10.3390/info12100419
      Issue No: Vol. 12, No. 10 (2021)
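The multi-head attention module discussed in the abstract above builds on scaled dot-product attention, which can be sketched for a single head (dense rather than sparse, purely illustrative and not the paper’s SP-M-Attention model):

```python
from math import exp, sqrt

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention on small lists-of-lists matrices:
    out[i] = sum_j softmax(Q[i] . K[j] / sqrt(d))_j * V[j]."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

A multi-head variant runs several such attention maps in parallel on learned projections of Q, K, and V; the sparsity the paper adds restricts which score entries are computed, reducing the quadratic cost on long volatility series.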
  • Information, Vol. 12, Pages 420: Industrial Networks Driven by SDN
           Technology for Dynamic Fast Resilience

    • Authors: Nteziriza Nkerabahizi Josbert, Wang Ping, Min Wei, Yong Li
      First page: 420
      Abstract: Software-Defined Networking (SDN) provides the prospect of logically centralized management in industrial networks and simplified programming among devices. It also facilitates the reconfiguration of connectivity when there is a network element failure. This paper presents a new Industrial SDN (ISDN) resilience approach that addresses the gap between two types of resilience: restoration and protection. A restoration approach increases the recovery time in proportion to the number of affected flows, contrary to a protection approach, which attains fast recovery. Nevertheless, a protection approach installs more flow rules (flow entries) in the switch, which in turn increases the lookup time taken to discover an appropriate flow entry in the flow table. This can have a negative effect on the end-to-end delay before a failure occurs (in the normal situation). To balance both approaches, we propose a Mixed Fast Resilience (MFR) approach to ensure fast recovery of the primary path without any impact on the end-to-end delay in the normal situation. In the MFR, the SDN controller establishes a new path after failure detection based on flow rules stored in its memory through a dynamic hash table structure serving as the internal flow table. It then transmits the flow rules to all switches across the appropriate secondary path simultaneously, from the failure point to the destination switch. Moreover, these flow rules, which correspond to secondary paths, are cached in the hash table by considering the current minimum path weight. This strategy reduces the load on the SDN controller and the calculation time of a new working path. The MFR approach applies the dual primary by considering several metrics such as packet-loss probability, delay, and bandwidth, which are the Quality of Service (QoS) requirements for many industrial applications. Thus, we built a simulation network and conducted an experimental testbed. The results showed that our resilience approach reduces the failure recovery time compared with restoration approaches and is more scalable than a protection approach. In the normal situation, the MFR approach reduces the lookup time and end-to-end delay compared with a protection approach. Furthermore, the proposed approach improves performance by minimizing packet loss even under failing links.
      Citation: Information
      PubDate: 2021-10-15
      DOI: 10.3390/info12100420
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 421: The Digital Dimension of Mobilities:
           Mapping Spatial Relationships between Corporeal and Digital Displacements
           in Barcelona

    • Authors: Fiammetta Brandajs
      First page: 421
      Abstract: This paper explores the ways in which technologies reshape everyday activities, adopting a mobility perspective of the digital environment, which is reframed in terms of the constitutive/substitutive element of corporeal mobility. We propose the construction of a Digital Mobility Index, quantified by measuring the usage typology in which the technology is employed to enable mobility. Through a digital perspective on mobilities, it is possible to investigate how embodied practices and experiences of different modes of physical or virtual displacements are facilitated and emerge through technologies. The role of technologies in facilitating the anchoring of mobilities, transporting the tangible and intangible flow of goods, and in mediating social relations through space and time is emphasized through analysis of how digital usage can reproduce models typical of the neoliberal city, the effects of which in terms of spatial (in)justice have been widely discussed in the literature. The polarization inherent to the digital divide has been characterized by a separation between what has been called the “space of flows” (well connected, mobile, and offering more opportunities) and the “space of places” (poorly connected, fixed, and isolated). This digital divide indeed takes many forms, including divisions between classes, urban locations, and national spaces. By mapping “hyper- and hypo-mobilized” territories in Barcelona, this paper examines two main dimensions of digital inequality, on the one hand identifying the usage of the technological and digital in terms of the capacity to reach services and places, and on the other, measuring the territorial demographic and economic propensity to access ICT as a predictive insight into the geographies of the social gap which emerge at municipal level. 
This approach complements conventional data sources such as municipal statistics and the digital divide enquiry conducted in Barcelona into the underlying digital capacities of the city and the digital skills of the population.
      Citation: Information
      PubDate: 2021-10-15
      DOI: 10.3390/info12100421
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 422: Relativistic Effects on
           Satellite–Ground Two–Way Precise Time Synchronization

    • Authors: Yanming Guo, Yan Bai, Shuaihe Gao, Zhibing Pan, Zibin Han, Decai Zou, Xiaochun Lu, Shougang Zhang
      First page: 422
      Abstract: An ultrahigh precise clock (space optical clock) will be installed onboard a low-orbit spacecraft (a usual expression for a low-orbit satellite operating on an orbit at an altitude of less than 1000 km) in the future, which is expected to obtain better time-frequency performance in a microgravity environment and provide the possible realization of ultrahigh precise long-range time synchronization. The advancement of the microwave two-way time synchronization method can offer an effective solution for developing time-frequency transfer technology. In this study, we focus on a method of precise satellite-ground two-way time synchronization and present its key aspects. For reducing the relativistic effects on two-way precise time synchronization, we propose a high-precision correction method. We show the results of tests using simulated data with fully realistic effects such as atmospheric delays, orbit errors, and earth gravity, and demonstrate the satisfactory performance of the methods. The accuracy of the relativistic error correction method is investigated in terms of the spacecraft attitude error, phase center calibration error (the residual error after calibrating phase center offset), and precise orbit determination (POD) error. The results show that the phase center calibration error and POD error contribute greatly to the residual of relativistic correction, at approximately 0.1~0.3 ps, and time synchronization accuracy better than 0.6 ps can be achieved with our proposed methods. In conclusion, the relativistic error correction method is effective, and the satellite-ground two-way precise time synchronization method yields more accurate results. The BeiDou two-way time synchronization system can only achieve sub-ns accuracy, while the final accuracy obtained by the methods in this paper can be improved to the ps level.
      Citation: Information
      PubDate: 2021-10-15
      DOI: 10.3390/info12100422
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 423: Method to Address Complexity in
           Organizations Based on a Comprehensive Overview

    • Authors: Aleksandra Revina, Ünal Aksu, Vera G. Meister
      First page: 423
      Abstract: Digitalization increasingly forces organizations to accommodate changes and gain resilience. Emerging technologies, changing organizational structures and dynamic work environments bring opportunities and pose new challenges to organizations. Such developments, together with the growing volume and variety of the exchanged data, mainly yield complexity. This complexity often represents a solid barrier to efficiency and impedes understanding, controlling, and improving processes in organizations. Hence, organizations are prevailingly seeking to identify and avoid unnecessary complexity, which is an odd mixture of different factors. Similarly, in research, much effort has been put into measuring, reviewing, and studying complexity. However, these efforts are highly fragmented and lack a joint perspective. Further, this negatively affects the acceptance of complexity research by practitioners. In this study, we extend the body of knowledge on complexity research and practice, addressing its high fragmentation. In particular, a comprehensive literature analysis of complexity research is conducted to capture different types of complexity in organizations. The results are comparatively analyzed, and a morphological box containing three aspects and ten features is developed. In addition, an established multi-dimensional complexity framework is employed to synthesize the results. Using the findings from these analyses and adopting the Goal Question Metric, we propose a method for complexity management. This method serves to provide key insights and decision support in the form of extensive guidelines for addressing complexity. Thus, our findings can assist organizations in their complexity management initiatives.
      Citation: Information
      PubDate: 2021-10-16
      DOI: 10.3390/info12100423
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 424: Improving Undergraduate Novice Programmer
           Comprehension through Case-Based Teaching with Roles of Variables to
           Provide Scaffolding

    • Authors: Nianfeng Shi
      First page: 424
      Abstract: A role-based teaching approach was proposed to reduce the cognitive load that the case-based teaching method places on undergraduate novice programmers during program comprehension. The results were evaluated using the SOLO (Structure of Observed Learning Outcomes) taxonomy. Data analysis suggested that novice programmers taught with the role-based approach tended to perform better than those taught with the classical case-based method, including on the SOLO level of program comprehension, program debugging scores, and program explaining scores, though not on programming language knowledge scores. Considering the SOLO categories of program comprehension and the observed performances, we discuss evidence that the roles of variables provide scaffolding for understanding case programs by linking program structure to the related problem domain, and we propose SOLO categories for relational reasoning. Meanwhile, the roles of variables can assist novices in learning programming language knowledge. These results indicate that combining case-based teaching with the roles of variables is an effective way to improve novice program comprehension.
      Citation: Information
      PubDate: 2021-10-16
      DOI: 10.3390/info12100424
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 425: Missing Data Imputation in Internet of
           Things Gateways

    • Authors: Cinthya M. França, Rodrigo S. Couto, Pedro B. Velloso
      First page: 425
      Abstract: In an Internet of Things (IoT) environment, sensors collect and send data to application servers through IoT gateways. However, these data may have missing values due to networking problems or sensor malfunction, which reduces applications’ reliability. This work proposes a mechanism to predict and impute missing data in IoT gateways to achieve greater autonomy at the network edge. These gateways typically have limited computing resources, so the missing-data imputation methods must be simple and still provide good results. Thus, this work presents two regression models based on neural networks to impute missing data in IoT gateways. In addition to prediction quality, we analyzed both the execution time and the amount of memory used. We validated our models using six years of weather data from Rio de Janeiro, varying the percentage of missing data. The results show that, for all missing-data percentages, the neural network regression models outperform the other imputation methods analyzed, which are based on averages and on repetition of previous values. In addition, the neural network models have a short execution time and need less than 140 KiB of memory, which allows them to run on IoT gateways.
      Citation: Information
      PubDate: 2021-10-17
      DOI: 10.3390/info12100425
      Issue No: Vol. 12, No. 10 (2021)
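The gateway-side imputation idea above can be sketched as follows. This is a hypothetical stand-in (a least-squares linear-trend predictor over a short history window), chosen to fit the paper's constraint of simple, low-memory methods; it is not the authors' neural-network regression models:

```python
def impute(series, window=4):
    """Fill None entries in a sensor series using a least-squares linear
    fit over the last `window` observed/imputed values, extrapolated one
    step ahead (falls back to the last value, or 0.0 if none exists)."""
    out = []
    for v in series:
        if v is not None:
            out.append(v)
            continue
        hist = out[-window:]
        if len(hist) < 2:
            out.append(hist[-1] if hist else 0.0)
            continue
        n = len(hist)
        xs = range(n)
        mx = sum(xs) / n
        my = sum(hist) / n
        denom = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, hist)) / denom
        # Extrapolate the fitted line to position n (one step past the window)
        out.append(my + slope * (n - mx))
    return out
```

A pure-Python sketch like this keeps the memory footprint tiny, which matches the paper's point that gateway-side methods must run in well under 140 KiB.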
  • Information, Vol. 12, Pages 426: Critical Factors for Predicting Users’
           Acceptance of Digital Museums for Experience-Influenced Environments

    • Authors: Yue Wu, Qianling Jiang, Shiyu Ni, Hui’e Liang
      First page: 426
      Abstract: Digital museums that use modern technology are gradually replacing traditional museums to stimulate personal growth and promote cultural exchange and social enrichment. With the development and popularization of the mobile Internet, user experience has become a central concern in this field. From the perspective of the dynamic stages of user experience, in this study we extend ECM and TAM by combining the characteristics of users and systems, thereby constructing a theoretical model with 12 hypotheses about the factors influencing users’ continuance intentions toward digital museums. A total of 262 valid questionnaires were collected, and the model was tested with structural equation modeling. This study identifies the variables that shape online behavior in a specific experiential environment: (1) Perceived playfulness, perceived usefulness, and satisfaction are the critical variables that affect users’ continuance intentions. (2) Expectation confirmation has a significant influence on perceived playfulness, perceived ease of use, and satisfaction. (3) Media richness is an essential driver of confirmation, perceived ease of use, and perceived usefulness. These conclusions can serve as a reference for managers to promote the construction and innovation of digital museums and to provide a better experience that meets users’ needs.
      Citation: Information
      PubDate: 2021-10-17
      DOI: 10.3390/info12100426
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 427: On Information Orders on Metric Spaces

    • Authors: Oliver Olela Otafudu, Oscar Valero
      First page: 427
      Abstract: Information orders play a central role in the mathematical foundations of Computer Science. Concretely, they are a suitable tool to describe processes in which the information increases successively in each step of the computation. In order to provide numerical quantifications of the amount of information in the aforementioned processes, S.G. Matthews introduced the notions of partial metric and Scott-like topology. The success of partial metrics is given mainly by two facts. On the one hand, they can induce the so-called specialization partial order, which is able to encode the existing order structure in many examples of spaces that arise in a natural way in Computer Science. On the other hand, their associated topology is Scott-like when the partial metric space is complete and, thus, it is able to describe the aforementioned increasing information processes in such a way that the supremum of the sequence always exists and captures the amount of information, measured by the partial metric; it also contains no information other than that which may be derived from the members of the sequence. R. Heckmann showed that the method to induce the partial order associated with a partial metric could be retrieved as a particular case of a celebrated method for generating partial orders through metrics and non-negative real-valued functions. Motivated by this fact, we explore this general method from an information orders theory viewpoint. Specifically, we show that such a method captures the essence of information orders in such a way that the function under consideration is able to quantify the amount of information and, in addition, its measurement can be used to distinguish maximal elements. Moreover, we show that this method for endowing a metric space with a partial order can also be applied to partial metric spaces in order to generate new partial orders different from the specialization one. 
Furthermore, we show that, given a complete metric space and an inf-continuous function, the partially ordered set induced by this general method enjoys rich properties. Concretely, we show not only its order-completeness but also its directed-completeness and, in addition, that the topology induced by the metric is Scott-like. Therefore, such a mathematical structure could be used for developing metric-based tools for modeling increasing information processes in Computer Science. As a particular case of our new results, we retrieve, for a complete partial metric space, the celebrated fact explained above about the Scott-like character of the associated topology and, in addition, show that the induced partially ordered set is directed-complete and not only order-complete.
      Citation: Information
      PubDate: 2021-10-18
      DOI: 10.3390/info12100427
      Issue No: Vol. 12, No. 10 (2021)
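For reference, the two order-inducing constructions the abstract discusses can be stated as follows (standard formulations of Matthews' partial metric axioms and the Heckmann-style order from a metric and a real-valued function; the notation is ours):

```latex
\begin{align*}
&\text{A partial metric is } p : X \times X \to \mathbb{R}_{\ge 0} \text{ with:}\\
&\quad p(x,x) \le p(x,y), \\
&\quad x = y \iff p(x,x) = p(x,y) = p(y,y), \\
&\quad p(x,y) = p(y,x), \\
&\quad p(x,z) \le p(x,y) + p(y,z) - p(y,y). \\[4pt]
&\text{Specialization order: } x \sqsubseteq_p y \iff p(x,x) = p(x,y). \\[4pt]
&\text{Order from a metric } d \text{ and } f : X \to \mathbb{R}: \quad
 x \sqsubseteq_{d,f} y \iff d(x,y) \le f(x) - f(y),
\end{align*}
```

so in the second construction $f$ decreases along the order and plays the role of the quantity of information remaining to be computed.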
  • Information, Vol. 12, Pages 428: Robust and Precise Matching Algorithm
           Combining Absent Color Indexing and Correlation Filter

    • Authors: Ying Tian, Shun’ichi Kaneko, So Sasatani, Masaya Itoh, Ming Fang
      First page: 428
      Abstract: This paper presents a novel method, named ABC-CF, that draws on the strong discriminative ability of absent color indexing (ABC) to enhance sensitivity and combines it with a correlation filter (CF) to obtain higher precision. First, by separating the original color histogram, apparent and absent colors are introduced. Subsequently, an automatic threshold acquisition using the mean color histogram is proposed. Next, histogram intersection is used to calculate similarity. Finally, a CF is applied to correct the drift caused by ABC during the matching process. The proposed approach is robust to distortion of the target images, achieves higher margins in fundamental matching problems, and thereby attains more precise matching positions. Its effectiveness is evaluated in comparative experiments with other representative methods on open data.
      Citation: Information
      PubDate: 2021-10-18
      DOI: 10.3390/info12100428
      Issue No: Vol. 12, No. 10 (2021)
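Two of the building blocks named in the abstract are simple enough to sketch. The split into apparent/absent colors with a mean-histogram threshold and the histogram-intersection similarity might look like the following; the function names and details are our assumptions, not the authors' implementation:

```python
def split_absent(hist):
    """Split a color histogram into 'apparent' bins (at or above the mean
    bin value, used here as the automatic threshold) and 'absent' bins
    (below it). Hypothetical reading of the paper's mean-histogram rule."""
    t = sum(hist) / len(hist)
    apparent = [v if v >= t else 0.0 for v in hist]
    absent = [v if v < t else 0.0 for v in hist]
    return apparent, absent

def histogram_intersection(h1, h2):
    """Similarity of two histograms: the sum of bin-wise minima,
    which lies in [0, 1] for unit-mass (normalized) histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

In the paper's pipeline the CF stage would then refine the position found by this histogram matching; that stage is not shown here.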
  • Information, Vol. 12, Pages 429: Study on Customized Shuttle Transit Mode
           Responding to Spatiotemporal Inhomogeneous Demand in Super-Peak

    • Authors: Hao Zheng, Xingchen Zhang, Junhua Chen
      First page: 429
      Abstract: Instantaneous mega-traffic flows have long been one of the major challenges in the management of mega-cities. It is difficult for the public transportation system to cope directly with transient mega-capacity flows, and the uneven spatiotemporal distribution of demand is the main cause. To this end, this paper proposes a customized shuttle bus transportation model based on a “boarding-transfer-alighting” framework, with the goal of minimizing operational costs and maximizing service quality, to address mega-transit demand with uneven spatiotemporal distribution. Fleet operation is formulated as a pickup and delivery problem with time windows and transfer (PDPTWT), and a heuristic algorithm based on Tabu Search and ALNS is proposed to solve the large-scale computational problem. Numerical tests show that the proposed algorithm matches the accuracy of commercial solver software at much higher speed; when the demand size is 10, it is roughly 24,000 times faster. In addition, six reality-based cases are presented, and the results demonstrate that the designed option can reduce fleet cost by 9.93%, vehicle waiting time by 45.27%, and passenger waiting time by 33.05% relative to other existing customized bus modes when encountering instantaneous passenger flows with spatiotemporal imbalance.
      Citation: Information
      PubDate: 2021-10-18
      DOI: 10.3390/info12100429
      Issue No: Vol. 12, No. 10 (2021)
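The tabu-search half of the hybrid heuristic can be pictured with a generic skeleton. This toy version, with hypothetical `neighbors` and `cost` callables, only illustrates the short-term tabu memory; it does not encode the PDPTWT routing model or the ALNS destroy/repair operators the paper combines it with:

```python
def tabu_search(init, neighbors, cost, iters=50, tenure=5):
    """Generic tabu search: always move to the best non-tabu neighbor,
    remembering the last `tenure` visited solutions so the search can
    escape local optima instead of cycling back into them."""
    cur = best = init
    tabu = []
    for _ in range(iters):
        cands = [n for n in neighbors(cur) if n not in tabu]
        if not cands:
            break
        cur = min(cands, key=cost)   # best admissible move, even if worse
        tabu.append(cur)
        if len(tabu) > tenure:
            tabu.pop(0)              # expire the oldest tabu entry
        if cost(cur) < cost(best):
            best = cur
    return best
```

For example, minimizing `(x - 3)**2` over the integers with neighbors `x - 1` and `x + 1` from a start of `0` walks to the optimum at `x = 3` and keeps it as `best` even as the tabu list pushes the current solution past it.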
  • Information, Vol. 12, Pages 430: File System Support for
           Privacy-Preserving Analysis and Forensics in Low-Bandwidth Edge

    • Authors: Aril Bernhard Ovesen, Tor-Arne Schmidt Nordmo, Håvard Dagenborg Johansen, Michael Alexander Riegler, Pål Halvorsen, Dag Johansen
      First page: 430
      Abstract: In this paper, we present initial results from our distributed edge systems research in the domain of sustainable harvesting of common good resources in the Arctic Ocean. Specifically, we are developing a digital platform for real-time privacy-preserving sustainability management in the domain of commercial fishery surveillance operations. This is in response to potentially privacy-infringing mandates from some governments to combat overfishing and other sustainability challenges. Our approach is to deploy sensory devices and distributed artificial intelligence algorithms on mobile, offshore fishing vessels and at mainland central control centers. To facilitate this, we need a novel data plane supporting efficient, available, secure, tamper-proof, and compliant data management in this weakly connected offshore environment. We have built our first prototype of Dorvu, a novel distributed file system in this context. Our devised architecture, the design trade-offs among conflicting properties, and our initial experiences are further detailed in this paper.
      Citation: Information
      PubDate: 2021-10-18
      DOI: 10.3390/info12100430
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 431: Towards Edge Computing Using Early-Exit
           Convolutional Neural Networks

    • Authors: Roberto G. Pacheco, Kaylani Bochie, Mateus S. Gilbert, Rodrigo S. Couto, Miguel Elias M. Campista
      First page: 431
      Abstract: In computer vision applications, mobile devices can transfer the inference of Convolutional Neural Networks (CNNs) to the cloud due to their computational restrictions. Nevertheless, besides increasing the network load toward the cloud, this approach can render applications that require low latency unfeasible. A possible solution is to use CNNs with early exits at the network edge. These CNNs can pre-classify part of the samples in the intermediate layers based on a confidence criterion; hence, the device sends to the cloud only the samples that have not been satisfactorily classified. This work evaluates the performance of these CNNs at the computational edge, considering an object detection application. For this, we employ a MobileNetV2 with early exits. The experiments show that early classification can reduce the data load and the inference time without hurting application performance.
      Citation: Information
      PubDate: 2021-10-19
      DOI: 10.3390/info12100431
      Issue No: Vol. 12, No. 10 (2021)
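The confidence criterion described above amounts to a simple gating loop over the exit branches. A minimal sketch, with hypothetical branch callables standing in for the paper's MobileNetV2 side branches:

```python
def early_exit_infer(sample, branches, threshold=0.8):
    """Run exit branches in depth order and return the first prediction
    whose confidence clears the threshold; otherwise flag the sample
    for cloud offload. Each branch maps a sample to (label, confidence).
    Returns (label, exit_depth, needs_cloud)."""
    for depth, branch in enumerate(branches):
        label, conf = branch(sample)
        if conf >= threshold:
            return label, depth, False   # classified on-device, skip deeper layers
    # No exit was confident enough: ship the sample to the cloud model
    return label, len(branches) - 1, True
```

The threshold trades accuracy for offload savings: a lower value keeps more samples on-device (less data sent, lower latency) at the risk of accepting less confident predictions.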
  • Information, Vol. 12, Pages 432: An Ontological Approach to Enhancing
           Information Sharing in Disaster Response

    • Authors: Linda Elmhadhbi, Mohamed-Hedi Karray, Bernard Archimède, J. Neil Otte, Barry Smith
      First page: 432
      Abstract: Managing complex disaster situations is a challenging task because of the large number of actors involved and the critical nature of the events themselves. In particular, the different terminologies and technical vocabularies being exchanged among Emergency Responders (ERs) may lead to misunderstandings, and maintaining shared semantics for the exchanged data is a major challenge. To help overcome these issues, we have elaborated a modular suite of ontologies called POLARISCO that formalizes the complex knowledge of the ERs. Such a shared vocabulary resolves inconsistent terminologies and promotes semantic interoperability among ERs. In this work, we discuss the development of POLARISCO as an extension of Basic Formal Ontology (BFO) and the Common Core Ontologies (CCO). We conclude by presenting a real use case to check the efficiency and applicability of the proposed ontology.
      Citation: Information
      PubDate: 2021-10-19
      DOI: 10.3390/info12100432
      Issue No: Vol. 12, No. 10 (2021)
  • Information, Vol. 12, Pages 433: Algebraic Fault Analysis of SHA-256
           Compression Function and Its Application

    • Authors: Kazuki Nakamura, Koji Hori, Shoichi Hirose
      First page: 433
      Abstract: Cryptographic hash functions play an essential role in various aspects of cryptography, such as message authentication codes, pseudorandom number generation, and digital signatures. Thus, the security of their hardware implementations is an important research topic. Hao et al. proposed an algebraic fault analysis (AFA) of the SHA-256 compression function in 2014. They showed that one could recover the whole of an unknown input of the SHA-256 compression function by injecting 65 faults and analyzing the outputs under normal and fault-injection conditions. They also presented an almost universal forgery attack on HMAC-SHA-256 using this result. In our work, we conducted computer experiments for various fault-injection conditions in the AFA of the SHA-256 compression function. As a result, we found that one can recover the whole of an unknown input of the SHA-256 compression function by injecting only 18 faults on average. We also conducted an AFA of the SHACAL-2 block cipher and an AFA of the SHA-256 compression function enabling an almost universal forgery of the chopMD-MAC function.
      Citation: Information
      PubDate: 2021-10-19
      DOI: 10.3390/info12100433
      Issue No: Vol. 12, No. 10 (2021)
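For orientation, these are the standard SHA-256 round functions and register updates (per FIPS 180-4) whose intermediate values an algebraic fault analysis models as Boolean equations; the fault-injection and equation-solving machinery itself is not shown:

```python
MASK = 0xffffffff  # work modulo 2^32

def rotr(x, n):
    """32-bit right rotation."""
    return ((x >> n) | (x << (32 - n))) & MASK

def ch(e, f, g):   return (e & f) ^ (~e & g & MASK)   # choose f or g by bits of e
def maj(a, b, c):  return (a & b) ^ (a & c) ^ (b & c) # bitwise majority
def big_sigma0(a): return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
def big_sigma1(e): return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)

def round_step(state, kt, wt):
    """One SHA-256 compression round on the eight working registers
    (a..h), given the round constant kt and message-schedule word wt."""
    a, b, c, d, e, f, g, h = state
    t1 = (h + big_sigma1(e) + ch(e, f, g) + kt + wt) & MASK
    t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```

Faulting one round input perturbs `t1`/`t2` in an algebraically trackable way, which is why comparing correct and faulty outputs yields equations over the unknown input bits.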
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762


JournalTOCs © 2009-