Subjects -> MANUFACTURING AND TECHNOLOGY (Total: 363 journals)
    - CERAMICS, GLASS AND POTTERY (31 journals)
    - MACHINERY (34 journals)
    - MANUFACTURING AND TECHNOLOGY (223 journals)
    - METROLOGY AND STANDARDIZATION (6 journals)
    - PACKAGING (19 journals)
    - PAINTS AND PROTECTIVE COATINGS (4 journals)
    - PLASTICS (42 journals)
    - RUBBER (4 journals)

MACHINERY (34 journals)

Showing 1 - 27 of 27 Journals sorted alphabetically
Acta Mechanica Solida Sinica     Hybrid Journal   (Followers: 8)
Advanced Energy Materials     Hybrid Journal   (Followers: 31)
Applied Mechanics Reviews     Full-text available via subscription   (Followers: 27)
CORROSION     Full-text available via subscription   (Followers: 20)
Electric Power Components and Systems     Hybrid Journal   (Followers: 7)
Foundations and Trends® in Electronic Design Automation     Full-text available via subscription   (Followers: 1)
International Journal of Machine Tools and Manufacture     Hybrid Journal   (Followers: 8)
International Journal of Machining and Machinability of Materials     Hybrid Journal   (Followers: 4)
International Journal of Manufacturing Technology and Management     Hybrid Journal   (Followers: 8)
International Journal of Precision Technology     Hybrid Journal   (Followers: 1)
International Journal of Rapid Manufacturing     Hybrid Journal   (Followers: 3)
International Journal of Rotating Machinery     Open Access   (Followers: 2)
Journal of Machinery Manufacture and Reliability     Hybrid Journal   (Followers: 2)
Journal of Manufacturing and Materials Processing     Open Access  
Journal of Mechanics     Hybrid Journal   (Followers: 9)
Journal of Strain Analysis for Engineering Design     Hybrid Journal   (Followers: 5)
Journal of Terramechanics     Hybrid Journal   (Followers: 4)
Machine Design     Partially Free   (Followers: 183)
Machine Learning and Knowledge Extraction     Open Access   (Followers: 12)
Machines     Open Access   (Followers: 4)
Materials     Open Access   (Followers: 4)
Mechanics Based Design of Structures and Machines: An International Journal     Hybrid Journal   (Followers: 8)
Micromachines     Open Access   (Followers: 2)
Pump Industry Analyst     Full-text available via subscription   (Followers: 1)
Russian Engineering Research     Hybrid Journal  
Sensor Review     Hybrid Journal   (Followers: 2)
Surface Engineering and Applied Electrochemistry     Hybrid Journal   (Followers: 6)
Machine Learning and Knowledge Extraction
Number of Followers: 12  

  This is an Open Access journal
ISSN (Online) 2504-4990
Published by MDPI  [84 journals]
  • MAKE, Vol. 4, Pages 316-349: Counterfactual Models for Fair and Adequate
           Explanations

    • Authors: Nicholas Asher, Lucas De Lara, Soumya Paul, Chris Russell
      First page: 316
      Abstract: Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they are often too complex for humans to understand and too expensive to compute even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and can compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations that can hide biases; we define fair and adequate explanations in such a setting. We provide formal results about the algorithmic complexity of fair and adequate explanations. We then detail two sophisticated counterfactual models, one based on causal graphs, and one based on transport theories. We show that transport-based models have several theoretical advantages over the competition as explanation frameworks for machine learning algorithms.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-03-31
      DOI: 10.3390/make4020014
      Issue No: Vol. 4, No. 2 (2022)
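The counterfactual question at the heart of this abstract ("what minimal change would have altered the decision?") can be sketched in a few lines. This is a toy illustration, not the authors' causal-graph or transport-based models; the function name, the brute-force search, and the loan rule are all assumptions made for the example.

```python
def nearest_counterfactual(predict, x, candidates):
    """Return the candidate closest to x (L1 distance) whose prediction
    differs from the prediction for x, i.e. a minimal counterfactual."""
    y = predict(x)
    best, best_dist = None, float("inf")
    for c in candidates:
        if predict(c) != y:
            d = sum(abs(a - b) for a, b in zip(x, c))
            if d < best_dist:
                best, best_dist = c, d
    return best

# Toy model: approve a loan iff income - debt > 10.
approve = lambda x: x[0] - x[1] > 10
applicant = (15.0, 8.0)            # rejected: 15 - 8 = 7
pool = [(15.0, 4.0), (30.0, 8.0)]  # alternative feature vectors
cf = nearest_counterfactual(approve, applicant, pool)  # -> (15.0, 4.0)
```

Which features the counterfactual is allowed to vary, and how distance is measured, is exactly where the partiality and fairness concerns discussed in the paper arise.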
       
  • MAKE, Vol. 4, Pages 350-370: An Attention-Based ConvLSTM Autoencoder with
           Dynamic Thresholding for Unsupervised Anomaly Detection in Multivariate
           Time Series

    • Authors: Tareq Tayeh, Sulaiman Aburakhia, Ryan Myers, Abdallah Shami
      First page: 350
      Abstract: As a substantial amount of multivariate time series data is being produced by the complex systems in smart manufacturing (SM), improved anomaly detection frameworks are needed to reduce the operational risks and the monitoring burden placed on the system operators. However, building such frameworks is challenging, as a sufficiently large amount of defective training data is often not available and frameworks are required to capture both the temporal and contextual dependencies across different time steps while being robust to noise. In this paper, we propose an unsupervised Attention-Based Convolutional Long Short-Term Memory (ConvLSTM) Autoencoder with Dynamic Thresholding (ACLAE-DT) framework for anomaly detection and diagnosis in multivariate time series. The framework starts by pre-processing and enriching the data, before constructing feature images to characterize the system statuses across different time steps by capturing the inter-correlations between pairs of time series. Afterwards, the constructed feature images are fed into an attention-based ConvLSTM autoencoder, which aims to encode the constructed feature images and capture the temporal behavior, followed by decoding the compressed knowledge representation to reconstruct the feature images’ input. The reconstruction errors are then computed and subjected to a statistical, dynamic thresholding mechanism to detect and diagnose the anomalies. Evaluations conducted on real-life manufacturing data demonstrate the performance strengths of the proposed approach over state-of-the-art methods under different experimental settings.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-04-02
      DOI: 10.3390/make4020015
      Issue No: Vol. 4, No. 2 (2022)
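Two of the components described above can be sketched compactly: the "feature image" of a time window (pairwise correlations between series) and a statistical dynamic threshold on reconstruction errors. The mean-plus-k-standard-deviations rule is an assumed stand-in for the paper's thresholding mechanism, and the autoencoder itself is omitted.

```python
import statistics

def feature_image(window):
    """Pairwise Pearson correlations between the series in one window,
    capturing the inter-correlations the framework encodes as an image."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    return [[corr(a, b) for b in window] for a in window]

def dynamic_threshold(errors, k=3.0):
    """Flag reconstruction errors beyond mean + k * stdev as anomalies."""
    return statistics.mean(errors) + k * statistics.pstdev(errors)
```

A window of three series that rise, rise twice as fast, and fall produces correlations of +1 and -1 off the diagonal, which is the kind of system signature the autoencoder then learns to reconstruct.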
       
  • MAKE, Vol. 4, Pages 371-396: VloGraph: A Virtual Knowledge Graph Framework
           for Distributed Security Log Analysis

    • Authors: Kabul Kurniawan, Andreas Ekelhart, Elmar Kiesling, Dietmar Winkler, Gerald Quirchmayr, A Min Tjoa
      First page: 371
      Abstract: The integration of heterogeneous and weakly linked log data poses a major challenge in many log-analytic applications. Knowledge graphs (KGs) can facilitate such integration by providing a versatile representation that can interlink objects of interest and enrich log events with background knowledge. Furthermore, graph-pattern based query languages, such as SPARQL, can support rich log analyses by leveraging semantic relationships between objects in heterogeneous log streams. Constructing, materializing, and maintaining centralized log knowledge graphs, however, poses significant challenges. To tackle this issue, we propose VloGraph—a distributed and virtualized alternative to centralized log knowledge graph construction. The proposed approach does not involve any a priori parsing, aggregation, and processing of log data, but dynamically constructs a virtual log KG from heterogeneous raw log sources across multiple hosts. To explore the feasibility of this approach, we developed a prototype and demonstrate its applicability to three scenarios. Furthermore, we evaluate the approach in various experimental settings with multiple heterogeneous log sources and machines; the encouraging results from this evaluation suggest that the approach can enable efficient graph-based ad-hoc log analyses in federated settings.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-04-11
      DOI: 10.3390/make4020016
      Issue No: Vol. 4, No. 2 (2022)
       
  • MAKE, Vol. 4, Pages 397-417: Missing Data Estimation in Temporal
           Multilayer Position-Aware Graph Neural Network (TMP-GNN)

    • Authors: Bahareh Najafi, Saeedeh Parsaeefard, Alberto Leon-Garcia
      First page: 397
      Abstract: GNNs have been proven to perform highly effectively in various node-level, edge-level, and graph-level prediction tasks in several domains. Existing approaches mainly focus on static graphs. However, many graphs change over time: edges may disappear, and node or edge attributes may change from one time step to another. It is essential to consider such evolution in the representation learning of nodes in time-varying graphs. In this paper, we propose a Temporal Multilayer Position-Aware Graph Neural Network (TMP-GNN), a node embedding approach for dynamic graphs that incorporates the interdependence of temporal relations into embedding computation. We evaluate the performance of TMP-GNN on two different representations of temporal multilayered graphs. The performance is assessed against the most popular GNNs on a node-level prediction task. Then, we incorporate TMP-GNN into a deep learning framework to estimate missing data and compare the performance with that of the best-performing GNNs from our earlier experiment and a baseline method. Experimental results on four real-world datasets yield up to 58% lower ROC AUC for the pair-wise node classification task and 96% lower MAE in missing feature estimation, particularly for graphs with a relatively high number of nodes and a lower mean degree of connectivity.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-04-30
      DOI: 10.3390/make4020017
      Issue No: Vol. 4, No. 2 (2022)
       
  • MAKE, Vol. 4, Pages 418-431: Estimating the Best Time to View Cherry
           Blossoms Using Time-Series Forecasting Method

    • Authors: Tomonari Horikawa, Munenori Takahashi, Masaki Endo, Shigeyoshi Ohno, Masaharu Hirota, Hiroshi Ishikawa
      First page: 418
      Abstract: In recent years, tourist information collection using the internet has become common. Tourists are increasingly using internet resources to obtain tourist information. Social network service (SNS) users share tourist information of various kinds. Twitter, one SNS, has been used for many studies. We are pursuing research on a Twitter-based method to help tourists obtain one such piece of information: an estimate of the best time to view cherry blossoms. Earlier studies have proposed a low-cost moving average method using geotagged tweets related to location information. Geotagged tweets are helpful as social sensors for real-time estimation and for the acquisition of local tourist information because the information can reflect real-world situations. Earlier studies used weighted moving averages and showed that one can estimate the best time to view cherry blossoms in each prefecture. This study proposes a time-series prediction method using SNS data and machine learning as a new method for estimating the best times for viewing for a certain period. Combining the time-series forecasting method and the low-cost moving average method yields an estimate of the best time to view cherry blossoms. This report describes results confirming the usefulness of the proposed method through experiments estimating the best time to view cherry blossoms in each prefecture and municipality.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-04-30
      DOI: 10.3390/make4020018
      Issue No: Vol. 4, No. 2 (2022)
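The low-cost moving-average idea referenced above can be sketched over daily geotagged-tweet counts; the window length and weights below are illustrative choices, not the study's parameters.

```python
def weighted_moving_average(counts, weights):
    """Smooth daily tweet counts; weights run oldest-to-newest in the window."""
    k = len(weights)
    total = sum(weights)
    return [sum(w * c for w, c in zip(weights, counts[i - k + 1 : i + 1])) / total
            for i in range(k - 1, len(counts))]

def peak_day(counts, weights):
    """Index into `counts` of the day where the smoothed series peaks,
    taken here as the estimated best viewing day."""
    wma = weighted_moving_average(counts, weights)
    return max(range(len(wma)), key=wma.__getitem__) + len(weights) - 1

# Seven days of toy tweet counts; smoothing puts the peak on day 3.
assert peak_day([0, 1, 4, 9, 4, 1, 0], [1, 2, 3]) == 3
```

The study's contribution is to feed series like this into a time-series forecasting model so the peak can be predicted ahead of time rather than only detected after the fact.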
       
  • MAKE, Vol. 4, Pages 432-445: Knowledgebra: An Algebraic Learning Framework
           for Knowledge Graph

    • Authors: Tong Yang, Yifei Wang, Long Sha, Jan Engelbrecht, Pengyu Hong
      First page: 432
      Abstract: Knowledge graph (KG) representation learning aims to encode entities and relations into dense continuous vector spaces such that knowledge contained in a dataset could be consistently represented. Dense embeddings trained from KG datasets benefit a variety of downstream tasks such as KG completion and link prediction. However, existing KG embedding methods fall short of providing a systematic solution for the global consistency of knowledge representation. We developed a mathematical language for KG based on an observation of their inherent algebraic structure, which we term Knowledgebra. By analyzing five distinct algebraic properties, we proved that the semigroup is the most reasonable algebraic structure for the relation embedding of a general knowledge graph. We implemented an instantiation model, SemE, using simple matrix semigroups, which exhibits state-of-the-art performance on standard datasets. Moreover, we proposed a regularization-based method to integrate chain-like logic rules derived from human knowledge into embedding training, which further demonstrates the power of the developed language. As far as we know, by applying abstract algebra in statistical learning, this work develops the first formal language for general knowledge graphs, and also sheds light on the problem of neural-symbolic integration from an algebraic perspective.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-05-05
      DOI: 10.3390/make4020019
      Issue No: Vol. 4, No. 2 (2022)
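The semigroup claim is concrete: square matrices under multiplication are closed and associative, so composed relations always land back in the embedding space. A toy check, with illustrative matrix values (not embeddings from the SemE model):

```python
def matmul(A, B):
    """Compose two relation embeddings (2x2 matrix product)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Three toy relation embeddings (values are illustrative only).
R1, R2, R3 = [[1, 1], [0, 1]], [[2, 0], [0, 1]], [[0, 1], [1, 0]]

# Associativity -- the semigroup law behind globally consistent composition.
assert matmul(matmul(R1, R2), R3) == matmul(R1, matmul(R2, R3))
```

Note that no inverses are required, which is what distinguishes a semigroup from a group and, per the abstract, what makes it the right structure for general relations.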
       
  • MAKE, Vol. 4, Pages 446-473: Machine Learning in Disaster Management:
           Recent Developments in Methods and Applications

    • Authors: Vasileios Linardos, Maria Drakaki, Panagiotis Tzionas, Yannis L. Karnavas
      First page: 446
      Abstract: Recent years, which include the world’s hottest year on record, have been marked, besides the COVID-19 pandemic, mainly by climate-related disasters, according to data collected by the Emergency Events Database (EM-DAT). Besides the human losses, disasters cause significant and often catastrophic socioeconomic impacts, including economic losses. Recent developments in artificial intelligence (AI) and especially in machine learning (ML) and deep learning (DL) have been used to better cope with the severe and often catastrophic impacts of disasters. This paper aims to provide an overview of the research studies, presented since 2017, focusing on ML and DL methods developed for disaster management. In particular, focus has been given to studies in the areas of disaster and hazard prediction, risk and vulnerability assessment, disaster detection, early warning systems, disaster monitoring, damage assessment and post-disaster response, as well as case studies. Furthermore, some recently developed ML and DL applications for disaster management have been analyzed. A discussion of the findings is provided as well as directions for further research.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-05-07
      DOI: 10.3390/make4020020
      Issue No: Vol. 4, No. 2 (2022)
       
  • MAKE, Vol. 4, Pages 474-487: The Case of Aspect in Sentiment Analysis:
            Seeking Attention or Co-Dependency?

    • Authors: Anastazia Žunić, Padraig Corcoran, Irena Spasić
      First page: 474
      Abstract: (1) Background: Aspect-based sentiment analysis (SA) is a natural language processing task, the aim of which is to classify the sentiment associated with a specific aspect of a written text. The performance of SA methods applied to texts related to health and well-being lags behind that of other domains. (2) Methods: In this study, we present an approach to aspect-based SA of drug reviews. Specifically, we analysed signs and symptoms, which were extracted automatically using the Unified Medical Language System. This information was then passed onto the BERT language model, which was extended by two layers to fine-tune the model for aspect-based SA. The interpretability of the model was analysed using an axiomatic attribution method. We performed a correlation analysis between the attribution scores and syntactic dependencies. (3) Results: Our fine-tuned model achieved accuracy of approximately 95% on a well-balanced test set. It outperformed our previous approach, which used syntactic information to guide the operation of a neural network and achieved an accuracy of approximately 82%. (4) Conclusions: We demonstrated that a BERT-based model of SA overcomes the negative bias associated with health-related aspects and closes the performance gap against the state-of-the-art in other domains.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-05-13
      DOI: 10.3390/make4020021
      Issue No: Vol. 4, No. 2 (2022)
       
  • MAKE, Vol. 4, Pages 22-41: A Transfer Learning Evaluation of Deep Neural
           Networks for Image Classification

    • Authors: Nermeen Abou Baker, Nico Zengeler, Uwe Handmann
      First page: 22
      Abstract: Transfer learning is a machine learning technique that uses previously acquired knowledge from a source domain to enhance learning in a target domain by reusing learned weights. This technique is ubiquitous because of its great advantages in achieving high performance while saving training time, memory, and effort in network design. In this paper, we investigate how to select the best pre-trained model that meets the target domain requirements for image classification tasks. In our study, we refined the output layers and general network parameters to apply the knowledge of eleven image processing models, pre-trained on ImageNet, to five different target domain datasets. We measured the accuracy, accuracy density, training time, and model size to evaluate the pre-trained models both in training sessions in one episode and with ten episodes.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-01-14
      DOI: 10.3390/make4010002
      Issue No: Vol. 4, No. 1 (2022)
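The refinement the authors describe, keeping a pre-trained backbone and retraining only the output layer, reduces in the simplest case to fitting a linear head on frozen features. The sketch below stands in for that idea with a hand-written SGD loop; the feature extractor and data are illustrative, not one of the paper's eleven ImageNet models.

```python
def finetune_head(features, data, lr=0.1, epochs=200):
    """Train a fresh linear head on top of a frozen feature extractor.

    `features` plays the role of the pre-trained backbone: it is applied
    but never updated; the head weights are the only learned parameters."""
    w = [0.0] * len(features(data[0][0]))
    for _ in range(epochs):
        for x, y in data:
            z = features(x)
            err = sum(wi * zi for wi, zi in zip(w, z)) - y
            w = [wi - lr * err * zi for wi, zi in zip(w, z)]
    return w

# Frozen "backbone" emits a bias feature plus the raw input.
backbone = lambda x: [1.0, x]
head = finetune_head(backbone, [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
# head converges near [1.0, 2.0], recovering y = 2x + 1.
```

In a deep learning framework the same pattern is usually expressed by freezing the backbone's parameters and replacing its final classification layer, which is the refinement of "output layers and general network parameters" the abstract refers to.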
       
  • MAKE, Vol. 4, Pages 42-65: NER in Archival Finding Aids: Extended

    • Authors: Luís Filipe da Costa Cunha, José Carlos Ramalho
      First page: 42
      Abstract: The amount of information preserved in Portuguese archives has increased over the years. These documents represent a national heritage of high importance, as they portray the country’s history. Currently, most Portuguese archives have made their finding aids available to the public in digital format; however, these data lack annotation, so it is not always easy to analyze their content. In this work, Named Entity Recognition solutions were created that allow the identification and classification of several named entities from the archival finding aids. These named entities translate into crucial information about their context and, with high-confidence results, they can be used for several purposes, for example, the creation of smart browsing tools by using entity linking and record linking techniques. In order to achieve high result scores, we annotated several corpora to train our own Machine Learning algorithms in this domain. We also used different architectures, such as CNNs, LSTMs, and Maximum Entropy models. Finally, all the created datasets and ML models were made available to the public with a developed web platform, NER@DI.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-01-17
      DOI: 10.3390/make4010003
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 66-102: A Survey of Near-Data Processing Architectures
           for Neural Networks

    • Authors: Mehdi Hassanpour, Marc Riera, Antonio González
      First page: 66
      Abstract: Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von-Neumann architecture. As data movement operations and energy consumption become key bottlenecks in the design of computing systems, the interest in unconventional approaches such as Near-Data Processing (NDP), machine learning, and especially neural network (NN)-based accelerators has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked memory, are promising for efficiently architecting NDP-based accelerators for NN due to their capabilities to work as both high-density/low-energy storage and in/near-memory computation/search engine. In this paper, we present a survey of techniques for designing NDP architectures for NN. By classifying the techniques based on the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend the adoption of NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-01-17
      DOI: 10.3390/make4010004
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 103-104: Acknowledgment to Reviewers of Machine
           Learning and Knowledge Extraction in 2021

    • Authors: Machine Learning and Knowledge Extraction Editorial Office
      First page: 103
      Abstract: Rigorous peer-reviews are the basis of high-quality academic publishing [...]
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-01-28
      DOI: 10.3390/make4010005
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 105-130: Machine Learning Based Restaurant Sales
           Forecasting

    • Authors: Austin Schmidt, Md Wasi Ul Kabir, Md Tamjidul Hoque
      First page: 105
      Abstract: To encourage proper employee scheduling for managing crew load, restaurants need accurate sales forecasting. This paper presents a case study of many machine learning (ML) models using real-world sales data from a mid-sized restaurant. Popular recurrent neural network (RNN) models are included for direct comparison with the other methods. To test the effects of trend and seasonality, we generate three different datasets on which to train our models and compare our results. To aid in forecasting, we engineer many features and demonstrate good methods to select an optimal subset of highly correlated features. We compare the models based on their performance for forecasting time steps of one day and one week over a curated test dataset. The best results seen in one-day forecasting come from linear models, with a sMAPE of only 19.6%. Two RNN models, LSTM and TFT, and ensemble models also performed well, with errors less than 20%. When forecasting one week ahead, non-RNN models performed poorly, with errors worse than 20%. RNN models handled the longer horizon better, achieving a best sMAPE of 19.5%. The RNN models performed worse overall on datasets with trend and seasonality removed; however, many simpler ML models performed well when linearly separating each training instance.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-01-30
      DOI: 10.3390/make4010006
      Issue No: Vol. 4, No. 1 (2022)
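sMAPE, the error measure quoted above, is worth pinning down since several variants exist; the sketch uses one common definition (absolute error over the mean of the absolute actual and forecast values), which may differ in detail from the paper's.

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.

    Zero-valued pairs are skipped to avoid division by zero."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast) if abs(a) + abs(f) > 0]
    return 100 * sum(terms) / len(terms)

# A perfect forecast scores 0; the measure is bounded above by 200.
assert smape([100.0, 200.0], [100.0, 200.0]) == 0.0
```

By this definition, forecasting 110 and 190 against actuals of 100 and 200 scores roughly 7.3%, well inside the ~20% range the abstract reports.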
       
  • MAKE, Vol. 4, Pages 131-149: A Novel Framework for Fast Feature Selection
           Based on Multi-Stage Correlation Measures

    • Authors: Ivan-Alejandro Garcia-Ramirez, Arturo Calderon-Mora, Andres Mendez-Vazquez, Susana Ortega-Cisneros, Ivan Reyes-Amezcua
      First page: 131
      Abstract: Datasets with thousands of features represent a challenge for many of the existing learning methods because of the well-known curse of dimensionality. Not only that, but the presence of irrelevant and redundant features in any dataset can degrade the performance of any model where training and inference is attempted. In addition, in large datasets, the manual management of features tends to be impractical. Hence the growing interest in the Machine Learning literature in frameworks for the automatic discovery and removal of useless features. This is why, in this paper, we propose a novel framework for selecting relevant features in supervised datasets, based on a cascade of methods designed with speed and precision in mind. This framework consists of a novel combination of Approximated and Simulated Annealing versions of the Maximal Information Coefficient (MIC) to generalize the simple linear relation between features. This process is performed in a series of steps by applying the MIC algorithms and cutoff strategies to remove irrelevant and redundant features. The framework is also designed to achieve a balance between accuracy and speed. To test the performance of the proposed framework, a series of experiments are conducted on a large battery of datasets, from SPECTF Heart to Sonar data. The results show the balance of accuracy and speed that the proposed framework can achieve.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-02-08
      DOI: 10.3390/make4010007
      Issue No: Vol. 4, No. 1 (2022)
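The cascade idea, a cheap relevance filter followed by a redundancy filter, can be sketched as below. Pearson correlation stands in for the paper's MIC variants purely to keep the example short, and the thresholds are illustrative.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def cascade_select(features, target, relevance=0.3, redundancy=0.9):
    """Stage 1: drop features weakly correlated with the target (irrelevant).
    Stage 2: greedily drop features nearly duplicating a kept one (redundant)."""
    relevant = [f for f in features
                if abs(pearson(features[f], target)) >= relevance]
    kept = []
    for f in sorted(relevant, key=lambda f: -abs(pearson(features[f], target))):
        if all(abs(pearson(features[f], features[g])) < redundancy for g in kept):
            kept.append(f)
    return kept
```

Running the cascade on a toy table with one informative feature, one noise feature, and one exact copy keeps only the informative feature, which is the irrelevance-then-redundancy behavior the framework scales up with MIC.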
       
  • MAKE, Vol. 4, Pages 150-171: Explainable Machine Learning Reveals
           Capabilities, Redundancy, and Limitations of a Geospatial Air Quality
           Benchmark Dataset

    • Authors: Scarlet Stadtler, Clara Betancourt, Ribana Roscher
      First page: 150
      Abstract: Air quality is relevant to society because it poses environmental risks to humans and nature. We use explainable machine learning in air quality research by analyzing model predictions in relation to the underlying training data. The data originate from worldwide ozone observations, paired with geospatial data. We use two different architectures: a neural network and a random forest trained on various geospatial data to predict multi-year averages of the air pollutant ozone. To understand how both models function, we explain how they represent the training data and derive their predictions. By focusing on inaccurate predictions and explaining why these predictions fail, we can (i) identify underrepresented samples, (ii) flag unexpected inaccurate predictions, and (iii) point to training samples irrelevant for predictions on the test set. Based on the underrepresented samples, we suggest where to build new measurement stations. We also show which training samples do not substantially contribute to the model performance. This study demonstrates the application of explainable machine learning beyond simply explaining the trained model.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-02-11
      DOI: 10.3390/make4010008
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 172-221: Hierarchical Reinforcement Learning: A Survey
           and Open Research Challenges

    • Authors: Matthias Hutsebaut-Buysse, Kevin Mets, Steven Latré
      First page: 172
      Abstract: Reinforcement learning (RL) allows an agent to solve sequential decision-making problems by interacting with an environment in a trial-and-error fashion. When these environments are very complex, pure random exploration of possible solutions often fails, or is very sample inefficient, requiring an unreasonable amount of interaction with the environment. Hierarchical reinforcement learning (HRL) utilizes forms of temporal- and state-abstractions in order to tackle these challenges, while simultaneously paving the road for behavior reuse and increased interpretability of RL systems. In this survey paper we first introduce a selection of problem-specific approaches, which provided insight in how to utilize often handcrafted abstractions in specific task settings. We then introduce the Options framework, which provides a more generic approach, allowing abstractions to be discovered and learned semi-automatically. Afterwards we introduce the goal-conditional approach, which allows sub-behaviors to be embedded in a continuous space. In order to further advance the development of HRL agents, capable of simultaneously learning abstractions and how to use them, solely from interaction with complex high dimensional environments, we also identify a set of promising research directions.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-02-17
      DOI: 10.3390/make4010009
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 222-239: An Analysis of Cholesteric Spherical
           Reflector Identifiers for Object Authenticity Verification

    • Authors: Arenas, Demirci, Lenzini
      First page: 222
      Abstract: Arrays of Cholesteric Spherical Reflectors (CSRs), microscopic cholesteric liquid crystals in a spherical shape, have been argued to become a game-changing technology in anti-counterfeiting. Used to build identifiable tags or coatings, called CSR IDs, they can supply objects with unclonable fingerprint-like characteristics, making it possible to authenticate objects. In a previous study, we have shown how to extract minutiæ from CSR IDs. In this journal version, we build on that previous research, consolidate the methodology, and test it over CSR IDs obtained by different production processes. We measure the robustness and reliability of our procedure on large and varied sets of CSR IDs’ images taken with a professional microscope (Laboratory Data set) and with a microscope that could be used in a realistic scenario (Realistic Data set). We measure intra-distance and inter-distance, proving that we can distinguish images coming from the same CSR ID from images of different CSR IDs. However, unsurprisingly, images in the Laboratory Data set have an intra-distance that is smaller on average, and has less variance, than the intra-distance between responses from the Realistic Data set. With this evidence, we discuss a few requirements for an anti-counterfeiting technology based on CSRs.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-02-24
      DOI: 10.3390/make4010010
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 240-253: Developing a Novel Fair-Loan Classifier
           through a Multi-Sensitive Debiasing Pipeline: DualFair

    • Authors: Arashdeep Singh, Jashandeep Singh, Ariba Khan, Amar Gupta
      First page: 240
      Abstract: Machine learning (ML) models are increasingly being used for high-stake applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white), thus allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both accuracy and fairness from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI, a measure of bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our novel mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-03-12
      DOI: 10.3390/make4010011
      Issue No: Vol. 4, No. 1 (2022)
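The AWI metric above scores bias via inconsistent counterfactual predictions. A minimal sketch of that idea, not the paper's actual formulation; the predictors, records, and option names below are invented for illustration:

```python
from itertools import product

def counterfactual_inconsistency(predict, records, sensitive_options):
    """Fraction of records whose prediction flips when sensitive
    attributes are replaced by any alternative option: a rough,
    illustrative analogue of an AWI-style bias score."""
    inconsistent = 0
    for rec in records:
        base = predict(rec)
        keys = list(sensitive_options)
        # try every combination of sensitive-attribute values
        for combo in product(*(sensitive_options[k] for k in keys)):
            twin = dict(rec)
            twin.update(dict(zip(keys, combo)))
            if predict(twin) != base:
                inconsistent += 1
                break
    return inconsistent / len(records)

# Invented toy predictors: one biased on race, one not.
biased = lambda r: int(r["income"] > 50 and r["race"] == "A")
fair = lambda r: int(r["income"] > 50)

data = [{"income": 60, "race": "A"}, {"income": 60, "race": "B"},
        {"income": 40, "race": "A"}]
opts = {"race": ["A", "B"]}
print(counterfactual_inconsistency(biased, data, opts))  # 2 of 3 records flip
print(counterfactual_inconsistency(fair, data, opts))    # 0.0
```

A fair predictor never changes its output under such flips, so its score is zero.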
       
  • MAKE, Vol. 4, Pages 254-275: Comparison of Text Mining Models for Food and
           Dietary Constituent Named-Entity Recognition

    • Authors: Nadeesha Perera, Thi Thuy Linh Nguyen, Matthias Dehmer, Frank Emmert-Streib
      First page: 254
      Abstract: Biomedical Named-Entity Recognition (BioNER) has become an essential part of text mining due to the continuously increasing digital archives of biological and medical articles. While there are many well-performing BioNER tools for entities such as genes, proteins, diseases or species, there is very little research into food and dietary constituent named-entity recognition. For this reason, in this paper, we study seven BioNER models for food and dietary constituent recognition. Specifically, we study a dictionary-based model, a conditional random fields (CRF) model and a new hybrid model, called FooDCoNER (Food and Dietary Constituents Named-Entity Recognition), which we introduce by combining the former two models. In addition, we study deep language models including BERT, BioBERT, RoBERTa and ELECTRA. As a result, we find that FooDCoNER not only leads to the overall best results, comparable with those of the deep language models, but is also much more efficient with respect to run time and the sample-size requirements of the training data. The latter has been identified via the study of learning curves. Overall, our results not only provide a new tool for food and dietary constituent NER but also shed light on the difference between classical machine learning models and recent deep language models.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-03-16
      DOI: 10.3390/make4010012
      Issue No: Vol. 4, No. 1 (2022)
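FooDCoNER combines a dictionary-based model with a CRF. A hedged, self-contained sketch of only the dictionary component; the lexicon and sentence are invented, and greedy longest-match is one common strategy, not necessarily the paper's:

```python
def dictionary_ner(tokens, lexicon):
    """Greedy longest-match tagging of dictionary entries in a token
    list; returns (start, end, surface) spans."""
    max_len = max(len(e.split()) for e in lexicon)
    spans, i = [], 0
    while i < len(tokens):
        # try the longest candidate span first, then shrink
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n]).lower()
            if cand in lexicon:
                spans.append((i, i + n, cand))
                i += n
                break
        else:
            i += 1
    return spans

foods = {"olive oil", "vitamin c", "garlic"}
sent = "Add Garlic and olive oil , then vitamin C".split()
print(dictionary_ner(sent, foods))
```

A CRF layer would then label the remaining tokens from contextual features, which is what the hybrid model exploits.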
       
  • MAKE, Vol. 4, Pages 276-315: Robust Reinforcement Learning: A Review of
           Foundations and Recent Advances

    • Authors: Janosch Moos, Kay Hansel, Hany Abdulsamad, Svenja Stark, Debora Clever, Jan Peters
      First page: 276
      Abstract: Reinforcement learning (RL) has become a highly successful framework for learning in Markov decision processes (MDP). Due to the adoption of RL in realistic and complex environments, solution robustness becomes an increasingly important aspect of RL deployment. Nevertheless, current RL algorithms struggle with robustness to uncertainty, disturbances, or structural changes in the environment. We survey the literature on robust approaches to reinforcement learning and categorize these methods in four different ways: (i) Transition robust designs account for uncertainties in the system dynamics by manipulating the transition probabilities between states; (ii) Disturbance robust designs leverage external forces to model uncertainty in the system behavior; (iii) Action robust designs redirect transitions of the system by corrupting an agent’s output; (iv) Observation robust designs exploit or distort the perceived system state of the policy. Each of these robust designs alters a different aspect of the MDP. Additionally, we address the connection of robustness to the risk-based and entropy-regularized RL formulations. The resulting survey covers all fundamental concepts underlying the approaches to robust reinforcement learning and their recent advances.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2022-03-19
      DOI: 10.3390/make4010013
      Issue No: Vol. 4, No. 1 (2022)
       
  • MAKE, Vol. 4, Pages 1-21: A Comparison of Surrogate Modeling Techniques
           for Global Sensitivity Analysis in Hybrid Simulation

    • Authors: Nikolaos Tsokanas, Roland Pastorino, Božidar Stojadinović
      First page: 1
      Abstract: Hybrid simulation is a method used to investigate the dynamic response of a system subjected to a realistic loading scenario. The system under consideration is divided into multiple individual substructures, of which one or more are tested physically, whereas the remaining ones are simulated numerically. The coupling of all substructures forms the so-called hybrid model. Although hybrid simulation is extensively used across various engineering disciplines, the hybrid model and the related excitation are often conceived as deterministic. However, associated uncertainties are present, and the simulation deviations due to them can be significant. In this regard, global sensitivity analysis based on Sobol’ indices can be used to determine the sensitivity of the hybrid model response to the associated uncertainties. Nonetheless, estimating the Sobol’ sensitivity indices requires an unaffordable number of hybrid simulation evaluations. Therefore, surrogate modeling techniques using machine learning data-driven regression are utilized to alleviate this burden. This study extends current global sensitivity analysis practice in hybrid simulation by employing several surrogate modeling methodologies and providing comparative results. In particular, polynomial chaos expansion, Kriging and polynomial chaos Kriging are used. A case study encompassing a virtual hybrid model is employed, and hybrid model response quantities of interest are selected. Their respective surrogates are developed using all three aforementioned techniques. The Sobol’ indices obtained with each examined surrogate are compared with each other, and the results highlight potential deviations when different surrogates are used.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-12-24
      DOI: 10.3390/make4010001
      Issue No: Vol. 4, No. 1 (2021)
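Sobol’ first-order indices can be estimated by Monte Carlo when model evaluations are cheap; surrogates such as polynomial chaos expansion or Kriging stand in for the model when they are not. A rough pure-Python sketch of a Saltelli-style estimator on a toy model with analytically known indices (not the study's hybrid model):

```python
import random

def sobol_first_order(f, dim, n=20000, rng=None):
    """Crude Saltelli-style Monte Carlo estimate of first-order Sobol'
    indices for f over independent U(0,1) inputs."""
    rng = rng or random.Random(0)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # evaluate f on A with column i replaced by B's column i
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(yb * (yab - ya) for yb, yab, ya in zip(fB, fABi, fA)) / n
        indices.append(s / var)
    return indices

# Toy linear model: S1 = 1/(1 + 4) = 0.2 and S2 = 0.8 analytically.
model = lambda x: x[0] + 2 * x[1]
print(sobol_first_order(model, 2))  # approx [0.2, 0.8]
```

Each index requires n extra model evaluations, which is exactly why the study replaces the costly hybrid model with cheap surrogates.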
       
  • MAKE, Vol. 3, Pages 771-787: A Critical Study on Stability Measures of
           Feature Selection with a Novel Extension of Lustgarten Index

    • Authors: Rikta Sen, Ashis Kumar Mandal, Basabi Chakraborty
      First page: 771
      Abstract: The stability of a feature selection algorithm refers to its robustness to perturbations of the training set, parameter settings or initialization. A stable feature selection algorithm is crucial for identifying a relevant feature subset of meaningful and interpretable features, which is extremely important in the task of knowledge discovery. Though many stability measures have been reported in the literature for evaluating the stability of feature selection, none of them satisfies all the requisite properties of a stability measure. Among them, the Kuncheva index and its modifications are widely used in practical problems. In this work, the merits and limitations of the Kuncheva index and its existing modifications (Lustgarten, Wald, nPOG/nPOGR, Nogueira) are studied and analysed with respect to the requisite properties of a stability measure. One more limitation of the most recent modified similarity measure, Nogueira’s measure, is pointed out. Finally, corrections to Lustgarten’s measure are proposed to define a new modified stability measure that satisfies the desired properties and overcomes the limitations of existing popular similarity-based stability measures. The effectiveness of the newly modified Lustgarten’s measure is evaluated with simple toy experiments.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-09-24
      DOI: 10.3390/make3040038
      Issue No: Vol. 3, No. 4 (2021)
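For reference, the Kuncheva index that the modified measures build on corrects the raw overlap of two equal-size feature subsets by the overlap expected by chance; a minimal sketch:

```python
def kuncheva_index(a, b, n_features):
    """Kuncheva's consistency index between two selected feature
    subsets of equal size k drawn from n_features, correcting the
    raw overlap for the chance-level overlap k^2 / n."""
    a, b = set(a), set(b)
    k = len(a)
    assert len(b) == k, "Kuncheva's index assumes equal subset sizes"
    expected = k * k / n_features
    return (len(a & b) - expected) / (k - expected)

# Identical subsets score 1; disjoint subsets score below chance (0).
print(kuncheva_index({1, 2, 3}, {1, 2, 3}, 10))  # 1.0
print(kuncheva_index({1, 2, 3}, {4, 5, 6}, 10))  # negative
```

The equal-size assumption in the assert is precisely the restriction that Lustgarten's modification, and the correction proposed in the paper, relax.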
       
  • MAKE, Vol. 3, Pages 788-801: An Assessment of the Application of Private
           Aggregation of Ensemble Models to Sensible Data

    • Authors: Sergio Yovine, Franz Mayr, Sebastián Sosa, Ramiro Visca
      First page: 788
      Abstract: This paper explores the use of Private Aggregation of Teacher Ensembles (PATE) in a setting where students have their own private data that cannot be revealed as is to the ensemble. We propose a privacy model that introduces a local differentially private mechanism to protect student data. We implemented and analyzed it in case studies from the security and health domains, and the result of the experiment was twofold. First, this model does not significantly affect predictive capabilities, and second, it unveiled interesting issues with the so-called data-dependent privacy loss metric, namely, its high variance and high values.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-09-25
      DOI: 10.3390/make3040039
      Issue No: Vol. 3, No. 4 (2021)
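As background, standard PATE releases a label as the argmax over Laplace-noised teacher vote counts; the paper's contribution, a local DP mechanism on the student side, is not reproduced here. A hedged sketch with invented labels:

```python
import math
import random
from collections import Counter

def noisy_max(teacher_labels, epsilon, rng=None):
    """Standard PATE aggregation: argmax over Laplace-noised vote
    counts, limiting the influence of any single teacher's vote."""
    rng = rng or random.Random(42)
    def laplace(scale):
        # inverse-CDF sampling of a Laplace(0, scale) variate
        u = rng.random() - 0.5
        return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    votes = Counter(teacher_labels)
    noisy = {lab: c + laplace(2 / epsilon) for lab, c in votes.items()}
    return max(noisy, key=noisy.get)

# 90 teachers vote "spam", 10 vote "ham": the margin dwarfs the noise.
print(noisy_max(["spam"] * 90 + ["ham"] * 10, epsilon=1.0))
```

With a wide vote margin the noise almost never changes the released label, which is why consensus queries are cheap in privacy terms.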
       
  • MAKE, Vol. 3, Pages 802-818: Knowledge Graphs Representation for
           Event-Related E-News Articles

    • Authors: M.V.P.T. Lakshika, H.A. Caldera
      First page: 802
      Abstract: E-newspaper readers are overloaded with massive texts in e-news articles, which can easily mislead a reader trying to extract information. Thus, there is an urgent need for technology that can automatically represent the gist of these e-news articles more quickly. Popular machine learning approaches have greatly improved presentation accuracy compared to traditional methods, but they cannot accommodate the contextual information needed to acquire higher-level abstraction. Recent research efforts in knowledge representation using graph approaches are neither user-driven nor flexible to deviations in the data. Thus, attention has turned to constructing knowledge graphs that combine background information related to the subjects of text documents. We propose an enhanced representation of a scalable knowledge graph that automatically extracts information from a corpus of e-news articles, and we determine whether such a knowledge graph can serve as an efficient application for analyzing and generating knowledge representations from the extracted corpus. This knowledge graph consists of a knowledge base built from triples that automatically produce a knowledge representation of the e-news articles. Overall, it has been observed that the proposed knowledge graph generates a comprehensive and precise knowledge representation for the corpus of e-news articles.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-09-26
      DOI: 10.3390/make3040040
      Issue No: Vol. 3, No. 4 (2021)
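The knowledge base of triples described above can be pictured as a minimal subject-predicate-object store with wildcard queries; a hedged sketch with invented example triples:

```python
class TripleStore:
    """Minimal subject-predicate-object store of the kind that backs a
    knowledge graph; None acts as a wildcard in pattern queries."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

kg = TripleStore()
kg.add("storm", "affected", "coastal_town")
kg.add("storm", "reported_by", "daily_news")
print(kg.query(s="storm", p="affected"))
```

Extraction pipelines populate such a store from article text; graph rendering and reasoning then operate over the stored triples.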
       
  • MAKE, Vol. 3, Pages 819-834: Fully Homomorphically Encrypted Deep Learning
           as a Service

    • Authors: George Onoufriou, Paul Mayfield, Georgios Leontidis
      First page: 819
      Abstract: Fully Homomorphic Encryption (FHE) is a relatively recent advancement in the field of privacy-preserving technologies. FHE allows for the arbitrary-depth computation of both addition and multiplication, and thus the application of abelian/polynomial equations, like those found in deep learning algorithms. This project investigates how FHE with deep learning can be used at scale toward accurate sequence prediction with relatively low time complexity, the problems that such a system incurs, and mitigations/solutions for those problems. In addition, we discuss how this could have an impact on the future of data privacy and how it can enable data sharing across various actors in the agri-food supply chain, hence allowing the development of machine learning-based systems. Finally, we find that although FHE incurs a high spatial complexity cost, the run time is within expected reasonable bounds, while allowing for absolutely private predictions to be made, in our case for milk yield prediction with a Mean Absolute Percentage Error (MAPE) of 12.4% and an accuracy of 87.6% on average.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-10-13
      DOI: 10.3390/make3040041
      Issue No: Vol. 3, No. 4 (2021)
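The reported accuracy of 87.6% is simply 100 minus the MAPE of 12.4%; for clarity, MAPE is computed as follows (the yield values below are invented):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error in percent; requires nonzero
    actual values."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

yields = [30.0, 25.0, 40.0]
preds = [27.0, 26.0, 44.0]
print(mape(yields, preds))  # (10% + 4% + 10%) / 3 = 8.0
```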
       
  • MAKE, Vol. 3, Pages 835-862: A Review of the Role of Machine Learning
           Techniques towards Brain–Computer Interface Applications

    • Authors: Saim Rasheed
      First page: 835
      Abstract: This review article provides a deep insight into the Brain–Computer Interface (BCI) and the application of Machine Learning (ML) technology in BCIs. It investigates the various types of research undertaken in this realm and discusses the role played by ML in performing different BCI tasks. It also reviews the ML methods used for mental state detection, mental task categorization, emotion classification, electroencephalogram (EEG) signal classification, event-related potential (ERP) signal classification, motor imagery categorization, and limb movement classification. This work explores the various methods employed in BCI mechanisms for feature extraction, selection, and classification and provides a comparative study of the reviewed methods. This paper helps readers gain information regarding the developments made in the BCI and ML domains and the future improvements needed to design better BCI applications.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-10-26
      DOI: 10.3390/make3040042
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 863-878: Recent Advances in Deep Reinforcement
           Learning Applications for Solving Partially Observable Markov Decision
           Processes (POMDP) Problems Part 2—Applications in Transportation,
           Industries, Communications and Networking and More Topics

    • Authors: Xuanchen Xiang, Simon Foo, Huanyu Zang
      First page: 863
      Abstract: The two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) for solving partially observable Markov decision process (POMDP) problems. Reinforcement Learning (RL) is an approach to simulating humans’ natural learning process, whose key is to let the agent learn by interacting with the stochastic environment. The fact that the agent has limited access to the information of the environment enables AI to be applied efficiently in most fields that require self-learning. It is essential to have an organized investigation so that we can make good comparisons and choose the best structures or algorithms when applying DRL in various applications. The first part of the overview introduces Markov Decision Process (MDP) problems and Reinforcement Learning, along with applications of DRL for solving POMDP problems in games, robotics, and natural language processing. In part two, we continue to introduce applications in transportation, industries, communications and networking, etc. and discuss the limitations of DRL.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-10-28
      DOI: 10.3390/make3040043
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 879-899: Deep Self-Organizing Map of Convolutional
           Layers for Clustering and Visualizing Image Data

    • Authors: Christos Ferles, Yannis Papanikolaou, Stylianos P. Savaidis, Stelios A. Mitilineos
      First page: 879
      Abstract: The self-organizing convolutional map (SOCOM) hybridizes convolutional neural networks, self-organizing maps, and gradient backpropagation optimization into a novel integrated unsupervised deep learning model. SOCOM structurally combines, architecturally stacks, and algorithmically fuses its deep/unsupervised learning components. The higher-level representations produced by its underlying convolutional deep architecture are embedded in its topologically ordered neural map output. The ensuing unsupervised clustering and visualization operations reflect the model’s degree of synergy between its building blocks and synopsize its range of applications. Clustering results are reported on the STL-10 benchmark dataset coupled with the devised neural map visualizations. The series of conducted experiments utilize a deep VGG-based SOCOM model.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-11-14
      DOI: 10.3390/make3040044
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 900-921: A Multi-Component Framework for the Analysis
           and Design of Explainable Artificial Intelligence

    • Authors: Mi-Young Kim, Shahin Atakishiyev, Housam Khalifa Bashier Babiker, Nawshad Farruque, Randy Goebel, Osmar R. Zaïane, Mohammad-Hossein Motallebi, Juliano Rabelo, Talat Syed, Hengshuai Yao, Peter Chun
      First page: 900
      Abstract: The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles to ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver any set of tools and techniques to address the XAI demand. As some surveys of current XAI suggest, there is yet to appear a principled framework that respects the literature of explainability in the history of science and which provides a basis for the development of a framework for transparent XAI. We identify four foundational components, including the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjusting explanations based on knowledge of the explainee, and (4) exploiting the advantage of interactive explanation. With those four components in mind, we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-11-18
      DOI: 10.3390/make3040045
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 922-945: Language Semantics Interpretation with an
           Interaction-Based Recurrent Neural Network

    • Authors: Shaw-Hwa Lo, Yiqiao Yin
      First page: 922
      Abstract: Text classification is a fundamental language task in Natural Language Processing. A variety of sequential models are capable of making good predictions, yet there is a lack of connection between language semantics and prediction results. This paper proposes a novel influence score (I-score), a greedy search algorithm, called the Backward Dropping Algorithm (BDA), and a novel feature engineering technique called the “dagger technique”. First, the paper proposes to use the novel influence score (I-score) to detect and search for the important language semantics in text documents that are useful for making good predictions in text classification tasks. Next, a greedy search algorithm, called the Backward Dropping Algorithm, is proposed to handle long-term dependencies in the dataset. Moreover, the paper proposes a novel engineering technique called the “dagger technique” that fully preserves the relationship between the explanatory variable and the response variable. The proposed techniques can be generalized to any feed-forward Artificial Neural Network (ANN) or Convolutional Neural Network (CNN). In a real-world application on the Internet Movie Database (IMDB), the proposed methods improve prediction performance, with an 81% error reduction compared to popular peer methods that do not implement the I-score and “dagger technique”.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-11-19
      DOI: 10.3390/make3040046
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 946-965: A Novel Feature Representation for Prediction
           of Global Horizontal Irradiance Using a Bidirectional Model

    • Authors: Sourav Malakar, Saptarsi Goswami, Bhaswati Ganguli, Amlan Chakrabarti, Sugata Sen Roy, K. Boopathi, A. G. Rangaraj
      First page: 946
      Abstract: Complex weather conditions, in particular clouds, lead to uncertainty in photovoltaic (PV) systems, which makes solar energy prediction very difficult. Currently, in the renewable energy domain, deep-learning-based sequence models have reported better results compared to state-of-the-art machine-learning models. There are quite a few choices of deep-learning architectures, among which the Bidirectional Gated Recurrent Unit (BGRU) has apparently not been used earlier in the solar energy domain. In this paper, a BGRU was used with a new augmented and bidirectional feature representation. The BGRU network used is more generalized, as it can handle unequal lengths of forward and backward context. The proposed model produced 59.21%, 37.47%, and 76.80% better prediction accuracy compared to traditional sequence-based models, bidirectional models, and some established state-of-the-art models, respectively. The testbed considered for evaluating the model is far more comprehensive and reliable, considering the variability in climatic zones and seasons, than those of some recent studies in India.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-11-25
      DOI: 10.3390/make3040047
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 966-989: Analysis of Explainers of Black Box Deep
           Neural Networks for Computer Vision: A Survey

    • Authors: Vanessa Buhrmester, David Münch, Michael Arens
      First page: 966
      Abstract: Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Due to their multilayer nonlinear structure, Deep Neural Networks are black box models, often criticized as non-transparent and with predictions that are not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to a lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between the input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks for Computer Vision tasks. We give an overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize further research ideas.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-12-08
      DOI: 10.3390/make3040048
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 990-1008: AI-Based Video Clipping of Soccer Events

    • Authors: Joakim Olav Valand, Haris Kadragic, Steven Alexander Hicks, Vajira Lasantha Thambawita, Cise Midoglu, Tomas Kupka, Dag Johansen, Michael Alexander Riegler, Pål Halvorsen
      First page: 990
      Abstract: The current gold standard for extracting highlight clips from soccer games is the use of manual annotations and clippings, where human operators define the start and end of an event and trim away the unwanted scenes. This is a tedious, time-consuming, and expensive task, to the extent of being rendered infeasible for use in lower league games. In this paper, we aim to automate the process of highlight generation using logo transition detection, scene boundary detection, and optional scene removal. We experiment with various approaches, using different neural network architectures on different datasets, and present two models that automatically find the appropriate time interval for extracting goal events. These models are evaluated both quantitatively and qualitatively, and the results show that we can detect logo and scene transitions with high accuracy and generate highlight clips that are highly acceptable for viewers. We conclude that there is considerable potential in automating the overall soccer video clipping process.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-12-08
      DOI: 10.3390/make3040049
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 1009-1029: Detection and Classification of Knee
           Injuries from MR Images Using the MRNet Dataset with Progressively
           Operating Deep Learning Methods

    • Authors: Ali Can Kara, Fırat Hardalaç
      First page: 1009
      Abstract: This study aimed to build progressively operating deep learning models that could detect meniscus injuries, anterior cruciate ligament (ACL) tears and knee abnormalities in magnetic resonance imaging (MRI). The Stanford Machine Learning Group MRNet dataset was employed in the study, which included MRI image indexes in the coronal, sagittal, and axial axes, each having 1130 training and 120 validation items. The study is divided into three sections. In the first section, suitable images are selected to determine the disease in the image index based on the disturbance under examination. This section is also used to identify images that have been misclassified or are noisy and/or damaged to the degree that they cannot be utilised for diagnosis. The study employed the 50-layer residual network (ResNet50) model in this section. The second section of the study involves locating the region to be focused on, based on the disturbance that is targeted to be diagnosed in the image under examination. A novel model was built in the second section by integrating convolutional neural networks (CNN) and denoising autoencoder models. The third section is dedicated to making a diagnosis of the disease. In this section, a novel ResNet50 model is trained to identify disease diagnoses or abnormalities, independent of the ResNet50 model used in the first section. The models are referred to as progressively operating deep learning methods because the images that each model selects as output after training are supplied as input to the following model.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-12-16
      DOI: 10.3390/make3040050
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 1030-1054: Automated Event Detection and
           Classification in Soccer: The Potential of Using Multiple Modalities

    • Authors: Olav Andre Nergård Rongved, Markus Stige, Steven Alexander Hicks, Vajira Lasantha Thambawita, Cise Midoglu, Evi Zouganeli, Dag Johansen, Michael Alexander Riegler, Pål Halvorsen
      First page: 1030
      Abstract: Detecting events in videos is a complex task, and many different approaches, aimed at a large variety of use-cases, have been proposed in the literature. Most approaches, however, are unimodal and only consider the visual information in the videos. This paper presents and evaluates different approaches based on neural networks where we combine visual features with audio features to detect (spot) and classify events in soccer videos. We employ model fusion to combine different modalities such as video and audio, and test these combinations against different state-of-the-art models on the SoccerNet dataset. The results show that a multimodal approach is beneficial. We also analyze how the tolerance for delays in classification and spotting time, and the tolerance for prediction accuracy, influence the results. Our experiments show that using multiple modalities improves event detection performance for certain types of events.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-12-16
      DOI: 10.3390/make3040051
      Issue No: Vol. 3, No. 4 (2021)
       
  • MAKE, Vol. 3, Pages 525-541: Deterministic Local Interpretable
           Model-Agnostic Explanations for Stable Explainability

    • Authors: Muhammad Rehman Zafar, Naimul Khan
      First page: 525
      Abstract: Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction, generating simulated data around the instance by random perturbation, and obtaining feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster of the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-06-30
      DOI: 10.3390/make3030027
      Issue No: Vol. 3, No. 3 (2021)
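A hedged sketch of DLIME's deterministic idea, simplified for brevity: the paper selects the relevant AHC cluster via KNN, whereas the sketch below uses a plain KNN neighborhood and fits per-feature univariate slopes as the linear surrogate (data and names invented):

```python
def dlime_sketch(X, y, x0, k=5):
    """Deterministic LIME-style explanation: instead of random
    perturbation, pick the k nearest training points to x0 and fit
    per-feature univariate least-squares slopes as importances."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nbrs = sorted(range(len(X)), key=lambda i: dist(X[i], x0))[:k]
    slopes = []
    for j in range(len(x0)):
        xs = [X[i][j] for i in nbrs]
        ys = [y[i] for i in nbrs]
        mx, my = sum(xs) / k, sum(ys) / k
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var = sum((a - mx) ** 2 for a in xs)
        slopes.append(cov / var if var else 0.0)
    return slopes  # identical on every rerun: no randomness involved

# Invented toy data where y = 2 * x1.
X = [[0, 0], [1, 0], [2, 0], [3, 1], [4, 1], [5, 1]]
y = [0, 2, 4, 6, 8, 10]
print(dlime_sketch(X, y, x0=[2, 0], k=3))  # slope for feature 0 is 2.0
```

Because neighborhood selection and fitting are both deterministic, repeated explanations of the same instance coincide exactly, which is the stability property LIME's random perturbation lacks.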
       
  • MAKE, Vol. 3, Pages 542-553: Voting in Transfer Learning System for
           Ground-Based Cloud Classification

    • Authors: Mario Manzo, Simone Pellino
      First page: 542
      Abstract: Cloud classification is a great challenge in meteorological research. The different types of clouds currently known and present in our skies can produce radiative effects that impact the variation of atmospheric conditions, with consequent strong influence on the earth’s climate and weather. Therefore, identifying their main visual features becomes a crucial aspect. In this paper, the goal is to adopt a pretrained deep neural network-based architecture for cloud image description and, subsequently, classification. The approach is pyramidal. Proceeding from the bottom up, it partially extracts previous knowledge of deep neural networks related to the original task and transfers it to the new task. The updated knowledge is integrated into a voting context to provide a classification prediction. The framework trains the neural models on unbalanced sets, a condition that makes the task even more complex, and combines the provided predictions through statistical measures. An experimental phase on different cloud image datasets is performed, and the results achieved show the effectiveness of the proposed approach with respect to state-of-the-art competitors.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-07-12
      DOI: 10.3390/make3030028
      Issue No: Vol. 3, No. 3 (2021)
       
  • MAKE, Vol. 3, Pages 554-581: Recent Advances in Deep Reinforcement
           Learning Applications for Solving Partially Observable Markov Decision
           Processes (POMDP) Problems: Part 1—Fundamentals and Applications in
           Games, Robotics and Natural Language Processing

    • Authors: Xuanchen Xiang, Simon Foo
      First page: 554
      Abstract: The first part of a two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) applications for solving partially observable Markov decision process (POMDP) problems. Reinforcement Learning (RL) is an approach to simulating humans’ natural learning process, whose key is to let the agent learn by interacting with the stochastic environment. The fact that the agent has limited access to the information of the environment enables AI to be applied efficiently in most fields that require self-learning. Although efficient algorithms are being widely used, it seems essential to have an organized investigation so that we can make good comparisons and choose the best structures or algorithms when applying DRL in various applications. In this overview, we introduce Markov Decision Process (MDP) problems and Reinforcement Learning, along with applications of DRL for solving POMDP problems in games, robotics, and natural language processing. A follow-up paper will cover applications in transportation, communications and networking, and industries.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-07-15
      DOI: 10.3390/make3030029
      Issue No: Vol. 3, No. 3 (2021)
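The MDP fundamentals that the survey opens with can be illustrated with tabular value iteration on a toy two-state problem. The MDP below (states, actions, rewards) is invented for illustration and is not taken from the paper:

```python
# Tabular value iteration on a toy 2-state MDP (illustrative only).
GAMMA = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}

def value_iteration(theta=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            best = max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V
```

In a POMDP the agent would see only an observation correlated with the state, which is what makes the deep-learning machinery surveyed above necessary.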
       
  • MAKE, Vol. 3, Pages 582-600: Proposing an Ontology Model for Planning
           Photovoltaic Systems

    • Authors: Farhad Khosrojerdi, Stéphane Gagnon, Raul Valverde
      First page: 582
      Abstract: The performance of a photovoltaic (PV) system is negatively affected when operating under shading conditions. Maximum power point tracking (MPPT) systems are used to overcome this hurdle. Designing an efficient MPPT-based controller requires knowledge about power conversion in PV systems. However, it is difficult for nontechnical solar energy consumers to define different parameters of the controller and deal with distinct sources of data related to the planning. Semantic Web technologies enable us to improve knowledge representation, sharing, and reusing of relevant information generated by various sources. In this work, we propose a knowledge-based model representing key concepts associated with an MPPT-based controller. The model is featured with Semantic Web Rule Language (SWRL), allowing the system planner to extract information about power reductions caused by snow and several airborne particles. The proposed ontology, named MPPT-On, is validated through a case study designed by the System Advisor Model (SAM). It acts as a decision support system and facilitates the process of planning PV projects for non-technical practitioners. Moreover, the presented rule-based system can be reused and shared among the solar energy community to adjust the power estimations reported by PV planning tools, especially for snowy months and polluted environments.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-07-31
      DOI: 10.3390/make3030030
      Issue No: Vol. 3, No. 3 (2021)
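As a loose illustration of the kind of rule the abstract describes (adjusting a PV power estimate for snow and airborne particles), here is a sketch in plain Python. All thresholds and derating factors below are hypothetical, and MPPT-On itself encodes such rules in SWRL over an ontology, not as Python code:

```python
# Illustrative rule-based adjustment of a monthly PV power estimate.
# The trigger conditions and derating factors are hypothetical.
def adjust_estimate(kwh, snow_days=0, pm10=0.0):
    """Apply simple derating rules to a raw monthly estimate in kWh."""
    factor = 1.0
    if snow_days > 10:        # heavy snow-cover rule
        factor *= 0.70
    elif snow_days > 0:       # light snow rule
        factor *= 0.90
    if pm10 > 50.0:           # polluted-air (soiling) rule
        factor *= 0.95
    return kwh * factor
```

The ontology-based version has the advantage that such rules can be shared, queried, and reused across planning tools rather than being buried in application code.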
       
  • MAKE, Vol. 3, Pages 601-614: Orientation-Encoding CNN for Point Cloud
           Classification and Segmentation

    • Authors: Hongbin Lin, Wu Zheng, Xiuping Peng
      First page: 601
      Abstract: With the introduction of effective and general deep learning network frameworks, deep learning-based methods have achieved remarkable success in various visual tasks. However, applying convolutional neural networks to point clouds remains a tough challenge, because point clouds lack the regular grid structure that such networks rely on. Therefore, taking the original point clouds as the input data, this paper proposes an orientation-encoding (OE) convolutional module and designs a convolutional neural network for effectively extracting local geometric features of point sets. By searching for the same number of neighboring points in each of eight directions and arranging them in order, the OE convolution is carried out direction by direction over the ordered points, which realizes effective feature learning of the local structure of the point sets. Further experiments on diverse datasets show that the proposed method has competitive performance on classification and segmentation tasks of point sets.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-08-02
      DOI: 10.3390/make3030031
      Issue No: Vol. 3, No. 3 (2021)
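The eight "directions" in the OE module suggest octant-style binning of a point's neighbors. A minimal sketch of that idea follows; the grouping rule below (octants by offset signs, k nearest kept per octant) is an assumption about the construction, not the paper's exact procedure:

```python
# Group a point's neighbors into 8 octants by the signs of their
# offsets (dx, dy, dz), then keep the k nearest in each octant.
def octant_index(dx, dy, dz):
    return (dx >= 0) * 4 + (dy >= 0) * 2 + (dz >= 0)

def group_neighbors(center, points, k=2):
    cx, cy, cz = center
    bins = {i: [] for i in range(8)}
    for (x, y, z) in points:
        dx, dy, dz = x - cx, y - cy, z - cz
        dist2 = dx * dx + dy * dy + dz * dz
        bins[octant_index(dx, dy, dz)].append((dist2, (x, y, z)))
    # sort each octant by squared distance and truncate to k points
    return {i: [p for _, p in sorted(b)[:k]] for i, b in bins.items()}
```

Ordering neighbors consistently per direction is what gives the subsequent convolution a fixed-size, structured input despite the irregularity of the raw point cloud.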
       
  • MAKE, Vol. 3, Pages 615-661: Classification of Explainable Artificial
           Intelligence Methods through Their Output Formats

    • Authors: Giulia Vilone, Luca Longo
      First page: 615
      Abstract: Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”; “explainable machine learning”; and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them to select the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet the requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, thus the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and the new regulations.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-08-04
      DOI: 10.3390/make3030032
      Issue No: Vol. 3, No. 3 (2021)
       
  • MAKE, Vol. 3, Pages 662-671: Surrogate Object Detection Explainer (SODEx)
           with YOLOv4 and LIME

    • Authors: Jonas Herskind Sejr, Peter Schneider-Kamp, Naeem Ayoub
      First page: 662
      Abstract: Due to impressive performance, deep neural networks for object detection in images have become a prevalent choice. Given the complexity of the neural network models used, users of these algorithms are typically given no hint as to how the objects were found. It remains, for example, unclear whether an object is detected based on what it looks like or based on the context in which it is located. We have developed an algorithm, Surrogate Object Detection Explainer (SODEx), that can explain any object detection algorithm using any classification explainer. We evaluate SODEx qualitatively and quantitatively by detecting objects in the COCO dataset with YOLOv4 and explaining these detections with LIME. This empirical evaluation does not only demonstrate the value of explainable object detection, it also provides valuable insights into how YOLOv4 detects objects.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-08-06
      DOI: 10.3390/make3030033
      Issue No: Vol. 3, No. 3 (2021)
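SODEx wraps a classification explainer (LIME in the paper) around an object detector. As a drastically simplified, stdlib-only stand-in for that inner step, the sketch below perturbs binary "regions" of an input, queries a black-box score, and attributes to each region the mean score difference between samples where it is on versus off. Real LIME instead fits a locally weighted linear surrogate model; this is only the underlying intuition:

```python
import random

# Simplified surrogate-explanation sketch (not the actual LIME API):
# attribute to each region the mean change in the black-box score.
def explain(black_box, n_regions, n_samples=500, seed=0):
    rng = random.Random(seed)
    samples = [[rng.randint(0, 1) for _ in range(n_regions)]
               for _ in range(n_samples)]
    scores = [black_box(mask) for mask in samples]
    weights = []
    for j in range(n_regions):
        on = [s for m, s in zip(samples, scores) if m[j] == 1]
        off = [s for m, s in zip(samples, scores) if m[j] == 0]
        weights.append(sum(on) / len(on) - sum(off) / len(off))
    return weights
```

In the SODEx setting the black box would be "does the detector still find this object when these regions are masked out", which is how a per-detection explanation is obtained from a classification explainer.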
       
  • MAKE, Vol. 3, Pages 672-694: A Survey of Machine Learning-Based Solutions
           for Phishing Website Detection

    • Authors: Lizhen Tang, Qusay H. Mahmoud
      First page: 672
      Abstract: With the development of the Internet, network security has drawn increasing attention; it can be said that a secure network environment is the basis for the rapid and sound development of the Internet. Phishing is an important class of cybercrime: a malicious act of tricking users into clicking on phishing links, stealing user information, and ultimately using the stolen data to log in to the related accounts and steal funds. Network security is an iterative issue of attack and defense, and both phishing methods and phishing-detection technology are constantly being updated. Traditional methods for identifying phishing links rely on blacklists and whitelists, but these cannot identify new phishing links. Therefore, the challenge is to predict whether a newly emerging link points to a phishing website, and to improve the accuracy of that prediction. With the maturity of machine learning technology, prediction has become a vital capability. This paper offers a state-of-the-art survey on methods for phishing website detection. It starts with the life cycle of phishing, introduces common anti-phishing methods, focuses mainly on methods for identifying phishing links, and provides an in-depth review of machine learning-based solutions, including data collection, feature extraction, modeling, and performance evaluation. The paper concludes with a detailed comparison of various solutions for phishing website detection.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-08-20
      DOI: 10.3390/make3030034
      Issue No: Vol. 3, No. 3 (2021)
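The feature-extraction step the survey covers often starts from lexical properties of the URL itself. The toy extractor below shows a few commonly used features; this particular feature set is an illustrative choice, not the specific one surveyed in the paper:

```python
from urllib.parse import urlparse

# Toy lexical feature extraction for phishing-URL detection.
def url_features(url):
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_dots_in_host": host.count("."),
        "has_at_symbol": int("@" in url),
        "has_ip_host": int(host.replace(".", "").isdigit()),
        "uses_https": int(parsed.scheme == "https"),
    }
```

Feature vectors like this would then feed the modeling and evaluation stages (e.g. a tree ensemble or logistic regression trained on labelled benign/phishing URLs).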
       
  • MAKE, Vol. 3, Pages 695-719: Benchmarking Studies Aimed at Clustering and
           Classification Tasks Using K-Means, Fuzzy C-Means and Evolutionary Neural
           Networks

    • Authors: Adam Pickens, Saptarshi Sengupta
      First page: 695
      Abstract: Clustering is a widely used unsupervised learning technique across data mining and machine learning applications and finds frequent use in diverse fields ranging from astronomy, medical imaging, search and optimization, geology, and geophysics to sentiment analysis, to name a few. It is therefore important to verify the effectiveness of the clustering algorithm in question and to make reasonably strong arguments for the acceptance of the end results generated by the validity indices that measure the compactness and separability of clusters. This work explores the successes and limitations of two popular clustering mechanisms by comparing their performance over publicly available benchmarking data sets that capture a variety of data point distributions and numbers of attributes, especially from a computational point of view, by incorporating techniques that alleviate some of the issues that plague these algorithms. Sensitivity to initialization conditions and stagnation in local minima are explored. Further, an implementation of a fully connected feedforward neural network trained using particle swarm optimization is introduced; this serves as a guided random search technique for the neural network weight optimization. The algorithms utilized here are studied and compared, and their applications are explored. The study aims to provide a handy reference for practitioners to both learn about and verify benchmarking results on commonly used real-world data sets from both a supervised and unsupervised point of view before application in more tailored, complex problems.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-08-31
      DOI: 10.3390/make3030035
      Issue No: Vol. 3, No. 3 (2021)
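The k-means algorithm benchmarked above, and its sensitivity to initialization, can be shown with a minimal pure-Python Lloyd's iteration on 2-D points (a sketch for exposition, not the authors' benchmarking code):

```python
import random

# Minimal Lloyd's k-means on 2-D points. Different seeds give
# different initial centroids and can converge to different local
# minima, which is the sensitivity discussed above.
def kmeans(points, k, seed=0, iters=100):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest centroid
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        new = [  # update step: mean of each cluster
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    return centroids
```

Running this with several seeds and keeping the solution with the lowest within-cluster distance is the simplest mitigation for initialization sensitivity; the paper's PSO-based approach replaces that restart heuristic with a guided random search.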
       
  • MAKE, Vol. 3, Pages 720-739: Artificial Neural Network Analysis of Gene
           Expression Data Predicted Non-Hodgkin Lymphoma Subtypes with High Accuracy
           

    • Authors: Joaquim Carreras, Rifat Hamoudi
      First page: 720
      Abstract: Predictive analytics using artificial intelligence is a useful tool in cancer research. A multilayer perceptron neural network used gene expression data to predict the lymphoma subtypes of 290 cases of non-Hodgkin lymphoma (GSE132929). The input layer included both the whole array of 20,863 genes and a cancer transcriptome panel of 1769 genes. The output layer was lymphoma subtypes, including follicular lymphoma, mantle cell lymphoma, diffuse large B-cell lymphoma, Burkitt lymphoma, and marginal zone lymphoma. The neural networks successfully classified the cases consistent with the lymphoma subtypes, with an area under the curve (AUC) that ranged from 0.87 to 0.99. The most relevant predictive genes were LCE2B, KNG1, IGHV7_81, TG, C6, FGB, ZNF750, CTSV, INGX, and COL4A6 for the whole set; and ARG1, MAGEA3, AKT2, IL1B, S100A7A, CLEC5A, WIF1, TREM1, DEFB1, and GAGE1 for the cancer panel. The characteristic predictive genes for each lymphoma subtype were also identified with high accuracy (AUC = 0.95, incorrect predictions = 6.2%). Finally, the topmost relevant 30 genes of the whole set, which belonged to apoptosis, cell proliferation, metabolism, and antigen presentation pathways, not only predicted the lymphoma subtypes but also the overall survival of diffuse large B-cell lymphoma (series GSE10846, n = 414 cases), and the most relevant cancer subtypes of The Cancer Genome Atlas (TCGA) consortium, including breast, colorectal, lung, prostate, and gastric carcinomas, melanoma, etc. (7441 cases). In conclusion, neural networks predicted the non-Hodgkin lymphoma subtypes with high accuracy, and the highlighted genes also predicted the survival of a pan-cancer series.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-09-10
      DOI: 10.3390/make3030036
      Issue No: Vol. 3, No. 3 (2021)
       
  • MAKE, Vol. 3, Pages 740-770: Explainable Artificial Intelligence for Human
           Decision Support System in the Medical Domain

    • Authors: Samanta Knapič, Avleen Malhi, Rohit Saluja, Kary Främling
      First page: 740
      Abstract: In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastric images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-09-19
      DOI: 10.3390/make3030037
      Issue No: Vol. 3, No. 3 (2021)
       
  • MAKE, Vol. 3, Pages 299-317: Templated Text Synthesis for Expert-Guided
           Multi-Label Extraction from Radiology Reports

    • Authors: Patrick Schrempf, Hannah Watson, Eunsoo Park, Maciej Pajak, Hamish MacKinnon, Keith W. Muir, David Harris-Birtill, Alison Q. O’Neil
      First page: 299
      Abstract: Training medical image analysis models traditionally requires large amounts of expertly annotated imaging data which is time-consuming and expensive to obtain. One solution is to automatically extract scan-level labels from radiology reports. Previously, we showed that, by extending BERT with a per-label attention mechanism, we can train a single model to perform automatic extraction of many labels in parallel. However, if we rely on pure data-driven learning, the model sometimes fails to learn critical features or learns the correct answer via simplistic heuristics (e.g., that “likely” indicates positivity), and thus fails to generalise to rarer cases which have not been learned or where the heuristics break down (e.g., “likely represents prominent VR space or lacunar infarct” which indicates uncertainty over two differential diagnoses). In this work, we propose template creation for data synthesis, which enables us to inject expert knowledge about unseen entities from medical ontologies, and to teach the model rules on how to label difficult cases, by producing relevant training examples. Using this technique alongside domain-specific pre-training for our underlying BERT architecture, i.e., PubMedBERT, we improve F1 micro from 0.903 to 0.939 and F1 macro from 0.512 to 0.737 on an independent test set for 33 labels in head CT reports for stroke patients. Our methodology offers a practical way to combine domain knowledge with machine learning for text classification tasks.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-24
      DOI: 10.3390/make3020015
      Issue No: Vol. 3, No. 2 (2021)
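The template-based synthesis idea can be sketched in a few lines: expert-written templates with slots are filled from ontology-derived term lists to produce labelled training sentences. The templates, terms, and label names below are invented for illustration (the "likely represents ... or ..." pattern echoes the abstract's example) and are not the paper's actual resources:

```python
import itertools

# Sketch of templated text synthesis for label-extraction training data.
# Templates, terms, and labels here are illustrative inventions.
TEMPLATES = [
    ("likely represents {a} or {b}", {"uncertain": 1}),
    ("no evidence of {a}", {"negative": 1}),
]
TERMS = ["prominent VR space", "lacunar infarct", "haemorrhage"]

def synthesize():
    examples = []
    for template, labels in TEMPLATES:
        n_slots = template.count("{")
        for combo in itertools.permutations(TERMS, n_slots):
            mapping = dict(zip("ab", combo))
            examples.append((template.format(**mapping), labels))
    return examples
```

Because the label is attached at template-writing time, every synthetic sentence arrives with a guaranteed-correct annotation, which is how the method teaches the model rules for cases that are rare in real reports.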
       
  • MAKE, Vol. 3, Pages 318-332: Transfer Learning in Smart Environments

    • Authors: Amin Anjomshoaa, Edward Curry
      First page: 318
      Abstract: The knowledge embodied in cognitive models of smart environments, such as machine learning models, is commonly associated with time-consuming and costly processes such as large-scale data collection, data labeling, network training, and fine-tuning of models. Sharing and reuse of these elaborated resources between intelligent systems of different environments, which is known as transfer learning, would facilitate the adoption of cognitive services for the users and accelerate the uptake of intelligent systems in smart building and smart city applications. Currently, machine learning processes are commonly built for intra-organization purposes and tailored towards specific use cases with the assumption of integrated model repositories and feature pools. Transferring such services and models beyond organization boundaries is a challenging task that requires human intervention to find the matching models and evaluate them. This paper investigates the potential of communication and transfer learning between smart environments in order to empower a decentralized and peer-to-peer ecosystem for seamless and automatic transfer of services and machine learning models. To this end, we explore different knowledge types in the context of smart built environments and propose a collaboration framework based on knowledge graph principles for describing the machine learning models and their corresponding dependencies.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-29
      DOI: 10.3390/make3020016
      Issue No: Vol. 3, No. 2 (2021)
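The collaboration framework above describes models and their dependencies with knowledge-graph principles so that another environment can match them automatically. A toy version of that matching, with an invented triple schema and invented model/feature names, might look like this:

```python
# Sketch: ML models described as (subject, predicate, object) triples,
# plus a query that finds models transferable to an environment.
# Schema, predicates, and names are illustrative inventions.
MODEL_GRAPH = [
    ("model:occupancy_v1", "task", "occupancy-detection"),
    ("model:occupancy_v1", "input_feature", "co2_ppm"),
    ("model:occupancy_v1", "input_feature", "temperature_c"),
    ("model:hvac_v2", "task", "energy-forecasting"),
    ("model:hvac_v2", "input_feature", "temperature_c"),
]

def find_transferable(task, available_features, graph=MODEL_GRAPH):
    """Return models for `task` whose required inputs the environment supplies."""
    models = {s for s, p, o in graph if p == "task" and o == task}
    return sorted(
        m for m in models
        if {o for s, p, o in graph if s == m and p == "input_feature"}
           <= set(available_features)
    )
```

In the proposed ecosystem this lookup would run over shared, standardized descriptions rather than an in-process list, removing the human intervention currently needed to find and evaluate matching models.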
       
  • MAKE, Vol. 3, Pages 333-356: Privacy and Trust Redefined in Federated
           Machine Learning

    • Authors: Pavlos Papadopoulos, Will Abramson, Adam J. Hall, Nikolaos Pitropakis, William J. Buchanan
      First page: 333
      Abstract: A common privacy issue in traditional machine learning is that data needs to be disclosed for the training procedures. In situations with highly sensitive data such as healthcare records, accessing this information is challenging and often prohibited. Luckily, privacy-preserving technologies have been developed to overcome this hurdle by distributing the computation of the training and ensuring the data privacy to their owners. The distribution of the computation to multiple participating entities introduces new privacy complications and risks. In this paper, we present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants. Our proof-of-concept defines a trust framework instantiated using decentralised identity technologies being developed under Hyperledger projects Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communication channels authorised to participate in a federated learning workflow related to mental health data.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-29
      DOI: 10.3390/make3020017
      Issue No: Vol. 3, No. 2 (2021)
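The aggregation step at the heart of the federated workflow is weighted averaging of participants' model updates. The sketch below shows that step with a stub trust check standing in for the Verifiable-Credential verification the paper builds on Hyperledger Aries/Indy/Ursa (the data layout and names are illustrative, not the paper's implementation):

```python
# Federated averaging with a stub credential check: only updates from
# participants holding accepted credentials contribute to the average.
def federated_average(updates, trusted_ids):
    """updates: {participant_id: (num_samples, [weights...])}"""
    accepted = {pid: u for pid, u in updates.items() if pid in trusted_ids}
    total = sum(n for n, _ in accepted.values())
    dim = len(next(iter(accepted.values()))[1])
    avg = [0.0] * dim
    for n, weights in accepted.values():
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w  # sample-count weighting
    return avg
```

The point of the credential framework is precisely that the `trusted_ids` set is established cryptographically, via credentials issued by appropriate authorities, rather than by a hard-coded allowlist as in this sketch.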
       
  • MAKE, Vol. 3, Pages 357-373: Estimating Neural Network’s Performance
           with Bootstrap: A Tutorial

    • Authors: Umberto Michelucci, Francesca Venturini
      First page: 357
      Abstract: Neural networks present characteristics where the results are strongly dependent on the training data, the weight initialisation, and the hyperparameters chosen. The determination of the distribution of a statistical estimator, such as the Mean Squared Error (MSE) or the accuracy, is fundamental to evaluate the performance of a neural network model (NNM). For many machine learning models, such as linear regression, it is possible to analytically obtain information such as the variance or confidence intervals on the results. Neural networks, however, are not analytically tractable due to their complexity, and it is therefore impossible to easily estimate distributions of statistical estimators. When estimating the global performance of an NNM by estimating the MSE in a regression problem, for example, it is important to know the variance of the MSE. Bootstrap is one of the most important resampling techniques to estimate averages and variances, among other properties, of statistical estimators. In this tutorial, the application of resampling techniques (including bootstrap) to the evaluation of neural networks’ performance is explained from both a theoretical and practical point of view. The pseudo-code of the algorithms is provided to facilitate their implementation. Computational aspects, such as the training time, are discussed, since resampling techniques always require simulations to be run many thousands of times and, therefore, are computationally intensive. A specific version of the bootstrap algorithm is presented that allows the estimation of the distribution of a statistical estimator when dealing with an NNM in a computationally effective way. Finally, algorithms are compared on both synthetically generated and real data to demonstrate their performance.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-29
      DOI: 10.3390/make3020018
      Issue No: Vol. 3, No. 2 (2021)
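The core bootstrap idea the tutorial builds on can be sketched as follows. Note this sketch only resamples a fixed evaluation set to get the sampling distribution of the MSE; the tutorial's full procedure for NNMs also addresses retraining cost, which is exactly the computational issue it discusses:

```python
import random

# Bootstrap estimate of the mean and variance of a model's MSE:
# resample the evaluation set with replacement, recompute the MSE on
# each resample, and summarize the resulting distribution.
def bootstrap_mse(y_true, y_pred, n_boot=2000, seed=0):
    rng = random.Random(seed)
    n = len(y_true)
    mses = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        mses.append(sum((y_true[i] - y_pred[i]) ** 2 for i in idx) / n)
    mean = sum(mses) / n_boot
    var = sum((m - mean) ** 2 for m in mses) / (n_boot - 1)
    return mean, var
```

Reporting the variance (or a percentile interval of `mses`) alongside the point MSE is what turns a single score into a statement about how reliable that score is.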
       
  • MAKE, Vol. 3, Pages 374-391: On the Scale Invariance in State of the Art
           CNNs Trained on ImageNet

    • Authors: Mara Graziani, Thomas Lompech, Henning Müller, Adrien Depeursinge, Vincent Andrearczyk
      First page: 374
      Abstract: The diffused practice of pre-training Convolutional Neural Networks (CNNs) on large natural image datasets such as ImageNet causes the automatic learning of invariance to object scale variations. This, however, can be detrimental in medical imaging, where pixel spacing has a known physical correspondence and size is crucial to the diagnosis, for example, the size of lesions, tumors or cell nuclei. In this paper, we use deep learning interpretability to identify at what intermediate layers such invariance is learned. We train and evaluate different regression models on the PASCAL-VOC (Pattern Analysis, Statistical modeling and ComputAtional Learning-Visual Object Classes) annotated data to (i) separate the effects of the closely related yet different notions of image size and object scale, (ii) quantify the presence of scale information in the CNN in terms of the layer-wise correlation between input scale and feature maps in InceptionV3 and ResNet50, and (iii) develop a pruning strategy that reduces the invariance to object scale of the learned features. Results indicate that scale information peaks at central CNN layers and drops close to the softmax, where the invariance is reached. Our pruning strategy uses this to obtain features that preserve scale information. We show that the pruning significantly improves the performance on medical tasks where scale is a relevant factor, for example for the regression of breast histology image magnification. These results show that the presence of scale information at intermediate layers legitimates transfer learning in applications that require scale covariance rather than invariance and that the performance on these tasks can be improved by pruning off the layers where the invariance is learned. All experiments are performed on publicly available data and the code is available on GitHub.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-04-03
      DOI: 10.3390/make3020019
      Issue No: Vol. 3, No. 2 (2021)
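Step (ii) above quantifies scale information as the layer-wise correlation between input scale and feature maps. Reduced to its core, that is a Pearson correlation between the scale factor and a scalar summary of a layer's activations; the sketch below computes it on synthetic values (the numbers are illustrative, not measurements from InceptionV3 or ResNet50):

```python
import math

# Pearson correlation, the probe used to measure how much scale
# information a layer's features retain.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A high correlation at central layers and a near-zero correlation close to the softmax is the signature of the scale-invariance finding, and the pruning strategy keeps the layers where the correlation is still high.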
       
  • MAKE, Vol. 3, Pages 392-413: Towards CRISP-ML(Q): A Machine Learning
           Process Model with Quality Assurance Methodology

    • Authors: Stefan Studer, Thanh Binh Bui, Christian Drescher, Alexander Hanuschkin, Ludwig Winkler, Steven Peters, Klaus-Robert Müller
      First page: 392
      Abstract: Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve success and efficiency of machine learning applications is still missing. Project organizations and machine learning practitioners face manifold challenges and risks when developing machine learning applications and have a need for guidance to meet business expectations. This paper therefore proposes a process model for the development of machine learning applications, covering six phases from defining the scope to maintaining the deployed machine learning application. Business and data understanding are executed simultaneously in the first phase, as both have considerable impact on the feasibility of the project. The next phases are comprised of data preparation, modeling, evaluation, and deployment. Special focus is applied to the last phase, as a model running in changing real-time environments requires close monitoring and maintenance to reduce the risk of performance degradation over time. With each task of the process, this work proposes quality assurance methodology that is suitable to address challenges in machine learning development that are identified in the form of risks. The methodology is drawn from practical experience and scientific literature, and has proven to be general and stable. The process model expands on CRISP-DM, a data mining process model that enjoys strong industry support, but fails to address machine learning specific tasks. The presented work proposes an industry- and application-neutral process model tailored for machine learning applications with a focus on technical tasks for quality assurance.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-04-22
      DOI: 10.3390/make3020020
      Issue No: Vol. 3, No. 2 (2021)
       
  • MAKE, Vol. 3, Pages 414-434: Review of Automatic Microexpression
           Recognition in the Past Decade

    • Authors: Liangfei Zhang, Ognjen Arandjelović
      First page: 414
      Abstract: Facial expressions provide important information concerning one’s emotional state. Unlike regular facial expressions, microexpressions are particular kinds of small quick facial movements, which generally last only 0.05 to 0.2 s. They reflect individuals’ subjective emotions and real psychological states more accurately than regular expressions which can be acted. However, the small range and short duration of facial movements when microexpressions happen make them challenging to recognize both by humans and machines alike. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, amongst others. In addition, a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state of the art in automatic facial microexpression recognition. To be specific, the features and learning methods used in automatic microexpression recognition, the existing microexpression data sets, the major outstanding challenges, and possible future development directions are all discussed.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-05-02
      DOI: 10.3390/make3020021
      Issue No: Vol. 3, No. 2 (2021)
       
  • MAKE, Vol. 3, Pages 435-452: Assessing the Robustness of Cluster Solutions
           in Emotionally-Annotated Pictures Using Monte-Carlo Simulation Stabilized
           K-Means Algorithm

    • Authors: Marko Horvat, Alan Jović, Kristijan Burnik
      First page: 435
      Abstract: Clustering is a very popular machine-learning technique that is often used in data exploration of continuous variables. In general, there are two problems commonly encountered in clustering: (1) the selection of the optimal number of clusters, and (2) the undecidability of the affiliation of border data points to neighboring clusters. We address both problems and describe how to solve them in application to affective multimedia databases. In the experiment, we used the unsupervised learning algorithm k-means and the Nencki Affective Picture System (NAPS) dataset, which contains 1356 semantically and emotionally annotated pictures. The optimal number of centroids was estimated, using the empirical elbow and silhouette rules, and validated using the Monte-Carlo simulation approach. Clustering with k = 1–50 centroids is reported, along with dominant picture keywords and descriptive statistical parameters. Affective multimedia databases, such as the NAPS, have been specifically designed for emotion and attention experiments. By estimating the optimal cluster solutions, it was possible to gain deeper insight into affective features of visual stimuli. Finally, a custom software application for the study was developed in the Python programming language. The tool uses the scikit-learn library for the implementation of machine-learning algorithms, data exploration and visualization. The tool is freely available for scientific and non-commercial purposes.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-05-04
      DOI: 10.3390/make3020022
      Issue No: Vol. 3, No. 2 (2021)
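One of the two empirical rules used above to pick the number of centroids is the silhouette rule: prefer the k whose clustering gives the highest mean silhouette coefficient. A stdlib-only version for 1-D data (the study itself uses scikit-learn on higher-dimensional features) shows the computation:

```python
# Mean silhouette coefficient for a given clustering of 1-D points:
# s(i) = (b - a) / max(a, b), where a is the mean distance to the
# point's own cluster and b the mean distance to the nearest other one.
def silhouette(points, labels):
    def dist(p, q):
        return abs(p - q)
    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same)
        b = min(
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

A value near 1 means tight, well-separated clusters; values near or below 0 flag exactly the border-point ambiguity the paper's Monte-Carlo validation is designed to assess.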
       
  • MAKE, Vol. 3, Pages 453-466: Single-Core Multiscale Residual Network for
           the Super Resolution of Liquid Metal Specimen Images

    • Authors: Keqing Ning, Zhihao Zhang, Kai Han, Siyu Han, Xiqing Zhang
      First page: 453
      Abstract: In a gravity-free or microgravity environment, liquid metals without crystalline nuclei achieve a deep undercooling state. The resulting melts exhibit unique properties, and the research of this phenomenon is critical for exploring new metastable materials. Owing to the rapid crystallization rates of deeply undercooled liquid metal droplets, as well as cost concerns, experimental systems meant for the study of liquid metal specimens usually use low-resolution, high-framerate, high-speed cameras, which result in low-resolution photographs. To facilitate subsequent studies by material scientists, it is necessary to use super-resolution techniques to increase the resolution of these photographs. However, existing super-resolution algorithms cannot quickly and accurately restore the details contained in images of deeply undercooled liquid metal specimens. To address this problem, we propose the single-core multiscale residual network (SCMSRN) algorithm for photographic images of liquid metal specimens. In this model, multiple cascaded filters are used to obtain feature information, and the multiscale features are then fused by a residual network. Compared to existing state-of-the-art artificial neural network super-resolution algorithms, such as SRCNN, VDSR and MSRN, our model was able to achieve higher PSNR and SSIM scores and reduce network size and training time.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-05-27
      DOI: 10.3390/make3020023
      Issue No: Vol. 3, No. 2 (2021)
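The fuse-then-add idea behind multiscale residual blocks can be illustrated with a deliberately tiny 1-D numpy sketch. The function names, the averaging fusion, and the fixed kernels are our own simplifications for intuition, not the paper's trained network:

```python
import numpy as np

def conv1d_same(x, kernel):
    # "same" convolution: the output has the same length as the input
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(kernel)] @ kernel for i in range(len(x))])

def multiscale_residual_block(x, k_small, k_large):
    # parallel filters with two receptive-field sizes extract features
    # at different scales ...
    f1 = np.maximum(conv1d_same(x, k_small), 0.0)  # ReLU activation
    f2 = np.maximum(conv1d_same(x, k_large), 0.0)
    # ... which are fused (here: a plain average standing in for a
    # learned 1x1 convolution) and added back via a residual connection
    fused = 0.5 * (f1 + f2)
    return x + fused

x = np.linspace(0.0, 1.0, 32)
y = multiscale_residual_block(x, np.ones(3) / 3, np.ones(5) / 5)
```

The residual connection means the block only has to learn a correction to its input, which is what makes deep stacks of such blocks trainable for super-resolution.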
       
  • MAKE, Vol. 3, Pages 467-480: Hardness of Learning in Rich Environments and
           Some Consequences for Financial Markets

    • Authors: Ayan Bhattacharya
      First page: 467
      Abstract: This paper examines the computational feasibility of the standard model of learning in economic theory. It is shown that the information update technique at the heart of this model is impossible to compute in all but the simplest scenarios. Specifically, using tools from theoretical machine learning, the paper first demonstrates that there is no polynomial implementation of the model unless the independence structure of variables in the data is publicly known. Next, it is shown that there cannot exist a polynomial algorithm to infer the independence structure; consequently, the overall learning problem does not have a polynomial implementation. Using the learning model when it is computationally infeasible carries risks, and some of these are explored in the latter part of the paper in the context of financial markets. Especially in rich, high-frequency environments, it implies discarding a lot of useful information, which can lead to paradoxical outcomes in interactive game-theoretic situations. This is illustrated in a trading example where market prices can never reflect an informed trader’s information, no matter how many rounds of trade take place. The paper provides new theoretical motivation for the use of bounded rationality models in the study of financial asset pricing, with the bound on rationality arising from the computational hardness of learning.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-05-28
      DOI: 10.3390/make3020024
      Issue No: Vol. 3, No. 2 (2021)
       
  • MAKE, Vol. 3, Pages 481-506: A Machine Learning Evaluation of the Effects
           of South Africa’s COVID-19 Lockdown Measures on Population Mobility

    • Authors: Albert Whata, Charles Chimedza
      First page: 481
      Abstract: Following the declaration by the World Health Organisation (WHO) on 11 March 2020, that the global COVID-19 outbreak had become a pandemic, South Africa implemented a full lockdown from 27 March 2020 for 21 days. The full lockdown was implemented after the publication of the National Disaster Regulations (NDR) gazette on 18 March 2020. The regulations included lockdowns, public health measures, movement restrictions, social distancing measures, and social and economic measures. We developed a hybrid model that consists of a long short-term memory auto-encoder (LSTMAE) and the kernel quantile estimator (KQE) algorithm to detect change-points. Thereafter, we utilised Bayesian structural time series models (BSTSMs) to estimate the causal effect of the lockdown measures. The LSTMAE and KQE successfully detected the change-point that resulted from the full lockdown imposed on 27 March 2020. Additionally, we quantified the causal effect of the full lockdown measure on population mobility in residential places, workplaces, transit stations, parks, grocery and pharmacy, and retail and recreation. In relative terms, population mobility at grocery and pharmacy places decreased significantly by −17,137.04% (p-value = 0.001 < 0.05). In relative terms, population mobility at transit stations, retail and recreation, workplaces, parks, and residential places decreased significantly by −998.59% (p-value = 0.001 < 0.05), −1277.36% (p-value = 0.001 < 0.05), −2175.86% (p-value = 0.001 < 0.05), −370.00% (p-value = 0.001 < 0.05), and −22.73% (p-value = 0.001 < 0.05), respectively. Therefore, the full lockdown (Level 5) imposed on 27 March 2020 had a causal effect on population mobility in these categories of places.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-06-01
      DOI: 10.3390/make3020025
      Issue No: Vol. 3, No. 2 (2021)
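The paper's detector couples an LSTM auto-encoder with a kernel quantile estimator; as a much simpler stand-in, a single mean-shift change-point fit conveys the idea on a synthetic mobility-like series (all names and numbers below are illustrative):

```python
import numpy as np

def detect_change_point(series):
    # Pick the split index that minimises the squared error when each
    # side is modelled by its own mean: a classic single change-point fit.
    n = len(series)
    best_i, best_cost = None, np.inf
    for i in range(2, n - 2):
        left, right = series[:i], series[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

rng = np.random.default_rng(1)
# mobility-like series: a noisy baseline, then a sharp drop at t = 100
# standing in for the lockdown taking effect
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(-8.0, 1.0, 80)])
cp = detect_change_point(series)
```

The detected index then anchors the pre- and post-intervention windows that a causal-effect model such as BSTS compares.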
       
  • MAKE, Vol. 3, Pages 507-524: Going to Extremes: Weakly Supervised Medical
           Image Segmentation

    • Authors: Holger R. Roth, Dong Yang, Ziyue Xu, Xiaosong Wang, Daguang Xu
      First page: 507
      Abstract: Medical image annotation is a major hurdle for developing precise and robust machine-learning models. Annotation is expensive, time-consuming, and often requires expert knowledge, particularly in the medical field. Here, we suggest using minimal user interaction in the form of extreme point clicks to train a segmentation model which, in effect, can be used to speed up medical image annotation. An initial segmentation is generated based on the extreme points using the random walker algorithm. This initial segmentation is then used as a noisy supervision signal to train a fully convolutional network that can segment the organ of interest, based on the provided user clicks. Through experimentation on several medical imaging datasets, we show that the predictions of the network can be refined using several rounds of training with the prediction from the same weakly annotated data. Further improvements are shown using the clicked points within a custom-designed loss and attention mechanism. Our approach has the potential to speed up the process of generating new training datasets for the development of new machine-learning and deep-learning-based models for, but not exclusively, medical image analysis.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-06-02
      DOI: 10.3390/make3020026
      Issue No: Vol. 3, No. 2 (2021)
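As a rough sketch of how extreme-point clicks can seed an initial segmentation (in the paper, such seeds are then handed to the random walker algorithm), the numpy-only helper below is hypothetical: foreground seeds at the clicks, background seeds outside the clicks' bounding box, everything else left undecided:

```python
import numpy as np

def seeds_from_extreme_points(shape, points, margin=2):
    # label map: 1 = foreground seeds (the user's extreme-point clicks),
    # 2 = background seeds (pixels outside the padded bounding box),
    # 0 = unlabeled pixels left for the segmenter to decide
    seeds = np.zeros(shape, dtype=np.uint8)
    rows = [p[0] for p in points]
    cols = [p[1] for p in points]
    r0 = max(min(rows) - margin, 0)
    r1 = min(max(rows) + margin, shape[0] - 1)
    c0 = max(min(cols) - margin, 0)
    c1 = min(max(cols) + margin, shape[1] - 1)
    seeds[:] = 2                       # everything is background ...
    seeds[r0:r1 + 1, c0:c1 + 1] = 0    # ... except the box, left undecided
    for r, c in points:
        seeds[r, c] = 1                # the clicks themselves are foreground
    return seeds

clicks = [(10, 30), (50, 30), (30, 5), (30, 55)]  # top/bottom/left/right extremes
seeds = seeds_from_extreme_points((64, 64), clicks)
```

The initial segmentation produced from such seeds is noisy, which is exactly why the paper treats it as a weak supervision signal and refines it over several training rounds.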
       
  • MAKE, Vol. 3, Pages 84-94: Rumor Detection Based on SAGNN: Simplified
           Aggregation Graph Neural Networks

    • Authors: Liang Zhang, Jingqun Li, Bin Zhou, Yan Jia
      First page: 84
      Abstract: Identifying fake news in the media has become an important issue, especially considering the wide spread of rumors on popular social networks such as Twitter. Various techniques have been proposed for automatic rumor detection. In this work, we study the application of graph neural networks for rumor classification at a lower level, instead of applying existing neural network architectures to detect rumors. The responses to true rumors and false rumors display distinct characteristics, which suggests that capturing such interactions effectively is essential for a deep learning network to achieve better rumor detection performance. To this end, we present a simplified aggregation graph neural network architecture. Experiments on publicly available Twitter datasets demonstrate that the proposed network performs on a par with, or even better than, state-of-the-art graph convolutional networks, while significantly reducing the computational complexity.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-01-04
      DOI: 10.3390/make3010005
      Issue No: Vol. 3, No. 1 (2021)
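A minimal numpy sketch of one mean-aggregation layer in the spirit of the simplified architecture described above; the toy reply graph, one-hot features, and all-ones weights are illustrative values, not the paper's model:

```python
import numpy as np

def mean_aggregation_layer(A, H, W):
    # average each node's neighbourhood (self-loops included), then apply
    # a shared linear map followed by a ReLU: the simplified aggregation
    # step, without attention or learned edge weights
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalise
    return np.maximum(A_hat @ H @ W, 0.0)

# toy "reply graph": node 0 is the source post, nodes 1-3 are responses
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.eye(4)        # one-hot input features, one per post
W = np.ones((4, 2))  # toy weights projecting to 2 hidden dims
out = mean_aggregation_layer(A, H, W)
```

Stacking a couple of such layers lets each post's representation absorb the characteristics of its responses, which is the interaction signal the abstract argues separates true from false rumors.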
       
  • MAKE, Vol. 3, Pages 95-122: Learning DOM Trees of Web Pages by Subpath
           Kernel and Detecting Fake e-Commerce Sites

    • Authors: Kilho Shin, Taichi Ishikawa, Yu-Lu Liu, David Lawrence Shepard
      First page: 95
      Abstract: The subpath kernel is a class of positive definite kernels defined over trees, which has the following advantages for classification, regression and clustering: it can be incorporated into a variety of powerful kernel machines, including SVM; it is invariant whether input trees are ordered or unordered; it can be computed by fast linear-time algorithms; and, finally, its excellent learning performance has been proven through intensive experiments in the literature. In this paper, we leverage recent advances in tree kernels to solve real problems. As an example, we apply our method to the problem of detecting fake e-commerce sites. Although the problem is similar to phishing site detection, the fact that mimicking existing authentic sites is harmful for fake e-commerce sites marks a clear difference between the two problems. We focus on fake e-commerce site detection for three reasons: e-commerce fraud is a real problem that companies and law enforcement have been cooperating to solve; inefficiency hampers existing approaches, because datasets tend to be large, while subpath kernel learning overcomes these performance challenges; and we offer increased resilience against attempts to subvert existing detection methods by incorporating robust features that adversaries cannot change: the DOM trees of websites. Our real-world results are remarkable: our method exhibited accuracy as high as 0.998 when training an SVM with 1000 instances and evaluating accuracy on almost 7000 independent instances. Its generalization efficiency is also excellent: with only 100 training instances, the accuracy score reached 0.996.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-01-14
      DOI: 10.3390/make3010006
      Issue No: Vol. 3, No. 1 (2021)
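The subpath kernel itself can be illustrated by naively enumerating downward label sequences; note that the paper relies on much faster linear-time algorithms, whereas this sketch is quadratic and purely for intuition (the tiny DOM-like trees are made up):

```python
from collections import Counter

def downward_subpaths(tree):
    # tree is (label, [children]); collect every contiguous downward
    # label sequence (a "subpath") together with its multiplicity
    paths = Counter()
    def walk(node, chain):
        label, children = node
        chain = chain + [label]
        for i in range(len(chain)):      # every suffix ending at this node
            paths[tuple(chain[i:])] += 1
        for child in children:
            walk(child, chain)
    walk(tree, [])
    return paths

def subpath_kernel(t1, t2):
    # inner product of the two subpath-count vectors
    p1, p2 = downward_subpaths(t1), downward_subpaths(t2)
    return sum(c * p2[p] for p, c in p1.items())

# two tiny DOM-like trees sharing most of their structure
t1 = ('html', [('body', [('div', []), ('div', [])])])
t2 = ('html', [('body', [('div', [])])])
```

Because the kernel only depends on label sequences along downward paths, it is unchanged by reordering siblings, which is the order-invariance property the abstract highlights.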
       
  • MAKE, Vol. 3, Pages 123-167: Interpretable Topic Extraction and Word
           Embedding Learning Using Non-Negative Tensor DEDICOM

    • Authors: Lars Hillebrand, David Biesner, Christian Bauckhage, Rafet Sifa
      First page: 123
      Abstract: Unsupervised topic extraction is a vital step in automatically extracting concise content information from large text corpora. Existing topic extraction methods lack the capability of linking relations between topics, which would further aid text understanding. Therefore, we propose utilizing the Decomposition into Directional Components (DEDICOM) algorithm, which provides a uniquely interpretable matrix factorization for symmetric and asymmetric square matrices and tensors. We constrain DEDICOM to row-stochasticity and non-negativity in order to factorize pointwise mutual information matrices and tensors of text corpora. We identify latent topic clusters and their relations within the vocabulary and simultaneously learn interpretable word embeddings. Further, we introduce multiple methods based on alternating gradient descent to efficiently train constrained DEDICOM algorithms. We evaluate the qualitative topic modeling and word embedding performance of our proposed methods on several datasets, including a novel New York Times news dataset, and demonstrate how the DEDICOM algorithm provides deeper text analysis than competing matrix factorization approaches.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-01-19
      DOI: 10.3390/make3010007
      Issue No: Vol. 3, No. 1 (2021)
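The alternating-gradient-descent training described above can be sketched in numpy for the matrix case S ≈ A R A^T, with non-negativity enforced by projection (clipping after each step). The dimensions, learning rate, and synthetic target are illustrative; the paper's constrained variants (e.g., row-stochasticity) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2
# synthetic non-negative stand-in for a pointwise mutual information matrix
A_true = rng.uniform(0.1, 1.0, size=(n, k))
R_true = rng.uniform(0.1, 1.0, size=(k, k))
S = A_true @ R_true @ A_true.T

A = rng.uniform(0.1, 0.5, size=(n, k))
R = rng.uniform(0.1, 0.5, size=(k, k))

def loss(A, R):
    return float(np.sum((S - A @ R @ A.T) ** 2))

initial = loss(A, R)
lr = 1e-3
for _ in range(2000):
    E = A @ R @ A.T - S                        # residual
    grad_A = 2 * (E @ A @ R.T + E.T @ A @ R)   # d(loss)/dA
    grad_R = 2 * (A.T @ E @ A)                 # d(loss)/dR
    A = np.clip(A - lr * grad_A, 0.0, None)    # projected step keeps A >= 0
    R = np.clip(R - lr * grad_R, 0.0, None)    # and R >= 0
final = loss(A, R)
```

In the factorization, the rows of A act as word embeddings over latent topics, while R captures the directed relations between topics, which is what makes DEDICOM more interpretable than a plain low-rank approximation.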
       
  • MAKE, Vol. 3, Pages 168-169: Acknowledgment to Reviewers of MAKE in 2020

    • Authors: MAKE Editorial Office
      First page: 168
      Abstract: Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that MAKE maintains its standards for the high quality of its published papers [...]
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-01-27
      DOI: 10.3390/make3010008
      Issue No: Vol. 3, No. 1 (2021)
       
  • MAKE, Vol. 3, Pages 170-205: Explainable AI Framework for Multivariate
           Hydrochemical Time Series

    • Authors: Thrun, Ultsch, Breuer
      First page: 170
      Abstract: The understanding of water quality and its underlying processes is important for the protection of aquatic environments. With the rare opportunity of access to a domain expert, an explainable AI (XAI) framework is proposed that is applicable to multivariate time series. The XAI provides explanations that are interpretable by domain experts. In three steps, it combines a data-driven choice of a distance measure with supervised decision trees guided by projection-based clustering. The multivariate time series consists of water quality measurements, including nitrate, electrical conductivity, and twelve other environmental parameters. The relationships between water quality and the environmental parameters are investigated by identifying similar days within a cluster and dissimilar days between clusters. The framework, called DDS-XAI, does not depend on prior knowledge about data structure, and its explanations tend to be contrastive. The relationships in the data can be visualized by a topographic map representing high-dimensional structures. Two state-of-the-art XAIs, called eUD3.5 and iterative mistake minimization (IMM), were unable to provide meaningful and relevant explanations from the three multivariate time series. The DDS-XAI framework can be swiftly applied to new data. Open-source R code for all steps of the XAI framework is provided, and the steps are structured in an application-oriented manner.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-02-04
      DOI: 10.3390/make3010009
      Issue No: Vol. 3, No. 1 (2021)
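The cluster-then-explain step (supervised decision trees guided by clustering) can be sketched with scikit-learn. The feature names and synthetic blobs below are illustrative stand-ins for the water-quality measurements, and k-means stands in for the paper's projection-based clustering:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier, export_text

# synthetic stand-in for daily water-quality measurements
X, _ = make_blobs(n_samples=200, centers=2, n_features=3, random_state=0)
feature_names = ["nitrate", "conductivity", "temperature"]  # illustrative

# step 1: group similar days by clustering
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# step 2: a shallow tree turns the cluster labels into human-readable
# split rules, i.e. a contrastive explanation of what separates the groups
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, clusters)
rules = export_text(tree, feature_names=feature_names)
accuracy = tree.score(X, clusters)
```

A domain expert can then read the threshold rules directly ("days with nitrate above X belong to cluster 1"), which is what makes the decision-tree surrogate interpretable.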
       
  • MAKE, Vol. 3, Pages 205-227: Property Checking with Interpretable Error
           Characterization for Recurrent Neural Networks

    • Authors: Franz Mayr, Sergio Yovine, Ramiro Visca
      First page: 205
      Abstract: This paper presents a novel on-the-fly, black-box, property-checking-through-learning approach as a means for verifying requirements of recurrent neural networks (RNN) in the context of sequence classification. Our technique builds on a tool for learning probably approximately correct (PAC) deterministic finite automata (DFA). The sequence classifier inside the black-box consists of a Boolean combination of several components, including the RNN under analysis together with the requirements to be checked, possibly modeled as RNNs themselves. On the one hand, if the output of the algorithm is an empty DFA, there is a proven upper bound (as a function of the algorithm parameters) on the probability of the language of the black-box being nonempty. This implies that the property probably holds on the RNN, with probabilistic guarantees. On the other hand, if the DFA is nonempty, it is certain that the language of the black-box is nonempty. This entails that the RNN does not satisfy the requirement. In this case, the output automaton serves as an explicit and interpretable characterization of the error. Our approach does not rely on a specific property specification formalism and is capable of handling nonregular languages as well. Moreover, it neither explicitly builds individual representations of any of the components of the black-box nor resorts to any external decision procedure for verification. This paper also improves previous theoretical results regarding the probabilistic guarantees of the underlying learning algorithm.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-02-12
      DOI: 10.3390/make3010010
      Issue No: Vol. 3, No. 1 (2021)
       
  • MAKE, Vol. 3, Pages 228-242: A Combined Short Time Fourier Transform and
           Image Classification Transformer Model for Rolling Element Bearings Fault
           Diagnosis in Electric Motors

    • Authors: Christos T. Alexakos, Yannis L. Karnavas, Maria Drakaki, Ioannis A. Tziafettas
      First page: 228
      Abstract: The most frequent faults in rotating electrical machines occur in their rolling element bearings. Thus, an effective health diagnosis mechanism for rolling element bearings is necessary from operational and economical points of view. Recently, convolutional neural networks (CNNs) have been proposed for bearing fault detection and identification. However, two major drawbacks of these models are (a) their lack of ability to capture global information about the input vector and to derive knowledge about its statistical properties and (b) the high demand for computational resources. In this paper, the short time Fourier transform (STFT) is proposed as a pre-processing step to acquire time-frequency representation vibration images from raw data in variable healthy or faulty conditions. To diagnose and classify the vibration images, the image classification transformer (ICT), inspired by the transformers used for natural language processing, has been suitably adapted to work as an image classifier trained in a supervised manner, and is proposed as an alternative to CNNs. Simulation results on a well-established rolling element bearing fault detection benchmark show the effectiveness of the proposed method, which achieved 98.3% accuracy (on the test dataset) while requiring substantially fewer computational resources to train compared to the CNN approach.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-02-16
      DOI: 10.3390/make3010011
      Issue No: Vol. 3, No. 1 (2021)
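The STFT pre-processing step can be sketched with a numpy-only short-time FFT that turns a 1-D vibration signal into the frequency-by-time magnitude image fed to the classifier. The window and hop sizes, sampling rate, and tone frequencies are illustrative, not the benchmark's settings:

```python
import numpy as np

def stft_magnitude(x, win=64, hop=32):
    # frame the signal, apply a Hann window to each frame, and take the
    # FFT magnitude, yielding a (frequency x time) image
    n_frames = 1 + (len(x) - win) // hop
    w = np.hanning(win)
    frames = np.stack([x[i * hop:i * hop + win] * w for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 1024                              # sampling rate in Hz (illustrative)
t = np.arange(fs) / fs                 # one second of signal
# a strong shaft-speed tone plus a weaker "fault frequency" component
x = np.sin(2 * np.pi * 48 * t) + 0.4 * np.sin(2 * np.pi * 160 * t)
S = stft_magnitude(x)
```

With a 64-sample window at 1024 Hz, each frequency bin spans 16 Hz, so the 48 Hz tone lands in bin 3; a fault would appear as extra energy at its characteristic bins across time.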
       
  • MAKE, Vol. 3, Pages 243-262: Automatic Feature Selection for Improved
           Interpretability on Whole Slide Imaging

    • Authors: Antoine Pirovano, Hippolyte Heuberger, Sylvain Berlemont, Saïd Ladjal, Isabelle Bloch
      First page: 243
      Abstract: Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned, and why it makes a specific decision) is the next important challenge that deep learning methods must meet to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification by formalizing the design of WSI classification architectures and proposing a piece-wise interpretability approach that relies on gradient-based methods, feature visualization and the multiple instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel manner of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on activation colocalization of selected features, that improves the performance and stability of our proposed method.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-02-22
      DOI: 10.3390/make3010012
      Issue No: Vol. 3, No. 1 (2021)
       
  • MAKE, Vol. 3, Pages 263-283: Leaving No Stone Unturned: Flexible Retrieval
           of Idiomatic Expressions from a Large Text Corpus

    • Authors: Callum Hughes, Maxim Filimonov, Alison Wray, Irena Spasić
      First page: 263
      Abstract: Idioms are multi-word expressions whose meaning cannot always be deduced from the literal meaning of constituent words. A key feature of idioms that is central to this paper is their peculiar mixture of fixedness and variability, which poses challenges for their retrieval from large corpora using traditional search approaches. These challenges hinder insights into idiom usage, affecting users who are conducting linguistic research as well as those involved in language education. To facilitate access to idiom examples taken from real-world contexts, we introduce an information retrieval system designed specifically for idioms. Given a search query that represents an idiom, typically in its canonical form, the system expands it automatically to account for the most common types of idiom variation including inflection, open slots, adjectival or adverbial modification and passivisation. As a by-product of query expansion, other types of idiom variation captured include derivation, compounding, negation, distribution across multiple clauses as well as other unforeseen types of variation. The system was implemented on top of Elasticsearch, an open-source, distributed, scalable, real-time search engine. Flexible retrieval of idioms is supported by a combination of linguistic pre-processing of the search queries, their translation into a set of query clauses written in a query language called Query DSL, and analysis, an indexing process that involves tokenisation and normalisation. Our system outperformed the phrase search in terms of recall and outperformed the keyword search in terms of precision. Out of the three, our approach was found to provide the best balance between precision and recall. By providing a fast and easy way of finding idioms in large corpora, our approach can facilitate further developments in fields such as linguistics, language education and natural language processing.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-03
      DOI: 10.3390/make3010013
      Issue No: Vol. 3, No. 1 (2021)
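The translation of an expanded idiom query into Query DSL clauses might look like the span-based sketch below. The field name, slop value, and helper are hypothetical, and the real system's clause generation (inflection handling, passivisation, etc.) is considerably richer:

```python
# Build an Elasticsearch-style "span_near" query for an idiom's content
# words: "slop" tolerates open slots and modifiers between the words,
# while "in_order" preserves the idiom's word order.

def idiom_query(words, slop=3, field="sentence"):
    return {
        "span_near": {
            "clauses": [{"span_term": {field: w}} for w in words],
            "slop": slop,
            "in_order": True,
        }
    }

q = idiom_query(["leave", "stone", "unturned"])
```

Combined with index-time analysis (tokenisation and normalisation), such a query would match variants like "left no stone in the garden unturned" that a plain phrase search misses.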
       
  • MAKE, Vol. 3, Pages 284-298: From the Digital Data Revolution toward a
           Digital Society: Pervasiveness of Artificial Intelligence

    • Authors: Frank Emmert-Streib
      First page: 284
      Abstract: Technological progress has led to powerful computers and communication technologies that nowadays penetrate all areas of science, industry and our private lives. As a consequence, all these areas are generating digital traces of data amounting to big data resources. This opens unprecedented opportunities but also challenges regarding the analysis, management, interpretation and responsible usage of such data. In this paper, we discuss these developments and the fields that have been particularly affected by the digital revolution. Our discussion is AI-centered, showing domain-specific prospects but also intricacies for method development in artificial intelligence. For instance, we discuss recent breakthroughs in deep learning algorithms and artificial intelligence as well as advances in text mining and natural language processing, e.g., word-embedding methods that enable the processing of large amounts of text data from diverse sources such as governmental reports, blog entries in social media or clinical health records of patients. Furthermore, we discuss the necessity of further improving general artificial intelligence approaches and of utilizing advanced learning paradigms. This leads to arguments for the establishment of statistical artificial intelligence. Finally, we provide an outlook on important aspects of future challenges that are of crucial importance for the development of all fields, including ethical AI and the influence of bias on AI systems. As a potential end-point of this development, we define digital society as the asymptotic limiting state of digital economy that emerges from fully connected information and communication technologies enabling the pervasiveness of AI. Overall, our discussion provides a perspective on the elaborate relatedness of digital data and AI systems.
      Citation: Machine Learning and Knowledge Extraction
      PubDate: 2021-03-04
      DOI: 10.3390/make3010014
      Issue No: Vol. 3, No. 1 (2021)
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


 
